Does the article you cited cost money to read? I found a description on google scholar:
> Ten years left to redesign lithium-ion batteries
> Reserves of cobalt and nickel used in electric-vehicle cells will not meet future demand. Refocus research to find new electrodes based on common elements such as iron and silicon, urge Kostiantyn Turcheniuk and colleagues.
I notice that the article was published in 2018. So I guess we only have to wait two more years to decide if it's right or not. Will we be out of cobalt and nickel by then? I'd be happy to take a bet with you, assuming you stand by the article you cited.
> If you want this to work across ARM and x86 (or even multiple ARM vendors), you are screwed, and need to restrict yourself to using only the basic arithmetic operations and reimplement everything else yourself.
Is this problematic for WASM implementations? The WASM spec requires IEEE 754-2019 compliance with the exception of NaN bits. I guess that could be problematic if you're branching on NaN bits, or serializing, but ideally your code is mostly correct and you don't end up serializing NaN anyway.
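To make the NaN-bits caveat concrete, here's a minimal Rust sketch (Rust compiles to WASM, so the same semantics apply). The hedge: checking *whether* a value is NaN is fully specified everywhere, but the exact payload bits of a NaN produced by arithmetic are left nondeterministic by the WASM spec, so branching on or serializing those bits is where portability breaks:

```rust
// Portable vs. non-portable NaN handling.
// `is_nan()` is fully specified; the payload bits of an arithmetic NaN are not
// guaranteed to be identical across WASM engines or CPU vendors.
fn nan_bits(x: f64) -> u64 {
    x.to_bits()
}

fn main() {
    let nan = 0.0f64 / 0.0;
    // Portable: the *classification* is deterministic.
    assert!(nan.is_nan());
    // Non-portable in WASM: the exact bit pattern may differ per engine,
    // so serializing it (or branching on it) is where you can get burned.
    println!("{:#018x}", nan_bits(nan));
}
```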
They make more than they would in Japan. But people can make $0 in any country. Regardless, part-time Walmart greeters are fortunately not paying full price for health insurance in the US.
I'm curious what you're implying. Is there a country where the poorest person is so rich they can get all the insurance and care they require without government subsidy?
It interacts badly with insurance being offered as a workplace benefit. If you quit or lose your job, you'd lose your health insurance. And any plan you signed up for after that would then treat your illnesses as "pre-existing conditions" and expect you to pay accordingly. The bundling of health insurance with employment seems like the healthcare original sin to me.
Obama couldn't change that, so the ACA redesigned the system to work with it. Despite being called insurance, health insurance is no longer really viewed or designed to be any kind of insurance. Instead, it's supposed to be Netflix for healthcare. You pay a flat rate, and then get unlimited healthcare. Obviously, the issue with this is that if you don't need healthcare you can just not sign up for the subscription. So the ACA tried to solve this by requiring everyone to sign up. Once everyone is required to sign up, it's not right to discriminate against preexisting conditions. It may not be an especially good system, but it is coherent.
The US is allergic to taxes. Maybe it's a marketing thing. Benefits paid for by society.
Maybe a department of Return on Investment. See what those taxes pay for. Contrast to buying private versions of the services at the same SLA or better.
It’s more that the US is more like a collection of 50 little countries, and it’s supposed to be hard to accomplish much at a federal level. That separation has eroded a bit in the last 50 years but it’s still very much a part of our political ideology.
> I haven't seen a single "AI evangelist" address any concerns and limitations
You see what you choose to focus on. I come across many people who are excited about the possibilities of AI-assisted coding, who are frustrated by its limitations, who share strategies for overcoming or avoiding those limitations, and so on. For a concrete and famous example, I would put Andrej Karpathy in this category. Where are you looking that you're not finding any of these people? LinkedIn?
In my experience, the people who are excited about AI-assisted coding fall into two groups: people who aren't good at coding in the first place and don't care about quality, consistency, or understanding what they are having it write, and people who have a vested interest in AI coding tools being used (leadership who want to say "my team uses AI", and "AI experts" whose personal brand depends on AI being successful).
AI assisted coding is really good as an enhanced auto-complete, often better as it picks up patterns in the code and will complete whole lines or chunks of code. There, I'll assess the results like any other auto-completed suggestions.
For other things like when asking questions I won't just blindly copy what the LLM is suggesting. I'll often rewrite it in a style that best fits the style of the codebase I'm working on, or to better fit it into what I'm trying to achieve. Also, if I've asked it for how to do a specific one-line query and it has rewritten a whole chunk of code, I'll only make use of that one line, or specific fix/change. -- This also helps me to understand the response from the LLM.
I'll then do testing to make sure that the code is working correctly, with unit tests where relevant.
The user you're replying to has made many similar posts like this. I previously tried engaging in good faith. I try not to fall into the XKCD 386 trap now, my time is better spent with Claude Code. Hope I can help save you some time too!
I have been thinking about this myself. I'm working on some custom dictionaries for words I discover from my corpus of movie subtitles. Which I'm sure is not a new idea, but it's fun, because it gives me a dictionary that only contains the words that people "actually use", and with "real" example sentences. (words in quotes because movie dialogue isn't 100% as real as I'd like.)
I'm sure this is not a remotely new idea, but I'm having fun with it. I also like that I can see how common every form of every word is. I was surprised to learn that almost none of the most common words are nouns. And in my internal tools I can filter by movies released a certain date to track changes, which is neat.
if your movie collection is big enough that might be really useful for language learning. Create your own frequency lists and common phrases.
I would be curious how it stacks up against the written word.
I mean all words were added to a dictionary because someone was using them. It's just that they may not be used by people in your particular region or time.
Rust would be pretty unusable without references. Affine lambda calculus isn't even Turing-complete. However, you're right that a borrow checker is unnecessary, as uniqueness types (the technical term for types that guarantee single ownership) are implemented in Clean and Idris without a borrow checker. The borrow checker mainly exists because it dramatically increases the number of valid programs.
Supporting single-ownership in a language doesn't mean you can't have opt-in copyability and/or multiple-ownership. This is how Rust already works, and is independent of the borrow checker.
If we consider a Rust-like language without the borrow checker, it's obviously still Turing-complete. For functions that take references as parameters, instead you would simply pass ownership of the value back to the caller as part of the return value. And for structs that hold references, you would instead have them hold reference-counted handles. The former case is merely less convenient, and the latter case is merely less efficient.
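A minimal sketch of the two workarounds described above, in ordinary Rust (the names `process` and `Holder` are illustrative, not from any real codebase): thread ownership through the return value instead of taking a reference, and hold an `Rc` handle in a struct instead of a borrowed reference.

```rust
use std::rc::Rc;

// Instead of `fn process(s: &mut String)`, take ownership and hand it back.
// Less convenient, but no borrow checker required to make it sound.
fn process(mut s: String) -> String {
    s.push('!');
    s // ownership returns to the caller
}

// Instead of a struct holding `&String`, hold a reference-counted handle.
// Less efficient (runtime count), but again no borrow analysis needed.
struct Holder {
    text: Rc<String>,
}

fn main() {
    let s = String::from("hi");
    let s = process(s); // caller gets the value back
    assert_eq!(s, "hi!");

    let shared = Rc::new(String::from("shared"));
    let h = Holder { text: Rc::clone(&shared) };
    assert_eq!(*h.text, *shared);
}
```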
Well, it's not quite that easy because someone still has to test the agent's output and make sure it works as expected, which it often doesn't. In many cases, they still need to read the code and make sure that it does what it's supposed to do. Or they may need to spend time coming up with an effective prompt, which can be harder than it sounds for complicated projects where models will fail if you ask them to implement a feature without giving them detailed guidance on how to do so.
Definitely, but that's kind of my point: the maintainers are still going to be way better at all of that than some random contributor who just wants a feature, vibe codes it, and barely tests it. The maintainers already know the codebase, they understand the implications of changes, and they can write much better plans for the agent to follow, which they can verify against. Having a great plan written down that you can verify against drastically lowers the risk of LLM-generated code.
You can do all the steps I mentioned as a random contributor. I've done it before. But I agree that donations are better than just prompting claude "implement this feature, make no mistakes" and hoping it one-shots it. Honestly, even carefully thought-out feature requests are much more valuable than that. At least if the maintainer vibe-codes it they don't have to worry that you deliberately introduced a security vulnerability or back door.
Yeah. You cannot achieve native performance with web apps, but most tasks are simple enough that wasm is plenty fast. If you generate a frame in 7ms or 1ms, the user can't tell the difference.
I think cloud-first design is natural because webapps have nowhere good to store state. On Safari, which is the only browser that matters for many web developers, everything can be deleted at any time. So if you don't want to have a horrible user experience, you have to force your users to make an account and sync their stuff to the cloud. Then, the most natural thing to do is to just have the user's frontend update when the backend updates (think old-school fully-SSR'd apps). You can do much better than that with optimistic updates, but it adds a lot of complexity. The gold standard is to go fully local-first, but to really do that right requires CRDTs in most cases, which are their own rabbit hole. (That's the approach I take in my apps because I'm a perfectionist, but I get why most people wouldn't think it's worth it.)
With the files API, apps could actually replicate the microsoft word experience of drafting a file and saving it to your desktop and praying that your hard drive doesn't fail, but despite offering great benefits in terms of self-custody of data it was never a great user experience for most people.
> With the files API, apps could actually replicate the microsoft word experience of drafting a file and saving it to your desktop and praying that your hard drive doesn't fail,
Even without the files API, with local storage, web apps can (and some, mostly free, extremely casual games, actually do!) duplicate that experience, with the extra risk that your data is lost because your disk became too full or some other event caused the local storage to be cleared.
I once ran out of disk space while Chrome was running and, despite me clearing the space again shortly after, the damage was already done and Chrome had already decided to wipe all my local storage and cookies. It didn't keep it in memory to save again once there was space, it just deleted it all permanently.
I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python. It just does not match my experience at all. I've been a professional Rust developer for about three years. Every time I look at Python code, it's doing something insane where the function argument definition basically looks like line noise with args and kwargs, with no types, so it's impossible to guess what the parameters will be for any given function. Every Python developer I know makes heavy use of the REPL just to figure out what methods they can call on some return value of some underdocumented method of a library they're using. The first time I read pandas code, I saw something along the lines of df[df["age"] < 3] and thought I was having a stroke. Yet Python has a reputation for being easy to learn and use. We have a Python developer on our team and it probably took me about a day to onboard him to Rust and get him able to make changes to our (fairly complicated) Rust codebase.
Don't get me wrong, rust has plenty of "weird" features too, for example higher rank trait bounds have a ridiculous syntax and are going to be hard for most people to understand. But, almost no one will ever have to use a higher rank trait bound. I encounter such things much more rarely in rust than in almost any other mainstream language.
The language itself is not more complex to onboard. For Scala, also not. It feels great to have all these language features at one's disposal. The added complexity is in how expert code is written: the experts are empowered and productive, but their practices raise the barrier to entry for newcomers. Note that they might also expertly write more accessible code to avoid the issue, and then I agree with you (though I can't compare to Python, having never used it).
Hm, you claim that Rust and Scala are not more complex to onboard than Python... but then you say you never used Python? If that's the case, how do you know? Having used both, I do think Rust is harder to onboard, just because there is more syntax that you need to learn. And Rust is a lot more verbose. And that's before you are exposed to the borrow checker.
Well, the parent wrote "I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python." And you wrote "The language itself is not more complex to onboard." So... to contrast Rust with Scala, I think it's clearer to write "The language itself is not more complex to onboard _than Scala_."
To that, I completely agree! Scala is one of the most complex languages, similar to C++. In terms of complexity (roughly the number of features) / hardness to onboard, I would have the following list (hardest to easiest): C++, Scala, Rust, Zig, Swift, Nim, Kotlin, JavaScript, Go, Python.
I see the confusion. ChadNauseam mentioned Python in reply to another comment of mine, where I mentioned Gleam. In your hardest-to-easiest list, perhaps Gleam is even easier than Python. They literally advertise it as "the language you can learn in a day".
Thanks a lot! I wasn't aware of Gleam, it really seems simple. I probably wouldn't say "learn in a day", and I'm not sure if it's simpler than Python, but it's statically typed, and that necessarily adds some complexity.
> I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python.
Most people conflate "complexity" and "difficulty". Rust is a less complex language than Python (yes, it's true), but it's also much more difficult, because it requires you to do all the hard work up-front, while giving you enormously more runtime guarantees.
Doing the hard work up front is easier than doing it while debugging a non-trivial system. And there are boilerplate patterns in Rust that allow you to skip the hard work while doing throwaway exploratory programming, just like in "easier" languages. Except that then you can refactor the boilerplate away and end up with a proper high-quality system.
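A quick sketch of the kind of "skip the hard work" boilerplate that comment likely refers to (my illustration, not a prescribed pattern): clone freely and unwrap errors while exploring, then tighten to borrows and proper error handling once the design settles.

```rust
// A sketch of exploratory-vs-refactored Rust.
pub fn parse_count(s: &str) -> i32 {
    // Exploratory style: .unwrap() defers error handling entirely.
    s.parse().unwrap()
}

fn main() {
    let words = vec!["alpha".to_string(), "beta".to_string()];

    // Exploratory style: .clone() sidesteps borrow-checker questions.
    let upper: Vec<String> = words.iter().map(|w| w.clone().to_uppercase()).collect();
    assert_eq!(upper, vec!["ALPHA", "BETA"]);

    // Refactored later: borrow instead of clone; behavior is identical,
    // but there's no redundant allocation of the intermediate clone.
    let upper2: Vec<String> = words.iter().map(|w| w.to_uppercase()).collect();
    assert_eq!(upper, upper2);

    assert_eq!(parse_count("42"), 42);
}
```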