A month ago the company I work at (over 400 engineers) decided to cancel all IDE subscriptions (Visual Studio, JetBrains, Windsurf, etc.) and move everyone to Claude Code as a "cost-saving measure" (along with firing a number of test engineers). There was no migration plan; the EVP of Technology just gave a demo showing two greenfield projects he'd built with Claude Opus over a weekend and told everyone to copy how he worked. A week later the EVP had to send out an email telling people to stop using Opus because they were burning through too many tokens.
Claude seems to be getting nerfed every week since we've switched. I wonder how our EVP is feeling now.
Pretty bad decision on his part. I've been telling other engineers within my company who felt threatened by AI that this would happen. That prices would rise and the marginal cost for changes to big codebases would start to exceed the cost of an engineer's salary. API credits are expensive, especially for huge contexts, and sometimes the model will use $200 in credits trying to solve a problem that could be fixed in an hour by a good engineer with enough context.
It kind of reminds me of the joke where a plumber charges $500 for a 5 minute visit. When the client complains the plumber says it's $50 for labor and $450 for knowing how to fix the problem.
A good lesson for all - I always really liked the Picasso version:
In a bustling restaurant, an excited patron recognized the famous artist Picasso dining alone. Seizing the moment, the patron approached Picasso with a simple request. With a plain napkin and a big smile, he asked the artist for a drawing. He promised payment for his troubles. Picasso, ever the creator, didn’t hesitate. From his pocket, he produced a charcoal pencil and he brought to life a stunning sketch of a goat on the napkin—a clear mark of his unique style. Proudly, he presented it to the patron.
The artwork mesmerized the patron, who reached out to take it, only to be stopped by Picasso’s firm hand. “That will be $100,000,” Picasso declared.
Astonished, the patron balked at the sum. “But it took you just a few seconds to draw this!”
With a calm demeanor, Picasso took back the napkin, crumpled it, and tucked it away into his pocket, replying, “No, it has taken me a lifetime.”
A good engineer and/or a tenured engineer could very well be compared to Picasso in this story. A tenured engineer did not spend their entire career drawing that one sketch on the napkin; they delivered other results along the way. But at the end of it, they can deliver a Picasso at a moment's notice.
It actually matches up well with the current AI scene, except backwards. We use these models, which cost ridiculous amounts of money to train, and all of that effort goes into producing the outputs we use, yet we're paying something not far above the marginal cost of inference when we use them.
Extremely applicable to illustrate the difference between people (time is precious, training and experience amortize across a relatively small amount of paid work) and software (can replicate infinitely, time is cheap, startup costs can amortize across billions of hours of paid work).
It seems very unlikely that prices would rise in the long term. Yes, RAM and GPU prices are suddenly going up due to the demand spike and OpenAI's shenanigans, but I doubt it's going to last very long. Some combination of new capacity and reduced demand will most likely put things back on the usual course where this stuff gradually gets cheaper over time. And models are getting better, so next year you can probably get the same results for less compute. That $200 in credits becomes $150, then $100, then....
Competition will prevent that from happening. When anyone can host open models and there is giant demand for LLMs, companies cannot easily raise token prices without sending a lot of traffic to their competitors.
That “with enough context” is doing a lot of work here. If you take a great engineer, drop them in front of an unfamiliar codebase, it’ll take them more than an hour to do most non-trivial tasks.
"Equal" sounds like a terrible bar to set, given all the other problems with replacing engineering thought with AI. I don't know where the line is, but I expect it's far beyond equal, AND there needs to be a level of "this can debug effectively in production" before it makes any sense for a real business case.
Even if you take it as true that prices have risen recently, and may continue to rise as the VC subsidies dry up, they will fall again long-term. Inference will get more power efficient with model-on-chip solutions like Taalas and God willing we will get cheaper and cheaper renewable energy.
Despite this I don't think engineers should feel threatened. As long as there is a need for a human in the loop, as today, there will still be engineering jobs. And if demand for engineering effort is elastic enough, there could easily be even more jobs tomorrow.
Rather than threatened, I think engineers should feel exposed. To danger, yes, but opportunity as well.
I can’t believe how many small to mid size companies are being destroyed by bad decisions like this.
A friend’s company fired all EMs and have engineers reporting to product managers. They aren’t allowed to do refactors because the CTO believes the AI doesn’t need organized code.
CTO is in many cases a rank more than a role, and given out accordingly. You should never take someone seriously based on their rank alone, much less a CTO.
Or more cynically they reach their level of competence, go one level further and stay there to keep them from ruining the productivity of the people doing the work...
These are like $20-50 subs, you’re probably paying your dev a hell of a lot more. Let them use the tools they want. I spend almost all of my time in Emacs or Cursor, but I still haven’t found a database client that I like better than Datagrip.
Hopefully that EVP feels embarrassed that a big bet was made that not only didn't pay off but left the company in a worse position. Some schadenfreude may be all you can expect, since this is an executive.
Wow, that sucks. Getting Claude for everyone wasn’t even the stupid thing, it was thinking that a shiny new hammer meant you could throw away all your wrenches.
lol, dude is so incompetent. Changing tools for cost cutting is so stupid; we all know real cost cutting is firing people. If he's really good at what he's doing, he should just fire 10% of the people and replace them with his Claude. If that doesn't backfire within 3 months, he'll be CTO.
> I think though that the day is coming where I can trust the code it produces and at that point I'll just be writing specs. It's not there yet though.
Must be nice to still have that choice. At the company I work for they've just announced they're cancelling all subscriptions to JetBrains, Visual Studio, Windsurf, etc. and forcing every engineer to use Claude Code as a cost-saving measure. We've been told we should be writing prompts for Claude instead of working in IDEs now.
Honestly while I know everyone needs a job, just speed run all this crap and let the companies learn from making a big unmaintainable ball of mud. Don't make the bad situation work by putting in your good skills to fix things behind the scenes, after hours, etc.
Management has made it very clear that we’re still responsible for the code we push even if the llm wrote it. So there will be no blaming Claude when things fall apart.
I wonder how much cost savings there are in the long term when token prices go up, the average developer's ability to code has atrophied, and the company code bases have turned into illegible slop. I will continue to use LLMs cautiously while working hard to maintain my ability to code in my off time.
Realistically that's an increase of maybe a couple percent of cost per employee. If it truly does end up being a force multiplier, 2-5% more per dev is a bargain. I think it's exceedingly unlikely that LLMs will replace devs for most companies, but it probably will speed up dev work enough to justify at least a single-digit percent increase in per-dev cost.
“speeding up dev work” is pointless to a company. That benefit goes entirely to the developer and does not trickle down well.
You might think “ok, we’ll just push more workload onto the developers so they stay at higher utilization!”
Except most companies do not have endless amounts of new feature work. Eventually devs are mostly sitting idle.
So you think “Ha! Then we’ll fire more developers and get one guy to do everything!”
Another bad idea for several reasons. For one, you are increasing the bus factor. Two, most work being done in companies at any given time is actually maintenance. One dev cannot maintain everything by themselves, even with the help of LLMs. More eyes on stuff means issues get resolved faster, and those eyes need to have real knowledge and experience behind them.
Speed is sexy but a poor trade off for quality code architecture and expert maintainers. Unless you are a company with a literal never ending list of new things to be implemented (very few), it is of no benefit.
Also don’t forget the outrage when Cursor went from $20/month to $200/month and companies quickly cancelled subscriptions…
> Except most companies do not have endless amounts of new feature work. Eventually devs are mostly sitting idle.
At every place I have ever worked (as well as my personal life), the backlog was 10 times longer than anyone could ever hope to complete, and there were untold amounts of additional work that nobody even bothered adding to the backlog.
Some of that probably wouldn't materialize into real work if you could stay more on top of it – some of the things that eventually get dropped from the backlog were bad ideas or would time out of being useful before they got implemented even with higher velocity – but I think most companies could easily absorb a 300% increase or more in dev productivity and still be getting value out of it.
I used to report bugs and read release notes; I was all in on the full-stack debugging capability for Django in PyCharm.
The first signs of trouble (with AI specifically) predated GitHub Copilot, going back to TabNine.
TabNine was the first true demonstration of AI-powered code completion in PyCharm. There was an interview where a JetBrains rep lampooned AI's impact on SWE. I was an early TabNine user, and I was aghast.
A few months later copilot dropped, time passed and now here we are.
It was neat figuring out how I had messed up my implementations. But I would not trade the power of the CLI AI for any *more* years spent painstakingly building products on my own.
I'm using Claude in JetBrains, using the Zed editor's ACP connector.
It's actually pretty slick. And you can expose the JetBrains inspections through its MCP server to the Claude agent. With all the usual JetBrains smarts and code navigation.
Even if you're using Claude, canceling the IDEs might be poor strategy. Steve Yegge points out in his book that the indexing and refactoring tools in IDEs are helpful to AIs as well. He mentions JetBrains in particular as working well with AI. Your company's IDE savings could be offset by higher token costs.
Perhaps it would help if I include the quote, so from Vibe Coding pages 165-166:
> [IDEs index] your code base with sophisticated proprietary analysis and then serve that index to any tool that needs it, typically via LSP, the Language Server Protocol. The indexing capabilities of IDEs will remain important in the vibe coding world as (human) IDE usage declines. Those indexes will help AIs find their way around your code, like they do for you.
> ...It will almost always be easier, cheaper, and more accurate for AI to make a refactoring using an IDE or large-scale refactoring tool (when it can) than for AI to attempt that same refactoring itself.
> Some IDEs, such as IntelliJ, now host an MCP server, which makes their capabilities accessible to coding agents.
Yes, it's fantastic. Hard to imagine a better resource for getting started with vibe coding, on through developing large high-quality projects with it. It doesn't get into the details of particular tools much, so it should stay relevant for a while.
Definitely didn't want to imply that the author is a bad engineer, quite the contrary he seems like a very good one. Apologies if it came across that way.
Just that many brilliant engineers test agentic tools without the same level of thorough understanding that they apply to the other software engineering tools they try out.
Now maybe it’s just my familiarity with Promises, but I look at the third example and I can quickly see an opportunity.
This entire article is built around the author's ignorance and could easily be summarised as "I avoid async/await syntax because I'm more familiar with promises". The author doesn't even appear to understand that async/await is syntactic sugar for promises.
It did not take me long to reach the same conclusion. The article can be summarized as "I am ignorant of the meaning of async/await, thus I don't use it". This is perhaps one incremental improvement on "I am ignorant of async/await, thus I use it poorly". But in the wrong direction.
How would you handle two asynchronous saves which can happen in parallel without using a Promise.all? Don’t think you can…and that’s pretty much the entire point of the article.
Async/await is useless unless you are willing to serialize your calls, defeating the entire point of async code.
> How would you handle two asynchronous saves which can happen in parallel without using a Promise.all?
This question doesn't make sense. Async/await is just a nicer syntax for interacting with promises. So my answer to your "gotcha" question is just:
await Promise.all([..., ...])
There's nothing impure going on here. The majority of the time, async/await can make it much easier to see a code's control flow by getting rid of most of the Promise related cruft and callbacks.
I would call Promise.all a benefit here, as it makes it stand out where I'm doing something in parallel.
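The equivalence being argued here can be shown directly: async/await and promise chains produce the same promises, and `Promise.all` drops into either style unchanged. A minimal sketch, where `fetchUser` and `fetchPosts` are hypothetical stand-ins for async work:

```javascript
// Sketch: the same two-step flow written with promise chains and with
// async/await. Both return identical promises; await is syntax over
// promises, not a different concurrency model.
const fetchUser = (id) => Promise.resolve({ id, name: "Ada" });
const fetchPosts = (user) => Promise.resolve([`post by ${user.name}`]);

// Promise-chain style:
function loadChained(id) {
  return fetchUser(id).then((user) => fetchPosts(user));
}

// async/await style: same control flow, flattened:
async function loadAwaited(id) {
  const user = await fetchUser(id);
  return fetchPosts(user);
}
```

Either function can be combined with `await Promise.all([...])`; the choice of style changes readability, not what runs concurrently.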
I think you’re trying to recreate the semantics of Promise.all without using Promise.all.
You’re effectively saying that Promises are a better async programming paradigm than async/await…which is also what the author is saying in the article.
I'm not saying anything about promises vs async/await. The original comment said that you can't have 2 async things happen in parallel without Promise.all, my code snippet proves that you can.
Node 16 will exit if an exception is thrown and not awaited for some finite period of time. So if your goal is to keep those promises in some cache and then resolve them later on at your leisure, you will find the entire node process will abort. There is a feature flag to restore the older behavior but it’s a pretty big gotcha.
There is no such finite period of time. You can call an async function and never await it.
Exception handling is something completely different. Yes, if you call an async function and do not catch the exception, Node will stop. But that is independent of having called await or not. Whether or not you await something async does not affect exception behavior.
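For concreteness, a minimal sketch of the default behavior in recent Node (unhandled promise rejections have been fatal by default since Node 15; `save` is a hypothetical always-failing function):

```javascript
// Sketch only: illustrates Node's default unhandled-rejection behavior.
// `save` is a hypothetical async function that always fails.
async function save() {
  throw new Error("disk full");
}

// Fire-and-forget with NO handler: once the rejection is reported as
// unhandled, Node (>= 15) exits with ERR_UNHANDLED_REJECTION:
//   save();

// Attaching a handler (or awaiting inside try/catch) keeps the process alive:
const pending = save();
pending.catch((err) => console.log("handled:", err.message));
```

Whether the handler comes from `await` inside a try/catch or from `.catch()` is immaterial; what matters is that some handler is attached before the rejection is reported as unhandled.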
Promises are also just syntactic sugar to make your code look more synchronous; you can do everything with plain callbacks. Which I find ironic, given the article: he argues against that, but still just stops at the next turtle down instead of following his own advice and actually learning how JavaScript and its runtime work.
It's easy to do two things at once when you can ask two different entities to do them for you (threads).
What's hard is thinking about how to coordinate the work they are doing for you: when to consider them done, how to ask them if they did the work successfully, what to do if they need to use the same tool at some point during the work etc.
Languages with threading require learning techniques to use them safely and many, including myself, have learned how.
Even if concurrency is easier to get right on node I'd say the node ecosystem has just layered on complexity in other ways to get to something just as difficult to use overall.
Promises and async/await sugar are only the tip of the iceberg.
It drove me crazy too, until I needed to use Puppeteer which requires you to write async/await (there are Puppeteer implementations in other languages, but they all seem to make compromises I didn't want). Generally speaking, async/await allows you to write code that looks and feels serial. Perhaps try using one of the async libraries for PHP to wrap your mind around the concept of async/await (like https://github.com/spatie/async)
The author even implies in a footnote that switch statements are unusable. I mean, we probably all had painful experience with the “gotcha”, and I appreciate efforts towards safer designs. But I mean, let’s not be ridiculous. It works fine.
What the author wants doesn't exist, because the two saves will not actually run in parallel in any case.
Not with `async save(); async save();`
nor with `Promise.all`
nor with any callback or any other means.
The author is conflating parallel with concurrent programming.
And in (the single-threaded world of) JavaScript the two calls will still occur sequentially.
Consider that 'save()' might do multiple steps under the covers (network, database, localstorage, whatever). Allowing those steps to run interleaved, if necessary (with Promise.all), might be quite different from serializing the 'save()' calls completely in the caller.
So while it is true that neither is truly parallel in the "parallel vs concurrent" sense, it is not true that the "sequential"/"concurrent" execution of both styles has the same performance characteristics.
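That difference can be made concrete. In the sketch below, `delay` stands in for an I/O round trip (both names are hypothetical): the sequential version waits out each `save` in turn, while `Promise.all` lets the two waits overlap on the same single thread.

```javascript
// Sketch: even in single-threaded JS, Promise.all lets the async *steps*
// inside two save() calls interleave, while sequential awaits serialize them.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function save(label) {
  await delay(50); // simulated I/O step; the event loop is free meanwhile
  return label;
}

async function sequential() {
  const t0 = Date.now();
  await save("a"); // second call does not start until the first finishes
  await save("b");
  return Date.now() - t0; // roughly 100ms
}

async function concurrent() {
  const t0 = Date.now();
  await Promise.all([save("a"), save("b")]); // both I/O waits overlap
  return Date.now() - t0; // roughly 50ms
}
```

Nothing here is parallel in the multi-core sense, but the overlapped waits roughly halve the wall-clock time, which is exactly the performance difference the parent comment describes.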
I've clicked that action every single time I've seen it for at least the past year and it still continues to show the very thing I want to see less often. It's absolutely infuriating.
Whenever you have such a complaint - and choose to make it public - please include more details. What system, what browser, add-ons (adblocker) at least.
EDIT: Downvotes? For answering the question truthfully? It does work for me. The question was (is) "is anyone else unable to scroll on this page?" -- and I have no problem scrolling. I answered the question! The question was NOT "do you think the page is designed badly". If that is what was meant, that is why I said "needs more details". All the person asked was about "being able to scroll"! And the design issue does not prevent it. I answered the question that was actually asked! I assumed, and still do, that the person isn't able to scroll at all.
It's got odd scrolling. If the mouse is over the top banner, and you scroll the scroll wheel (or I assume pull down on mobile?) then the page doesn't scroll (this is on Chrome, on Windows 10).
The scrollbar is also hidden behind the top banner in some way, which suggests it's doing something non standard.
Why do people want to fiddle with the most basic UI/X idiom on the net?