
A month ago the company I work at with over 400 engineers decided to cancel all IDE subscriptions (Visual Studio, JetBrains, Windsurf, etc.) and move everyone over to Claude Code as a "cost-saving measure" (along with firing a bunch of test engineers). There was no migration plan - the EVP of Technology just gave a demo showing 2 greenfield projects he'd built with Claude Opus over a weekend and told everyone to copy how he worked. A week later the EVP had to send out an email telling people to stop using Opus because they were burning through too many tokens.

Claude seems to be getting nerfed every week since we've switched. I wonder how our EVP is feeling now.


Pretty bad decision on his part. I've been telling other engineers within my company who felt threatened by AI that this would happen. That prices would rise and the marginal cost for changes to big codebases would start to exceed the cost of an engineer's salary. API credits are expensive, especially for huge contexts, and sometimes the model will use $200 in credits trying to solve a problem that could be fixed in an hour by a good engineer with enough context.

It kind of reminds me of the joke where a plumber charges $500 for a 5 minute visit. When the client complains the plumber says it's $50 for labor and $450 for knowing how to fix the problem.


A good lesson for all - I always really liked the Picasso version:

In a bustling restaurant, an excited patron recognized the famous artist Picasso dining alone. Seizing the moment, the patron approached Picasso with a simple request. With a plain napkin and a big smile, he asked the artist for a drawing. He promised payment for his troubles. Picasso, ever the creator, didn’t hesitate. From his pocket, he produced a charcoal pencil and he brought to life a stunning sketch of a goat on the napkin—a clear mark of his unique style. Proudly, he presented it to the patron.

The artwork mesmerized the patron, who reached out to take it, only to be stopped by Picasso’s firm hand. “That will be $100,000,” Picasso declared.

Astonished, the patron balked at the sum. “But it took you just a few seconds to draw this!”

With a calm demeanor, Picasso took back the napkin, crumpled it, and tucked it away into his pocket, replying, “No, it has taken me a lifetime.”




A good engineer and/or a tenured engineer could very well be compared to Picasso in this story. A tenured engineer did not just spend their entire career drawing that sketch on the napkin; they delivered other results too. But at the end of it, they are able to deliver a Picasso at a moment's notice.

It actually matches up well with the current AI scene, except backwards. We use these models, which cost ridiculous amounts of money to train, and all of that effort goes into producing the outputs we use, but we're paying something not too far above the marginal cost of inference when we use them.

So not applicable at all

Extremely applicable to illustrate the difference between people (time is precious, training and experience amortize across a relatively small amount of paid work) and software (can replicate infinitely, time is cheap, startup costs can amortize across billions of hours of paid work).

It seems very unlikely that prices would rise in the long term. Yes, RAM and GPU prices are suddenly going up due to the demand spike and OpenAI's shenanigans, but I doubt it's going to last very long. Some combination of new capacity and reduced demand will most likely put things back on the usual course where this stuff gradually gets cheaper over time. And models are getting better, so next year you can probably get the same results for less compute. That $200 in credits becomes $150, then $100, then....

>That prices would rise

Competition will prevent that from happening. When anyone can host open models and there is giant demand for LLMs companies can not easily raise token prices without sending a lot of traffic to their competitors.


> When anyone can host open models

They'd still need to pay the actual power costs.


I didn't say that inference would be free, but that everything needed to do inference is a commodity, which means that competition is easy.

> the model will use $200 in credits trying to solve a problem that could be fixed in an hour by a good engineer with enough context

So the price for fixing the problem is equal. Sounds like a great argument for AI.


99% of software developers earn less than 200 USD an hour

That “with enough context” is doing a lot of work here. If you take a great engineer and drop them in front of an unfamiliar codebase, it’ll take them more than an hour to do most non-trivial tasks.

Most good engineers are way cheaper than that. The world is bigger than the United States.

Equal sounds like a terrible argument given all the other problems with replacing engineering thought with AI. I don't know where the line is, but I expect it's far beyond equal, AND there needs to be a level of "this can debug effectively in production" before that makes any sense for a real business case.

Even if you take it as true that prices have risen recently, and may continue to rise as the VC subsidies dry up, they will fall again long-term. Inference will get more power efficient with model-on-chip solutions like Taalas and God willing we will get cheaper and cheaper renewable energy.

Despite this I don't think engineers should feel threatened. As long as there is a need for a human in the loop, as today, there will still be engineering jobs. And if demand for engineering effort is elastic enough, there could easily be even more jobs tomorrow.

Rather than threatened, I think engineers should feel exposed. To danger, yes, but opportunity as well.


Increased demand will not drive down energy costs.

Of course not necessarily, but I keep seeing articles about how wind and especially solar power just keep getting cheaper.

Why not?

I can’t believe how many small to mid size companies are being destroyed by bad decisions like this.

A friend’s company fired all EMs and have engineers reporting to product managers. They aren’t allowed to do refactors because the CTO believes the AI doesn’t need organized code.


How do people like that ascend to CTO?

CTO is in many cases a rank more than a role, and given out accordingly. You should never take someone seriously based on their rank alone, much less a CTO.


Or, more cynically, they reach their level of competence, get promoted one level further, and stay there to keep them from ruining the productivity of the people doing the work...

He must be feeling pretty good, after all he still believes that it was the right call, and he definitely won't be admitting a mistake.

There's 0 chance of him facing the consequences for it either.


But cancelling IDE subscriptions? You need a proper IDE alongside AI-augmented development unless you want to simply be along for the ride.

Well, you can resubscribe in an afternoon. The fired workers? No real recovery from that.

`git diff` is probably all you need.

Free VS Code is probably fine

I'm using the JetBrains IDE's and it's definitely worth paying for, even in the age of AI.

These are like $20-50 subs, you’re probably paying your dev a hell of a lot more. Let them use the tools they want. I spend almost all of my time in Emacs or Cursor, but I still haven’t found a database client that I like better than Datagrip.

A database client better than Datagrip is a tough one, yet I'm attempting to do just that [1] :).

I'm in month 4 of development, working on it full-time.

[1] https://seaquel.app


Hopefully that EVP feels embarrassed that a big bet was made that not only didn't pay off but left the company in a worse position. Some schadenfreude may be all you can expect, since this is an executive.

Wow, that sucks. Getting Claude for everyone wasn’t even the stupid thing, it was thinking that a shiny new hammer meant you could throw away all your wrenches.

Should have started slowly instead of being so aggressive with it.

lol. dude is so incompetent. Changing tools for cost cutting is so stupid; we all know real cost cutting is firing people. If he's really good at what he's doing, he'd just fire 10% of the people and replace them with his Claude. If that didn't backfire in 3 months, he'd be CTO.

Wow, that sounds like you have an astoundingly terrible EVP.

>I think though that the day is coming where I can trust the code it produces and at that point I'll just be writing specs. It's not there yet though.

Must be nice to still have that choice. At the company I work for they've just announced they're cancelling all subscriptions to JetBrains, Visual Studio, Windsurf, etc. and forcing every engineer to use Claude Code as a cost-saving measure. We've been told we should be writing prompts for Claude instead of working in IDEs now.


This is completely insane, and that's coming from someone who does 95% of edits in Claude Code now.


Honestly while I know everyone needs a job, just speed run all this crap and let the companies learn from making a big unmaintainable ball of mud. Don't make the bad situation work by putting in your good skills to fix things behind the scenes, after hours, etc.


Management has made it very clear that we’re still responsible for the code we push even if the llm wrote it. So there will be no blaming Claude when things fall apart.


My personal line is they can't say that if you force devs to use LLMs "and be quick about it"


I wonder how much cost savings there are in the long term when token prices go up, the average developer's ability to code has atrophied, and the company code bases have turned into illegible slop. I will continue to use LLMs cautiously while working hard to maintain my ability to code in my off time.


You shouldn't have to maintain your ability to code in your off time. Is your company one of those that's requiring AI only coding?


That’s going to give you all a ton of job security in a year when we realize that prompt first yields terrible results for maintainability.


Or they fire the existing staff who prompted this mess and bring in McKinsey to glue the mess together


That's insane!


I hope they are prepared to pay the $500/month per head when subsidies expire.


Realistically that's an increase of maybe a couple percent of cost per employee. If it truly does end up being a force multiplier, 2-5% more per dev is a bargain. I think it's exceedingly unlikely that LLMs will replace devs for most companies, but it probably will speed up dev work enough to justify at least a single-digit percent increase in per-dev cost.


“speeding up dev work” is pointless to a company. That benefit goes entirely to the developer and does not trickle down well.

You might think “ok, we’ll just push more workload onto the developers so they stay at higher utilization!”

Except most companies do not have endless amounts of new feature work. Eventually devs are mostly sitting idle.

So you think “Ha! Then we’ll fire more developers and get one guy to do everything!”

Another bad idea for several reasons. For one, you are increasing the bus factor. Two, most work being done in companies at any given time is actually maintenance. One dev cannot maintain everything by themselves, even with the help of LLMs. More eyes on stuff means issues get resolved faster, and those eyes need to have real knowledge and experience behind them.

Speed is sexy but a poor trade off for quality code architecture and expert maintainers. Unless you are a company with a literal never ending list of new things to be implemented (very few), it is of no benefit.

Also don’t forget the outrage when Cursor went from $20/month to $200/month and companies quickly cancelled subscriptions…


> Except most companies do not have endless amounts of new feature work. Eventually devs are mostly sitting idle.

At every place I have ever worked (as well as my personal life), the backlog was 10 times longer than anyone could ever hope to complete, and there were untold amounts of additional work that nobody even bothered adding to the backlog.

Some of that probably wouldn't materialize into real work if you could stay more on top of it – some of the things that eventually get dropped from the backlog were bad ideas or would time out of being useful before they got implemented even with higher velocity – but I think most companies could easily absorb a 300% increase or more in dev productivity and still be getting value out of it.


Where I work, if you literally implemented everything in the backlog as is you’d fuck everything up.

Things in a backlog are not independent units of work ready to go, there are nasty dependencies and unresolved questions that cross domains.


I’ve heard estimates starting at 2k a month per person, and that’s for the “normal” user base


They'll just skip raises and say it's part of your comp for increasing your productivity or some tone-deaf BS


Well, Visual Studio Code + Claude Code is better than the other options.


Thoughts and prayers.


Isn't Visual Studio a one time purchase?


I didn't renew JetBrains this month. I've been a loyal customer, and I would have quit jobs from 2008 onwards without it.


Me too.

I used to report bugs and read release notes; I was all in on the full-stack debug capability for Django in PyCharm.

The first signs of trouble (with AI specifically) predated GitHub Copilot, going back to TabNine.

TabNine was the first true demonstration of AI-powered code completion in PyCharm. There was an interview where a JetBrains rep lampooned AI’s impact on SWE. I was an early TabNine user, and was aghast.

A few months later copilot dropped, time passed and now here we are.

It was neat figuring out how I had messed up my implementations. But I would not trade the power of the CLI AI for any *more* years spent painstakingly building products on my own.

I’m glad I learned when I did.


I'm using Claude in JetBrains, using the Zed editor's ACP connector.

It's actually pretty slick. And you can expose the JetBrains inspections through its MCP server to the Claude agent. With all the usual JetBrains smarts and code navigation.


Fwiw, IntelliJ at least has an MCP server so coding agents can use the refactoring tools. Don't know about the other JetBrains IDEs.


Even if you're using Claude, canceling the IDEs might be poor strategy. Steve Yegge points out in his book that the indexing and refactoring tools in IDEs are helpful to AIs as well. He mentions JetBrains in particular as working well with AI. Your company's IDE savings could be offset by higher token costs.


Perhaps it would help if I include the quote, so from Vibe Coding pages 165-166:

> [IDEs index] your code base with sophisticated proprietary analysis and then serve that index to any tool that needs it, typically via LSP, the Language Server Protocol. The indexing capabilities of IDEs will remain important in the vibe coding world as (human) IDE usage declines. Those indexes will help AIs find their way around your code, like they do for you.

> ...It will almost always be easier, cheaper, and more accurate for AI to make a refactoring using an IDE or large-scale refactoring tool (when it can) than for AI to attempt that same refactoring itself.

> Some IDEs, such as IntelliJ, now host an MCP server, which makes their capabilities accessible to coding agents.


Would you recommend that book?


Yes, it's fantastic. Hard to imagine a better resource for getting started with vibe coding, on through developing large high-quality projects with it. It doesn't get into the details of particular tools much, so it should stay relevant for a while.


This account constantly posts LLM-generated comments.


If you don't like it, flag the comment as per the guidelines: https://news.ycombinator.com/newsguidelines.html#generated


A "you're holding it wrong" with the implication that the author is a bad engineer as the cherry on top. Brilliant stuff.


Definitely didn't want to imply that the author is a bad engineer, quite the contrary he seems like a very good one. Apologies if it came across that way.

Just that many brilliant engineers test agentic tools without the same level of thorough understanding that they give to the other software engineering tools they try out.


   > …I did a bit of digging…
I didn't do any digging.

   > …*he* seems like a very good one…
But I did some scrolling (to the bottom of the blog post). What would you bet that "Gabriella" is probably a she? ;)


Now maybe it’s just my familiarity with Promises, but I look at the third example and I can quickly see an opportunity.

This entire article is built around the author's ignorance and could easily be summarised as "I avoid async/await syntax because I'm more familiar with promises". The author doesn't even appear to understand that async/await is syntactic sugar for promises.


It did not take me long to reach the same conclusion. The article can be summarized as "I am ignorant of the meaning of async/await, thus I don't use it". This is perhaps one incremental improvement over "I am ignorant of async/await, thus I use it poorly". But in the wrong direction.


I didn’t read the article like that at all.

How would you handle two asynchronous saves which can happen in parallel without using a Promise.all? Don’t think you can…and that’s pretty much the entire point of the article.

Async/await is useless unless you are willing to serialize your calls defeating the entire point of async code.


Just because you use async/await doesn't mean you can't use Promise.all.

In fact, my immediate intuition with the await examples was to parallelize with Promise.all.

    await Promise.all([/* build promises */]);


Yeah they had that in the post.


> How would you handle two asynchronous saves which can happen in parallel without using a Promise.all?

This question doesn't make sense. Async/await is just a nicer syntax for interacting with promises. So my answer to your "gotcha" question is just:

  await Promise.all([..., ...])
There's nothing impure going on here. The majority of the time, async/await can make it much easier to see a code's control flow by getting rid of most of the Promise related cruft and callbacks.

I would call Promise.all a benefit here, as it makes it stand out where I'm doing something in parallel.
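For illustration, a minimal runnable sketch (`delay` is a stand-in for any real async call such as a fetch or a database write):

```javascript
// Stand-in for a real async operation: resolves with `value` after `ms`.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function main() {
  const start = Date.now();
  // Both operations run concurrently under Promise.all, so total
  // wall time is roughly the max of the two, not the sum.
  const [foo, bar] = await Promise.all([
    delay(100, 'foo'),
    delay(100, 'bar'),
  ]);
  console.log(foo, bar, Date.now() - start); // elapsed ~100ms, not ~200ms
}

main();
```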


  const x = somethingAsync();
  const y = somethingAsyncToo();

  return { foo: await x, bar: await y }
There is no point in returning one before the other because you need both?


I think you’re trying to recreate the semantics of Promise.all without using Promise.all.

You’re effectively saying that Promises are a better async programming paradigm than async/await…which is also what the author is saying in the article.


I'm not saying anything about promises vs async/await. The original comment said that you can't have 2 async things happen in parallel without Promise.all, my code snippet proves that you can.


But in JavaScript, these two awaits will not happen in parallel, you really need to await Promise.all() for that.


You've really spectacularly missed the point of that example.

Both those promises start, and both are waited for after both have started.

That is the same as Promise.all... There's just an explicit order for the awaits, rather than handling them as they resolve, but the result is the same.

Now, Promise.any... you'd have a point...


This is correct, I wasn't paying attention.


In Javascript, a Promise is started as soon as it is created. In other words, this is not the `await` that starts the Promise.

If the first await is the slowest, the second one will return immediately (like calling .then on an already resolved promise).
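A small sketch of that eagerness (`delay` is a hypothetical stand-in for real async work):

```javascript
// Stand-in for a real async operation: resolves with `value` after `ms`.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function main() {
  const start = Date.now();
  const slow = delay(100, 'slow'); // the timer starts here, at creation
  const fast = delay(10, 'fast');  // this one also starts now
  const a = await slow;            // resolves after ~100ms
  const b = await fast;            // already settled; returns immediately
  console.log(a, b, Date.now() - start); // elapsed ~100ms, not ~110ms
}

main();
```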


+1 Notably, this is different in Python, where coroutines are executed lazily, i.e. only when you await them (unless you wrap them in a task first).


Pretty sure there can be unexpected behavior if you wait too long before you do those awaits at the end.


No, why would there be?


Node 16 will exit if an exception is thrown and not awaited for some finite period of time. So if your goal is to keep those promises in some cache and then resolve them later on at your leisure, you will find the entire node process will abort. There is a feature flag to restore the older behavior but it’s a pretty big gotcha.


There is no such finite period of time. You can call an async function and never await it.

Exception handling is something completely different. Yes, if you call an async function and do not catch the exception, Node will stop. But that is independent of having called await or not. Whether or not you await something async does not affect exception behavior.
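One way to observe the behavior under discussion, as a sketch: assuming Node 15+, an unhandled promise rejection terminates the process by default (configurable via `--unhandled-rejections`). Registering an `unhandledRejection` handler here only lets the demo observe the event instead of crashing.

```javascript
// Observe, rather than crash on, an unhandled rejection.
process.on('unhandledRejection', (reason) => {
  console.log('unhandled rejection:', reason.message);
});

async function save() {
  throw new Error('disk full'); // hypothetical failure
}

save(); // fire-and-forget: nothing ever awaits or .catch()es this

setTimeout(() => console.log('still running'), 50);
```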


Promises are also just syntactic sugar to make your code look more synchronous; you can do everything with plain callbacks. Which I find ironic, given the article: he argues against one layer of abstraction but still just stops at the next turtle, instead of following his own advice and actually learning how JavaScript and its runtime work.
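A sketch of that layering (`readCb` is a hypothetical callback-style API; the promise wrapper adds no new runtime capability, just a convention on top of callbacks):

```javascript
// A hypothetical callback-style async API.
function readCb(key, cb) {
  setTimeout(() => cb(null, key.toUpperCase()), 10);
}

// The same API wrapped in a promise: rejection maps the error
// argument, resolution maps the value argument.
function readP(key) {
  return new Promise((resolve, reject) => {
    readCb(key, (err, val) => (err ? reject(err) : resolve(val)));
  });
}

// Same operation, three styles:
readCb('abc', (err, val) => console.log('callback:', val));
readP('abc').then((val) => console.log('promise:', val));
(async () => console.log('await:', await readP('abc')))();
```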


Not really correct.

    const fooP = fetch(a)
    const bar = await fetch(b)
    const foo = await fooP


Wait, why would you do that? What's the point?


Yup - I prefer async/await, but that was actually a good example of optimizing multiple promises that I had not thought of before.


I avoid JavaScript outright because async/await/promises are confusing to me. I blame it on being a PHP programmer who likes things to run serially.


I felt the same way coming from a threaded language.

Learning the event loop, then promises, then async/await is a must. Today, you probably should throw typescript on top.

A steep learning curve just to get back to a typed language that can do things concurrently.

You do get used to it, but it is a mess of stuff.


Threads are their own steep learning curve, I think it's just hard to do two things at once.


It's easy to do two things at once when you can ask two different entities to do them for you (threads).

What's hard is thinking about how to coordinate the work they are doing for you: when to consider them done, how to ask them if they did the work successfully, what to do if they need to use the same tool at some point during the work etc.


This is ridiculous. Handling real threads is much more complicated than handling async calls and the event loop of JavaScript.


Languages with threading require learning techniques to use them safely and many, including myself, have learned how.

Even if concurrency is easier to get right on node I'd say the node ecosystem has just layered on complexity in other ways to get to something just as difficult to use overall.

Promises and async/await sugar are only the tip of the iceberg.


/r/gatekeeping


It drove me crazy too, until I needed to use Puppeteer which requires you to write async/await (there are Puppeteer implementations in other languages, but they all seem to make compromises I didn't want). Generally speaking, async/await allows you to write code that looks and feels serial. Perhaps try using one of the async libraries for PHP to wrap your mind around the concept of async/await (like https://github.com/spatie/async)


Hyperscript can help with this. https://hyperscript.org/

Makes using a bit of JavaScript relatively simple; there's just not much on Stack Exchange yet, which means reading the docs.


The author even implies in a footnote that switch statements are unusable. I mean, we probably all had painful experience with the “gotcha”, and I appreciate efforts towards safer designs. But I mean, let’s not be ridiculous. It works fine.


Agreed, I stopped reading after this sentence.


What the author wants is something like this:

    async {
        save()
        save()
    } catch (Exception e) {
        console.log("Handle error")
    }
async does not deliver this at all.
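For what it's worth, a rough standard-JS approximation of that hypothetical block, with a stand-in `save`:

```javascript
// Hypothetical save() standing in for the example above.
const save = () => Promise.resolve('saved');

async function saveBoth() {
  try {
    // Both saves start immediately and are awaited together; a
    // rejection from either lands in the single catch below.
    await Promise.all([save(), save()]);
  } catch (e) {
    console.log('Handle error:', e.message);
  }
}

saveBoth();
```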


what the author wants doesn't exist, because the two saves will not actually run in parallel in any case. Not with `async save(); async save();`, nor with `Promise.all`, nor with any callback or any other means.

the author is conflating parallel with concurrent programming.

and in (the mono-threaded world of) javascript the two calls will still occur sequentially.


This is only partly true, but misses the point.

Consider that 'save()' might do multiple steps under the covers (network, database, localstorage, whatever). Allowing those steps to run interleaved, if necessary (with Promise.all), might be quite different from serializing the 'save()' calls completely in the caller.

So while it is true that neither is truly parallel in the "parallel vs concurrent" sense, it is not true that the "sequential"/"concurrent" execution of both styles has the same performance characteristics.
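A sketch of that interleaving: each hypothetical `save` awaits two steps, with `delay` standing in for I/O. Run serially, the second save's steps start only after the first finishes; under Promise.all the steps interleave on the single-threaded event loop.

```javascript
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// A save with two awaited steps (e.g. network round-trip, then db write).
async function save(label, log) {
  log.push(`${label}:step1`);
  await delay(20);
  log.push(`${label}:step2`);
  await delay(20);
}

async function main() {
  const serial = [];
  await save('a', serial);
  await save('b', serial);
  console.log(serial.join(' ')); // a:step1 a:step2 b:step1 b:step2

  const interleaved = [];
  await Promise.all([save('a', interleaved), save('b', interleaved)]);
  console.log(interleaved.join(' ')); // a:step1 b:step1 a:step2 b:step2
}

main();
```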


There is a growing trend where people just memorize how to do things instead of understanding what those things actually do.


Corporate greed knows no bounds.


I've clicked that action every single time I've seen it for at least the past year and it still continues to show the very thing I want to see less often. It's absolutely infuriating.


A wonderful surprise to see Phaser on that list!


I love this idea, congratulations on the launch.


Off-topic but is anyone else unable to scroll on this page?


Works for me.

Whenever you have such a complaint - and choose to make it public - please include more details. What system, what browser, add-ons (adblocker) at least.

EDIT: Downvotes? For answering the question truthfully? It does work for me. The question was (is) "is anyone else unable to scroll on this page?" -- and I have no problem scrolling. I answered the question! The question was NOT "do you think the page is designed badly". If that is what was meant, that is why I said "needs more details". All the person asked was about "being able to scroll"! And the design issue does not prevent it. I answered the question that was actually asked! I assumed, and still do, that the person isn't able to scroll at all.


It's got odd scrolling. If the mouse is over the top banner, and you scroll the scroll wheel (or I assume pull down on mobile?) then the page doesn't scroll (this is on Chrome, on Windows 10).

The scrollbar is also hidden behind the top banner in some way, which suggests it's doing something non standard.

Why do people want to fiddle with the most basic UI/X idiom on the net?


There doesn't seem to be a lot of critical thinking done by developers at times.

They'll read a blog post, echo chamber the benefits over some kale juice and start hacking away.

