Hacker News | lelanthran's comments

I've got a few projects I've generated, along with a wholly handwritten project started in Dec.

The difference I've noticed is that the act of actually typing out the code made me backtrack a few times, refining possible solutions before even starting the integration tests, sometimes before even doing a compile.

When generating, the LLM never backtracked, even in the face of broken tests. It would proceed to continue band-aiding until everything passed. It would add special exceptions to general code instead of determining that the general rule should be refined or changed.

The reason some devs are reporting 10x productivity is that a bunch of duct-taped, band-aided, instant-legacy code is acceptable to them. Others, who don't see that level of productivity increase, are spending time fixing the code into something they can read.

Not sure yet if accepting the spaghetti is the right course. If future LLMs can understand this spaghetti, then there's no point in good code. If we still need human coders, then the productivity increase is very small.


> It would add special exceptions to general code instead of determining that the general rule should be refined or changed.

That is pretty bad.


> AI means that you cannot defer software design until you've written half the code; you cannot defer documentation to random notes at the end.

> It has the effect of finally forcing people to think about the software they're making,

Ah, and all this time I was reliably assured that waterfall, design-upfront, was a broken process...


Single-iteration waterfall is a broken process. You really need those late-stage usage feedback signals, unless your requirements were somehow captured by God.

That's accurate. We are a terrible market.

Unfortunately I have a launch planned soon for a dev B2B product. I'm hoping that the combination of many months of non-AI-coded work, plus separating the docs intended for LLMs from the docs intended for humans, will break through the noise ceiling.

But, you know, maybe I should have just vibed it in a week and crossed my fingers.


Good luck!

> Is your barometer of success the acquisition of cash?

Who cares? In the context of layoffs, it's the definition the company uses that matters.

Your definition of success has no bearing on whether the company is going to do layoffs or not. The company's definition does.


> lol certifications for a proprietary model stack are not worth the storage or paper

Are you sure? What about all those AWS, Azure, etc certifications that many places require their engineers to have?


> I think it’s pretty clear what the purpose of this stuff is: get people so invested into the Claude ecosystem with certs and “modernization kits”, so that when the subsidies end and subscription costs shoot up they feel they’re in too deep now to switch to something cheaper.

It worked for cloud services :-)


Did it? AWS seems to be getting cheaper over time, not more expensive.

> Did it? AWS seems to be getting cheaper over time, not more expensive.

It was cheaper before they started issuing certifications; then it got more expensive.


Do you have a source for that? Certainly things like compute and other services that I'm aware of are objectively cheaper, so I'm curious what has gone up.

> And smart people usually have moral convictions.

Dumb people have moral convictions. Smart people see the nuance.


Yeah, I use Slime in vim to drive programs (like psql) via their stdin/stdout, so an "agent" that does stdin/stdout only for UI is perfect.

If I ever write my own agent, it will be in this fashion.

-----------------

[1] I have a `scratchpad.sql` file filled with whatever sql snippets I am testing and have `psql mydbname` in a vertical split. Doing C-c C-c in the scratchpad sends the paragraph to the psql instance.
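
A minimal vim config for this setup might look like the following. This is a sketch, not the commenter's actual config: it assumes the vim-slime plugin is installed and uses its built-in `vimterminal` target and default C-c C-c mapping.

```vim
" ~/.vimrc -- minimal vim-slime sketch (assumes the vim-slime plugin is installed)
let g:slime_target = "vimterminal"    " send text to a vim :terminal buffer

" Workflow:
"   :vert term psql mydbname          " open psql in a vertical terminal split
"   edit scratchpad.sql in the other split
"   C-c C-c sends the current paragraph to the psql terminal
```

The first C-c C-c prompts for the target terminal buffer; after that it keeps sending to the same one.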


Well, their position on AI.

By their own accounts they are just pressing enter.


This is a very one-sided article, unashamedly so.

Where are the references to the decline in quality and the embarrassing outages at Amazon, Microsoft, etc.?


Everything you read is in service of someone's business model.

What’s your point? Journalists have jobs?

Journalists are like our own Simon Willison: they need to put food on their plate by networking with powerful entities that fly them out to conferences.

The NYT doesn't like digital advertisers, or the programmers who make them possible. They're in direct competition.

Do we know that it decreased the quality, or did it introduce more opportunities for bugs simply by increasing the velocity? If every commit has a fixed probability of containing a bug, you'll run into more bugs in a week by going faster.

> Do we know that it decreased the quality, or introduced more opportunities for bugs by simply increasing the velocity?

That's an easy question to answer: look at outages per feature released.

You may instead be looking at outages per LOC written.


AI is constantly trying to introduce bugs into my code. I've started disabling it when I know exactly where I'm going with the code, because the AI is often a lot more confused than I am about where the code is going.

Do we know it increased the velocity, and didn't just churn out more slop?

Even before AI the limiting factor on all of the teams I ever worked on was bad decisions, not how much time it took to write code. There seem to be more of those these days.

