"Notably, increases in codebase size are a major determinant of increases in static analysis warnings and code complexity, and absorb most variance in the two outcome variables. However, even with strong controls for codebase size dynamics, the adoption of Cursor still has a significant effect on code complexity, leading to a 9% baseline increase on average compared to projects in similar dynamics but not using Cursor."
They're measuring development speed through lines of code. For that to be a valid measure, they'd first need to show that AI and humans use the same number of lines to solve the same problem. That hasn't been my experience at all. AI is incredibly verbose.
Then there's the question of whether LoC is a reliable proxy for velocity at all. The common belief among developers is that it's not.
This is actually one thing I have found LLMs surprisingly useful for.
I give them a code base which has one or two orders of magnitude of bloat, and ask them to strip it away iteratively. What I'm left with usually does the same thing.
At this point the code base becomes small enough to navigate and study. Then I use it for reference and build my own solution.
Uh huh... but the data in Andrej's visualizer shows the software-development growth outlook at 15% (much faster than average).
Over the past year (where Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2]
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
This is fantasy completely disconnected from reality.
Have you ever tried writing tests for spaghetti code? It's hell compared to testing good code. LLMs require a very strong test harness or they're going to break things.
Have you tried reading and understanding spaghetti code? How do you verify it does what you want, and none of what you don't want?
Many code design techniques were created to make things easy for humans to understand. That understanding needs to be there whether you're modifying it yourself or reviewing the code.
Developers are struggling because they know what happens when you have 100k lines of slop.
If things keep speeding in this direction we're going to wake up to a world of pain in 3 years and AI isn't going to get us out of it.
Even pre-AI, I've found much more utility in a good suite of integration tests than in unit tests. For instance, if you're building a test harness for an API, you don't even need access to the code if you're writing tests against the API surface itself.
I do too, but it comes from a bang-for-your-buck and not a test coverage standpoint. Test coverage goes up in importance as you lean more on AI to do the implementation IMO.
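The black-box approach described above can be sketched in a few lines. This is a minimal, self-contained illustration, not a real service: the stub API, its `/users` endpoints, and the payload shapes are all invented for the example. The point is that the test exercises only the HTTP surface and asserts on observable behaviour, never touching the implementation.

```python
# Black-box integration test sketch: the test talks only to the HTTP API
# surface. The in-process stub server below is a stand-in for whatever
# real service the tests would actually point at.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {}  # stand-in storage inside the stub service

class StubAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        uid = str(len(USERS) + 1)
        USERS[uid] = body["name"]
        self._reply(201, {"id": uid, "name": body["name"]})

    def do_GET(self):
        uid = self.path.rsplit("/", 1)[-1]
        if uid in USERS:
            self._reply(200, {"id": uid, "name": USERS[uid]})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, status, payload):
        data = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # silence request logging during tests
        pass

server = HTTPServer(("127.0.0.1", 0), StubAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# The actual integration test: create a user, then fetch it back,
# asserting only on what the API surface returns.
req = urllib.request.Request(
    f"{base}/users",
    data=json.dumps({"name": "alice"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
created = json.loads(urllib.request.urlopen(req).read())
fetched = json.loads(urllib.request.urlopen(f"{base}/users/{created['id']}").read())
assert fetched["name"] == "alice"
server.shutdown()
```

Because nothing here depends on internals, the same test survives a full rewrite of the service, which is exactly the property that matters if an AI is doing the implementation.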
I mean, real algo trading shops use "AI" to do it all the time, they just don't use LLMs. While I'm not the GP, I think the idea they're trying to express is that the nuts-and-bolts structuring of programs is going away. The engineer of today, according to this claim (and similar to Karpathy's Software 3.0 idea), structures their work in terms of blocks of intelligence and uses those blocks to construct programs. Nothing stops Claude Code or another LLM coding harness from generating the scaffolding for a time-series model and then letting the author refactor the model and its hyperparameters as needed to achieve fit.
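To make the scaffolding idea concrete, here's a hedged sketch of what such generated scaffolding might look like: a bare-bones moving-average crossover backtest where the hyperparameters (the two window lengths) are the knobs the author would refactor to achieve fit. The strategy, prices, and numbers are purely illustrative, not a real trading system.

```python
# Illustrative scaffold: a time-series strategy skeleton with exposed
# hyperparameters (fast/slow window lengths) that a human would tune.

def moving_average(series, window):
    """Trailing simple moving average; None until enough history exists."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

def backtest(prices, fast=3, slow=5):
    """Hold long for one unit whenever the fast MA is above the slow MA
    (decided on the prior bar); return total close-to-close P&L."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    pnl = 0.0
    for i in range(1, len(prices)):
        f, s = fast_ma[i - 1], slow_ma[i - 1]
        if f is not None and s is not None and f > s:
            pnl += prices[i] - prices[i - 1]  # held long over this step
    return pnl

prices = [100, 101, 103, 102, 105, 107, 106, 108]
print(backtest(prices, fast=3, slow=5))  # → 3.0
```

The LLM harness would generate this kind of structure; the `fast`/`slow` parameters (and the model itself) are what the author then iterates on.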
Though I don't know of any algo trading shop that relies purely on algorithms, as market regimes change frequently and the alpha of any new edge quickly gets competed away.
(And personally I'm a believer of the jagged intelligence theory of LLMs where there's some tasks that LLMs are great at and other tasks that they'll continue being idiotic at for a while, and think there's plenty of work left for nuts and bolts program writers to do.)
My trading agent builds its own models, does backtesting, and builds tools for real-time analysis and trading. I wrote zero of the code; I haven't even seen the code. The only thing I make sure of is that it's continuously self-improving (since I haven't been able to figure out how to automate that yet).
If an LLM could trade profitably, why wouldn't the creators use it themselves instead of releasing it? It'd be by far the most profitable thing they could do.
The problem with AI writing isn't its style, it's the content.
It's full of fluff. Analogies that sound like something a 12-year-old would make, but make no sense when you stop to think about them.
It's full of baloney that the author didn't even intend to communicate.
That's where the "soulless" part comes from. There's no consistent mind behind the writing, with opinions of its own formulated into one coherent framework it's trying to convey. It's just a mishmash of BS that only superficially resembles such a thing, made to trick us.
In fact that was their first and only contribution.
Weird.