The code is also recognizable as slop to those who know what to look for. Not the tropey "Not X, but Y" kind that's super easy to spot, but tons of repetition, deeply nested code, etc.
A counterpoint is that (maybe) nobody cares whether the code is understandable, clean, and maintainable. But the NYT is explicitly in the business of selling ads surrounded by cheap copy just good enough to attract eyeballs. I suspect getting LLMs to write that will be far easier than getting them to maintain large codebases autonomously.
If you explicitly make it go over the code file by file to clean up, fix duplication, and refactor, it'll look much better; meanwhile, no amount of "fix this slop" prompting can fix AI prose.
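For concreteness, here's the kind of mechanical file-by-file cleanup that tends to work well. This is a made-up before/after, not from any real codebase: the deeply nested version with a repeated failure path gets flattened with guard clauses.

```python
# Before: typical slop shape — deep nesting, the same failure path repeated
# three times (hypothetical example for illustration).
def ship_order_before(order):
    if order is not None:
        if order.get("items"):
            if order.get("address"):
                total = 0
                for item in order["items"]:
                    total += item["price"] * item["qty"]
                return {"total": total, "to": order["address"]}
            else:
                return None
        else:
            return None
    else:
        return None

# After: guard clauses remove the nesting and the failure path appears once.
def ship_order_after(order):
    if not order or not order.get("items") or not order.get("address"):
        return None
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"total": total, "to": order["address"]}
```

The behavior is unchanged; only the shape improves, which is exactly the kind of transformation an LLM handles reliably when told to do it explicitly.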
> no amount of "fix this slop" prompting can fix AI prose
What's the evidence for that? What fundamental limitation of these large language models makes them unable to produce natural-sounding prose? A lot of people see ever-increasing amounts of generated, no-effort content on the web as a real threat. You're saying that's impossible.
>What fundamental limitation of these large language models makes them unable to produce natural language?
LLMs can get arbitrarily good at coding by training in a reinforcement learning loop on randomly generated coding problems, with a compiler and unit tests to verify correctness. Prose has no equivalent: there's no way to automatically generate a "human thinks this looks like slop" signal. It fundamentally requires human time, which severely limits throughput compared to fully automatable training signals.
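The automatable signal described above can be sketched roughly like this. It's a toy illustration, not any lab's actual setup: the problem generator, the "candidate solutions", and the reward definition are all made up for the sketch.

```python
import random

def generate_problem(rng):
    # Toy "randomly generated coding problem": sum the values in a list
    # that are at least `cutoff`. The generator also knows the answer,
    # so correctness can be checked automatically.
    cutoff = rng.randint(0, 9)
    data = [rng.randint(-10, 10) for _ in range(8)]
    expected = sum(x for x in data if x >= cutoff)
    return data, cutoff, expected

def reward(candidate_fn, problems):
    # Fully automatic training signal: the fraction of problems where the
    # candidate's output matches the verifier's answer. No human in the loop,
    # so it scales to millions of samples — unlike a "this reads like slop"
    # judgment, which only a human can currently provide.
    passed = 0
    for data, cutoff, expected in problems:
        try:
            if candidate_fn(data, cutoff) == expected:
                passed += 1
        except Exception:
            pass  # a crashing candidate simply earns no reward
    return passed / len(problems)

rng = random.Random(0)
problems = [generate_problem(rng) for _ in range(100)]

# Stand-ins for model outputs: a correct solution and an off-by-one bug.
good = lambda data, cutoff: sum(x for x in data if x >= cutoff)
buggy = lambda data, cutoff: sum(x for x in data if x > cutoff)

print(reward(good, problems))   # 1.0
print(reward(buggy, problems))  # strictly less than 1.0
```

The asymmetry in the comment is visible here: the verifier for code is a few lines and runs in milliseconds, while the analogous verifier for prose quality would be a person.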