
This article is ragebaiting people and it's an embarrassing piece from the NYT.



NYT has it out for digital advertisers, who directly compete with them. I do sense some schadenfreude here that the tech nerds who work at these places might be in trouble.

"Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code."

To copywriters at the NYT: LLMs are far better at stringing together natural language prose than at producing large amounts of valid software. Get ready to supervise LLMs all day if you're not already.


LLMs are much better at coding now than at writing prose that doesn't sound like slop.

The code is also recognizable as slop to those who know what to look for. Not the tropey "Not X, but Y" kind that's super easy to spot, but tons of repetition, deeply nested code, etc.

A counterpoint is that (maybe) nobody cares if the code is understandable, clean and maintainable. But NYT is explicitly in the business of selling ads surrounded by cheap copy just good enough to attract eyeballs. I suspect getting LLMs to write that is going to be far easier than getting LLMs to maintain large code bases autonomously.


>But tons of repetition, deeply nested code, etc.

If you explicitly make it go over the code file by file to clean up, fix duplication and refactor, it'll look much better, while no amount of "fix this slop" prompting can fix AI prose.


> no amount of "fix this slop" prompting can fix AI prose

What's the proof for that? What fundamental limitation of these large language models makes them unable to produce natural language? A lot of people see the high likelihood of ever increasing amounts of generated, no-effort content on the web as a real threat. You're saying that's impossible.


>What fundamental limitation of these large language models makes them unable to produce natural language?

LLMs can get arbitrarily good at coding problems by training in a reinforcement learning loop on randomly generated coding problems, with a compiler and unit tests to verify correctness. On the other hand, there's no way to automatically generate a "human thinks this looks like slop" signal; it fundamentally requires human time, severely limiting throughput compared to fully automatable training signals.
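The asymmetry described above can be made concrete: a coding reward is fully automatable because a test runner acts as the judge. A minimal sketch of such a verifier (the name `code_reward` and the structure are illustrative, not from any particular RL framework):

```python
import os
import subprocess
import sys
import tempfile

def code_reward(candidate_src: str, test_src: str, timeout: float = 5.0) -> float:
    """Automated reward signal: 1.0 if the candidate code passes its unit
    tests, else 0.0. No human needs to be in the loop, so this scales to
    millions of generated problems."""
    # Write the candidate solution plus its tests into one temporary script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_src + "\n" + test_src)
        path = f.name
    try:
        # Run in a subprocess; a nonzero exit code or a hang means failure.
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0
    finally:
        os.unlink(path)

# A generated solution and its auto-generated tests:
solution = "def add(a, b):\n    return a + b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
```

There is no analogous program that scores "this prose reads like slop" without a human rater, which is the throughput gap the comment points to.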




