Hacker News | SCdF's comments

Yeah, I couldn't be bothered getting my accurate chest strap out, but my watch (which is generally very close to the strap) was anywhere from 10-20 bpm off what it was reporting. This was sitting down, 30 min after a run.


tbf they have been saying they've been doing this since December, so we're only a few months in. And like most software it's an iceberg: 99% of the work is not observable by users, and in Spotify's case listeners are only one of presumably dozens of different user groups. For all we know they are shipping massive improvements to e.g. billing.


If it indicates, culturally in the current zeitgeist, that an AI wrote it, it becomes a bad structure.


I'd prefer we look to other trust metrics, rather than change for the sake of others' interpretation of who we are.


Other metrics:

Comment 1: 2026-02-18T23:45:12 1771458312 https://news.ycombinator.com/item?id=47067991

Comment 2: 2026-02-18T23:45:32 1771458332 https://news.ycombinator.com/item?id=47067994

20 seconds between comments.

--

or:

Comment 1: 2026-02-18T20:06:49 1771445209 https://news.ycombinator.com/item?id=47065649

Comment 2: 2026-02-18T20:07:12 1771445232 https://news.ycombinator.com/item?id=47065653

23 seconds between comments.

---

It's a bot.
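For what it's worth, the gaps can be checked directly from the Unix timestamps quoted above with plain shell arithmetic (nothing HN-specific assumed):

```shell
# Subtract the Unix timestamps quoted above to get the gap in seconds
echo $(( 1771458332 - 1771458312 ))   # first pair: prints 20
echo $(( 1771445232 - 1771445209 ))   # second pair: prints 23
```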


Agreed, and we're going to see this everywhere that AI can touch. Our filter functions for books, video, music, etc. are all now broken. And worst of all, that breaking coincides with an avalanche of slop, making detection even harder.

There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.

It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.


My prediction is that reputation will be increasingly important, certain credentials and institutions will have tremendous value and influence. Normal people will have a hard time breaking out of their community, and success will look like acquiring the right credentials to appear in the trusted places.


That's been the trajectory for at least the last 100 years, an endless procession of certifications. Just like you can no longer get a decent-paying blue collar job without at least an HS diploma or equivalent, the days of working in tech without a university education are drying up and have been doing so for a while now.


The recent past was a nice respite from a strict caste system, but I guess we’re going back.


I think the recent past was a respite in very specific contexts like software maybe. Others, like most blue collar jobs, were always more of an apprentice system. And, still others, like many branches of engineering, largely required degrees.


This isn't new; it's been happening for decades.


Not new, no. But there will be more of it.


Maybe my expensive university degree was worth it after all


I have a sci-fi series I've followed religiously for probably 10 years now. It's called the 'Undying Mercenaries' series. The author is prolific, like he's been putting out a book in this series every 6 months since 2011. I'm sure he has used ghost writers in the past, but the books were always generally a good time.

Last year though I purchased the next book in the series and I am 99% sure it was AI generated. None of the characters behaved consistently, there was a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.

I was so shocked and disappointed that I paid good money for some AI slop that I've stopped following the author entirely. It was a real eye-opener for me. I used to enjoy just taking a chance on a new book, because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.


Yes, I have not bought a few books after reading their free chapters and getting suspicious.

Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.


When I do YouTube searches I tend to limit the search to videos prior to 2022 for this reason.


If you've some time to burn, write the author and/or his publisher and let them know that the guy's new ghostwriter sucks shit. If this is very seriously making you consider not picking up the next book in the series, be sure to mention that.

If folks just stop purchasing the new books, they can imagine a reason for the lost sales that's convenient for them, but if folks tell them why they stopped purchasing, there's a lot less room for that kind of nonsense.


That’s a good idea actually. It’s easy to forget this kind of thing is an option.


People will build AI 'quality detectors' to sort and filter the slop. The problem, of course, is that they won't work very well and will drown out all the human channels that are trying to curate various genres. I'm not optimistic: I expect things to turn into a grey sludge of similar mediocre material everywhere.


Is there a way to have a social media platform with hand-written letters, sent with ravens? That's AI proofed... for a while at least!


I worry that the focus on AI proofing will lead to a deanonymization of the internet. If we force every interaction to be associated with a real world id, we can kill a lot of the bots.


> deanonymization of the internet

This is going to happen anyway because 'the powerful' want to track what everyone does and says. AI is going to accelerate this because now they have a much more efficient means to filter and identify the people that are doing and saying things that the powerful don't like. The powerful will also be able to get real world ID credentials for their bots if they wanted or needed them, so this will not stop the problem of bots.


Exactly, and we will have those who will "game" the "detectors" like they already "game" the social media "algorithms" :\


I believe that a history of written work verified by stylometry will be a viable reputation system.


> In my mind, besides the self declared objectives, frameworks solve three problems .. “Simplification” .. Automation .. Labour cost.

I think you are missing Consistency, unless you don't count frameworks that you write yourself as frameworks? There are 100 different ways of solving the same problem, and using a framework, off the shelf or homemade, creates consistency in the way problems are solved.

This seems even more important with AI, since you lose context on each task, so you need it to live within guardrails and best practices or it will make spaghetti.


Blockchain as a vehicle for immutable data has passed. Crypto has given up pretending it's anything other than a financial vehicle for gambling.

Also, the recruitment attempts I've gotten from crypto have completely disappeared compared to the peak (it's all AI startups now).


You should still use swap. It's not "2x RAM" as advice anymore, and hasn't been for years: https://chrisdown.name/2018/01/02/in-defence-of-swap.html

tl;dr: give it 4-8GB and forget about it.
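As a sketch, the "give it 4-8GB and forget about it" approach on a typical Linux box might look like the following (the /swapfile path and 8G size are illustrative choices, not anything the linked article mandates; all commands need root):

```shell
# Create an 8 GiB swap file and enable it (illustrative size/path)
fallocate -l 8G /swapfile   # some filesystems need dd instead of fallocate
chmod 600 /swapfile         # swap files must not be world-readable
mkswap /swapfile            # write swap metadata
swapon /swapfile            # start using it immediately
# Make it survive reboots:
echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```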


I've heard "square root of physical memory" as a heuristic, although in practice I use less than this with some of my larger systems.


The proper rule of thumb is to make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens.


That's not so much a rule of thumb as an assessment you can only make after thorough experimentation or careful analysis.


It doesn't take that much experimentation, though. Either set up not enough swap and keep increasing it by a little bit until you stop needing to increase it, or set up too much, and monitor your max use for a while (days/weeks), and then decrease it to a little more than the max you used.
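If you take the "set up too much and monitor" route, peak swap use is easy to watch with standard Linux tools (nothing here is distro-specific):

```shell
# SwapTotal vs SwapFree shows how much swap is actually in use
grep -E '^Swap(Total|Free)' /proc/meminfo

# Human-readable snapshot of RAM and swap usage
free -h

# si/so columns show pages swapped in/out per second (two 1s samples)
vmstat 1 2
```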


I went with "set up 0 swap" and then never needed to increase it. I built my PC in 2023, when RAM prices were still reasonable, stuck 128GiB of ECC DDR5 in, and haven't run into any need for swap. Start with 0, turn on zswap, and if you don't have enough RAM then make a swap file & set it up as backing for zswap.
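Checking and toggling zswap at runtime is a one-liner sketch (these are the standard upstream module parameter paths; persisting the setting usually means adding zswap.enabled=1 to the kernel command line):

```shell
# See whether zswap is currently enabled (prints Y or N)
cat /sys/module/zswap/parameters/enabled

# Turn it on for this boot (root required); zswap compresses pages in RAM
# and only spills to a backing swap device when its pool fills
echo 1 > /sys/module/zswap/parameters/enabled
```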


You don't need "thorough experimentation or careful analysis". Just keep free swap space below a few hundred megabytes but above zero.


"Keep free swap space below a few hundred megabytes but above zero" is a good example of a rule of thumb.

"Make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens" is not.


You need to take every comment about AI and mentally put a little bracketed note beside each one noting technical competence.

AI is basically a software development eternal September: it is by definition allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!

The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.

Like on the topic of this article[0], it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be[1], a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company that proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.

But here we are :-)

[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.

[1] while LLMs are the underlying technology https://simonwillison.net/tags/lethal-trifecta/


Presumably the Epstein files, but I'm not on twitter so not sure



Yet somehow there is always a version of the same thread that's not mangled https://www.jmail.world/thread/EFTA02512795?view=inbox

Can only assume DOJ overpaid the law firms like 5x by not specifying deliveries need to be deduplicated first.


Huh, Noam Chomsky, nice one!


Ooh, that reason. Sorry for having been dense. Thanks!


Jeff Epstein? The New York financier?


It's not clear why this is being upvoted.

There is no sample chapter to check, the two authors own the website and appear to be just some developers who know Go, and frankly the cover looks AI-generated.

As a Go developer, why am I supposed to care about this?

