
I personally cycle accounts on this site for pseudo-privacy reasons. HN does not allow you to delete old comments, so the only way to maintain some semblance of control over my profile and privacy is to periodically switch to a new account. I've been doing this for years now. The only real downside is that a new account doesn't have the ability to downvote, which is super annoying but something I've learned to live with.

I'm not necessarily saying your idea is bad, just offering another perspective.


I also do this. Pretty much every time I move.

It's much more than a "tangential annoyance" and it adds a lot to the conversation--among other things, it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.

Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication. Farming any of these steps out to an LLM completely breaks down the social contract involved in participating in an online forum like this. What's the point?

It's the exact same effect that's playing out in many other areas where LLMs are encroaching: bypassing the "human effort" step has negative side effects that people who are only looking at the output are ignoring.

I actually find your opinion so infuriating that it's taking all my composure not to reply with something nastier. If you guys want to spend your time reading shitty LLM spam posts with shitty LLM comments, why don't you find another site to do it on instead of destroying this one?


To provide a heads-up to others who feel similarly about whether something is worth spending time on: there isn't a problem with speculating that something was produced by AI if there are indicators of insufficient human authorship, but that's a big if. If incorrect, such comments themselves become noise.

In its worst form, which I've now seen many times in other communities, users claim submissions are AI-generated when they provably are not, merely to dismiss points of view the poster disagrees with by rallying knee-jerk voters who have a disdain for generative AI. I've also seen it from users who, I suspect, feel intimidated by artwork from established traditional artists.

Thankfully on HN it hasn't reached that level, but I have seen some here, for instance, still treat the use of em dashes with no surrounding spaces as definitive proof, pointing to a style guide without realizing that other established style guides have always said to omit the spaces (e.g., the Chicago Manual of Style). This just leads to falsely confident assessments and more unnecessary comment chains responding to them.
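To make the point concrete, here's a hypothetical sketch of that flawed heuristic (the function name and sample text are mine, purely for illustration). A naive check for unspaced em dashes flags perfectly ordinary Chicago-style human prose, which sets em dashes closed up:

```python
import re

# Naive "AI tell" heuristic: flag any text containing an em dash
# (U+2014) with no surrounding spaces. This is a sketch of the
# flawed test being criticized, not a real detector.
def flags_unspaced_em_dash(text: str) -> bool:
    return re.search(r"\S\u2014\S", text) is not None

# Chicago Manual of Style sets em dashes closed up, so a human
# following that guide gets "flagged" just the same.
chicago_style = "The results\u2014all of them\u2014were inconclusive."
print(flags_unspaced_em_dash(chicago_style))  # True: a false positive
```

The false positive is the whole problem: the heuristic can't distinguish an LLM from a writer who simply follows a different style guide.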

What one hopes for with curated communities is that people exercise discriminating taste at the submission and voting level. In my own case, I'm looking for an experience shaped by people who have seen a lot, find only particular things compelling, and are eager to share them. That's in contrast to submissions that reach the front page, say popular programming-language docs, that just provide another basis for rehashed discussion (and, cynically, because the poster knows such generalized submissions do this and grow karma).


> it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.

It is welcome though. Being on the front page regularly is evidence that people enjoy it or find it informative.

You may feel that others shouldn't be ALLOWED to enjoy it, but that's just your opinion and is almost always tangential to the actual topic.

Worse, you seem to believe that it needs to be labeled to help you identify it. Why? If it's good enough that you need help to spot it, then it's obviously of sufficiently high quality.


> Being on the front page regularly is evidence that people enjoy it or find it informative.

What makes you think that it's people who get it to the front page anymore? Or that most people aren't simply fooled by technology designed to mimic humans?

> Worse, you seem to believe that it needs to be labeled to help you identify it. Why?

Why not? Would adding a label and providing filtering capabilities hurt anyone else's experience?

Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans.


> Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans.

It's OK to have political opinions, even ones that I disagree with. It's not OK to ruin every unrelated conversation ranting about them. Some folks around here have turned into that one uncle nobody likes inviting to dinner anymore.

If a label would stop that I might be in favor of it. However, I'm certain it would instead be used to remove otherwise high quality content and ultimately reduce the utility of this place.


I agree with you, but...

> Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication.

Not anymore. Bots are now the majority of producers and consumers of all content on the internet. The social contract you mention has been broken for years, and this new technology has further cemented that.

Those of us who value communication with humans will have to find other platforms where content authorship is strictly regulated, or, at the very least, where tools are provided to somewhat reliably filter out machine-generated content. Or retreat from public spaces altogether.

FWIW I have very little hope that this issue will be addressed on HN, considering [1].

[1]: https://www.ycombinator.com/companies/industry/ai


It's in a lot of people's interest to keep platforms like HN free of LLM spam, frankly. It's in our interest as people who want to keep our discussion site for actual human discussion (though from the other comments in this thread, this sentiment isn't universally shared, god knows why). It's also in the interest of AI companies since if they destroy internet spaces like this they lose valuable future training data. So I'm (perhaps foolishly) optimistic--or at least not completely pessimistic--that there's hope yet for us.

Incidentally, I foresee similar issues to this training-data pollution arising with LLM coding taking over software engineering, which it inevitably is going to continue to do, at least in the short term. If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc.) that LLMs are making such good use of today? It feels to me like we risk technological stagnation as our collective skills atrophy and the market value of those skills plummets. Kind of like airline pilots forgetting how to troubleshoot or handle edge cases because they rely on autopilot all the time.


Like you say, some people are interested in keeping the discussion for humans only. Although we can't really know whether any opinion expressed here is coming from a human or not, including this one.

As for "AI" companies, their only interest is increasing their valuation. Historically speaking, most companies prioritize short-term profits, but during a bull market the incentives are even more skewed towards it. So poisoning the well of training data is seen as a future problem for someone else to figure out, or not. In the meantime, carpe pecuniam.

> If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc) that LLMs are making such good use of today?

LLMs, of course. :) I don't think the people building these tools have given these topics any serious thought. Whatever concerns they claim to have, regarding safety and otherwise, are merely performative.


Hey, I'm not a fan of LLM slop articles and blogspam either and if I could hold back the tide, I'd try to. But I'm just saying that pointing it out each and every time is just going to become its own form of spam. We're quickly entering a world where 99+% of what is written online, be it blogs, amateur news, or actual professional journalism, is LLM generated. You hate it, I hate it, but it's coming. The state of journalism is already in shambles and line must go up, so "everything written by AI" is sadly inevitable. Posting every time to remind people of that? I mean by the end of 2026 you might as well have a bot commenting on every article that it's probably LLM generated. I argue it adds no signal to the conversation.

I still think it has strong normative value. Maybe at some point when norms have become firmly established these comments will be pointless and spammy but I don't think we're anywhere close to that point yet.

A lot of blogging is essentially self-expression and that stuff won't be taken over by LLMs (it defeats the whole point). Other blogging is done with some kind of sales/promotional/brand purpose and the extent to which LLMs will dominate this will depend on how we as a society react to it (see the AI art battles) since if people react negatively to it it becomes counterproductive.


Perhaps it would be better to have comments that praise apparently human-written text?

I understand where you're coming from. I've been posting complaints about LLM-written articles almost as long as I've been here. (My analysis is definitely more complex than a search for blacklisted Unicode characters or words.)

But I've let up on that, partly because I agree the guideline is meant to encompass that kind of criticism (same with my comments about initial page content not rendering without JavaScript, honestly), but largely because it just seems futile. It's better material for a blog post than HN comments (and would be less repetitive).


Yes, that's the submission we're commenting on ;)

Haha. I meant an Android application. The website doesn't let you submit. The app makes it easy to submit.

Yeah, good luck with that at current RAM prices though. DDR5 RDIMMs are going for $20/GB+ right now, which means 1 TB is $20k+, and that's with fairly conservative pricing too.

I've been looking at building a high-memory workstation recently, but the RAM prices are just prohibitive. The best option at the moment for 1 TB+ seems to be going back a couple of generations and buying DDR4; you can get 1 TB at under $5/GB right now. But obviously you're giving up some performance in the process.
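The arithmetic behind those figures is straightforward; here's a quick sketch using the rough per-GB prices quoted above (illustrative figures only, not live market quotes):

```python
# Rough cost comparison for 1 TB of RAM at the per-GB prices
# quoted in the thread (illustrative, not live pricing).
GB_PER_TB = 1024

def cost_per_tb(price_per_gb: float) -> float:
    """Cost of 1 TB of RAM at a given $/GB price."""
    return price_per_gb * GB_PER_TB

ddr5 = cost_per_tb(20.0)  # DDR5 RDIMMs at ~$20/GB
ddr4 = cost_per_tb(5.0)   # older-generation DDR4 at ~$5/GB
print(f"DDR5: ${ddr5:,.0f}, DDR4: ${ddr4:,.0f}")
# DDR5: $20,480, DDR4: $5,120
```

So dropping back to DDR4 cuts the memory bill roughly fourfold, at the cost of lower bandwidth and older platform features.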


Yup, even before the RAM shortage we were paying $60k-$80k for 1U servers with 768 GB to 1.5 TB of RAM and 48-128 cores.
