Books and newspapers have had editors for centuries. It is just code review for the written word.
[It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30 year old technology there that predates LLMs -- how far down the Butlerian Jihad are we going with this?]
I don't personally use AI/LLMs for any informal writing here or on reddit, etc. But I think it is pretty weird to be overly concerned about people, particularly ESL speakers, who use tools to clean up their writing. The only thing I really care about is when someone posts LLM-regurgitated information on topics they personally don't know anything about. If the information is coming from the human and the machine is only tweaking style and tone to make it read better and fix the bugs in it, then I don't understand why you're telling me I need to care, or why you're gatekeeping it. It is also unlikely to be very detectable, and this thread seems to only serve a performative use for people to get offended about it.
Other tools to clean up writing are allowed. They did not tell you that you must care; you told them they must not. The submission's purpose was to tell you and others that LLM-generated tone is not more acceptable.
> HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them too precise.
The problem with “spirit-of-the-law” is that having rules be subject to discretion is a pretty clear avenue for discrimination and abuse. Not as big of a deal for an Internet forum as it would be for, say, a country's legal code and the enforcement thereof, but the lack of a clear standard for a rule makes that rule hard to follow and harder to enforce impartially.
The typical problem with trying to create clear standards with no spirit of the law is that the 1st, 2nd, etc. iterations of those standards never quite match the intentions, at least when dealing with something nuanced. It can get to the point that it takes more time and effort to follow the clear standards than to think through each case fresh. The rules can also eat up time and effort to maintain, and distract from the original purpose.
"Don't post generated comments or AI-edited comments."
What about non-native speakers? Can they not use translation software like google translate any more?
"Don't post generated comments or AI-edited comments, except for translating to english"
What about cases of disabilities?
"Don't post generated comments or AI-edited comments, except for translating to english and when used as assistive technologies."
Some translation tools and assistive technologies are still going to cause the same issues we have right now, so maybe limit which technologies may be used:
"Don't post generated comments or AI-edited comments, except for translating to english and when used as assistive technologies. Technologies x, y, z are not allowed a and b and similar can be used for translation c and d as assistive technologies"
But we do not want to spend time/effort on filtering technologies and/or people into the above categories.
In the long run we will likely come up with technologies that most everyone is satisfied with for different use cases: spelling/grammar, assistive, maybe even tone, and others.
In the meantime we cannot let the perfect be the enemy of the good. If there are clear standards that achieve the goals, great; if not, we have to do something until everything shakes out.
Nobody is going to stop using Grammarly extensions to post to HN, and nobody is going to be able to detect their usage.
This thread just lets a certain kind of people put on their best condescending hall-monitor voice and lecture other people about how they should behave.
And the rule is arguably less useful than speed limits and will be broken about as often (at least speed limits have a very real link to physical safety via kinetic energy).
> This thread just lets a certain kind of people put on their best condescending hall-monitor voice and lecture other people about how they should behave.
I think it is, at least mostly, about the blatant cases that are often already downvoted and flagged, and about making that official.
> And the rule is arguably less useful than speed limits and will be broken about as often (at least speed limits have a very real link to physical safety via kinetic energy).
I often see the rules in:
https://news.ycombinator.com/newsguidelines.html
broken, mostly in small ways, but I still think we are better off with them, or something similar, than with nothing.