When do you need to spellcheck or polish an HN comment?
I've never, ever, ever ever ever, seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it.
Extend "spellcheck" to asking questions like "does it meet HN rules?" or "how can I improve my writing?" etc. Though these are the kinds of questions that do, at the very least, still meet the spirit of the rule, I suppose.
> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?
I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who could struggle to determine whether a given comment fits or not.
> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?
Lots of people break HN guidelines. I see it virtually every day.
> And why would you want to "improve your writing" for an HN comment?
Some people like to write well regardless of the medium. Why is that a problem for you?
> I think people here value raw authenticity more than polished writing.
Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.
Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will be absorbed by the recipient as you intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.
> Lots of people break HN guidelines. I see it virtually every day.
Yes, and AI won't help here. People will use AI to better break the guidelines.
> Go and study writing and psychology
Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't tell me to go study anything; you know what they say about ASSuming.
> Some people like to write well regardless of the medium. Why is that a problem for you?
HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.
> For anything of value, it's rare that your first attempt reflects what you meant to say.
You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.
Depends on how you use the AI. If you use it a bit like you'd ask a human to proofread your work, AI can actually be quite helpful.
The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!).
I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually go through and get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well?
> To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.
> I don't think that's what this new HN guideline is against either.
This is actually how many commenters here are interpreting it, though - and that's what I'm pushing back against. They are actively advocating against using LLMs this way.
I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parents to the root, and get feedback. I look for specific things (Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted a comment that I'm replying to? Is my comment confusing? etc). Adding things like (Am I violating an HN guideline?) are fair game.
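The workflow described above (bundle the thread's ancestor comments with the draft, then ask only for targeted feedback) can be sketched as a prompt-assembly step. This is just a hedged sketch: the function and question list are my own illustration, and the actual LLM call is deliberately left out.

```python
# Sketch of the review workflow described above: bundle the comment's
# ancestor chain and the draft into one prompt, with specific review
# questions, before sending it to whatever LLM you use.
REVIEW_QUESTIONS = [
    "Am I being too argumentative?",
    "Am I invoking a logical fallacy?",
    "Did I misinterpret the comment I'm replying to?",
    "Is my comment confusing?",
    "Am I violating an HN guideline?",
]

def build_review_prompt(thread_parents, draft):
    """Assemble thread context + draft + questions into one review prompt."""
    context = "\n\n".join(
        f"[parent {i + 1}]\n{p}" for i, p in enumerate(thread_parents)
    )
    questions = "\n".join(f"- {q}" for q in REVIEW_QUESTIONS)
    return (
        "Here is an HN thread, root comment first:\n\n"
        f"{context}\n\n"
        f"Here is my draft reply:\n\n{draft}\n\n"
        "Give feedback only on these points; do not rewrite the comment:\n"
        f"{questions}"
    )
```

The key design point is the last instruction: asking for feedback rather than a rewrite is what keeps the comment in your own voice.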
Earlier today I wrote a lot of comments without using the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. I reposted my draft to the LLM and it alerted me to my problematic comment. Had I used it originally, I would have saved people a lot of time.
Incidentally, since I started doing this (a few months ago), I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good.
The problem is that there's a vast range between “using AI to research/hone your arguments” and “AI writing your comments for you”, and between the rule itself and dang's various remarks on it, where exactly the rule draws the line is about as clear as mud.
> Yes, and AI won't help here. People will use AI to better break the guidelines.
AI is a general purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.
> HN is more like talking than writing.
Says you. Many disagree.
> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.
Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.
> Imagine if your friend AI-edited their speech in real-time as they talked to you.
When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.
> I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.
I don't know how comparatively challenging, I only know your use case is now (fortunately!) against HN rules.
> Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.
It's not false. It's one of the major reasons people have come to dislike AI written comments and articles. It all ends up sounding the same.
> When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.
In real life? Sounds like a fucking dystopia. But everyone is free to choose the hell they want to live in.
People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them.
I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors.
And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it.
Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback.
> Spellcheckers exist, you don't need an AI to change your voice.
How is using an AI to spell check changing my voice?
Yes, thank you - I know spellcheckers exist, as my comment clearly states. The amusing thing is that an LLM that had access to the thread would have alerted you to a basic error you're making.
> Also, if you have standards, you can always train yourself to spell better!
"You can always ..." is not an argument against alternatives.
Calm down. You're getting defensive, but it's not warranted. I'm not attacking you.
> The amusing thing is that an LLM who had access to the thread would have alerted you to a basic error you're making.
I didn't make the "basic error" of assuming you didn't know spellcheckers existed. I was stressing that since spellcheckers already exist, you don't need an AI assisting your comment-writing. More basic, non-style-altering alternatives exist and are better.
> "You can always ..." is not an argument against alternatives.
The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.
Alternatively, if you're lazy then your standards aren't too high.
And yes, this is an argument against the alternative you're suggesting.
> The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.
It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance. I use code formatters not because I'm too lazy to indent code myself, but because it helps guarantee that it's formatted consistently. I use a stud finder when mounting things to walls not because I'm too lazy to do the “knock on the wall” trick, but because the stud finder is more precise and reliable at it.
I don't use AI to edit my comments, but if I did, it would be not because I'm too lazy to check for all the things I want to avoid putting in my comments, but as an extra layer of assurance on top of what I've already trained myself to do.
> It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance
But that's not something anybody wants of you in an informal context such as this (HN). It will flatten your voice and make you sound like a drone. We value a human voice.
Code is different. Outside of hobbies, code is not a form of self-expression. There's a reason why following your company's coding styles & practices is valued in software engineering. Companies value coders being interchangeable with each other; they do not want a "unique voice". I think it's completely unrelated to what we're discussing here.
“We” value mutual intelligibility. The manic ravings and rantings of a lunatic are also a “human voice”, but that doesn't necessarily mean they're of particular value.
> Code is different. Outside of hobbies, code is not a form of self-expression.
For the vast majority of people here, commenting on Hacker News is also a hobby. The comparison to code formatting is more relevant than you think.
> What are we even debating, then?
Just because I don't feel the need to use AI to edit my comments doesn't mean that's true of everyone. Seems pretty selfish of me to insist that “I don't need it therefore you shouldn't have it”.
I think that people subconsciously perceive grammatically correct and stylistically appropriate writing as more authoritative, and perceive the author as a smarter and/or better-educated person.
At least that was the case before LLMs became a thing, now I'm not sure anymore.
Obvious spelling mistakes are usually ignored, but there are certain types of writing mistakes that really trigger the type of people that frequent HN.
For example, use "literally" for exaggeration rather than in its original meaning and you'll likely trigger somebody.
Here's a better example. Use "a few bad apples" wrong, and you'll likely get a response. A few bad apples will cause the entire barrel to spoil rapidly, so a few bad apples is a big deal. But it's often used to say the opposite, that a few bad apples isn't a big deal.
Wow, I guess I never thought about the "few bad apples" figure of speech! Interesting. But regardless, everyone understands what it means in common use, even if it's logically wrong, and I swear I've never seen anybody be a pedant about it here.
And really, it goes against the spirit of HN to hyperfocus on idioms instead of addressing the meat of the argument...
As a personal observation, if an LLM was figuratively looking over my shoulder and pointed out something like "well, ackshually, 'a few bad apples' means..." I would delete the fucker.
A few bad apples is a great idiom, though, that applies to so many places. For example, teachers often report that more than 2 troublemakers in a classroom ruin the entire class. A few bad cops destroy trust in all policemen, ruining the entire force, et cetera.
And more relevant to us, a couple bad lines of code sprinkled in the millions in your code base can ruin the entire thing....
I wish I had posted a better example, but I couldn't recall anything at the moment and still can't. It's usually a more interesting complaint than the old-man-yells-at-cloud complaints about the usage of the word "literally".
Would you prefer to be corrected on some logical fallacy/mistake you made in your argument, by another human being (and yes, maybe get slightly upset about it, we're human beings after all), or have both sides present bot-mediated iron-clad comments, like operators sparring with robots?
I prefer the raw, flawed human version. Even if, yes, I make a silly, avoidable mistake, or get upset, or make you upset in the heat of the argument. Maybe when I cool down I will have learned something.
I don't want flawless robotic arguments. I want human beings. (Fuck, that last bit sounded like an AI-ism, but I promise it's me, a human!).