
If you’ve used LLMs, you’d know that they’re correct far more often than not.


Even a small percentage of incorrectness quickly produces compounding effects if you treat an LLM as an information source. True and false statements are made with equal confidence, because the LLM can’t distinguish true from false.
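A rough back-of-the-envelope sketch of the compounding (the 2% per-statement error rate and the independence assumption are illustrative, not measured):

    # Chance that a chain of n statements contains no error,
    # assuming each is independently correct 98% of the time.
    p = 0.98
    for n in (1, 10, 50, 100):
        print(n, round(p ** n, 3))
    # -> 1: 0.98 | 10: 0.817 | 50: 0.364 | 100: 0.133

At a mere 2% per-statement error rate, a 50-step chain of claims is wrong somewhere more often than not.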


That’s the same as in normal journalism, so I’m not sure why you single out LLMs as particularly bad.



