I've found that many people don't have a radar for this. They may know about "delve", em dashes, "tapestry", "multifaceted", or "not just X but Y", and if those aren't there they don't see it.
Even a small percentage of incorrectness quickly produces compounding effects if you view LLMs as an information source. True and false statements are made with equal confidence, because the LLM can't distinguish true from false.
They said each time they want something to be easier, not each time they do something. And they didn't mention it has to be one-shot. You might have read too quickly and responded to something that didn't actually exist.
It’s good to keep your skepticism but at some point you have to be able to recognize normal human usage of these conventions.
And as we all read more AI content and talk to chatbots, that will influence how we do our own writing as well; humans will start to sound more like LLMs.
It's like that, but if the blindfolded free-throw shooter were also the scorekeeper and the referee, and told you with complete confidence that the ball went in while you looked away for a second.