>As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."...
That pretty much explains the AI Hysteria that we observe today.
>It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'.
That pretty much explains the "it's not real AI" hysteria that we observe today.
And what is the "AI effect", really? It's a coping mechanism. A way for silly humans to keep pretending that they are unique and special - the only thing in the whole world that can be truly intelligent. Rejecting an ever-growing pile of evidence pointing otherwise.
>there was a chorus of critics to say, 'that's not thinking'.
And they were always right... and the other guys always wrong...
See, the question is not whether something is "real AI". The question is: what can this thing realistically achieve?
The "AI is here" crowd is always wrong because they give a much too, or should I say "delusionally", optimistic answer to that question. I think this happens because they don't care to understand how it works, and just go by its behavior (which is often cherry-picked, optimized and hyped to the limit to rake in maximum investment).
Anyone who says "I understand how it works" is completely full of shit.
Modern production-grade LLMs are entangled messes of neural connectivity, produced by inhuman optimization pressures more than by intelligent design. Understanding the general shape of the transformer architecture does NOT automatically let one understand a modern 1T-parameter LLM built on top of it.
We can't predict the capabilities of an AI just by looking at the architecture and the weights - scaling laws only go so far. That's why we use evals. "Just go by behavior" is the industry standard of AI evaluation, and for a damn good reason. Mechanistic interpretability is in the gutter, and every little glimpse of insight we get from it is an uphill fight. We don't understand AI. We can only observe it.
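For what "just go by behavior" means in practice, here's a minimal sketch of a behavioral eval harness. Everything in it (the task list, the grading checks, the ask_model stub) is a hypothetical stand-in for illustration, not any lab's actual harness:

    # Minimal behavioral eval loop: score the model purely on its outputs.
    # TASKS, ask_model, and the checks are all illustrative stand-ins.
    TASKS = [
        {"prompt": "What is 17 * 24?", "check": lambda out: "408" in out},
        {"prompt": "Reverse the string 'eliza'.", "check": lambda out: "azile" in out},
    ]

    def ask_model(prompt: str) -> str:
        # Stand-in for a real model call; wire an API client in here.
        return "408" if "17 * 24" in prompt else "azile"

    def run_eval() -> float:
        passed = sum(1 for t in TASKS if t["check"](ask_model(t["prompt"])))
        return passed / len(TASKS)

    print(f"pass rate: {run_eval():.0%}")  # behavior is the only signal we score

No peeking at weights anywhere in that loop - which is exactly the point.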
"What can this thing realistically achieve?" Beat an average human on a good 90% of all tasks that were once thought to "require intelligence". Including tasks like NLP/NLU, tasks that were once nigh impossible for a machine because "they require context and understanding". Surely it was the other 10% that actually required "real intelligence", surely.
The gaps that remain are: online learning, spatial reasoning and manipulation, long-horizon tasks, and agentic behavior.
The fact that everything listed has mitigations (e.g. long context + in-context learning + agentic context management = dollar-store online learning; see the sketch below) or training-side improvements (multimodal training improves spatial reasoning, RLVR improves agentic behavior), and that performance on every metric rises release to release? That sure doesn't favor "those are fundamental limitations".
Doesn't guarantee that those will be solved in LLMs, no, but it goes to show that it's a possibility that cannot be dismissed. So far, the evidence looks more like "the limitations of LLMs are not fundamental" than "the current mainstream AI paradigm is fundamentally flawed and will run into a hard capability wall".
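To make the "dollar-store online learning" point concrete, here's a minimal sketch of agentic context management: instead of updating weights, the agent appends distilled lessons to a persistent store that gets re-injected into every prompt. The file name and helper functions are illustrative, not any real framework's API:

    import json
    from pathlib import Path

    MEMORY = Path("agent_memory.json")  # illustrative path, not a real framework

    def load_memory() -> list[str]:
        return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

    def remember(lesson: str) -> None:
        # "Learning" without touching weights: persist a lesson for future prompts.
        notes = load_memory()
        notes.append(lesson)
        MEMORY.write_text(json.dumps(notes, indent=2))

    def build_prompt(task: str) -> str:
        # In-context learning: past lessons ride along in the context window.
        notes = "\n".join(f"- {n}" for n in load_memory())
        return f"Lessons from previous attempts:\n{notes}\n\nTask: {task}"

    # After a failed attempt, the agent distills the failure and stores it:
    remember("Build script needs Python >= 3.11; check the version first.")
    print(build_prompt("Fix the failing CI job."))

Crude compared to actual weight updates, sure, but it's behaviorally indistinguishable from "learning" for many tasks.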
Frankly, I don't buy that LeCun has that much of value to say about modern AI. Certainly not enough to justify an hour-long podcast.
Don't get me wrong, he has some banger prior work, and the recent SIGReg did go into my toolbox of dirty ML tricks. But the JEPA line is rather disappointing overall, and his distaste for LLMs seems to be a product of his personal aesthetic preferences in research direction rather than of any fundamental limitation of transformers. There's a reason why he got booted out of Meta - and it's his failure to demonstrate results.
That talk of "true understanding" (define "true") that he's so fond of seems to be a flimsy cover for "I don't like the LLM direction, and that's all everyone wants to do these days". He kind of has to say "LLMs are fundamentally broken", because if they aren't - if better training is all it takes to fix them - then why the fuck would anyone invest money into his pet non-LLM research projects?
It is an uncharitable read, I admit. But I have very little charity left for anyone who says "LLMs are useless" in the year 2026. Come on. Look outside. Get a reality check.
My opinions on the matter do not come from any experts; they come from my own reasoning. I hadn't seen that video before I came across that comment.
>"LLMs are useless" in year 2026
Literally no one is saying this. Those words are just put into the mouths of people who do not share the delusional wishful thinking of the "true believers" of LLM AI.
To be honest, I would prefer "I over-index on experts who were top of the line in the past but didn't stay that way" over "my bad takes are entirely my own and I am proud of it". The former has so much more room for improvement.
>Literally no one is saying this.
Did you not just advise me to go watch a podcast full of "LLMs are literally incapable of inventing new things" and "LLMs are literally incapable of solving new problems"?
I did skim the transcript. There are some very bold claims made there - especially when LLMs out there are churning out novel math and coming up with novel optimizations.
No, not reliably. But the bar we hold human intelligence to isn't that high either.
>my bad takes are entirely my own and I am proud of it"
Sure, but the same could apply to you as well.
>"LLMs are literally incapable of inventing new things" and "LLMs are literally incapable of solving new problems"?
You keep proving that you have trouble resolving closely related ideas. Those two things you mention do not imply that LLMs are "useless". They are a better search, and in software development they are useful for reviews (at least for a while). But it seems that people like you can only think in binary: either LLMs are god-like AI, or they are useless.
Mm... You seem to consider this to be some mystical entity, and I think that kind of delusional idea might be a good indication that you are experiencing the ELIZA effect...
>We don't understand AI. We can only observe it.
Lol what? Height of delusion!
> Beat an average human on a good 90% of all tasks that were once thought to "require intelligence".
This is done by mapping those tasks to some representation that a non-intelligent automation can process. That is essentially what part of unsupervised learning does.
ELIZA couldn't write working code from an English-language prompt though.
I think the "AI Hysteria" comes more from current LLMs being actually good at replacing a lot of activity that coders are used to doing regularly. I wonder what Weizenbaum would think of Claude or ChatGPT.
>ELIZA couldn't write working code from an English-language prompt though.
Yea, that is kind of the point. Even such a system could trick people into delusional thinking.
> actually good at replacing a lot of activity that coders are used to...
I think even that is unrealistic. But that is not what I was thinking of. I was thinking of when people say that current LLMs will go on improving and reach some kind of real, human-like intelligence. The ELIZA effect provides a perfect explanation for this.
It is very curious that this effect is the perfect thing for scamming investors: they are typically already bought into such claims, and under the ELIZA effect they will invest 10x or 100x more...
Doesn't the many-worlds interpretation pretty much answer how life originated?
I mean, even if the starting state required to bootstrap life has an impossibly low chance of arising at random, the many-worlds interpretation implies that there will be some worlds where it happened, and observation of life is only possible in such worlds...
Many-worlds is not really relevant here. You are just asking how the building blocks of life form in the universe and how they can reach a planet like ours.
That's not true. If life has the odds of one in a quadrillion of happening, and we're here to discuss it, then we're that one in a quadrillion. If we weren't, we wouldn't be alive. By definition, we were the lucky ones with the perfect conditions that resulted in us.
But I don't think we are "lucky", because we are part of the world, not something that was placed inside it by choice. It is like asking why the Nile is in Egypt and not in some other place. If the Nile were in some other place, it would not be the Nile... So does it make sense to say that the Nile is lucky to be in Egypt? No, I think it does not make sense...
Sorry, but nothing you have said here is true or makes sense. The "many worlds" are universes, not worlds within our universe. The many-worlds interpretation is one of several interpretations of quantum mechanics of the exact same evidence - one or another interpretation being "true" has no empirical implications. And it is an interpretation of quantum mechanics, which has nothing to do with the distribution of nucleotides. And it's incoherent to call an observed event "impossible". You seem to mean that you think it is highly unlikely, but you offer no reason to think so... nor for the bizarre claim that "Multi worlds is the only way". I suspect that you are mixing up a very confused understanding of many-worlds with some version of the anthropic principle. But the anthropic principle is an a posteriori explanation of an a priori unlikely occurrence; it's not a "way" for something to happen.
I won't comment further unless you offer a convincing proof of your assertion.
Ok, what is the mystery in the origin of life? As I understand it, the question is how all the required molecules came together in the right configuration spontaneously. Is that the question we are trying to answer?
If this is the question, I think the many-worlds interpretation provides the answer, because it says that there are some worlds where any given random event will manifest.
So it follows that there are some worlds where this random event that we call the "origin of life" manifested, and it is just that we are part of one such world.
>many-worlds interpretation is one of several interpretations of quantum mechanics of the exact same evidence
I think we might look at it the other way around: the origin of life, as well as the fact that we seem to be alone in the universe, is evidence for the MWI...
About the latter, I think we have an overwhelming chance of being alone: while it is true that there can be universes where random events have led to the origin of life in multiple places, the universes with only a single "origin of life" event will vastly outnumber them, so the chance of finding ourselves in a universe where life originated independently more than once is vanishingly small.
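A quick back-of-the-envelope check of that last claim, with purely illustrative numbers: if origin-of-life events per universe follow a Poisson distribution with a tiny rate λ, then among the universes that have at least one such event (the only ones with observers), almost all have exactly one.

    import math

    lam = 1e-6  # illustrative: expected origin-of-life events per universe

    p0 = math.exp(-lam)        # P(no origin event)
    p1 = lam * math.exp(-lam)  # P(exactly one origin event)
    p_ge1 = 1 - p0             # P(at least one, i.e. observers exist)

    # Conditioned on observers existing, the share of single-origin universes:
    print(p1 / p_ge1)  # ~0.9999995 -> single-origin universes dominate

The conditional share works out to about 1 - λ/2, so the rarer the event, the more lopsided it gets.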
AI is great for searching, I'll give you that. And that itself is a big deal. In software development, there is also real value in using AI for code reviews. But I am not sure how much it would be worth if you have to retrain a model on new information just to get better search results and code reviews...
Maybe that will be subsidized by all the people like you who want everything to be done by AI, so the rest of us can use it as a better search tool and for quick reviews... who knows!
But maybe there is also an upside. Maybe it would actually cause people to take notice of what is going on around them and actually start using their brains (to make predictions) instead of simply consuming 24x7 news.
So you are saying that if a business entity starts a pharma company that creates a drug for some kind of novel disease, but the disease does not currently exist, they will take steps to make an epidemic of it more likely?
Yes, because they would have to. Why start a pharma company for a disease that doesn’t exist? How else would you get people to invest in a company providing something people don’t want unless you have a plan to make them want it?
But then we should ban profit-motivated medical initiatives, because even though they could bring huge benefits from time to time (e.g. a new medical procedure or a drug to cure existing diseases), they also raise the chances of medical disasters by providing a constant incentive for such entities to make a disaster happen and profit from it...
Flying is safe, but I think it is not because of some rules/regulations or due to "science".
A plane falling out of the sky is a pretty big event and cannot be suppressed or silenced. It affects a large number of people at once. If planes start to fall out of the sky often, commercial aviation will come to a halt within a month. Given this eventuality, if you want to make money by flying people, there is no other way than to *do everything possible to make sure* planes don't fall from the sky.
If planes could fall out of the sky without everyone knowing about it and without it affecting the business (for example, imagine that when a plane crashes, instead of killing the passengers right away, they only die after a month or so, and it is hard to link the deaths to the flight they took a month before), then I bet flying would no longer be very safe, as companies would start cutting expenses on maintenance etc. and paying off regulators/inspectors...
A stock market crash is also a pretty big event that cannot be suppressed or silenced, but they still happen regularly. The sad truth is that people (and companies) are greedy and will gladly cut corners with safety if it means making more money. So regulations (and enforcement of those regulations) are needed to prevent a race to the bottom that will eventually lead to a crash. Coming back to aviation, you only have to look at countries like Nepal (https://kathmandupost.com/money/2025/11/10/nepali-sky-remain...) to see what happens when there are no regulations, or regulations are not enforced.
I believe it's less about politeness and more about pronouns. You used `who`, whereas I would use `what` in that sentence.
In my world view, an LLM is far closer to a fridge than to the androids of the movies, let alone to human beings. So being polite to it is about as pointless as greeting your fridge when you walk into the kitchen.
But I know that others feel differently, treating the ability to generate coherent responses as an indication of the "divine spark".
I get what you're saying, but I'm not talking about swearing at the model or anything, I'm only implying that investing energy in formulating a syntactically nice sentence doesn't or shouldn't bring any value, and that I don't care if I hurt the model's feelings (it doesn't have any).
Note, why would the author write "Email will arrive from a webhook, yes." instead of "yy webhook"? In the second case I wouldn't be impolite either; I might reply like that in an IM to a colleague I work with every day.
For the vast majority of people, using capital letters and saying please doesn't consume energy, it just is. There's a thousand things in your day that consume more energy like a shitty 9AM daily.
> investing energy in formulating a syntactically nice sentence
This seems to be completely subjective; I write syntactically/grammatically "nice" sentences to LLMs because that's how I write. I would have to "invest energy" to force myself to write in that supposedly "simpler" style.
It's just easier for me to write that way. In that specific sentence, I also kind of reaffirmed what was going on in my head and typed my thought process out loud. There's no deeper logic than that, it's just what's easier for me.
I confidently assume that the model has been trained on an ungodly amount of abbreviated text and "yy" has always meant "yeah".
> literate adults who can type reasonably well
For me the difference is around 20 wpm in writing speed between just writing out my stream of thoughts and caring about typos and capitalizing words - I find real value in this.
Which you seem to have exclusive access to, I suppose..