Hacker News

That doesn't really follow from the linked research (which is interesting, though).


> > Chomsky said that LLMs are statistical regurgitators which means LLMs can never actually reason

Othello-GPT managed to develop an internal model of the board that actually works; it doesn't just regurgitate. Hence, the claim is wrong.


Regurgitators can't have internal representations? Sometimes the best way to regurgitate is to learn an internal representation. That doesn't mean it suddenly stopped being a statistical model.
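For context: the Othello-GPT result rests on probing, i.e. training a small classifier to read board state out of the model's hidden activations. Here is a minimal, self-contained sketch of that idea using synthetic activations (numpy only; the dimension, data, and "board feature" direction are all made up for illustration — the real work probes actual GPT layer activations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden states: if some direction in activation
# space encodes "this cell is occupied", a linear probe can recover it.
d = 16                        # hypothetical hidden dimension (assumption)
w_true = rng.normal(size=d)   # hypothetical "board feature" direction
X = rng.normal(size=(500, d))           # fake hidden states
y = (X @ w_true > 0).astype(int)        # label: is the cell occupied?

# Train a logistic-regression probe with plain gradient descent.
w = np.zeros(d)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)       # gradient step

acc = ((X @ w > 0).astype(int) == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy is the evidence cited for an "internal model": the board state is linearly decodable from the activations. Whether that counts as more than sophisticated statistics is exactly the debate in this thread.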


IMO this is an incorrect and unrigorous understanding of what "internal model" means, which is why there is still a valid scientific debate about this issue.



