So you don't think a 50T-parameter neural network can encode the logic for adding two n-bit integers, for reasonably sized n? That would be pretty sad.
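For what it's worth, the existence part is a textbook construction: a ripple-carry adder built out of threshold units. A minimal NumPy sketch (function names here are just illustrative, not from any paper):

    import numpy as np

    def step(x):
        # Heaviside threshold activation, the classic perceptron unit
        return (x > 0).astype(int)

    def full_adder(a, b, c_in):
        # carry-out is a majority gate: fires when at least 2 of 3 inputs are 1
        c_out = step(a + b + c_in - 1.5)
        # sum bit is XOR(a, b, c_in); once the carry is known, one more unit suffices
        s = step(a + b + c_in - 2 * c_out - 0.5)
        return s, c_out

    def add_nbit(x_bits, y_bits):
        # ripple-carry: n chained full adders, a fixed handful of units per bit
        carry = np.array(0)
        out = []
        for a, b in zip(x_bits, y_bits):  # least significant bit first
            s, carry = full_adder(np.array(a), np.array(b), carry)
            out.append(int(s))
        out.append(int(carry))
        return out

    print(add_nbit([0, 1, 1], [1, 1, 0]))  # 6 + 3 -> [1, 0, 0, 1], i.e. 9

So exact n-bit addition costs O(n) threshold units; whether a trained transformer actually learns this circuit is a separate question from whether it fits in the parameter budget.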
You are wrong, especially given that we are talking about models with 50T parameters.
Can they do arbitrary computations on arbitrarily long numbers? Nope. But that's not remotely the same statement, and in those cases they can trivially call out to tools.
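To make "call out to tools" concrete, here's a hedged sketch of the pattern; the JSON shape, the TOOLS table, and handle are all hypothetical, not any particular vendor's API:

    import json

    # Hypothetical harness loop: the model emits a structured call instead of
    # doing long arithmetic in its weights; the runtime executes the call and
    # feeds the result back into the context.
    TOOLS = {"add": lambda a, b: a + b}

    def handle(model_output: str) -> str:
        call = json.loads(model_output)              # e.g. '{"tool": "add", "args": [...]}'
        result = TOOLS[call["tool"]](*call["args"])  # exact, arbitrary precision
        return str(result)

    print(handle('{"tool": "add", "args": [123456789012345678901234567890, 98765432109876543210]}'))

The point is that the model only has to learn to emit the call, which is easy; the exact arithmetic happens outside the weights.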
Third things can exist. In other words, you're setting up a false dichotomy between "human computation" and "computer computation" and implying that LLMs must be one or the other. A pithy gotcha comment, no doubt.
Edit: the implication comes from demanding that the OP's definition be rigorous enough to cover all models of "computation", and then concluding that, because it isn't, LLMs must be more like humans than computers.
After dismissing it for a long time, I have come around to the philosophical zombie argument. I do not believe that LLMs are conscious, but I also no longer believe that consciousness is a prerequisite for intelligence. I think at this point it is hard to deny that LLMs possess some form of intelligence (although not necessarily human-like). I think P-zombie is a fitting description.
I don't think P-zombies can exist. There must be some perceptible difference between an intelligence w/ consciousness and one without. The only way there wouldn't be a difference is if we are mistaken about the consciousness (either both have it or neither do).
> There must be some perceptible difference between an intelligence w/ consciousness and one without
I think there are differences, and I think we can make good guesses, but I'm not sure we can reliably distinguish a P-zombie from a normal human by their behaviour with 100% accuracy.
https://www.youtube.com/watch?v=YEUclZdj_Sc