But you do need to understand that they generate lift, and you need to be able to describe something that generates lift mathematically. The Wright brothers wrote to the Smithsonian in 1899 and got back, among other things, workable equations for lift and drag.
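For reference (my paraphrase of the standard form, not anything quoted from the Smithsonian material), the equation they worked from was roughly

    L = k * S * V^2 * C_L

where k is Smeaton's coefficient of air pressure, S is wing area, V is airspeed, and C_L is a lift coefficient read off published tables; drag has the same shape with a drag coefficient instead. A handful of measurable quantities, and the equation tells you whether the thing goes up.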
I think people treat backpropagation as the metaphorical lift equation here, with all that's left being a “manufacturing” advancement (i.e., more compute and better techniques for using it). My feeling, admittedly on poor evidence, is that we're close to that, but we're definitely not there yet; if someone had the real equation, they would have published it. We cannot describe what is happening inside modern architectures as fully as the lift equation predicts fixed-wing flight, so progress is largely intuition plus trial and error, which is a slow, unreliable way to get anywhere.
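To be concrete about what that "lift equation" actually buys you, here is a minimal toy sketch (plain numpy, made-up XOR data, nothing to do with any real system): backprop plus a gradient step guarantees that the loss on the chosen objective goes down, and that is the whole statement.

    # Minimal sketch: one hidden layer trained with backprop on XOR.
    # Toy illustration only; weights, sizes, and learning rate are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
    lr = 0.5

    for step in range(2000):
        # forward pass
        h = np.tanh(X @ W1)                   # hidden activations
        p = 1 / (1 + np.exp(-(h @ W2)))       # sigmoid output
        loss = np.mean((p - y) ** 2)          # mean squared error

        # backward pass (chain rule, written out by hand)
        dp = 2 * (p - y) / len(X)
        dz2 = dp * p * (1 - p)                # through the sigmoid
        dW2 = h.T @ dz2
        dh = dz2 @ W2.T
        dz1 = dh * (1 - h ** 2)               # through the tanh
        dW1 = X.T @ dz1

        # gradient step: the only promise here is that the loss
        # on this particular task goes down locally, nothing more
        W1 -= lr * dW1
        W2 -= lr * dW2

        if step % 500 == 0:
            print(f"step {step:4d}  loss {loss:.4f}")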
My point is that while the brain and neurons are very complex and inherently confusing, there are billions of lifeforms that operate on this architecture and do not display sentience or intelligence.
Secondly, just because neurons are complex on a technical level does not mean they have to be complex on a logical level.
For example, if you look at a CPU's structure, at the low level you have quantum effects, tunneling, and all sorts of wild physics, but at the logical level you are dealing with very simple boolean logic.
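A toy illustration of that separation of levels (my own example, not anything from a real CPU design): a 1-bit full adder is just three boolean operations, and nothing about the physics underneath ever shows up in it.

    # A 1-bit full adder expressed purely at the logical level; the physics
    # underneath (tunneling, charge, whatever) never appears in the logic.
    def full_adder(a: bool, b: bool, carry_in: bool) -> tuple[bool, bool]:
        s = a ^ b ^ carry_in                        # sum bit
        carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
        return s, carry_out

    # Chain 8 of them to add two bytes; still nothing but boolean logic.
    def add8(x: int, y: int) -> int:
        carry, out = False, 0
        for i in range(8):
            bit, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
            out |= int(bit) << i
        return out

    print(add8(37, 91))  # 128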
I would not be surprised in the slightest if copying and reverse-engineering neurons per se turned out not to be a necessary or defining part of anything related to AGI.
Yeah, but we didn't need to fully understand how animal wings actually work; we just needed to understand what they do (generate lift). Similarly, I don't understand the focus in this conversation on fully understanding the protein interactions that make neurons work. We just need to understand what neurons do, and I thought what they do is actually pretty simple thanks to the "all or nothing" principle. https://en.wikipedia.org/wiki/All-or-none_law
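That abstraction is basically a threshold unit: sum the weighted inputs, compare against a threshold, fire or don't. A minimal sketch with made-up toy weights (nothing physiological about the numbers):

    # All-or-none abstraction of a neuron: weighted inputs are summed and the
    # cell either fires (1) or stays silent (0); the spike has no "partial"
    # amplitude. Weights and threshold here are arbitrary toy values.
    def fires(inputs, weights, threshold):
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    print(fires([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.6))  # 1 (fires)
    print(fires([1, 0, 0], [0.4, 0.9, 0.3], threshold=0.6))  # 0 (stays silent)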
That’s pretty far from “when you do this, you get the generalizable thought required for AGI.” The lift equation says “when you do this, this object moves upward against the air,” which was the goal of flight. For AGI, what we have is “when you do this, the loss goes down on this task,” and we are missing a great many pieces between that and the concept of AGI.
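To spell out the asymmetry in my own shorthand (modern lift equation on one side, a generic gradient step on the other):

    lift:     L = 1/2 * rho * V^2 * S * C_L,  and you climb when L > W
    training: theta <- theta - lr * grad(loss),  and the loss on this task goes down

Nothing on the second line says anything about generality, reasoning, or thought.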
People think the missing pieces might be in the other things we don’t understand about the brain. It makes sense: the brain does what we want, so the answer must be in there somehow. I agree we don’t need to understand it perfectly; it just seems like a good place to keep looking for those missing pieces.