The real question is how to define intelligence in a way that isn't artificially constrained to eliminate all possibilities except our own.

I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

It's this pervasive belief that underlies so much discussion around what it means to be intelligent. The null hypothesis goes out the window.

People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans.

If they do, they apply it in only the most restrictive way imaginable, some two-dimensional caricature of reality, rather than considering all the ways that humans try and fail in all things throughout their lifetimes in the process of learning and discovery.

There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical.


The ability to learn and infer without absorbing millions of books and all the text on the internet really does make us special. And only at 20 watts!

Last I checked, humans didn't pop into existence doing that. It happened after billions of years of brute-force, trial-and-error evolution. So well done for falling into the exact same trap the OP cautions against. Intelligence from scratch requires a mind-boggling amount of resources, and humans were no different.

To be fair, it is still pretty remarkable what the human brain does, especially in the early years - there is no text embedded in the brain, just a crazily efficient mechanism for learning hierarchical systems. As far as I know, AI cannot do anything similar to this - it generally relies on giga-scaling, or fine-tuning on tasks similar to those it already knows. Regardless of how this arose, or whether it's relevant to AGI, this is still a uniqueness of sorts.

Human babies "train" their brain on literally gigabytes of multi-modal data dumped on them through all their sensory organs every second.

In a very real sense, our magic superpower is that we "giga-scale" with such low resource consumption, especially considering how large (in terms of parameters) the brain is compared to even the most advanced models we have running on those thousands of GPUs today. But that's where all those millions of years of evolution pay off. Don't diss the wetware!


And then an 18-to-20-something-year training run is required for each individual instance.

I know, right, such a waste. Plus it's so random how they will turn out!

Any suggestions on how to reduce that waste?


Do you think evolutionary pressures are the best explanation for why humans were able to posit the Poincaré conjecture and solve it? While our mental architecture evolved over a very long time, we still learn from minuscule amounts of data compared to LLMs.

Yeah. What else would it be? A brain capable of doing that was clearly the result of evolutionary pressures.

But there is no evolutionary pressure for the Poincaré conjecture; we were never optimized for that in particular, unlike these kinds of LLMs.

We were optimized to rapidly adapt to changing environments by solving the problems that arise through tool-making and cooperation in complex multi-stage tasks (like say hunting that mammoth to make clothing out of it). It turns out that the cheapest evolutionary pathway to get there has some interesting emergent phenomena.

Of course it is evolution. What else could it be?

How is that relevant? The human brain is already there at the point of birth (or some time before that). We compare that with an LLM doing inference. The training part is irrelevant, the same way the human brain's evolution is.

We have a tremendous amount of raw information flowing through our brains 24/7 from before we are born: from the external world through all our senses, and from within our minds as they attempt to make sense of that information, make predictions, generally reason about our existence, hallucinate alternative realities, and so on.

If you were able to somehow capture, in full detail, all the information you've had access to by the age of, say, 25, it would likely dwarf the amount of information in millions of books by several orders of magnitude.

When you are 25 years old and are presented with a strange-looking ball and told to throw it into a strange-looking basket for the first time, you are relying on an unfathomable amount of information turned into knowledge, and on countless prior experiments you've accumulated and exercised to that point relating to the way your body and the world work.


Humans are "multi-modal". Sure, we get plenty of non-textual information, but LLMs were trained on basically every human-written word ever. They definitely see many orders of magnitude more language than any human ever has. And yet humans become fluent after only about three years.

If you treat the human brain as a model, and account for the full complexity of neurons (one neuron != one parameter!), it has several orders of magnitude more parameters than any LLM we've made to date, so this shouldn't come as a surprise.

What is surprising is that our brain, as complex as it is, can train so fast on such a meager energy budget.
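
As a rough sanity check on "several orders of magnitude", here's a minimal back-of-envelope sketch in Python. Every constant in it is an assumed, commonly cited ballpark (neuron count, synapses per neuron, a guessed per-synapse complexity multiplier, largest-LLM size), not a measurement:

    # Back-of-envelope: brain "parameters" vs. LLM parameters.
    # All constants below are assumed ballparks, not measurements.
    neurons = 86e9               # ~86 billion neurons, a commonly cited figure
    synapses_per_neuron = 5e3    # estimates typically run 1,000-10,000
    complexity_factor = 100      # assumption: one synapse != one parameter either;
                                 # per-synapse dynamics add some multiplier
    brain_params = neurons * synapses_per_neuron * complexity_factor

    llm_params = 2e12            # assumed upper bound for today's largest models

    print(f"brain: ~{brain_params:.0e} effective parameters")  # ~4e16
    print(f"ratio: ~{brain_params / llm_params:.0e}x")         # ~2e4

Note that the "several orders of magnitude" conclusion leans on the assumed complexity factor; with synapse count alone, the gap is closer to two orders of magnitude.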


You are right, but at the same time the human brain does way more stuff (muscle coordination, smell, touch sensing), and all of those take up at least some of the budget.

So it's an interesting question, but I'm not convinced it's only a scale issue. Finished models don't really learn the way humans do - we actually change the parameters "at runtime", updating the model itself, so learning isn't confined to the current context.


For sure, it seems like there's something there primed to pick up human language quickly, clearly evolutionarily driven.

Not necessarily so for the dynamics of magnetic fields, or nonhuman animal communications, or dark energy/matter.

We are bombarded nonstop by magnetic fields and nonhuman animal communications, and live in a universe which seems to be dominated by dark energy and matter, and yet we understand little to none of it.


20 watts ignores the startup cost: Tens of millions of calories. Hundreds of thousands of gallons of water. Substantial resources from at least one other human for several years.

Just an interesting thought experiment: if you took all the sensory information that a child experiences through their senses (sight, hearing, smell, touch, taste) between, say, birth and age five, how many books worth of data would that be? I asked Claude, and their estimate was about 200 million books. Maybe that number is off by an order of magnitude in either direction. ...but then again, Claude is only three years old, not five.
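
For what it's worth, that kind of estimate is easy to sketch; every constant below (effective bandwidth per sense, waking hours, bytes per book) is an assumption picked for illustration, not a measurement:

    # Back-of-envelope: five years of sensory input, measured in "books".
    # All constants are assumed ballparks.
    vision_bytes_per_s = 1e6              # ~10 Mbit/s is often quoted for the optic nerve
    other_senses_bytes_per_s = 1e5        # hearing, touch, smell, taste combined
    waking_seconds = 5 * 365 * 12 * 3600  # five years at ~12 waking hours per day

    total_bytes = (vision_bytes_per_s + other_senses_bytes_per_s) * waking_seconds
    bytes_per_book = 1e6                  # ~1 MB of plain text per book

    print(f"~{total_bytes / bytes_per_book:.0e} books")  # ~9e+07, within the stated
                                                         # error bar of 200 million

Change any assumption by a factor of a few and you move an order of magnitude either way, which is exactly the error bar given above.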

To be fair, the knowledge embedded in an LLM is also, at this point, a couple of orders of magnitude (at least) larger than what the average human being can retain. So it's not like all those books and all that text on the internet are used just to bring them up to our level; they go way beyond.

Now multiply that by 7 billion to distill the one who will solve a frontier math problem.

Most people have absorbed way too few books to be able to infer properly. Hell, most people are confused by TV remotes.

It's only because humans came up with a problem, worked with the AI, and verified the result that this achievement means anything at all. An AI "checking its own work" is practically irrelevant when they all seem to go back and forth on whether you need the car at the carwash to wash the car. Undoubtedly people have been passing this set of problems to AIs for months or years and have gotten back either incorrect results or results they didn't understand, but either way, a human confirmation is required. AI hasn't presented any novel problems, other than the multitudes of social problems described elsewhere. AI doesn't pursue its own goals and wouldn't know whether they've "actually been achieved".

This is to say nothing of the cost of this small but remarkable advance. Trillions of dollars in training and inference, and so far we have a couple of minor (trivial?) math solutions. I'm sure if someone had bothered funding a few PhDs for a year we could have found this without AI.


>It's only because humans came up with a problem, worked with the AI, and verified the result that this achievement means anything at all.

Replace ai with human here and that's...just how collaborative research works lol.


The only things moving faster than AI are the goalposts in conversations like this. Now we're at "sure, AI can solve novel problems, but it can't come up with the problems themselves on its own!"

I'm curious to see what the next goalpost position is.


> I'm curious to see what the next goalpost position is.

I am as well. That's the point. AI can do some things well and other things better than humans, but so can a garden hose and all technology. Is AI just a tool, or is it the future of all work? By setting goalposts we can see whether or not it is living up to the hype that we're collectively spending trillions on.

The garden hose manufacturers aren't claiming that they're going to replace all human workers, so we don't set those kinds of goalposts to measure whether it's doing that.


Funding a few PhDs for a year costs orders of magnitude more than it cost to solve this problem in inference. Also, this has been active research for some time. Or I guess the people working on it are just not as good as a random bunch of students? It's amazing the lengths that people go to maintain their worldview, even if it means belittling hardworking people.

I take it you're not a mathematician. This is an achievement, regardless of whether you like LLMs or not, so let's not belittle the people working on these kinds of problems please.


>It's amazing the lengths that people go to maintain their worldview, even if it means belittling hardworking people.

This is one of the most baffling and ironic aspects of these discussions. Human exceptionalism is what drives these arguments, but the machines are becoming so good that you can no longer make them without putting down even the top-percenter humans in the process. The same thing is happening all over this thread (https://news.ycombinator.com/item?id=47006594). And it's like they don't even realize it.


> Funding a few PhDs for a year costs orders of magnitude more than it cost to solve this problem in inference.

I don't think PhD students are sitting around solving one problem for a year. Also, PhD students are way cheaper.


How many math PhD students do you have? If you set the problem right, something like this per year on average is a good pace.

How are they cheaper? Your average grant where I am can pay for a couple of PhD students. I could afford to pay for inference costs out of my own salary, no grant needed. Completely different economic scales here. I like students better of course, but funding is drying up these days.


I was saying generally. I don't work in maths. PhD students do lots of things other than research. If we asked a PhD student to just solve these kinds of problems and nothing else, the student would do it without much difficulty.

I guess it's different somewhere like Europe. But in Canada, most PhD students are paid through TAships, not primarily through grants. The average salary is 25k/year. Take 6-10k out for tuition, and that's 15-19k/year. You get a student doing so many things for less pay. I guess if your job only requires research, then you can do it.


Inference costs are heavily subsidised. My point was that we've spent trillions collectively on AI, and so far we have a few new proofs. It's been active research, but estimates suggest only 5-10 people are even aware that it is a problem. I wrote "math PhDs", not "random students", but regardless, I don't know how you interpreted my statement that people could have discovered this without AI as "belittling the people working on this". You seem like a stupid person with an out of control chatbot that can't comprehend basic arguments.

> You seem like a stupid person

And now you're belittling me. Yeah, good one, that'll convince people.

> out of control chatbot that can't comprehend basic arguments

I don't see how it is out of control. It is a tool. It is being used for a job. For low-level jobs it often succeeds. For tougher jobs, it is succeeding sufficiently often to be interesting. I don't care if it understands worldview semantics, that's for humans to do.

> we've spent trillions collectively on AI

The economics around AI do not suggest that continuing to perform large training runs is sustainable. That's also not relevant to the discussion. Once the training is done, further costs are purely on inference, and that is the comparison I was making.

> Inference costs are heavily subsidised

Even if you pay to run inference on your own hardware, economies of scale dictate that it is still cheaper than students.

> It's been active research, but estimates suggest only 5-10 people are even aware that it is a problem.

That sounds about right for most pure math problems. Were you expecting more?

Let's not pretend that society would have invested that kind of money into pure mathematics research. It is extraordinarily difficult to get funding for that kind of work in most parts of the world. Mathematicians are relatively cheap, yes, but the money coming into AI was from blind VCs with a sense of grandeur. It wasn't to do maths research. If it's here anyway, and causing nightmares for actually teaching new students, may as well try to make some good of it. It has only recently crossed the edge of being useful. Most researchers I know are only now starting to consider it, mostly as a search engine, but some for proof assistance. Experiences a year ago were highly negative. They're a lot more positive now.

I'm trying to give a perspective from someone who actually does do math research at a senior level, who actually does have a half dozen math PhD students to supervise, to say that your blind attitude toward this is not sensible or helpful. Your comments about the problem being trivial do belittle the actual effort people have put into the problem without success. If they could easily have discovered this without AI, they would have already done so. Researchers do not have unlimited time and there are many more problems than students, especially good ones (hence my random comment).


>> we've spent trillions

Source? This sounds like hyperbole. The entire US GDP is low tens of trillions.


From various online estimates, I would estimate global AI spend just since 2020 at $2T. Some projections estimate that we might spend that much per year starting next year. To the extent that many of these projects will be cancelled or shelved, capital is beginning to take stock of the feasibility of clawing back even the original investments. OpenAI is apparently doubling its staff, but whether these are sales or (prompt?) engineering jobs, the biggest hypemongers are themselves unable to reduce headcount even with unlimited "at-cost" AI inference.

Comparing total AI spend to the value added by producing a few new maths/science proofs is unfair, since AI is doing more than maths proofs, but for comparison one can estimate the total spent to date on mathematicians and associated costs (buildings, experiments, etc.). I would very roughly estimate that the total cost of all mathematics since 1600 is less than what we've spent on AI to date, and the results from investment in mathematicians are incomparable to a few derivative extensions of well-established ideas. For less than a few trillion we have all of mathematics. For an additional $2T, we have trivial advancements that no one really cares about.


> I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

Because, empirically, we have numerous unique and differentiable qualities, obviously. Plenty of effort goes into understanding this; we have the young but rigorous fields of neuroscience and cognitive science.

Unless you mean "fundamentally unique" in some way that would persist - like "nothing could ever do what humans do".

> People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans.

I frankly doubt it applies to either system.

I'm a functionalist, so I obviously believe that everything a human brain does is physical and could be replicated using some other material that can exhibit the necessary functions. But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/agent is doing what humans do.


>But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/agent is doing what humans do.

You can think whatever you want, but an untestable distinction is an imaginary one.


First of all, that's not true. Not every position has to be empirically justified. I can reason about a position in all sorts of ways without testing. Here's an obvious example that requires no test at all:

1. Functional properties seem to arise from structural properties

2. Brains and LLMs have radically different structural properties

3. Two constructs with radically, fundamentally different structural properties are less likely to have identical functional properties

Therefore, my confidence in the belief that brains and LLMs should have identical functional properties is lowered by some amount, perhaps even just ever so slightly.

Not something I feel like fleshing out or defending, just an example of how I could reason about a position without testing it.

Second, I never said it wasn't testable.


Your reasoning may lower your confidence, but until it connects to observable differences, it is still at least partly a story you are telling yourself.

More importantly, the question is not whether LLMs work the same way human brains do. You may care about that, but many people do not. The relevant question is whether they exhibit the functional properties we care about. Saying “they are structurally different, therefore not really intelligent” is a lot like insisting planes are not really flying because they do not flap like birds.

And on your last point: in practice, it is not testable. There is no decisive intelligence test that sorts all humans into one bucket and all LLMs into another. So if your distinction cannot be cashed out behaviorally, functionally, or empirically, then it starts to look less like a serious difference and more like a metaphysical preference.


No, but it does mean that you should know we don't understand what intelligence is, and that maybe LLMs are actually intelligent and humans have the appearance of intelligence, for all we know.

You're just defining intelligence as "undefined", which okay, now anything is anything. What is the point of that?

Indeed, there's quite a lot of work that's been done on what these terms mean. The fields of neuroscience and cognitive science have contributed a lot to the area, and obviously there are major areas of philosophy that discuss how we should frame the conversation or seek to answer questions.

We have more than enough, trivially, to say that human intelligence is distinct, so long as we take on basic assertions like "intelligence is related to brain structures" since we know a lot about brain structures.


Our intelligence is related to brain structures, not all intelligence. You can't get to things like "what all intelligence, in general, is" from "what our intelligence is" any more than you can say that all food must necessarily be meat because sausages exist.

But... we're talking about our intelligence. So obviously it's quite relevant. I didn't say that AI isn't intelligent, I said that we have good reason to believe that our intelligence is unique. And we do, a lot of good evidence.

I obviously don't believe that all intelligence is related to specific brain structure. Again, I'm a functionalist, so I believe that any structure that can exhibit the necessary functions would be equivalent in regards to intelligence.

None of this would commit me to (a) human exceptionalism, (b) LLMs/agents being intelligent, or (c) LLMs/agents being intelligent in the way that humans are.


This is too dependent on what you mean by "unique", though. What do we have that apes don't, and which directly enables intelligence? What do we have that LLMs don't? What do LLMs have that we don't?

I don't think we know enough to definitively say "it's this bit that gives us intelligence, and there's no way to have intelligence without it". We just see what we have, and what animals lack, and we say "well it's probably some of these things maybe".


> What do we have that apes don't, and which directly enables intelligence?

Again, there are multiple fields of study with tons of amazingly detailed answers to this. We know about specific proteins, specific brain structures, we know about specific cognitive capabilities in the abstract, etc.

> What do we have that LLMs don't?

Again, quite a lot is already known about this.

This feels a bit like you're starting to explore this area and you're realizing that intelligence is complex, but you may not realize that others have already been doing this work and we have a litany of information on the topic. There are big open questions, of course, but we're definitely past the point of being able to say "there is a difference between human and ape intelligence" etc.


It'd probably be more productive for you to actually back up your claims with these things we know from neuroscience, rather than just stating that we know things, and so therefore you're right. What do we know?

EDIT: can't reply, so I'll just update here:

You're arguing that the mechanism that produces human intelligence is unique, so therefore the intelligence itself is somehow fundamentally different from the intelligence an LLM can produce. You haven't shown that, you just keep saying we know it's true. How do we know?


I don't need to do that unless you think that neurons interact exactly the way that LLMs do? That said, we have detailed, microscopic models of neurons; the ability to simulate brain activity; intervention studies where we can make predictions, interact with brains in various ways, and then validate against those predictions; and cognitive benchmarks that we can apply to different animals, or to animals at different stages of development, and then tie to specific brain states and brain development.

So we're in a very good position to say quite a lot about the brain, an incredible amount really. And that puts us in a very good position to say that our brain is very different from other animal brains, and certainly in a very good position to say that's very different from an LLM.

Now, you can argue that an LLM is functionally equivalent to the brain, but given that it's so structurally distinct, and seemingly functions in a radically different way due to the nature of that structure, I'd put it on you to draw symmetries and provide evidence of that symmetry.


I'm following this mini-thread with interest but I've arrived here and I confess, I don't really know what your argument is.

I think this all stems from you objecting to this statement:

"I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique."

I think you're being uncharitable in how you interpret that. Humans are unique in the most literal reading of this sentence; we don't have anything else like humans. But the context is the ability to reason, and people denying that a machine is reasoning even though it looks like reasoning.


They're shocked that people believe that humans are unique. I explained why that shouldn't be shocking. I think I was pretty charitable here, I gave an alternative option for what they could mean in my very first reply:

> Unless you mean "fundamentally unique" in some way that would persist - like "nothing could ever do what humans do".

> I don't really know what your argument is.

I just said that I think we have very good reasons for believing that human cognition is unique. The response was seemingly that we don't have enough of an understanding of intelligence to make that judgment. I've stated that I think we do have enough of an understanding of intelligence to make that judgment, and I've appealed to the many advances in relevant fields.


I still think you're being far too literal, which doesn't make for an interesting conversation.

I'm open to hearing how you think I should be interpreting things. I don't really think I'm being too literal, it certainly hasn't been the case that they've suggested my interpretation is wrong, and I've provided two interpretations (one that I totally grant).

What's the better interpretation of their position?


Re: "I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique."

Perhaps this might better help you understand why this assumption still holds: https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...


"Controversial theory justifies assumption". Because humans never hallucinate.

It doesn't. I actually completely reject that theory, and it's nice to see that Wikipedia notes that it is "controversial". There are extremely good reasons to reject it. For one thing, any quantum effects are going to be quite tiny/trivial, because the brain is too large, hot, wet, etc., to see larger effects, so you have to somehow make a leap from "tiny effects that last for no time at all" to "this matters fundamentally in some massive way".

It likely requires rejection of functionalism, or the acceptance that quantum states are required for certain functions. Both of those are heavy commitments with the latter implying that there are either functions that require structures that can't be instantiated without quantum effects or functions that can't be emulated without quantum effects, both of which seem extremely unlikely to me.

Probably the far more important reason: it doesn't solve any problem. It's just "quantum woo, therefore libertarian free will" most of the time.

It's mostly garbage, maybe a tiny tiny bit of interesting stuff in there.

It also would do nothing to indicate that human intelligence is unique.


It is not the assumption that humans are unique. It is that statistical models cannot really think out of the box most of the time.

And you know that humans aren't statistical models how?

Because they would be more logical.

Touche.

> I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

Uh, because up until and including now, we are...?


Every living thing on Earth is unique. Every rock is unique in virtually infinite ways from the next otherwise identical rock.

There are also a tremendous number of similarities between all living things and between rocks (and between rocks and living things).

Most ways in which things are unique are arguably uninteresting.

The default mode, the null hypothesis, should be to assume that human intelligence isn't interestingly unique unless it can be proven otherwise.

In these repeated discussions around AI, there is criticism over the way an AI solves a problem, without any actual critical thought about the way humans solve problems.

The latter is left up to the assumption that "of course humans do X differently" and if you press you invariably end up at something couched in a vague mysticism about our inner-workings.

Humans apparently create something from nothing, without the recombination of any prior knowledge or outside information, and they get it right on the first try. Through what, divine inspiration from the God who made us and only us in His image?


I doubt you can even define intelligence sufficiently to argue this point. Since that's an ongoing debate without a resolution thus far.

But you claimed that humans aren't unique. I think it's pretty obvious we are on many dimensions including what you might classify as "intelligence". You don't even necessarily have to believe in a "soul" or something like that, although many people do. The capabilities of a human far surpass every single AI to date, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this.

> There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical.

Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.


> I doubt you can even define intelligence sufficiently to argue this point.

Agreed.

> But you claimed that humans aren't unique.

I'm arguing that it is up to us to prove that they are interestingly unique in the context of this post. Which is pretty narrow - how do we solve problems?

The theme I was arguing against that I've seen repeated throughout this thread is that AIs are just recombining things they've absorbed and throwing those recombinations at the wall until they see what sticks.

It raises the question of why we presume that humans do things any differently, when it seems quite clear that we can only ever possibly do the same, unless we are claiming that knowledge of the universe can enter the human mind through some means other than through the known senses.

Not at all disputing that humans possess many capabilities that AIs do not.

> Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

I touched on this elsewhere, will go ahead and paste it here again:

The fundamental thing I'm speaking out against is the arrogance of human exceptionalism.

This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over.

Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions...

I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche but any material analysis of our universe shows that it is extremely unlikely that we hold that position.


>The capabilities of a human far surpass every single AI to date

What does this mean? Are you saying every human could have achieved this result? Or this? https://openai.com/index/new-result-theoretical-physics/

Because, well, you'd be wrong.

>, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this.

Human intelligence was brute forced. Please let's all stop pretending that those billions of years of evolution don't count and that we poofed into existence. And you can keep parroting "simulacrum of intelligence" all you want, but that isn't going to make it any more true.


> The capabilities of a human far surpass every single AI to date

Meaning however you (reasonably) define intelligence, if you compare humans to any AI system humans are overwhelmingly more capable. Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence. Or else we'd be talking about how my calculator is intelligent. Of course computers can compute faster than we can; that's beside the point.

> Human intelligence was brute forced.

No, I don't mean how the intelligence evolved or was created. But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional. Evolution is not an intentional process, unless you believe in God or a creator of sorts, which is totally fair but probably not what you were intending.

But my point is that LLMs essentially arrive at answers by brute force through search. Go look at what a reasoning model does to count the letters in a sentence, or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on 20% of a lightbulb!).


> But my point is that LLMs essentially arrive at answers by brute force through search.

If "brute force" worked for this, we wouldn't have needed LLMs; a bunch of nested for-loops can brute force anything.

The reason why LLMs are clearly "magic" in ways similar to our own intelligence (which we very much don't understand either) is precisely that they can actually arrive at an answer without brute force, which is computationally prohibitive for most non-trivial problems anyway. Even if the LLM takes several hours spinning in a reasoning loop, those millions of tokens still represent a minuscule part of the total possible solution space.

And yes, we're obviously more efficient and smarter. The smarter part should come as no surprise given that our brains have vastly more "parameters". The efficient part is definitely remarkable, but completely orthogonal to the question of whether the phenomenon exhibited is fundamentally the same or not.
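
To put a number on "minuscule", here is a tiny sketch; the vocabulary size, answer length, and token budget are all assumptions chosen for illustration:

    # How small is "millions of tokens" against the space of possible outputs?
    # All three constants are illustrative assumptions.
    import math

    vocab = 100_000     # assumed tokenizer vocabulary size
    answer_len = 1_000  # assumed length of one complete answer, in tokens
    explored = 1e7      # assumed tokens generated over hours of reasoning

    log10_space = answer_len * math.log10(vocab)  # log10(vocab ** answer_len)
    print(f"possible answers:  ~10^{log10_space:.0f}")  # ~10^5000
    print(f"fraction explored: ~10^{math.log10(explored) - log10_space:.0f}")
    # ~10^-4993: exhaustive enumeration is not a candidate explanation

Whatever one thinks the model is doing internally, at these scales it cannot be anything like trying everything until something works.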


>Meaning however you (reasonably) define intelligence, if you compare humans to any AI system humans are overwhelmingly more capable.

Really? Every human? Are you sure? Because I certainly wouldn't ask just any human for the things I use these models for, and I use them for a lot of things. So, to me, the idea that all humans are "overwhelmingly more capable" is blatantly false.

>Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence.

What was achieved here or in the link I sent is not just "solving a math equation".

>Or else we'd be talking about how my calculator is intelligent.

If you said that humans are overwhelmingly more capable than calculators at arithmetic, well, I'd tell you you were talking nonsense.

>Of course computers can compute faster than we can; that's beside the point.

I never said anything about speed. You are not making any significant point here lol

>No, I don't mean how the intelligence evolved or was created.

Well then, what are you saying? Because the only brute-forced aspect of LLM intelligence is its creation. If you do not mean that, then just drop the point.

>But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional.

First of all, this makes no sense, sorry. Evolution is regularly described as a brute-force process by atheist and religious scientists alike.

Second, I don't have any problem with people thinking we have a creator, although that instance still doesn't necessarily mean a magic "poof into existence" reality either.

>But my point is that LLMs essentially arrive at answers by brute force through search.

Sorry but that's just not remotely true. This is so untrue I honestly don't know what to tell you. This very post, with the transcript available is an example of how untrue it is.

>or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on 20% of a lightbulb!).

Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.


> Really? Every human?

Yes, in many ways absolutely. Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend isn't more capable in countless cases.

> Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

Isn't that just more proof of how efficient the human brain is? Especially given that a wire has much better properties than water solutions in bags.


>Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend isn't more capable in countless cases.

People use LLMs for a lot of things. "Better Google" is a tiny slice of that.

>Isn't that just more proof of how efficient the human brain is?

Sure. So what? If a game runs poorly on one piece of hardware and excellently on another, does that mean the game was fundamentally different between the two devices? No, of course not.


I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains where LLMs are either incapable or inferior to us.

Here might be some definitions of intelligence for example:

> The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.

> "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills".

> Goal-directed adaptive behavior.

> a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation

But even a housefly possesses levels of intelligence regarding flight and spatial awareness that dominate any LLM. Would it be fair to say a fly is more intelligent than an LLM? It certainly is along a narrow set of axes.

> Because the only brute-forced aspect of LLM intelligence is its creation.

I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. Not quite an exhaustive search, but massively compressed experience + pattern matching.

But regardless, even if both forms of intelligence arrived via some form of brute force, what is more important to me is the result of that - how the process of employing our intelligence looks.

> This very post, with the transcript available is an example of how untrue it is.

The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really.

> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

You're so close to getting it lol


>I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains where LLMs are either incapable or inferior to us.

So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains? That's not what overwhelming means.

>I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force.

That is not really what “brute force” means. Pattern learning over a compressed representation of experience is not the same thing as exhaustive search. Calling any statistical method “brute force” just makes the term too vague to be useful.

> what is more important to me is the result of that - how the process of employing our intelligence looks.

But this is exactly where you are smuggling in assumptions. We do not actually understand the internal workings of either the human brain or frontier LLMs at the level needed to make confident claims like this. So a lot of what you are calling “the result” is really just your intuition about what intelligence is supposed to look like.

And I do not think that distinction is as meaningful as you want it to be anyway. Flight is flight. Birds fly and planes fly. A plane is not a “simulacrum of flight” just because it achieves the same end by a different mechanism.

>The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really.

You do not need access to every internal representation to see that the model did not arrive at the answer by brute-forcing all possibilities. The observed behavior is already enough to rule that out.

> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

>You're so close to getting it lol.

No, you don't understand what I'm saying. If we were to be more accurate to the brain in silicon, it would be even less efficient than LLMs, never mind humans. Does that mean the way the brain works is wrong? No, it means we are dealing with two entirely different substrates, and directly comparing efficiencies like that to show one is superior is silly.


> So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains

When the number of domains in which humans are more capable than LLMs vastly exceeds the number of domains in which LLMs are more capable than humans, yes.

I also agree that we don't have a great understanding of either human or LLM intelligence, but we can at least observe major differences and conclude that there are, in fact, major differences. In the same way we can conclude that both birds and planes have major differences, and saying that "there's nothing unique about birds, look at planes" is just a really weird thing to say.

> If we were to be more accurate to the brain in silicon, it would be even less efficient than LLMs

Do you think perhaps this massive difference points to there being a significant and foundational structural and functional difference between these types of intelligences?


It's very telling that you put "materialist" and "anti-human" in the same bucket.

> I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

I think it comes from a position of arrogance/ego. I'll speak for the US here, since that's what I know best; the average "techie" in general skews towards the higher end of the intelligence distribution. This is a very, very broad stroke, and that's intentional, to illustrate my point. Because of this, techie culture has gained quite a bit of arrogance with regard to the masses. And this has been trained into tech culture since childhood, whether it be adults praising us for being "so smart", or for having "figured out the VCR", or for solving some other random tech problem that literally almost any human being can solve by simply reading the manual.

What I've found is that in the vast majority of technical problem-solving cases that average people struggle with, if they just took a few minutes to read the manual they'd be able to solve a lot of it themselves. In short, I don't believe, as a very strong techie, that I'm "smarter than most", but rather that I've taken the time to dive into a subject area that most other humans feel neither the need nor the desire to.

There are objectively hard problems in tech to solve, but the people solving THOSE problems in the tech industry are few and far between. And so the tech industry as a whole has spent the last decade or two spinning in circles on increasingly complex systems to continue feeding its own ego about its own intelligence. We're now at a point where, rather than solving the puzzle, most techies are creating incrementally complex puzzles to solve because they're bored of the puzzles in front of them. "Let me solve that puzzle by making a puzzle solver." "Okay, now let me make a puzzle solver creation tool to create puzzle solvers to solve the puzzle." And so forth. At the end of the day, you're still just solving a puzzle...

But it's this arrogance that really bothers me in the tech bro culture world. And, more importantly, at least in some tech bro circles, they have realized that their path to gathering an exponential increase in wealth doesn't lie in creating new and novel ways to solve the same puzzles, but in touting AI as the greatest puzzle-solver-creation-tool puzzle solver known to man (and grifting off of it for a little bit).


It's funny because the fundamental thing I'm speaking out against is the arrogance of human exceptionalism.

This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over.

Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions...

I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche but any material analysis of our universe shows that it is extremely unlikely that we hold that position.


The need for human exceptionalism doesn't come from the psyche or anything like that; it's just basic survival skills. Humans believe themselves to be special because that's the only belief that isn't self-destructive.

You can choose to believe humans are not exceptional, in the same way I can choose to cut off all my fingers and eat them. Why would I do that?

If what you say about LLMs is true, that's bad for me. And for you. And for our families. Because it means our intrinsic value of living just went down a lot. I choose not to believe it because I am not suicidal. And, ultimately, I think the people who do believe it can only ever make their lives worse. Probably my life worse too, but maybe if I'm all the way over here I'll avoid the blast radius.


I largely agree with you, but I also see this same type of thinking in people who I know are not arrogant - at least not in the tech-bro-ish way.

Humans are obviously unique in an interesting way. People only "move the goalpost" because whether humans can do some great stuff isn't the interesting question; the interesting question is where the boundary is (whether against animals or AI).

Some example goals which make humans trivially superior (in terms of intelligence): the invention of nuclear bombs/plants, the theory of relativity, etc.


But that's unique in the sense of "you have a bag of ten apples and I have a bag of eleven apples, therefore my bag is unique". It's not a qualitatively different intelligence from a dog's; you just have more of it.

I would argue that point. The biological components are the same, but emergent behavior is a thing. Both the scale and the number of connections/the way they connect have surpassed some limit, after which cognitive capabilities increased severalfold, to the point that humans "took over the world".

And arguably further increases in intelligence fall into a diminishing-returns category compared to this previous boom. (Someone being "2x smarter" doesn't get enough benefit to reign over others; at least history would look otherwise were that the case, in my opinion.)

Probably a dumb example, but just by increasing speed you go from well-behaved laminar flow to turbulence, yet it's fundamentally the same one level beneath.


Yeah, I don't know that there's such a jump. Dogs, for example, clearly communicate, both with us and with each other. They don't have language, but they also don't lack communication skills. To me, language is just "better communication" rather than a qualitatively different thing.

You may want to watch this video: https://youtu.be/e7wFotDKEF4?is=bl5TPvk9_mdnG3Om

Human language is way beyond the communication animals show. We don't really know where the exact boundary is, but again, the difference is significant and not just "scaled up".


You learned what was unsuitable over your entire life until now by making countless mistakes in human interaction.

A basic AI chat response also doesn't first discard all other possible responses.


More often than not, far, far, far more often than not, we do not already know that it will work. For all human endeavors, from the beginning of time.

If we get to any sort of confidence that it will work, that confidence is based on building a history of it, or of things related to "it", working consistently over time, out of innumerable other efforts where other "it"s did not work.


I don't see how this doesn't equally apply to the pre-AI economy. The results there have been quite stark, with the "entrepreneurs" ending up far better off than the "employees".

> I don't see how this doesn't equally apply to the pre-AI economy. The results there have been quite stark, with the "entrepreneurs" ending up far better off than the "employees".

This is wrong; in most cases the entrepreneur is worse off than the employees, since the entrepreneur spent all his savings on the project while the employees walk away with all the money they got from their salaries.

And even when it is fully funded by external investors, most of the time the founder just gets to keep the salary, since the company fails and becomes worthless.

The only time the entrepreneur is better off is when the company succeeds and becomes big, but that is rare, most of the time it is better to be an employee.


It depends on risk preferences.

Risk seekers should be entrepreneurs.

Risk averse people, probably, should not.


Also notable that after Anthropic’s acquisition of Bun, the vast majority of the communication and seeming effort from Jared on twitter seemed to shift to fixing issues with Claude Code.

I imagine many of these efforts benefitted the community as a whole, but it does make sense that the owners will have these orgs at least prioritize their own internal needs.


Have you tried the 5.3 Codex Xhigh, 5.4 Xhigh, Opus 4.6, Gemini 3.1?

All of them (even Gemini, the worst of the bunch) far outclass Grok on everything I've thrown at them, especially coding.

Grok is good at summarizing what's happening on twitter though.


Just wanna compliment you guys on your UI/UX. App is really well designed, smooth, slick.


100%

We know that neurons can produce subjective experience.

This is the first time in my life that I've felt a scientific avenue of research should shut down.


Animal testing, weapons testing, medical trials, cloning, psychological experiments… had you just never considered them before? Why this?


Those things all exist within our conscious realm. "Human brain cells in a vat used for computation" suggests horrors beyond understanding.


Same reason people get scared to fly but drive everyday. Humans are simultaneously wildly irrational and terrible at calculating risk.


This is somewhat novel unlike say weapons manufacturing. Also assuming that the GP is in the tech community to some degree, it makes sense they’d have a stronger reaction.

There’s lots of bad stuff humans shouldn’t be doing.


Not sure why this is being downvoted. It's a valid point. This neuron-chip stuff is far less problematic than a lot of animal testing, where you clearly have a whole organism that experiences something.

Factory farming too. The way we treat chickens in particular is out of a horror movie, and that's in countries with some standards. Globally, I'm sure many billions of animals are constantly subjected to the most grotesque torture for food.


I spoke inaccurately. I'm an ethical vegan, and nonconsensual animal industry and testing are an atrocity, as they would be were we to substitute in humans.

That said, the novelty of this, the unknowns, the mind reels at all the possibilities here and it frankly makes me nauseous.

What hells of existence could we create? I have no doubt that we could create an all encompassing misery that is beyond our comprehension.

Just, truly disgusting to me on a deep level.


At the very very least there are more productive ways of spending time.


We don't really know that.


Sounds like you're applying scifi tropes to real life. Don't do that. That's why some people are developing "AI psychosis" today after playing with LLMs.


The fear is that we don’t really understand what causes consciousness. I think that’s a valid fear, because we can’t know ahead of time whether we will inadvertently create a “person” inside the machine.

Unless your proposition is that no collection of human neurons outside of live birth can become sentient, and I’m not sure how you’d arrive at that conclusion without invoking some kind of spiritual argument.


You're conflating two totally separate things.


It's not even that Elon doesn't care in the sense of being indifferent to it; it directly feeds into reinforcing his political preferences.

Try creating a brand-new twitter account; you'll find that 80+% of the accounts suggested to you are right-wing propaganda, with climate denial being one of their greatest hits.

