
I have no idea how you can assert what is necessary/sufficient for consciousness in this way. Your comment reads like you believe you understand consciousness far more than I believe anyone actually does.


I believe consciousness needs some kind of mutable internal state because otherwise literally everything is conscious, which makes the concept useless. A rock "computes" a path to fall when you drop it but I don't believe rocks are conscious. Panpsychism is not a common belief.


I think Nagel put it best in 1974: https://www.philosopher.eu/others-writings/nagel-what-is-it-...

Essentially, something is conscious iff "there is something that it is like to be" that thing. Some people find that completely unsatisfying, some people think it's an insight of utter genius. I'm more in the latter camp.

Also, I think consciousness is non-binary. Something could be semi-conscious, or more or less conscious than something else.

Anyway, I don't think that there's anything that it's like to be an LLM. I don't see how anybody who knows how they actually work could think that.


> Anyway, I don't think that there's anything that it's like to be an LLM. I don't see how anybody who knows how they actually work could think that.

While I have almost zero belief that LLMs are conscious, I just don't think this is so trivially asserted.

The easy half of this is thinking that LLMs aren't conscious given what we know about how they work. The hard part (and very, very famously so) is explaining how _you_ are conscious given what we know about how you work. You can't ignore the second half of this problem when making statements like this... because many of the obvious ways to argue that clearly LLMs aren't conscious would also apply to you.


I wouldn't say that we actually know how our brains work. Based mainly on my neuroscience minor from 10 years ago I'd say that understanding feels hopelessly far away.


> My point isn't that coordination is easy - it's that treating it as impossible becomes self-fulfilling.

While I see what you are getting at, and I think it's super important that we come up with philosophical frameworks to push back on the central idea in question (i.e., the moral hazard of "it's gonna happen anyway, so why not pour a little more into the river")... I think your writing/responses miss the central point.

As I see it, the fundamental issue with this essay, and your responses, is that you keep conflating impossible with probability zero. People are saying "this is inevitable" to mean it has probability 1 of occurring, with basic game-theory reasoning (it's a giant iterated prisoner's dilemma), and your response is "but it's possible". Yes, with measure zero.

Telling us that such a path surely exists isn't useful. If you want to push back on "inevitability" you need to find a credible path with probability > 0 (which is a stronger claim than mere possibility).


Thanks for your thoughtful response. I think there's a misunderstanding (maybe my text wasn't clear; if so, please point out where so I can fix it).

We actually agree: even if the probability of successful coordination is only 10%, accepting inevitability makes it 0%. That difference matters enormously given the stakes. My argument isn't "coordination is definitely possible" but rather "believing it's impossible guarantees failure." When tech leaders say "AGI is inevitable," they're not describing reality; they're shaping it by discouraging attempts to coordinate. Human cloning hasn't happened because we maintain active resistance despite technical feasibility.

You're asking for credible paths with P > 0. I'm saying: knowing P with certainty is impossible, so accepting P = 1 narratives makes alternative paths invisible. The path emerges through trial and error, not before it.


> When tech leaders say "AGI is inevitable," they're not describing reality; they're shaping it by discouraging attempts to coordinate.

No, they're describing reality. As I posted in another comment, progress in technology drops capital requirements for innovation. Even if there's global coordination to stop AGI development right now, progress in tech means that in 30 years someone in their basement could do what OpenAI is doing right now but with commodity hardware. Preventing this would require an oppressive regime controlling basic information technology and knowledge to an extent that isn't palatable to anyone.


"They're describing reality" - No, they're making predictions about the future. If AGI requires 30 years of compute improvements as you say, then it's not reality, it's a forecast contingent on those 30 years of development continuing unimpeded.

As for "oppressive regime", we already do this for nuclear and biotech, and most people find it quite palatable! Nuclear materials are tightly controlled globally. Cloning humans is illegal almost everywhere. We've had the knowledge for both for decades, yet basement nukes and basement human clones aren't happening.

I'm not saying we should make it illegal, I'm just saying there are more gray areas than what's generally accounted for.


> No, they're making predictions about the future. If AGI requires 30 years of compute improvements as you say, then it's not reality, it's a forecast contingent on those 30 years of development continuing unimpeded.

The idea that you can develop "good" information technology without enabling the creation of "bad" information technology is pure fantasy, and if your idea is actually that we could halt the progress of information technology wholesale, then that's laughable, sorry to say. Hence the inevitability.

> As for "oppressive regime", we already do this for nuclear and biotech, and most people find it quite palatable!

You mean we do this for raw materials that have inherent scarcity because they are somewhat rare, difficult to mine, and difficult to refine? And you think this natural scarcity is somehow comparable to natural abundance that follows from digital information that can be trivially copied at perfect fidelity?

Furthermore, you've misunderstood what was meant by "oppressive regime". The same technologies that allow us to email each other, make family photo albums and forecast the weather or the stock market are what also enable AI. There is no way in which to suppress AI without also suppressing these other benign uses that everyone enjoys and that enable considerable productivity. This is not comparable to the technology and raw materials for nuclear weapons.

> We've had the knowledge for both for decades, yet basement nukes and basement human clones aren't happening.

You seem awfully confident about declaring the non-existence of something that's inherently underground and thus difficult to measure.

But let's make the comparison of AI to cloning more apt: how confident would you be that cloning won't happen once the knowledge of how to construct artificial wombs is discovered? Now reconsider those probabilities if such wombs are also easy to construct with readily available materials. That's the reality of information technology.


If this were so, I've yet to see any explanation that accounts for why human cloning has been so successfully prevented, even though it's relatively inexpensive technology, to the point that we have (had?) pet cloning companies selling it as a consumer service.


1. How do you know human cloning has been prevented? Maybe you mean it's not provided as a commercial service, but does that entail it's not happening at all?

2. Preventing the manifestation of physical objects is a lot easier than preventing the dissemination of pure information. AIs are easy to copy, easy to run, and can assist in their own creation, advancement and proliferation, and it only gets easier over time. For an apt analogy, consider a cloning lab where every clone that escaped was compelled to create their own cloning lab, and everything you needed could be bought at any corner store.

3. All cloning requires existing biological organisms to participate at various stages. You need not only the biologists on board, but also the surrogate that has to carry the fetus to term. What do you think will happen when artificial wombs become available?


1. Because neither Elon Musk nor Kim Jong-Un have a clone. That is, if human cloning were being carried out, we'd see clones of famous and/or powerful people. Is there some chance it happened in a lab somewhere and kept extremely secret? Sure. Is it a technology that's available to the groups that could pay for it? No, it would be very visible (eventually, at least).

2. Computers powerful enough to train AI are also physical objects, ones that consume gigantic amounts of power as well. Maybe some day we'll have computers that can train Claude on 50kW of power running in your pocket, but maybe not. There are fundamental limits to how much computation you can get per watt, and we're getting closer to them. So, preventing AI may be as simple as banning use of any computer cluster that consumes more than some wattage, say 1000kW, without government audit, while also banning research into more computationally efficient ways of doing AI.

3. This is not a real problem, since some biologists that are into cloning may have wombs of their own to gestate the clone. Artificial wombs, even cheap ones, would change nothing in relation to cloning (except maybe reduce the diversity of rogue cloning research teams - angering the criminal enterprise DEI department, I'm sure).


I was struck by how the argument is also isomorphic to how we talked about computers and chess. We're at the analogous stage, arguing that the computer isn't _really_ understanding chess: it's just doing huge amounts of dumb computation with huge opening books and endgame tablebases, and has no real understanding, strategy, or sense of what's going on.

Even though all the criticisms were, in a sense, valid, in the end none of them amounted to a serious challenge to getting good at the task at hand.


> has a model much as we humans do

The premise that an AI needs to do Y "as we do" to be good at X, because humans use Y to be good at X, needs closer examination. This presumption seems to be omnipresent in these conversations and I find it so strange. AlphaZero doesn't model chess "the way we do".


Both that, and that we should not expect LLMs to achieve ability with humans as the baseline comparison. It's as if cars were rapidly getting better due to some new innovation, and we expected them to fly within a year. It's a new and different thing, where the universality of "plausible-sounding" coherent text made it appear to be general, when it's advanced pattern matching. Nothing wrong with that, pattern matching is extremely useful, but drawing an equals sign to human cognition is extremely premature, and a bet that is very likely to be wrong.


AlphaZero is not trying to be AGI.

> The premise that an AI needs to do Y "as we do" to be good at X because humans use Y to be good at X needs closer examination.

I don't see it being used as a premise. I see it as speculation that is trying to understand why this type of AI underperforms at certain types of tasks. Y may not be necessary to do X well, but if a system is doing X poorly and the difference between that system and another system seems to be Y, it's worth exploring whether adding Y would improve the performance.


Suppose this is as good a place to pile on as any.

Though this was not the post I was expecting to show up today, it was super awesome for me to get to have played my tiny part in this big journey. Thanks for everything @je (and qi + david -- and all the contributors before and after my time!).


It's fun to see everyone arguing about what "everyone" thought... when... we can just... look: https://news.ycombinator.com/item?id=3817840 is a fun thread from 2012.

The top reply to the top comment has some useful quotes for the purposes of this discussion...

> This is not going to be one of the best tech acquisitions of the next decade.

> Instagram is a photo service in a sea of other photo services.

> Bookmark this comment. See you in 2022.

Heh.


I think you are being selective. I looked at the top 10 top-level comments and by my judgment:

1. bullish, 2. bullish, 3. neutral, 4. neutral, 5. neutral, 6. neutral, 7. bullish, 8. bearish, 9. bearish, 10. neutral

Of the top top-level comments, you have to go all the way to #8 to find a bearish comment.

Replies to the top comment are more bearish because they're directly responding to a bullish sentiment.


On the other hand the top ten comments as a whole are 3 bullish, 5 neutral, 2 bearish which is certainly not an overwhelming sentiment in any direction. That's despite the fact that bullish comments on start-ups tend to get more votes because it's a start-up community.


Sure, plenty of people thought that it was a good purchase, but my point is nobody thought of it as buying out their competition. The transition into a social media platform and algorithmic content machine occurred under Facebook's direction.


> "Where's the money in Instagram?" Preventing Instagram from developing into something that has a negative effect on Facebook. It's a "keep your enemies closer" move.

- Larrys 2012


The top comment compares it to YouTube.


Back then Google was trying to be social. Remember Google+?

https://www.joyoftech.com/joyoftech/joyarchives/1523.html


> Remember Google+?

How about Google People? [1]

[1] https://qntm.org/perso


HN is notorious for this kind of thing, such as the iPod: "less space than a Nomad, no wireless, lame", due to not understanding how much consumers value simplicity.


Every time I see a comment accusing HN of having some specific consensus position, like a hive mind, I go back and see comments both contradicting and supporting the stance. In other words, different opinions. Every single time I check, and every single time it shows the original commenter engaged in selection bias.

This case is particularly wrong, as that iPod quote is from Slashdot. HN didn’t even exist in 2001.

https://slashdot.org/story/01/10/23/1816257/apple-releases-i...


But hindsight about HN is different from hindsight about Facebook.

Facebook was literally collecting data on what apps people were using on their phones and empirically saw the rise of Instagram. Of course the rise of Instagram didn't need to continue, but that's why you buy all the realistic competitors: even if most of them fail, you have a moat of dead companies.


After trying to set up wireless on my printer's interface and enter a password with up and down arrows rotating through an entire set of keys, I'm fairly convinced that no wireless on the iPod was massively correct. If people were expected to set up wifi by entering a password with a rotation device, adoption would be minuscule.


Dunno why the internet enjoys dunking so much on a poor anonymous poster who guessed wrong about a product that would catch on, whether that's the iPod, the iPad, Dropbox, etc.

We don't seem to spend half as much energy taking major news outlets to task when they similarly guess wrong, unless we feel that somehow adding a question mark negates any responsibility (i.e. "The Ouya will revolutionize gaming" vs. "Will the Ouya revolutionize gaming?").


It's a recurring cognitive dissonance between cynical tech people and the actual mass market. I mean, I'm cynical about most things, but the ones who aren't, and who get onto the hype train, earn big money off of it.


To be fair, the iPod comment was on Slashdot


Fair, but other than browsing the web, a Blackberry was superior for communications than an iPhone back then.


Those people aren't putting in their own money. The people who did put their money into Instagram got to see behind the corporate covers, and they decided to buy anyway. It's very easy to say whether you'd invest $1B if you're not putting in any of your money.


You took "everyone" literally, but it's meant more like "a lot/the majority held the opinion" or "it was easy to see".

The top comment compares it to YouTube as a great acquisition.


For uninitialized memory reads, which is one of the biggest classes of issues, valgrind can still be invaluable. MSAN is one of the more difficult things to get set up and cleared of false positives. You typically need to transitively compile everything, including dependencies, or annotate/hint lots of things to get the signal-to-noise ratio right. Sometimes it's just easier/better/faster to run it under valgrind.
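
To make the uninitialized-read case concrete, here's a minimal sketch (hypothetical code, not from any real project) of the kind of bug memcheck catches without a rebuild:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // malloc'd memory is never initialized.
        int* flags = static_cast<int*>(std::malloc(16 * sizeof(int)));
        // Branching on it triggers the classic memcheck report:
        // "Conditional jump or move depends on uninitialised value(s)".
        if (flags[3]) {
            std::printf("flag set\n");
        }
        std::free(flags);
        return 0;
    }

Running `valgrind ./a.out` flags this on an unmodified binary, whereas MSan needs the whole process compiled with clang's -fsanitize=memory, ideally dependencies included, before its reports are trustworthy.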


> most

Well... "a few" is probably more accurate.

https://twitter.com/tjaltimore/status/1763571057703723344?s=...


You've subtly changed the argument here.

That's the athletic department as a whole. At my alma mater, which is on that list as one where the athletics department costs 500-1000 per student, our football team (which is consistently mediocre) and basketball team were, as far as I know, profitable, and subsidized other D1 sports. I believe that cost also covers the funding for things like intramural sports, the student athletics complex (gym), and various other athletics adjacent things that are student-services shaped, and not D1-sports shaped.


What you're saying doesn't resonate with me, as my experience is the opposite, and I'm curious where this discrepancy is rooted. Reckless hypothesis: are you working on mostly latency-sensitive or mostly throughput-sensitive systems?

I have seen so, so, so many examples of systems where latencies, including and especially tail latencies, end up mattering substantially and where Java becomes a major liability.

In my experience, carefully controlling things like p99 latency is actually the most important reason C++ is preferred, rather than the more vaguely specified "performance".


The specific example that comes to mind was translating a Java application doing similarity search on a large dataset into fairly naive Rust doing the same. Throughput, I guess. It may be possible to optimize the Rust to get there, but it's also possible to end up (and in this case we did) with less understandable code that runs at 30% of the speed.

Edit: And probably for that specific example it’d be best to go all the way to some optimized library like FAISS, so maybe C++ still wins?


I've seen C++ systems that are considerably slower than equivalent Java systems, despite Java's boxing and lack of stack allocation. It's mostly throughput: malloc is slow, the C++ smart pointers cause memory barriers, and the heap fragments. Memory management for complex applications is hard, and the GC often gives better results.


I've seen so many flat profiles due to shared_ptr. Rust has done a lot of things right, but one thing it really did well was putting a decent amount of friction into std::sync::Arc<T> (and offering std::rc::Rc<T> when you don't want atomics!) vs &mut T or even Box<T>. Everyone reaches for shared_ptr when 99% of the time unique_ptr is the correct option.


I see comments like yours more than I see anyone suggesting actually using shared_ptr widely. In my experience, most people (that use smart pointers - many do not) prefer to use unique_ptr where they can.


I don't think anyone is suggesting shared_ptr explicitly; it's more that if you're coming from Python/Java/etc. it's similar to a GC memory model and a bit more familiar if that's where your experience is. I've observed in a number of codebases that unless you're setting a standard of unique_ptr by default, it's fairly easy for shared_ptr to become widely used.

FWIW I consider shared_ptr a bit of a code smell, a sign that you don't have your ownership model well understood. There are cases to use it, but 90% of the time you can eliminate it with a more explicit design (and avoid chances of leaks plus the penalties from atomics).
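
To make the contrast concrete, a minimal sketch (hypothetical Widget/Registry names) of the single-owner design that usually replaces shared_ptr:

    #include <memory>
    #include <vector>

    struct Widget { int id; };

    class Registry {
        // The registry is the sole owner: destruction is explicit in the
        // type, and there are no atomic refcount bumps on every copy.
        std::vector<std::unique_ptr<Widget>> widgets_;
    public:
        Widget* add(int id) {
            widgets_.push_back(std::make_unique<Widget>(Widget{id}));
            return widgets_.back().get();  // non-owning observer
        }
    };

Observers hold a plain Widget* that the registry guarantees to outlive; reaching for shared_ptr here mostly buys two atomic operations per copy and a blurrier answer to "who deletes this?".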


Bear in mind that my ultimate perspective is that you shouldn't use smart pointers (or C++) at all.

But even if you think they have some value, it isn't a flaw in a language if it's not immediately obvious how to write code in it for people that are new to it. If you're coming from Python or Java, then learn how to write C++. There are probably as many books on C++ as on any other language out there.


Badly written C++ will be as slow as well-written Java. Absolutely. But there is no way any Java code will perform better than well-optimised C++. High-performance C++ uses custom allocators to completely avoid hitting malloc (for example).
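
As an illustration of the custom-allocator point, a bare-bones arena/bump allocator of the sort hot paths use (a sketch under simplifying assumptions: power-of-two alignment, no per-object frees):

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    // One malloc up front; each allocation is just a pointer bump,
    // and everything is released at once when the arena dies.
    class Arena {
        char* buf_;
        std::size_t cap_;
        std::size_t used_ = 0;
    public:
        explicit Arena(std::size_t cap)
            : buf_(static_cast<char*>(std::malloc(cap))), cap_(cap) {}
        ~Arena() { std::free(buf_); }

        void* allocate(std::size_t n,
                       std::size_t align = alignof(std::max_align_t)) {
            std::size_t p = (used_ + align - 1) & ~(align - 1);  // align up
            if (p + n > cap_) throw std::bad_alloc{};
            used_ = p + n;
            return buf_ + p;
        }
    };

A per-request or per-frame arena turns thousands of malloc/free pairs into one of each, which is exactly the kind of win a general-purpose allocator or GC can't match.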


From my experience with embedded coding, you are correct. Most stuff lives and dies enclosed in a single call chain and isn't subject to spooky action at a distance. And the stuff that is, I often firewall behind a well-tested API.


I agree entirely with the premise here, save one subtle bit at the start. I think there is grave danger in reducing "vector database" to "vector search", as if they were equivalent domains and/or pieces of software. I would argue that for "vector databases" there are a lot more "database" problems than "vector" problems to be solved.

I fear there's going to be a lot of home-rolled "vector search" infra that accidentally wanders into an ocean of database problems.


Totally agree. It takes _a lot_ to go from Hnswlib to a full-fledged vector database.

Here's an architecture diagram for a production-ready vector database: https://milvus.io/docs/architecture_overview.md. Not exactly something you can build in a month.
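
For contrast, the "vector" half really is small. A minimal sketch using hnswlib's C++ API (toy parameters, error handling omitted):

    #include "hnswlib/hnswlib.h"
    #include <vector>

    int main() {
        const int dim = 128;
        hnswlib::L2Space space(dim);
        // max_elements, M (graph degree), ef_construction
        hnswlib::HierarchicalNSW<float> index(&space, 10000, 16, 200);

        std::vector<float> vec(dim, 0.5f);
        index.addPoint(vec.data(), /*label=*/0);

        // k nearest neighbours as (distance, label) pairs.
        auto result = index.searchKnn(vec.data(), /*k=*/1);
        return 0;
    }

Everything else in that Milvus diagram (sharding, replication, consistency, filtering, compaction, access control) is the "database" part, and that's where the years go.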


> I would argue that for "vector databases" there are a lot more "database" problems than "vector" problems to be solved.

Why the need for new technologies then? Databases are well studied. Vector search is relatively easy to implement. Sure, there are some new insights to be gained from a hybrid approach, but they are clearly overvalued.

Machine learning is supposed to make things easier. If you implement vector search across your company's data, there's no reason an LLM couldn't simply do the various SQL-style operations on chunks of that data retrieved via KNN. I'm not aware of this approach being used in practice, but I still think the obvious direction we're heading towards is being able to talk to computers in plain English, not SQL or some other relational-algebra framework.


Exactly!

It's much easier to start from a database and add vector search as one of the features than to go backwards. We have spent 7.5 years on the DBMS part, while the vector search can literally be added in a week...

And that's why every major modern database is now integrating such solutions :)


So many projects forget the MS in DBMS.

