Hacker News | andygroundwater's comments

Macabre stuff, but indisputably scientific all the same.


It's stretching credulity beyond the usual exaggerated hype associated with AI. What we have now is semi-OK forecasting at scale, nothing more. We (as in the researchers, platforms and technology) can get a system to select what looks to be a valid response to a host of stimuli, e.g., chess moves, patient diagnostics, vehicle driving, etc.

None of this "thinks for itself", nor is it remotely near such levels of conscious self-awareness. I'm sick of this hype; it's been going on since the 1950s, with hucksters promising robot household domestics and all sorts of kooky weirdness that was swallowed up by the popular media.


The people who say it might be slightly conscious are just appealing to a functional, substrate-independent requirement for consciousness. I happen to agree with them that it's feasible and plausible.

Let me ask you. If we invented an AGI that was as smart as us based on much larger nets (perhaps with one or two algorithmic tweaks on current approaches) trained on much more data, running on commodity hardware, would it be conscious? If yes, why can't our current nets be slightly conscious?


'slightly conscious' is just word salad, it doesn't actually mean anything.


You could say the same about "conscious" in general. There's not a single coherent definition of the word, not even in academic debates.


Maybe we should just say "Shut up and program", similar to how some physicists say, "Shut up and calculate", when the philosophical wrangling gets out of hand. Copenhagen interpretation vs. many-worlds? Does it matter? Is there any way to find out? If not, back to work.

My comment on this for several decades has been that we don't know enough to address consciousness. We need to get common sense right first. Common sense, in this context, is getting through the next 30 seconds without screwing up. Automatic driving is the most active area there. Robot manipulation in unstructured environments is a closely related problem. Neither works well yet. Large neural nets are not particularly good at either of these problems.

We're missing something important. Something that all the mammals have. People have been arguing whether animals have consciousness for a long time, at least back to Aristotle. Few people claim that animals don't have some degree of common sense. It's essential to survival. Yet AI is terrible at implementing common sense. This is a big problem.


I don't think we're going to get any breakthroughs in AI by encouraging people to stop thinking about fundamentals and just program. If you're thinking about fundamentals, some degree of philosophizing is unavoidable.

And that's still framing it as if philosophizing is something to be avoided, a waste of time. I disagree with that sentiment. In particular, we can't really avoid thinking about consciousness even without an agreed-upon definition, because our beliefs about consciousness influence our actions. For example, debates about the rights of animals are heavily influenced by our beliefs about their degree of consciousness.

IMO, "shut up and X" is code for "I don't enjoy thinking about the problem you're presenting (and perhaps I resent you a bit for making me think about it)". It's perfectly fine to just come out and say that you don't enjoy working on this particular problem. But that doesn't imply that the problem isn't worth thinking about.


Forget "addressing". Can we please start with a definition which a substantial number of people can agree to?


There isn't, but the response to this lack of definition shouldn't be to simply terminate the discussion.

We know it's probably a real thing because we experience it, and it's an extremely important open question whether an AGI on hardware will have "it" too.

The answer to the question will have large ethical implications a few decades into the future. If they can suffer just like animals can, we really need to know that so we don't accidentally create a large amount of suffering. If they can't suffer, just like rocks probably can't, this doesn't have to be a concern of ours.


The response to the lack of definition should be investigation into what that definition could look like, not arguing over whether we or something else has it. Without a definition and criteria to test, you're never going to make progress.


Philosophers have been trying for decades to define it rigorously and have failed decisively. It really looks intractable at the moment. Given we are in this quagmire, I think it is ok to explore/discuss a bit further despite the shaky foundations of only having fuzzy definitions of "qualia" or "consciousness" to rely on.


Quite a lot of the philosophical debate has been tied up in the effort to show that minds cannot be the result of purely physical processes or will never be explained as such, which does not tell us anything about what they are.

We are not going to be able to say with any great precision what we are trying to say with the word 'consciousness' until we have more information. In lieu of that, what we can do is say what phenomena seem to be in need of explanations before we can compile a definition.

At this point, opinions that human-level consciousness is either just more of what has been done so far, or cannot possibly be just that, are just opinions.


Which probably means that someone with “chief scientist” title shouldn’t be using it when making public claims. Of course, he can do it for his own profit, but he is ruining the credibility of his research field, that’s why people working in this field object to it.


I am slightly conscious when I am extremely drunk and can barely think or feel, yet still have some modicum of conscious experience. That's what it means.

If you don't agree that consciousness exists on a spectrum, and instead think that something is either conscious or not, then simply replace the words 'slightly conscious' with 'conscious'.


But why would I want to put an extremely drunk computer in charge of making decisions?


I was attempting to give an example of what a 'slightly conscious' state is to show that it isn't completely incoherent. Admittedly it was far from rigorous.


By far the best comment on here.


There’s no evidence that neural nets can form an AGI, so it’s a moot point. AGI is an ill-defined inflection point.


I'd consider brains and other biological neural systems to be neural nets. So to me there's pretty convincing evidence that neural nets can form an AGI.


Well, you shouldn’t. They are not the same. Brains are not (ML) neural networks; neural networks are just a mathematical approximation of one part of how the mind works.


I never said that the tiny approximation of subsets of our brain was enough for AGI. Just because we haven't worked out the exact structure of the neural net in our brain and how to emulate it doesn't mean it isn't a neural net. It's just bigger and more complex than anything we can make or emulate yet.


> It would never occur to me to create a programming library where you have the 'basic' open source version and the 'optimized' commercial version.

It's occurred to plenty of others - look at the GPT-3 libraries or a few of the more advanced Deep Learning ones.


Deep learning libraries are free; GPT-3 is not a library, it's a pretrained model, offered (for a fee) through an API. The company offering it (as much as I dislike them naming it OpenAI while being all but open) spent a considerable amount of money training it, and pays more in hardware costs for each request you send through their API.

Actually, even though training models is very expensive, most models are even available online for free! A large collection is on HuggingFace (including GPT-2, which is essentially a smaller GPT-3), and there are studies proving that the quality is essentially the same. You can literally just download and run them, pretty much like... a free library.
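For what it's worth, "just download and run them" really is a few lines. A minimal sketch, assuming the HuggingFace `transformers` library is installed (the model name and prompt here are just illustrative):

```python
# Download GPT-2 from the HuggingFace hub and generate text locally --
# no API, no fee. The model weights are fetched on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt.
result = generator("The meaning of open source is", max_new_tokens=20)
print(result[0]["generated_text"])
```

The output is a list of dicts, each with a `generated_text` field containing the prompt plus the model's continuation.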


I've got a major a-hole neighbour who's been stealing my food when delivery drivers accidentally call at his home instead of mine. This guy has even claimed to be me to the delivery folks, which I guess is out-and-out fraud.

Anyway, I've been sitting on a plan for a little while to deal out some karma - involving some ML, a camera, an SDR board and a Raspberry Pi.


Absolutely not! I got my current role by going in to an interview and critiquing my previous employer's architecture, data pipeline and (complete) lack of data oversight.


This just raises the question: why?

It's not the 18th century anymore; if you want to get from A to B, there are better ways. Even if you want to go sailing, just go and do it and have it done. This seems like some form of nautical itinerancy.


I am honestly confused by this comment. I am struggling to understand how anyone could miss the point of sailing around the world for no other reason than the experience. "The journey is the destination" is an idiom for a reason.


Some people don't have the travel bug. My dad sees the world as just going to work and making money and that's all he cares about.


In the words of the late Dr. Johnson, "No man will be a sailor who has contrivance enough to get himself into jail; for being in a ship is being in a jail, with the chance of being drowned".


> This seems like some form of nautical itinerancy.

Yes, this is exactly one of the reasons people do this.


Can you really not think up any reason people might want to do this?


> But Starbucks would never adopt such a robot, because they know customers wouldn’t like it.

Incorrect - I would much prefer getting top-quality coffee from a robot than from some barista, all other things being equal.


Starbucks is conspicuous spending, a flex as much as convenient coffee. If you're a utilitarian like this, of course you're not their target demographic.


Like it! Scrapegoating it is from now on in these circumstances.


The "do just the minimum" strategy of corporate inclusion and diversity.


I was working with a NOC technician who was responsible (along with some others) for a pretty large EMEA mobile network with many millions of subscribers. There was an RFP to update their SMS/MMS system, and a certain Israeli company came in to do a site survey or installation in the network data center.

Anyway, the long and the short of it was that one of their technicians was caught with the previous vendor's SMS-C prised open and some USB device inserted into it. Similar response to this: a lot of hollering and hair pulling, but ultimately no contractual or legal repercussions.

I guess it happens higher up the food chain too.


PR makes it possible.

I have personally identified more than a handful of employees who'd use their work computers for... let's say "access to inappropriate content". All of them were invited in by HR & legal and let go with a more than decent deal.

Absolutely everything was done to prevent the company being associated with anything nasty.

