Hacker News | Chance-Device's comments

From the beginning of this I’ve wondered the same thing: how do these companies justify spending such massive amounts now (and three or four years ago) when software and hardware efficiencies will bring the cost down dramatically fairly soon?

They basically decided that scaling at any cost was the way to go. That only works as a strategy if efficiency can’t work, not if you simply haven’t tried it. Otherwise, a few breakthroughs and order-of-magnitude improvements later, people are running equivalent models on their desktops, then their laptops, then their phones.

Arguably, the costs involved mean that our existing hardware and software are simply non-viable for what they were and are trying to do, and a few iterations later the money will simply have been wasted. If you consider funnelling everything to Nvidia shareholders wasting it, which I do.


The decision is the right one. Scaling at any cost is the right way to go.

You cannot find the efficiency if you haven't been experimenting at scale; this is true personally as well.

If someone hasn't been burning a few billion tokens per month, everything coming out of their mouth about AI is largely theory. It could be right or wrong, but they don't have the practice to validate what they're talking about.

Not everyone scaling to that degree will have the right answer or outcome; many will be wrong and go bust. But no one who didn't scale will have the right answer.


Well said. Quantity itself is a quality.

In the worst of the worst cases, they're building know-how in managing big datacenters, infra, and data-labeling teams. These will be incredibly valuable in the next few years. And no one, not even the AI companies' executives themselves, believes that you can delegate business know-how to LLMs.


They're not just betting on the current tech; they're building out infra like this because any future tech currently being researched will probably also require massive data centers.

Like how the GPT LLMs were kind of a side project at OpenAI until someone showed how powerful they could be if you threw a lot more parameters at them.

There could be some other architecture in the works that makes GPTs look old; the first to build and train that new AI will be the winner.


I think their current goal is to capture as much market as they can while they still have the best models, their only moat. Look at Anthropic: they are clearly trying to lock their users into their ecosystem by refusing to follow conventions (AGENTS.md, etc.) and restricting their tools exclusively to their own services.

Because whoever wins the AI race (assuming they don't overshoot and trigger the hard takeoff scenario) becomes a living god. Everybody else becomes their slave, to be killed or exploited as they please. It's a risky gamble, but in the eyes of the participants the upside justifies it. If they don't go all in they're still exposed to all the downside risk but have no chance of winning.

I don't expect hardware prices to go down unless the third option (economic collapse) happens before somebody triggers the dystopia/extinction option.


Just to add some slight nuance, because it's an important distinction:

They aren't all necessarily racing to be "god", some are racing to make sure someone else is not "god".

If it weren't for Altman releasing ChatGPT, it's very likely that we would have markedly less powerful LLMs at our disposal right now. Deepmind and Anthropic were taking incredibly safe and conservative approaches towards transformers, but OAI broke the silent truce and forced a race.


It can be both.

> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition.

HN needs a better AI slop filter.

Or maybe I do. Maybe I can vibe-code a browser extension that preloads TFA links and auto-hides anything that isn’t sufficiently human-authored.


But… why not just write pseudocode, or any language you actually know, and ask the AI to port it to the language you want? That’s a serious question, by the way: is there some use case here I’m not seeing where learning this new syntax and running this actually helps, instead of being extra steps nobody needs?

Indeed, it seems to occupy a middle ground between fast-and-easy AI prompting, and slow-but-robust traditional programming. But it's still relying on AI outputs unconstrained (as far as I can tell) by more formal methods and semantic checks.

But it's also hard for me to grasp the exact value add from the README, or why I should buy their story, so I'm not sure.


Install my executable bro, trust me just one more tool and you will be the 10x engineer!!!

Perhaps the author should have made it clearer why we should care about any of this. OpenAI want you to use their real React app. That’s… ok? I skimmed the article looking for the punchline and there doesn’t seem to be one.

Why does every article need a 'punchline'? It's a technical analysis. Do you expect punchlines when you read recipes or source code?

Where did I say “every article”? This is AI slop that’s set up like it’s some investigative exposé of something scandalous and then shows us nothing interesting. A competent human writer would have reframed the whole thing or just not published it.

Do you think

1. Every person is born with the knowledge of how ChatGPT uses Cloudflare Turnstile?

2. This article contains factual mistakes? If so, what are they?

If neither of these is true, then this article strictly provides information and educational value for some readers. The writing style, AI-like or not, doesn't change that.


Do you think I have some obligation to agree with you or something? You love the article, nice, good for you. I think it’s crap.

Whilst you and a few other commentators call this AI slop and refuse to engage with it, the rest of us have read something interesting and learned something new. Is anything gained if one points out that it's written by AI? I personally know it's written by AI but the value outweighs the stylistic idiosyncrasies.

Consider also that many people aren't the best at writing blog-like posts but still have things to share and AI empowers them to do that. I can't find anything constructive in your post and I don't understand why you are posting at all.


What’s not constructive about it, Bogdan? I’ve said exactly what I think is wrong with the article: the framing is AI pattern-matching to something that it isn’t. It’s a weird kind of incongruent clickbait. It’s not positioning itself as a piece about Cloudflare or Turnstile; it’s implicitly saying “look at this sneaky thing OpenAI are doing that I uncovered!” and it turns out they’re not doing much of anything at all.

This may be unintentional, and the author simply couldn’t tell it sounded this way. The less charitable interpretation is that they did know it sounded this way and thought that a straightforward blog post about Cloudflare bot detection wouldn’t end up on the HN front page.

What’s my constructive criticism to the author? Write your own posts. Use your own voice. Make sure that what you’re creating actually reads like the kind of thing it is. Don’t get the AI to write it for you. It’s annoying.

And I would say that if someone is really so bad at writing blogs that they are unable to do this, which I am not saying this author is, then maybe they shouldn’t be writing them.


The intended value is difficult to discern in AI written pieces.

I agree with both of you: there are some interesting tricks here for how a website builds anti-bot protection, but the AI sloppification frames it as a consumer-protection issue without delivering on that premise.

It is a reasonable criticism that the post does not deliver a "so what?" on its basic framing.


For me the interesting parts of the article are how the author got to the decompiled checks and what the checks are. Anti-bot is an interesting space.

That's because the article is AI slop.

Quality comment; this is the answer. Also insightful how the separation between the internet and the real world has changed with time. This should be obvious, but this is the first time I’ve seen it stated explicitly like this.


TLDR: Meta want to push all the age verification requirements onto the OS makers (Apple, Google; everyone else gets caught in the crossfire) so that they don’t have to do anything, AND they want it done in such a way that they can use it to profile people and push targeted ads at them.

It’s like they want to keep being seen as the bad guys.


I think this is also a way of getting ahead of any “ban social media for teens and preteens” bills that might pop up in the US. They do not want a repeat of Australia! By adding age verification into the operating system, they can deflect responsibility but also respond to legislators with a scalpel rather than getting sledgehammered.


Honestly, this seems very likely, more so than the other suggestions.


I want age verification but not at the OS level.


I want reverse age verification that lists the author's age on every social network post. I think a lot of people who criticize social network toxicity don't realize their interlocutors are half their age. It's not one-to-one, meaning maturity doesn't follow from age, but I think there would be some affordances made in both directions. A younger person would be less surprised that a 60+ year old would hold certain views. And vice versa.


> I want age verification

Please feel free to verify your own age with anyone you like. If you mean "I want other people to", then no.


Yes, let me send a picture of my ID to every app on the internet. That's so much better than having the device I own attest to my age anonymously.


What would a world with your preferred age verification system look like?


If age verification has to exist at all, it could look something like this: https://news.ycombinator.com/item?id=46447282

And responses to some common criticisms of the idea: https://news.ycombinator.com/item?id=46459959

I also forgot to mention in my original post that the token issuer is not a monopoly. Any company that wants to participate can do so, just like there are many brands of tobacco and alcohol. Require websites to accept at least 5 providers to ensure competition.

To be clear though if it's being used as wedge for privacy violation then it should not exist at all. And from reading TFA preventing that may need a similarly coordinated counter-effort.
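As a toy sketch of the multi-provider idea (the construction, issuer names, and HMAC tags here are my own invention for illustration, not the scheme from the linked comments; a real design would use blind signatures so issuers can't link tokens back to buyers): the website learns only which issuer vouched and the single claim "over 18", never an identity.

```python
import hashlib
import hmac
import os

# Hypothetical issuer keys; at least 5 competing providers, as proposed.
ISSUER_KEYS = {f"issuer{i}": os.urandom(32) for i in range(1, 6)}

def issue_token(issuer: str, claim: str = "over18") -> str:
    """Issuer mints an anonymous token: no name, no ID, just a claim + tag."""
    nonce = os.urandom(16).hex()
    msg = f"{issuer}:{claim}:{nonce}".encode()
    tag = hmac.new(ISSUER_KEYS[issuer], msg, hashlib.sha256).hexdigest()
    return f"{issuer}:{claim}:{nonce}:{tag}"

def verify_token(token: str) -> bool:
    """Website checks the tag against the issuer's key; learns only the claim."""
    issuer, claim, nonce, tag = token.split(":")
    if issuer not in ISSUER_KEYS or claim != "over18":
        return False
    msg = f"{issuer}:{claim}:{nonce}".encode()
    expected = hmac.new(ISSUER_KEYS[issuer], msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

t = issue_token("issuer3")
assert verify_token(t)                                  # valid anonymous claim
assert not verify_token(t.replace("over18", "over21"))  # tampering rejected
```

The point of the sketch is only that verification needs no identity lookup: the token carries the claim, and any accepted issuer's key can check it.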


That seems much more intrusive and bad for privacy than having the parent click a button that says the account is for a child under 18.


We already have that.

On a spectrum of options, no verification is the least privacy-intrusive. Baking it in at the OS level or forcing passport uploads is the most intrusive. My proposal is in the middle.

A determined actor could maybe follow you to the store when you purchase your verification code, take a quick picture with a powerful camera (or bribe the store to do it sneakily), and unmask you online. But there's no way to do that at scale. And if you buy the code from a reseller (ask a panhandler to buy one for you, perhaps), it's even more robust.


We don't have that. If we did, California wouldn't have to mandate it.


> I want age verification

Why?


Because it's absurd to allow children to simply click "I am 18." Nowhere else works like this.


> Nowhere else works like this.

Are you serious? Because this comment doesn't make it sound like you're serious.

EULAs and the like allow adults to simply click "I accept". That's apparently the way contracts work these days. Speaking of contracts: children aren't allowed to sign contracts. So those apps that children are using with EULAs? It's absurd to allow adults to simply click "I accept". We need to have "acceptance verification" laws to prevent this kind of abuse.

It's also absurd to allow children to simply enter a church. Churches teach dangerous thoughts. Have you read their books?! Those books have sex, murder, theft! Think of the children! There's many kinds of religions and we need to track the religion bracket of our children. It's absurd to allow a child to simply click "I am Christian." Nowhere else works like this. We need to have "religious verification" laws to prevent this kind of abuse.

What you want isn't conducive to a "high trust" society [0].

[0]: https://en.wikipedia.org/wiki/High-trust_and_low-trust_socie...


R-rated movies, explicit graphic novels, health/anatomy books, romance novels: all examples of material considered harmful to minors today, yet simply accessible to minors. In the recent past you could add contraception and talking about STDs.

The absurdity here comes from the fact that this only becomes illegal when one convinces a group of wetware of the dangers of porn addiction and LGBT; even more absurd, this can only be done through misinformation, since neither LGBT grooming rings nor porn addiction are real.

I see the absurdity in pushing for laws in the hope of preventing a disease that only exists in your mind. Can you? I believe you can, if you step out of idpol and look at the cold data/dollars.


FHE is the future of AI. I predict local models with encrypted weights will become the norm: both privacy-preserving (insofar as anything on our devices can be) and locked down to prevent misuse. It may not be pretty, but I think this is where we will end up.
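FHE itself is far heavier than this, but the core idea, computing on ciphertexts without ever decrypting them, can be sketched with its much weaker cousin Paillier, which is additively homomorphic only. This is a toy illustration with insecure key sizes, not a usable FHE implementation:

```python
import math
import random

# Toy Paillier cryptosystem: multiplying two ciphertexts yields a
# ciphertext of the SUM of the plaintexts. NOT secure at these key sizes.

def is_prime(n):
    # Deterministic Miller-Rabin, valid for n < 3.3e24.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p):
            return p

def keygen(bits=64):
    p = random_prime(bits)
    q = random_prime(bits)
    while q == p:
        q = random_prime(bits)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    return n, (lam, pow(lam, -1, n))

def encrypt(n, m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(n, priv, c):
    lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n

n, priv = keygen()
c1, c2 = encrypt(n, 12), encrypt(n, 30)
# The party holding only c1 and c2 can add the plaintexts blindly:
assert decrypt(n, priv, c1 * c2 % (n * n)) == 42
```

Fully homomorphic schemes extend this to arbitrary circuits (so in principle a whole forward pass over encrypted weights), which is exactly where the orders-of-magnitude overhead discussed downthread comes in.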


If you're interested in "private AI", see Confer [0] by Moxie Marlinspike, the founder of Signal private messaging app. They go into more detail in their blog. [1]

[0] https://confer.to/

[1] https://confer.to/blog/2025/12/confessions-to-a-data-lake/


I don't get how this can work, and Moxie (or rather his LLM) never bothers to explain. How can an LLM possibly exchange encrypted text with the user without decrypting it?

The correct solution isn't yet another cloud service, but rather local models.


The model is running in a secure enclave that spans the GPU using NVIDIA Confidential Computing: https://www.nvidia.com/en-us/data-center/solutions/confident.... The connection is encrypted with a key that is only accessible inside the enclave.

Within the enclave itself, DRAM and PCIe connections between the CPU and GPU are encrypted, but the CPU registers and the GPU onboard memory are plaintext. So the computation is happening on plaintext data, it’s just extremely difficult to access it from even the machine running the enclave.


How is it then much different from trusting the policies of Anthropic et al.? To be fair, you need some enterprise deal to get a truly zero-retention policy.


Enclaves have a property that allows the hardware to compute a measurement (a cryptographic hash) of everything running inside them: the firmware, system software such as the operating system and drivers, the application code, and the security configuration. This is signed by the hardware manufacturer (Intel/AMD + NVIDIA).

Then, verification involves a three-part approach. Disclaimer: I'm the cofounder of Tinfoil: https://tinfoil.sh/; we also run inference inside secure enclaves. So I'll explain this as we do it.

First, you open source the code that's running in the enclave, and pin a commitment to it to a transparency log (in our case, Sigstore).

Then, when a client connects to the server (that's running in the enclave), the enclave computes the measurement of its current state and returns that to the client. This process is called remote attestation.

The client then fetches the pinned measurement from Sigstore and compares it against the measurement returned by the enclave. This guarantees that the code running in the enclave is the same as the code that was committed to publicly.

So if someone claimed they were only analyzing aggregated metrics, they could not suddenly start analyzing individual request metrics, because the code would change -> the hash would change -> verification would fail.
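The final comparison step can be sketched in a few lines. This is a hedged illustration of the idea, not Tinfoil's actual client code; `verify_enclave` and the stand-in measurement are hypothetical:

```python
import hashlib
import hmac

def verify_enclave(attested_measurement: bytes, pinned_measurement_hex: str) -> bool:
    """Compare the measurement returned by remote attestation against the
    measurement pinned in a transparency log (e.g. Sigstore). Both are
    cryptographic hashes of the enclave's full software state."""
    pinned = bytes.fromhex(pinned_measurement_hex)
    # Constant-time comparison, as for any security-relevant digest.
    return hmac.compare_digest(attested_measurement, pinned)

# Toy stand-ins: in reality the measurement is computed by the hardware
# and signed by the manufacturer; here we just hash a fake code image.
code_image = b"enclave-os+app-v1.0"
measurement = hashlib.sha256(code_image).digest()
pinned_hex = hashlib.sha256(code_image).hexdigest()

assert verify_enclave(measurement, pinned_hex)   # code matches -> accept
tampered = hashlib.sha256(b"enclave-os+app-v1.0-evil").digest()
assert not verify_enclave(tampered, pinned_hex)  # any change -> reject
```

The security then rests on the hardware signature over the measurement and on the log entry being public, not on this comparison itself.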


Thanks for explaining :)

> First, you open source the code that's running in the enclave, and pin a commitment to it to a transparency log (in our case, Sigstore).

This means you have reproducible builds as well? (source + build artifacts are signed)

Also, even if there is still some risk that the link is not 100% safe, maybe it's safe to assume vendors like yourself going through all that trouble are honorable? (Alternatively, they are very curious about what "paranoid" people would send through LLMs :sweat_smile:)


We don't have reproducible builds because we attest the full OS image that we run, which is the Ubuntu image. Unfortunately, bit-by-bit reproducible binaries for OS images are kind of an unsolved problem, because it requires hundreds of package maintainers across all dependencies to eliminate any source of non-determinism in compilation. Things like timestamps and file reordering are very common, and even one of these changes the entire hash.

So we do the next best thing: we trust GitHub and rely on GitHub Actions to faithfully execute the build pipeline. We also make sure to pin all images and dependencies.


They explain it in Private inference [0] if you want to read about it.

[0] https://confer.to/blog/2026/01/private-inference/


If encrypted outputs can be viewed or used, they can be reverse-engineered through that same interface. FHE shifts the attack surface; it does not eliminate it.


If you know how to reverse engineer weights or even hidden states through simple text output without logprobs I’d be interested in hearing about it. I imagine a lot of other people would be too.


I mean, no, they cannot be viewed at any point once encrypted unless you have the key. That's the point. Even the intermediate steps are random gibberish unless you have the key.


FHE is impractical by any measure. Either it's trivially broken and insecure, or the space requirements go beyond anything usable.

There is basically no business demand, aside from sellers and scholars.


In science fiction, maybe. We're hitting real limits on compute while AI is still far from a level where it would be harmful, and FHE is orders of magnitude less efficient than direct calculation.


Anthropic should never have gotten into bed with the military or intelligence services to begin with. They wanted to make a deal with the devil and dictate the terms, that is the problem. If they had stayed out this wouldn’t be happening. Yes, someone else will probably step in and do all the evil you have just refused to do, but that isn’t a reason to instead decide to do it personally.

Note that I give them a lot of credit for trying to stop and to have their own red lines about the use of their technology, and to stick to those red lines to the end.


According to legend, the devil adheres precisely to the terms of the contracts he signs; it's usually the foolhardy peasant who didn't notice the fine print.


Very useful if you run into him in Georgia or if you want to get his tooth to make a guitar pick.


I think i'm gonna need an explanation for that sentence


“The Devil Went Down to Georgia” by Charlie Daniels


Other reference was Tenacious D in The Pick of Destiny.


The military is perhaps the biggest possible customer around. They do plenty of things that aren't blowing people up. It's not bad to help with non combat tasks.


Yeah, but aren’t all of those things in service of “blowing people up”?


National defense is important; just ask Europe after the Ukraine war.

People taking a good idea and extending it to do bad does harm twice: in the bad act itself, and in making a good thing seem bad.

I am strongly against the US starting wars and, as you say, blowing people up.

I am also strongly against the US being defenseless in a national emergency.


Blowing people up is sometimes morally correct. Defending yourself against attack is very nearly always so.


I was sympathetic to this line of reasoning, but I feel it's repeatedly shown to be self-defeating.

What chance have the proverbial good guys got if, even after _proving_ some modicum of good will, people will nonetheless condescend to any attempt to influence bad/wildcard actors? It feels great to tell someone they 'should've known better', but I'm convinced that that's basically void of cautionary utility.


I've commented this in a different thread, but I'm pretty sure something very similar would've happened if they had refused to "get into bed with the military or intelligence services to begin with".

It's damned if you do and damned if you don't: a lose-lose scenario either way.


Ironically, this would actually be a good thing. As we can see from Iran, Claude doesn’t quite have these bugs ironed out yet…


This is the exact attitude that led to a chatbot being used to identify a school for girls as a valid target.

The chatbot cannot be held responsible.

Whoever is using chatbots for selecting targets is incompetent and should likely face war crime charges.


"that lead to a chat bot being used to identify a school for girls as a valid target"

Has it been stated authoritatively somewhere that this was an AI-driven mistake?

There are myriad ways that mistake could have been made that don't require AI. These kinds of mistakes were certainly made by all kinds of combatants in the pre-AI era.


Do you think anyone is ever going to say this under any circumstances? That Anthropic were right and they were proved right the very next day?

Yeah yeah, they probably had a human in the loop, that’s not really the point though.


Targeting and accuracy mistakes happen plenty in wars that aren't assisted by AI. I don't think it's fair to assume that AI had a hand in the bombing of the school without evidence.


What attitude exactly are you talking about? The one that says that if you’re going to morally sell out it would be better if you at least tried not to kill children?

