Hacker News | new | past | comments | ask | show | jobs | submit | Someone1234's comments | login

People have tried to run Qwen3-235B-A22B-Thinking-2507 on 4x $600 used Nvidia 3090s with 24 GB of VRAM each (96 GB total), and while it runs, it is too slow for production-grade use (<8 tokens/second). So we're already at $2,400 before you've purchased system memory and CPU, and it is still too slow for a "Sonnet equivalent" setup...

You can quantize it, of course, but if the idea is "as close to Sonnet as possible," then note that while quantized models are objectively more efficient, they sacrifice precision to get there.

So the next step is to up that speed: 4x $1,300 Nvidia 5090s with 32 GB of VRAM each (128 GB total), or $5,200 before RAM/CPU/etc. All of this additional cost just to increase your tokens/second without lobotomizing the model, and it still may not be enough.
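As a rough sanity check on those numbers, here is a back-of-the-envelope estimate of weight memory at different quantization levels (a sketch only; real runtimes also need KV cache, activations, and framework overhead on top of the weights):

```python
def weight_gb(params: float, bits_per_weight: int) -> float:
    """Approximate weight memory in decimal GB: params * bits / 8 bytes."""
    return params * bits_per_weight / 8 / 1e9

PARAMS = 235e9  # Qwen3-235B-A22B total parameter count

for bits, label in [(16, "FP16/BF16"), (8, "Q8"), (4, "Q4")]:
    print(f"{label}: ~{weight_gb(PARAMS, bits):.0f} GB of weights")

# 4x 3090 = 96 GB, 4x 5090 = 128 GB. Even Q4 (~118 GB of weights) does
# not fit entirely in 96 GB of VRAM, which is why CPU offload is needed
# and throughput suffers.
```

Even before overhead, the arithmetic shows why the thread lands where it does: you either quantize hard, offload to system RAM, or buy more VRAM.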

I guess my point is: You see this conversation a LOT online. "Qwen3 can be near Sonnet!" but then when asked how, instead of giving you an answer for the true "near Sonnet" model per benchmarks, they suddenly start talking about a substantially inferior Qwen3 model that is cheap to run at home (e.g. 27B/30B quantized down to Q4/Q5).

The local models absolutely DO exist that are "near Sonnet." The hardware to actually run them is the bottleneck, and it is a HUGE financial/practical bottleneck. A $10K all-in budget isn't actually insane for this class of model, and the sky really is the limit (again, to reduce quantization and/or increase tokens/second).

PS - And electricity costs are non-trivial for 4x 3090s or 4x 5090s.


I may have genuinely new data for you.

Qwen3.5-35B-A3B is reported to perform slightly better than the model you mentioned.

It runs fine, though non-optimally, on a single 3090 even with 131,072 tokens of context, and due to the hybrid attention architecture, memory usage and compute scale much less drastically than ctx^2. I've had friends with smaller cards still getting work out of it. Generation is around 20 tokens/sec on that 3090 (without doing anything special yet). You'll need enough DRAM to hold the parts of the model that don't fit in VRAM. Nothing to write home about, but genuinely usable in a pinch or for tasks that don't need immediate interactivity.
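The context-memory claim is easy to ballpark. For standard full attention, the KV cache grows linearly with context per layer; hybrid/sliding-window layers cap their effective context at the window size. A sketch with purely illustrative dimensions (not the actual Qwen3.5-35B-A3B config):

```python
def kv_cache_gb(ctx: int, layers: int, kv_heads: int, head_dim: int,
                bytes_per_elem: int = 2) -> float:
    """KV cache size: 2 (K and V) * layers * kv_heads * head_dim elements
    per token, linear in context length, at bytes_per_elem precision."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Illustrative dimensions only -- not the real model's config.
full = kv_cache_gb(131_072, layers=48, kv_heads=4, head_dim=128)
# A sliding-window layer caps its effective ctx at the window size, which
# is why a hybrid stack grows far more gently than all-full-attention.
windowed = kv_cache_gb(4_096, layers=48, kv_heads=4, head_dim=128)
print(f"full attention: ~{full:.1f} GB, 4K-window layers: ~{windowed:.2f} GB")
```

The gap between those two numbers is the headroom that lets a 24 GB card hold a long context at all.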

It's the first local model that passes my personal kimbench usability benchmark, at least. Just be aware that it is extremely verbose in thinking mode. Seems to be a Qwen thing.

(edit: On rechecking my numbers; I now realize I can possibly optimize this a lot better)


With respect, this isn't "new data," it is an anecdote. And it kind of represents exactly the problem I was talking about above:

- Qwen is near Sonnet 4.5!

- How do I run that?

- [Starts talking about something inferior that isn't near Sonnet 4.5].

It is this strange bait-and-switch discussion that happens over and over. Not least because Sonnet has a 200K context window, and most of these anecdotes aren't for anywhere near that context size.


You're not wrong; but... imho it's closer to Sonnet 4.0 [1] on my personal benchmark [2]. And I HAVE run it at just over 200K-token context; it works, it's just a bit slow at that size. It's not great, but... usable to me? I used Sonnet 4.0 over the API for half a year or so before, after all.

The only way to know whether your own criteria are now met (or not yet) is to test it for yourself with your own benchmark or what have you.

And it does show a promising direction going forward: usable (to some) local models becoming efficient enough to run on consumer hardware.

[1] released mid-2025

[2] take with salt - only tests personal usability

+ Note that some benchmarks do show Qwen3.5-35B-A3B matching Sonnet 4.5 (released later last year); but I treat those with the same skepticism you do, clearly ;)


> The hardware to actually run them is the bottleneck, and it is a HUGE financial/practical bottleneck.

That's unsurprising, seeing as inference for agentic coding is extremely context- and token-intensive compared to general chat. Especially if you want it to be fast enough for a real-time response, as opposed to just running coding tasks overnight in a batch and checking the results as they arrive. Maybe we should go back to viewing "coding" as a batch task, where you submit a "job" to be queued for the big iron and wait for the results.
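The "coding as a batch job" idea can be sketched with a plain work queue; `run_coding_agent` here is a hypothetical stand-in for whatever agent or CLI you would actually invoke overnight:

```python
import queue

def run_coding_agent(task: str) -> str:
    """Hypothetical worker: replace with a real agent/CLI invocation."""
    return f"result for: {task}"

jobs = queue.Queue()
for task in ["fix flaky test in ci.yml", "add retries to http client"]:
    jobs.put(task)

# Drain the queue overnight; nobody is waiting on interactive latency,
# so a slow local model at a few tokens/second is acceptable here.
results = []
while not jobs.empty():
    results.append(run_coding_agent(jobs.get()))
print(results)
```

The design point is latency tolerance: a model too slow for real-time pairing can still clear a queue of review/refactor tasks by morning.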


They did not.

You can still use OpenClaw on their API pricing tier as much as you want. What they did was disallow subscriptions being used to power automated third-party workloads, including OpenClaw.

Now, is their messaging around this confusing? Absolutely. The whole thing has been handled shambolically. Everyone knows that they lack the compute to keep up, and likely have lower margins on subscriptions than API; but they cannot just say that because investors may be skittish.


Because you'll slowly start rebuilding the individual pieces of a database over the file system until you've just recreated a database. Databases didn't spawn out of nothing; people were writing to raw files on disk and kept solving the same issues over and over (data definitions, indexes, relations, cache/memory management, locks, et al.).

So your question is: why does the industry focus on reusable solutions to hard problems over piecemeal recreating them in every project? And when phrased that way, the answer is self-evident: productivity/cost/ease.
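To make that concrete, here is the kind of ad-hoc file store such projects usually start with (a toy sketch): within a few iterations you find yourself hand-rolling atomic writes, then locking, then indexes, which is exactly what a database already solves.

```python
import json, os, tempfile

class FileStore:
    """Toy key-value store over the filesystem: a database in embryo."""

    def __init__(self, path: str):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def put(self, key, value):
        self.data[key] = value
        # Atomic rewrite to dodge partial-write corruption -- the first of
        # many database features you end up reinventing (next come locking,
        # concurrent readers, indexes, schema validation...).
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.path)

    def get(self, key, default=None):
        return self.data.get(key, default)

store = FileStore(os.path.join(tempfile.mkdtemp(), "store.json"))
store.put("user:1", {"name": "ada"})
print(store.get("user:1"))
```

Every method here is a primitive version of something SQLite gives you for free, which is the whole argument.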


Could you go into more detail about why their "harness sucks"? This feels like a shared conclusion, but I've used several and theirs is better than many.

I generally agree that the harness isn't good, but it works and gets the job done and that seems to be the singular goal of the top 4 or 5 companies building them.

We saw what Claude Code looks like inside, and it's objectively bad-to-mediocre work, but the takeaway seemed to be 'yeah but it works and they've got crazy revenue'.

That's where we're at. The harness is kind of buggy. The LLM still wanders and cycles in it sometimes. It's a monolithic LLM herding machine. The underlying model is awesome and the harness works well enough to make it super effective.

We can do so much better but we could also do worse. It's a turbulent time. I'm not super pleased with it all the time, but it's hard to criticize in many ways. They're doing a good job under the circumstances.

I see it kind of like they're at war. If they slow down to perfect anything, they will begin to lose battles, and they will lose ground. It's a highly contentious space. The harness isn't as good as it could be under better circumstances, but it's arguably a necessary trade off Anthropic needs to make.


> We saw what Claude Code looks like inside, and it's objectively bad-to-mediocre work

Based on this, are there any open source harnesses that have objectively good-to-excellent work in their code?


I'd been using OpenCode until yesterday (with a plugin to let me use their model, until they implemented what seems to be very sophisticated detection to reject you).

It just has a sane workflow, it's easy to use, and it doesn't bother you with 1000 questions about whether to allow this or that to run. Since yesterday, now that I have to use Claude Code, it generally feels like the model is dumber and makes more mistakes.


pi.dev

very minimal, extensible.


Agreed, this is the best I've seen so far.

> We saw what Claude Code looks like inside, and it's objectively bad-to-mediocre

Do you have an example to contrast with, showing by what measure it is good, besides your word?


Because a bad guy can also generate their own signing key and deploy it alongside the installer.

See Notepad++ for how that winds up.


Then you can publish the public code-signing certificate for download/import, or publish it through WinGet.

Using Azure Trusted Signing or any other certificate vendor does not guarantee that a binary is 100% trustworthy, it just means someone put their name on it.


Both are great; where they differ is that Claude Code has better instincts than Codex, meaning it will naturally produce things like you, the developer, would have.

Codex really shines at what I call "hard problems." You set thinking to high and just let it throw raw power at the problem. Whereas Claude Code is better at your average day-to-day "write me code" tasks.

So the difference is kind of nuanced. You need to use both for a while to get a real sense of it.


I think the way I and others use it is: code with Claude, review or bug hunt with Codex. Then I pass the review back to Claude for implementation. Works well. Better than Codex implementation, and it finds gaps versus using Claude to review itself, in my opinion.

Codex just changed the way they calculate usage with a massive negative impact.

Before, a subscription was the cheapest way to get Codex usage, but now they've essentially made API and subscription pricing match (e.g. a $200 sub = $200 in API Codex usage).

The only value of a subscription now is that you get the web version of ChatGPT "free." In terms of raw Codex usage, you could just as easily buy API usage.

edit: This is currently rolled out for Enterprise, but is coming to Pro/Plus soon. The people below saying "I haven't had this issue" just haven't yet.


> e.g. $200 sub = $200 in API Codex usage [...] In terms of raw Codex usage, you could just as easily buy API usage.

I don't think it works out like that. I'm on the ChatGPT Pro plan for personal usage, and for a client I'm using the OpenAI API, both almost exclusively on GPT 5.4 xhigh, with work split pretty much 50/50 between client and personal projects. The client's API usage is up to 400 USD after a week of work, while the ChatGPT Pro limit has 61% left and resets tomorrow.

Still seems to me you'd get a heck of a lot more out of the subscription than out of API credits.


This. ChatGPT Pro personal at $200/month using GPT 5.4 xhigh is the best deal currently. I don't know if they are actually losing money or betting on people staying well under the limits. Clearly they charge businesses extra on the API plans to make up for it.

In the future, open models and cheaper inference could cover the loss-leading strategies we see today.


The ChatGPT personal Pro plan hasn't had the change yet. It is rolling out to Enterprise users first.

Right, because you're on the old and not new structure.

They just rolled it out for new subscribers and existing ones will be getting it in the "coming weeks." Enterprise already got hit with this from my understanding.


This is not true. The change applies to the credits, ie the incremental usage that exceeds your subscription limits.

OpenAI's own help page suggests otherwise.

Mostly, yes. But since it tracks upstream Chromium, it is more likely to remain evergreen than MSHTML ever was.

Do you have uBlock Origin by any chance?

They cannot.

Unfortunately many believe they can, and it is impossible to disprove. So now real people need to write avoiding certain styles, because a lot of other people have decided those are "LLM clues": bullets, em dashes, certain common English phrases or words (e.g. delve, vibrant, additionally, etc.)[0].

Basically you need to sprinkle in subtle mistakes, or lower the quality of your written communication, to avoid accusations that will sidetrack whatever you're writing into a "you're a witch" argument. Ironically, LLM accusations are now a sign of high-quality writing.

[0] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing


Someone with native fluency in American English can (and should) be able to tell the difference between human writing and unpolished AI copy-paste.

Essentially 0 people use emoji to create a bulleted list. Nobody unintentionally cites fake legal precedents or non-existent events, articles, or papers. Even the "it's not X, it's Y" structure, in the presence of other suspicious style/tone cues, signals LLM text.


Also, one big tell that is hard to hide is verbose lists full of fluff but little actual informative content.

Ask an LLM to read your project specs and add a section headed "Performance Optimizations" to see an example of this.

Another is a certain punchy and sensationalist style that does not change throughout a longer piece of writing.


One of my subtle favorites is the “H2 Heading with: Colorful Description”

Eg - The Strait of Hormuz: Chokepoint or Opportunity?


I’ve used titles like that for thirty years.

I'm going to ask the question I ask everyone who claims they have written like that for years: can you show us a link from prior to 2022 where you wrote like that?

No, of course not. It’s all corporate internal documentation.

I suppose my high school essays were not. Apologies, but those are lost.


Nobody owes you evidence for your witch hunts.

Sure, but look, we have seen these claims so many times that if they were true, by now someone would have linked at least one archived blog post showing that this is, indeed, how humans used to write.

The lack of a single example is very telling.


Sure, and an LLM-written article will use that pattern eight times in two pages.

Exactly, it's the monotony of the style that gives it away.

>Even the “it’s not X, it’s Y” structure

I wonder where some of this comes from. Another one is "real unlock"; it's not a common phrasing that I really recall.

https://trends.google.com/explore?q=real%2520unlock&date=all...


Emojis for lists: completely agree with you, but presumably this was learned in training?

I think that’s an RLHF issue: if you ask people “which looks better”, they too frequently pick the emoji list. Same with the overuse of bolding. I think it’s also why the more consumer-facing models are so fawning: people like to be praised.

So are you saying that anyone with native fluency in English who is not from the US can't tell the difference between human writing and unpolished AI copy-paste? I don't agree. Given that US-based LLMs tend to default their output to American English, it's arguably much easier for "the rest of us" to spot the "US" language patterns...

> 0 people use emoji to create a bulleted list.

I haven't seen this yet, but I guess the only reason I haven't done it is because it never crossed my mind.

What I have found to be an easy detection signal is non-breaking spaces. They tend to get littered through passages of text without reason.


I think the trope in this comment[0] from another thread is the most obvious tell, perhaps even more than "not x, but y".

> It’s the fake drama. Punchy sentences. Contrast. And then? A banal payoff.

It's great because it's a double-decker of annoying marketing copy style and nonsensical content.

[0]: https://news.ycombinator.com/item?id=47615075


I do use bullets and emojis

> Unfortunately many believe they can, and it is impossible to disprove. So now real people need to write avoiding certain styles, because a lot of other people have decided those are "LLM clues." Bullets, EM Dash, certain common English phases or words (e.g. Delve, Vibrant, Additionally, etc)[0].

I think people will be able to detect the lowest-user-effort version of LLM text pretty reliably after a while (ie what you describe; many people have a good sense of LLM clues). But there's probably a *ton* of LLM text out there where some of the instructions given were "throw a few errors in", "don't use bullet points or em dashes", "don't do the `it's not this, it's that` thing" going undetected.

And then those changes will get built into ChatGPT's main instructions, and in a few months people will start to pick up on other indicators, and then slightly smarter/more motivated users will give new instructions to hide their LLM usage... (or everyone stops caring, which is an outcome I find hard to wrap my head around)


This is the correct answer. We're at a point where it will soon be safer to assume a human, or someone with agency and approval over it, produced the text than to dismiss it outright as "written by an LLM."

So judge the content on its merit irrespective of its source.


The key insight is to avoid – em dashes. You’re absolutely right. It’s not the content, it’s the style.

Ironically one of the big tells for me is the "It's not this. It's that." Your comment uses a comma though so you're probably a real person :)

I assume they were aping those terms ironically (especially given the 'you're absolutely right')

Busted!!!!

Staccato (too many short sentences with periods) is also a telltale for me. Most humans prefer longer sentences with more varied punctuation; I, for example, am a sucker for run-on sentences.
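That staccato signal is trivial to operationalize as a naive heuristic (a sketch only, with an arbitrary threshold; short sentences alone prove nothing, as this very thread argues):

```python
import re, statistics

def sentence_lengths(text: str) -> list[int]:
    """Word counts per sentence, splitting on ., ! and ? runs."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def looks_staccato(text: str, max_mean: float = 8.0) -> bool:
    """Flag text whose sentences are uniformly short (naive signal)."""
    lengths = sentence_lengths(text)
    return len(lengths) >= 3 and statistics.mean(lengths) <= max_mean

print(looks_staccato("It's fast. It's clean. It just works."))  # True
print(looks_staccato("I, for example, am a sucker for run-on sentences "
                     "that wander through several clauses before ending"))
```

A real detector would also weigh length variance, not just the mean, since monotony is the actual tell.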


That's an en-dash.

You're absolutely right! I unintentionally used an en-dash instead of an em-dash. Here is the em-dash you requested: –

Sorry! Is this ok? —

You're absolutely right. That is an em dash

You're absolutely right. They are absolutely right

Indeed, isomorphic plagiarism by its nature forms strong vector-search paths, built from scraping global websites, real people's work, and LLM user-base input/markdown.

However, a reasoning model adding a random typo to seem less automated still does not hide the fairly repeatable quantization artifacts from the training process. For LLMs, it is rather trivial to find where people originally scraped the data from if the annotated training metadata survives.

Finally, spotting LLM output is usually easy once one abandons the trap of thinking "I think the author meant [this/that]" and recognizes that a work's tone reads like a fake author had a stroke [0]. =3

[0] https://en.wikipedia.org/wiki/Stroke


And I'm sure we've all seen what happens if you run the Declaration of Independence or the Gettysburg Address or the book of Genesis through an AI "detector". They usually come back as AI.

Only for poor-quality systems. Unfortunately there are many systems that tried to cash in on easy hype but are the equivalent of an ML 101 classifier class project.

If one measures perplexity (how likely text is under a given language model), common text from the training set will score as very likely. But you can easily build better models.


> Ironically LLM accusations are now a sign of the high quality written word.

Citation needed. The LLM accusations come from the specific cadence they use. You can remove all em-dashes from a piece of text and it still becomes clear when something is LLM written.

Can they be prompted to be less obvious? Sure, but hardly anyone does that.

It's more "The Core Insight", "The Key Takeaway", etc. than it is about emdashes.

Incidentally, the only people annoyed about "witch-hunts" tend to be those who are unable to recognise cadence in the written word.


i think another part of the problem is that some people are using AI so much that they are starting to mimic its cadence in their own writing. they may have had a prior coincidental predisposition for writing somewhat similar to AI with worse grammar, and now are inching towards alignment as they either intentionally or accidentally use AI output as a model to improve their writing
