Speaking of burning tokens: they also waste ours with paragraphs of system messages injected on every single file read you do with Claude. Take a look at your .jsonl transcript files and search for <system-reminder>.
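If you want to see it yourself, something like this works. A sketch, assuming your transcripts live under the usual ~/.claude/projects/ directory (adjust the path to your setup); the sample file below is fabricated purely so the commands have something to match:

```shell
# Fabricated one-line transcript, just to illustrate the search.
# Real Claude Code transcripts are JSONL files, commonly under ~/.claude/projects/.
printf '%s\n' '{"role":"user","content":"<system-reminder>boilerplate injected on a file read</system-reminder>"}' > sample-transcript.jsonl

# Count how many messages carry an injected system reminder:
grep -c '<system-reminder>' sample-transcript.jsonl

# Eyeball the actual payloads bloating your context:
grep -o '<system-reminder>[^<]*</system-reminder>' sample-transcript.jsonl
```

On a real install you'd point the same greps at ~/.claude/projects/**/*.jsonl and be surprised how often they hit.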
The solution was simple for me: cancel my Max sub and start using Codex instead. Hopefully others do the same and Anthropic will learn to listen to their users.
Well they've successfully burned a bridge with me. I had 2 max subs, cancelled one of them and have been using Codex religiously for the last couple of weeks. Haven't had a need for Claude Code at all, and every time I open it I get annoyed at how slow it is and the lack of feedback - looking at it spin for 20 minutes on a simple prompt with no feedback is infuriating. Honestly, I don't miss it at all.
You have to go into /models then use the left/right arrow keys to change it. It’s a horrible UI design and I had no idea mine was set to high. You can only tell by the dim text at the bottom and the 3 potentially highlighted bars.
On high, it would think for 30+ minutes and make a plan. Then, when I started the plan, it would either compact and reread all my files, or start fresh and read my files, then compact again after 2-3 changes and reread the files.
High reasoning is unusable with Opus 4.6 in my opinion. They need at least 1M context for this to work.
On principle, I'm never paying them a cent for "fast mode". I've already started using Codex anyway, and will probably just cancel my sub, because I've found I haven't actually needed CC at all since making the switch.
It's all well and good for Anthropic developers, who get 10x the model speed we regular users have, so their TUI streams quickly. But over here, it takes 20 minutes for Claude to do a basic task.
It feels like they're optimizing the UI for demo reels rather than real-world work. A clean screen is cool when everything is flying, but when things start lagging, I need verbose mode to see exactly where we're stuck and whether I should even bother waiting.
Agreed. That’s the one area where I think my experience will still have value (for a while anyway): translating customer requests into workable UI/UX, before handing off to the LLM.
We're on the precipice of something very disgusting. A massive power imbalance where a single company or two swallows the Earth's economy, due to a lack of competition, distribution and right of access laws. The wildest part is that these greedy companies, one of them in particular, are continuously framed in a positive light. This same company that has partnered with Palantir. AI should be a public good, not something gatekept by greedy capitalists with an ego complex.
Another explanation is that it's simply one form of lazy, ineffective obfuscation performed by inexperienced relative Luddites in an attempt to walk the fine line between complying with the Supreme Court directive and not releasing anything useful.
Other investigations into the files have found oddities like redaction of the word "don't", indicating a haphazard find-and-replace approach to redaction, possibly LLM-aided.
The DOJ/Akamai online hosted search feature is also incomplete, potentially because some of these "digitally scanned" files were never run through OCR.
> to pass off fraudulent or AI generated images as real.
Possibly, but I don't find it compelling, if only because a significant portion of the media reportage on the files has made claims that are entirely baseless. If there were a narrative to be sold, one would expect such reportage to be actively leveraging such fraudulent images.