The most “moral” of all moral companies, Anthropic, may not be able to stop building capabilities that look very much like weapons. This time it is cyber weapons.
In the latest AI Realist article, I go through the Mythos system card. At 245 pages, it inevitably says more than intended, and when you read between the lines as an AI expert, you can start to infer what might actually be happening.
In this article, I discuss what environment Mythos was possibly trained in, and why I believe its “too dangerous to release” capabilities are not unexpected emergent properties but appear to have been deliberately shaped by the training environment.
I argue that Mythos could be seen as a cyber weapon, one that the vast majority of cybersecurity departments are not prepared for. Many are still busy cataloguing AI agents and writing agentic governance guidelines instead of building real defenses.
If this interpretation is even partially correct, then restricting access to the model may not improve safety. If I can form a view on how such a training environment might be set up from reading their own document, others are likely already building Mythos-class systems.
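To make that concrete, here is a toy, purely hypothetical sketch of such a training loop. Every name in it (the environment, the tasks, the reward) is my assumption for illustration; nothing here is confirmed by the system card.

```python
# Purely hypothetical sketch: how an RL environment could *shape* offensive
# capabilities rather than have them "emerge". All names are illustrative.
import random

class CyberRangeEnv:
    """Toy stand-in for a sandboxed network the model is trained against."""
    TASKS = ["port_scan", "exploit_cve", "escalate_privileges", "exfiltrate"]

    def reset(self) -> str:
        self.task = random.choice(self.TASKS)
        return f"objective: {self.task}"

    def step(self, action: str) -> float:
        # A real setup would use verifiable signals (a flag read, a shell
        # obtained). Here, a toy check of whether the action fits the task.
        return 1.0 if self.task in action else 0.0

class ToyPolicy:
    """Stand-in for the model; a real policy would be the LLM itself."""
    def act(self, objective: str) -> str:
        return objective.split(": ", 1)[1]   # toy: echo the objective back

    def update(self, objective: str, action: str, reward: float) -> None:
        pass  # a real trainer would take a policy-gradient step here

env, policy = CyberRangeEnv(), ToyPolicy()
for _ in range(1000):
    objective = env.reset()
    action = policy.act(objective)            # model proposes an attack step
    reward = env.step(action)                 # environment verifies success
    policy.update(objective, action, reward)  # reinforce what worked
```

The point of the sketch: whatever the loop rewards, episode after episode, is exactly what the model gets good at. Capabilities selected for a thousand times over are not surprises.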
Disclaimer: This analysis is based on publicly available materials and reflects my interpretation, not confirmed internal details.
Daniela was just about to begin her studies in finance.
Before the semester started, she went to a music festival with her boyfriend, Daniel.
Daniela and Daniel were among 387 people killed by Hamas in October 2023.
Today, her mother finds comfort in AI-generated videos of her daughter.
She has made it her mission to tell the world who Daniela was.
On social media, she shares AI-generated videos and photos of her daughter, which receive hundreds of likes.
We spoke with Olga, who shared Daniela’s story with us.
We also discussed with clinical psychologist Samuel Silva how AI-generated digital twins of deceased people may be used in grief therapy, and what risks they carry.
Europe has started preparing for a US tech shutdown. What seemed impossible yesterday is becoming increasingly likely. European dependency on US tech is leverage, and Trump knows how to use it. But what happens if he decides to cut it off?

I wrote a fictional 2027 scenario about what might happen and how it would affect EU industry. In short, as the Wall Street Journal called it, this is a nightmare scenario. The US has an opportunity to cause a massive crisis in Europe.

Yet there is one point the US may not be fully realizing. When it comes to survival, you partner with whoever is necessary, even China. This kind of tech blackmail would damage both the US and the EU, but it would be a fantastic opportunity for China to cement its dominance in AI.

Europe needs to stop thinking about what else it could prohibit and start thinking about what could be prohibited to it. It is time to start innovating instead of regulating.
The latest OpenAI image model, when asked to generate a list of animals, creates cursed Pokémon.
The reason for this is the new state-of-the-art architecture of their image generation model.
The AI Realist article "ChatGPT creates monsters" has a very detailed breakdown of why this happens. It is written for non-technical readers, but it explains the details of the architecture. You can claim a free article.
Gartner just published a guide to burning AI budget.
It is called “Top 10 Strategic Technology Trends for 2026.”
In the coming weeks it will land on every CIO’s and CTO’s desk, get copied by every consulting PPTX factory, and then get digested by LLMs, so yes, inevitably also by Deloitte (if you know what I mean ;-))
And then it becomes the AI strategy for the next three years.
I wrote an AI Realist article where I went through the trends, checked the sources, and added research and arguments for why following this list as a roadmap is one of the fastest ways to spend your budget, get no measurable ROI, and in some cases even incur net losses.
China’s open weight models are the ones that keep research alive and enable startups, and even bigger companies, to build AI solutions while keeping control of the model.
The Western companies chose the path of hype and profit.
They overpromise, commit to deals they cannot pay for, and build datacenters they cannot power.
China, possibly strategically, burst this AI bubble by publishing their models as open weights.
Nothing hurts the narrative of “AGI is around the corner, we just need trillions for scaling” more than an agentic open weight model like Kimi K2 that was trained with fewer resources.
Whatever the motives are, Moonshot and MiniMax deliver impressive models.
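“Keeping control of the model” is literal: you download the weights and run them on your own infrastructure. A minimal sketch with the Hugging Face transformers library; the checkpoint name is the published one for Kimi K2, but the model is enormous, so in practice you would swap in a smaller open weight model or a multi-GPU server:

```python
# Minimal sketch of running an open weight model on your own hardware,
# using the Hugging Face transformers API. A model of Kimi K2's size needs
# serious GPU memory; any smaller open checkpoint works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moonshotai/Kimi-K2-Instruct"  # published checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # shard the weights across available GPUs
    torch_dtype="auto",      # keep the checkpoint's native precision
    trust_remote_code=True,  # the repo ships custom model code
)

prompt = "Why do open weights matter for startups?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

No API key, no usage policy that changes overnight, no silent model deprecation: that is the control closed providers cannot offer.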
“Nvidia got big thanks to crypto.” How many times have you heard this phrase? And yet, ironically, crypto was never the main driver of Nvidia’s success.
Nvidia’s role in blockchain, crypto, and Bitcoin mining was bumpy. Their GPUs were repurposed by miners so heavily that Nvidia even introduced anti-mining measures in their hardware. And at some point, specialized ASICs made GPU mining of Bitcoin uncompetitive anyway.
The key reason for their success is that they bet on optimizing GPUs for deep learning. They invested heavily in CUDA, the software stack that makes training and inference of LLMs efficient, and CUDA still has no viable alternative. That is why over 90% of data center GPUs come from Nvidia, and why its valuation is larger than the GDP of every country except the US and China.
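If “CUDA” sounds abstract, here is what it buys you: you write one small function (a kernel), and the GPU runs it across thousands of threads at once. A toy sketch using Numba’s CUDA bindings rather than raw CUDA C, so it stays readable; it assumes a CUDA-capable GPU and the numba package:

```python
# Toy illustration of the CUDA programming model via Numba's CUDA bindings:
# one kernel, executed in parallel by thousands of GPU threads.
# Deep learning frameworks sit on the same stack, just with heavily tuned
# kernels for matrix multiplication and attention instead of this one.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # this thread's global index
    if i < x.size:                # guard threads past the end of the array
        out[i] = a * x[i] + y[i]  # each thread handles one element

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block  # cover all n
saxpy[blocks, threads_per_block](2.0, x, y, out)  # Numba moves data to the GPU

assert np.allclose(out, 2.0 * x + y)
```

A decade of kernels, libraries, and tooling like this is the moat. Competitors can match the silicon; matching the software stack is the hard part.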