A "Senior Software Engineer" at Microsoft is someone with a pulse and 3 years of experience (due to title inflation); so despite the "senior" in the title definitely not "senior engineering staff".
I have no doubt Azure sucks, but almost all huge projects like that have systemic issues.
Axel sounds like a pretty smart guy, but I wanted to point out that I've seen this kind of behavior before, often from mid-level "job-hopping" engineers (sometimes with overly inflated egos) who overconfidently declare that everything the organization is doing is BS and that they have the magic solution to it.
And yes, sometimes by sending long-winded emails to very large internal groups about how their solution will address all the problems if only someone would recognize their genius (and eventually give them a VP title and budget). Some of the time they are well-intentioned but missing crucial historical knowledge about why things are in the state they are, and why what they're proposing was tried 5 times before and failed.
Support for Xbox Game Pass games (typically deployed as UWP / containerized) would be absolutely amazing, and for many people likely the final nail in the coffin for Windows as a gaming OS.
According to the enshittification playbook the next step is to discontinue the lower tiers (or price them so high they stop making sense), then celebrate "Copilot adoption" :)
I hope this keeps momentum. If nothing else, it may force assholes like Altman to think a little bit about the impact of a decision to sell services to a government / military.
And it may lead some folks into discovering privacy-preserving local inference as an alternative for a lot of use cases, which is always a plus.
I switched a very long time ago when Gemini was released, and it was a very easy switch at the time. I have never missed ChatGPT, and given current circumstances I'm kind of happy I made the switch. It would be a lot harder for me now to switch away from Gemini (except for code, of course).
Reposting a comment I made on an earlier thread on this.
We need to be super careful with how legislation around this is passed and implemented. As it currently stands, I can totally see this as a backdoor to surveillance and government overreach.
If social media platforms are required by law to categorize content as AI generated, this means they need to check with the public "AI generation" providers. And since there is no agreed-upon (public) standard for imperceptible watermark hashing, that means the content (image, video, audio) in its entirety needs to be uploaded to the various providers to check if it's AI generated.
Yes, it sounds crazy, but that's the plan; imagine every image you post on Facebook/X/Reddit/Whatsapp/whatever gets uploaded to Google / Microsoft / OpenAI / UnnamedGovernmentEntity / etc. to "check if it's AI". That's what the current law in Korea and the upcoming laws in California and EU (for August 2026) require :(
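To make that mechanism concrete, here is a minimal Python sketch of what such a platform-side check could look like, assuming each provider exposed its own detection endpoint. The URLs and response fields are entirely hypothetical; the point is that, absent a shared public watermark/hash standard, the full content gets uploaded to every provider.

    # Hypothetical sketch: none of these endpoints are real, they stand in
    # for whatever Google / OpenAI / Microsoft / a government entity would
    # operate under such a law.
    import requests

    DETECTION_ENDPOINTS = [
        "https://example-google.invalid/v1/watermark/detect",
        "https://example-openai.invalid/v1/watermark/detect",
        "https://example-microsoft.invalid/v1/watermark/detect",
    ]

    def check_if_ai_generated(image_bytes: bytes) -> bool:
        """Upload the full image to every provider, because without a shared
        public standard there is nothing smaller the platform could send."""
        for endpoint in DETECTION_ENDPOINTS:
            resp = requests.post(endpoint, files={"file": image_bytes}, timeout=10)
            if resp.ok and resp.json().get("ai_generated"):
                return True
        return False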
Is LTSC still impossible to get as someone who doesn't want to run cracked software or "license unlockers" on the same machine they do their banking on? I never found a way of buying it that didn't involve having to survive an interrogation by a sales team.
Haha, I always guess whether or not there will be an LTSC comment before checking the comments. These days it's always there, even shortly after posting.
Someone brought up the need for device attestation for trust purposes (to avoid token smuggling, for example). That would surely defeat the purpose (and make things much, much worse for freedom overall). If you have a solution that doesn't require device attestation, how does it address the smuggling issue (are tokens time-gated, is there a limit on token generation, something else)?
We do not require attestation, and things like token smuggling are still a problem we need to solve. We have a system that prioritizes unlinkability: an issuer cannot track the attribute they give you, and a verifier cannot link multiple disclosures of the same attribute. That privacy unfortunately also helps things like token smuggling, however. Time-gated tokens may increase the difficulty, but will probably not make it impossible. Making it illegal to verify someone else's QR codes could also help, of course.
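For the time-gated idea, a minimal sketch, assuming a simple issuer-signed expiry (my assumption, not the actual scheme described above). Note this toy HMAC version has none of the unlinkability properties being discussed; it only illustrates how a short validity window limits the value of a smuggled token.

    import base64, hashlib, hmac, json, time

    ISSUER_KEY = b"issuer-secret"  # placeholder key, shared issuer/verifier in this toy version

    def issue_token(attribute: str, ttl_seconds: int = 300) -> str:
        # Sign the attribute plus an expiry timestamp.
        payload = json.dumps({"attr": attribute, "exp": int(time.time()) + ttl_seconds})
        sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

    def verify_token(token: str) -> bool:
        payload, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 1)
        expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        # A leaked token stops working once the expiry passes.
        return json.loads(payload)["exp"] > int(time.time())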
A Verifiable Credential fundamentally doesn't solve the problem of "sharing" / "smuggling". All it takes is one verified adult "leaking" their VC somewhere, and millions of underage people would be able to use it to "prove" they are over 18.
This would only work with something like MS TPM 2 / Apple Secure Enclave (device attestation), which is anti-freedom by design. I was curious if they found a way around that (maybe with time/rate limits, or some actually useful use of blockchain tech).
You could use an oblivious pairwise pseudonym, and then you do not require hardware attestation. But that does essentially limit one ID to one account per service.
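A minimal sketch of the pairwise-pseudonym idea, in Python; the "oblivious" issuance step, where the issuer computes this without learning which service it is for, is deliberately omitted. The same user secret always maps to the same pseudonym at a given service, so the service can enforce one account per ID without learning the ID itself, and pseudonyms at different services cannot be linked.

    import hashlib, hmac

    def pairwise_pseudonym(user_secret: bytes, service_id: str) -> str:
        # One stable pseudonym per (user, service) pair; unlinkable across services.
        return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

    alice = b"alice-long-term-secret"
    print(pairwise_pseudonym(alice, "example-forum.invalid"))
    print(pairwise_pseudonym(alice, "example-video-site.invalid"))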
Besides the privacy argument (the claim that the UID can't be used for tracking via derivation is shaky at best, and not much different from MS's EK), there is the freedom argument: as in, who owns the device, the user or Apple?
If Apple can mistakenly lock, remotely, a device that a user bought (for example because some corporation somewhere fat-fingers some entries), that fundamentally means the user doesn't own the device they bought and paid for. Add on top DRM and all the other evil that comes along with attestation.
Plus, you can still disable TPM2 (if you don't want to run Windows on your machine); you can never disable Apple's implementation.
I'd like to add that we are discussing communication over the internet, which is an open standard. I should be allowed to build my own PCB without a secure element and talk to anyone over HTTP, so long as I am abiding by the relevant RFCs.
I have read a variation of this headline once every 2 years since the early 2000s, yet I've never seen it turn into something real (that a consumer or enterprise can buy).