The big thing keeping me from switching from Google Fi is how easy international roaming is. In every country I've been to, it has just automatically worked within ten minutes of landing, at my regular price, without buying any add-ons.
Except if you happen to travel for more than 45 days, in which case Google Fi will promptly tell you to get fucked and cut off your service without warning, without advance notice, and without spelling the policy out anywhere when you sign up. Not my idea of a carrier I can trust. I deleted my account and service with them and moved to a carrier that actually respects me as a customer.
tbf, that was because a lot of people abused it by being permanently outside of the US and relying exclusively on the roaming for all their data. I know because I was one of those people for 6 years.
I've switched to US Mobile. I haven't used it internationally on an extended basis yet, but I'm about to travel abroad, so I'll find out soon. That said, the reviews from people who use it internationally for extended periods are pretty good.
I got bad speeds even with a perfect signal in malls and anywhere more crowded than a Costco. Google Fi doesn't have that problem. I blame T-Mobile, but I'd rather Google Fi survive.
That presumes that “killer AI drones” are a valid way to accomplish some valid goal.
For example, I do in fact want to live in a world where only the bad guys have child soldiers, use human shields, deliberately target civilians, and abuse prisoners of war.
Do not succumb to "we have to win the race" reasoning and escalation, when the race is leading off a cliff. It is, in fact, possible to stop things via international cooperation. Treat it the way we do nuclear proliferation. (Efforts to stop nuclear proliferation have not been perfect, but they've been incredibly effective and made it much more difficult to make the problem worse than it already is.)
I suppose, in the context of the article you're commenting on, you're saying the bad people are the ones defending women and children from being raped?
A permanent, non-negligible chance of becoming a collateral victim in an extrajudicial drone killing doesn't sound like order to me.
TFA mentions residents are very scared. They live in a war zone.
Edit: I get the argument that it was a war zone anyway and that people are also afraid of the gangs. But that comes from the fallacy of seeing drone strikes as the only option. There are better ways to create order than adding even more chaos and hoping it sometimes hits the right people.
Haiti has been a shit show for like 200 years now. You don’t think they’ve tried every method they can think of to deal with the criminals? What are the better methods to deal with chaos that they have been ignoring?
There was some WP drama between Automattic and the WP community a while back.
Also, the whole point of Bluesky is that they aren't supposed to be a big evil Silicon Valley tech company. But now you have a Silicon Valley VC running the thing.
>that they feed to original code into a tool which they setup to make a copy of it
Well, no. They fed the spec (test cases, etc.) into a tool, which made a new program matching the spec. That is not a copy of the original code.
But this also feels like arguing over the color of the iceberg while the Titanic sinks. If you have a tool that can write code to a spec, what is the value of source code anymore? Even if your app is closed-source, you can just tell Claude to write new code that does the same thing.
Everyone writes as if he just fed the spec and tests to Claude Code. Setting aside for now that the tests are under the LGPL as well, the commit history shows this was done over two weeks of steering Claude Code toward the desired output. At every one of those interactions, the maintainer used his deep knowledge of the chardet codebase to steer Claude.
Blanchard fed the spec to the tool, and Anthropic fed the code to the tool, so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.
Yes. Specifically: The use of words to express something different from and often opposite to their literal meaning, and not some knifey spoony confusion.
TL;DR the CIA stole an old Soviet research paper from the 1950s about cancer. But the Soviets didn't have a cure either, and weren't really any further along than the Americans.
The mechanisms it describes (the Warburg effect) are not secret and have been part of mainstream cancer research since the 1930s. Researchers have tried a bunch of drugs that starve tumors of glucose, but none of them work very well.
The backlash is just conspiracy garbage from the usual nutjobs.
Politics will make more sense once you realize that no one is really trying to have consistent principles.
People (and corporations/politicians/neighborhood groups/unions/countries/whatever) are by and large for whatever they think will benefit them, and against what they think will hurt them.
I'm dubious. There's no real evidence here to suggest that it was. This sounds like a good old-fashioned intel failure, which was common in every previous war in the Middle East.

Also, so much for "no new wars." I'm sure this one will go better than the last five wars we've started in the Middle East.
1. The school was clearly a school. Damn, even OpenStreetMap lists it as a school.
2. The other bombings were very precise.
3. This was a school for the families of military people.
The US has a history of this type of barbaric terrorism against civilians. There should be full accountability for this targeted terrorism before anything else. Also, why are we talking about AI usage like it's some sort of scapegoat? It never is, in war or anywhere else.
If you automate your intelligence, which they supposedly did, it becomes an automation failure as well. If Claude is grinding through their data, looking for hints and connecting the dots to designate the targets, it's definitely going to produce plausible false positives that are hard to verify quickly.
I'm dubious about the degree to which they've actually automated their intel.
If it's anything like how my industry has 'adopted' AI, it means they've got a chatbot somewhere that no one actually uses. And a bunch of press releases.
I'm a complete Apple ecosystem user-- I have a Mac, an iPhone, an Apple Watch, Apple earbuds, and an Apple TV, and I also pay reasonably close attention to their announcements and developments-- and I couldn't tell you a single Apple Intelligence feature. Nor do I ever use Siri except for setting kitchen timers.
What do people even expect from these intelligence services? Apple is always said to have failed, yet I've seen nothing in Windows that I'd actually want to use WRT intelligence services.
Siri being better at free-form requests for actions and doing internet/knowledge searches is about all I can think of. But I also use Kagi for that, and unless Siri gets a pluggable search backend, I'm not sure being forced to use only Apple's search, if it ever exists, is a great design.
I don’t believe that’s true. Private Cloud Compute is restricted to newer phones that already support on-device Apple Intelligence. It’s just that the on-device model is basically limited to simple stuff. Safari page summarization and the text-rewriting features run in the cloud. You can tell because those features go away without a network connection and don’t cause the phone to warm up.
I was wondering the same thing. I turned notification summaries off as they were less than useful, and I don't think I've stumbled across any other Apple Intelligence features apart from the laughable Image Playground or whatever it's called.
I cringe whenever I see the Image Playground icon on my MacBook.
It somehow looks worse than most scammy image generation apps you see on half-page search ads on the App Store. I have no idea how Apple willingly released it like that.
It was updated on my iPhone to a bland, forgettable abstract icon that’s still fairly mediocre but no longer an ongoing embarrassment for their corporate brand standards.
They're not having an easy time of it either, from the sound of it.