Hacker News | Legend2440's comments

>Nisos estimated that in about a year, Jo, who was likely a newer member of the team, applied to about 5,000 jobs

They're not having an easy time of it either, from the sound of it.


How is this different from what other prepaid carriers like Mint offer?

The big thing keeping me from switching from Google Fi is how easy international roaming is. In every country I've been to, it has just worked automatically within ten minutes of landing, at my regular price, without buying any add-ons.

Except if you happen to travel for more than 45 days, in which case Google Fi will promptly tell you to get fucked and cut off your service without warning or advance notice, and without spelling out that policy anywhere when you sign up. Not my idea of a carrier I can trust. I deleted my account and service with them to move to a carrier that I can trust and that actually respects me as a customer.

tbf, that was because a lot of people abused it by being permanently outside of the US and relying exclusively on the roaming for all their data. I know because I was one of those people for 6 years.

We've exceeded that by months on multiple occasions and fully expected them to cut us off after reading similar dire warnings but they never have.

That said, we keep data usage rather low because we're on the metered plan.


What service do you use now?

I've switched to US Mobile. I haven't used it on an extended basis internationally yet, but I'm about to travel internationally, so I'll find out soon. That said, the reviews from people who use it internationally for extended periods are pretty good.

I got bad speeds even with perfect signal in malls and anywhere more crowded than a Costco. Google Fi doesn't have that problem. I blame T-Mobile, but I'd still rather Google Fi survive.

No, let's not. I really don't want to live in a world where the bad guys have killer AI drones and we don't.

That presumes that “killer AI drones” are a valid way to accomplish some valid goal.

For example, I do in fact want to live in a world where only the bad guys have child soldiers, use human shields, deliberately target civilians, and abuse prisoners of war.


If the other guys have child soldiers, you don't need child soldiers of your own to defeat them.

If the other guys have an army of killer robots and you don't, you are going to die.


Do not succumb to "we have to win the race" reasoning and escalation, when the race is leading off a cliff. It is, in fact, possible to stop things via international cooperation. Treat it the way we do nuclear proliferation. (Efforts to stop nuclear proliferation have not been perfect, but they've been incredibly effective and made it much more difficult to make the problem worse than it already is.)

Nukes are intrinsically complex and require a high degree of skill, time, and resources to pull off.

Attack drones can be as easy as strapping an off-the-shelf grenade to an off-the-shelf drone.


What fairyland do you live in?

You should take a hard look at who really is the bad guy.

I suppose that, in the context of the article you're commenting on, you're saying the bad people are the ones defending women and children from being raped?

"The use of drones in these areas causes more collateral damage among the civilian population than it truly neutralizes gangs."

Only 5% of the deaths seem to be collateral damage. There can be no freedom without order.

A permanent, non-negligible chance of becoming a collateral victim in an extrajudicial drone killing doesn't sound like order to me.

TFA mentions residents are very scared. They live in a war zone.

Edit: I get the argument that it was a war zone anyway and that people are also afraid of the gangs. But that comes from the fallacy of seeing drone strikes as the only option. There are better ways to create order than creating even more chaos and hoping it hits the right people sometimes.


Haiti has been a shit show for like 200 years now. You don’t think they’ve tried every method they can think of to deal with the criminals? What are the better methods to deal with chaos that they have been ignoring?

> You don’t think they’ve tried every method

No, I don't. Their experimentation is constrained by many forces.


There was some WP drama between Automattic and the WP community a while back.

Also the whole point of Bluesky is that they aren't supposed to be a big evil silicon valley tech company. But now you have a silicon-valley VC running the thing.


Matt M. was behind the drama from WordPress' side though. It looks like Toni Schneider left in 2014.

Toni was in fact the adult supervision brought in by Automattic’s board when the company was young and Matt was inexperienced.

And apparently adult supervision was needed.

And still needed...

"Some drama"...yeah, the way there was drama between Germany and the Soviet Union back in 1941.

Automattic's Matt Mullenweg is downright insane. Just google their war with WP Engine and by extension the entire WordPress community.


They'd already taken VC money, hadn't they? It's got to be said, though, that tech startups are getting very formulaic. Monster-of-the-week vibes.

>that they feed to original code into a tool which they setup to make a copy of it

Well, no. They fed the spec (test cases, etc) into a tool which made a new program matching the spec. This is not a copy of the original code.

But this also feels like arguing over the color of the iceberg while the Titanic sinks. If you have a tool that can make code to spec, what is the value of source code anymore? Even if your app is closed-source, you can just tell Claude to write new code that does the same thing.


Everyone writes as if he just fed the spec and tests to Claude Code. Ignoring for now that the tests are under the LGPL as well, the commit history shows this was done over two weeks of steering Claude Code toward the desired output. At every one of these interactions, the maintainer used his deep knowledge of the chardet codebase to steer Claude.

Is this perspective implying that the maintainer might be legally culpable because he, the *human*, was trained on the codebase?

Well, I'm implying that someone who's been reading a codebase for 10+ years is the worst person to claim an "independent reimplementation".

Blanchard fed the spec to the tool, and Anthropic fed the code to the tool, so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.

> Blanchard fed the spec to the tool,

Yes...

> and Anthropic fed the code to the tool,

Presumably, as part of the massive amount of open-source code that must have been fed in to train their model.

> so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.

This is meant as irony, right?


Yes. Specifically: The use of words to express something different from and often opposite to their literal meaning, and not some knifey spoony confusion.

TL;DR: the CIA stole an old Soviet research paper from the 1950s about cancer. But the Soviets didn't have a cure either, and weren't really any farther along than the Americans.

The mechanisms they describe (the Warburg effect) are not secret and have been part of mainstream cancer research since the 1930s. They've tried a bunch of drugs that starve tumors of glucose, but none of them work very well.

The backlash is just conspiracy garbage from the usual nutjobs.


Politics will make more sense once you realize that no one is really trying to have consistent principles.

People (and corporations/politicians/neighborhood groups/unions/countries/whatever) are by and large for whatever they think will benefit them, and against what they think will hurt them.


I'm dubious. There's no real evidence here to suggest that it was. This sounds like a good old-fashioned intel failure, which was common in every previous war in the middle east.

Also, so much for 'no new wars'. I'm sure this one will go better than the last five wars we've started in the middle east.


Why do we assume it's intel failure, or ai usage?

1. The school was clearly a school. Damn, even OpenStreetMap lists it as a school.
2. The other bombings were very precise.
3. This was a school for the families of military people.

The US has a history of this type of barbaric terrorism against civilians. There should be full accountability for this targeted terrorism before anything else. Also, why are we talking about AI usage like it's some sort of scapegoat? It never is, especially not in war.


If you automate your intelligence, which they supposedly did, it becomes an automation failure as well. If Claude is grinding through their data, looking for hints and connecting the dots to designate the targets, it's definitely going to produce plausible false positives that are hard to verify quickly.

I'm dubious about the degree to which they've actually automated their intel.

If it's anything like how my industry has 'adopted' AI, it means they've got a chatbot somewhere that no one actually uses. And a bunch of press releases.


The purpose of a system is what it does.

Nah, credit card fees are like 1.5 to 3.5%.

Country dependent, in the EU there is a legal cap of 0.2% for debit and 0.3% for credit card transactions.

Interchange is $0.35. On small transactions, that eats up a lot.
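A quick back-of-the-envelope sketch of the point above. The $0.35 flat fee and the EU's 0.3% credit-card cap come from this thread; the transaction amounts are made up purely for illustration:

```python
# Sketch: why a flat interchange fee "eats up a lot" on small transactions.
# The $0.35 flat fee and 0.3% EU cap are from the thread; amounts are illustrative.

def effective_rate(amount, flat_fee=0.35, pct_fee=0.0):
    """Total card fee as a fraction of the transaction amount."""
    return (flat_fee + pct_fee * amount) / amount

# A $0.35 flat fee is 17.5% of a $2.00 coffee...
print(f"$2 purchase:  {effective_rate(2.00):.1%}")   # 17.5%
# ...but only 0.7% of a $50.00 purchase.
print(f"$50 purchase: {effective_rate(50.00):.1%}")  # 0.7%
# Under a purely ad-valorem cap like the EU's 0.3%, the rate stays flat:
print(f"EU cap, $2:   {effective_rate(2.00, flat_fee=0.0, pct_fee=0.003):.1%}")  # 0.3%
```

The same fee structure that rounds to noise on a big-ticket purchase can exceed a merchant's entire margin on a small one, which is why flat per-transaction components matter so much more than the headline percentage.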

What are these servers actually used for?

The Siri+LLM features of Apple Intelligence aren’t launched yet, and the other features like notification summaries run on-device.


I'm a complete Apple ecosystem user-- I have a Mac, an iPhone, an Apple Watch, Apple earbuds, and an Apple TV, and I also pay reasonably close attention to their announcements and developments-- and I couldn't tell you a single Apple Intelligence feature. Nor do I ever use Siri except for setting kitchen timers.

Just a total failure of execution.


What do people even expect from these intelligence services? Apple is always said to have failed, yet I've seen nothing in Windows that I'd actually want to use with respect to intelligence services.

Siri being better at free-form requests for actions and doing internet/knowledge searches is about all I can think of. But then, I use Kagi for that, and unless Siri gets a pluggable search backend, being forced to use only Apple's search, if it ever exists, doesn't seem like a great design.


The gorgeous rainbow border is one of the Apple Intelligence features, unavailable to plain Siri :/

> Nor do I ever use Siri except for setting kitchen timers.

That's if it even works; it fails with "something went wrong" for me 3 out of 5 times.


The next Siri is Siri by Gemini, running on Google servers with Apple Privacy requirements. (aiui)

https://www.macrumors.com/2026/01/30/apple-explains-how-gemi...


They are supposed to run Apple Intelligence for devices too old to do it themselves.

https://security.apple.com/blog/private-cloud-compute/


I don’t believe that’s true. Private Cloud Compute is restricted to newer phones that already support on device Apple Intelligence. It’s just that the on device model is basically limited to simple stuff. Safari page summarization and the text rewriting features are run in the cloud. You can tell because those features go away without a network connection, and don’t cause the phone to warm.

Well... you can write Apple Shortcuts that send AI requests to their cloud.

I was wondering the same thing. I turned notification summaries off as they were less than useful, and I don't think I've stumbled across any other Apple Intelligence features apart from the laughable Image Playground or whatever it's called.

I cringe whenever I see the Image Playground icon on my MacBook.

It somehow looks worse than most scammy image generation apps you see on half-page search ads on the App Store. I have no idea how Apple willingly released it like that.

It was updated on my iPhone to a bland, forgettable abstract icon that’s still fairly mediocre but no longer an ongoing embarrassment for their corporate brand standards.

