I work in this field. I don't think this is going to get restricted any more than AI will.
The modern battlefield is saturated with signal jammers, so remotely piloted drones just wouldn't work the way their proposal assumes. Autonomy is becoming a requirement. A ban on targeting humans is logical, and the predictable autonomy to support it is pretty much there.
A lot of the stuff in this video, though, you can get off the shelf and implement today. Compute and battery life are the limiters, and the whole world is pouring billions into improving both.
Isn’t it inevitable that the stuff in the video will happen then, specifically autonomous murder drones? I don’t really see a way to prevent that in the future… which is terrifying.
You can already watch hundreds, if not thousands, of hours of Russians and Ukrainians being mercilessly hunted by drones carrying grenades. What difference does it make whether it has a human pilot or not? The world has always been ugly and brutal, if you were actually paying attention.
There’s a really obvious difference of scale, where you could kill millions without needing millions yourself. The implications when it comes to terrorism are scary. In that sense it’s similar to the threat of nuclear weapons, but the technology will be more accessible and more precise, able to target people based on things like race. You can also destroy all the people in an area without harming the resources/environment/structures, etc.
The asymmetric nature is great for countries with smaller manpower. Drones will equalise combat for smaller countries and reorient it toward capital and industrial capability.
The game-changer here would be bringing cost down and range and accuracy up.
I don't know what the cost of a modern army's soldier in the field is (or how that varies with national average income --- one of the cost drivers for the US military is simply typical wages and income), but as a very naive ballpark estimate: the US Army boasts north of 450,000 active-duty soldiers on a $173 billion budget (FY2022), and is therefore looking at about $380k per soldier-year, total cost.
I'll estimate that actual combat troops are roughly 10% of the total, so we're looking at about $4 million per pair of boots actually on the ground.[1] Maybe.
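The back-of-the-envelope arithmetic above can be sanity-checked in a few lines of Python. All inputs are the rough figures from this comment, not authoritative data:

```python
# Cost-per-soldier ballpark, using the comment's own rough inputs.
army_budget_fy2022 = 173e9      # US Army budget, FY2022 (USD)
active_duty = 450_000           # active-duty soldiers (approximate)

cost_per_soldier_year = army_budget_fy2022 / active_duty
print(f"${cost_per_soldier_year:,.0f} per soldier-year")  # ~ $384,444

# Guess that only ~10% of the force is actual combat troops,
# and load the whole budget onto them:
combat_fraction = 0.10
cost_per_combat_year = cost_per_soldier_year / combat_fraction
print(f"${cost_per_combat_year:,.0f} per combat soldier-year")  # ~ $3.8M
```

Which lands in the same $0.5M--$5M range as the footnote's estimate, for whatever a one-significant-figure guess is worth.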
A Tomahawk cruise missile costs about $1--4 million (varying by source). Single use.
A Predator MQ-1 drone costs about $40 million. Multi-use. Hellfire missiles (with which Predators may be equipped) run about $58k--$150k each.
The DJI consumer-grade drones being used in the Russo-Ukraine war run about $1k--2k each. Multi-use.
Switchblade 300 and 600 drones run from $6k to $70k per unit.
Significant challenges for smaller drones are the warhead that can be carried, range, identification of targets, and avoiding counter-battery fire, in which the launch, recovery, and/or control points for drones are identified and attacked by retaliatory fire. Among the advantages of single-use drones is that such counter-battery fire is more challenging: one cannot simply follow the drone back home.
Slaughterbot-scale drone attacks strike me as reasonably implausible in contemporary warfare. Targets are insufficiently dense, ranges are too short, and costs are comparatively high. A swarm of, say, 10 to 100 drones might be viable, but that's only going to take on 10 to 100 individual targets. Swarms of 1,000s of drones would require a very capable military organisation (perhaps only the US, a few other NATO forces, and China could afford such weapons), and the swarm would still have to be transported relatively close to the field of battle by some other means. Drop-shipping a standard 40-foot container might well be one option; others could be launching from a larger mothership (a drone aircraft itself, a manned cargo aircraft, a ship or submarine, or land transport such as trucks or railcars). Those are easier than launching a manned land invasion, but still a complex undertaking.
Even states that have significant technical capabilities and relatively low inhibitions (Israel, Saudi Arabia, North Korea, Iran, Russia) seem not to have widely adopted or deployed such drone weapons, though whether that's a technical limitation or a strategic decision I'm not sure.
________________________________
Notes:
1. As with all estimates, this is eliding a tremendous amount of detail and includes much guesswork and outright ignorance. I'd greatly appreciate pointers to accurate information on fighting vs. support troops. I'm omitting national guard though I'm well aware that they serve combat roles. My sources don't give hard numbers on soldiers in infantry, though I'm finding that the 1st, 2nd, 3rd, 11th, and 25th divisions are considered "infantry", that a typical division is 10k -- 25k soldiers, so we're looking at 50k -- 250k soldiers assigned to infantry divisions, though again some fraction of that is support roles. Still, $0.5 -- $5 million for a soldier-year on the ground seems a reasonable ballpark.
This study was organized by Google (technically, DeepMind).
I wouldn't be surprised if Google wants the lawsuit to succeed. A win would block open-source models like these from existing and hand Google a competitive advantage: they can afford whatever compliance is mandated and offer compliant services, while open-source models would be left with only lower-quality data and would be stunted.
I think there's also a concern that smaller companies can gain a competitive edge via resistance to reputational damage. As Yann LeCun recently tweeted:
'By releasing public demos that, as impressive & useful as they may be, have major flaws, established companies have less to gain & more to lose than cash-hungry startups.'
To your point on reputation: didn't Meta recently unpublish an LLM after it was shown to hallucinate wrong answers? Smaller AI companies would have stuck to their guns, but Meta has internal controls to guard against damaging the "brand". IIRC, the tweet announcing the retraction was particularly salty; it sounded like someone whose hand was forced.
Yes, the Galactica LLM by Meta. Though LeCun isn't an author on the paper, he is "Chief AI Scientist for Facebook AI Research (FAIR)"[0], and he was quite angry about the closing of the Galactica demo[1].
I recently chatted with some game-developer friends about this. You just write backstories for each of the characters; you can even have them interact with one another easily. The issue currently is the length of each character's story, but I definitely see some interesting stuff happening.
I mean, you could just take some short-lived characters and that would work pretty well, like an enemy you have to defeat: procedurally generated visuals, with a randomized backstory and traits all from ChatGPT. It could even take in small details from the fight, so each encounter would be a bit less repetitive.
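A minimal sketch of that idea in Python. The trait lists, names, and prompt wording are all made up for illustration; the generated string is just the prompt you'd hand to an LLM API, with the actual call left out:

```python
import random
from dataclasses import dataclass

# Hypothetical trait/origin pools for a throwaway enemy.
TRAITS = ["cowardly", "vengeful", "greedy", "honor-bound"]
ORIGINS = ["a ruined border town", "the capital's slums", "a mountain clan"]

@dataclass
class Enemy:
    name: str
    trait: str
    origin: str

    def backstory_prompt(self, encounter_detail: str) -> str:
        # Fold a small detail from the current encounter into the prompt,
        # so each fight's generated dialogue is a bit less repetitive.
        return (
            f"Write a two-sentence backstory for {self.name}, "
            f"a {self.trait} bandit from {self.origin}. "
            f"Reference this moment: {encounter_detail}."
        )

def random_enemy(rng: random.Random) -> Enemy:
    # Seedable, so the same world seed reproduces the same enemies.
    return Enemy(
        name=f"Bandit #{rng.randint(1, 999)}",
        trait=rng.choice(TRAITS),
        origin=rng.choice(ORIGINS),
    )

rng = random.Random(42)
enemy = random_enemy(rng)
print(enemy.backstory_prompt("the player just disarmed him"))
```

The prompt string is where the length limits bite: the longer the accumulated character history you stuff in, the less room is left for the reply.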
This is actually useful. I like this better than CodeSandbox for just messing around with some data cleanup or something hacky and quick. I'm sure it's on the list, but I wouldn't be opposed to some dark mode.