I tried ls20 and it was surprisingly fun! Just from a game design POV, these are very well made.
Nit: I didn't see a final score of how many actions I took to complete the 7 levels. I also didn't see a place to sign in to view the leaderboard (I did see the sign-in prompt).
Agree 100%. I want to be able to see how many actions it took me. And it would be good if it were possible to see how well I'm doing compared to other humans, i.e. what is my percentile.
Gaming this out for peer adversaries is mostly moot, right? The post-Cold War strategic balance has mostly hung on MAD. And Russia, in particular, has responded to any attempt at building missile shields with more capable missiles.
It's likely more relevant for asymmetric conflicts that involve conventional weapons, and would enable an otherwise less resourced adversary to become a near peer.
Dennis Bushnell from NASA presented this deck in 2001, and it's quite prescient about UAVs and distributed warfare.
Eh, he threw so much random stuff at the wall that some of it is bound to stick. An early slide in his presentation says there will be "no pixie dust," but that's 90% of what follows.
Reflected inertia does scale as the square of the gear ratio, but that's a bit misleading unless you also consider the change in rotor inertia, which scales as the cube of the rotor radius (as the article points out).
The other side of the scaling laws says that motor torque scales as the square of the air-gap radius (roughly the rotor radius), and output torque scales linearly with the gear ratio.
When you balance these out, reflected inertia scales as the inverse of the power dissipated for a fixed output torque.
In an ideal world, your total reflected inertia is independent of the gearbox and largely depends on the motor fill factor and how hot you can run it.
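The balancing argument above can be sketched numerically. Assuming a copper-loss-limited motor with motor constant k_m (so motor torque is k_m times the square root of dissipated power), picking the gear ratio that hits a target output torque and then computing N² times rotor inertia shows the inverse-power relation. The motor numbers below are made up for illustration, not from any datasheet:

```python
import math

def reflected_inertia(tau_out, p_diss, j_rotor, k_m):
    """Reflected inertia at the output for a motor delivering tau_out
    while dissipating p_diss, via whatever gear ratio that requires."""
    tau_motor = k_m * math.sqrt(p_diss)  # torque the bare motor can produce
    n = tau_out / tau_motor              # gear ratio needed to reach tau_out
    return n**2 * j_rotor                # reflected inertia scales as n^2

# Hypothetical motor: J_rotor = 1e-5 kg*m^2, k_m = 0.05 Nm/sqrt(W)
tau, j, km = 10.0, 1e-5, 0.05
for p in (10.0, 20.0, 40.0):
    print(p, reflected_inertia(tau, p, j, km))
```

Doubling the allowed dissipation halves the reflected inertia (J_refl = (J_rotor/k_m²)·τ²/P), and the gear ratio drops out entirely: only the motor's J/k_m² figure of merit and how hot you can run it remain.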
Look at the guys above posting that within 18 months these sorts of robots will be able to cook in anyone’s home; the above reminder is very necessary.
I think ChatGPT has a huge advantage here. They have been collecting realistic multi-turn conversational data at a much larger scale. And generally their models appear to be more coherent with larger contexts for general purpose stuff.
Nope, they don't have that capacity. It's been shown multiple times in the past year.
Shutting down USAID being the clearest example. They just saw "they help brown people in other countries with our money" and shut it down. Fuck all the second- and third-order effects that actually benefited the US.
- reducing the surface area of "acceptable use" (e.g., blocking third-party tools like OpenClaw)
- tighter usage limits and more subscription tiers
- increasing existing subscription prices
- moving completely to a usage-based model
- taking compute away from training next-gen models (future demand destruction)