Hacker News | coderenegade's comments

The issues the US faces are political and humanitarian (and economic) rather than military. I don't see any compelling evidence that the US couldn't open the straits if it really wanted to, it's just that the cost in lives and hardware would be unlike anything the US has seen since Vietnam, maybe even the second world war. And of course, once you open the strait, you have to keep it open. The whole thing is a lose-lose situation for everyone involved.

It should probably also be pointed out that doing nothing has a cost too, and it's probable that the bill for doing nothing over a long period of time has come due. I, like most people, never bought the WMD claims leading up to Iraq. I'm not sure what to think here. I certainly don't buy that Iran wasn't working towards getting the bomb after how well it worked out for North Korea. I can't claim to know the calculus involved in determining whether or not it's worth going to war with Iran to stop them from getting the bomb.


The cost of doing nothing is going to be large.

Apart from the oil, there is the fertiliser that isn't being shipped. That means August crops are going to be down, assuming it's otherwise a good year. Prices go up, which means we can expect a wave of overthrown governments (similar to the Arab Spring) in 12-24 months' time.

For the USA that means inflation, along with (probably) a credit crunch.


Given you compare the cost of a US operation to open the straits to the Vietnam War, it seems prudent to mention that the outcome of the Vietnam War, according to Wikipedia, was a North Vietnamese victory.

The victory was due to the people at home who protested and were politically against the war.

> I don't see any compelling evidence that the US couldn't open the straits if it really wanted to, it's just that the cost in lives and hardware would be unlike anything the US has seen since Vietnam, maybe even the second world war

The US invaded Iraq and toppled its government; Iraqi militias are still firing drones and missiles at US bases. Tankers and oil infra are much softer targets… all it takes is hitting one or two tankers and folks will stop shipping.


> I don't see any compelling evidence that the US couldn't open the straits if it really wanted to, it's just that the cost in lives and hardware would be unlike anything the US has seen since Vietnam, maybe even the second world war.

The second half of that sentence literally explains the "impossible" that you reject in the first part.


The US wasn't doing nothing about Iran though. The JCPOA was a thing, before Trump tore it up. This approach is about the dumbest way Iran could be handled, which makes sense given who is giving the orders.

Does this use a boundary representation for the geometry?

I for one can't wait for ChatGPT-style sexting to become a thing.

It's not just dirty talk. It's a whole new paradigm in verbal filth.

On the topic of sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf etc. all casually riding into a generic medieval town together, and if you showed that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.


> all video and image testimony becoming unreliable

This is a "seismic shift" in the sense of the Big One hitting California. The knock-on effects of trust erosion caused by AI are going to be huge and potentially unrecoverable.


I mean, you just outlined why it won't be a seismic shift: the only way the videos reliably stay on-model is if that model violates someone's copyright. And then when the movie is made, the output itself isn't copyrightable (the ultimate arrangement may be, but no individual frame is).

This is my take as well. A human who learns, say, a Towers of Hanoi algorithm will be able to apply it and use it next time without having to figure it out all over again. An LLM would probably get there eventually, but would have to do it all over again from scratch the next time. This makes it difficult to combine lessons in new ways. Any new advancement relying on that foundational skill relies on, essentially, climbing the whole mountain from the ground.

I suppose the other side of it is that if you add what the model has figured out to the training set, it will always know it.
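For reference, the Towers of Hanoi algorithm being discussed is the classic recursive one: move n-1 disks out of the way, move the largest disk, then move the n-1 disks on top of it. A minimal sketch (peg names here are arbitrary):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Classic recursive Towers of Hanoi: returns the list of moves
    (src_peg, dst_peg) that transfers n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # park n-1 disks on the spare peg
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # bring the n-1 disks back on top
    return moves

print(len(hanoi(3)))  # 2**3 - 1 = 7 moves
```

Once learned, the rule generalises to any n without re-derivation, which is the point being made about humans versus current LLMs.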


That's just not true at all. There are entire fields that rest pretty heavily on brute force search. Entire theses in biomedical and materials science have been written to the effect of "I ran these tests on this compound, and these are the results", without necessarily any underlying theory more than a hope that it'll yield something useful.

As for advances where there is a hypothesis, it rests on the shoulders of those who've come before. You know from observations that putting carbon in iron makes it stronger, and then someone else comes along with a theory of atoms and molecules. You might apply that to figuring out why steel is stronger than iron, and your student takes that and invents a new superalloy with improvements to your model. Remixing is a fundamental part of innovation, because it often teaches you something new. We aren't just alchemying things out of nothing.


Well, we know that mixing lead into copper won't make for a strong material. There's a lot of human ingenuity involved.

I failed to make my point clear: Humans make the search area way smaller compared to current day AI.


This. Code generation is cheap, so you can rapidly explore the space and figure out the architecture that best suits the problem. From there, I start fresh and pseudocode the basic pattern I want and have Claude fill in the gaps.


There needs to be a measure (or measures) of the entropy of a codebase that provides a signal of complexity. When you're paying for every token, you want code patterns that convey a lot of immediate information to the agent so that it can either repeat the pattern, or extend it in a way that makes sense. This is probably the next wave of assisted coding (imo), because we're at the stage where writing code works, the quality is mostly decent, but it can be needlessly complex given the context of the existing repo.


There's a way to measure "entropy" of a codebase. Take something like the binary lambda calculus or the triage calculus, convert your program (including libraries, programming language constructs, operating system) into it, and measure the size of the program in it in bits.

You can also measure the cross-entropy, which is essentially the whole-program entropy above minus the entropy of the programming language and functions from standard libraries (i.e. abstractions that you assume are generally known). This is useful to evaluate conformance to "standard" abstractions.

There is also a way to measure a "maximum entropy" using types, by counting the number of states a data type can represent. The maximum entropy of a function is a cross-entropy between inputs and outputs (treating the function like a communication channel).

The "difference" (I am not sure how to make them convertible) between "maximum entropy" and "function entropy" (size in bits) then shows how good your understanding (compared to the specification expressed in the type signature) of the function is.

I have been advocating for some time that we use entropy measures (and information theory) in software engineering to estimate complexity (and thus the time required for a change).
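A much cruder but practical stand-in for the lambda-calculus approach is to use a general-purpose compressor: compressed size in bits approximates information content, and compressing against a baseline corpus approximates the cross-entropy idea. A sketch (zlib is just one convenient choice of compressor):

```python
import zlib

def entropy_bits(source: str) -> int:
    """Approximate information content of source text as its
    compressed size in bits (a rough proxy for Kolmogorov complexity)."""
    return 8 * len(zlib.compress(source.encode("utf-8"), 9))

def cross_entropy_bits(source: str, baseline: str) -> int:
    """Extra bits needed to describe `source` given a `baseline` corpus
    (e.g. idioms the reader already knows): compress the concatenation
    and subtract the baseline's own compressed size."""
    both = 8 * len(zlib.compress((baseline + source).encode("utf-8"), 9))
    return both - 8 * len(zlib.compress(baseline.encode("utf-8"), 9))

repetitive = "def f(x): return x\n" * 50          # one pattern, repeated
novel = "".join(chr(33 + (i * 7) % 90) for i in range(950))  # little repetition
assert entropy_bits(repetitive) < entropy_bits(novel)
```

Real compressors only upper-bound the true entropy, so this is a heuristic signal, not the exact measure described above.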


Maybe cyclomatic complexity would be a good proxy. It can obviously be gamed, but it's obvious when it is.


There was a measure used during the Toyota Unintended Acceleration case called McCabe Cyclomatic Complexity, I wonder if anyone is using it alongside AI assisted code.
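McCabe cyclomatic complexity is just 1 plus the number of decision points in a function. A toy version over Python's own AST might look like this (the set of node types counted as "decisions" is a judgment call, so treat this as a sketch):

```python
import ast

# Node types counted as decision points; different tools draw this
# line slightly differently.
DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp,
             ast.And, ast.Or, ast.ExceptHandler, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """1 + number of decision points found in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(x):
        pass
    return "positive"
"""
print(cyclomatic_complexity(src))  # 4: if + elif + for, plus 1
```

Feeding a number like this back into an agent's loop (or gating generated code on it) is straightforward, which is presumably why it comes up in the AI-assisted-coding context.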


It is roughly equivalent to diff size: https://entropicthoughts.com/lines-of-code


I mean, it's ultimately a string, and the measurement of the entropy of a string is well-studied. The LLM might start gaming that with variable names so you'd need to do the AST instead. I may actually try something like that; cool idea.
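To make the point concrete: measuring Shannon entropy over AST node types, rather than raw characters, is one way to sidestep the variable-name gaming mentioned above. A minimal sketch:

```python
import ast
import math
from collections import Counter

def ast_entropy(source: str) -> float:
    """Shannon entropy (bits per symbol) over the distribution of AST
    node types, so renaming identifiers doesn't change the measure."""
    kinds = [type(node).__name__ for node in ast.walk(ast.parse(source))]
    counts = Counter(kinds)
    total = len(kinds)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

simple = "x = 1\ny = 2\nz = 3\n"                       # few node types
varied = ("import os\n"
          "for p in os.listdir('.'):\n"
          "    print(p.upper() if p else '')\n")        # many node types
assert ast_entropy(simple) < ast_entropy(varied)
```

Renaming `x` to `extremely_long_variable_name` leaves `ast_entropy` unchanged, whereas a character-level measure would move.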


I think that in the long run, AI assisted coding will turn out to be better than handcrafted code. When you pay for every token, and code generation is quick, a clean, low entropy codebase with good test coverage gets you a lot more for your dollar than a dog's breakfast. It's also much easier to fix bad decisions made early on in a project's life, because the machine is doing all of the heavy lifting.

This also lines up with the history of automation in many other industries. Modern manufacturing is capable of producing parts that a medieval blacksmith couldn't dream of, for example. Sure, maybe an artisan can produce better code than an LLM now, but AI-assisted humans will beat them in the near future if they aren't already producing similar quality output at greater speed, and tomorrow's models will fix the bad code written today. The fact that there's even a discussion on automated vs hand-written today means that the writing is almost certainly on the wall.


You mean like I have to pay my compiler to turn high level code into low level code?


I suspect this is more true than most people think. Today's bad code will be cleaned up by tomorrow's agents.

The other factor that gets glossed over is that llms create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility. When I do code with llms, a big part of it is demonstration, i.e. pseudocoding a pattern/structure, asking the model if it understands, and then having it complete the pattern. I've had a lot of success with this approach.


> llms create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility

Right, this is the kind of discussion we're having on my team: suddenly all of the already good engineering practices like good observability, clear tests with high coverage, clean design, etc. act as a massive force multiplier and are that much more important. They're also easier to do if you prioritize it. We should be seeing quality go up. It's trivial to explore the solution space with throwaway PoCs, collect real data to drive your design, do all of those "nice to have" cleanups, etc. The people who assume LLM = slop are participating in a bizarre form of cope. Garbage in, garbage out; quality in, quality out. Just accept that coding per se is not going to be a profession for long. Leverage new tools to learn more, do more, etc. This should be an exciting time for programmers.


You're more likely to save tokens in the architecture than the language. A clean, extensible architecture will communicate intent more clearly, require fewer searches through the codebase, and take up less of the context window.

