Thank you for reading, and for the very thoughtful observations.
>> At a certain point, the verification complexity takes off. You literally run out of time to verify everything.
> Could you elaborate on this?
I plan to publish a thorough post with an interactive model. Whether human or AI, you are capacity-constrained, and I glossed over `C` (capacity within a given timeframe) in the X post.
You are correct that verification complexity remains finite at n_0. The barrier is practical: n_0 is where V(n) exceeds your available capacity C. If V(n) = n^(1+k), then n_0 = C^(1/(1+k)). Doubling your capacity doesn't double n_0; it increases n_0 by a factor of 2^(1/(1+k)), which is always less than 2 whenever k > 0.
So the barrier always exists for, say, a given "dev year" or "token budget," and the cost to push it further grows superlinearly. It's not absolutely immovable, but moving it gets progressively harder. That's what I mean by "literally run out of time." At any given capacity, there is a finite n beyond which complete verification is not possible. Expanding capacity buys diminishing returns.
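To make the diminishing returns concrete, here's a minimal sketch of the model above, assuming the same V(n) = n^(1+k) form and an illustrative k = 0.5 (the exponent is hypothetical, not a measured value):

```python
# n_0 is the largest component count whose verification cost V(n) = n**(1 + k)
# still fits within capacity C, i.e. n_0 = C**(1 / (1 + k)).

def n0(capacity: float, k: float) -> float:
    """Largest n fully verifiable within `capacity`, given superlinearity k."""
    return capacity ** (1 / (1 + k))

k = 0.5  # illustrative superlinear exponent
for c in [100, 200, 400, 800]:
    print(f"C={c:4d}  n_0={n0(c, k):.2f}")

# Each doubling of C multiplies n_0 by 2**(1/(1+k)) < 2,
# so capacity growth buys less and less verifiable scope.
```

With k = 0.5, each doubling of capacity multiplies n_0 by 2^(1/1.5) ≈ 1.59 rather than 2, which is the diminishing-returns effect described above.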
> Either way, this entire discussion assumes n will increase as more and more software gets written by AI. Couldn't it also be the opposite, though?
You are getting at my core motivation for exploring this question.
Verification requires a definition of "done," and I wonder whether it will ever be possible (or desirable) for AI to define done on its own, let alone verify it and simplify software based on its understanding of our needs.
You make a great point that we are not required to add more components and "go right" along the curve. We can choose to simplify, and that is absolutely the right takeaway. AI has made many people believe that by generating more code at a faster pace they are accomplishing more. But that's not how software productivity should be judged.
To answer your question about assumptions, while AI can certainly be prompted to help reduce n or k in isolated cases where "done" is very clear, I don't think it's realistic to expect this in aggregate for complex systems where "done" is subjective and dynamic.
I'm speaking mainly in the context of commercial software dev here, informed by my lived experience building hundreds of apps. I often say software projects have fractal complexity: we're constantly identifying new needs and broader scope the deeper we go, not to mention pivots and specific customer asks. You rarely get to stand still.
I don't mean to be pessimistic, but my hunch is that complexity growth outpaces the rate of simplification in almost every software project. This model attempts to explain why. And notably, simplification itself requires verification, so it is, in a sense, part of the verification cost too.