> In a study of over 16,000 queries, measured against institutional benchmarks from McKinsey, Harvard, MIT, BCG, and others, we determined Perplexity Computer saved our internal teams $1.6M in labor costs and performed 3.25 years of work in only four weeks. And now we’re extending those same capabilities to other teams.
This is a wild statement that does not seem to be supported by any actual data.
What does it mean? Does clicking on a link count as labor?
Having been involved in this sort of exercise before, I can explain.
What they will have done is ask a human who should be knowledgeable roughly how much time they spend on the activity (e.g. how long do you normally spend copying and pasting? How long do you spend looking for the files you need?). When you ask someone whose job it is, they tend to overestimate, and on top of that you break the questions down as much as possible, so these small overestimations compound without it being obvious to the person that they're making a mistake. It's easy enough to notice when you've said 'four hours' for a task you know doesn't take you a full morning; the small overestimates slip through.
Once you've got all these answers, you ask how often the person has to do it.
Then you ask someone in HR for an average salary you can use. Now that AI is doing that work, you multiply the number of hours saved by the average salary, and report that as your savings.
Something, as usual, stinks about these numbers. $1.6M in saved labour costs and 3.25 years of work in 4 weeks are basically two ways of saying the same thing (labour costs vs. improved productivity).
So let's say 3.25 years of work is around 169 weeks, minus the four weeks they actually took, so 165 weeks of productivity savings. Assume 40-hour weeks and that's 6,600 hours 'saved'. Which works out at around $240/hour, which... well. You decide if that's plausible.
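Redoing that arithmetic exactly (52-week years and 40-hour weeks, the same assumptions as above):

```python
# Back-of-envelope check on the claim: does $1.6M over
# "3.25 years of work" imply a plausible hourly rate?
WEEKS_PER_YEAR = 52
HOURS_PER_WEEK = 40

claimed_savings = 1_600_000          # dollars
claimed_years = 3.25                 # "years of work"
elapsed_weeks = 4                    # weeks actually taken

weeks_saved = claimed_years * WEEKS_PER_YEAR - elapsed_weeks  # 165.0
hours_saved = weeks_saved * HOURS_PER_WEEK                    # 6600.0
implied_rate = claimed_savings / hours_saved                  # ~242 $/hour

print(f"{weeks_saved:.0f} weeks, {hours_saved:.0f} hours, ${implied_rate:.0f}/hour")
```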
> What does it mean? Does clicking on a link count as labor?
I think we might be seeing what happens when people are being paid too much to spend all day emailing each other and jockeying Excel sheets/Gantt charts/org charts. Yeah, for some definition of "work" I guarantee that an LLM could perform 3.25 years' worth in four weeks.
> people are being paid too much to spend all day emailing each other
Hmm, this does not sound exactly right. Also, does anybody seriously think that communication is not work, or is not important? A number of really impactful things started from people emailing each other. (Hell, Linux kernel development is still largely people emailing patches to each other.)
The problem with human labor is that, as an organization scales, the amount of work any individual in the system can do shrinks due to the coordination problem.
Coordination consumes a larger and larger amount of employee time to the point that, in the absolute largest organizations, the vast majority of employee time is internal coordination vs. actual improvement/selling of the customer offering.
So if you go from 100 employees to 1,000 employees, they can MAYBE do 4X the work. Not 10X like you'd think. And this effect gets even worse as you scale further.
So if an AI can do 10X more labor in a human day, and can coordinate instantaneously via a central context ledger (say a git repo), it doesn't just create 10X gains in productivity for large orgs. It creates a multiple of that 10X due to also removing the human coordination overhead.
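A toy model makes the "10x headcount, maybe 4x output" claim concrete. The numbers below are invented purely for illustration (a fixed per-colleague coordination cost is my assumption, not the commenter's), but they reproduce the flavor of the argument:

```python
# Toy model: each person has capacity 1.0, and every coordination
# channel to a colleague eats a small fixed slice of it.  Channels
# per person grow with org size, so overhead compounds as you scale.
def effective_output(n, cost_per_channel=0.0006):
    overhead = min(1.0, cost_per_channel * (n - 1))  # fraction of capacity lost
    return n * (1.0 - overhead)

small, large = effective_output(100), effective_output(1000)
print(f"100 people -> {small:.0f} work units")
print(f"1000 people -> {large:.0f} work units")
print(f"10x headcount -> {large / small:.1f}x output")
```

The 0.0006 cost is tuned to hit roughly 4x; the qualitative shape (sublinear returns to headcount) holds for any positive per-channel cost.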
Don't you think AI itself adds coordination overhead? A 1,000-strong team with AI agents will feel like a 5,000-person company where more than 30% are not even up to par - i.e. they need to be pulled along.
This is why having fewer people and more agents actually makes sense, but the coordination problem remains either way.
And you cannot escape it because it is simply mathematical.
The coordination problem absolutely can be escaped with technology, hence why productivity gains exist and why the economy grows and isn't a fixed pie over time.
Here's an easy non-AI example:
In the past, a 'computer' was literally a person [1]. If you needed to synthesize large amounts of data, you needed to split the task among a team of people writing things down and then a team of people to check their work after the fact and then a team of people to combine all the work and then a team to double-check the combined work.
Tasks that in the past would have taken a room full of people coordinating with pencils are absolutely done by one machine today (what we know as a computer) that no longer needs to split the task and coordinate. The same will happen with 'agents' that can take on vastly more work per unit of time.
Look up Amdahl's Law and Universal Scalability Law.
The math doesn't care whether the nodes are people, CPUs or language models. If agent A's next action depends on what agent B decided, you've introduced a sequential dependency.
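For reference, Amdahl's law in its textbook form: with a fraction p of the work parallelizable across n workers, speedup is 1/((1-p) + p/n), so the serial fraction caps the gain no matter how many workers (or agents) you add.

```python
def amdahl_speedup(p, n):
    """Speedup with fraction p parallelizable across n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, no number of workers
# can ever beat 20x: the 5% sequential part dominates.
print(round(amdahl_speedup(0.95, 10), 2))      # 10 workers
print(round(amdahl_speedup(0.95, 10**9), 2))   # effectively infinite workers
```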
The point is that we don't need an equivalent number of nodes (agents) as we needed people.
The computer flattened the coordination dependencies of that room full of people by doing all the calculations by itself. As they get smarter, you can theoretically assume 1 agent could eventually run the entire US federal government.
In the historical [human] computer example; if 15,000 calculations needed to be done, a CPU doesn't need to wait on Bob to come back from lunch to do the next 20 calculations...and doesn't need to wait on Alice to combine his work with the 20 calculations done by Jane...and doesn't need Bill to wait for everybody to be done to double check Jane's work.
The CPU does all 15,000 calculations instantly, by itself. This will be similar with AI agents.
Note that Amdahl's law doesn't capture the practical situation.
1) The purpose of algorithms is ultimately to create value, not to compute some fixed value X. This is important because it gives flexibility to choose different value-producing tasks where parallelism dominates over serial tasks, whenever the latter becomes a bottleneck.
2) In terms of producing value, perfect accuracy or the best possible solutions are not always necessary. Many serial tasks can become very parallel tasks when accuracy or certainty do not have to be complete.
3) Reusable solutions change the math further. No matter how serial a calculation is, if the result can be reused, that serial part becomes effectively O(1) after the first computation. And as neural networks demonstrate, many serial tasks become very parallel after training a model that can be reused across a wide class of specific problems, so the serial computing cost gets heavily amortized.
It doesn't matter how many steps something takes, if those steps are now in the past and the value is "forever" reusable.
4) The economics of serial and parallel computation are not static; they improve relative to the economic value achieved. Demand for cheaper serial time drives scaled-up hardware that delivers cheaper serial costs. This may have less impact than the previous points, but over years it makes a tremendous difference on top of all of them.
This can go on.
The point being that Amdahl's law certainly applies to specific algorithms, but it is not the dominant determinant of computing in general, where problems can be strategically chosen, strategically weakened or altered, and strategically fashioned so that the value created (via direct reuse and generalization) outweighs whatever serial cost remains.
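Point 3 above is essentially memoization: pay the serial cost once, and every subsequent use is effectively O(1). A minimal sketch, with `time.sleep` standing in for an inherently sequential computation:

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def slow_serial_step(x):
    """Stand-in for an inherently sequential computation."""
    time.sleep(0.1)   # the unavoidable serial cost, paid once per input
    return x * x

start = time.perf_counter()
slow_serial_step(7)                   # pays the 0.1s serial cost
first = time.perf_counter() - start

start = time.perf_counter()
for _ in range(1000):
    slow_serial_step(7)               # all cache hits, effectively O(1)
reused = time.perf_counter() - start

print(first > reused)  # the amortized cost vanishes with reuse
```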
In an organization, the number of sequential steps doesn't really scale with the number of participants, does it? Rather with the dependent steps of the process being tackled; say, devise a building permit request, await approval, purchase materials, move materials to site, hire a workforce, etc.
Theoretically, each of those steps is parallelizable to some extent. The Amdahl's law equivalent here would be that some delays are outside an organization's reach to improve. For instance, a building permit will take the time it takes to be examined by an external public administration.
While it might sound like the ultimate hack, it’s not.
I’ve been in that situation and died a little inside every day. It’s not like being a rentier, because you still have to lose most of your day at the office pretending to work, and be available in case some higher-up needs something, so you don’t get caught.
Some type of emailing is important, what most people do, however, is not. Same with meetings, calls etc. Most of it is filling the day so they don't get fired.
The amazing thing is that soon (actually, already) we will be seeing people paid way too much to prompt an LLM to email other people or respond to other people's emails. And then turn those emails into presentations, which will be turned into meeting transcripts, again followed by emails.
The lingering question is whether the intermediate LLM translation steps will actually make our communication more efficient - or just amplify the already inefficient parts.
Inefficiency all too often is celebrated by our society, as I wrote in 2010: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html
"Also, many current industries that employ large numbers of people (ranging from the health insurance industry, the compulsory schooling industry, the defense industry, the fossil fuel industry, conventional agriculture industry, the software industry, the newspaper and media industries, and some consumer products industries) are coming under pressure from various movements from both the left and the right of the political spectrum in ways that might reduce the need for much paid work in various ways. Such changes might either directly eliminate jobs or, by increasing jobs temporarily eliminate subsequent problems in other areas and the jobs that go with them (as reflected in projections of overall cost savings by such transitions); for example building new wind farms instead of new coal plants might reduce medical expenses from asthma or from mercury poisoning. A single-payer health care movement, a homeschooling and alternative education movement, a global peace movement, a renewable energy movement, an organic agriculture movement, a free software movement, a peer-to-peer movement, a small government movement, an environmental movement, and a voluntary simplicity movement, taken together as a global mindshift of the collective imagination, have the potential to eliminate the need for many millions of paid jobs in the USA while providing enormous direct and indirect cost savings. This would make the unemployment situation much worse than it currently is, while paradoxically possibly improving our society and lowering taxes. Many of the current justifications for continuing social policies that may have problematical effects on the health of society, pose global security risks, or may waste prosperity in various ways is that they create vast numbers of paid jobs as a form of make-work."
Philosophy territory now... you wrote about technology making labor unnecessary 15 years ago - Aristotle did ~2,000 years ago too (same text where he tried to justify slavery, but nvm that): "For if every instrument could accomplish its own work, obeying or anticipating the will of others, [...] if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves."
I bet in 2000 years they will still be writing about it - yeah, technology changes our lives (for better or worse).
It's pretty fascinating to look at the impacts this has had in the last 2000 years, or even just the last 200.
Take construction work. Incredible improvements through power tools, gasoline-powered mobile cranes, etc. The productivity per worker has exploded. A lot of this has been captured by induced demand: we build bigger, taller, grander. But the improvements aren't distributed equally, which means that crafts that haven't seen much improvement are now more expensive relative to everything else. That has contributed to our buildings having less elaborate facades and becoming more "bland".
The same in clothing. Clothing has become dirt cheap. Even the poorest people can afford new clothing multiple times a year. But in the same transition we have gone from everything being custom tailored to most things only kind of fitting, being made for variations of the most common body shapes. Not necessarily because tailored clothing has become much more expensive (though higher labor costs from higher average productivity haven't helped), but because every other step has become cheaper and tailoring hasn't.
I wonder what we will say about the trajectory of software in a couple of decades.
That's a great angle - will handcrafted software of the future become the equivalent of a tailored suit today? One might argue it already is, most companies and individuals do just fine using cloud/SaaS offerings and COTS apps. So on first glance it seems like automating software engineering would mainly benefit exactly those providers. The other side of the coin is that it also allows for cheaper/faster in-house DIY solutions and competition.
Yeah, I could see a world where it swings exactly the opposite way for software. Writing software for yourself is becoming cheap, but gathering requirements, getting alignment between stakeholders, or marketing your software isn't getting much cheaper. Maybe everyone will end up with their own in-house solution? Or maybe we end up with configurable SAP-like behemoths, but instead of an army of expensive consultants configuring the software for your use case, you have AI agents taking that part.
I'm sure whatever path this takes will seem obvious in hindsight.
I see how this can boost productivity... for those who today already produce value voluntarily. These will move one level higher. The rest will produce 100x the amount of performative work. Everyone will be busy creating presentations and charts that no one needs and no one will read. Managers will ask for new presentations and reports every sync, and hours will be spent discussing things that don't actually matter.
I don't think they're measuring the _value_ of the work, just what it would've cost to have humans do it. How long would it take you to produce a report of a specific length on the history of changes to the White House approved by presidents over time, with citations and links to sources? Let's say 40 hours? Boom. $100 per hour * 40 hours = a $4,000 report and a week's worth of effort produced in 15 minutes. Multiply this type of "work" by 400 and you have $1.6M in labor costs and roughly 7.7 years of work.
I'm perpetually cautious about wild tech claims myself, but if you watch the launch video, there are examples of how they could claim labor/cost savings.
For example, one task takes a document with data, charts, and metrics, and Perplexity Computer was tasked with creating a 10-page slide deck for a presentation. Prior to AI, that took human capital and labor costs.
I can't say whether the $1.6M in labor costs is legit or not, but these tools are not just clicking links in 2026.
You don't think there's cost/labor savings in making agents and workflows easier to use? I don't think your average back office employee is going to be setting up OpenClaw.
They didn't say they think there's no benefit (nor give an opinion the other way), just that that's the benefit that counts for a new tool like this, as opposed to the comparison you suggested.
Feels to me like it's missing a key component of how processes work. If they measure how much time it takes a person to complete a power point and then extrapolate the automation, sure, you can make three years worth of power points in a few weeks, but usually these are made in response to external events of some sort. You're giving a presentation to a customer or a conference. You're briefing internally the results of a study. Whatever it is, no matter how much faster you generate the slide decks, you can't speed up history to give you more stuff to actually put in the slide decks. You can't make the audience read or listen to it any faster. Typing is not the slow part of the critical path here.
When I'm running a lot of model training workflows concurrently, I can spend a small but noticeable amount of my day just clicking through links to check current progress and error logs. If an AI were capable of understanding the relatively complex UI, at least enough to find the right links to click, it could produce a status report that takes me 15 seconds to read, and that alone would save $2,000 of labor annually.
I think their numbers of $1.6M and 3.25 years are still probably a massive overestimate, but the order of magnitude seems plausible.
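For what it's worth, the roll-up described above is simple to script. Everything below (run names, log format) is invented for illustration; a real training dashboard would need real parsing. But it shows the shape of the 15-second status report:

```python
# Hypothetical example: collapse per-run log lines into one short report.
runs = {
    "resnet-baseline": ["epoch 12/50", "loss 0.41"],
    "resnet-augmented": ["epoch 3/50", "ERROR: CUDA out of memory"],
    "lstm-ablation": ["epoch 50/50", "loss 0.09", "done"],
}

def summarize(runs):
    lines = []
    for name, log in runs.items():
        if any("ERROR" in entry for entry in log):
            lines.append(f"{name}: FAILED ({log[-1]})")
        elif "done" in log:
            lines.append(f"{name}: finished")
        else:
            lines.append(f"{name}: running ({log[0]})")
    return "\n".join(lines)

print(summarize(runs))
```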
Perplexity has for sure saved, or outright eliminated, years of work.
The typical market-research job - Google it, analyze it, put it into a spreadsheet - is almost gone. Imagine how many people were doing that as a major part of their work.