Hacker News | smithcoin's comments

I saw this post from here the other day about how it's Meta wanting to pass the buck to OS and browser vendors, a la Section 230: https://web.archive.org/web/20260313125244/https://old.reddi...

Which doesn't mean that isn't the right place for the age to be set.

Cutting layers of bureaucracy, not replacing them with AI

Google's projected AI capex spend is $170-180 billion for this year. It's unreasonable to think AI would not be a reason for companies to consider layoffs.

There are two ways to interpret your comment:

1. Google is getting so much productivity out of their AI that they need fewer people.

2. Google is spending so much on AI they can’t afford to keep the people they need.


Or

3. Google is spending so much on AI that they can't afford to keep paying people, but they are ok with this because they are convinced the AI investment will replace the people at an eventual cost savings.


That seems to have been Dorsey's approach. The business has been stagnant, so cut the roster and bet big on some future returns from AI.

Google (and almost all other BigTech) is spending on scaling compute (data centers/securing power generation/chip contracts). My comment was not related to AI productivity and its impact on reduction of workforce. I believe a company spending nearly all its free cash flow on scaling compute (or borrowing money to do so) would have a different opinion on the economics of human capital.

I subscribe to the second point of view. Several companies fall in that bucket. Oracle comes to mind.

Does that include R&D? Google is an AI _provider_, which is a considerably different profile in terms of spend from companies who are consumers. I would expect Google to be investing considerable resources to keep up with Anthropic and OpenAI.

I don't think it includes all of the R&D. From what I've read that's the amount they will spend on infrastructure for AI.

I guess some of that infrastructure will get used for AI R&D, but there are other R&D costs such as salaries that wouldn't be included in the figure.


Some of this smells purely nihilistic. The market rewarded layoffs with higher stock prices, incentivizing more layoffs.

It sure didn't reward Atlassian. If anything it accelerated the long, downward slide.

Atlassian hasn't made money in 10 years. Of course they can't ride the latest stock slop meme; that company is such an unmitigated disaster it beats even their terrible software. And now they keep spamming me with that Rovo garbage. God, I hope they go down amid all of this.

I don't think Zuck cares especially much about the stock price. He's certainly not beholden to any shareholders. He's doing this because he genuinely thinks trimming the fat will help the company.

[flagged]


I’m down for that. There are so many “facilitators” in middle management: some really good, quite a few bad, and many making no difference. I don’t know how people thought they were good positions to hire for.

Remember before Covid, when many a company’s deadweight showed us, in their YouTube videos, the vast amounts of unwork they did at their companies? Proudly showing how idle they were?

Not all the firings are deadweight but a lot are. There is also a general tightening of budgets and people who are part of dead-end programs that are being let go. When the economy was hotter companies would keep these people to add at the margins; I think now that money is still tight they’re not keeping that luxury.


In a zero interest rate environment, literally any return on investment justifies spending.

If interest rates are 3.8%, the company needs at least a 3.8% return every year on your yearly compensation to justify your job.


Not if they are compensated during the year.

Grocery stores have slim margins, but if you make $10k after selling $1M worth of stock, and turn that stock over 12 times a year, that’s ~12% annual ROI, not 1%.
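The arithmetic behind the grocery-store comment above, as a back-of-the-envelope sketch (numbers are the ones assumed in the comment, not real Kroger figures):

```python
# Why a 1% margin on sales can still be a ~12% return on invested capital.
annual_profit = 10_000        # $10k profit for the year
annual_revenue = 1_000_000    # $1M of stock sold over the year
turns_per_year = 12           # inventory fully replaced 12 times a year

margin = annual_profit / annual_revenue            # profit per dollar of sales
capital_tied_up = annual_revenue / turns_per_year  # ~$83k sitting in inventory at any time
roi = annual_profit / capital_tied_up              # return on the capital actually deployed

print(f"margin on sales: {margin:.1%}")
print(f"capital tied up: ${capital_tied_up:,.0f}")
print(f"annual ROI:      {roi:.1%}")
```

The same profit looks very different depending on whether you divide by revenue or by the capital that was actually tied up at any moment, which is the crux of the disagreement in this thread.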


If the company is spending $100k to employ you, you need to deliver $103.8k/year of value.

Again, you assume a year's delay on a worker's full salary before the company gets compensated. Cash flow rarely works like that.

An Uber driver does what, two weeks of work before getting paid? They also front the cost of their car, gas, etc. Meanwhile riders pay as soon as the ride occurs, so Uber doesn’t need an account holding a full year’s salary for every driver at the start of the year. Get paid before the worker, and the worker is in effect giving you a zero-interest loan.

Now, for a consultant, the company may get paid after the worker does, but it's rarely waiting more than a few weeks.


Do you see the broader point I'm making, which is that non-zero interest rates means delivering value isn't enough anymore?

+/- 0.1% over a year isn’t meaningful here. I understand that you’re unwilling to reconsider your beliefs when they are based on faulty math and thus don’t reflect the underlying reality here.

A plumber, doctor, teacher, cook, etc. does work before getting paid, and the company rarely needs to wait 365 days for someone to pay them for that work. This means your idea is inherently flawed; there’s no broader point when you’re making a mistake.

Further, future revenue is generally inflation-adjusted. If you borrow $1B to build a power plant, you sell electricity at future prices, not what electricity was worth when you started building. When the reverse happens, say at colleges, which get paid months before professors do, the school isn’t increasing salaries every month to keep up with inflation.


That's... not at all how interest rates work.

You'd calculate return on investment based on invested capital, not on expenses, so this does not follow.

It's not the boomers' fault, they were misled into believing the social security system they were paying into was a genuine savings system, not just a perpetual wealth transfer system from the young to the old.

How were they misled?

From day one, Social Security was a "new money pays old money" scheme, the one thing that makes it Ponzi-like.

To be fair, the boomers got screwed in the 1980s SS reform to pay for their parents (but had it sweet before), so maybe this is just paying it forward.


It was specifically sold as 'insurance' to the public around the time it was being passed.

Well, except for a short period when it was going to be deliberated by the courts: they stopped calling it insurance after SCOTUS indicated such insurance wouldn't be constitutional, put it under the general welfare clause instead, and then switched their rhetoric back to calling it insurance as soon as it was found constitutional.

Also, the people who wrote the bill later admitted they intentionally wrote it in a confusing way to evade public and judicial scrutiny.


[flagged]


FDR essentially pioneered the modern use of the omnibus bill by threatening to veto any assistance to the poor/elderly that didn't include social security. Basically his goal was to make the poor starve if social security didn't pass, and blackmail politicians into being forced to vote for it.

Of course this was all predicated on the other prong, which was the 'switch in time that saved 9' where he also threatened to pack the courts to ensure it was found 'constitutional'. FDR was quite ruthless in his destruction of constitutional and democratic controls, and now so much of our government depends on it that it's effectively politically impossible to unwind.


FDR also bullied Congress into passing laws and threatened their individual reelections using his cult of personality the same way Trump has been doing for a decade now to keep the moderates and fiscal conservatives in his party from making any noise.

I grew up recommending Windows to everybody I knew for most of my early life. I’ve had my boomer dad on Linux Mint for almost a decade. Any time I am asked for a recommendation, I cannot say to buy a Mac fast enough. Yes, they are overpriced, but the build quality to me is worth it. The Windows 11 start menu is user hostile; I seriously can’t believe people use that day to day. I’m old enough to remember when they called it Micro$oft; unfortunately “Microslop” is going to stick (the author is right about the two settings apps). When was the last time you think an exec at MSFT played an Xbox or described using Teams as “pleasant”?

“Adobe and Office run better on Mac, change my mind”


I use Office on Mac and Windows. It works fine on Mac, but not better than Windows. OneNote, for instance, has serious delays and glitches when syncing notebooks changed on other machines or the web. I lost work multiple times before I stopped using it on the Mac altogether.

Evolution typically happens on the scale of a million years, not a couple generations of human behavior.


You can speed it up.


AI didn’t kill SaaS; the overinflated P/E values are just returning to something vaguely resembling sanity


It’s not an AI bubble - it’s an inflated P/E bubble.


Yeah, the stock price is still higher than any price it obtained prior to Oct 2024. I think people are just shocked that stock prices may go up and down rather than up only.


No, we just want to point out not everybody utilizing agents ends up like LeBron or Jordan - most are Brian Scalabrine.


For sure. I like having discussions with nuanced takes; these are tools with strengths and weaknesses, and being a good tool user includes knowing when not to pick one up.


> This is a five-alarm fire if you're a SWE and not retiring in the next couple years.

I’m sorry, but this is such a hype beast take. In my opinion this is equivalent to telling people not to learn to drive five years ago because of self driving from Tesla. How is that going?

Every single line of code produced is a liability. This idea that you’re going to have “gas town” like agents running and building apps without humans in the loop at any point to generate liability free revenue is insane to me.

Are humans infallible? Obviously not. But if you are telling me that ‘magic probability machines’ are creating safe, secure, and compliant software that has no need for engineers to participate in the output- first I’d like to see a citation and second I have a bridge to sell you.


> In my opinion this is equivalent to telling people not to learn to drive five years ago because of self driving

Self-driving has different economics. We're reading tea leaves, true, but it's also true that software has zero marginal cost and that $20K pays for an engineer-month in SF.

> Every single line of code produced is a liability.

Do you have a hard spec and rock-solid test cases? If you do, you have two options to a working prototype: 2-6 engineer-years, or $20K. The second option will greatly increase in quality and likely decrease in price over the next few years.

What if the spec and the test cases are the new software? Assembly programmers used to make an argument against compiled code that's somewhat parallel to yours: every instruction is a (performance) liability.

> without humans in the loop

There will be humans, just fewer and fewer. The spec and test cases are AI-eligible too.

> safe, secure, and compliant software

I'm not sure humans' advantage here is safe, if it even exists still.


So let’s say you fund a single engineer for an open‑source project with $20k. The outcome will be a prototype with some interesting ideas. And yes, with a few hundred bucks' worth of AI assistance that single engineer might get much further than without (but not using any of the techniques presented in this blog). People can coalesce around the project as contributors. A seed was planted and watered a bit.

In this case, the $20k has been burned and produced zero value. Just look at the repo issues: looks like someone trying to get attention by spamming the issue tracker and opening hundreds of PRs. As an open source project, it’s a dead end.

So it doesn’t matter that this is “likely decrease in price over the next few years”? The value is zero, so even if superintelligence can produce this in an instant at zero cost in six months, the outcome is still worth zero.

You’re assuming a kind of inverse relationship between production cost and value.

In terms of quality, to anyone using those coding agents, it should be clear by now that letting them run autonomously and in parallel is a bad idea. That’s not going to change unless you believe LLMs will turn into something entirely different over time.

Note that what works with humans—social interaction creating some emergent properties like innovation—doesn’t translate to LLM agents for a simple reason: they don’t have agency, shared goals, or accountability, so the social dynamics that generate innovation can’t form.


I agree that there's not a lot of value in your example, but it's the wrong example. AI writing code and humans refining it and maintaining it is probably an inferior proposition, more so if the project is FOSS.

The model I'm referring to is: "if it walks like software and quacks like software, it's software." Its writers and maintainers are AI. It has a commercial purpose. Its value comes from fulfilling its requirements.

There will be human handlers, including some who will occasionally have to dig through the dung and fix AI-idiosyncratic bugs. Fewer Ferrari designers, more Cuban 1956 Buick mechanics. It's an ugly approach, but the conjecture that, economically _or_ technically, there must be something fundamentally broken with it is very hand-wavy and dubious.

I agree that there will be less code-level innovation overall, just like artistic value production took a big hit when we went from portraits to photographs.


> its value comes from fulfilling its requirements.

The requirements will have to come from somewhere, and they will have to be quite precise although probably higher-level than code written today. You're talking about just a new kind of software engineer. The kind of stuff described at https://martin.kleppmann.com/2025/12/08/ai-formal-verificati... (note the "the challenge will move to correctly defining the specification")

Unless what you have in mind is some sort of Moltbook add-on that the bots would write for themselves.

I'm talking software providing value to humans.


We use OpenTofu; it’s pretty seamless


Now more will be using a combination of OpenTofu and Terraform, and there will probably be some tacit endorsement of OpenTofu by HashiCorp folks in their communication with those who are using both. Good to see!


Does it do ephemeral values yet?


Yep, as of yesterday’s 1.11 release it’s supported!

That also includes a new “enabled” meta argument, so you don’t have to hack around conditional resources with count = 0.

[0]: https://opentofu.org/blog/opentofu-1-11-0/

Disclaimer: affiliated with the project


How do you migrate from count/for_each to `enabled`?


You can just switch from `count = 1` to `enabled = true` (or vice-versa, works back-and-forth) for a resource and tofu will automatically move it next time you apply.

It's pretty seamless.
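A minimal before/after sketch of that migration, based on the behavior described above (the `aws_s3_bucket` resource and `var.enable_logging` variable are made up for illustration; only one form would exist in your config at a time):

```hcl
# Before: the classic count-based conditional, referenced with [0].
resource "aws_s3_bucket" "logs" {
  count  = var.enable_logging ? 1 : 0
  bucket = "example-logs"
}

# After: the same conditional using the 1.11 `enabled` meta-argument.
# Per the comment above, tofu moves the existing resource state
# automatically on the next apply.
resource "aws_s3_bucket" "logs" {
  enabled = var.enable_logging
  bucket  = "example-logs"
}

# References drop the [0] index; the resource evaluates to null when
# disabled, so guard access (see the opentofu.org docs linked below
# in the thread for the safe access patterns).
output "logs_bucket" {
  value = var.enable_logging ? aws_s3_bucket.logs.bucket : null
}
```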


That's cool! We'll still need to change all of the references to `resource[0]`, right? Or does tofu obviate that need as well?


I’m not sure I understand. You refer to the conditional resource fields normally - without list indices. You just have to make sure the object isn’t null.

There’s some samples in the docs[0] on safe access patterns!

[0]: https://opentofu.org/docs/language/meta-arguments/enabled/


And you don't get the annoying array form for the resulting resource with the `enabled` syntax, right?

EDIT: Oh just realized the sibling asked the same, but the doc doesn't state that clearly, although it seems to me that the doc implies that yeah, it doesn't use the array form anymore.


Yes indeed! It does not use the annoying array form.


Worth switching to Opentofu only for this, then! I fuckin hate the count pattern for conditional present/not present that leads to an array of size == 1.


Amazing. Good work!


Damn, might finally be able to use it. The lack of ephemeral values was a major blocker.


It doesn't work for me on Safari either.


works fine on safari desktop for me

