Hacker News

I mean it can also depend on scale. I use hundreds of sub-agent instances to do analysis that I just would not be able to do in a reasonable timeframe. That is a TON of thinking done for me.



Is "not having to think" a good metric now?

While "I don't have to think, I just get the LLM to do the task" is a bit careless (or a "hype" way of putting it)... I'd reckon it's always been true that you want to think about the stuff that matters and the other stuff to be done for minimal effort.

e.g. By using a cryptography library / algorithm someone else has written, I don't need to think about it (although someone has done the thinking, I hope!). Or by using a high level language, I don't need to think about how to write assembly / machine code.

Or with a tool like a spell-checker: since it checks the document, you don't have to worry about spelling mistakes.

What upsets people is the imbalance this creates: tasks which previously required some thought or effort can now be done effortlessly. Stuff like "write out a document" used to signal that effort had been put into it.


I think it could be. It doesn't have to be one or the other.

In my opinion it's entirely comparable to anything else that augments human capability. Is it a good thing that I can drive somewhere instead of walking? It can be. If driving 50 miles means I get there in an hour instead of two days, it can be a good thing, even though it could also be a bad thing if I replace all walking with driving. It just expands my horizon in the sense that I can now reach places I otherwise couldn't reasonably reach.

Why can't it be the same with thinking? If the machine does some thinking for me, it can be helpful. That doesn't mean I should delegate all thinking to the machine.


I guess, how often do you pay someone to fix your car? Repair something in your house? Give you financial advice?

Those are all cases where many people outsource their thinking to other people.


No, you outsource it because it's not your core competency. I think humans should be able to do anything and not narrowly specialise, since narrow specialisation leads to tunnel vision. Sometimes you need to outsource to someone for legal reasons (and rightly so, mostly because the complexities involved do require a professional in that area). Can some things be simplified? Of course they can, and there are many barriers that prevent such simplification. But it's absolutely insane to say: nah, we don't need to think at all, something else can do all the work.

Nobody said "we don't need to think at all" though. The statement was "not having to think", or rephrased: "being able to choose how much to think or what to think about".

There’s both a quality and quantity angle.

For some work, similar to the philosophy example of GP, LLMs can help with depth/quality. It's additive to your own thinking. -> quality approach

For other things, I take a quantity approach: having 8 subagents research, implement, review, improve, review (etc.) a feature in a non-critical part of our code, or investigate a bug together with some traces. It's displacing my own thinking, but that's ok; it makes up for it with the speed and amount of work it can do. -> quantity approach

It’s become mostly a matter of picking the right approach depending on the problem I’m trying to solve.
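The quantity approach above is basically a fan-out: hand the same task to several sub-agents with different roles and collect their results. A minimal sketch in Python, where `run_agent` is a hypothetical placeholder for whatever API call actually drives a sub-agent:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for invoking an LLM sub-agent. A real version
# would call an agent/LLM API here instead of returning a canned string.
def run_agent(role: str, task: str) -> str:
    return f"[{role}] finished: {task}"

# Example roles, mirroring the research/implement/review/improve loop.
ROLES = ["research", "implement", "review", "improve"]

def fan_out(task: str, roles=ROLES) -> list[str]:
    # Run one sub-agent per role concurrently and gather all results.
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        futures = [pool.submit(run_agent, role, task) for role in roles]
        return [f.result() for f in futures]

results = fan_out("investigate flaky login bug")
print(results)
```

Since the real work happens on the API side, threads are enough here; the point is only that the agents run concurrently and you read their outputs afterwards rather than thinking through each step yourself.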



