
I understand the author’s sentiment, but I would like to offer a counterexample:

I like to read philosophy, and after I read a passage and think about it, I find it useful to copy the passage into a decent model and ask for its interpretation, or, if it is something old, to ask about word choice or meaning.

I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights.

Another counterexample: I have never found runtime error traces from languages like Haskell and Common Lisp to be that clear. If the error is not clear to me, sometimes using a model gets me past it quickly.
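
For example, a minimal Haskell sketch (made up for illustration, not a real case from my code) of how opaque a runtime failure can be:

    -- Partial functions like head fail at runtime with a message along
    -- the lines of "Prelude.head: empty list", which points at the
    -- standard library rather than at the call site that is at fault.
    firstWord :: String -> String
    firstWord s = head (words s)

    main :: IO ()
    main = putStrLn (firstWord "")

In a larger program, pasting that message and the surrounding code into a model often pinpoints which call fed in the empty list faster than I can.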

All that said, I think the author is correct that using LLMs should not be an excuse not to think for oneself.




> I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights.

I don't mean to be judgemental. It's possible this is a personal observation, but I do wonder if it's not universal. I find that if I let these models do an inch of the thinking, I instantly become lazy. It doesn't really matter whether they produce interesting output; what matters is that I stop trying to produce interesting thoughts, because I can't help but wonder if the LLM wouldn't have output the same thing. I become TOO output-focused. I mistake reading an interpretation for actually integrating knowledge into my own thinking, and I stop following along with the author.

I love reading philosophy as well. Dialectic of Enlightenment profoundly shaped how I view the world, but there was not a single part of that book that I could have given you a coherent interpretation of as I read it. The interpretations all come now, years after I read it. I can't help but wonder if those interpretations would have been different had my subconscious been satiated by cheap explanations from the lie robot.


Seconding this. Revelation happens subtly, often far removed from what you might later unpick as its "primary source". Immediate interpretations tend to be plastic and shallow.

It might also be hard to grasp for most of us, accustomed as we are to constant stimulation and a lack of space for contemplation and for incorporating information (I recommend the works of the philosopher Byung-Chul Han on the matter), with as-yet-unknown effects on our psyche and creative output. It takes days or weeks to sit and digest novel viewpoints; asking a machine to skip all that work for us is just another example of seeking instant gratification. I have no time to think, do it for me, so I can scroll to the next post already.

I don’t think you are wrong, but isn’t it obvious that you should pick and choose the cases where you want to use an LLM versus doing the work yourself? Seems obvious to me.

Sure, if you want to read a novel, don’t ask an LLM about it.

When you want to learn something quickly, use an LLM, but be aware of how much compression is going on. This is something we do routinely anyway: if I want to know something about taxes, I read the first Google result and get the gist of it. I’m still better off, and I didn’t have to take a full course.


For me it's the opposite - sure, for many outputs I don't need to think, but then I end up thinking on a higher level, and doing even more work.

An analogy would be - if GPS allows you to not worry about which turn to take, you can finally focus on where you want to get.


I mean it can also depend on scale. I use hundreds of sub-agent instances to do analysis that I just would not be able to do in a reasonable timeframe. That is a TON of thinking done for me.

Is "not having to think" a good metric now?

While "I don't have to think, I just get the LLM to do the task" is a bit careless (or a "hype" way of putting it)... I'd reckon it's always been true that you want to think about the stuff that matters and the other stuff to be done for minimal effort.

e.g. by using a cryptography library / algorithm someone else has written, I don't need to think about it (although someone has done the thinking, I hope!). Or by using a high-level language, I don't need to think about how to write assembly / machine code.
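
To make that concrete, a small Haskell sketch (assuming the cryptonite package; purely illustrative, not anyone's actual code):

    {-# LANGUAGE OverloadedStrings #-}
    import Crypto.Hash (Digest, SHA256, hash)
    import Data.ByteString (ByteString)

    -- One line of my code; all the thinking about SHA-256 internals
    -- lives in the library, written by someone who did the thinking.
    digest :: ByteString -> Digest SHA256
    digest = hash

    main :: IO ()
    main = print (digest "hello world")

I get a correct hash without ever reasoning about the compression function underneath.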

Or with a tool like a spell-checker: since it checks the document, you don't have to worry about spelling mistakes.

What upsets people is the imbalance: tasks which previously required some thought/effort can now be done effortlessly. Stuff like "write out a document" used to signal that effort had been put into it.


I think it could be. It doesn't have to be one or the other.

In my opinion it's entirely comparable to anything else that augments human capability. Is it a good thing that I can drive somewhere instead of walking? It can be. If driving 50 miles means I get there in an hour instead of two days, it can be a good thing, even though it could also be a bad thing if I replace all walking with driving. It just expands my horizon in the sense that I can now reach places I otherwise couldn't reasonably reach.

Why can't it be the same with thinking? If the machine does some thinking for me, it can be helpful. That doesn't mean I should delegate all thinking to the machine.


I guess, how often do you pay someone to fix your car? Repair something in your house? Give you financial advice?

Those are all cases where many people outsource their thinking to other people.


No, you outsource it because it's not your core competency. I think humans should be able to do anything and not narrowly specialise, as narrow specialisation leads to tunnel vision. Sometimes you need to outsource to someone for legal reasons (and rightly so, mostly because the complexities involved do require a professional in that area). Can some things be simplified? Of course they can, and there are many barriers that prevent such simplification. But it's absolutely insane to say: nah, we don't need to think at all, and something else can do all the work.

Nobody said "we don't need to think at all" though. The statement was "not having to think", or rephrased: "being able to choose how much to think or what to think about".

There’s both a quality and a quantity angle.

For some work, similar to the philosophy example of the GP, LLMs can help with depth/quality. It’s additive to your own thinking. -> quality approach

For other things, I take a quantity approach: having 8 subagents research, implement, review, improve, review (etc.) a feature in a non-critical part of our code, or investigate a bug together with some traces. It’s displacing my own thinking, but that’s OK; it makes up for it with the speed and amount of work it can do. -> quantity approach

It’s become mostly a matter of picking the right approach depending on the problem I’m trying to solve.


Why would you even read philosophy if you're then consulting a third party for interpretation? That is the definition of meaningless.

That is like listening to music and asking somebody if you liked a song.


Philosophy is often considered an activity of engagement with others in thought and discussion. I don’t see why an LLM can’t play a role there.

I would say it's more like enjoying a song so much that you choose to listen to a cover of that song.


