Hacker News

Everyone in the comments is like, "take a look at this AI tool for Ghidra"

This is indicative of two things.

1. While I can't stand the guy, y'all need to watch Peter Thiel's talk from 10-15 years ago at Stanford about not building the same thing everyone else is, i.e., the obvious thing.

2. People are really attracted to using LLMs on deep thinking tasks, offshoring their thinking to a "Think for me SaaS". This won't end well for you; there are no shortcuts in life that don't come with a (huge) cost.

The person who showed their work and scored A's on math tests, instead of just learning how to use a calculator, is better off in their career/endeavours than the 80% of others who did the latter. If Laurie Wired makes an MCP for Ghidra and uses it, that's one thing; you using it without ever reverse engineering extensively is completely different. I'd bet my bottom dollar that Laurie Wired doesn't prefer the MCP over her own mental processes 8/10 times.



This is correct. In the majority of cases I have to rely on my own expertise.

It's useful for the automation of small repetitive tasks here and there. I was never expecting it to gain the traction that it did; anyone saying they expect it to replace reverse engineers (it won't) is wildly misunderstanding the original intent.

Quite trivial to create binaries that massively confuse LLMs!


I bet red herrings are effective


I wonder if renaming variables to all reference a single movie or book (go through the exe and rename each new variable to the next word or letter in Monty Python and the Holy Grail) would do anything.
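As a rough sketch of the idea (plain Python, not an actual Ghidra script; the symbol list, the word source, and the renaming scheme here are all illustrative stand-ins for whatever a real RE tool's scripting API would provide):

```python
# Hypothetical sketch: map each symbol found in a binary to the next word
# of a chosen text, so every name is fluent English but carries zero real
# meaning for an LLM reading the decompilation.
from itertools import cycle

# Source text whose words become the new symbol names (any long text works).
HOLY_GRAIL_WORDS = (
    "listen strange women lying in ponds distributing swords "
    "is no basis for a system of government"
).split()

def misleading_names(symbols):
    """Map each original symbol to the next word in the text (cycling)."""
    words = cycle(HOLY_GRAIL_WORDS)
    renamed = {}
    for sym in symbols:
        # Suffix with an index so names stay unique, as most tools require.
        renamed[sym] = f"{next(words)}_{len(renamed)}"
    return renamed

# Example: decompiler-assigned names get replaced wholesale.
mapping = misleading_names(["FUN_00401000", "local_10", "param_1"])
print(mapping)
```

In a real workflow you would iterate over the tool's symbol table and apply the mapping through its rename API instead of printing it.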


I was wondering why so many people were suddenly hopping into my humble profession and declaring me redundant. Ah, a YouTube influencer is at the center of it. Makes sense.


lol


> People are really attracted to using LLMs on deep thinking tasks, off shoring their thinking, to a "Think for me SaaS". This won't end well for you, there's no shortcuts in life that don't come with a (huge) cost.

I, too, watched The Sorcerer’s Apprentice. The problem is that I, too, shipped a fuckton of working, reviewed, reworked, tests-and-lint-passing, properly-typed code implementing brand new features from scratch in the last 48 hours, that would have taken me 48 days a year ago.

“thinking” means a lot of different things, and you can indeed outsource a lot of it to other things that can think at different levels of ability than you. This is effectively what an engineering organization does.

Perhaps I haven’t fully offshored my thinking in the sense you mean in that I review all the code and give feedback on the PRs—I still steer. But I think the SOTA will continue to improve until we can indeed oneshot larger and larger tasks.


I actually have zero clue what The Sorcerer's Apprentice is, or what you're getting at. I never said that it isn't useful for dumb tedious tasks that don't require much thought.

I was talking about critical tasks where human nuance is important. Just because an LLM can produce a result does not mean that the result is great. Not everything people work on is "features" delivered via http handlers.

I don't understand this new paradigm where everyone wants to brag about how quickly they get X amount of work done. It's the long-standing belief of pretty much any quality builder that quick != quality, and quick usually isn't necessary. I'm glad your KPIs are great though and your product is getting 2 months of features every two days... The world needs this!


The Sorcerer's Apprentice is a poem by Goethe, or more famously, a sequence from the Disney movie "Fantasia", which you can see here: https://video.disney.com/watch/sorcerer-s-apprentice-fantasi...

The short summary of it is: the sorcerer's apprentice (Mickey) uses magic to get a broom to fetch water for him, and then the situation gets out of control as the broom continues to get water, and he has no idea how to stop it.

(It's a cautionary tale about the danger of playing with forces you don't really understand/"be careful what you wish for".)


None of it is product. It’s all personal free software that I’ve designed over many years that I didn’t have time to implement before.

No KPIs, just useful tools for myself and anyone else who finds them valuable.


In a funny inversion of the normal analogy to machine code and compilers, you could say the same thing about people using decompilation rather than getting gud at reading ARM assembly.


This feels like a bit of a false dichotomy. Just because I give some thinking tasks to an AI doesn't mean I'm sitting there doing nothing while it thinks.


Interpret the intent of the parent's comment more and focus less on finding its critiques. The irony here is that the critique you made is the most obvious one, which also means it is the one that the parent believed you're most likely to understand the implicit context around. I don't think anyone has handed all thinking over to LLMs, it's always somewhere on a spectrum. I think we can assume the parent isn't framing things as a binary outcome. If they were, we should ignore everything they're saying.


I’ve made frameworks that turn a project entirely over to the AI — eg, turn a paragraph summary of what I want into a book on that topic.

Obviously I get much less out of that — I’m not denying the tradeoff, just saying that some people are all the way to “write a short request, accept the result” for (certain) thinking tasks.


Sure but even that falls on the spectrum. The request requires some thinking. So if we're not being pedantic then people will criticize because natural language isn't


I think it’s a difference in kind, ie, if we return to above[0] and the discussion about “outsourcing our thinking” — then it deeply depends on what we hope to accomplish. That’s what I was originally intending to convey: that people are actually inhabiting the space you used as an extreme because they’re operating in a different mode.

That is, we seem to be conflating different cases - ie, being an expert versus hiring an expert. A manager and an SDE get different utility from the LLM.

I think I expressed it poorly, but we need to consider that outsourcing thinking entirely can be the right answer, in the way that subcontracting or outsourcing or hiring itself can be; and we seem to get caught in a "spectrum" or false-dichotomy discussion (i.e., "is outsourcing good or bad?"), when the actual utilization of LLMs interacts in a complex way with the diversity of roles and needs that humans themselves have. And the impact on acquired expertise is only one aspect, for which "less work, less learning" is both true and too simple.

[0] - https://news.ycombinator.com/item?id=47040091


I'd say _this_ is the comment guilty of making a false dichotomy.


Do you have a background in reverse engineering?


You literally have a blog post called "AI can only solve boring problems"

Are you just trying to argue for the sake of arguing?


What does my blog post have to do with anything? (But since you mention it - a large part of reverse engineering falls under the "boring" category I define in that article)


A VC might want variety and advise people he will vote with his dollars for variety, because he's not funding the same thing as everyone else is.

Being first and the winner requires a lot to line up, so it shouldn't be the only, default, or best setting. Pursuing it is one optimization, not a requirement.

Also a message from 10-15 years ago might not reflect the same context as today.


"A VC might want variety and advise people he will vote with his dollars for variety".

In other words, what's good for Peter Thiel might not be good for you.


Yup. Therefore postulating it as a truth or standard is ok if that's what you agree with and want to also pursue, but it's important to keep in mind that valid goals are a spectrum.




