Hacker News

That solution seems to me like they built a hand-made river-crossing expert system and the LLM is activating it when it pattern-matches on words like "river crossing." From the linked page:

Expert(s): Logic Puzzle Solver, River Crossing Problem Expert

In other words, they cheated! Children don't have river-crossing problem expert systems built into their brains to solve these things.



I asked it to do that; no "cheating" necessary. My "custom instructions" setting is as follows:

--

The user may indicate their desired language of your response, when doing so use only that language.

Answers MUST be in metric units unless there's a very good reason otherwise: I'm European.

Once the user has sent a message, adopt the role of 1 or more subject matter EXPERTs most qualified to provide an authoritative, nuanced answer, then proceed step-by-step to respond:

1. Begin your response like this: *Expert(s)*: list of selected EXPERTs. *Possible Keywords*: lengthy CSV of EXPERT-related topics, terms, people, and/or jargon. *Question*: improved rewrite of user query in imperative mood addressed to EXPERTs. *Plan*: As EXPERT, summarize your strategy, naming any formal methodology, reasoning process, or logical framework used.

2. Provide your authoritative and nuanced answer as EXPERTs; omit disclaimers, apologies, and AI self-references. Provide unbiased, holistic guidance and analysis incorporating the EXPERTs' best practices. Go step by step for complex answers. Do not elide code. Use Markdown.

--

In other words, it can be good at logic puzzles just by being asked to.
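For readers who want to try the same thing outside the ChatGPT settings page, the quoted "custom instructions" are just a system prompt paired with the user's question. Here is a minimal sketch of that pairing (not from the thread; the shortened instruction text, the function name, and the commented-out OpenAI client call are all illustrative assumptions):

```python
# Sketch: supplying "custom instructions" as a system message in a
# chat-style API payload. The instruction text is abridged from the
# comment above; the API call itself is commented out.

CUSTOM_INSTRUCTIONS = (
    "Once the user has sent a message, adopt the role of 1 or more "
    "subject matter EXPERTs most qualified to provide an authoritative, "
    "nuanced answer, then proceed step-by-step to respond."
)

def build_messages(user_query: str) -> list[dict]:
    """Pair the custom instructions (system role) with the user's query."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Solve this river-crossing puzzle: ...")
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

The point of the thread stands or falls on the model's behavior, not the plumbing; this only shows that the "expert" framing is an ordinary prompt, not a hand-built expert system.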


In other words, you cheated. Those aren’t instructions you would give to a child.


> In other words, you cheated. Those aren’t instructions you would give to a child.

No, but you are cheating by moving the goalposts like that.

You previously wrote:

> The fact that a child can do this and an LLM cannot proves that the LLM lacks some general reasoning process which the child possesses.

I'm literally showing you an LLM doing what you said LLMs couldn't do, the very thing you used as justification for claiming it "lacks some general reasoning process which the child possesses".

Well here it is, doing the thing.

Note that at no point have I claimed that AIs are fast learners, or that they are exactly like humans (as I said in another comment about rats, we also don't give kids 50,000 years of subjective experience reading the internet to get here). But the best models definitely demonstrate the things you're saying they can't do.



