
What I have observed is that if you don't know what the issue is, the LLM will usually suggest something unnecessarily complex and not ideal.

It might work, but the moment something fails, the LLM suggests hacks instead of a real solution.



