
Human minds are built to find patterns, and you should be careful not to assume the rate of improvement will continue forever based on nothing but a pattern.



Just the fact that even retail-quality hardware is still improving significantly at running local LLMs is a great sign. If AI quality stayed the same and the cost of local hardware dropped to $1000, it would still be the greatest thing since the internet, IMO. So even if the worst happens and all progress stops, I'm still very happy with what we got.

>I'm still very happy with what we got

"One person's slop is another person's treasure"

I'm not all that impressed with "AI". I often "race" the AI by giving it a task to do, and then I start coding my own solution in parallel. I often beat the AI, or deliver a better result.

Artificial intelligence is like artificial flavoring. It's cheap and tastes passable to most people, but real flavors are far better in every way, even if they cost more.


Home-made food is better than anything you can buy too. I'm 40, but I still drive 30 minutes to my parents' once a week for dinner, because the food they make feels like the elixir of life compared to the slop I can buy at Trader Joe's, Costco, or most restaurants.

But I'm pretty glad Trader Joe's exists too.


Trust me, Trader Joe's is real food compared to a lot of the toxic waste being sold as food out there.

That crap will fill your belly, but it won't keep you healthy. Your brain is like a muscle: if you stop flexing it, you'll end up weaker.


At their current stage, this feels like the wrong way to use them. I use them fully supervised (even though that feels like fighting the tools), which is kind of the best of both worlds. I review every line of code before I allow the edit, and if something is wrong, I tell it to fix it. It learns over time, especially as I set rules in memories, so the process has sped up to the point that it goes way faster than if I had done the work myself. Not all tasks are appropriate for LLMs, but when they are, this supervised mode is quite fast. I don't believe the output is slop, and either way I feel like I still own every line of code.

The happy path for me is Erlang: thanks to its concurrency model, the blast radius of an error is exceptionally small, so the idiomatic style is to let things crash when they go wrong. So really, you are writing only the happy-path code (most of the time). Combine this approach with some very robust tests (does this thing pass the tests / behave how we need it to?) and you're close to the point of not really caring about the implementation at all.

Of course, I still do, but I could see not caring becoming possible down the road with such architectures.
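A rough sketch of the "let it crash" pattern described above (written in Python for illustration, since the comment is about Erlang; all names here are made up): the worker handles only the happy path with no defensive checks, and a separate supervisor catches crashes and "restarts" the worker with clean state instead.

```python
# Hypothetical sketch of let-it-crash supervision: the worker assumes
# well-formed input; any failure is handled by the supervisor, not by
# defensive code inside the worker.
def happy_path_worker(task):
    # Happy path only: assume the task is a string of digits.
    return int(task)

def supervise(worker, tasks, max_restarts=3):
    """Run each task; on a crash, restart the worker and move on."""
    results, restarts = [], 0
    for task in tasks:
        try:
            results.append(worker(task))
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise  # escalate, like a supervisor giving up
            # drop the poisoned task; the worker resumes with clean state
    return results, restarts

print(supervise(happy_path_worker, ["1", "oops", "3"]))
# -> ([1, 3], 1)
```

The point of the design is that error handling lives in one place (the supervisor), so the happy-path code stays short, and robust tests on observable behavior are what guard the implementation.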


The overall trend in AI performance will still be up and to the right, like everything else in computing over the past 50 years; improvement doesn't have to be linear.

Assuming newer, more efficient architectures are discovered.


