Hacker News | RohMin's comments

Do you think the average person would need this sort of clarification? How many of us would have recommended to walk?


Do you think this is a fundamentally unbridgeable limitation of LLMs? Do you know where we were just a year ago? Can you imagine that this will get better with upcoming releases? It's like when Gary Marcus was confidently stating that AI (at least the current paradigm) would never be able to generate an image of a horse riding an astronaut. (Or full wineglasses, or arbitrary clocks.)


https://www.youtube.com/watch?v=LvW1HTSLPEk

I thought this was a solid take


interesting


I guess with ~50 years of CPU advancements, 3-4 seconds for a TUI to open makes it seem like we lost the plot somewhere along the way.


Don’t forget they’ve also publicly stated (bragged?) about the monumental accomplishment of getting some text in a terminal to render at 60fps.


So it doesn’t matter at all except to your sensibilities. Sounds to me like they are simply much better at prioritisation than your average HN user, who’d have taken forever to release it but at least the terminal interface would be snappy…


I wish Odin could gain more traction


I didn't realize Odin had a similar threading model to Go, with built-in channels. That's pretty neat. Odin might be my next toy language.
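For anyone unfamiliar with the comparison, here's a minimal sketch of the Go pattern being referred to (a goroutine producing values over a channel). This is Go's syntax, not Odin's; Odin's equivalent lives in its core library and the API differs:

```go
package main

import "fmt"

func main() {
	ch := make(chan int)

	// Producer runs concurrently and sends squares over the channel.
	go func() {
		for i := 1; i <= 3; i++ {
			ch <- i * i
		}
		close(ch) // closing lets the receiver's range loop terminate
	}()

	// Receive until the producer closes the channel.
	for v := range ch {
		fmt.Println(v)
	}
}
```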


It’s a great little language. I just wish it had a bigger standard library.


Bigger?! What more do you need?! There are other things on the way as well.


First time I've heard someone say a ThinkPad is bulky.


I should have said "large" then, I could only find a 16" model with the specs I want. It's not bulky in the sense that a gaming laptop is.


Maybe they've only seen the older ones (like the ubiquitous T420) which were admittedly pretty bulky.


I really want to learn his methodology for writing software.


Considering how much software he has actually put out... do you?


His methodology is to put his hands on the keyboard, write a function, a struct, make it compile, make it produce the correct output, make it faster, make it use less memory...


Check the Data Oriented Design talk: https://m.youtube.com/watch?v=rX0ItVEVjHc


I do feel that with the rise of the "reasoning" class of models, it's not hard to believe code quality will improve over time.


The thing is: how much?

0.2x, 2x, 5x, 50x?


It all comes down to a religious faith in AGI or not.

There can't be things that a human can program that AGI cannot program, or it is not "AGI".

While I've never been a true believer in AGI, it seems to go like this: I gain a little faith when a new model comes out, then become increasingly agnostic in the weeks and months after. Repeat.


Who knows? It just needs to be better than the average engineer.


The thing is that this "just" may not happen soon


It needs to be better than the average engineer whose abilities are themselves augmented by AI.


It just needs to be cheaper than the average engineer, you mean.


Doesn't sound like it's improving dramatically.


This is a policy enforced by OpenAI, not OpenRouter


`git switch` focuses on switching branches, while `git checkout` extends further than that (it can also restore files in the working tree).


Right, so for the folks at home who already know and use `git checkout`, no switch needed (no pun intended) as everything already works fine and probably won't be deprecated in the near future.
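To make the split concrete, here's a quick sketch in a throwaway repo (the file name is just illustrative; `git switch` requires git 2.23+):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
echo v1 > file.txt
git add file.txt
git -c user.email=a@b.c -c user.name=test commit -q -m init

# git switch: branch operations only
git switch -c feature     # create a branch and switch to it (like git checkout -b feature)
git switch main           # switch back (like git checkout main)

# git checkout additionally restores files, which switch refuses to do
echo v2 > file.txt
git checkout -- file.txt  # discard the local edit (git restore does this too)
cat file.txt              # prints "v1" again
```

So existing `git checkout` habits keep working; the newer commands just split its two jobs between `git switch` and `git restore`.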


That's actually a really interesting way to leverage that feature. Have you found this easier than other services built specifically for this use case?


I've tried other tools. But I'm on GitHub most days, so it's been a seamless way to keep track of some things that would otherwise disappear into a calendar, or some tool that I don't use as often.

