Hacker News | past | comments | ask | show | jobs | submit | khimaros's comments


having burned through easily 10 pairs of Triple Aught pants of various designs, they are well made and attractive, but durability is not an outlier in my experience. each design consistently fails in the same area with regular use. i tend to repurchase the designs that fit and function well, but they all inevitably fail.

Meshcore != Reticulum, but you are right that it is not limited to those platforms.

this isn't quite right. the blobs are produced by GrapheneOS and are reproducible once the source code embargo lifts.


Whoops, nice catch - comment edited.


allocation is irrelevant. as an owner of one of these you can absolutely use the full 128GB (minus OS overhead) for inference workloads
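a sketch of what this looks like in practice on a Linux host with the amdgpu driver (values and the grubby invocation are examples, not from the comment): the BIOS-allocated VRAM split doesn't matter because the iGPU can map system RAM through GTT, and the TTM limits can be raised via kernel parameters if the defaults are too low.

```shell
# raise the TTM/GTT limits so the iGPU can map most of the 128GB of system RAM
# 27648000 pages * 4KiB ~= 105GiB, leaving the remainder for the OS
sudo grubby --update-kernel=ALL \
  --args="ttm.pages_limit=27648000 ttm.page_pool_size=27648000"
```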


Care to go into a bit more on machine specs? I am interested in picking up a rig to do some LLM stuff and not sure where to get started. I also just need a new machine, mine is 8y-o (with some gaming gpu upgrades) at this point and It's That Time Again. No biggie tho, just curious what a good modern machine might look like.


Those Ryzen AI Max+ 395 systems are all more or less the same. For inference you want the one with 128GB of soldered RAM. There are ones from Framework, Gmktec, Minisforum, etc. Gmktec used to be the cheapest, but with rising RAM prices it's Framework now, I think. You can't really upgrade/configure them. For benchmarks look into r/localllama - there are plenty.


Minisforum and Gmktec also have Ryzen AI HX 370 mini PCs with up to 128GB (2x64GB) of LPDDR5. It's dirt cheap: you can get one barebone for ~€750 on Amazon (the 395 similarly retails for ~€1k)... It should be fully supported in Ubuntu 25.04 or 25.10 with ROCm for iGPU inference (the NPU isn't available ATM, AFAIK), which is what I'd use it for. But I just don't know how the HX 370 compares to e.g. the 395, iGPU-wise. I was thinking of getting one to run Lemonade and Qwen3-coder-next FP8, BTW... but I don't know how much RAM I should equip it with - shouldn't 96GB be enough? Suggestions welcome!
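A rough rule of thumb for the RAM question (my own back-of-envelope, not from the comment): weight memory is roughly parameter count times bytes per weight, and you still need headroom for KV cache and the OS on top. The 80B figure below is a hypothetical example, not a claim about any specific model.

```python
# rough rule of thumb: RAM needed ~= weights + KV cache + OS headroom
def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate model weight size in GB for a given quantization."""
    return params_billions * bits_per_weight / 8

# e.g. a hypothetical 80B-parameter model at FP8 (8 bits/weight) needs ~80GB
# for weights alone, so 96GB leaves limited headroom for KV cache and the OS
print(weights_gb(80, 8))    # 80.0
print(weights_gb(80, 4.5))  # 45.0 -- at a ~4.5-bit GGUF quant
```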


I benchmarked unsloth/Qwen3-Coder-Next-GGUF using the MXFP4_MOE (43.7 GB) quantization on my Ryzen AI Max+ 395 and I got ~30 tps. According to [1] and [2], the AI Max+ 395 is 2.4x faster than the AI 9 HX 370 (laptop edition). Taking all that into account, the AI 9 HX 370 should get ~13 tps on this model. Make of that what you will.

[1]: https://community.frame.work/t/ai-9-hx-370-vs-ai-max-395/736...

[2]: https://community.frame.work/t/tracking-will-the-ai-max-395-...
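The estimate above is just the measured throughput scaled by the speed ratio from the linked threads; a quick sanity check of the arithmetic:

```python
# scale measured throughput by the reported speed ratio between the two APUs
max395_tps = 30.0  # measured: Qwen3-Coder-Next MXFP4_MOE on AI Max+ 395
ratio = 2.4        # AI Max+ 395 vs AI 9 HX 370, per the linked benchmarks
hx370_tps = max395_tps / ratio
print(f"estimated AI 9 HX 370: ~{hx370_tps:.1f} tps")  # ~12.5 tps
```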


Thanks! I'm... unimpressed.


The Ryzen 370 lacks the quad channel RAM. Stay away.


Ryzen AI HX 370 is not what you want; you need a Strix Halo APU with unified memory


maxed out Framework Desktop


i have been working on something similar, trying to build the leanest agent loop that can be self-modifying. ended up building it as a plugin within OpenCode with the core pulled out into python hooks that the agent can modify at runtime (with automatic validation of existing behavior). this allows it to create new tools for itself, customize its system prompt preambles, and of course manage its own traits. it also contains a heartbeat hook. it all runs in an incus VM for isolation and provides a webui and attachable TUI thanks to OpenCode.
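a minimal sketch of the hook pattern described above (all names hypothetical, not the actual plugin): the agent rewrites a hook file on disk, and the loop reloads it and validates existing behavior before swapping it in, keeping the old hook on failure.

```python
# hypothetical runtime-modifiable hook loader with validation-before-swap
import importlib.util
from pathlib import Path

HOOK_PATH = Path("hooks/on_message.py")  # file the agent is allowed to edit

def load_hook(path: Path):
    """Load on_message(text) -> str from a python file on disk."""
    spec = importlib.util.spec_from_file_location("on_message_hook", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod.on_message

def validate(hook) -> bool:
    """Automatic validation: the new hook must still pass existing checks."""
    try:
        return isinstance(hook("ping"), str)
    except Exception:
        return False

def reload_hook(current):
    """Swap in the agent-edited hook only if it validates; else keep the old one."""
    candidate = load_hook(HOOK_PATH)
    return candidate if validate(candidate) else current
```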


minimax-m.2 is close


2.5 is out now too.


i meant m2.1, but you are probably talking about kimi, not minimax


No, MiniMax M2.5 is now available on agent.minimax.io. We await the weights still.



this seems to be for custom design services. IANAL, but the libraries and design language seem to be open source and free to use.



Next.js without Tailwind ... why not just make it a fuckin tailwind plugin lol


Because I didn't/don't know tailwind super well.

There is a community plugin, though. https://github.com/jellydeck/liftkit-tailwind


Indeed; if you look at the top nav this is a site that's an agency first and a design system second.

This design system really deserves its own site.


Agreed. New docs are under construction and they'll be posted on a separate website. The agency came first and then liftkit came after, which is why it's hosted on there now. But I'm shutting down agency operations and so the whole thing will be liftkit eventually.


what will we keep our ready power tucked in our vests under now?


does this support oauth tokens for making use of Claude or Gemini subscriptions?

