Hacker News | chillfox's comments

The article explains it. This is not for streaming over the web, but for editing professional grade video on consumer hardware.

DaVinci Resolve is the only commercial NLE with any kind of Vulkan support, and it is experimental.

ProRes decodes faster than realtime single-threaded on a decade-old CPU too.

It doesn't make sense. It's much different from, say, a video game, where a texture is loaded once into VRAM and then, yes, all the work is done on the GPU. A video has CPU IO every frame; you are still doing a ton of CPU work. I don't know why people are talking about power efficiency: in a pro editing context, your CPU will be very, very busy with these IO threads, including and especially in ffmpeg with hardware encoding/decoding. It doesn't look anything like a video game workload, which is what this stack is designed for.


6K ProRes streams that consumer cameras record are still too heavy for modern CPUs to decode in realtime. Not to mention the 12K ProRes that professional cameras output.

How do you figure? Have you tried? The CPU is required for IO. Decoding ProRes is pretty simple, that's why you can do it in a shader in the first place, and the CPU will already be touching every byte when you're using Vulkan.

Yes. I get 300fps decoding 8k ProRes on a 4090 and barely 50fps on a Zen 3 with all 16 cores running. The CPU doesn't touch anything, actually. We map the packet memory and let the GPU read it out directly via DMA. The data may be in a network device, or an SSD, and the GPU will still read it out directly. It's neat.
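The zero-copy idea described here can be sketched in plain Python. This is a hypothetical illustration, not the actual Vulkan path: in the real pipeline the mapping would be imported by the GPU (e.g. via a host-pointer import extension) rather than read by Python.

```python
# Hypothetical sketch: map a file of coded packets so a consumer reads
# the bytes in place, with no intermediate CPU-side copy. In the real
# pipeline described above, the consumer would be the GPU reading the
# mapping via DMA; here it is just Python slicing a memoryview.
import mmap

def map_packets(path: str) -> memoryview:
    """Return a zero-copy view of the file's bytes."""
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    return memoryview(mm)  # the mapping stays valid after the file is closed
```

Slicing the returned view touches the page-cache pages directly instead of copying packet data into an intermediate buffer, which is the property that makes handing the same memory to another device attractive.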

50fps sounds greater than 24fps, which is greater than realtime, no?

> We map the packet memory and let the GPU read it out directly via DMA

packets from where, exactly?

Another POV is: all the complexity of what you are doing, all the salaries and whatever it takes to do this ProRes thing, is competing with purchasing a $1,000 Mac and plugging a cable in.

If your goal is to use a thing in the Apple ecosystem, the solution is the Apple ecosystem. It isn't to create a shader for Vulkan.

I didn't say that streaming 8K ProRes doesn't make sense, even if it were 60fps. I am saying that it doesn't make sense to do product development on figuring out how to decode this stuff in more ways.


That reduces power consumption, so it should improve the battery life of laptops and help the environment a little.

How do you figure? Power consumption will be higher. Decompressing on the GPU with shaders uses the highest power state and every engine (copy, compute and graphics for your NLE). The CPU will be running at full speed for IO and all the other NLE tasks. Using the GPU might be more efficient, but that will not matter once you can decode faster than realtime, which is my whole point.

> Decompressing on GPU with shaders uses the highest power state

Huh?


I recently had to go through several remote desktop apps before I found one that would work.

Nice, that actually looks really good. I was not expecting that.

Usually when there's no screenshot or video for a GUI app, I assume it's because the creator is not proud of it, a sign that the UI will be shit.


We spend a lot of effort on our UI and UX. I dare to say it's one of OpenRocket's defining features. There's still much work to be done though.

Pointing at the menu and saying "that one" works just fine.

Atomic rollback is kinda big for servers.

If you manage enough diverse servers, patching will break something critical fairly frequently. Back when I was a sysadmin, Windows updates would break some server every 2 months, and Red Hat every 6 months.

Being able to just reboot the server back into a working state, and then fix it at a later time would have been nice.
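That atomic-rollback idea can be sketched as a toy, reduced from filesystem snapshots to plain directories and a symlink. This is hypothetical code; snapper and transactional-update do the equivalent with btrfs snapshots, not symlinks.

```python
# Hypothetical sketch of atomic update/rollback: each release lives in
# its own directory, and a single symlink swap (an atomic rename)
# decides which one is "current". Rolling back is just repointing the
# link at a known-good release and rebooting/restarting.
import os

def _repoint(current_link: str, target_dir: str) -> None:
    """Atomically repoint the 'current' symlink at target_dir."""
    tmp = current_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target_dir, tmp)
    os.replace(tmp, current_link)  # rename() is atomic on POSIX

def deploy(releases_dir: str, current_link: str, version: str, files: dict) -> None:
    """Write out a complete new release, then flip 'current' to it."""
    release = os.path.join(releases_dir, version)
    os.makedirs(release, exist_ok=True)
    for name, body in files.items():
        with open(os.path.join(release, name), "w") as f:
            f.write(body)
    _repoint(current_link, release)

def rollback(current_link: str, version_dir: str) -> None:
    """Point 'current' back at a previous, known-good release."""
    _repoint(current_link, version_dir)
```

Because the swap is a single rename, 'current' never points at a half-written release, which is the same guarantee a snapshot-based rollback gives you for the whole root filesystem.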


It's also a big deal for desktops, especially when they're operated by people who ain't experts at troubleshooting software issues. Aeon's my go-to when setting up computers for non-technical folks specifically because I can have it auto-update fearlessly, knowing that the absolute worst case scenario is having to talk someone through booting into a known-good snapshot.

openSUSE already supports booting into a known-working snapshot with btrfs and snapper. I am using the same in CachyOS now.

I started out running FreeBSD on my home servers, then moved to Alpine Linux because all the server software I wanted to run was provided as Docker containers with docker compose examples, so it was just easier. Moving the ZFS pools over to Linux was effortless.

And now I am looking at moving over to k3s (still on Alpine) because everyone is providing Helm charts, so it seems easier.

I really like FreeBSD, but it's just easier to go with the flow.


This honestly sounds like the best proposed solution I have heard.


Agreed. Putting the burden on parents is quite something:

1. You end up being the bad guy; other parents don't restrict their kids' internet usage, etc. Some folks would argue to just not set up restrictions and trust them. But it's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but then creeps abuse that all the time. Even if you trust them to not do anything "wrong", it's a lot to put on their shoulders.

2. If you want to put restrictions in place, even if you're an expert, the tools out there are pretty wonky. You can set up a child-protection DNS, but most home routers don't make it easy (or even allow you) to set a different DNS server. And that's not particularly hard to circumvent. I suppose a proxy would be a more solid solution, but setting that up would be major yak shaving. Any "family safety" features (especially those from Microsoft) are ridiculously complicated and often quite buggy. Right now, I have the problem on my plate that I need to migrate one of my kid's accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts, it seems the button to add the device is just missing? Naturally, the docs don't mention that; I had to do research to arrive at that hypothesis. The amount of yak shaving, setup and configuration you have to do for a reasonable setup is just nuts.

3. If you're not good with tech - I don't see how you have _any_ chance in hell to set up meaningful restrictions.

Some countries are banning social media; sure, that's one thing. But there are a _lot_ of weird places on the internet, and kids will find something else. I for one would appreciate dedicated devices or modes for kids under 18. That would solve all this stuff in a heartbeat.


After struggling with this problem for a while, we started using Qustodio. It's not perfect by any means, but it's the most broadly effective and usable tool for parental control I've found. Loads better than the confusing iOS native screen time tools.


Isn’t this pretty much how everyone uses agents?

Feels like it's a lot of words to say what amounts to: make the agent do the steps we know work well for building software.


I think most of these writeups are packaging familiar engineering moves into LLM-shaped language. In my experience the real value is operational: explicit tool interfaces, idempotent steps, checkpoints and durable workflows run in Temporal or Airflow, with Playwright for browser tasks and a vector DB for state so you can replay and debug failures. The tradeoff is extra latency, token cost and engineering overhead, so expect to spend most of your time on retries, schema validation and monitoring rather than on clever prompt hacks, and use function calling or JSON schemas to keep tool outputs predictable.
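One concrete shape of the "schema validation plus retries" point is sketched below, as minimal stdlib Python. Everything here is hypothetical scaffolding: `call_model` stands in for whatever LLM client you use, and the schema is a toy.

```python
# Hypothetical sketch: validate a model's tool call against a minimal
# schema and re-prompt on failure, feeding the error back much like a
# failing test report. No external libraries, just stdlib json.
import json

REQUIRED = {"tool": str, "args": dict}  # toy schema for a tool call

def validate(raw: str) -> dict:
    """Parse a model's tool call and check it against the schema."""
    obj = json.loads(raw)  # raises ValueError on malformed JSON
    for key, typ in REQUIRED.items():
        if not isinstance(obj.get(key), typ):
            raise ValueError(f"field {key!r} missing or not {typ.__name__}")
    return obj

def run_tool_call(call_model, max_retries: int = 3) -> dict:
    """Ask the model for a tool call, re-prompting until it validates."""
    error = None
    for _ in range(max_retries):
        raw = call_model(feedback=error)
        try:
            return validate(raw)
        except ValueError as e:
            error = str(e)  # the failure report becomes the next prompt
    raise RuntimeError(f"no valid tool call after {max_retries} tries: {error}")
```

Most of the operational effort lands in loops like this one (and in logging each attempt), not in the prompt itself.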


> I think most of these writeups are packaging familiar engineering moves into LLM-shaped language.

They are, and that's deliberate.

Something I'm finding neat about working with coding agents is that most of the techniques that get better results out of agents are techniques that work for larger teams of humans too.

If you've already got great habits around automated testing, documentation, linting, red/green TDD, code review, clean atomic commits etc - you're going to get much better results out of coding agents as well.

My devious plan here is to teach people good software engineering while tricking them into thinking the book is about AI.


G is posting this slop so Anthropic sends him his dinner invitation this month, give him a break.


I can't believe how far down I had to scroll before someone called the OP out for not having actually read the article and just decided to make up their own topic.


Aren’t dependent types replicating the object-oriented inheritance problem in the type system?


No, unless you mean the problem of over-engineering? In which case, yes, that is a realistic concern. In the real world, tests are quite often more than good enough. And since they are good enough they end up covering all the same cases a half-assed type system is able to assert anyway by virtue of the remaining logic needing to be tested, so the type system doesn't become all that important in the first place.

A half-assed type system is helpful for people writing code by hand. Then you get things like the squiggly lines in your editor and automated refactoring tools, which are quite beneficial for productivity. However, when an LLM is writing code none of that matters. It doesn't care one bit whether the failure report comes from the compiler or the test suite. It is all the same to it.
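A tiny illustration of that claim, with hypothetical names: a test that exercises the logic also reports the kind of misuse a type checker would flag, so the feedback channel is equivalent either way.

```python
# Hypothetical example: the test covers the arithmetic, and any caller
# that passes the wrong type (say, a string for `rate`) fails in the
# same suite with a TypeError instead of a compile error.
def total_with_tax(prices, rate):
    return sum(prices) * (1 + rate)

def test_total_with_tax():
    assert abs(total_with_tax([10.0, 20.0], 0.1) - 33.0) < 1e-9
```

To an agent re-running the suite, "AssertionError" and "TypeError: unsupported operand" are equally actionable failure reports.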

