Hacker News | belthesar's comments

Nested virtualization can be very handy in both the lab and in production. In the lab, you can try out a new hosting platform by running one atop the other, e.g. Proxmox on VMware, or Hyper-V on KVM. This lets you try things out without needing fresh bare-metal hardware.

In prod, let's say you run workloads in Firecracker VMs. You have plenty of headroom on your existing hardware. Nested virtualization would allow you to set up Firecracker hosts on your existing hardware.
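
For the lab case, a quick way to check whether a Linux host will even allow nested guests is to read the KVM module's `nested` parameter. A minimal sketch (the path shown is the standard one for Intel; on AMD it's `/sys/module/kvm_amd/parameters/nested`, and the function takes the path as an argument so it's easy to test):

```python
from pathlib import Path

def nested_virt_enabled(param_file="/sys/module/kvm_intel/parameters/nested"):
    """Return True if nested virtualization is enabled, False if disabled,
    None if the parameter file doesn't exist (e.g. KVM module not loaded)."""
    p = Path(param_file)
    if not p.exists():
        return None
    # Older kernels report "Y"/"N", newer ones "1"/"0".
    return p.read_text().strip() in ("Y", "1")
```

If this returns False, `modprobe kvm_intel nested=1` (or the kvm_amd equivalent) is the usual way to flip it before standing up the inner hypervisor.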


Perhaps I'm misunderstanding, but wouldn't that case be covered by simply putting some VMs on one vnet and others on another vnet and making them talk to each other? I also can't understand what you mean by "fresh bare metal hardware". In either case you don't need bare metal, whether it's a top-level VM or a nested one.


If you're evaluating VM hosts (Proxmox, Hyper-V, VMware, etc.), you need to have support for nested virtualization all the way down. Otherwise, if you want to evaluate a VM infrastructure, you need to start with bare metal. Really, you just need to make sure that your top level supports nested virtualization, but I understand their point.

However, I think the point about Firecracker VMs in place of containers is a really good use case. Firecracker can provide better isolation, so it would be great to be able to run Firecracker VMs for workloads, which requires that the host (and the VM host above it) support nested virtualization.


Cost-wise, there's a solid chance that the Pi would have been more expensive. Jeff Geerling ran some numbers (^1) on this last year, before the current chip crisis we're in, and it was bad enough already.

Home Assistant does a surprising amount of disk I/O, if for nothing else than logs. Sibling commenters are also advising not running it on the SD card to avoid wearing it out, so there's definitely some truth here. This means we're adding a Pi M.2 HAT + SSD into the mix. The Pi 5 SSD kit for 256 GB, when it was available, was around $60 USD. A Pi 5 with 8 GB of RAM is $130 USD. Now we need a cooler, a case that will fit the Pi 5 with said M.2 HAT, and a power supply. We're already well north of $250 USD, encroaching on $300, and we're not even using the core benefits of the Pi's platform. No need for GPIO pins, tightly integrated cameras or other sensors, none of that.
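
As a back-of-napkin tally using the prices quoted above (the cooler, case, and power supply figures below are my own illustrative estimates, not quoted prices):

```python
# Rough Pi 5 Home Assistant build cost, in USD.
parts = {
    "Pi 5 (8 GB)": 130,                 # quoted above
    "M.2 HAT + 256 GB SSD kit": 60,     # quoted above
    "Active cooler": 10,                # estimate
    "Case fitting the M.2 HAT": 40,     # estimate
    "27 W USB-C power supply": 15,      # estimate
}
total = sum(parts.values())
print(f"Total: ${total} USD")  # lands north of $250 before tax/shipping
```

Exact numbers will vary by vendor, but the point stands: the accessories roughly double the cost of the bare board.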

For all we know, the blog author did this assessment (or trusted the assessment of others, e.g. Jeff) and came to the same conclusion: it wasn't worth the price of entry.

^1: https://www.jeffgeerling.com/blog/2025/intel-n100-better-val...


We tried that in 1998! See Swatch Internet Time: https://en.wikipedia.org/wiki/Swatch_Internet_Time


I loved the idea. However, the main issue was that it completely ignored the date.

While it worked fine in Western Europe, as .beats were based on "Biel Mean Time" (GMT+1), people in the US would e.g. wake up at @584 on March 7 and eat dinner at @125 on March 8.
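
For reference, the conversion is simple: a day is 1000 .beats of 86.4 seconds each, counted from midnight UTC+1 (Biel Mean Time). A quick sketch:

```python
from datetime import datetime, timezone, timedelta

def swatch_beats(dt_utc):
    """Convert an aware UTC datetime to Swatch .beats (0-999).
    One beat = 86.4 s; the day starts at midnight UTC+1 (BMT)."""
    bmt = dt_utc.astimezone(timezone(timedelta(hours=1)))
    seconds = bmt.hour * 3600 + bmt.minute * 60 + bmt.second
    return int(seconds / 86.4)

# Noon UTC is 13:00 BMT, i.e. @541:
print(swatch_beats(datetime(2024, 3, 7, 12, 0, tzinfo=timezone.utc)))  # 541
```

Which also shows the date problem: 23:00 UTC is already @000 of the *next* BMT day.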


The same way that Element does: they host a service for you that relays push notifications to Google's Firebase Cloud Messaging endpoint for Android or Apple's Push Notification service for iOS. I believe ntfy's hosted option is how they offset the costs of hosting this, even though self-hosted instances can take advantage of those servers free of charge.

I think it's reasonable for Zulip to ask for compensation for access to these gateways, since Apple and Google do not make them available to end users free of charge, and the burden of responsibility to ensure that these systems aren't abused is on them. Also, the fact that they offer mobile push notifications for any self hosted server of up to 10 users is pretty generous, and there seems to be a Community plan option for larger servers that includes "groups of friends" as a qualifier. It really seems they're offering quite a bit.


This isn't true; self-hosted Android push notifications in ntfy are provided using a "foreground service" by default (i.e., the app keeps a websocket open and listens), unless you set up Firebase yourself and build a custom version of the app with the cert baked in.

https://docs.ntfy.sh/subscribe/phone/#instant-delivery


So it either drains battery and gets significant delays (as per ntfy's own docs), or it still uses Google's solution. There is no free lunch.


I think you misread: the delays are if you don't use instant delivery. I use it, and it's consistently delivered instantly, which makes sense, since it's a websocket.

As to battery drain, I'm sure it technically does consume more, but according to my phone it's an insignificant amount: <1% of usage which is the lowest stat it gives you. Their docs suggest the same thing:

> the app has to maintain a constant connection to the server, which consumes about 0-1% of battery in 17h of use (on my phone). There has been a ton of testing and improvement around this. I think it's pretty decent now.

https://docs.ntfy.sh/faq/#how-much-battery-does-the-android-...

Honestly it's a good solution that works well with few downsides, the only real one is that iOS doesn't support doing it, but personally I don't have any apple phones so I do get an essentially free lunch.


Google doesn't have any magic way to do instant notifications that nobody else has access to. The only thing they have in this regard is the ability to disable battery optimisations without triggering warnings.

Notification and battery performance is on par with Google's solution, except when an Android build does dumb things to prevent the background activity, in which case notification performance gets worse and battery draw gets worse (not sure why exactly; it's just a common issue).


Well, there is an advantage: if everything uses the one service, then you only need to keep one thing alive to check it, so each new app is "free" if you already have push enabled (assuming push notifications are rare enough that the activity itself isn't the cost). Whereas if each app does it itself, that's going to cause more battery use, so it isn't directly equivalent.

However, it also isn't a big deal, at least in my experience, at least for ntfy.sh.


Listening on a socket doesn't drain any battery when no data arrives unless the app does other things that actually use CPU. That's just what Google/Apple want you to believe so you depend on their proprietary lock in services.


Also, how else would the Google/Apple services do it? Probably via sockets, right? I guess you could do it with a pull-based approach on a timer, but that doesn't seem more efficient to me.


But waiting on multiple sockets for each app is surely more expensive than a single one, isn't it?


A single process waiting on multiple sockets is basically no more expensive than one waiting on a single socket, but if each app has its own background process, that is more expensive. So for best performance you really want to delegate all the push-notification listening for all the apps on a device to a single background process owned by the OS, but it'd be fine for each app to use its own push server (though of course most apps don't actually want to self-host this).
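
The "one process, many sockets" point can be seen with Python's standard `selectors` module: one blocking call watches any number of sockets and wakes only when data actually arrives. The `app-N` labels and socketpairs here are just stand-ins for real push connections:

```python
import selectors
import socket

# One process, one readiness loop (epoll/kqueue under the hood),
# monitoring several idle connections at essentially the cost of one.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]  # stand-ins for push servers
for i, (_, receiver) in enumerate(pairs):
    receiver.setblocking(False)
    sel.register(receiver, selectors.EVENT_READ, data=f"app-{i}")

pairs[1][0].send(b"notification")  # only app-1's "server" sends anything

ready = sel.select(timeout=1)      # sleeps until something is readable
for key, _ in ready:
    print(key.data, key.fileobj.recv(1024))  # app-1 b'notification'
```

The process spends its time asleep inside `select()`; the idle sockets cost file descriptors, not CPU, which is why a single OS-owned listener scales so well.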


From a platform risk perspective, each tenant has dedicated resources, so it's their platform to blow up. If a customer with root access blows up their own system, then the resources from the MSP to fix it are billable, and the after-action meetings would likely include a review of whether that access is appropriate, whether additional training is needed to prevent those issues in the future (also billable), or whether the customer-provider relationship is the right fit.

Will the on-call resource be having a bad time fixing someone else's screw-up? Yeah, and having been that guy before, I empathize. The business can and should manage this relationship, however, so that it doesn't become an undue burden on their support teams. A customer platform that is always getting broken at 4pm on a Friday, when an overzealous customer admin is going in and deciding to run arbitrary kubectl commands, takes support capacity away from other customers when a major incident happens, regardless of how much you're making in support billing.


This is essentially how it is. Additionally, the reality is that our customers don't often even need to think about using root access, but they have it if they want it. They are putting a lot of trust in us, so we also put trust in them.


Musicians who are being threatened by AI impersonating them, flooding the market with music like theirs, and otherwise actually harmed by this would disagree with you. Benn Jordan speaks at length about it in this video: https://www.youtube.com/watch?v=QVXfcIb3OKo


Lutris by default will use an older WINE version (something based on WINE 8, IIRC) for reasons I don't quite understand. You can, however, configure Lutris to use proton-cachyos by default, with which I was able to get Battle.net to install and work correctly without issues. Not sure what feature in later WINE makes it work better, but it works.


"EAC supports Linux, but devs just won't turn it on" is the clickbait answer, but the details are more nuanced. EAC has multiple security levels that a title can set based on the threat model of the game, and most games with heavy MTX that use EAC shy away from it, largely because Fortnite doesn't do it. EAC is owned by Epic, and if Tim Sweeney says that you can't do MTX on Linux safely, then any AAA live services game with in-game MTX is going to shy away from it, regardless of how true the statement actually is.


The Finals has mtx, is protected by EAC, and is playable on Steam Deck.

Throne and Liberty, which is also protected by EAC and has mtx, is also playable on Steam Deck.

So this is bullshit and it clearly shows it's the publisher's choice. What Sweeney thinks has nothing to do with it.


> What Sweeney thinks has nothing to do with it.

I don't know if this is a fever dream or if it actually happened, but I seem to recall reading something about Tim Sweeney using Linux for a week to see how it compared. If he liked it, Epic Megagames would publish titles w/Linux support. He ended up complaining about some irrelevant things in KDevelop and it was pretty clear what his intentions were before even trying things.

I can't find any reference to this online, but I'm pretty sure that it happened. This would have been ~1998.

edit: It may have been Mark Rein?


No, it shows those guys are willing to take the risk and learn the water is fine.

Most aren't.


This. While EAC does support Linux it is nowhere near the level of protection of EAC on Windows.


"MTX" as in, microtransactions?

What do microtransactions have to do with anticheat?


You don't want someone having a skin that you're charging money for, among other things.


Granting items client-side without paying, things like that.


You are only safe if you run Tim's rootkit :)


For use cases like attaching to an SBC or really any other computer, I'm sure this is great, but there are also USB crash cart consoles that can be had pretty cheaply, like the NanoKVM-USB[0] or Cytrence's KIWI[1]. These get you video as well as keyboard and mouse.

[0] https://wiki.sipeed.com/hardware/en/kvm/NanoKVM_USB/introduc...

[1] https://www.cytrence.com/product-page/cytrence-kiwi


Openterface Mini-KVM also works great [1]

[1] - https://openterface.com/


This is my current pick: simple, works exactly as expected, very small. The only thing I ever fight with is remembering to accept macOS's warning about connecting a USB device.

For just video (or w/ separate keyboard/mouse), the Genki Shadowcast devices work really well.


Is there anywhere I can buy a NanoKVM-USB? The page you linked has a 'preorder' page linked, but I'm not sure how long I'd have to wait and whether it's an actual product that people have successfully used.


I see them on Amazon, sold by WayPonDEV. I've bought several NanoKVM brand devices from them and haven't had any problems (yet).


The GL.iNet Comet is much better in my experience.


Can the GL.iNet Comet allow my laptop to control a device without needing a network connection?

That seems to be what the NanoKVM-USB does. But the GL.iNet Comet seems to be KVM-over-IP?


I use the Comet in the field by plugging the Ethernet cable it expects into my laptop. Both ends set up link-local IPs, and it's accessible in the browser without internet.
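
For anyone unfamiliar with why that works: with no DHCP server present, both ends self-assign addresses from the IPv4 link-local block 169.254.0.0/16 (RFC 3927), so they can reach each other with no internet or router at all. A quick check with Python's stdlib (the address here is just an illustrative self-assigned one):

```python
import ipaddress

# An example address a device might self-assign when no DHCP is present.
addr = ipaddress.ip_address("169.254.42.7")
print(addr.is_link_local)                               # True
print(addr in ipaddress.ip_network("169.254.0.0/16"))   # True
```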


You might be able to use something like usbipd to forward the USB port from your target machine?


Different use-cases. The Sipeed product comparable to the GL-RM1 is the NanoKVM Cube. Comparable to the GL-RM10 is the NanoKVM Pro (Desk).

(Of course you could use the Cube on a crash cart, too. Just like you can use the butt of a screwdriver to hammer a nail.)


Aliexpress has them


I use a Cytrence Kiwi myself, really handy bit of kit, I just wish it could do higher resolution, even if it meant dropping the frame rate.

I also have a PiKVM with the switch for network level access which works really well too.


Is there a VGA "story" for these devices? Most of the Dell and HP servers I'm physically proximate to don't have HDMI video. VGA connectors abound on the gear I work with.


Worst case, a VGA-to-HDMI adapter; they're less than 20 bucks, but that's an extra box and cables, of course.


I've had poor luck with the couple of VGA-to-HDMI adapters I've used over the years (latency, poor video quality), so I guess my question was more "Are there any known-working good adapters for VGA for these?"


There shouldn't be more than a few lines of delay in cheap VGA-to-HDMI adapters; they don't even come with one full frame's worth of RAM.


> I've had poor luck with the couple of VGA-to-HDMI I've ever used over the years

Yes, they're terrible, but...

> latency, poor video quality

For a crash cart? Who cares. For everything else...

> Are there any known-working good adapters for VGA for these?

No, you're SOL.


Those both look very nice, but I am disappointed that neither lists support for DP alt mode as an input, despite having a Type-C port on the input side. If I were to buy such a device, I'd want it to be future-proof while also supporting legacy video inputs like HDMI, but these are legacy-only. Good for my old Raspberry Pis and my ancient Sandy Bridge NAS, but these days I only buy computers capable of single-cable operation (with exceptions for power cables for power-hungry devices like desktops).


I feel like this is kind of looking a gift horse in the mouth, especially for the cost of these units. Certainly not impossible to add, but an increase in the BOM vs. the loads of off-the-shelf super cheap HDMI capture chips available, and questionable compatibility (DP Alt Mode is getting better, but plenty of devices still have interesting quirks with it depending on implementation). These devices aren't made with daily driving a system in mind so much as for installation and recovery of a system.

Would it be handy to have this all in one cable on both ends? Sure, absolutely, that'd be killer. I personally don't think it's too big of an ask to use two cables in an installation or recovery case though, and if your devices only have USB-C ports for video out, an active USB-C to HDMI via DP-Alt cable can be had to meet that need.


Came here to endorse the NanoKVM USB. It's a great little device. Wendell made a video[0] on it. The web interface is super handy.

I keep one in my tool bag and I've been meaning to buy a second one for a dedicated crash cart.

I can't speak to the Kiwi or the Openterface as I haven't tried those.

0. https://www.youtube.com/watch?v=SAbyQcpR-yQ


Following the Obsidian model, which I love and support: give folks the best part of the product, offer a paid option to enhance it, but allow folks to use alternatives as first-class options.

