
But why that much? The first computer I bought had 192MB of RAM, and I ran a 1600x1200 desktop with 24-bit color. When Windows 2000 came out, all of the transparency effects ran great. Office worked fine, as did Visual Studio and 1024x768 gaming (I know that’s quite a step down from 1080p).

What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?


> and I ran a 1600x1200 desktop with 24-bit color

> What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?

It’s not a factor of ten, but a 4K monitor has about four times as many pixels. Cached font bitmaps scale with that, photos take more memory, etc.
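As a back-of-the-envelope check (a sketch; actual usage depends on the pixel format and on how many buffers the compositor keeps per display):

    # Rough framebuffer arithmetic: bytes = width * height * bytes_per_pixel.
    # Assumes 4 bytes/pixel (32-bit color); compositors often keep two or
    # three such buffers per display.
    def framebuffer_bytes(width: int, height: int, bpp: int = 4) -> int:
        return width * height * bpp

    old = framebuffer_bytes(1600, 1200)
    new = framebuffer_bytes(3840, 2160)
    print(f"1600x1200: {old / 2**20:.1f} MiB")  # ~7.3 MiB
    print(f"4K:        {new / 2**20:.1f} MiB")  # ~31.6 MiB
    print(f"ratio:     {new / old:.1f}x")       # ~4.3x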

> When Windows 2000 came out

In those times, when part of a window became uncovered, the OS would ask the application to redraw that part. Nowadays, the OS keeps every window’s pixels around, so it can simply blit them back in.

Again, not a factor of ten, but it contributes.
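To put a rough number on that retained-window memory (a sketch; it assumes every window keeps a full 32-bit backing buffer, and the window sizes are hypothetical):

    # Per-window backing stores under a compositing window manager.
    # Window sizes are made up; real compositors may share or compress buffers.
    windows = [(3840, 2160), (1920, 1080), (1280, 800), (1280, 800)]
    total = sum(w * h * 4 for w, h in windows)  # 4 bytes/pixel
    print(f"{total / 2**20:.0f} MiB of backing stores")  # ~47 MiB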

The number of background processes likely also increased, and chances are you used to run fewer applications at the same time. Your handful of terminals may be a bit fuller now than it was back then.

Neither of those really explains why you need gigabytes of RAM nowadays, though. Then again, they didn’t explain why Windows 2000 needed whatever it needed at the time, either.

The main real reason is “because we can afford to”.


Partly because we have more layers of abstraction. As an extreme example: open a tiny, sub-1KB HTML file in any modern browser and the tab’s memory consumption will still be on the order of tens, if not hundreds, of megabytes. That’s because the browser has to load and initialize its huge runtime environment (JS, DOM, CSS, graphics, etc.) even though that tiny HTML file uses only a tiny fraction of the browser’s features.

Partly because increased RAM usage can sometimes improve execution speed / smoothness or security (caching, browser tab isolation).

Partly because developers have less pressure to optimize software performance, so they optimize other things, such as development time.

Here is an article about bloat: https://waspdev.com/articles/2025-11-04/some-software-bloat-...


Two programmers sat at a table; one was a youngster and the other an older guy with a large beard. The old guy was asked: "You. Yeah, you. Why the heck did you need 64K of RAM?" The old man replied, "To land on the moon!" Then the youngster was asked: "And you, why oh why did you need 4 gigs?" The youngster replied: "To run MS Word!"

Higher res icons probably add a couple hundred megs alone

Well, an uncompressed 512x512 icon at 4 bytes per pixel is exactly one megabyte, so that makes the calculations fairly easy.

But raw imagery is one of the few cases where you can legitimately need large amounts of RAM, because memory use grows with the square of the image dimensions. You only need that raw form in the limited situations where you’re actually manipulating pixel data, though. If you’re dealing with images without descending to the pixel level, there’s little reason to keep them around uncompressed. You generally don’t have more than a hundred icons on screen, and once you start fetching from the slowest RAM in your machine, you get decent speed gains from decompressing on the fly rather than moving the uncompressed form around.
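A minimal sketch of that tradeoff, compressing a synthetic icon with zlib (real icons ship as PNG and compress less well than this solid-color buffer, but the raw-vs-compressed gap is the point):

    import zlib

    # A synthetic 512x512 RGBA icon: 4 bytes per pixel = exactly 1 MiB raw.
    width, height = 512, 512
    raw = bytes([30, 144, 255, 255]) * (width * height)  # solid blue

    compressed = zlib.compress(raw)
    print(f"raw:        {len(raw) / 2**20:.2f} MiB")        # 1.00 MiB
    print(f"compressed: {len(compressed) / 2**10:.1f} KiB")  # ~1 KiB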


You were not prevented from doing anything, but that doesn’t mean others weren’t. For example, OEMs were not allowed to offer any other preinstalled OS as a default option. That effectively killed Be and I’m sure hindered RedHat.

What’s interesting is that mini PCs are dirt cheap. The RAM for them costs as much as, or more than, a barebones Ryzen 7 mini PC.

In January I bought a barebone ASUS NUC, which is relatively expensive among mini-PCs, but I need to run it 24/7 for many years, so I made a choice based on expected reliability.

After adding DRAM and SSDs to it, the barebone was only 40% of the total cost, i.e., the memory and storage cost 50% more than the barebone computer itself.

At that time memory was still cheaper than it is today, so now the price ratio would be even worse. (The barebone NUC had an Intel Arrow Lake H CPU and cost $500, while 32 GB of DDR5 plus 3 TB of SSDs cost $750.)
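The arithmetic, spelled out with the prices above:

    barebone, memory_and_ssd = 500, 750  # USD, from the purchase above
    total = barebone + memory_and_ssd
    print(f"barebone share of total: {barebone / total:.0%}")              # 40%
    print(f"memory/SSD premium:      {memory_and_ssd / barebone - 1:.0%}")  # 50%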


I’ve been using Google Search’s AI and Gemini, which I find generally pretty good. In the past week, though, both have been bringing in details from previous searches and Search AI conversations I’ve had, and it’s extremely gross and creepy.

I was looking for details about cars, and it started interjecting how the safety would affect my children, by name, in a conversation where I never mentioned my children. I was asking about Thunderbolt and modern Ryzen processors, and a fresh Gemini chat brought in details about a completely unrelated project I work on. I’ve always thought local LLMs would be important, but whatever Google did in the past few weeks has made that even clearer.


It's Personal Intelligence in the Gemini settings. I just turned that off last night when it was doing similar things.

macOS has been one of the best keyboard OSes for over a decade, maybe longer. Nearly everything is bindable without additional software or third-party apps, either globally or per-app. A lot of this comes from the deep scriptability that used to be a priority but has fallen by the wayside in recent years.

“This wasn’t just AI generated — it was a paragon of hallucinated AI slop.”

Thinking about it from an individual (not business) point of view, the upfront capital won’t be repaid for 10 years or more and does little to change the value of the lot. The lot’s value is probably dictated mostly by location and capacity. Solar does nothing for location and may even harm capacity. Parking customers might choose a lot if it’s shaded, but ultimately it’s a captive market due to location.

If I owned the lot, I could take on no risk (which may be why the lot was purchased to begin with), or take on a six-figure investment that could bankrupt me if demand for the lot vanished. (I suppose in that case you’d at least be making money selling power back to the grid.)
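A rough payback sketch with entirely hypothetical numbers (install cost, generation, and electricity rates vary a lot by site; none of these figures come from the article):

    # Simple payback for a hypothetical ~100 kW solar canopy.
    capex = 250_000        # installed cost, USD (assumed)
    annual_kwh = 130_000   # yearly generation (assumed)
    value_per_kwh = 0.12   # USD saved or sold per kWh (assumed)

    annual_return = annual_kwh * value_per_kwh
    print(f"simple payback: {capex / annual_return:.0f} years")  # ~16 years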


Not only that, but authors and approvers could be used to track who created and voted for each change.

Then you could compare wording and structure with bills proposed elsewhere to look for single sources pushing an agenda or retrying after earlier failed attempts; a toy version of that comparison is sketched below.
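A toy sketch using Python’s difflib (bill_texts is a hypothetical mapping of bill IDs to full text; a real system would want something sturdier than raw sequence matching):

    from difflib import SequenceMatcher
    from itertools import combinations

    # Hypothetical corpus: bill ID -> full text.
    bill_texts = {
        "TX-HB-101": "An act relating to the regulation of ...",
        "FL-SB-202": "An act relating to the regulation of ...",
        "OH-HB-303": "A bill concerning municipal zoning ...",
    }

    # Flag pairs of bills whose wording is suspiciously similar.
    for (a, ta), (b, tb) in combinations(bill_texts.items(), 2):
        ratio = SequenceMatcher(None, ta, tb).ratio()
        if ratio > 0.8:
            print(f"{a} ~ {b}: {ratio:.0%} similar")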

I never understood running apps in full screen. Unless it's an IDE, video editor, or some other app with tools filling all the nooks and crannies, I want windows that fit the content. I don't want to launch a text or document editor in full screen or read a PDF in full screen. Typically I don't even want to watch a video full screen. I also generally don't want tiling; I want to arrange windows with parts peeking out from behind other windows to reference while I'm working on something else. I want some sense of "space" related to where I left a window.

Why sunset it, though? There’s still floppy, ATAPI, and Zip support.

Every Apple device from the late ’90s to 2012 had FireWire, as did most Sony PCs from the late ’90s to 2009. Googling puts that at over 100M systems with FireWire; there were 50M Zip drives, in comparison.

I know I should probably move on, but I have a lot of FireWire block devices and video equipment. The disk/disc drives can be moved to USB, but the video equipment cannot.


People moved from "good to have" to "better to throw it out because it's unmaintained so it's not secure".

And floppy support is needed for cloud-init, heh.


By "people" you mean the corporate interests.
