Hacker News | wallstop's comments

From my experience with game engines and people who really care about CRTs - I believe the effect (confidence: 95%) can be achieved entirely with rendering glue in any of the modern game engines - Unreal, Unity, Godot, etc. Now, whether it ends up being a literal shader, or a shader plus some custom sauce, I'm not sure.

However, I have not tried it myself, so I cannot verify that claim with 100% accuracy. The author ...might have tried? They definitely surveyed the landscape. My read of the article is that they went down this rabbit hole and back-justified it, instead of investing a similar amount of time in a practical solution in a modern engine.

CRT look and feel is a niche full of very passionate and opinionated people.


I've done a version of this in Godot. It's kind of a hack, but it works and it's all Nodes without needing a script. You could do it more easily in a compositor effect but you have to deal more with the guts of the engine.

The basic idea was to use a chain of Viewports and TextureRects, each using the previous ViewportTexture, to do the effect. This is essentially just setting up a chain of framebuffers that I can draw on top of at each step. The first step just runs a simple shader over the whole frame to convert the incoming color to an "energy" value. Then there are two decay viewports that feed each other to decay the old frame and overlay the new one. These just have a decay parameter that I can tweak to get different effects. There's a final Viewport that supersamples everything, because I'm going for more of a vector display look than a CRT. And I can layer on other effects at either the energy stage (like decaying phosphor) or at the final stage (like flicker or screen curve).

Here I've exaggerated the decay quite a bit so you can see the effect: https://www.youtube.com/watch?v=2ZrvcZIfqOI The trails there are entirely from this effect; I'm only rendering the spinning figure. You can also see that where lines overlap, they are brighter than the surrounding lines, because I cap the maximum energy value in the first layer.
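The decay chain described above boils down to a per-pixel exponential fade with a saturation cap. A minimal sketch in Python/NumPy (illustrative only - the real thing is Godot Viewports and shaders, and `decay_step`, `decay`, and `max_energy` are names I made up):

```python
import numpy as np

def decay_step(energy: np.ndarray, new_frame: np.ndarray,
               decay: float = 0.85, max_energy: float = 1.0) -> np.ndarray:
    """One step of the decay chain: fade the stored energy, add the
    incoming frame's energy, and cap the result so overlapping bright
    lines saturate instead of blowing out."""
    return np.minimum(energy * decay + new_frame, max_energy)

# A bright diagonal line drawn once, then fading on later frames.
energy = np.zeros((4, 4))
energy = decay_step(energy, np.eye(4))         # draw the line
energy = decay_step(energy, np.zeros((4, 4)))  # no new input: trail fades
```

Each extra "decay viewport" in the chain is just another application of this step, one frame behind.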


Do you have an example? I’ve done a slightly-more-than-casual search for convincing post-processing and haven’t found success. Filters still tend to retain the sharpness of pixel edges in a way that CRTs don’t, and the contrast doesn’t look right either.

I'm not saying the tech currently exists in a form you can just plop into a project to get the exact CRT look and feel that you want. What I am saying is that you can do it within any modern game engine - you just have to decide exactly what look and feel you want and how to get there.

As an example, I will quote the article:

> Retro Game Engine owns the full frame lifecycle. I decide what the input signals are, what the display does with it, how time affects it, what gets presented and when.

You can replace "Retro Game Engine" in that sentence with "Unity" or "Godot" and it is just as true.


Balatro is a good example. There are lots of different opinions on whether it's a good CRT effect or not.

I think you are underestimating what a filter is allowed to do.

You could build a simulation of the universe, send the frame data to the CRT inside that universe and capture the output of the simulated CRT and show it on the LCD.


Still the best CRT simulation I've seen is in an X screensaver called XAnalogTV. It simulates CRT artifacts as well as NTSC channel cross-talk and analog interference. It amazes me that no one has produced a portable version yet.

This is cool. I took a look at the public GitHub (not too deep), and this appears to use classical neural network training techniques - there is an exposed API, and the author came up with a clever way to encode Doom's state and inputs into it. Well done!


Typically my criteria for "production-ready" is "has been battle-tested in production".

Without any production dogfooding, I consider software (that I write) as "alpha", "beta", or "preview".


Your criteria must be faulty; it shouldn't be tested in production if it wasn't already production-ready.

I think similarly when considering if I’m willing to deploy something though; if there isn’t a long running sample why would I trust it?


I wrote my own spin lock library over a decade ago in order to learn about multithreading, concurrency, and how all this stuff works. I learned a lot!


Edit:

    wallstop@fridge:~$ free -m
                   total        used        free      shared  buff/cache   available
    Mem:           15838        9627        3939          26        2637        6210
    Swap:           4095           0        4095


    wallstop@fridge:~$ uptime

    00:43:54 up 37 days, 23:24,  1 user,  load average: 0.00, 0.00, 0.00


The command you want to use is "free -m".

This is from another system I have close by:

                   total        used        free      shared  buff/cache   available
    Mem:           31881        1423        1042          10       29884       30457
    Swap:            976           2         974
2 MB of swap used, 1423 MB of RAM used, ~29 GB in buff/cache, 1042 MB free. Total RAM: 32 GB.


If you are interested in human consumption, there's "free --human", which decides on useful units by itself. The "--human" switch is also available for "du --human", "df --human", and "ls -l --human". It's often abbreviated as "-h", but not always, since that can also stand for "--help".


Thanks, I generally use "free -m" since my brain can unconsciously parse it after all these years. "ls -lh" is one of my learned commands, though; I type it automatically when analyzing things.

ls -lrt, ls -lSh and ls -lShr are also very common in my daily use, depending on what I'm doing.


So that 2 MB of used swap is completely irrelevant. Same on my laptop:

               total        used        free      shared  buff/cache   available
    Mem:           31989       11350        4474        2459       16164       19708
    Swap:           6047          20        6027
My syslog server, on the other hand (which does a ton of stuff on disk), does use swap:

    Mem:            1919         333          75           0        1511        1403
    Swap:           2047         803        1244
With an uptime of 235 days.

If I were to increase this to 8 GB of RAM instead of 2 GB, but for argument's sake had to have no swap as the tradeoff, would that be better or worse? Swap fans say worse.


> So that 2M of used swap is completely irrelevant.

As I noted elsewhere, my other system has 2.5 GB of swap allocated over 13 days. That system is a desktop system and juggles tons of things every day.

I have another server with tons of RAM, and the kernel decided not to evict anything to swap (yet).

> If I were to increase this to 8 GB of RAM instead of 2 GB, but for argument's sake had to have no swap as the tradeoff, would that be better or worse? Swap fans say worse.

I'm not a swap fan, but I support its use. That said, I won't say it'd be worse - just overkill for that server. Maybe I could try 4 GB, but that doesn't seem necessary if these numbers stay stable over time.


Thanks! My other problem was formatting. Just wanted to share that I see 0 swap usage and nowhere near 100% memory usage as a counterpoint.


> Clean code will slow you down

Hard disagree. In fact, learning how to apply clean code and architectural patterns in game dev has kept projects manageable and on track and done nothing but level up my general software ability.


You can do the above in C#; I haven't written Java in a decade, so I can't comment on that. I don't really understand your argument, though - the options approach is extremely readable. You can also do the options approach in C or C++. The amount of stuff you can slap into one line is an interesting benchmark to use for languages.
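For anyone unfamiliar with the "options" approach being discussed: the idea is to bundle named, defaulted settings into one object instead of a long positional parameter list. A rough sketch in Python (the thread is about C#/Java; `RetryOptions` and `fetch` are hypothetical names, not anyone's real API):

```python
from dataclasses import dataclass

@dataclass
class RetryOptions:
    """Hypothetical options object: every knob is named and defaulted."""
    max_attempts: int = 3
    backoff_seconds: float = 0.5
    raise_on_failure: bool = True

def fetch(url: str, options: RetryOptions = RetryOptions()) -> str:
    # A real implementation would loop up to options.max_attempts,
    # sleeping options.backoff_seconds between tries; here we just
    # show how the call site reads.
    return f"GET {url} (up to {options.max_attempts} attempts)"

# Call sites stay readable: only the non-default knobs are spelled out.
result = fetch("https://example.com", RetryOptions(max_attempts=5))
```

The payoff is that adding a new option later doesn't break every existing call site.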


This is a philosophy, or "opinion", and should not be confused with truth. If the world were 100% evil people and beings, across all of history, forever, the present would look very different than it does now. And none of us knows what the future holds.


There are enough evil people that the present is as bad as it is.


Interesting. When I do not feel up to the task of working - whether the cause is physical, mental, emotional, spiritual, or arbitrary - I use one of my provided PTO days and email the team a short "I will not be showing up to work today" message, without explaining the cause.

I similarly don't bat an eye when a coworker takes off for whatever reason. We're allotted PTO. Why jump through hoops to convince ourselves that it's ok to use it?


I don’t even use a PTO day if I’m just feeling “blah”, as long as I’m available via Slack to answer questions and can attend ad-hoc meetings. There are so many times I’ve had to (or chosen to) work late that I don’t say anything.

I don’t think I’ve taken a “sick day” once since going remote over 5 years ago. But for the last 10 years I’ve been leading initiatives first at startups and then at consulting companies and I mostly have autonomy and the trust to get things done.


It's because the GP doesn't value you as a person or trust you. In that worldview, you cannot allow any autonomy and all time not spent at work must be tightly regulated. It will also spill in other areas, and you can bet the GP is not well liked by their colleagues.


This looks interesting. Maybe I'm not in-the-know, but why would you offload such an important aspect as `sync` to the client instead of building in some protocol to ensure that file integrity is maintained? With this kind of design choice, it seems quite easy to lose data, unless I'm missing something.


From the README:

> A sync process syncs the open disk files once every config.syncInterval. Sync also can be done on every request if config.alwaysFsync is True.
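The tradeoff the README describes - periodic sync versus fsync on every write - looks roughly like this. This is an illustrative sketch only: `Store`, `sync_interval`, and `always_fsync` echo the config names from the README but are not the library's actual API:

```python
import os
import time

class Store:
    """Toy append-only store showing periodic vs. per-write fsync."""

    def __init__(self, path: str, sync_interval: float = 1.0,
                 always_fsync: bool = False):
        self.f = open(path, "ab")
        self.sync_interval = sync_interval
        self.always_fsync = always_fsync
        self.last_sync = time.monotonic()

    def write(self, data: bytes) -> None:
        self.f.write(data)
        now = time.monotonic()
        # Durability vs. throughput: fsync on every write, or only once
        # per interval. With periodic sync, writes made since the last
        # fsync can be lost if the machine crashes.
        if self.always_fsync or now - self.last_sync >= self.sync_interval:
            self.f.flush()
            os.fsync(self.f.fileno())
            self.last_sync = now
```

So with periodic sync the client's exposure to data loss is bounded by the interval, which is why pushing that choice to the caller matters.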

