Hacker News | wookmaster's comments

Because it writes wrong code 100x faster too, and humans are left trying to make sense of it all

it's not always wrong. some of it is wrong. the trick is figuring out which parts are actually correct

Same here: phantom braking on the highway, it randomly turned off in the middle of an intersection turn, and once it didn't get over in time for an exit and decided to brake in the left lane to try to force its way over. While it was fun to try, it's not reliable enough for me to trust. That, and if I lean my head the wrong direction while resting it, I start getting yelled at by it.

Good luck telling that to a judge

It's probably a better system to live by. The government can and will imprison anyone it wants; it has a variety of methods for putting anyone away at any time. Following "god's / natural law," as they put it, is a better guide to whether you'll anger some victim who will call the police on you. Most of the rest of the law is just the excuse the powers that be will use to put you away if they find you threaten their order. The vast majority of victimless-crime laws are selectively "enforced" for the actual reason that you've done something to challenge the ruling class; trying to adhere to them as if they were applied as 'rule of law' is probably irrational.

I make it a point to keep a healthy distance between myself and Imperial Officials.

During any unavoidable interaction with Imperial Officials, I always pretend subservience and submission. I am aware that many others do also.

The result is a large and growing body of people who secretly despise Imperial Officials, while said officials are under the increasingly detached-from-reality impression that everyone loves them. It usually doesn't end well for them.


Real rose-tinted glasses, ignoring all the awful things in previous generations.

They're trying to find ways to lock you in

Can't tell if you're joking, but if not: was -> want

Unfortunately, companies seem to be in panic mode about making ANY offering so as not to become irrelevant, and in the process they're giving AI overall a bad reputation. Everyone made a mad dash and didn't spend enough time making their product well thought out. Some, such as Microsoft, got burned by it.

Everyone seems convinced AI can just replace 90% of the software out there, but I've yet to see any evidence of that. Sure, it can stand up a blog or get a simple app together pretty quickly, but once you get into larger-scale software it's not capable of doing it by itself, and you still need teams of developers working together.


> violently confiscates their wealth just

Sure are jumping to some... conclusions there.

There are plenty of people who got a CS degree and went to work, and this is only a job for them; they have no interest outside of work. Unfortunately I'm not one of those people, so I get off work troubleshooting issues only to troubleshoot issues at home, lol. Though there aren't that many; it's just that my choice to self-host cameras through HomeKit sometimes falls apart somehow. But I'm also squeezing every KB of RAM out of that Beelink I can.

Don't get me wrong, I don't think a homelab is necessary, but I think people who have only done this in a big corporate environment are doing themselves a disservice. Either a small company or a homelab can scratch that itch, but like you say, a lot of people don't have the interest.

It's like a developer who went straight from knowing nothing about programming to JavaScript and never looked back. They missed C, they missed assembly, they missed cycle counting, they missed knowing what their application's memory footprint is at all times, they missed keeping their inner loops tight and in the cache... It's not just "oh, this person doesn't have a nerdy hobby." These are real skill holes in [many] developers' backgrounds, just like knowing how to host something on bare metal + OS is a real skill hole for some devops people.

How do you manage HA?

Backups, litestream gives you streaming replication to the second.

Deployment, caddy holds open incoming connections while your app drains the current request queue and restarts. This is all sub-second and imperceptible. You can do fancier things than this with two versions of the app running on the same box if that's your thing. In my case I can also hot-patch the running app, as it's on the JVM.
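As a minimal sketch of that Caddy setup (domain and port here are hypothetical, not from the comment): Caddy's reverse_proxy retry settings are what let it hold requests while the backend is briefly down during a restart.

```
example.com {
    reverse_proxy localhost:8080 {
        # While the app restarts, keep retrying the backend for up to 5s
        # instead of failing the request immediately; clients just see a
        # slightly slower response, not an error.
        lb_try_duration 5s
        lb_try_interval 250ms
    }
}
```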

If a server hard drive fails, etc., you have a few options:

1. Spin up a new server/VPS and litestream the backup (the application automatically does this on start).

2. If your data is truly colossal have a warm backup VPS with a snapshot of the data so litestream has to stream less data.

Pretty easy to get 3 to 4 nines of availability this way (which is more than GitHub, Anthropic, etc.).
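A sketch of option 1's restore-then-replicate cycle (the paths and bucket name are made up for illustration, not from the thread):

```
# Fresh server/VPS: pull the latest replica down from object storage.
litestream restore -o /data/app.db s3://my-backup-bucket/app.db

# Then run under litestream so every write streams back out, given a
# litestream.yml along these lines:
#   dbs:
#     - path: /data/app.db
#       replicas:
#         - url: s3://my-backup-bucket/app.db
litestream replicate
```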


My understanding is litestream can lose data if a crash occurs before replication to object storage completes. Doesn't that make it an unfair comparison to, say, Postgres in RDS?

Last I checked, RDS uploads transaction logs for DB instances to Amazon S3 every five minutes. Litestream by default does it every second (and you can go sub-second with litestream if you want).
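For reference, that interval is the per-replica sync-interval in litestream's config (path and bucket below are hypothetical):

```
dbs:
  - path: /data/app.db
    replicas:
      - url: s3://my-backup-bucket/app.db
        sync-interval: 1s   # the default; lower it for a tighter loss window
```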

Yes, but there is still a (small) window where confirmed writes can be lost

Right and that window is bigger for RDS by the looks of it.

Interesting - I had not looked deep into this before.

I suppose the difference is RDS has high nines, whereas in the litestream case the frequency of crashes is tied to your application code and deployment process. In practice, won't it take more work to reach the same uptime?


Your understanding is very wrong. Please read the docs, or better yet the actual code.

Please can you link to the relevant guarantees? I read the documentation just today, so I clearly misunderstood something!

> Backups, litestream gives you streaming replication to the second.

You seem terribly confused. Backups don't buy you high availability. At best, they buy you disaster recovery. If your node goes down in flames, your users don't continue to get service because you have an external HD with last week's db snapshots.


If anything backups are the key to high availability.

Streaming replication lets you spin up new nodes quickly with sub-second data loss in the event of anything happening to your server. It makes having a warm standby/failover trivial (if your dataset is large enough to warrant it).

If your backups are week-old snapshots, you have bigger problems to worry about than HA.


> If anything backups are the key to high availability.

Not really. Backups are complementary in disaster recovery. They play no role in high availability. Putting your data in cold storage plays no role in keeping your system up and handling traffic.

> Streaming replication lets you spin up new nodes (...)

You seem to be confused. Replication and backups are two entirely separate things. Replication is used to preserve consistency across a distributed system and improve fault tolerance, whereas backups just means you are able to recover the state of your system at each checkpoint. Either you're using a word while giving it a new personal meaning, or you're confusing concepts.


Depends how you do your backups. If you do them by replicating, they are both. See litestream [1].

With SQLite this is even more obvious, as a database is just a file (or three, in the case of WAL mode). That means you can replicate not just to another machine (or any file system) but to much more resilient object storage like S3 (most cloud providers offer S3-compatible object storage).

- [1] https://litestream.io/how-it-works/

I think you might need to rethink your idée fixe.


No offense: you wait. Like everyone's been doing for years on the internet, and still does.

- When AWS/GCP goes down, how do most handle HA?

- When a database server goes down, how do most handle HA?

- When Cloudflare goes down, how do most handle HA?

The downtime here is the server crashing, routing failing, or some other issue with the host. You wait.

One may run Pingdom or something to alert you.


> When AWS/GCP goes down, how do most handle HA?

This is a disingenuous scenario. SQLite doesn't buy you uptime if you deploy your app to AWS/GCP, and you can just as easily deploy a proper RDBMS such as postgres to a small provider/self-host.

Do you actually have any concrete scenario that supports your belief?


> SQLite doesn't buy you uptime if you deploy your app to AWS/GCP

This is... not true of many hyperscaler outages? Frequently, outages will leave individual VMs running and affect only the higher-order services typically used in more complex architectures. Folks running SQLite on an EC2 instance often will not be affected.

And obviously, don't use us-east-1. This One Simple Trick can improve your HA story.


> This is...not true of many hyperscaler outages? Frequently, outages will leave individual VMs running but affect only higher-order services typically used in more complex architectures. Folks running an SQLite on a EC2 often will not be affected.

You're trying too hard to move goalposts. Look at your comment: you're trying to argue that SQLite is immune to outages in AWS even when AWS is out, and your whole logic lies in asserting the hypothetical outage will be surgically designed to somehow not affect your deployment because it may or may not consume a service that was affected.

In the meantime, the last major AWS outage was Iran blowing up a datacenter. They should have just used SQLite to avoid that, right?


All I'm saying is that people mention HA, when there isn't a need for it or when most people are fine with some downtime. For example,

> When AWS/GCP goes down, how do most handle HA?

When they go down, what do most people do? Honestly, they still go about their day and are okay. Look how many systems do go down. What ends up happening? An article goes out saying X cloud took out large parts of the internet... and that's it.

Even when there are ways of preventing it, things just go down and we accept it. I never said this doesn't or can't go down; it's just that it's okay and totally fine if it does.


> All I'm saying is that people mention HA, when there isn't a need for it or when most people are fine with some downtime.

I don't think it's smart to just cherry pick the design constraints you feel don't apply to you, and proceed to argue others should also ignore them.

Just because you're okay with letting your pet project crash and be out for long periods of time, why do you assume it's okay for everyone else to do the same?

Think about it for a second: what would be the impact of a storefront crashing during a Black Friday-type event? Do you think people don't get fired for dropping the ball in those circumstances? Heck, there are papers documenting how a few extra milliseconds of latency on a store page correlate with measurable drops in revenue, and here you are claiming that having a business crash is no biggie.


There are meteorological seasons already defined by weather shifts.
