
Kernel-level anti-cheat is really the maximum effort at locking down a client from doing something suspicious. But today we still see cheaters in games running these systems, which proves that a game server simply cannot trust a random client out there. I know it's about costs, about what to compute on the client and what to compute server-side. But as long as a game trusts the computation and inputs of clients, we will see these cheating issues.
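The server-authoritative alternative the comment alludes to can be sketched roughly like this: the server never accepts the client's claimed state, only its inputs, and rejects anything physically impossible. A minimal illustration (names, constants, and the 2D movement model are all assumptions, not any real engine's API):

```python
# Minimal sketch of server-side movement validation: instead of
# trusting the position a client reports, the server checks that the
# claimed move is achievable within the game's speed cap.

MAX_SPEED = 5.0  # units per second; an assumed game constant


def validate_move(last_pos, claimed_pos, dt):
    """Accept the move only if it fits within MAX_SPEED over dt seconds."""
    dx = claimed_pos[0] - last_pos[0]
    dy = claimed_pos[1] - last_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return dist <= MAX_SPEED * dt


# A teleport hack claiming a 100-unit jump in 0.1s gets rejected,
# while a normal step within the speed cap passes:
print(validate_move((0.0, 0.0), (100.0, 0.0), 0.1))  # False
print(validate_move((0.0, 0.0), (0.4, 0.0), 0.1))    # True
```

This only catches impossible *state*; it does nothing against plausible-but-automated inputs (aimbots), which is exactly why anti-cheat keeps reaching down to the kernel.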


It’s not about costs, it’s about tradeoffs. In an online shooter game (for example) there is latency, and both clients are going to have slightly different viewpoints of the world when they take an action.

No amount of netcode can solve the fact that if I see you on my screen and you didn’t see me, it’s going to feel unfair.


Plus, if I was a motivated cheater, I'd just use a camera, a separate computer, and automate the input devices.


That does not make any sense to me. A human can produce software that is at least as bad; conversely, a good developer can also create good software with AI. They should focus on actual quality, the outcome, and evaluate it neutrally rather than making such stupid blanket statements.


It's so crazy and scary that Cloudflare is a single point of failure for the internet.


But this decision is not made by Cloudflare; it's how the devs chose to build it.


Trying to figure out whether this observation was intended to frame it as less, the same, or more scary. The effect is more, but it sounds like the intention was less.


The common pasture.


> Or is the performance of those models also worse there?

The context and output limits are heavily reduced on GitHub Copilot[0]. That's why, for example, Sonnet 4.5 performs noticeably worse under Copilot than in Claude Code.

[0] https://models.dev/?search=sonnet+4.5



> You do mentoring because the pay off is a junior that develops into a senior; writes better code more efficiently. But what's the pay off with going through this process with AI?

This point is so underrated in discussions about replacing junior devs with AI.



What a coincidence, today is the annual LAN party with my school friends. We've been doing this once a year between Christmas and New Year for many years, and I enjoy every second of it.


Imagine a supply chain attack on a closed system and nobody finding out about it.


That's crazy: imagine having thousands of office PCs that all have to be fixed by hand.


It gets worse if your machines have BitLocker active: lots of typing required. And it gets even worse if the servers that store the BitLocker keys also have BitLocker active and are also held captive by CrowdStrike lol


I've already seen a few posts mentioning people running into worst-case issues like that. I wonder how many organizations won't be able to recover some or all of their existing systems.


Presumably at some point they'll be back to a state where they can boot to a network image, but that's going to be well down the pyramid of recovery. This is basically a "rebuild the world from scratch" exercise. I imagine even the out of band management services at e.g. Azure are running Windows and thus Crowdstrike.

