I saw this talk at DEFCON. It seems to combine ideas from IPFS, BitTorrent magnet links, Tor, and things like Scuttlebutt. There was a very strong focus on privacy, including metadata privacy.
The biggest weakness I saw was DoS resistance, which I did not see addressed. There were good approaches to privacy and security, but what if someone with resources (a botnet or money) wants to just burn the network down?
Completely pure P2P systems are hard to protect against Sybil attacks whose goal is simply to degrade service. Just launch a ton of bots, have them act normal for a while, then have them start being tar pits or misbehaving in ways that are carefully designed to maximize errors and latency. Then have them randomly pretend to be normal for a while, toggling good/evil to blend in as much as possible.
Combined with strong privacy, how do you identify and remove these? The alternative is to design a protocol so bulletproof that there is no viable DoS attack, or to use tokenization to impose a high cost on such attacks. Of course the latter leads down the road to all the toxic craziness of the cryptocurrency ecosystem. Add a token and now you have a vector for pump and dump schemes even if you didn't intend it as that.
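There is a middle ground between a tradeable token and nothing: a hashcash-style proof-of-work attached to each request, expensive to produce in bulk but cheap to verify. This is a minimal sketch of that "impose a cost" idea; the difficulty value and function names are illustrative, not taken from any real protocol.

```python
import hashlib
import itertools

DIFFICULTY = 16  # required leading zero bits; raise this to raise the attacker's cost

def solve_pow(message: bytes) -> int:
    """Grind nonces until sha256(message + nonce) falls below the difficulty target."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in itertools.count():
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(message: bytes, nonce: int) -> bool:
    """A single hash to verify -- cheap for honest nodes, so only the flooder pays."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))
```

A Sybil that wants to send a million junk requests now pays roughly a million times the grinding cost, while a normal node barely notices a single solve. The hard part is calibrating the difficulty so you price out botnets without also pricing out phones.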
I can think of many reasons someone might launch such an attack, from troll wars all the way to nation-states trying to kill communication vectors used for espionage or uprisings.
All protocols really have to be designed as if your client is the defense department and they will be used in a war zone… because the Internet is a war zone.
I actually think going the tokenization route is a good idea. As a positive side effect of the cryptocurrency boom, a lot of good research on PoW/PoS-secured distributed systems was done, and it would be a waste not to at least consider using it.
As long as you hide the details deep in the protocol documentation, coin alternative terms for those tainted by cryptobros, and don't mention any relation to cryptocurrency-originated technology, you could probably avoid having parallels drawn between your project and cryptocurrencies.
Sure, someone might cobble together an API and put your token on a cryptocurrency exchange against your wishes, but I think the risk of that is low. It's easy enough to launch your own crypto, and if I'm looking to run a pump and dump scheme, why attach myself to a project that openly distances itself from the crypto scene?
If it can be done, it will be done. Non-consenting projects have been hijacked for pump-and-dumps countless times. Better to design the economics of the token with that in mind than to have them break once the wrapped token inevitably hits DeFi markets and gets spammed across Discord and Telegram chats.
> Completely pure P2P systems are hard to protect against Sybil attacks
I’m happy to report that P2P systems resistant to these kinds of attacks have been implemented already. There are many, many solutions in open-source projects. See the technical details of I2P or Freenet 2023. Safe P2P is possible, although hella hard to get right.
Yes. The protections they have against attacks are not ad hoc. They are based on theory and they scale. One protection strategy (often seen in web-of-trust architectures) is to anchor trust decisions locally, at your own machine, rather than in the network at large. This way it doesn't even matter how much of the network is comprised of bots. Your machine will reject them all.
No, let me describe Web-of-Trust. Think about a P2P communication network. You have a trust level for each peer. You add a few trusted peers as friends beforehand. Now beware, this is a recursive algorithm. Your trust level for your direct friends is 100%. It's 0% by default for non-friends, and then you calculate it as the average trust every friend of yours has in that non-friend. If a friend of yours has added that non-friend as a friend, then you happen to have some level of trust in that non-friend, and if that level exceeds a threshold, you deem them trustworthy for your purpose (info exchange etc.).
This is simplified; for example, to be able to trust a peer who is not a friend of yours or of your friends, you can include the trust of your friends' friends in the calculation, dividing it by a factor to diminish its effect on the result. E.g. multiply your 2nd-level friends' trust levels by 0.9, 3rd-level by 0.8, and so on.
This way, real peers will live in their own isolated bubble network. Bots can't fabricate your trust in them even if a direct friend of yours has a botnet that he added as friends to his node, because no one other than your malicious friend added those bots as friends. That botnet, all trusting each other, will be another isolated bubble of trust with no trust-access to the real-peers bubble.
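To make the recursion concrete, here is a toy sketch of that trust calculation. The graph, peer names, attenuation factors, and threshold are all made up for illustration; a real implementation would also need signed friend lists, caching, and smarter cycle handling.

```python
# Toy web-of-trust: `graph` maps each peer to the set of peers it added as friends.
ATTENUATION = {1: 1.0, 2: 0.9, 3: 0.8}  # dampen trust the farther it travels

def trust(graph, me, target, depth=1, seen=None):
    """Estimate `me`'s trust in `target` (0.0 .. 1.0) by recursive averaging."""
    seen = (seen or set()) | {me}
    if target in graph.get(me, set()):
        return ATTENUATION[depth]        # found as a friend at this depth
    if depth >= max(ATTENUATION):
        return 0.0                        # too many hops away to count
    votes = [trust(graph, friend, target, depth + 1, seen)
             for friend in graph.get(me, set()) if friend not in seen]
    return sum(votes) / len(votes) if votes else 0.0

graph = {
    "alice":   {"bob", "carol", "mallory"},
    "bob":     {"alice", "dave"},
    "carol":   {"alice", "dave"},
    "dave":    {"bob", "carol"},
    "mallory": {"alice", "bot1", "bot2"},  # malicious friend vouching for a botnet
    "bot1":    {"mallory", "bot2"},
    "bot2":    {"mallory", "bot1"},
}

print(trust(graph, "alice", "bob"))   # 1.0 (direct friend)
print(trust(graph, "alice", "dave"))  # ~0.6 (two friends vouch)
print(trust(graph, "alice", "bot1"))  # ~0.3 (only mallory vouches)
```

With a threshold of, say, 0.5, dave (vouched for by two friends) is trusted while bot1 (vouched for only by mallory) is not: the averaging dilutes a single malicious voucher, which is exactly the bubble-isolation effect described above.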
You can use this exact architecture to share and distribute encryption keys to create secure channels with trusted peers. Though you'd have to exchange keys with your friends out of band (air-gapped) for it to be secure, if all the wires are tapped (which is effectively true for the internet).
The failure mode in this architecture is: if all or most of your real-life friends are malicious agents, your node can live inside a botnet trust bubble without you knowing it.
Depends on your definition of secure. If you mean software security against things like buffer overflows and other software vulnerabilities then the size of the machine is completely irrelevant.
If you mean robustness against brute force volumetric and resource exhaustion attacks then yes large systems are going to be more resilient than small systems for the same reason a big boat is harder to swamp with waves than a small one. The whole business model of things like Cloudflare is to put your site behind a gigantic CDN that is just so damn big it can weather volumetric attacks. It's a brute force solution to a brute force problem.
In this case I was talking about more intelligent distributed attacks against distributed protocols. Those types of attacks are very hard to protect against in general.
Most systems deal with it by being closed and isolated, or at least having fairly strict policies about who is allowed to participate (e.g. BGP). In an open P2P network you're going to have to figure out a way to achieve resilience in aggregate without a trust model, which is IMHO an unsolved open problem in protocol design. There's been some progress on countermeasures but no huge breakthroughs.