Since decentralization (which is not a recent invention) is in fashion, I would like to say a few words about the methods that exist to stop it, and about how it has already been stopped before. It happened to SMTP, the first truly decentralized protocol, and the scene repeated itself with SIP.
The concept is simple: if someone is forbidden to do something, he will try to escape from the cage. If states, in their desperate attempt to read e-mail, had simply forbidden users from using SMTP to its fullest extent (more on that later), there would have been a rebellion.
But the strategy used to crack down on the network is not based on bans. It is based on obstacles. If the cage is made comfortable and life outside it uncomfortable, practically all the lazy will stay inside. And the bars of the cage will be made of their own laziness.
This is exactly how SMTP was reduced from a completely decentralized, federated protocol to a completely centralized one: by making it increasingly uncomfortable to use.
States and much of the industry wanted e-mail centralized, so that there would be only one server to log into in order to read it. But SMTP could have been very different.
First of all, SMTP has been implemented in a completely stupid way at the client level. Let me explain.
On any client, you have an "outgoing SMTP server". What does this server do?
- It takes charge of your message.
- It queries DNS for an MX record to find out which server to deliver the message to.
- It delivers the message to that server, which stores it at the destination and then hands it to you via other protocols.
This made sense in a world of modems and an embryonic network where servers were not always connected: someone had to queue the message if a link in the chain was not online, or not available, at that moment.
But nowadays there is no need for this "outgoing server". The e-mail client could easily query DNS itself and deliver the mail directly to the destination SMTP server. The destination server will almost certainly be up, so there is no need for an outgoing server to hold a queue while waiting for it to come online. That scenario no longer exists: servers are always online, or almost always.
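The client-side logic would be trivial. Here is a minimal sketch: the MX records are passed in by the caller, since the standard library has no MX resolver (a real client would obtain them via a DNS library such as dnspython); hostnames are illustrative, and the delivery function is shown only to make the point, not as a production mailer.

```python
import smtplib

def pick_mx(mx_records):
    """Given (preference, hostname) pairs from a DNS MX lookup,
    return the hostname with the lowest preference value,
    i.e. the server the mail should be tried first."""
    return min(mx_records)[1]

def deliver_direct(sender, recipient, message, mx_records):
    """Deliver a message straight to the recipient's MX host,
    with no 'outgoing SMTP server' in between."""
    target = pick_mx(mx_records)
    with smtplib.SMTP(target, 25) as smtp:   # direct connection to the destination
        smtp.sendmail(sender, recipient, message)
```

That is the whole "outgoing server": one DNS query and one TCP connection, both of which the client could perfectly well do itself.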
On the receiving side things are no better. It was decided to store everything on the server, and then other protocols (POP3, IMAP4) were invented to cover the last stretch. This happened because Unix had (and has) its own conventions for mail, and because early SMTP servers did not support authentication.
But today there would be no need for such protocols: the mail client could simply
- connect to its SMTP server,
- send the ETRN command to request the mail queued for it on that server, still using the SMTP protocol,
- empty its user's queue from the server.
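The dialogue the steps above describe can be composed offline. ETRN (RFC 1985) asks the server to start delivering the mail it has queued for a given domain; the hostnames below are made up, and this builds the command sequence only, without a live connection:

```python
def etrn_session(client_host, domain):
    """Build the SMTP command sequence a client would issue to ask
    the server to flush the mail queued for `domain` (RFC 1985 ETRN).
    The queued mail then arrives back over plain SMTP."""
    return [
        f"EHLO {client_host}",   # identify ourselves, learn the extensions
        f"ETRN {domain}",        # please start delivering our queue
        "QUIT",                  # done; no POP3 or IMAP involved
    ]
```

Over a live connection, the standard library could issue the same command with `smtplib.SMTP(...).docmd("ETRN", domain)`.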
What went wrong with all this? Why have email clients never evolved in this sense?
What went wrong is this: in a world where e-mail worked that way, the only way to intercept it would be to control the destination server, or to have the recipient's credentials. Since police forces and executives wanted to intercept mail while knowing who sent it, and since it would have been too easy for anyone to use a server in some country that does not cooperate with investigations, the absurdity of the "outgoing SMTP server" was kept alive.
How did they do it?
- No client is capable of querying the MX record and delivering the message straight to the recipient's server.
- Even if one were, some anti-spam genius suggested blacklisting "residential IPs".
All in all, the first obstacle would be easy to circumvent: either by running a small SMTP server on your own machine, or by adding to some open-source client the few lines of code needed to select the right destination server.
But then the sheriffs arrived. The sheriffs decided that to block spam (something that would have been easy using certificates, but that is another matter) it was necessary to create gigantic blacklists of IP addresses, with the precise purpose of preventing individual users from sending e-mail on their own. The result is that today, if you host an SMTP server at home, you cannot send mail except through an "outgoing SMTP server", since only ISPs are allowed to send mail to one another.
Did this block spam? Is your spam folder empty? Obviously not. But then, that was never the goal. The goal was to prevent you from sending mail directly to its destination.
No progress was made on reception either. It would have been enough for the client to connect, authenticate, and issue an ETRN; that would have emptied the queues. But that was exactly what nobody wanted: IMAP and POP3 encourage the user to leave mail on the server, where an agent or a company can read it. Emptying a queue, by contrast, implicitly means removing the messages, whereas with IMAP4 and POP3 the default is to leave them on the server, and you have to configure something extra to delete them.
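To make the contrast concrete, here is a sketch of a client that treats the remote mailbox as a queue: download everything, then delete it from the server, so no copy is left behind to inspect. The fetch/delete callables are injected so the logic can be shown without a live server; with the standard library they would be the `retr` and `dele` methods of a `poplib.POP3` connection, and the server name would of course be your own.

```python
def drain_queue(count, fetch, delete):
    """Queue semantics, the opposite of the IMAP/POP3 default:
    every message is downloaded AND removed from the server.
    `fetch(i)` returns message number i, `delete(i)` removes it
    (POP3 numbers messages starting from 1)."""
    messages = []
    for i in range(1, count + 1):
        messages.append(fetch(i))
        delete(i)                # nothing stays on the server
    return messages
```

With poplib this would be roughly: connect, authenticate, `n = len(conn.list()[1])`, then `drain_queue(n, conn.retr, conn.dele)` followed by `conn.quit()` to commit the deletions.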
Moral of the story: a protocol that was meant to be decentralized, and could have been COMPLETELY decentralized, was centralized onto ISPs and providers such as Gmail and Outlook, with spam as the excuse. Has the spam problem been solved?
But did this make what I describe impossible? Absolutely not. It just made it uncomfortable. With various workarounds I can still use SMTP in a "decentralized" way, that is, with my home server. But those workarounds are uncomfortable, and they have to be set up and maintained.
If you are lazy, use your ISP.
The second example is SIP. The SIP protocol, born mainly for VoIP calls, can actually transport almost anything when combined with SDP sessions, including, potentially, radio and TV. It was the most versatile of protocols, and it was federated just like SMTP, with the difference that DNS was not given a dedicated record type to describe the destination server; the generic SRV records were used instead. In addition, registrar servers are used: users register with them, and the server simply hands each user the IP and port of the other.
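For comparison, this is roughly how the two lookups appear in a DNS zone (hostnames are illustrative): mail got its own dedicated record type, while SIP reuses the generic SRV mechanism, whose fields are priority, weight, port, and target.

```
; mail: a dedicated record type (MX) points at the destination server
example.com.            IN MX   10 mail.example.com.

; SIP: the generic SRV mechanism instead
_sip._udp.example.com.  IN SRV  10 60 5060 sip.example.com.
```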
In practice, the pattern is this: if Goofy and Pluto want to talk, they first call Mickey Mouse and register. Then Pluto asks Mickey Mouse: "you who know everything, can you tell me where to find Goofy?" Mickey Mouse answers: "sure, you'll find him at 22.214.171.124:5678". Pluto then contacts Goofy directly, and they talk to each other.
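The registrar pattern above fits in a few lines. This is a toy model, not the actual SIP REGISTER/INVITE message format, and the addresses are made up; the point is that the registrar only maps names to endpoints, and once the lookup is done it is out of the loop.

```python
class Registrar:
    """Toy SIP registrar: a name-to-(ip, port) directory.
    After both parties look each other up, the call itself
    flows peer to peer, without the registrar."""
    def __init__(self):
        self.bindings = {}

    def register(self, user, ip, port):
        self.bindings[user] = (ip, port)

    def lookup(self, user):
        return self.bindings.get(user)   # None if never registered

registrar = Registrar()
registrar.register("goofy", "203.0.113.10", 5678)
registrar.register("pluto", "198.51.100.7", 5060)

# Pluto asks where Goofy is, then contacts him directly.
where = registrar.lookup("goofy")
```

This is precisely what made the overseers nervous: nothing in the middle carries the conversation.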
Horror! I can imagine the desperation of the overseers. On top of that, ISPs feared this protocol would take away their voice-call revenue.
How was SIP stopped? Just as SMTP was stopped, but with different tricks.
- ISPs occupied the SIP port on every router, implementing SIP themselves on the device.
- ISPs did not allow the registration of custom addresses on their SIP servers, allowing only telephone numbers.
- ISPs implemented SIP VoIP in the clear, in the vast majority of cases, so as to make interception easier.
What does this mean? It means that if you have a VoIP router, you cannot open port 5060, because the small SIP server running on the router itself has normally already taken it, and you cannot touch it. (Only on some routers, such as the Fritz!Box, can you attach custom SIP servers and receive calls on the home phone.)
And if you want to use dynamic DNS to run your own SIP server and make calls that way, you will soon discover that in most cases you cannot, at least not on the standard port.
Here too, however, a loophole exists: in the vast majority of cases the SIP proxies running on your router occupy only the plaintext port, 5060, leaving the encrypted one (5061) free. So you can still keep your SIP server at home, as long as you do all the work required to obtain a certificate from Let's Encrypt.
Once again, laziness is exploited.
At this point you might ask: why not make a ready-made image to download, put on a Raspberry Pi, and turn into your own home phone/mail/whatever box?
The truth is that it would be prohibited. That is, you can do it as an individual, but in almost all of Europe (and in the USA) it is prohibited to sell products whose explicit purpose (or side effect) is to evade lawful interception.
I'm not kidding. If I, as some random nobody, build myself a box with a SIP server that encrypts transmissions and an SMTP server that encrypts mail in transit and deletes the data, no problem. But in the end people would say: "who the hell are you, and why should I trust you?"
Then I might say: "I'll found a company that builds easy-to-install, easy-to-use freedom boxes, and I'll sell them." It is not that simple. The company would immediately be contacted by the government, which would remind it that BY LAW it must provide a backdoor for lawful interception. And this, in different flavors and versions, is implemented in every legal system in the world: in different forms and ways, sometimes with single, explicit laws and sometimes indirectly.
In the US, for example, they want to pass a law saying "encryption protocols must contain a backdoor for the FBI". Such a brutal approach unleashes protests. In other countries (such as the European ones) the same thing is expressed differently, with wording like "the provider of telematic communication services and products must, on request, allow the magistrate to intercept communications". Which is the same thing, but triggers no protest.
In the case of a company selling a "freedom box" like the one I describe, the state would immediately demand that magistrates be able to read all the mail, intercept the calls, and so on. So no, it would not be possible to make users autonomous, because too much "surveillance" would become impossible once they were.
The response of the libertarian part of the net to this whole disaster was the creation of darknets. But don't worry: those have already been contained too. Tor is a product of the U.S. Navy, so deluding yourself that it is out of control is ridiculous. Darknets like I2P, Freenet, Mnet, HayStack, MUTE, JAP, Mixminion/Mixmaster, MorphMix, and Retroshare are never mentioned in the mainstream press (although Freenet hosts content I would call "creepy", it has never drawn the interest we have seen for the Tor markets), and therefore they have little following.
What remains are VPNs, and I must say I find Linus Torvalds' decision to bring WireGuard into the kernel curious. It must be said that this is a fatal blow to all the other OSS VPN software: being in the kernel of every Linux machine, and being fairly trustworthy code (just over 4K lines), it will become the de facto standard for VPNs very soon.
The ability to create a VPN with four or five convenient commands, without the complexity of the older tools, will certainly attract many vendors, and many users too. The problem? The problem is that WireGuard works only at layer 3.
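To give an idea of how little is needed, a WireGuard tunnel is essentially one short config file plus `wg-quick up wg0` (the keys below are placeholders, and the addresses are just an example):

```ini
# /etc/wireguard/wg0.conf -- minimal sketch, keys are placeholders
[Interface]
Address = 192.168.99.1/24        # layer 3 only: you get an IP subnet,
PrivateKey = <server-private-key>  # not an Ethernet segment

[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 192.168.99.2/32
```

Note the limitation stated above: the `[Interface]` gives you routed IP, nothing below it.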
If you want to create a GRE-style network, that is, a real private network at layer 2 (as if you were plugged into a switch with a cable that encrypts, like tinc ( https://www.tinc-vpn.org/ ), or like OpenVPN in TAP mode), you have to struggle quite a bit: you have to build a gretap tunnel and attach it to the wg interface you created.
Below, the commands on the server (whose wg0 address is 192.168.99.1):

ip link add gre1 type gretap remote 192.168.99.2 local 192.168.99.1
sleep 2
ip link set up dev gre1

And on the client (whose wg0 address is 192.168.99.2):

ip link add gre1 type gretap remote 192.168.99.1 local 192.168.99.2
sleep 2
ip link set up dev gre1
Although easily scriptable, it becomes a fair pain in the ass, but in the end you can even build something like a mesh network.
But once again, to have the full solution, the one in which you own YOUR network and do what you want with it, you need to overcome laziness. It is easier to buy a VPN online and hope that what the advertising says is true.
In any case, you could certainly build a box that does all this and creates a truly private overlay network, but, as I said, a company that did so would be illegal and would be shut down immediately. Or forced to provide a backdoor.
Now let's turn to the Fediverse. Here too I see the sheriffs at work. First of all, the platform is being centralized, to the point that the universe is referred to not as the "Fediverse" but, increasingly, as "Mastodon". Although there are a dozen platforms, someone has been "helped" (don't tell me it's the work of two programmers, for God's sake!) and Mastodon is becoming the de facto standard.
But the absurd thing is that the situation today is very centralized: https://fediverse.network/?count=users
If you consider that the entire population of this universe is four or five million people, the fact that a single instance holds one million of them shows how centralized it is. And if you look at the platform, you realize that if some government entity persuaded its developers to add a backdoor, it would gain control of the network, or almost.
Once again, the phenomenon is being curbed by exploiting the laziness of the masses, who do not want to install a pod in their own home. And as I said, a company that offered for sale a freedom box with Pleroma on it, ready to run on any cheap hardware, would be shut down immediately.
Ultimately, then, a way of manipulating the network and locking users into cages exists, and it is built on their own laziness.
Anyone who deludes themselves that there are no bars on the internet should stop looking at the Internet and look for bars inside themselves. As usual, if you want a culprit, all you have to do is look in the mirror.
I know some people respond to arguments like these with "yes, but what happens if we stay as we are?" What happens is that you stay poor, while Bezos has enough money to buy you, if he wants, and use you as a garden gnome.
Why the fuck do you think economic inequality keeps growing in the Internet age, if not because you have handed your power to others, and money always follows power?