April 27, 2024

The mountain of shit theory

Uriel Fanelli's blog in English

Kein Pfusch

The problem of network security.

This time I want to write an article about computer science, and I want to write it because, in the end, all the problems we are having with social networks stem from exactly the same cause: the lack of a culture of computer security. I would like to underline how catastrophic this lack is by pointing to a real case.

Then I'll explain what this has to do with social networks.

The real case is this: https://eu.desmoinesregister.com/story/news/crime-and-courts/2019/09/19/iowa-state-senator-calls-oversight-committee-investigate-courthouse-break-ins-crime-polk-dallas/2374576001/

It happens that the equivalent of the interior minister wants to know how secure the state's systems are. And since the presidential elections are coming (and you will remember how important the Clinton case was, with US national security compromised by the ingenious idea of keeping the servers in a bathroom), this matters.

So they give a WRITTEN assignment to a security company, which performs "friendly hacking". In this pentesting operation, some employees (convinced they were acting legitimately, since they had been commissioned by their "target") managed to enter the state's systems. Although they caused no damage, security officials noticed the attacks and tracked down the attackers.

I'm not going to say who was better, because we could argue about it at length. Maybe the hackers never really penetrated the system and ended up in a honeypot (a trap set up specifically to attract hackers), or they really did penetrate it and security caught them only because they didn't delete the logs of their passage. There are different ways to look at it, depending on what really happened. But that's not the point here.

The point is that the reaction of the attacked party was to demand the arrest and legal prosecution of people who had worked (it was a legitimate action) to penetrate the state's defenses.

If you read the absurd arguments of those who want the trial, you will notice some terrifying phrases, including things like "you shouldn't have worked like this; you should have warned us beforehand."

But we know what would have happened if they had warned beforehand. The IT manager, who does not want to look bad, would have ordered EVERY alarm escalated to "maximum priority", with a specialist called in for EVERY anomaly detected. In the case of a system that does outlier detection (that is, one that watches for anomalies), he would have shrunk the standard deviation required to raise an alarm, so that any slight fluctuation would have put the whole operations team on the phone, all night long.
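
To see how little it takes to weaponize an alerting threshold, here is a minimal sketch of a z-score outlier detector; the metric, the values and the thresholds are illustrative, not taken from the article:

    import statistics

    def is_alarm(samples, new_value, threshold_sigmas):
        """Flag new_value as anomalous if it deviates from the mean of
        recent samples by more than threshold_sigmas standard deviations."""
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        return abs(new_value - mean) > threshold_sigmas * stdev

    # Normal baseline: requests per minute hovering around 100.
    baseline = [98, 101, 99, 103, 100, 97, 102, 100, 99, 101]

    # A sane threshold (3 sigma) ignores ordinary fluctuation...
    print(is_alarm(baseline, 104, threshold_sigmas=3))      # False: routine noise
    # ...but "minimizing the standard deviation required to raise an alarm"
    # (here: 0.5 sigma) pages the whole team for almost anything.
    print(is_alarm(baseline, 101.5, threshold_sigmas=0.5))  # True: 2 a.m. phone call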

This is because public employees work with the same mentality all over the world: minimizing the risk of losing their position. Never mind doing a bad job; never mind that this "friendly hacking" can close very dangerous holes. All that matters is that no one can say "your system was vulnerable, you did a bad job".

So no, the "you shouldn't have done it sneakily" story is bullshit: it is as if, during exercises, the military were asked not to jam the target's radar, since it cannot defend itself otherwise, or worse, to announce and schedule the attack so as not to catch anyone by surprise. But the purpose of a stealth attack is precisely to take you by surprise, and the exercise serves to find out whether that is possible; if it succeeds, you need to understand what went wrong.

You could dismiss the whole story by simply asking: "what do you think, that real hackers from Russia and China will warn you before attacking?" But, as I have already written, in the world of public employment the problem is never doing a good job: the problem is not being the one accused of having done a bad job. So who cares about Russians, Chinese and other hackers.

The second phrase that leaves you terrified is "from now on, saying you had a contract will no longer mean anything". Here we are in full delirium. A contract is either valid or invalid: if a client hires me to attack a system and pays me to do it, the intrusion obviously does not conflict with his will: he is telling me that IF I CAN GET INTO his house, he has nothing against it, as long as I tell him how I did it.

But we can do a little reverse engineering of this statement to understand where they want to go: if the attacked department itself had signed the same contract asking for its servers to be attacked, it almost certainly would have had nothing against it. The problem here is that ANOTHER state department paid for it.

So it all translates into: "the other departments of the state should mind their own fucking business". But what difference would there have been if the hackers had had a contract with the department they attacked? The difference is that the report with the technical findings would have been delivered to the IT manager whose department was the victim of the attack, and that same manager could then have managed it, giving it whatever publicity he liked. He would have made the result public if positive, and swept the dust under the carpet otherwise: never mind that even in the case of bad news he could still have fixed the holes (which is why you pay for that kind of "attack"); the point is that the logic is "dirty laundry gets washed at home".

Let me be clear, there are managers of private entities who think like this too: that is why "IT security" departments are always isolated, marginalized and "kept under control". Unless the upper floors really do want to do security.

Now, put yourself in the shoes of the usual "I want a quiet life" IT manager, who authorized bad practices so as "not to block the project". This manager does not have a quiet life and does not sleep well: after having pushed into production (= onto the internet) half-assed services, full of workarounds and riddled with dozens of bad practices, he now lives in fear that hackers will penetrate the system.

But he is confident of two things that help him sleep:

  • If they are REAL hackers, he can say that "no system is 100% secure" and rely on the fact that hackers are seen as people with magical powers, rather than as individuals who exploit the bad practices of programmers and sysadmins.
  • If they are FAKE hackers (i.e. contractors hired for the purpose), the affected manager knows that inside the company there will always be a top manager who commissioned them and who will call him incompetent. But beware: he can always turn to a dull, indifferent and automatic entity. He can, that is, report them to the police.

In the second case, the complaint serves to prove that he "noticed the intrusion", as if this were a mitigating factor. Sure: criminals come into your house, they beat you, they rob you, they rape your wife... of course you noticed the intrusion!!! (especially your wife did)

But saying "you noticed the intrusion" means nothing: when a stealth plane gets over you, you normally notice the bomb just before you die. The point is that by then it is too late, and saying "but I noticed it" means NOTHING. It would have meant something if noticing the intrusion had served to stop it, or to prevent it altogether. Otherwise, it just means that you watched, powerless, as the thieves raped your wife before your eyes (but ICINGA had a lot of red lights, eh).

In practice, this story shows us only one thing: that security as a culture is still very far from being accepted by the management of any company. The deficiency comes in TWO stages, plus a sequel (an aggravating one, if not outright criminal):

  • When, in order not to "stop" the project, security exceptions are authorized and requirements are written that cannot be met without reducing security. The product must go live, and you can't get in the way just because someone is called Robert DROP TABLE FBI; -- (cit.; see the sketch after this list)
  • When IT services are in operation, data leaks are hidden, intrusions are hidden, and the work aims not so much at closing the holes as at hiding their existence.
  • When, to circumvent the GDPR, fake "vulnerabilities" are maintained for the specific purpose of offering them (for a fee) as data access to "third-party companies" that pay for them.
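
To make the first point concrete: the "Robert DROP TABLE" joke refers to SQL injection (xkcd's "Bobby Tables"), the classic bad practice that a security exception waves through to production. A minimal sketch in Python, with invented table and column names:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT)")

    name = "Robert'); DROP TABLE students;--"  # the classic injection payload

    # Bad practice: building SQL by string concatenation lets the input
    # close the statement and smuggle in its own DROP TABLE:
    #   conn.executescript("INSERT INTO students (name) VALUES ('" + name + "')")

    # Correct practice: a parameterized query treats the input as data only.
    conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
    print(conn.execute("SELECT name FROM students").fetchone())
    # -> ("Robert'); DROP TABLE students;--",) stored harmlessly as a string

The fix costs one line; the security exception exists only so that nobody has to stop the project long enough to write it.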

The third point leads us directly to the big social networks, which are simulating "hacker attacks" to justify data leaving their networks without being sanctioned under the GDPR.

I don't know if you have noticed, but since the GDPR came into force there have been more and more public cases of "data theft by hackers", "passwords saved in the clear and exposed on the internet", and "vulnerability discoveries" that allowed the data of millions of people to leak.

These vulnerabilities were largely built ad hoc. Since the GDPR provides for very heavy fines for those who sell your data without your authorization, but does not sanction those whose vulnerabilities get exploited by hackers, then

the big players are building fake vulnerabilities into their systems.

When they want to sell your data, they triangulate the money and then give the customer who bought it the instructions to access the data. They do the same with governments: after the Snowden scandal, the big players found themselves in the embarrassing position of having to justify letting governments access data to conduct mass espionage.

To get rid of the problem, the solution was simple: "let's pretend the data was stolen against our will".

So if the US government wants data from a big social network, there is no longer a dedicated, official line leading straight to the database. It asks the big social network to build a fake vulnerability, and the network then gives the US state instructions on how to use it. That way, if anyone notices that the state has that data, they will just say: "the talented NSA, equipped with the best hackers, penetrated our systems and took the data against our will: what can we do against NSA wizards?" (For example, not leave the keys under the doormat?)

They do the same with private parties: the GDPR prohibits large social networks from selling your data, and sanctions them if the data is sold. But it sanctions no one for having a security problem. Thus, the big social networks intentionally build security flaws and then sell them to customers: when a customer wants to buy your data, he simply pays through some indirect channel, and in return receives instructions for using a "security flaw", which happens to be closed as soon as the agreed amount of data has been downloaded.
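
What would such a "fake vulnerability" look like in practice? Purely as an illustration (the endpoint, parameter name and token below are all invented, not taken from any real incident), it can be as banal as an undocumented bypass left in an API handler:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    LEAK_TOKEN = "f00-bar"  # quietly handed to the paying "customer"

    class ProfileHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            authorized = self.headers.get("Authorization") == "Bearer real-user-token"
            # The "flaw": a leftover debug flag that skips the auth check.
            # To the press it will be a vulnerability found by hackers; in
            # reality it sits there waiting for whoever was given the token.
            if authorized or query.get("debug", [""])[0] == LEAK_TOKEN:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b'{"user": "alice", "email": "alice@example.org"}')
            else:
                self.send_response(403)
                self.end_headers()

    HTTPServer(("localhost", 8080), ProfileHandler).serve_forever()

Closing the "flaw" once the agreed amount of data has been downloaded is then a one-line commit that deletes the debug branch.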

If the amount is very large and they fear someone might get pissed off, they play in advance and declare to the press a "data theft by hackers", "passwords saved in the clear and exposed on the internet", a "vulnerability discovery" that allowed the data of millions of people to leak, washing their hands of the day someone does harm with that data and it can no longer be hidden that it was sold, not stolen.

Ultimately, then, the GDPR has forced many companies to do "privacy by design" (art. 25), but since it does not sanction data theft by "hackers", it has created a parallel market of fake vulnerabilities through which the data of customers who never authorized any sale is sold anyway.

And here we return to the security mentality: until laws are written that protect legitimate actors (companies doing friendly hacking, white-hat hackers, etc.), no real security can be achieved. As long as bug bounty programs remain few and do not allow hackers to live off their work (except in very few cases), these fake vulnerabilities built to sell data under the table will not represent a risk.

The only risk in leaving the keys under the doormat is that someone else finds them. The risk that large social networks run in creating these fake vulnerabilities is that some real hacker might use them. If we decriminalize bug hunting (the hunt for vulnerabilities), and it becomes possible to do it anywhere and at any time, then the bad practice of building fake vulnerabilities to bypass the GDPR will stop.

If someone leaves the house key under the mat for an accomplice, in order to simulate a theft, collect from the insurance and keep the jewels, then a certain number of fake thieves going around (for a fee) lifting mats is the only way to stop this bad practice.

Otherwise, the GDPR has already been circumvented: the big social network of the day will simply create an ad hoc vulnerability for the client of the day, get paid under the table, and then sell your data.

If we want to close this loophole, we must decide that a hacker who penetrates a system but does no damage, and who warns the owner when a bug is found (as is done with CVEs), or makes the bug public if nothing changes after a fixed time, is not punishable.
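
The rule described here can be sketched in a few lines; the 90-day grace period is an assumption, borrowed from common industry disclosure policies rather than from the text:

    from datetime import date, timedelta

    GRACE_PERIOD = timedelta(days=90)  # assumed window, not from the article

    def may_publish(reported_on: date, fixed: bool, today: date) -> bool:
        """Report privately first; publication becomes legitimate only
        if the grace period expires and nothing has changed."""
        return not fixed and today >= reported_on + GRACE_PERIOD

    print(may_publish(date(2019, 9, 1), fixed=False, today=date(2019, 10, 1)))   # False: still in the window
    print(may_publish(date(2019, 9, 1), fixed=False, today=date(2019, 12, 15)))  # True: vendor sat on the bug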

Otherwise, large social networks will continue to hand your data over to governments and companies by deliberately creating vulnerabilities and then blaming "hackers" to avoid GDPR sanctions. As they are already doing today.

The only difference is that the pathetic excuse "it was a hacker", when a big social network says it, is taken seriously.
