You may have noticed, intermittently, that when you try to access this site you are greeted by the big face of ZARDOZ. The reason is that in my spare time (like when I write) I am playing at writing myself a WAF, so I can learn something from the experience. The reason is not professional in the strict sense: first of all, I am not a full-stack programmer, so I don't sell code.
Secondly, what interests me is creating something that simplifies things. Otherwise I would simply have installed Snort + Pulledpork right after HTTPS termination (which is not a WAF, but has several nice rules), or ModSecurity, or any other similar software, to act as a WAF.
The truth is that I am interested in understanding how far software can be reduced in size. And for a reason.
Of course we could install the most sophisticated WAF on the planet, buy it from, say, Juniper, and cover even the eventuality of a government attacking our Christmas IoT lights. Evil governments that hate Christmas, or want to DDoS the reindeer.
But the problem is another: that thing doesn't turn on the lights, and you can't keep a monster that draws two and a half kilowatts at home. Nor in the store. Nor in front of your IoT device.
But it's not just the IoT world (which is finally taking off, even if its software component is dreadful) that shows the urgency of simplifying software. The problem is that the era of gigantic data centers is about to end.
The first problem is energy: Google and Facebook can say what they want, but their data centers consume TOO MUCH. Not too much for the current world, but too much for the world to come. I know that on Facebook Greta & co. get lynched and climate-change conspiracy theories are fomented, and I know that Google tolerates the worst things on YouTube:
But in the end, the bill is already arriving, and in a few years this will happen:
I mean the moment when the economic costs of the thing become visible in the life of the ordinary citizen.
Of course, you will tell me that in five years we will all run on renewable recycled biological energy with circular electrons, but in the meantime the problem is that this conversion will have some aftermath. In short, data centers consume too much, and spending almost two kWh to move 1 GiB around the world doesn't make much sense.
After all, if the architects of AWS are massively rolling out ARM64-based data centers, it's not because servers will be allowed to keep fattening up on power.
Between one thing and another, then, sooner or later the problem in the IT market will no longer be (to give an example) knowing Kafka, but making NATS be enough.
"Making it be enough" obviously means giving up some of Kafka's precious buzzword features, which cool programmers love so much (because they would never know how to implement them themselves) and use to do fantastic things and solve problems that nobody has, like the indispensable Tinder for dogs.
On the contrary, the next trend will be toward simplification. For several reasons:
- It is ever more urgent to reduce the attack surface of systems. And the easiest way to comply with increasingly stringent security requirements is to reduce the scope of the project.
- Energy consumption in the IoT world will be increasingly crucial. More and more often programmers will be told "this stuff must run on a WiFi dildo", not "on a server".
- In general, the abundance of data centers that consume satanic amounts of energy to function, and twice as much again just to cool down, won't last long. And there is no Moore's law of energy efficiency, although many are struggling to make one exist.
Now, as I was saying, "making it be enough" will become the next art of programmers and systems engineers. And a WAF for your IoT outlets will certainly not be a 6U Juniper box. It will have to run, if all goes well, on your home router; if not, on something like a Raspberry Pi.
With limited memory and all.
Now, let's take Bayesian classifiers as an example. There are many libraries around, ranging from the truly scientific ones that offer a wide selection of Bayesian classifiers with every kind of optimization, to the simple libraries found on GitHub. The problem of the future, for those who won't be able to spend energy on "fat" processors, will be making simple stuff be enough.
For example, a few days ago I tested a version of ZARDOZ that used classes like a Bayesian classifier would, but instead of calculating probabilities it calculated the entropy of the input request against the classes treated as dictionaries. It is obviously an imperfect solution, but in effectiveness it was comparable to several of the simple Bayesian classifiers floating around.
After all, there are different ways of calculating information entropy, depending on whether it is specific, total, absolute, and so on. But the problem of "making things be enough" is that we need to adopt the least computationally expensive version.
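To give an idea of how small such a thing can be, here is a minimal sketch of the approach: per-class token dictionaries, and the class whose dictionary yields the lowest cross-entropy for the request wins. This is not the actual ZARDOZ code; all names, the tokenization, and the training samples are illustrative assumptions.

```python
import math
from collections import Counter

def train(samples_by_class):
    """Build a token-frequency 'dictionary' for each class."""
    models = {}
    for label, samples in samples_by_class.items():
        counts = Counter()
        for text in samples:
            counts.update(text.lower().split())
        models[label] = (counts, sum(counts.values()))
    return models

def cross_entropy(tokens, model, vocab_size):
    """Average negative log2-probability of the tokens under the class
    dictionary, with add-one smoothing so unseen tokens stay finite."""
    counts, total = model
    h = 0.0
    for tok in tokens:
        p = (counts[tok] + 1) / (total + vocab_size)
        h -= math.log2(p)
    return h / max(len(tokens), 1)

def classify(text, models):
    """Pick the class whose dictionary 'compresses' the request best."""
    tokens = text.lower().split()
    vocab = len({t for counts, _ in models.values() for t in counts})
    return min(models, key=lambda lbl: cross_entropy(tokens, models[lbl], vocab))

# Toy training data, purely for illustration.
models = train({
    "benign":    ["GET /index.html", "GET /style.css", "POST /login user"],
    "malicious": ["GET /etc/passwd", "union select password", "../../etc/passwd"],
})
print(classify("select password from users", models))  # prints "malicious"
```

A few Counters and a logarithm: no probability normalization, no framework, and it fits comfortably in the memory of a home router. That is the whole point of "making it be enough".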
This exercise consists in finding a solution that may not be general: it will not be the definitive framework that does everything, it will not be the most academically correct solution, it will not offer the stupendous buzzword features the modern programmer finds indispensable, but it must work with little, on modest hardware, and specifically in a few well-defined use cases.
The problem is not using some specific technology or language: the problem is one of mindset.
If you are a system architect and you ask programmers how many hardware resources a Hello World needs, today they will tell you that without a server cluster with at least 256 GB of RAM, 4 CPUs and a terabyte of disk it can't run. Because just for the Kafka cluster needed to send Hello World to another Kafka cluster, which then forwards it to an Elasticsearch sink, which then hands it to Hadoop, all installed in a Kubernetes cluster, in high availability (it goes without saying), you can only dream of writing "Hello World" to the console. And that's leaving out the now-indispensable API gateway and the CI/CD chain, not to mention the whole DevOps part.
(In reality they simply have no idea what they're saying, and just multiply the power of their laptop by fifty. Their PM, to be safe, will then multiply by two: nobody wants to look bad. Their superior, fearing budget cuts, will suggest padding it further, so that even if they don't get everything, they'll manage all the same. And so monstrous hardware requests land on the desk. If you don't believe it, ask them how they calculated those numbers, but keep popcorn at hand.)
The point is this: I'm training a little to use simplified approaches.
So yes, I will try to write a WAF in a few lines, and so yes, sometimes you will find the face of ZARDOZ.
And if you work in the sector, I advise you to train yourself to do the same, because the fat years are coming to an end in many sectors: small, low-powered devices will absorb the vast majority of the budget, the powerful backend will become a luxury not always available, and you will therefore face the request to "make everything work with little hardware, with the sole saving grace that the use cases are limited in number".
And therefore, you will have to get used to "making the hardware be enough".
A skill that has been lost over time: consider that back in the '80s, to light an LED, a battery with the right voltage and an LED sufficed. Today you need a Raspberry Pi 4, whose power we would once have called "mainframe", or an Arduino, which in our day was a controller for industrial automation, and a big one at that.
And since most software will run on products like "WiFi bulbs", I suggest you learn to make do with just a little code, even if imperfect, and maybe play with it to understand what best fits the specific use case.
Because in the future you won't be able to run Snort + Pulledpork on a light bulb, but they will ask you all the same to make it reasonably safe against intrusion.