April 25, 2024

The mountain of shit theory

Uriel Fanelli's blog in English


Is artificial intelligence racist?

One of the pieces of bullshit I hear most often about AI is that it supposedly turns out to be racist. This could go unnoticed, if I didn't recognize the longa manus of a certain academia behind this fake news. So it's time to clear up a few things.

First of all, this is the second wave of AI, or rather the third. AI is something that comes back with every big computational leap. The first attempts, dating back to the 1970s, used tools that were often analog, such as transistors, to simulate synapses, because in those days computational capacity was very low. The first wave, therefore, was mainly theoretical, and the main applications were in the field of "fuzzy logic", at the time considered intelligent. Typically these were control systems (brakes for trains and the like) that were able to reason in a "quantitative" way while making use of digital components.

The second wave came back around twenty years later, in the 90s. And here people began to discover the beauty of the first back-propagation neural networks, of "genetic" algorithms (based on an algorithm that reproduced natural selection), of inferential dichotomizers of every order (today called "decision trees"), of Kohonen networks (which today, in the world of big data, are called "clustering algorithms"), and of many others.

What was this due to? Once again, to a leap in computational capacity: the introduction of the first supercomputing systems that universities could afford. The gigaFLOP had arrived. WOW.

Applications were few and poor, and mainly military: fire-adjustment systems for cannons whose characteristics change with wear, flight-correction systems for supersonic jets, and little else of relevance. Okay, Apple's Newton implemented a handwriting recognition system; let's mention it, so the civilians are happy too. The rest was experimental, that is, confined to academia.

Today's wave is driven partly by a new paradigm shift in computational power, and partly by a new ease of building ad hoc ASICs, which is why they tell you that your mobile phone "has an AI chip". Ok. So now convolutional models, recursive networks, and many other technologies are developing without limit.

But there is a second point that nobody is pointing out: in this wave, academia no longer DRIVES research. If you want to find the state of the art, universities are no longer the place to find it. For several reasons:

  • the academic research budget is microscopic compared to that of Google & co.

  • the computing capacity of the academic world is microscopic compared to that of Google & co.

  • the attractiveness of academia as a workplace is microscopic compared to that of Google & co.

In practice, the academic world in this wave of AI is cut off: it doesn't have the funds, it doesn't have the infrastructure, and it no longer has the best brains. Yes, many universities have collaborations with this or that company, but if we go and look at how much these really amount to, we discover they are things done "for show", and in the end the patents, that is the know-how, stay with big industry.

No wonder, since the same thing happened in the pharmaceutical business.

What does academia do when it is overtaken? It throws all its power and credibility into play, starts flooding newspapers and the internet, and then we find out that robotics is building Skynet, that artificial intelligence will kill us all, and now we discover that "it is racist".

Bullshit.

As soon as big companies start doing projects "in cooperation" with universities, that is, paying a bribe, the technologies go back to being harmless, ethical, and all the rest. Even if they are used to drop Hellfire missiles. Those are our Hellfires. Not like the ones developed without the precious supervision of the Ethics department, which would have dropped Hellfires too, but evil ones.

In other words, it's blackmail: either industry announces projects together with the universities, or the academics and ethics experts start screaming: "look at the bad guys, thirsty for money, they will kill us all!!!!". It is a well-tested model, already used with the world of pharmaceuticals, but also with that of ecology (if you don't make donations to the usual GrPe and the three-capital-letter outfits, you are in their sights even if your cars run on neutrinos; if instead you make donations, they forget about you even when your cars run on seal pups).

But let's get to the point. Can an AI be racist?

Those who push this lie say that "when the experiments were carried out in the USA, it was seen that according to the AI black people were more criminal".

How strange. And if we asked what data they were trained on, what would the answer be? That they were trained on the data of the American police. The same police that kills random black people, apparently.

So in fact the experimental result is not negative at all: by becoming racist, the AI signaled to us that the data were in some way racist, or contained racism.

Just like Microsoft's AI which, left alone on Twitter in the middle of the Alt-Right storm, became a Nazi. Instead of being scandalized that an AI wanted to exterminate mankind, MAYBE someone should have asked: "isn't it possible that Twitter has a slight problem with hatred?".

So the point is: YES. An AI can become a Nazi if it is trained on data written by Goebbels himself. The problem with AI that does machine learning, in general, is that it doesn't have the facts: it only has the data.

If the data show that we give black people harsher sentences than white people, the AI will consider them more guilty: but the fact that the AI becomes racist is, if anything, a warning sign. In doing so the AI is IMITATING the behavior encoded in the data it reads: if the data come from a racist justice system, then it will become racist.
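To make "imitating the data" concrete, here is a minimal sketch, assuming entirely synthetic, made-up data and scikit-learn as the library (none of this comes from the experiments the post refers to): a classifier trained on labels that penalize one group at equal underlying behavior simply learns to penalize that group.

```python
# Minimal sketch: a model trained on biased labels reproduces the bias.
# All data is synthetic and hypothetical; scikit-learn is assumed available.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two inputs: a protected attribute (group 0 or 1) and the "real" behavior.
group = rng.integers(0, 2, size=n)
risk = rng.normal(size=n)

# Biased labels: the historical system punishes group 1 more often
# at the same level of risk.
labels = (risk + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([group, risk])
model = LogisticRegression().fit(X, labels)

# The model dutifully learns the bias: the coefficient on "group" is large
# and positive, even though group membership says nothing about behavior.
print(dict(zip(["group", "risk"], model.coef_[0].round(2))))
```

The model is not "racist" by design; it is faithfully summarizing the labels it was given.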

But at that point it would be interesting, if anything, to use them as litmus paper: you create an AI, train it on the data produced by a given institution, and check what it has become. This is where XAI comes into play, that is, the construction of artificial intelligences that can be understood after the training period. At present AIs are very hard to understand, because the construct built from all the data they have ground through is intelligible to no one but themselves.
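As a hedged illustration of the "litmus paper" idea, continuing the synthetic example above (scikit-learn assumed, nothing here taken from the original experiments): once the model is trained, you can ask which inputs actually drive its decisions. If the protected attribute carries real weight, the finger points at the institution that produced the data, not at the model.

```python
# Hypothetical "litmus paper" check: after training, inspect which inputs
# the model actually relies on. Reuses X and labels from the sketch above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
result = permutation_importance(forest, X, labels, n_repeats=10, random_state=0)

for name, importance in zip(["group", "risk"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
# If "group" shows up with real importance, the data source has a problem --
# the model is only the thermometer.
```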

The problem does not lie in the method by which we check what an AI has learned: we could even program it as a chatbot and simply talk to it. But if I took an AI now, fed it all the INPS data, and discovered that it becomes racist against Calabrians, I shouldn't say "AI is racist against Calabrians": on the contrary, I should go and check how INPS behaves towards Calabrians.

If, when the first AIs showed themselves to be "racist" after grinding through police data, someone had gone and checked, perhaps it would have become clear that the American police actually did have some problems with racism. But it made better headlines in the newspapers (and was more convenient for the academic world) to report the "failure" of the AI industry than to listen to the alarm signal.

If I happened to take an AI and train it on the data of the Italian law enforcement agencies, and the AI started praising the Duce and asking for cocaine and amphetamines, I would not go and criticize the AI: I would wonder whether the Italian armed forces have a problem with fascism and a problem with stimulant drugs. (Any reference to actually existing facts and people is a reference to actually existing facts and people.)

Essentially, the game played by many academics and journalists was precisely that of refusing to understand AI as an instrument: it is as if, using a thermometer, they said "hey, the thermometer has a fever!". Well, no: the thermometer may be at the same temperature as the human body, but it is the human body that has the fever.

In summary: yes, if an AI is left to do machine learning on a block of data that harbors a problem of Nazism, it will become a Nazi. If you feed it the data of a racist police force, it will become racist. If you expose it, during the learning phase, to an environment full of hatred, it will tell you that it wants to eliminate every human being.

But AI is a thermometer, and the thermometer does not have a fever: the fever belongs to whoever used the thermometer.

And if anyone had asked, the creators of those AIs would have explained this to them.
