May 7, 2024


Europe versus Hollywood.

Today I happened to read that the EU is trying to write legislation on the "ETHICAL" uses of artificial intelligence. I tried to figure out what they are trying to do, and the only word that came to mind was "Hollywood".

My position is simple: when you are trying to avoid a risk, read the scientific literature on that risk. Is there literature on the risks we are taking here, that is, a documented record of people killed or harmed by artificial intelligence?

We'd be tempted to say it's too soon, but that's not the case. There are two groups of people who have been killed by AI.

The first group is US owners of Tesla cars. It happens that the natural imbeciles behind the wheel fall asleep while driving, or do something else entirely, and the artificial intelligence at the wheel, which is not autonomous driving but assisted driving, causes a fatal accident.

Here we can argue about the proper or improper use of artificial intelligence, but since assisted driving also falls within the definition of Artificial Intelligence, and since "braking before a truck" is certainly "assisted driving", I would say we can count these deaths as "killed by AI".

The second group is murkier, in the sense that the military is involved, with its secrets. We know that in Libya one or more drones opened fire on people who appeared to them to be terrorists, without asking the base for approval, that is, acting autonomously.

There is a lot of military secrecy behind this: it is not known exactly how many died, who died, or what exactly happened. We only know that something like this occurred.

This is the second group of human victims of AI.


One would therefore expect a European regulation on the subject to begin by taking into consideration what actually exists, i.e. the AIs in cars, which are becoming increasingly pervasive and powerful, and the AIs in military hardware. This is because there is literature on them.

No.

On the contrary, European legislation appears to be inspired by Hollywood.

You read things like "Predictive Policing". Now, to this day there is no system in the world where an AI says "this guy is going to kill tomorrow" and the police arrest you today. Such a system exists only in one film, Minority Report, which didn't even use AI. Nevertheless, someone in the EU has decided to stop it.

The only evidence of systems capable of profiling people well enough to guess what they will do in the future comes from advertising profiling, which can do psychological profiling to the point of predicting what we might buy. But since no system exists that can determine precisely what marks someone as a criminal, a "predictive police" is impossible: the measurable character traits do not include an aptitude for crime; at most there is "disagreeableness", which so far is associated with success in professional life.

We know from the headlines that the EU is fighting against "Big Brother", that is, against a character from a novel of the last century, and that it will also deal with "social profiling".

Other technologies considered "problematic" are "social ranking", which is practiced in China and consists of a series of ostracisms applied to those who do not align with the ideas of the government and/or communist culture, and facial recognition.


Again, the stupidity on display is enough to make you tear your hair out. When the introduction of a technology is discussed, the technology itself is NEVER discussed in isolation: it is compared with the existing alternative. If there is an improvement, the new technology is preferred; otherwise, the old one is kept.

Let us therefore assume that there is a "Predictive Police" based on artificial intelligence, meaning something that autonomously flags a person BEFORE they have actually committed a crime. Does such a thing already exist or not?

It already exists, and it is called "prevention". When a terrorist is arrested on the grounds that "he was organizing an attack", for example, predictive policing is already being done.

The problem is that today the vast majority of such interventions are complete fiascos, and we end up with innocent people whose intentions have been misunderstood. Not to mention things like these:

https://www.ilriformista.it/sei-siciliano-allora-sei-mafioso-storia-della-famiglia-virga-sempre-assolti-ma-lasciati-in-mutande-dalla-saguto-143863/

https://www.ilriformista.it/lazienda-e-mafiosa-dia-distrugge-prenditore-per-aver-assunto-due-ex-detenuti-132811/

https://www.ilriformista.it/storia-di-francesco-lena-di-matteo-chiese-9-anni-per-associazione-mafiosa-assolto-con-formula-piena-126199/

So, in this case the problem is the following:

  • Given that the state already DOES predictive policing, merely calling it by other names:
  • do you prefer that a human judge do it, or a machine?

Let's move to the second point, i.e. social ranking, or the Social Credit System, which today is "done only in China". Is it really done only in China? Then why, if a kindergarten teacher is caught fucking her boyfriend, does the school fire her? Does it have anything to do with teaching? Why, if a high school teacher is found out to be a swinger, does she get fired? Does what she does in private have anything to do with teaching?

How come trials are held in the mass media today, and people no longer wait for the verdict to declare someone guilty?

The truth is that a social credit system already exists: it is implemented in the form of media lynching, and it can destroy a person's life exactly as, as much as, when and why the Chinese system does.

So:

  • It is evident that a consolidated social credit system already exists, today managed by a mafia of newspapermen, bigoted officials and provincial gossips.
  • Do you prefer that a machine manage your social credit, or the sanctimonious neighbor?

On biometric recognition we reach the ridiculous. Nowadays, the police usually "recognize" suspects using scandalously low-quality photographs, or using "witnesses" who can be bought by the kilo in the bars in front of every courthouse in Italy. Or they rely on the "identikit".

So we have witnesses who testify with questionable lucidity, an "identikit" process that has probably sent thousands of innocent people to jail, and a machine that recognizes faces with more than 95% accuracy.

Excuse me so much if I prefer the machine.

At least it cannot be bribed or threatened the way witnesses can. And it tells you when a photo is of too low a quality to work with.
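To make this concrete, here is a minimal sketch of what such a comparison looks like, assuming the open-source face_recognition Python library and two hypothetical image files; it is an illustration of the principle, not any system a police force actually uses:

```python
import face_recognition

# Hypothetical files: a reference photo and a frame to verify against it.
reference = face_recognition.load_image_file("suspect_reference.jpg")
probe = face_recognition.load_image_file("cctv_frame.jpg")

ref_encodings = face_recognition.face_encodings(reference)
probe_encodings = face_recognition.face_encodings(probe)

# Unlike a witness, the machine can refuse to answer: if no usable face
# is found, the photo is simply too poor to work with.
if not ref_encodings or not probe_encodings:
    print("Photo quality too low: no face could be extracted.")
else:
    match = face_recognition.compare_faces(
        [ref_encodings[0]], probe_encodings[0], tolerance=0.6
    )[0]
    print("Match" if match else "No match")
```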

As for the Big Brother discourse, an EU parliament that publishes its press releases on Facebook and then complains about data collection and misuse is simply priceless.


In general, I see that the approach is not to check whether there is progress. Normally, when a new technology arrives, you compare it with the existing one, to understand whether its adoption improves or worsens things.

Here, on the contrary, the approach does not begin by asking whether the technology is better or worse than the one we are already using; nobody asks whether "Predictive Policing" is better or worse than "preventive policing" or "preventive seizures".

The EU Parliament is using a MORAL approach to judge new technologies. And as if that weren't enough, the morality it is using has no political or philosophical origin, but comes DIRECTLY from Hollywood.

We are not concerned with establishing whether these technologies can improve the law, justice and policing compared to the antediluvian methods in use today. We concern ourselves with whether they are "good" or "bad", "worrying" or "dangerous", and to make matters worse the judgment comes from Hollywood (for the simple reason that there is still no literature, except science fiction, capable of offering a foothold).

If a logic tied to the existing scientific literature had been followed, the EU would have dealt mainly with the AI inside cars (since there have been deaths in Tesla cars with "assisted driving") and with military uses. Instead, the EU is working "preemptively" on technologies that have never been seen in operation, and whose only literature is Hollywood cinema.

One wonders what would have happened, or what would happen, if in a hypothetical sequel to Minority Report there were a dangerous terrorist who had to be flagged at all costs, and the hero were asked to convince the three mutants to go back to work to save the population from the evil terrorist. Now the preventive police are the good guys. What do we do then?

Or one wonders what would have happened if the plot of Minority Report had depicted this technique positively: this is the risk you run when you rely on Hollywood to do politics.


The last point to mock in this story is the business of the "copyright on the images that ChatGPT uses for training". It is such gigantic bullshit that it's embarrassing even to mention it.

First of all, the examples you provide to an AI are the subject of "machine learning". The word "learning" should alert you. Why?

Because saying that it is illegal to take a work covered by copyright and use it for free to learn graphics (because that is what AIs do) is like saying that whoever wants to make rock music can only reinvent rock from scratch, since listening to it and learning from it (or drawing inspiration from it) would become illegal.
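For the avoidance of doubt about what "learning" means here, this is a toy sketch, assuming PyTorch and using random tensors as stand-ins for images: training nudges numeric weights toward the examples, and the examples themselves are thrown away afterwards.

```python
import torch
import torch.nn as nn

# Toy stand-in for an image model: it stores weights, not pictures.
model = nn.Linear(64, 64)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

training_images = torch.randn(100, 64)  # stand-in for the works "looked at"

for example in training_images:
    optimizer.zero_grad()
    loss = loss_fn(model(example), example)  # how far off is the model?
    loss.backward()
    optimizer.step()  # adjust weights slightly; no copy of the example is kept

# After training, only the weights remain: statistics, not an archive.
del training_images
```

What persists after training is a disposition encoded in the weights, not a collection of the works, which is exactly the educated ear of the rock analogy above.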

I would like to ask the painters who complain how many works they LOOKED AT in order to educate themselves: or did you really reinvent oil painting on canvas from scratch? I would like to know from graphic designers and illustrators how many works they LOOKED AT before forming their own style: or did you really reinvent photography, illustration and all the related techniques from scratch?

Until now, the idea of educating one's ear by listening to other people's music, of educating one's eye by looking at other people's works, of educating one's writing by reading other people's books, had been exempt from taxes and duties.

Here, however, we are moving toward the idea that the act of learning is itself subject to copyright rules. If the principle passes that learning is subject to this taxation, it will take an instant to get to making you pay copyright on everything you look at, since you are learning from it. Will an aspiring photographer open a website and find himself paying copyright on every image he looks at, since he is forming himself in the light of other masters?

It is obvious that the film and music industries would love you to believe exactly that.

Let's take Stable Diffusion. You have been influenced by Terminator, you want to make a cover, and you ask Stable Diffusion for a cyborg. Maybe the cyborg will have elements taken from Terminator.
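For the record, this is roughly what that request looks like in code, as a sketch assuming the Hugging Face diffusers library, a GPU, and one common public checkpoint (the prompt and file names are made up):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (one common choice;
# any compatible checkpoint would do).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt carries the "influence": nothing here names Terminator,
# yet what the model learned from may still echo it.
image = pipe("a menacing cyborg with a glowing red eye, album cover art").images[0]
image.save("cyborg_cover.png")
```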

In one of the images I obtained, there was a red eye like the one in The Terminator; in another, part of the face stripped of flesh. I can see quotations from different films here. But the problem now is deciding whether a red eye in common is enough to trace the image back to the film.

Mixtures are also possible: the same tools will happily produce statues halfway between Botero and Modigliani. Do those statues owe a debt to Botero or to Modigliani? And to what extent?

The truth is that, in a REAL copyright dispute, it would be really hard to win a case over these images. Yes, there are allusions, but they would fall under "fair use".

But strangely, while fair use is tolerated for humans, it is no longer tolerated for machines. It may sound like a sensible criterion, but Blender already has a plugin for Stable Diffusion, and all the graphics programs are adapting: some even let you decide which examples to learn from.

So, in the case of an artist using such an AI as a support tool, who has to pay the copyright on the famous images that BOTH the human artist and ChatGPT are learning from?

The problem, then, should a complaint ever be filed, is tracing the origin of the inspiration. Take the following "poem":

 Those who die for pizza have lived a long time

Oh, enchanted pineapple pizza, 
With bold and spicy flavours, 
With fragrant bismuth nitrate, 
Which intoxicates the ecstatic senses.

food heroes, 
Challenge to fearful palates, 
On the triumphal plate in portion, 
Seal the enterprise of the brave.

Radiate with strength and vigor, 
With the freshness of royal pineapple, 
An experience of flavour, 
Which only the most daring can appreciate.

And in bismuth nitrate, 
The light of the culinary art, 
Which with its power makes it absolute, 
Eternal symbol of enchanted gluttony.

Oh, heroic and indomitable pizza, 
Who defies fate with ardor, 
I lift you up on a throne of stars, 
Your victory of love resounds in my throat. 
This poem must pay royalties to:
  • Gabriele D'Annunzio
  • Yours truly
  • Gabriele D'Annunzio, yours truly, and ChatGPT

Don't strain yourselves trying to work it out: I wrote it myself, imitating my high-school memories of D'Annunzio. ChatGPT has nothing to do with it. But how would you prove that?

The alternative is to charge ChatGPT for copyright even when it produces nothing, merely because it has learned. Once that is done, you will have opened the door to taxing anyone who watches anything and learns something.

I don't think there is any need to comment further on this lopsided idea of trying to establish whether an AI used copyrighted material to learn graphics. It is absurd, stupid and dangerous.


The EU got a little too excited about the success of the GDPR, and now believes it is at the cutting edge in everything it does.

I guess sooner or later they will come up against their own incompetence.
