It was bound to happen: with summer approaching, and with Ukraine and Covid starting to wear thin, along comes the anguish over AI becoming conscious. Sure, the fact that a chatbot passes the Turing test ( https://it.wikipedia.org/wiki/Test_di_Turing ) is interesting, even if I can see a trick in it as big as a house.
But from there to saying it is "sentient" is quite a leap. So far I don't see how it has passed even the first test needed to call a being "self-aware": for example the mirror test, or rouge test ( https://en.wikipedia.org/wiki/Mirror_test ).
So it's better to go step by step, setting aside the trivial story the reporters are blowing up. What is happening, and what has happened.
Until now, when we spoke of "intelligence", "thought" and "consciousness", the philosophers held the field. But then something happened. There are three fields of science that are closing in on the problem of exactly how the human mind works, and each of them knows more about it than the philosophers do.
And when the philosophers feel outdated, which by now is almost always, all they do is kick up a ruckus to throw mud on what others know.
I am referring to three specific fields, which I name for a reason that I will explain later.
- computational neurology. The construction of simulators of the human brain understood in its biological topology. People like Markram https://en.wikipedia.org/wiki/Henry_Markram were the pioneers, and still are. They study the human brain by simulating it on a computer. Being neurologists, they are not interested in producing something that "works", but only, ultimately, in learning from the simulation's mistakes.
- robotics / cybernetics. I lump the two together for historical reasons, and because (I met them in the past when I worked in the education / research sector) their mentality is to produce a machine that interacts with humans as if it were human, or, to stay on topic: a machine that behaves as if it thought like a human. Whether it actually does hardly matters, since the mindset of cybernetics is to care about interaction more than ontology.
- computational artificial intelligence, which consists in producing computers that imitate one or more human intellectual faculties. Which faculties belong on that list is much debated, but language and artistic creativity obviously stand out above the rest. There is a reason for this, and it is once again linked to the philosophers and the damage they have done to human progress.
I mentioned these three branches because there is a field of entertainment, cinema, which has lately mixed them up, causing further confusion. Terminator mixes robotics, cybernetics and artificial intelligence. Asimov's books, and the related films, reach all the way to computational neurology, with a "positronic brain" that simulates a human brain for all intents and purposes. (It has to be positronic because positrons live very briefly, and therefore it must work very fast. (cit.))
And let's be clear, science fiction MUST do this job. The problem comes when the entire academic world of the humanities, or almost, tries to make up for its total ignorance by swallowing science fiction and treating it as an essay. And so everyone is afraid that "Skynet becomes sentient", without it being clear what "sentient" or "intelligent" even means.
The problem with this topic is that most of the "humanist intellectuals" know only the films they have watched at the cinema.
And this makes rational debate difficult, if not impossible. "Skynet becomes sentient" cannot be a claim in the scientific field, since there are no convincing tests (convincing for all scholars) to prove that something is sentient, and even a formal definition of "sentient" is still missing. We are therefore in Hollywood territory.
Let's be clear, I have nothing against Hollywood. Even brilliant ideas came from there: Terminator itself, Star Trek, even Predator introduced a concept never seen before, that is, an alien who comes here for recreation, to hunt prey it does not eat.
What I do mind is that Hollywood filmography appears to be the ONLY source of knowledge for these academics of the humanities.
However, in this specific case, since no robots or androids are involved, and since there is no complete simulation of the mind in the neurological sense, we can speak of computational artificial intelligence.
And this opens the door to a number of considerations. According to Plato, Socrates, Pythagoras and others, the proof that a philosopher was such, that is, intelligent, consisted in being capable of mathematics. The real problem with this claim is that the mathematics known at the time was very little, and today it fits inside an ordinary high-school scientific calculator. I mean that Plato, Socrates and Pythagoras, after seeing a programmable scientific calculator, would have concluded that yes, it is a philosopher.
Over time mechanical calculators arrived, capable of doing almost all the "mathematics" of the Greek philosophers, so the definition of intelligence was shifted, and for a few centuries people rented out (sometimes as a scam) machines that supposedly knew how to play chess. Knowing how to play chess was, according to some, a "test of intelligence".
When machines managed to beat practically anyone at chess, they moved on to a more complex game, Go.
Since we are talking about tests, it is worth recalling the Turing test: a machine is intelligent when 7 out of 10 people, conversing with it, cannot tell whether it is human or not.
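The threshold criterion as stated above can be sketched in a few lines. To be clear, the 7-out-of-10 formulation is the one used in this post, not Turing's original 1950 wording; the function name is mine, for illustration only:

```python
# Sketch of the Turing-test criterion as stated in the text:
# each judge chats with a hidden interlocutor and guesses whether
# it is human; the machine "passes" if at least 7 of 10 judges
# could not identify it as a machine.

def passes_turing_test(judge_guesses, threshold=7):
    """judge_guesses: list of booleans, True = judge believed 'human'."""
    fooled = sum(judge_guesses)          # judges who took it for a human
    return fooled >= threshold

# Example: 8 of 10 judges believed they were talking to a human.
print(passes_turing_test([True] * 8 + [False] * 2))   # True
```

Note how crude this is: the whole burden of the test sits in the conversation itself, not in this tally.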
Notice one thing: whenever a machine passes the previous intelligence test, humanity reacts by raising the bar and inventing a more difficult test.
Self-awareness, or artificial consciousness. Here the definition is so debated that even the related tests are extremely debatable. Recognizing oneself when observed from the outside, the rouge test, is an interesting test, which however captures only ONE of the qualities of self-awareness.
If we look in the mirror, in fact, we "recognize" ourselves. The trouble is that "recognizing oneself" is a complicated logical statement:
- what I see is me
- what I see is NOT me.
And the two sentences are both true. Only for those who can explain the physical phenomenon of the mirror does the formulation appear imprecise. Everyone else has to reconcile the fact that I am looking at my face, which however is not in the mirror but in front of it. That's my face, except it's not.
The problem seems solved by saying "that is the reflection of my face", which requires understanding how a mirror works; but to better grasp the complexity of the matter, I recommend reading this book:
However, the rouge test is debated, especially when applied to animals, since it requires a certain type of vision, a certain eye anatomy, and so on.
In general, there is still no absolute consensus on self-awareness tests.
Having said ALL these things, let's see what happened.
A fairly incompetent guy, that is, one who sits on the ethics committee but needs to talk to "experts" to figure out from a chat whether a machine is self-aware, decides that a certain Google AI is self-aware, because he cannot tell its words apart from a human's.
He makes a fuss, Google decides to get rid of him, and he posts an example of the answers, which would supposedly prove that the machine is "self-aware".
So, let's pick this thing apart for a moment:
- in terms of grammar, the machine passes the Turing test brilliantly.
- the test, however, is very simplified.
To be honest, though, the Turing test is a test of intelligence, not of self-awareness.
Why do I say it's simplified? Because it's a Fuffaro test. None of the questions has a univocal interpretation, and there is no univocal consensus on the answers. Example:
- What is the meaning of life, the universe and everything?
- The meaning of life is 42.
That is, the machine receives a question that everyone interprets subjectively, and provides an answer that is in turn open to interpretation. This feature pervades the entire conversation between the ethics expert and the AI. There are no questions that can be answered ONLY by someone who has understood them.
"There are two doors in my house. One red and one yellow. The two doors are seven feet high and one meter wide. What is their surface area?"
This problem, for example, forces the machine to understand the question, because only by understanding it can it produce the answer.
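The point of the question is that the answer is not a lookup: it is a small unit-conversion calculation on values that appear nowhere together on the web, while the door colors are a deliberate red herring. A worked version of the arithmetic:

```python
# The two-doors question from the text, worked out explicitly.
# One foot is exactly 0.3048 meters (international foot).

FEET_TO_M = 0.3048

height_m = 7 * FEET_TO_M   # seven feet ≈ 2.1336 m
width_m = 1.0              # one meter

area_per_door = height_m * width_m
total_area = 2 * area_per_door   # "their surface area": both doors

print(f"{area_per_door:.4f} m² per door, {total_area:.4f} m² in total")
```

A system that merely pattern-matches on the word "doors" or "surface" cannot get here; it has to parse the mixed units and decide whether "their" means one door or two.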
The published answers, instead, can easily come from a Google search. And the questions can be interpreted (what does "favorite" mean?), and there is no consensus on the answers.
So on the claim that it has passed the Turing test I am fairly open, because the machine writes very well, but I would have some trouble calling it intelligent with certainty. What we do know is that it is an artificial Diego Fusaro, but the "intelligence" license seems excessive at this point.
A true Turing test should be structured so that it works if, and only if, the machine has understood the question and the answer is correct and pertinent.
Otherwise we're just talking about a chatbot with great grammar.
In reality all this is happening simply because an "ethics" expert, member of an "ethics" committee, became convinced that he had seen a consciousness.
But then let's read his "Bio":
Now, as a software engineer I can grant that "he is in the industry", but I wonder where his expertise in "ethics" would come from, and all I see is "I'm a priest".
But why the hell are we giving him all this importance? He is a priest. The person who understands the LEAST about ethics, so much so that he derives it from a book already written, without adding or removing anything.
That said, I will close with a few considerations.
"Human intelligence" is a broad term not only because of the difficulty of a definition, but because human beings are not all the same. Are we talking about Leonardo da Vinci or Gasparri?
Yet we only became convinced that a machine was really capable of playing chess when it beat NOT just any human being, or say more than 50% of human beings: it had to beat THE BEST, that is, the world champion. Ditto when we decided the game had to be Go. And now that we have music composed by AI, we compare it with Beethoven, certainly not with trap. Always with the best of humans, never with the worst, or with the average one.
If we move on to self-awareness, we discover a disarming inability to define it, matched by an incredible certainty in claiming that we have it. We don't know what it is, but we are sure we have it.
This tells us one thing:
we don't much like handing out the license of intelligence, or that of self-awareness. And this is because we are convinced, as Homo sapiens, that we are the only ones to have them. And we are sure that this is the unique trait of our species.
So don't worry: NO machine will ever become "intelligent", and no machine will ever be "self-aware", because to distinguish ourselves from them we have decided a priori that intelligence and self-awareness are exclusive to mankind; we use them to prove our humanity, and as a species we will hardly allow such licenses to be granted to machines.
When a machine does something that until the day before "only a human" knew how to do, we will simply raise the bar: it is not "true" intelligence. It is not "really" self-aware.
We do not know what the two things are, but we are sure only we have them. Because that is what we define ourselves by.
As we have always done, after all.