Stochastic parrots, and where to find them.

There is a lot of talk about the heretics who built Stable Diffusion and GPT-3, because they barged into the inner circle of those who despised you, looking down on you, and dared to question the value of the artist (not of art) and of the creative writer (but not of writing or creativity).

Now writings come to their rescue which, seeking an academic alliance with a certain humanistic world, rush to the defense of these future unemployed; one of them defines what these networks, or large language models, do as the work of a "stochastic parrot". Since no one had ever given a real technical or commercial definition of "stochastic parrot", we can only read the paper and see what the authors mean.

The paper sucks.

Not because it's false (it says nothing falsifiable and carries no data or facts), not because the numbers are wrong (there are no numbers or quantities in the text), or anything like that: the text collects a pile of woke bullshit, talks about risks it doesn't quantify, and complains about models that are "too big" without ever defining "too much".

It sucks because it's not science. It is the offspring of some gender-studies department operating in some safe zone, where contradicting the speaker is forbidden so as not to hurt their feelings.

The real problem with that document is not what it says, which is laughable, stupid, superficial and Hollywood-like: the problem is that it is passed off as science just because it is a paper.

It stands out not for what is written, but for what is NOT written, i.e. anything that shows a modicum of scientific thinking.

Let me explain: ChatGPT was trained by feeding it existing writings. If ChatGPT was fed only texts that were, in their own way, written by stochastic parrots, then obviously the output will be the equivalent of that many stochastic parrots:

garbage in, garbage out.

If, for example, I were to feed it the entire journalistic production of any given country, I would reach exactly the same verdict: stochastic parrots. They all copy from the wire agencies, they copy each other's news, writings and concepts, and nation after nation even the bias is often reduced to two positions: traditionalists/progressives, right/left, and so on.

The paper does not bother to make this measurement, and instead says "hey, that's a stochastic parrot" (it would be nice if they knew what the technical term "stochastic" means, but among humanists it is fashionable to use technical or scientific terms to give oneself a veneer of seriousness, which says a lot). Moreover, they keep confusing GPT with a stupid Markov chain.
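To make that confusion concrete: a Markov chain conditions only on a fixed, short window of context, so it literally cannot count. A minimal sketch (the corpus, function names and toy language below are mine, purely illustrative): train an order-1 Markov model on the balanced language aⁿbⁿ and watch it happily assign nonzero probability to unbalanced strings, the kind of long-range structure that attention-based models are built to track.

```python
from collections import defaultdict

# Illustrative training corpus: strings from the balanced language a^n b^n.
corpus = ["ab", "aabb", "aaabbb", "aaaabbbb"]

# Count bigram transitions, with "^" and "$" as start/end markers.
counts = defaultdict(lambda: defaultdict(int))
for s in corpus:
    padded = "^" + s + "$"
    for x, y in zip(padded, padded[1:]):
        counts[x][y] += 1

def prob(s):
    """Probability of s under the order-1 (bigram) Markov model."""
    p = 1.0
    padded = "^" + s + "$"
    for x, y in zip(padded, padded[1:]):
        total = sum(counts[x].values())
        p *= counts[x][y] / total if total else 0.0
    return p

# The model only sees adjacent pairs, so it cannot enforce balance:
print(prob("aabbb") > 0)   # unbalanced string, yet nonzero probability
print(prob("ba"))          # 0.0: it can only rule out unseen bigrams
```

The bigram model never rejects "aabbb", because every adjacent pair in it was seen during training; only a model that remembers how many a's it has emitted can stay inside aⁿbⁿ.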

The first easily contested point is the mass of woke bullshit, devoid of proof, evidence or measurement.

The second is that the text piles up so many citations as to make peer review impossible, except by someone doing a positive review because they are friends with the group that wrote the document.

Which brings us back to the first point: if ChatGPT is a stochastic parrot because it "takes content from a gigantic language model made up of an immense quantity of writings", the definition applies perfectly to that paper, which "takes content from a gigantic archive of academically acceptable content, made up of an immense amount of writings".

Oops. Given the number of citations in that paper, I can safely say that its author is a stochastic parrot, fed by an enormous quantity of citations taken from other writings.

In saying this, and rightly so, I am obviously stepping on the toes of another snooty elite: the academic one. Because applying the concept of "stochastic parrot", we discover that, due to the heavy use of citations, practically EVERY academic paper is the product of a stochastic parrot, at least in large part.

Here will come those who tell me: "Wait: even if it is quite full of quotations from other works, my paper adds something to what was known before."

Here we can decide whether or not to believe this claim. If it is a scientific discovery (the only kind that adds something), it must contain the set of experimental data needed to formulate and demonstrate the discovery.

The trouble is, if you did an experiment, then the experiment did NOT happen in your mind. So the data you report is like a citation: you are simply quoting the experimental data.

But if that's the case, then you've just added data to your language model and you're a stochastic parrot again.

What's the moral of this story?

The moral is that the non-definition of "stochastic parrot" used by this document seems to apply to every single human brain, because it is so gaping that anything passes through it.

So is everyone off the hook? Not necessarily. A system like ChatGPT has finite cardinality, because it uses a finite amount of examples. And the funny thing is that the same system, if you ask it the right questions, seems to be aware of it.

Since our mind is capable of understanding and describing languages with infinite cardinality, while ChatGPT has finite cardinality, there is clearly still a difference between us and ChatGPT; yet I have not found a single text that points out this simple difference (even though axiomatizing a language does not cover, or amount to, stating every one of its theorems).
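The finite-cardinality claim can be made concrete with toy arithmetic (the vocabulary size and length cap below are made-up numbers, not ChatGPT's real parameters): a model with a finite vocabulary and a hard length limit can only ever emit a finite, if astronomically large, set of distinct outputs, while a language with unbounded recursion has no longest sentence.

```python
# Assumed toy numbers, not the real model's: a finite vocabulary and a
# hard cap on output length.
vocab_size = 1_000
max_length = 100

# Distinct sequences of length 1..max_length: a finite geometric sum.
distinct_outputs = sum(vocab_size ** k for k in range(1, max_length + 1))

# Closed form of the same sum: (v^(n+1) - v) / (v - 1).
closed_form = (vocab_size ** (max_length + 1) - vocab_size) // (vocab_size - 1)

# Huge (hundreds of decimal digits here) but finite; a recursively nested
# sentence ("the cat the dog the man fed chased ran") can always be made
# longer, so natural language has no such bound.
print(distinct_outputs == closed_form)
print(len(str(distinct_outputs)))  # number of decimal digits
```

The number is enormous, but "enormous" and "infinite" are different claims, which is exactly the distinction the paragraph above is making.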

In reality, ChatGPT is being "optimistic" in its answer (that is, it is wrong), since not even a language of infinite power could really generate all possible sentences (Gödel, et al.); but the point is that this is the only text on the net that raises the question of the cardinality and power of the language model. You won't find any other.

Apparently, ChatGPT is a stochastic parrot that is under attack from other stochastic parrots.

Having said that, the humanists will now be nudging each other, saying "see? The human is superior". In doing so they introduce a fallacy: comparing mankind to its champions.

So, when a (human) champion of some intellectual discipline beats a computer, as used to happen in chess (it doesn't happen anymore), it is said that machines will never be able to compare with humans. But there is a problem, which I would state like this:

  • “no computer will ever be like Michelangelo”
  • neither will you ever be like Michelangelo, for that matter.

Even granting, absurdly, that the computer cannot compete with Michelangelo, we still need to establish how many people it can compete with, and with whom.

I am sure that ChatGPT will never be as good a journalist as Enzo Biagi, but we have to ask ourselves how many Enzo Biagis there are, and how many Sciandivascis. I can make a Sciandivasci on my cell phone using a Markov chain if I want.
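That boast is cheap to back up. Here is a minimal sketch of such a "Markovian" columnist (the toy corpus and function names are mine, purely illustrative): an order-1 Markov chain that, fed a pile of wire copy, remixes it one word at a time.

```python
import random
from collections import defaultdict

def train(text):
    """Build an order-1 Markov model: each word maps to its observed successors."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain from `start`, picking a random observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A toy "wire agency" feed standing in for the real thing.
corpus = "the parrot repeats the news and the parrot copies the news"
model = train(corpus)
print(generate(model, "the"))
```

Everything it emits is a remix of adjacent pairs it has already seen: nothing is sourced, checked or reported, which is exactly the failure mode this post attributes to newsroom copy-pasting.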

And if by chance it doesn't take much to write better than Sciandivasci, then you must understand that yes, Enzo Biagi's job is safe (also because he's dead), but I wouldn't bet on Sciandivasci's.


Now, the big reporters at BuzzFeed are probably not in danger, but 12% of the staff lost their jobs when BuzzFeed replaced them with ChatGPT.

So, the problem is simple: can ChatGPT replace a human being? No. But it can definitely replace at least 12% of BuzzFeed's content creators right away. The Sciandivascis of the situation, in short.

And this, at least in journalism, poses a question: how much of modern journalism falls into the "stochastic parrot" category? Well, we can run a test:

  1. Do you go to the places where things happen to talk to people, investigate and gather information, sometimes taking risks?
  2. Or do you sit in the newsroom all day, the furthest place you visit being the one where you go on holiday, getting your news "from the agencies"?

If the answer is "1", you are probably not a stochastic parrot, or at least you are an active stochastic parrot, insofar as you go looking for the data for your model yourself.

If the answer is "2", you are a stochastic parrot yourself: it is as if we had connected ChatGPT to the wire agencies and told it to write the articles.

Stochastic parrots, simply put, are found in newsrooms, liberal-arts colleges and similar places. And when we see people scandalized by the idea that "a computer will NEVER be like a creative", we know what is happening: these inhabitants of the attic of knowledge see a stochastic parrot just like them, only one that doesn't get sick, doesn't go on strike, and costs much less, and they are afraid it will take their place.

And it will.

Same old story. As if stochastic parrots were writing it.
