April 29, 2024

The mountain of shit theory

Uriel Fanelli's blog in English


Galileanisms.

One of the most perverse things about the discussion on Artificial Intelligence is how that pile of useless bullshitters unmasked by ChatGPT manages to fear "dangers for mankind" without ever specifying which dangers, or how they would materialize, while at the same time being unable to discuss why AI is important, both as a good and as progress. So I decided to provide a simple use case.

As you know, I write science fiction books as a hobby. What you may not know is that I didn't start writing in 2000, when “Other Robots I” came out: I started around 1990. The problem, though, was that back then the internet was in its infancy, or didn't exist at all.

Several of those books probably lie, in the form of magnetized ferrite, in some landfill, because I uploaded them to some BBS. But nobody reading BBSs was in a position to take that writing and turn it into a book.

Over the course of life and house moves, those floppies were lost. That changed with Amazon Kindle Publishing, which allowed me to publish four books. And I must say the results did not displease me, given that Italy is certainly not a country of science fiction readers. Science fiction requires imagination, and the creative vein of Italians is, in general, dry. By now the "canon" is discussed everywhere, discussed in a suffocating way, and canons get applied to things like pasta carbonara or the color of clothes.

I've always wanted two things:

  1. To make a comic of my books, particularly Other Robots, which was born for it.
  2. To translate them and sell them in markets where more science fiction is read.

The problem is that while an editor already costs money but is still affordable, a translator is an expense that makes no sense for a hobbyist. Neither does an illustrator.

Now, it happens that I'm trying out version 4 of ChatGPT, and combined with Stable Diffusion (for just 700 euros you can buy a GPU with 8 GB of memory and run it), I could do both things.
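Just to show how low the barrier really is, here is a minimal sketch of running Stable Diffusion locally on such a GPU. It assumes the Hugging Face diffusers library and the publicly released v1.5 weights; the model name and the prompt are mine, purely for illustration.

```python
# Minimal local Stable Diffusion sketch.
# Assumptions: torch and diffusers are installed, a CUDA GPU with ~8 GB of VRAM,
# and the public "runwayml/stable-diffusion-v1-5" weights (illustrative choice).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision so the model fits in 8 GB
)
pipe = pipe.to("cuda")

# Hypothetical prompt for a comic panel.
prompt = "a robot walking through a ruined city at dusk, comic book ink style"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("panel_01.png")
```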

Is there a use case? Certainly. This thing, for just $20 a month, would give me the "superpowers" I need: speaking many languages, and drawing comics.


Now the artists will tell me it's not fair, because ChatGPT was trained by reading the works of other artists who came before it. So I assume Dante Alighieri is not part of your training, and that you never trained yourselves by reading past artists.

This explains why you are dickheads.


Now, I have TWO powers against me. The first is the power of the RIAA, SIAE, GEMA and company. I explained it in theory, and now come the practical cases:

https://www.repubblica.it/cronaca/2023/04/03/news/intelligence_artificiale_copyright_graphic_novel_tutela-394682260/?ref=RHLF-BG-I394707491-P9-S1-T1

This example shows you that what I said is true: the problem they have with AI is that they don't know how to tax it to fatten the RIAA. It's something new, they risk losing control of it, and so they try to stop it.

It's convenient for SIAE to walk into a party and expect everyone to have paid the rights for the music, but they can't do that if someone can answer that the music is being generated on the spot.

This is not the mimimilalalablablabla chatter of the usual humanist pseudo-intellectuals. This is a fact: a person being told "no, the work isn't yours, because the computer you used to make it doesn't pay us enough fees and we can't buy a new yacht."

Meanwhile, Microsoft is planning to include ChatGPT in its Office package. Sooner or later, graphic design and DTP companies will build these tools into their software, from Photoshop to all the others, and they will crush these "humanists" like shit, the shit that they are.

The debate will then die down quite simply, once you have to pay the usual players to do something you could have done at home with your PC and open source software.

Instead, since they are strong with the weak and weak with the strong, you can be sure they will take the copyright away from Kris Kashtanova; but the day someone does the same thing using a plugin for some Adobe software instead of Midjourney, the RIAA itself will be careful not to raise its voice,

because Adobe can crush them like the shit they are in any court, while Kris Kashtanova can't afford such a lawsuit. The rich win, as always happens when the "humanist intellectuals" come into play.

Who are soooo full of fear that AI will destroy humanity, but not if AI makes money for the usual very rich monopolies. They only mobilize when it's democratic and costs $20 a month. Once you're paying $3,000 for a Photoshop plugin that uses Stable Diffusion, everything will be fine.

Nothing convinces a humanist like the "money & power" argument.


But beyond the mafias there are the parishes. And if you take my second use case, translating books into other languages (which ChatGPT 4, available for a fee, does VERY WELL), we're not up against giants like the RIAA or others.

We find ourselves up against the parish of translators and editors (the people who review the text). Now, editing is genuinely useful, because the author, reading and re-reading his own text, misses his own mistakes.

As a result, I've never had reason to complain about an editor: they correct your errors, warn you when something sounds off, and so on. But a translator charges ten times the price and, to make matters worse, almost never takes care of the same things an editor promptly points out to you.

This little parish of overrated people is, of course, afraid.

But translation is not the whole story. If an Italian wants to sell books in Germany, besides the translation he has to find distributors, and so on. Of course Amazon helps with this, but here we run into a strange coincidence: no humanist intellectual has ever complained about the monstrous system of publishing houses, which forms the parish where THEY make their money; yet, coincidentally, Amazon, which also lets you sell into a huge market, was the only innovation ever challenged.

And in the same way, a tool like AI, which gives me access to services (editing and translation) that until now have been the PRIVILEGE of a few big-name authors, runs into difficulties: the usual difficulties of a parish that wants to keep others out, because keeping others out is exactly the privilege that pays its bills.


What am I gonna do? I will buy the ChatGPT 4 subscription and translate everything into enough languages to have a potential market of 2-3 billion people.

Will I omit the fact that I used ChatGPT for the translation? Maybe yes, maybe no. After all, I could just as well have paid a translator who then used ChatGPT himself, and I would never have known.
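For what it's worth, the mechanics are trivial. Here is a minimal sketch of chapter-by-chapter translation through the OpenAI API; the openai Python client, the gpt-4 model name, the target language and the file names are my own assumptions for illustration (the post itself only talks about the $20 chat subscription).

```python
# Rough translation loop (assumes the official openai Python client; the chapter
# splitting scheme and file names are made up for the example).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(text: str, target_language: str = "German") -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Translate the following Italian fiction into {target_language}, "
                        "preserving tone, register and dialogue formatting."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage: translate a book chapter by chapter.
chapters = open("altri_robot.txt", encoding="utf-8").read().split("\n\n\n")
translated = [translate(chapter) for chapter in chapters]
open("andere_roboter.txt", "w", encoding="utf-8").write("\n\n\n".join(translated))
```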

Then, in time, when it's cheaper to run Stable Diffusion on home computers, I'll probably make the comics I want.

Whether they like it or not.


As if that weren't enough, this thing is demonstrating what I have always said: "there is no humanistic culture, there is only an anti-scientific and pauperistic culture".

What does that mean? Easy. As soon as someone said "you can't stop progress," a frenzied debate broke out in which a pile of mentally ill people argued over whether "technology is mandatory" or not.

Their agenda is absolutely clear: stop everything. Because “technology” doesn't just mean ChatGPT. It means curing cancer. It means diagnosing autism. It means feeding billions of people by growing more and better crops.

To question whether technology may be used by those who want it is to question whether we could one day cure cancer, help autistic children, or feed billions of people. It means that we could do it, or maybe not, in case they prefer grandpa Evola's bullshit.

These characters are the walking dead. If you tell them that with AI it will probably be possible to better understand certain climate dynamics and make progress on climate change, they will tell you they couldn't care less about climate change.

If you tell them that Italy risks falling behind on information technology, they will tell you that they are also against information technology and that "the world was better before". And if you tell them that without information technology you could not have, say, high-speed trains or a stable electric grid even using renewables, they will tell you that they are against electricity and trains, because for them “the world was better before.”

And it is obvious that this revolution will be opposed, simply because they ALWAYS oppose it.


But is technology mandatory? Is it inevitable?

To find out, it's enough to look at history: yes, technology is inevitable. Let's take an example. As you know, Galileo abjured his observations, removing them from a book he was writing; but about 12 years later he wrote an entire book about them. What had changed in those 12 years? And what had he been doing in the meantime?

Galileo had entered into correspondence with the Doge of Venice, to whom, together with his admirals, he demonstrated that with one of his telescopes, both aboard ships and in coastal defenses, they could spot ships from afar and read their flags: which in practice allowed them to predict their intentions.

The Doge and his admirals were impressed, and Galileo sold a large number of telescopes to the Serenissima, which gave him financial serenity and fame.

Then he wrote his book again, and no one objected. How come?

Because being the inventor of the equivalent of marine radar, at that time, gave him a superpower: that of easily finding asylum in any kingdom of the period. There wasn't a single court, that is, that wouldn't have wanted him. (Leonardo too, disliked for other reasons, spent years in France: and coincidentally, Leonardo also worked on military fortifications.)

The Church, therefore, stopped prosecuting him: they knew very well that otherwise he would end up in some Protestant country, further strengthening the power of its fleet. If not in the Ottoman Empire, which would have been perfectly willing to give him asylum.

The same goes for OpenAI & co. Of course, it can be shut down. Let's imagine some government of overzealous imbeciles shuts down OpenAI. Do you think the people who built ChatGPT would stay unemployed? You know very well they wouldn't: there are plenty of nations and companies willing to shower them with sesterces to have them. They would give them asylum, money, tools. Especially the military. And who knows where they would end up? Who would they go to work for? I don't know.

I'm sorry, but the power of language models is now clear: they are inevitable. All technology is inevitable. Always. In any case. Its adoption either comes slowly, or explodes suddenly if the slow adoption has been obstructed.

The specialty of humanists has always been to present technological progress as a proposition that people are free to accept or reject.

I have bad news for them: technological progress is something you either ride, or it overwhelms you. Tertium non datur.


Some say "But Elon Musk is against it too, and so is [random STEM dude]". Of course. The person who, thanks to Galileo's abjuration, took his place in the university chair was in full agreement with the condemnation.

There are always, in every sector, people who have an interest in stopping other people's successes.

Just think of the research funds that are at stake: you will ALWAYS find someone angry with the competitor who has a brilliant idea, and who receives funds.

Then think about that sucker Musk: on February 21, 2018, Musk resigned from OpenAI's board of directors, citing "a potential future conflict of interest" with Tesla's AI for self-driving cars. By the way, the Musk who is against AI because it will destroy us all by chatting: is that the same one who wants to put an AI inside every car?

Maybe it has something to do with the fact that so far, the only AI to have killed people was inside a Tesla car, and it was driving?

And sometimes it would be good to ask some questions about "intellectuals" who consider ChatGPT the greatest danger to the future of humanity: while we fill the atmosphere with CO2, Putin threatens to use nuclear weapons every day, and the USA can't wait to go to war with China.

If those are "intellectuals", I understand why they fear ChatGPT.
