February 28, 2024

The mountain of shit theory

Uriel Fanelli's blog in English


AI and self-publishing.

I just had a bizarre discussion about writing and the use of AI in the self-publishing space. As you know, I started using Amazon Kindle Publishing years before generative AI existed, so it is obvious that I wrote my own texts (unless you also want to accuse me of inventing generative AI ten years before OpenAI). But the biggest nonsense I have heard is that merely using AI, even if only for the covers, is enough to say the work is no longer mine.

Let's try to understand for a moment what this means.

In the normal flow, you write the text of the book, then pay an editor to do the editing. In this phase, a reader who is not you reads the book and corrects the typos and the "extravagant" turns of phrase you may have used without realizing it.

The argument I've heard is that if I use a human being for this work, then the book is still "mine." If instead I use a language model, then my book is no longer mine, because I am using the evil AI, which surely stole Manzoni's ability to write; hence the author (or at least the co-author) of my book would, without a shadow of a doubt, be Manzoni.

Apparently it goes without saying that the human editor is a completely neutral person, that we can be certain they were never "trained" by reading Manzoni, while a language model that corrected my text would clearly be Manzonian.

The problem, however, is the following:

  • Technically, your computer's autocorrect is also a language model.
  • Even the word prompter on your keyboard, despite being a trivial Markov chain, is a language model. And T9 was too.
  • Essentially, everyone has been using language models for at least 30 years; they were just small.
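To make the point concrete, here is a minimal sketch (my own illustration, not anyone's actual keyboard code) of the kind of "trivial Markov" word prompter the list refers to: count, for each word, which words follow it, and suggest the most frequent follower.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Build a first-order Markov model: for each word, count the
    words that follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, like a keyboard
    word prompter; return None if the word was never seen."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A large language model differs from this toy in scale and architecture, not in kind: both assign probabilities to the next token given what came before.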

So no, get over it: you're talking bullshit. Using a large language model to correct your text, as in editing, does not turn it into a Manzoni text just because Manzoni was also used to train the language model.

Another big issue concerns covers. I use generative AI to make the covers, and here the claim is the same: you are "stealing" from someone else, whoever the AI was supposedly trained on.

I have to say a couple of things: I built the cover images from the descriptions of the characters in the book, at most adding details to make the images more precise. I can say that today, if you look at the covers of "Pietre", you see the characters almost exactly, if not exactly, as I imagined them.

It means that if you had taken a photograph straight out of my brain, you would have gotten the same face. So far I have managed to render both Amira and D'hu very well, and I must say the resemblance is impressive.

And here is the point: even assuming the AI was trained on a photo of Manzoni himself, it makes no sense to call an image that appeared in MY mind someone else's creation, insofar as it resembles the person I have in mind, and not what Manzoni had in mind.

Every time I write a long prompt, I get about twenty images, plus the ones that are "dreamed up". Many of them are to be discarded, but at least one is usually practically a snapshot of what I had in mind. The process is not even deterministic: it is ME who recognizes, among the many images, "here, this is what I see when I close my eyes".

The reader, therefore, can be sure: when you look at the cover of Pietre: first episode, you are practically seeing a photograph of how I imagine D'hu, with a precision I would describe as "telepathy".

The same goes for Amira, on the cover of the second episode. And so on. I will use the same criteria for all covers.

But now I have a question: when I used to ask a Photoshop expert to do something for me, and I NEVER got what I had in mind, was I "stealing" less, given the CERTAINTY that the result did NOT come from my brain, but from someone else's?

Of course, one difference is noticeable: I can do the covers myself, and I can do the editing myself. I no longer have to pay anyone.

So the question arises spontaneously: what if, behind all these complaints from the "purists", there were actually the defense of a sinister economic interest, namely the much-opposed "capitalism"?

I say it honestly: until now, nobody had managed to "photograph" my mind the way generative AIs are doing. I can tell you that those images reflect what I imagined. How can you claim the book is somehow more "mine" when someone else does the work and NEVER gets it right?

And I'm asking because lately Linux seems to be becoming more of a political party than an operating system, and there is a tremendous aversion to integrating generative AI into its desktop and its programs: so don't come complaining when, after AI has been integrated into the Windows and Office desktops, "the year of the Linux desktop" is still next year.

The Stallmanization of Linux, with all this pointless aversion toward a tool that helps a great deal, could do MUCH more damage than you think.

Let Stallman have fun with GNU HURD.

Everyone else would like to work in peace.
