May 3, 2024

The mountain of shit theory

Uriel Fanelli's blog in English

Still on ChatGPT

I spoke some time ago about the relationship between the world of work and machine learning, data analytics, and the new automation technologies entering companies. Even then, when people reacted to what I said with annoyance, I kept pointing out the different attitudes of the working class and of white-collar workers.

It is not the first time that a new technology has arrived and changed the way industry works. The so-called "working class" is used to it by now and doesn't get particularly upset: it simply goes to refresher courses on the new technologies.

While a fairly accustomed working class now meets these "revolutions" with a certain detachment, the problem arose when the revolution touched the white-collar workers.

But here was the sore point: accustomed to thinking of themselves as irreplaceable, white-collar workers saw their main asset come under pressure, namely the presumption that no machine could ever replace them in their "conceptual work".

But this is now the past, because technology has since reached higher: if at first only the lower floors of the white-collar building were touched, for some time now the higher floors have been targeted, and today we have reached the attic, that is, the "artists".

And if white-collar workers felt outraged by this new wave of corporate automation, imagine that bunch of pompous, useless scoundrels who pass themselves off as "artists" discovering that an AI can write poetry, compose songs, and paint.


For historical purposes, we can remember a date:

On August 3, 2022, a certain Jason M. Allen enters an art competition whose theme is realistic art (not abstract, or other bizarre stains on the wall). The jury is made up of "art experts". His work wins, and it turns out that it was not made by hand at all, but generated with Midjourney, an AI image model operated through a Discord interface.

Tz, ta, pum!

Of course, outrage erupts. And this outrage stems from a bad conscience: the bad conscience of those who had told themselves they were doing something so superfine, creative, sublime, transcendent, that a machine (let alone a mere "algorithm") could never reach their peaks.

But what stings them even more is that the judges, the very people supposed to understand what art is and to recognize it when they see it, mistook the mere product of a machine without feelings, a mere algorithm that "doesn't suffer" (sic! I just read that in an Italian newspaper), for a true piece of "art".

And this is even worse, because at this point it becomes difficult to say what art is, if not even the art experts can tell the machine's output from the real thing.


Economically, what could this mean? There are two cases:

  1. a proliferation of content
  2. fierce competition

I mean, I've tried DALL-E on my GPU, and it stumbles a bit. But we know that over time GPUs will be able to support it. So I reckon that, at this rate, by next year I will be able to take the science fiction books I've written and turn them into comics, simply by explaining to the AI what it has to draw.
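To give an idea of what "explaining to the AI what it has to draw" looks like in practice, here is a minimal sketch of local text-to-image generation. It uses the open Stable Diffusion weights through Hugging Face's diffusers library as a stand-in (DALL-E itself is not distributed for local use); the checkpoint name and the prompt are illustrative assumptions, not anything from this post.

```python
# Sketch: generating a comic panel from a text description on a local GPU.
# Stable Diffusion via Hugging Face's `diffusers` stands in for DALL-E here;
# the checkpoint name and prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # hypothetical choice of checkpoint
    torch_dtype=torch.float16,          # half precision to fit consumer VRAM
)
pipe = pipe.to("cuda")

prompt = ("comic book panel, a lone astronaut walking through a ruined "
          "alien city at dusk, ink and watercolor style")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("panel_01.png")
```

At half precision this already fits on a mid-range consumer GPU, which is roughly what "it stumbles a bit" means today.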

Not bad. And it gets even better if you consider that, within five years, with a normal GPU, I will be able to turn them directly into films.

This is what I call "the proliferation of content": I clearly won't be the only one thinking about it, and ad hoc programs to do it will probably appear as well.

If we think only of ChatGPT: as soon as something comparable is released in a version that can run on my GPU, I will certainly translate my books into different languages, so as to adapt them to local markets.
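As a rough sketch of what translating a book locally could look like today, without waiting for a GPU-sized ChatGPT, here is an open translation model run through Hugging Face's transformers library as a stand-in; the model name, file names, and paragraph-by-paragraph chunking are illustrative assumptions.

```python
# Sketch: translating a book chapter locally, paragraph by paragraph.
# An open English->German model stands in for a locally runnable
# ChatGPT-class model; all names here are illustrative.
from transformers import pipeline

translator = pipeline("translation_en_to_de",
                      model="Helsinki-NLP/opus-mt-en-de")

with open("chapter_01_en.txt", encoding="utf-8") as f:
    # Split on blank lines so each chunk stays within the model's input limit.
    paragraphs = [p for p in f.read().split("\n\n") if p.strip()]

translated = [translator(p, max_length=512)[0]["translation_text"]
              for p in paragraphs]

with open("chapter_01_de.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(translated))
```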

Since I certainly won't be the only one thinking of doing such a thing, it is clear that we would witness a sudden proliferation of content. Until now, most "content creators" do little more than undress on OnlyFans, but when an AI makes it easy to draw comics or produce videos, the game changes considerably.

And the real question, at most, will be "but who consumes and/or buys all that content?"


Under "ruthless competition" you will find the fact that Netflix, Amazon Prime, Disney & company will also be able to have these technologies available, only that unlike me they will have all the GPUs they want. In this case, one must prepare for the condition whereby producing videos and films will be increasingly cheaper, and actors will be less and less indispensable. Personally, I estimate that at this rate in seven/eight years there won't be any more real actors.

If you don't believe it, here it is:

As you can see, from here to making a real film, the step is short. Imagine being able to do something like Avatar 2 in two or three weeks.


Are we at a tipping point? In my opinion, not yet. Let me explain: all these AIs run (at least for the training part) on GPUs. The GPU is not a concept very different from the CPU: sure, it's specialized, sure, it has a massively parallel SIMT architecture, but architecturally it's nothing very new.

The turning point will come when these AIs are transferred to silicon "as is", i.e. with the network topology baked directly into the hardware. At that point, performance will be far superior to what we have now.

Let me explain better: imagine you want to broadcast a video. To save bandwidth, you render it in 256 shades of gray, at 640×480. That way you get a crappy video, but let's not forget that on the receiving end we have an AI that can take this crappy footage and bring it back up to 4K or beyond.

The compression effect is obvious, and superior to that of any codec. By this I mean that, from that moment on, a piece of silicon with the specific ability to "revitalize" low-quality video could arrive directly on the network card.
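A minimal sketch of the arithmetic behind this idea, assuming a raw 4K RGB frame on the sending side and the 640×480 grayscale version on the wire; plain bicubic interpolation stands in for the AI model that would actually do the reconstruction, and the file names are placeholders.

```python
# Sketch of the idea: ship a deliberately degraded frame, reconstruct it on
# the receiving end. Bicubic interpolation is only a placeholder for the
# learned super-resolution model that would do the real work.
import cv2

frame = cv2.imread("frame.png")                   # original frame, e.g. 4K RGB

# Sender side: 8-bit grayscale at 640x480, i.e. 256 shades of gray.
small = cv2.resize(frame, (640, 480), interpolation=cv2.INTER_AREA)
small_gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

orig_bytes = frame.shape[0] * frame.shape[1] * 3  # raw RGB payload
sent_bytes = small_gray.shape[0] * small_gray.shape[1]
print(f"raw payload shrinks by ~{orig_bytes / sent_bytes:.0f}x before any codec")

# Receiver side: this is where a learned super-resolution / colorization model
# would rebuild a 4K frame; bicubic upscaling only marks the spot.
restored = cv2.resize(small_gray, (3840, 2160), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("restored_placeholder.png", restored)
```

The interesting part is exactly the step the placeholder cannot do: inventing plausible detail and color that was never transmitted, which is what a learned model embedded in the network card would be for.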

Then you could have components that "modernize" old films, updating the look of the actors and the surrounding environment, bringing them up to 2023. This too is not very complicated.

Adding one piece after another, graphics cards could end up becoming true "display cards", whose job is to display a rather imprecise input, perhaps reduced to a scatter of pixels. Stable Diffusion already works roughly like this, reconstructing an image from noise.
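Something close to this can already be tried in software: Stable Diffusion's img2img mode takes a rough, low-detail input plus a text description and repaints it into a finished image. A minimal sketch follows, using the diffusers library; the checkpoint, file names, and strength value are illustrative assumptions.

```python
# Sketch: handing a "display card" a rough input and a text description, and
# letting a diffusion model fill in the rest (img2img). Names are illustrative.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

rough = Image.open("rough_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a rainy neon-lit street at night, cinematic lighting",
    image=rough,
    strength=0.75,        # how much the model is allowed to repaint the input
    guidance_scale=7.5,
).images[0]
result.save("repainted.png")
```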

That, in my opinion, will be the turning point, where anyone with a normal computer can simply ask the graphics card to do something they have in mind, describing it in words.

From that moment, the world of the visual arts, and probably of the audio arts as well, will never really be the same again.
