Life beyond Moore.

Sometimes I enjoy discussing computer science in general with people from other fields, such as physics, and I find an open-mindedness there that I struggle to find in engineers. In one of these recent chats we ended up talking about Moore's law.

As you know, this is the empirical law which says that computing power doubles roughly every two years. And as you also know, we are currently in a stall, since current implementations are very close to the physical limits (frequency, density and more).

The result of all this is that the power of CPUs and GPUs is struggling to grow at the pace we had come to expect. And the proposed ways out all run into problems that physicists consider hard, if not irremediable.

  • Materials other than silicon with which to make smaller circuits. There is talk of graphene, which however is flammable and oxidizes easily. So far such materials are still being tested, and none has yet made it past that phase.

  • Quantum computers. Still on the high seas. The problem is that in theory a quantum computer does this and that, but in practice we find that the qubits lose coherence, that is, they give wrong answers, with a worrying frequency.

  • Artificial intelligence. The idea of implementing the topology of neural networks directly in silicon is paying off so far, but only for neural-network applications. Although such chips can solve some problems very quickly, making them general-purpose machines has so far proved impossible.

So?

My personal (and in some ways crazy) position is that some experiments should be done with so-called ternary logic. I'm not the first to have this idea: ternary computers have already been built, and have proven to be more efficient.

An example of a ternary computer was Setun, built in the USSR using a system of magnetic elements. Although it was not completely ternary, its ternary part proved to be much more effective than its binary counterparts.

There is a reason for this. When von Neumann started to build a mathematical model of his idea of the computer (more or less the one we use today), and calculated which numbering base is the most efficient, he did not get 2, i.e. the binary system. He got e: a transcendental, and therefore also irrational, number. No ratio of naturals equals it, and no finite-time algorithm can compute all of its digits.
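To make that "most efficient base" argument concrete, here is a minimal sketch in Python (the metric and function name are mine, not von Neumann's): the classic radix-economy cost of storing a number is (base) × (number of digit positions), which per order of magnitude behaves like b / ln b and is minimized at b = e.

```python
import math

def radix_economy(base: int, n: int) -> float:
    # Cost of storing n in the given base: (digit positions needed) * (base).
    # Asymptotically this is proportional to base / ln(base) per order of
    # magnitude, which as a continuous function is minimized at base = e.
    digits = math.floor(math.log(n, base)) + 1
    return base * digits

n = 10**6
for b in (2, 3, 4, 10):
    print(b, radix_economy(b, n))
```

Among the integer bases, 3 comes out cheapest here, which is exactly the point of the paragraph above.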

But if you look closely at the value of e, 2.71828 18284 59045 23536 …, you will notice one thing.

It is closer to 3 than to 2.

There are several ternary logics. The most tested so far is the symmetric (balanced) one: instead of using {0}, {1} as logical states, it uses {-1}, {0}, {1}. The Russians chose it because their computer worked with electromagnets, so it was easy to invert north and south to represent {-1} and {1}, and to use {0} to indicate that the coil was off, that is, it had no polarity because no current was flowing.
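To see what balanced ternary looks like in practice, here is a small sketch (a hypothetical helper of mine, not Setun's actual encoding) that writes an integer with digits {-1, 0, 1} on powers of three, least significant digit first:

```python
def to_balanced_ternary(n: int) -> list[int]:
    # Encode n in balanced ternary: digits in {-1, 0, 1}, weights 3^k,
    # least significant digit first. Works for negative n too, with no
    # sign bit needed.
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # a digit of 2 becomes -1 with a carry
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits

# 5 = -1*1 + -1*3 + 1*9
print(to_balanced_ternary(5))   # [-1, -1, 1]
```

Note how negative numbers fall out of the digit set for free, which is the point developed below.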

The advantages of a ternary logic on calculation systems are HUGE.

First, it is easy to represent negative numbers. Anyone who has studied how a computer works knows that representing a negative number with a sign bit costs the computer's silicon a lot of time.

In a ternary logic it is enough to put {-1} wherever there was a {1}. And the funny thing is that in a ternary system, this is exactly what the NOT operation does. In practice, if 5 is {1}, {0}, {1} in this notation, a trit-wise NOT turns it into {-1}, {0}, {-1}, which is -5. One cycle for the NOT.

You may have noticed, however, that I keep writing 5 in binary-style positions: each position still represents a power of two (otherwise 5 would not be 101); what changes is the set of values each position can take, which is enlarged. I am therefore assuming a Kleene algebra.

But the real point lies in performance, and in the ease with which a ternary logic could be used to simulate a quantum computer. Our "trit", a three-valued bit, can also be read as "when it is zero, it could be both {-1} and {1}". Sure, we would not have a real quantum system, but it would be interesting to test how far one can be simulated using classic silicon, of which we are experts.

Obviously storage also gains: in a binary system, 8 bits store 256 values. If we use a Kleene algebra the number rises to 511, and if we were to use (only for storage) a circuit that allows all 3 values as the base of numbering, we would get 6561 values. Not bad as an increase in density.
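A brute-force sanity check of those counts (the Kleene reading keeps power-of-two weights but allows digits {-1, 0, 1}, so 8 positions cover every integer in [-255, 255]):

```python
import itertools

bits8 = 2 ** 8   # digits {0,1} on weights 2^k: 256 values
# Digits {-1,0,1} on the same power-of-two weights (the Kleene reading):
kleene = len({sum(t * 2**i for i, t in enumerate(trits))
              for trits in itertools.product((-1, 0, 1), repeat=8)})
trits8 = 3 ** 8  # full base-3 positional system: 6561 values
print(bits8, kleene, trits8)   # 256 511 6561
```

The Kleene count is 511 rather than a clean power because many digit strings alias to the same integer; the full base-3 circuit avoids that redundancy entirely.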

I am not the only one to say it, of course: Donald Knuth and Howard Aiken were among the first to study it.

Are there any disadvantages? Well … yes. At some point a circuit is needed that takes the ternary numbers and converts them back to binary, so that they can be used with the devices that work on two states. Converters are therefore necessary: in the first experiments, conversion tables were used, and they were not very large. It would be interesting to study whether their implementation cancels out the computational advantages.
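As a toy illustration of such a conversion table (names and sizes are mine, purely hypothetical): map every 4-bit value to 4 balanced-ternary trits, with an inverse table for the way back. Four trits cover [-40, 40], which comfortably includes the 0..15 range of 4 bits.

```python
BITS, TRITS = 4, 4

def encode(v: int) -> tuple:
    # Balanced-ternary digits of v, least significant first, padded to TRITS.
    n, out = v, []
    for _ in range(TRITS):
        r = n % 3
        if r == 2:            # a digit of 2 becomes -1 with a carry
            r, n = -1, n + 1
        out.append(r)
        n //= 3
    return tuple(out)

# The two lookup tables a converter circuit would embody:
to_trits = {v: encode(v) for v in range(2 ** BITS)}
from_trits = {t: v for v, t in to_trits.items()}

print(to_trits[13], from_trits[to_trits[13]])
```

A real converter would work word by word like this table does, and the open question is exactly the one above: whether the round trip eats the gains.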

Certainly, now that storage is moving towards solid state, becoming silicon storage (NVRAM, SSDs, etc.), the advantages for storage (at equal density and frequency) are evident.

Consequently, if you are a startup of young people and you want to revolutionize things with a somehow disruptive idea, what you should do is start designing a circuit in ternary logic, and build a minimal Turing machine, at least on an FPGA.

I am too much of an old classic to found a startup, so the scepter passes to you. Maybe you can get me a beer if you succeed.
