
We want to catch up with the brain

Photo of Prof. Andrzej Strójwąs

Prof. Andrzej Strójwąs is a world-renowned specialist in the field of integrated circuits. On April 24, 2023, he received the title of doctor honoris causa of Warsaw University of Technology.

We talk with Prof. Andrzej Strójwąs, doctor honoris causa of the Warsaw University of Technology, about the Internet of Everything, artificial intelligence, their symbiosis with the latest technologies, and the attempt to catch up with the human brain.


Professor, have we moved from the Internet of Things to the Internet of Everything? What is the difference between these systems?

The Internet of Things meant that we mostly communicated with objects, which could be called agents, and information was sent one way and back. Its processing took place centrally, in the cloud. That worked well, but one limitation was that these peripheral devices did not have enough computing power.

In the Internet of Everything, processing is more distributed. We try to do everything we can on peripheral devices, and only when we need really high computing power do we use the cloud and data centers. This is a really major change.

The second technology that plays an increasing role in our lives and has the greatest potential to transform the future is artificial intelligence. What do you think about its use and importance?

Artificial intelligence is not a new concept. The name originated at the Dartmouth Conference in 1956, but the idea, the main concepts behind artificial intelligence, goes back to ancient Greece. Then, for many years, science fiction writers were the creators of seemingly crazy ideas. Many such concepts were introduced by, among others, Stanisław Lem and Isaac Asimov.

What is the history of the evolution of artificial intelligence? For a long time it had specific applications, e.g. in autonomous vehicles, first created at Carnegie Mellon University – the university where I worked. There were also expert systems. It made headlines when the Deep Blue computer won a chess match against Garry Kasparov, and later when the Watson supercomputer defeated two champions of the American quiz show Jeopardy!.

After a period of excitement with this technology, the so-called winter of artificial intelligence arrived. The huge expectations could not be met, because two things were missing at the time: data that could be transmitted massively, and computing power.

It is only in the last 15 years that computing capacity, together with the possibility of very fast communication, for example over 5G systems, has allowed artificial intelligence to be used effectively.

Chart illustrating the development of artificial intelligence in the second half of the 20th century

Evolution of Artificial Intelligence

Artificial intelligence is really artificial, and it's not that intelligent at all (laughter). These are self-learning systems. It's not as if there were a magic algorithm. The system learns from a huge influx of data so that certain things, specific applications, can be implemented. Why did Deep Blue win against Kasparov? It was able to look so many moves ahead in real time that it found unconventional lines Kasparov had not encountered before.
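The look-ahead the professor describes is, at its core, game-tree search. Deep Blue ran a heavily engineered version on custom hardware; the toy Python sketch below only illustrates the principle, plain minimax applied to a made-up game in which players alternately add 1 or 2 to a total and whoever reaches 11 wins:

```python
def minimax(total, maximizing):
    """Score a position in the toy game by searching all future moves.

    Players alternately add 1 or 2 to `total`; whoever reaches 11 (or
    more) wins. Returns +1 if the maximizing player wins with optimal
    play from here, -1 otherwise.
    """
    if total >= 11:
        return -1 if maximizing else 1  # the player who just moved has won
    scores = [minimax(total + step, not maximizing) for step in (1, 2)]
    # Each side assumes the opponent will also pick its best reply.
    return max(scores) if maximizing else min(scores)

# Choose the opening move by exhaustively scoring every option.
best = max((1, 2), key=lambda step: minimax(step, False))
print(best)  # -> 2, the winning opening move in this toy game
```

Deep Blue did the same kind of exhaustive scoring over chess positions, hundreds of millions of them per second, which is how moves appeared that no human opponent had prepared for.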

What are the types of artificial intelligence?

There is machine learning, the basic form of artificial intelligence: machines learning without human intervention. There is deep learning, which applies neural networks to huge databases. And finally there is something that has recently made a furore: generative artificial intelligence.
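What "learning" means here can be shown in a few lines. The sketch below is a deliberately minimal illustration on made-up data, not any particular product's algorithm: a single parameter is fitted by gradient descent, the same core loop that deep learning repeats across millions or billions of parameters in a neural network:

```python
# Toy (x, y) pairs where y is roughly 2x; the "learner" must discover that.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0                      # one trainable parameter in the model y = w * x
for _ in range(200):         # repeated exposure to the data
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad         # step against the error gradient
print(round(w, 2))           # -> 2.04: the slope was "learned" from examples
```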

The most famous program is ChatGPT, which will write an essay for us, pass a medical or legal exam, do mathematics homework, write a poem. It is as if a human were communicating in natural language, as if it could pass Alan Turing's test, in which a machine counts as intelligent when its conversation cannot be distinguished from a human's. But as a matter of fact, that's really not the case.

The earlier version of GPT needed 200 billion parameters, and training it takes huge machines about two months. Humans are in control of all of this. It doesn't mean that these programs suddenly start thinking for themselves, doing things autonomously. The 60 Minutes show, aired in the US, showed robots playing football without a coach. However, they were shown avatars of players from many matches, so they learned how to head the ball, score a goal and so on.

What has made the current capabilities of the Internet of Everything and artificial intelligence possible, and without what will further development be impossible?

What is needed is infrastructure, i.e. data availability, fast communication and huge computing power. Both in communication, that is, telecommunications over high-speed networks, and in computing power, the progress has been huge.

We keep miniaturizing devices. It was once said that they could not be shrunk below a micron. Now we build devices as small as 10 nanometers. How far can we go? If we use carbon nanotubes, we get down to 1 nanometer. That is already the level of individual molecules, a boundary we will not cross. So what can we do?

We are talking about two-dimensional scaling, although 10 years ago we switched to three-dimensional devices, namely FinFETs (fin field-effect transistors). That, however, was still mainly miniaturization in two dimensions; in three-dimensional integration we stack chiplet on chiplet. Such "high-rises" have the advantage that distances in the vertical direction are much shorter than when miniaturized devices are packed across a chip, which may measure, say, 3 x 2.5 cm. Vertically, we can stack so many layers that the next generation will not scale by just 50%, but by a factor of 10 or more. This is the third dimension.

Chart illustrating the scale of technology miniaturization in the 21st century

Miniaturization of technology, source: L. Su, AMD, ISSCC 2023

In addition, massively parallel processing is important. It is no longer possible to keep increasing clock speeds because of the huge power consumption, so instead we have created algorithms that use hundreds or thousands of cores with memory placed close to them. In fact, in the most popular applications roughly 90% of the power is used for CPU-memory communication and only 10% for the computation itself. Shortening the distance between processors and memory therefore gives us a very large opportunity to increase the efficiency of operations.
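The division of labour across cores can be sketched in a few lines. This is only an illustration of the parallel part of the idea, with made-up chunk and worker counts; the memory-locality gains the professor describes come from hardware design, not from code like this:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker computes on its own slice of the data independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the job into eight slices, one per worker process ("core").
    chunks = [data[i::8] for i in range(8)]
    with ProcessPoolExecutor(max_workers=8) as pool:
        # Eight small results are combined at the end, instead of one
        # core grinding through all million elements serially.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```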

Professor, what's all this for?

What are we trying to do? We're trying to catch up with the human brain. When it comes to the complexity of systems, we're actually not that far away. So where does the difference lie? We need 20 watts to make smart decisions, to think, and so on. Comparable computing systems need 500 megawatts, while the average power generated by a nuclear power plant is 1 gigawatt. When it comes to energy consumption, we will never catch up with the human brain.
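The gap is worth putting into numbers. Taking the professor's figures at face value, a quick back-of-the-envelope calculation:

```python
# The figures quoted above: a brain thinks on ~20 W, a brain-scale
# computing system needs ~500 MW, a nuclear plant delivers ~1 GW.
brain_w = 20
system_w = 500e6
plant_w = 1e9

print(f"{system_w / brain_w:,.0f}")  # -> 25,000,000: the brain is tens of
                                     #    millions of times more frugal
print(plant_w / system_w)            # -> 2.0: one plant feeds two such systems
```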

We try to do this using supercomputers, entire networks, systems housed in data centers. This requires a great deal of energy. The whole point is that reaching such computing power takes the massively parallel processing I have mentioned.

What are the barriers that can stand in the way of development?

The main barriers are cost and power consumption. Development must be geared towards greater energy efficiency. As for cost, not all applications require a "nuclear power plant"; let's use resources as needed. Here a very good symbiosis has emerged between the semiconductor industry and artificial intelligence – a marriage of technology and the requirements of artificial intelligence, which needs processors.

Is it possible for machines to revolt, or can artificial intelligence threaten humans to some degree?

I completely disagree with the vision of the destruction of our civilization by robots. This does not mean, however, that in the wrong hands artificial intelligence cannot be used for the wrong purposes. You can create programs that stop all activities, shut down all energy systems, and so on, but it's not artificial intelligence that will do it, just people using these technologies.

I can't believe that computers are going to turn against people on their own. Terrible things are happening in the world: wars, including religious wars, kill people, but this is the result of the sick ambitions of individuals. Artificial intelligence can be used in a positive way and can help us make rational, optimal decisions. That is where its applications should be focused.

Thank you for your time and talking to us.