Learning to reason: to be or not to be for robotics.
Children learn how the world works by observing, experimenting and playing, and in doing so they develop intuition and reasoning. Could a robot do the same? Autonomous machines are one of the great scientific challenges of the 21st century. To build them, we need to design systems that help them perceive and understand the world around them in a human-like way. Part of the University of Zaragoza's artificial intelligence strategy is geared towards developing such systems.
Beatriz Moya García
November 20, 2021
Seeing, reasoning and learning are three abilities inherent to people. The human brain processes each image captured by the eyes in as little as 13 milliseconds, and from these images builds a picture of reality and its physical laws. This mechanism allows us to learn when we are children and remains active in adulthood: "Will the tower of blocks fall down? What happens if I throw a ball?"
To make these predictions, our brain performs rapid simulations based on the information it has learned and the physical principles it has deduced. The development of this intuition is also of great interest in robotics. The goal of this work is to create robots that are independent and learn to reason about the world around them.
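The kind of rapid prediction described above can be illustrated with a toy simulation. The function below is a hypothetical example, not part of the research itself: it uses the standard projectile equations (ignoring air resistance) to answer a "what happens if I throw a ball?" question before the throw is made.

```python
import math

def ball_clears_wall(v0, angle_deg, wall_dist, wall_height, g=9.81):
    """Predict whether a ball thrown at speed v0 (m/s) at the given
    launch angle (degrees) clears a wall wall_dist metres away,
    using ideal projectile motion with no air resistance."""
    angle = math.radians(angle_deg)
    vx = v0 * math.cos(angle)       # horizontal velocity component
    vy = v0 * math.sin(angle)       # vertical velocity component
    t = wall_dist / vx              # time at which the ball reaches the wall
    y = vy * t - 0.5 * g * t ** 2   # height of the ball at that instant
    return y > wall_height

print(ball_clears_wall(10, 45, 5, 2))  # True: the ball passes over a 2 m wall
```

A mental simulation of this sort, run in milliseconds, is what lets both a child and a robot anticipate an outcome without actually throwing the ball.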
For robots to interact with the environment, we want to control not only their own movements but also the consequences of their actions, and this is achieved by evaluating them in real time with a simulation of the world that imitates their senses and their perception of what is happening around them. The Artificial Intelligence laboratory at the Aragon Engineering Research Institute is developing research to convert the pixel values of images into information that a computer can understand. The aim is to predict the behaviour of liquids, which are very common in care tasks and industrial processes, in order to make informed decisions. But what distinguishes this technology from others?
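To give a flavour of what "converting pixel values into information a computer can understand" means, here is a deliberately simplified, hypothetical sketch (not the laboratory's actual method): a tiny synthetic image of a container is thresholded to separate liquid pixels from background, and the fill level is estimated from the pixel count.

```python
import numpy as np

# Toy 6x4 "image" of a container: bright rows (0.9) are liquid,
# dark rows (0.1) are empty background. A real system would use
# camera frames and learned perception models instead.
frame = np.array([[0.1] * 4] * 3 + [[0.9] * 4] * 3)

def fill_fraction(image, threshold=0.5):
    """Estimate how full the container is by counting the pixels
    whose intensity exceeds the threshold (assumed to be liquid)."""
    liquid = image > threshold
    return liquid.mean()

print(fill_fraction(frame))  # 0.5: half of the pixels are liquid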
Existing approaches suffer from two problems: a lack of physics, which can lead to inconsistent situations, and long computation times when making decisions. This research addresses both challenges to provide a solution that stays within the limits of physics and responds in real time.
What are digital twins?
For a robot to understand the world, it is necessary to simulate it, i.e. to create a virtual copy that translates it into the language of a computer. Digital twins are copies of machines, products or services that behave identically to their real counterpart. With them, we can learn the response of a system without having to alter the original. For example, we can test the action of a drug on the digital twin of a human heart to see if it will be effective. Among digital twins, live twins stand out: they react at the same speed as, or even faster than, their real counterpart. This technology makes it possible to anticipate possible problems and provide preventive solutions. By designing live twins of liquids, a robot can interpret what it is handling.
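The defining property of a live twin, running a virtual copy faster than reality so problems can be anticipated, can be sketched in a few lines. This is an illustrative toy, assuming a tank that drains at a constant rate; the real twins in this work model far richer liquid behaviour.

```python
def predict_empty_time(level, outflow_rate, dt=0.1):
    """Step a minimal virtual copy (twin) of a draining tank forward
    in time to anticipate when it will run empty, without touching
    the physical tank itself. Uses a simple constant-outflow model."""
    t = 0.0
    while level > 0:
        level -= outflow_rate * dt   # simulated drop in liquid level
        t += dt                      # simulated time elapsed
    return t

print(predict_empty_time(1.0, 1.25))  # about 0.8 s until empty
```

Because this loop runs much faster than the physical process it mirrors, the robot knows the tank will be empty well before it actually happens and can act preventively.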
What's the next challenge for intelligent robots?
Far from being substitutes, independent robots are being created to assist people in tasks ranging from the mundane to the dangerous, but there are still major challenges to face in their development. Although vast amounts of data are available, the data we need is sometimes inaccessible or hard to come by. It is therefore necessary to create systems that recover important, non-measurable information to advance application development.
At the same time, although existing perception models may be accurate, situations outside their learning range can occur. Intelligent systems must evolve towards adaptive models capable of detecting when their predictions no longer match the reality they perceive, and of correcting themselves. This is what we call a hybrid twin.
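The hybrid-twin idea, a physics model corrected by a term learned from the mismatch with measurements, can be illustrated with a minimal sketch. Everything here is assumed for the example: a toy physics model, a few fake measurements, and the simplest possible correction (the average residual).

```python
def physics_model(x):
    """Idealised physics prediction of the next state of some quantity."""
    return 0.9 * x

def hybrid_predict(x, correction):
    """Hybrid twin: physics prediction plus a data-driven correction
    term learned from the mismatch between model and reality."""
    return physics_model(x) + correction(x)

# Fake measurements: reality actually follows 0.9*x + 0.05
# (an unmodelled effect the pure physics model misses).
measurements = [(1.0, 0.95), (2.0, 1.85), (4.0, 3.65)]

# Learn the average residual between model and observations.
residuals = [y - physics_model(x) for x, y in measurements]
bias = sum(residuals) / len(residuals)

corrected = hybrid_predict(3.0, lambda x: bias)
print(corrected)  # 2.75, matching the unmodelled effect
```

The point is the structure, not the arithmetic: the model notices its own systematic error from data and corrects itself, rather than being retrained from scratch.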
How far does data science go?
In the age of the internet of things we have millions of data points at our disposal, but we also need to learn how to work with this volume of information: what does the data tell us? What knowledge does it hide?
A model is the mathematical replica of a real phenomenon that we want to work with. Models learned from data, in our case images, are increasingly common and have been strongly influenced by the latest trends in artificial intelligence.
In particular, neural networks are structures that mimic brain connections in the learning process. Despite their power, their use has drawbacks. The networks find a good solution with the information available to them, but it may not be the optimal one. Incorporating the knowledge of physics acquired over centuries of scientific advances is the key to guiding the learning of what we are trying to model. But working with this amount of data may not be computationally feasible. Among the mathematical techniques that make up data science are so-called model reduction techniques, which analyse the actual information contained in the data to reduce its complexity and learn a simpler system carrying the same information. In this way we can achieve real-time response.