A robot equipped with cameras in its head

Self-awareness allows you to stand “beside” your life as you live it – the third-person perspective. It also has many complex implications for our decision-making processes.
Depending on the person, some decisions are based on long-term goals: “Where do I want to be in five years?”
It is the ability to converse with an “inner voice” – that third person – that gives people a unique and powerful capacity for decision making and planning.

Providing a machine with such a sense of place and time is, with the research available today, at best an enigma, even for thoughtful laymen. Yet it is the essence of life, the epitome of an autonomy that even an insect possesses. Planning and anticipation are at the heart of what it means to be alive – the driving force behind all life, whether that means reproducing or, more ambitiously, becoming the next president.

A simple planning action like “I think calm will reign at the airport next Wednesday, so I will book my flight” arises from emotion and awareness: emotion, from the unpleasant feelings associated with large crowds and overbooked planes; anticipation, from a set of past perceptual experiences (consciousness). The latter is an application of inductive reasoning.
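The inductive step in that example can be sketched in code: predict next Wednesday's crowding from past observations of the same weekday, then let an emotional preference turn the prediction into a plan. Everything here – the data, the threshold, the names – is a hypothetical illustration, not part of any real system.

```python
# Hypothetical sketch of inductive reasoning as prediction from experience.
# Past perceptual experiences: crowd levels (0 = empty, 10 = packed)
# observed at the airport on previous Wednesdays.
past_wednesdays = [3, 2, 4, 3, 2]

# Induction: assume the future resembles the past and take the average.
expected_crowding = sum(past_wednesdays) / len(past_wednesdays)

# The "emotional" part – an aversion to large crowds – is encoded here
# as a simple tolerance threshold (an assumed value).
CROWD_TOLERANCE = 5
decision = "book the flight" if expected_crowding < CROWD_TOLERANCE else "wait"

print(f"expected crowding: {expected_crowding:.1f} -> {decision}")
```

The point of the sketch is only that the prediction comes entirely from past observations – there is no understanding of airports or Wednesdays involved, which is exactly the gap the following paragraphs describe.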

But even if programs can overcome decision-making problems and give an appearance of autonomy, without consciousness or emotions computers lack understanding and a sense of self, and thus lack the capacities necessary for intelligent thought.
Critics dismiss them as empty impostors that understand nothing. Although they meet some of the criteria, they do not meet the most basic definition of human intelligence.

Programmers refine and blend logical systems from AI and philosophy to create more powerful programs. Currently, no logical system is powerful enough to adequately model human reasoning or to produce a practical, fully autonomous robot.
Although AI has advanced far beyond Turing's early machines, there is still a long way to go before fully autonomous robots can emerge from their long years of theory and testing.
The IBM Blue Brain project is an attempt to better model the reasoning power of the human mind.

According to the official project site, “Using the enormous computing capacity of the IBM Blue Gene eServer, researchers from IBM and EPFL (Ecole Polytechnique Fédérale de Lausanne) will be able to create a detailed model of the neocortex circuits – the largest and most complex part of the human brain.”

A full three-dimensional computer model of the human brain would make it possible to better understand the processes that underlie thought – an important contribution to AI research. It could also, ambitiously, pave the way toward positronic brains for robots; in some schools of thought, positronic brains could give rise to consciousness.
Researchers at the Massachusetts Institute of Technology took an evolutionary angle in 1997 by modeling their KISMET robot on a child. KISMET was unique in that it was an autonomous, anthropomorphic robot designed to interact emotionally with humans. It was a start.
The results of this line of robotic evolution were well showcased at the EXPO in Aichi, Japan, which hosted a generous demonstration of robots ranging from childcare robots to portrait-painting robots. With 63 prototypes on display, there seemed to be a robot for everything.
