[Image: A humanoid robot]

The SIFT (Scale-Invariant Feature Transform) algorithm, developed and patented by David Lowe, breaks an image down into many small, overlapping local features. Each feature is then compared individually against the features of a reference object, and the corresponding matches are grouped together. If there are enough correspondences between the object and the image, the detected object is drawn and oriented on the robot’s map, which is continuously updated. All of this happens in less than a second.
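
To give a concrete sense of this matching process, here is a minimal sketch using OpenCV’s SIFT implementation; the image file names and the match threshold are placeholders chosen for illustration, not values from any particular robotic system:

```python
# A minimal sketch of SIFT-based object matching with OpenCV
# (opencv-python >= 4.4; file names below are hypothetical).
import cv2

# Load a reference image of the object and a scene image in grayscale.
obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute their local feature descriptors.
sift = cv2.SIFT_create()
kp_obj, des_obj = sift.detectAndCompute(obj, None)
kp_scene, des_scene = sift.detectAndCompute(scene, None)

# Compare each object feature against the scene features,
# keeping only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
candidates = matcher.knnMatch(des_obj, des_scene, k=2)
good = []
for pair in candidates:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Enough correspondences suggest the object is present in the scene.
MIN_MATCHES = 10  # illustrative threshold
print("Object found" if len(good) >= MIN_MATCHES else "Object not found")
```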

This allows several objects to be recognized and localized in changing environments, a capability integrated into many robotic vision systems. The achievement is autonomy within constrained parameters.

The more technology evolves, the more real the existence of robots becomes. But as robots edge closer to autonomy, can we still say that they are intelligent? This is a valid question, since intelligence can be present without autonomy (Christopher Reeve, for example). Another relevant question is whether, even with total autonomy, they can be called intelligent.

These questions call for a clarification of the meaning of intelligence, which would give us criteria for measuring it. Dictionary definitions suggest that intelligent entities have an affinity for knowledge, an ability to understand it, and a capacity to solve problems through cognition. This implies autonomy, because such a thing must be able to store information (an affinity for knowledge, to be used in learning) and to overcome obstacles in an unpredictable environment (problem-solving through cognition). Understanding is a special aspect, as we will see.

Alan Turing thought along the same lines around 1950, when he developed his “Turing test”: an intelligence test, but for computers. Turing believed that if a computer could trick a human interrogator into thinking it was human – in a game of questions and answers – then it was intelligent.

This is an oversimplification, however. By comparison, current IQ tests for humans that are valid gauges of intelligence probe a set of “cognitive abilities”, such as analytical reasoning, spatial awareness, and verbal ability. Psychometricians have measured over seventy of these abilities. Robert Sternberg (IBM Professor of Psychology and Education at Yale University) reduced that number to three: analytical, creative, and practical.

These abilities are seen as parts of the interconnected systems that make up intelligence. Of course, they are abstractions layered over their respective brain regions. Together they represent a wide range of intellectual power: mathematical reasoning, finding solutions to new problems, creative writing, making quick decisions with lasting future implications, and so on. They dictate the essence of intelligent thinking. The Turing test, at most, probes a program in a narrowly compartmentalized manner.

If we take these cognitive abilities as the driving force of human thought, and add the need for a physical structure like a brain or a ganglion to process information, we get intelligence as, unmistakably, “a physical thing that has the capacity to learn and understand knowledge by exercising a set of cognitive abilities that allow the acquisition of knowledge”.

Given the inadequacy of the Turing test, there is the possibility of administering common standardized tests to computers, such as the SAT, GMAT, or GRE. If they succeeded, would we say they are intelligent by definition? After all, the relationship between a computer’s hardware and its program is surprisingly similar to that between physical bodies and DNA.

Although the QRIO or the EMIEW might pass, there are some issues. Perhaps the most important defining aspect of intelligence is that a thing has to “understand”, which is difficult for an entity made of pieces of plastic.
Professor John R. Searle’s Chinese room analogy, from his 1980 article “Minds, Brains, and Programs”, illustrates this point. A computer is not interested in the meaning of things – semantics – but only in the manipulation of symbolic representations, like “if x, then output z…”. What “x” and “z” actually mean is irrelevant.
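
To make this concrete, here is a toy sketch of pure rule-driven symbol manipulation in the spirit of the Chinese room; the rule table is entirely invented for illustration:

```python
# A toy illustration of Searle's point: the program shuffles symbols
# by rule, with no access to what any of them mean.
RULES = {
    "x": "z",            # "if x, then output z" -- pure symbol shuffling
    "ni hao": "hello",   # a lookup, not an act of understanding
}

def respond(symbol: str) -> str:
    # The match succeeds or fails on the symbol's shape alone;
    # the semantics of "x", "z", or "ni hao" never enter into it.
    return RULES.get(symbol, "unknown symbol")

print(respond("x"))       # -> z
print(respond("ni hao"))  # -> hello
```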

Take the word “death”. Star Trek’s Data can tell us that “death” is a noun, that it has five letters, and what its dictionary definition is. However, since he lacks emotions, he cannot understand it the way a human does. If Captain Picard were assassinated in his presence, he would have only a trivial reaction. For a sentient being, unlike Data, death carries a series of emotional/perceptual tags – sadness, fear, the faces of family members who have perished, and so on. These tags are what make it possible to understand concepts like death: tags that a robot does not have.
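
One way to picture the difference is as two data structures: one holding only dictionary facts about a word, the other also carrying the emotional/perceptual tags the text describes. This is purely an illustrative sketch, not a claim about how brains or robots actually represent concepts:

```python
# Illustrative only: a "Data-like" record holds dictionary facts;
# a sentient being's record also carries emotional/perceptual tags.
data_knows = {
    "word": "death",
    "part_of_speech": "noun",
    "letters": 5,
    "definition": "the end of life",
}

human_knows = {
    **data_knows,
    # Tags like these, the argument goes, are what ground understanding.
    "tags": ["sadness", "fear", "faces of family members who have perished"],
}
```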

Beyond understanding, emotions are considered an important precursor to other intellectual processes, since they contribute to both the motivation and the temporality of thought – think of the rush after solving a math problem (attaining a goal) or the memory of a period of great happiness.

Because emotions support motivation by helping to set goals, they are imperative to the planning and decision-making capacity of sentient beings: they are necessarily linked to autonomy. Time also has an important function in planning and decision-making. Temporality in thought, however, is also a product of consciousness. This is difficult to test, but the consensus is that computers lack this important attribute and therefore lack self-awareness.
