In computer science, artificial intelligence (AI) refers to the non-natural intelligence of non-living rational agents. 1 2 3 John McCarthy coined the term in 1956 and defined it as: "the science and engineering of making intelligent machines, especially intelligent computer programs." 4
To unpack this definition: an intelligent agent is one that can think, evaluate, and act according to certain principles of optimality and consistency in order to satisfy some goal or purpose. Under this view, rationality is more general than intelligence, and therefore better suited to defining the nature of this discipline's objective.
More specifically, then, artificial intelligence is the discipline responsible for building processes which, when executed on a hardware architecture, produce actions or results that maximize a given performance measure, based on the sequence of inputs received and on the knowledge stored in that architecture.
There are different types of knowledge and means of representing it; knowledge can be loaded into the agent by its designer or learned by the agent itself using learning techniques.
There are also several kinds of valid processes for producing rational outcomes, and these determine the type of intelligent agent. From simplest to most complex, the five main types of processes are:
Executing a predefined response for each input (analogous to reflex actions in living beings).
Searching for the required state within the set of states produced by the possible actions.
Genetic algorithms (analogous to the process of evolution of the DNA strands).
Artificial neural networks (analogous to the physical functioning of the brain of animals and humans).
Reasoning by formal logic (analogous to human abstract thought).
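The second process type, searching for the required state among the states produced by the possible actions, can be sketched as a breadth-first search over a state graph. The graph, state names, and goal below are purely illustrative, not drawn from any particular system:

```python
from collections import deque

def search_goal(start, goal, actions):
    """Breadth-first search for a required state among the states
    produced by the possible actions (a dict: state -> successor states)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path                      # sequence of states leading to the goal
        for nxt in actions.get(state, []):   # states produced by possible actions
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                              # goal state not reachable

# Hypothetical state graph: each key lists the states its actions can produce.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "E": ["G"]}
print(search_goal("A", "G", graph))  # ['A', 'C', 'E', 'G']
```

Breadth-first search is only one of many valid search strategies; depth-first or heuristic search would fit the same description.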
There are also different kinds of perceptions and actions, which can be obtained and produced, respectively, by physical and mechanical sensors in machines, by electrical or optical pulses in computers, and as input and output bits of a piece of software and its software environment.
Examples are found in the areas of systems control, automated planning, the ability to diagnose and answer consumer inquiries, handwriting recognition, speech recognition, and pattern recognition. AI systems are now part of everyday routine in fields such as economics, medicine, engineering, and the military, and have been used in a variety of software applications, in strategy games such as chess, and in other video games.
Systems that think like humans. These systems try to emulate human thought; an example is artificial neural networks. "The automation of activities that we associate with human thinking, activities such as decision making, problem solving, and learning." 6
Systems that act like humans. These systems try to act as humans do, i.e., to imitate human behavior; an example is robotics. "The study of how to make computers perform tasks that, at present, people do better." 7
Systems that think rationally. These try to imitate or emulate, logically and ideally, rational human thinking; an example is expert systems. "The study of the computations that make it possible to perceive, reason, and act." 8
Systems that act rationally (ideally). These try to emulate rational behavior; an example is intelligent agents. "Concerned with intelligent behavior in artifacts." 9
Schools of thought
AI is divided into two schools of thought:
Conventional artificial intelligence
Computational intelligence
Conventional artificial intelligence
Also known as symbolic, deductive AI. It is based on formal and statistical analysis of human behavior when faced with different problems:
Case-based reasoning: helps make decisions while solving certain concrete problems which, beyond being very important, demand good performance.
Expert systems: infer a solution from prior knowledge of the context in which they are applied, working with a set of rules or relationships.
Bayesian networks: propose solutions by means of probabilistic inference.
Behavior-based artificial intelligence: systems that possess autonomy and can self-regulate and control themselves in order to improve.
Intelligent process management: facilitates complex decision making by proposing a solution to a problem in the way a specialist in the activity would.
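The probabilistic inference behind Bayesian networks can be illustrated with a minimal two-variable network, Fault → Alarm. All of the probabilities below are invented for illustration; a real network would have many nodes and learned or elicited conditional tables:

```python
# Minimal Bayesian inference over a hypothetical two-variable network.
p_fault = 0.01                 # prior P(Fault)
p_alarm_given_fault = 0.97     # P(Alarm | Fault)
p_alarm_given_ok = 0.05        # P(Alarm | no Fault), the false-alarm rate

# Bayes' rule: P(Fault | Alarm) = P(Alarm|Fault) * P(Fault) / P(Alarm)
p_alarm = (p_alarm_given_fault * p_fault
           + p_alarm_given_ok * (1 - p_fault))
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm

print(round(p_fault_given_alarm, 3))  # 0.164
```

Even with a 97% reliable alarm, the posterior probability of a fault is only about 16%, because faults are rare: this kind of prior-sensitive reasoning is exactly what Bayesian networks automate at scale.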
Computational artificial intelligence
Main article: Computational Intelligence
Computational intelligence (also known as subsymbolic, inductive AI) involves iterative development or learning (e.g., iterative modification of the parameters of connectionist systems). Learning is performed on the basis of empirical data.
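Learning by iterative parameter modification can be sketched with a single perceptron trained on empirical data. The training set (the logical AND function), learning rate, and number of passes are all illustrative choices:

```python
# A minimal sketch of subsymbolic learning: a perceptron whose
# parameters (weights and bias) are modified iteratively from data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]
b = 0.0
rate = 0.1

for _ in range(20):                      # repeated passes over the empirical data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # the error drives the parameter change
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

No rule about AND is ever written down; the behavior emerges from the data, which is the defining contrast with the symbolic school above.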
Main article: History of artificial intelligence
The term "artificial intelligence" was formally coined in 1956 at the Dartmouth Conference, although work on the subject had been under way for five years before that, during which many different definitions had been proposed without any of them gaining full acceptance in the research community. AI is one of the newest disciplines, together with modern genetics, and both are among the most attractive fields for scientists today.
The basic ideas go back to the ancient Greeks. Aristotle (384-322 BC) was the first to describe a set of rules covering part of the mind's workings in reaching rational conclusions, and Ctesibius of Alexandria (c. 250 BC) built the first self-controlled machine, a water-flow regulator (though one without rational reasoning).
In 1315, Ramon Llull, in his book Ars magna, had the idea that reasoning could be carried out artificially.
In 1936 Alan Turing designed a formal universal machine, demonstrating the feasibility of a physical device implementing any formally defined computation.
In 1943 Warren McCulloch and Walter Pitts presented their model of artificial neurons, which is considered the first work in the field even though the term did not yet exist. The first major developments began in the early 1950s with the work of Alan Turing, after which the science has passed through various phases.
In 1955 Herbert Simon, Allen Newell, and J. C. Shaw developed the first problem-solving-oriented programming language, IPL-11. A year later they developed the Logic Theorist, which was able to prove mathematical theorems.
In 1956 the term "artificial intelligence" was coined by John McCarthy, Marvin Minsky, and Claude Shannon at the Dartmouth Conference, a conference at which triumphalist ten-year predictions were made that never came true, leading to the near-total abandonment of research for fifteen years.
In 1957 Newell and Simon continued their work with the development of the General Problem Solver (GPS), a system geared to problem solving.
In 1958 John McCarthy developed LISP at the Massachusetts Institute of Technology (MIT). Its name derives from LISt Processor. LISP was the first language for symbolic processing.
In 1959 Rosenblatt introduced the Perceptron.
In the late 1950s and early 1960s, Robert K. Lindsay developed "Sad Sam", a program that read sentences in English and drew inferences from their interpretation.
In 1963 Quillian developed semantic networks as a knowledge-representation model.
In 1964 Bertram Raphael built the SIR system (Semantic Information Retrieval), which was able to infer knowledge from the information supplied to it. Bobrow developed STUDENT.
Between 1968 and 1970, Terry Winograd developed SHRDLU, which allowed questions to be asked of, and orders given to, a robot moving in a world of blocks.
In the mid-1960s, expert systems appeared, which predict the probability of a solution under a given set of conditions. Examples include DENDRAL, begun in 1965 by Buchanan, Feigenbaum, and Lederberg, the first expert system, which assisted chemists in elucidating complex chemical structures, and MACSYMA, which assisted engineers and scientists in solving complex mathematical equations.
In 1968 Minsky published Semantic Information Processing .
In 1968 Seymour Papert, Danny Bobrow, and Wally Feurzeig developed the programming language LOGO.
In 1969 Alan Kay developed the Smalltalk language at Xerox PARC; it was published in 1980.
In 1973 Alain Colmerauer and his research team at the University of Aix-Marseille created Prolog (from the French Programmation en Logique), a programming language widely used in AI.
In 1973, Schank and Abelson developed scripts, pillars of many current techniques in artificial intelligence and in computing generally.
In 1974 Edward Shortliffe wrote his thesis on MYCIN, one of the best-known expert systems, which assisted physicians in the diagnosis and treatment of blood infections.
In the 1970s and 1980s, the use of expert systems like MYCIN grew: R1/XCON, ABRL, PIP, PUFF, CASNET, INTERNIST/CADUCEUS, etc. Some shells and tools remain in use to this day, such as EMYCIN, EXPERT, and OPSS.
In 1981, Kazuhiro Fuchi announced the Japanese Fifth Generation computer project.
In 1986 Rumelhart and McClelland published Parallel Distributed Processing ( Neural Networks ).
In 1988, object-oriented languages became established.
In 1997 Garry Kasparov, world chess champion, lost a match against the autonomous computer Deep Blue.
In 2006, the fiftieth anniversary of artificial intelligence was celebrated with the Spanish congress "50 Years of Artificial Intelligence – Multidisciplinary Campus in Perception and Intelligence 2006".
In 2009, intelligent therapeutic systems were being developed that can detect emotions in order to interact with autistic children.
In 2011 IBM developed a supercomputer named Watson, which won three straight rounds of Jeopardy!, beating its two top champions and winning a $1 million prize that IBM then donated to charity. 10
There are people who, when talking with a chatbot, do not realize they are speaking with a program, thereby satisfying the Turing test as it was originally stated: "There will be AI when we are not able to distinguish between a human being and a computer program in a blind conversation."
As an anecdote, many AI researchers maintain that "intelligence is a program capable of being executed independently of the machine that runs it, be it a computer or a brain".
Artificial intelligence and feelings
The concept of AI is still too vague. Placing it in context, from a scientific standpoint we can frame this science as the one charged with imitating a person, and not the person's body but the brain, in all its functions, whether as they exist in humans or as fabricated in the development of an intelligent machine.
Sometimes, on hearing the definition of artificial intelligence, one thinks of intelligent machines without feelings that would "obstruct" finding the best solution to a given problem. Many think of artificial devices able to derive thousands of conclusions from given premises, without any emotion having the chance to block those efforts.
Along these lines, one should know that intelligent systems already exist that are able to make "successful" decisions.
Although at present most researchers in artificial intelligence focus only on the rational side, many of them seriously consider the possibility of incorporating "emotional" components as status indicators in order to increase the efficiency of intelligent systems.
Mobile robots, in particular, need something like emotions in order to know, at every moment and at minimal cost, what to do next [Pinker, 2001, p. 481].
By having "feelings" and, at least potentially, "motivations", they could act according to their "intentions" [Mazlish, 1995, p. 318]. Thus, a robot could be equipped with devices that monitor its internal environment, for example so that it "feels hungry" on detecting that its energy level is falling, or "feels fear" when that level is too low.
This signal could interrupt higher-level processes and force the robot to obtain the needed resource [Johnson-Laird, 1993, p. 359]. One could even introduce "pain" or "physical suffering" so that the robot avoids operating hazards, for example putting its hand into a gear train or jumping from a height, either of which would cause it irreparable damage.
This means that intelligent systems must be equipped with feedback mechanisms that give them knowledge of their internal states, just as humans have proprioception, interoception, nociception, and so on. This is critical for making decisions that preserve the system's own integrity and safety. Feedback systems are particularly developed in cybernetics, for example in a missile's autonomous changes of direction and speed, which use as a parameter at each instant its position relative to the target. This should be distinguished from the knowledge a system or program may have of its own internal state, such as the number of completed cycles in a do or for loop, or the amount of memory available for a particular operation.
An intelligent system's disregard for emotional elements allows it never to lose sight of the goal to be attained. In humans, abandoning a goal because of emotional disturbance is a problem that in some cases becomes disabling. Intelligent systems that combine a durable memory, an assignment of goals or motivation, and decision making with prioritization based on current states and goal states achieve extremely efficient behavior, especially on complex and dangerous problems.
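The feedback idea above can be sketched as a toy loop: a hypothetical robot whose internal energy state, a crude analogue of interoception, interrupts the current task when it falls below a threshold. The class, thresholds, and task names are all invented for illustration:

```python
# Toy sketch of internal-state feedback: the "hunger" signal overrides
# the assigned task, analogous to the motivations discussed above.
class Robot:
    def __init__(self):
        self.energy = 100          # internal state the robot can sense

    def step(self, task):
        self.energy -= 30          # each action consumes energy
        if self.energy < 50:       # "feels hungry": feedback overrides the task
            return "recharge"
        return task

robot = Robot()
print([robot.step("explore") for _ in range(3)])  # ['explore', 'recharge', 'recharge']
```

The point is only that the decision at each step depends on a sensed internal state, not just on the external goal, which is the mechanism the passage describes.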
In short, the rational and the emotional are so interrelated that one could say they are not only not contradictory but are, to some extent, complementary.
See also: The Age of Spiritual Machines
The main criticisms of artificial intelligence concern its ability to fully imitate a human being. These criticisms ignore the fact that no individual human is capable of solving every kind of problem, and authors such as Howard Gardner have proposed that multiple intelligences exist. An artificial intelligence system should solve problems; it is therefore fundamental in its design to delimit the types of problems it will solve and the strategies and algorithms used to find their solutions.
In humans, the ability to solve problems has two aspects: innate and learned. Innate aspects allow us, for example, to store and retrieve information in memory, while learned aspects let us solve a mathematical problem using the appropriate algorithm. Just as a human must have tools with which to solve certain problems, artificial systems must be programmed so that they can solve them.
Many people consider that the Turing test has already been passed, citing conversations in which the person communicating with an artificial intelligence chatbot does not know they are speaking with a program. However, this situation is not equivalent to a Turing test, which requires that the participant be warned of the possibility of talking to a machine.
Other thought experiments, such as John Searle's Chinese Room, have shown how a machine could simulate thinking without actually possessing it, passing the Turing test without ever understanding what it does. This would show that the machine is not really thinking, since acting according to a predetermined program would suffice. If for Turing the fact of deceiving a human being who is trying to avoid being deceived is a sign of an intelligent mind, Searle holds that the same effect can be achieved by rules defined a priori.
One of the biggest problems of artificial intelligence systems is communication with the user. The obstacle stems from the ambiguity of language, and it appeared as early as the first computer operating systems. Humans' ability to communicate with one another presupposes knowledge of the language used by the interlocutor. For a human to communicate with an intelligent system there are two options: either the human learns the system's language, as one learns to speak a language other than one's native tongue, or the system has the ability to interpret the user's message in the language the user employs.
A human spends a lifetime learning the vocabulary of their native language, and interprets messages despite words having multiple meanings by using context to resolve ambiguities. One must, however, know the various meanings in order to interpret them, which is why specialized and technical languages are known only to experts in the respective disciplines. An artificial intelligence system faces the same problems: the polysemy of human language, its syntax, and the unstructured dialects of different groups.
Advances in artificial intelligence are greater in the disciplinary fields where there is greater consensus among specialists. An expert system is more likely to be programmed in physics or medicine than in sociology or psychology, owing to the difficulty of reaching consensus among experts on the definitions of the concepts involved and on the procedures and techniques to use. In physics, for example, there is agreement on the concept of velocity and how to calculate it; in psychology, by contrast, concepts, etiology, psychopathology, and how to proceed with a given diagnosis are matters of debate. This hinders the creation of intelligent systems, because there will always be disagreement over what the system would be expected to do. Despite this, there have been great advances in the design of expert systems for diagnosis and decision making in medicine and psychiatry (Adaraga Morales, Zaccagnini Sancho, 1994).
Objects are entities that have a certain state, behavior (methods), and identity:
The state is composed of data: one or more attributes to which specific values (data) are assigned.
The behavior is defined by the methods, or messages, to which the object can respond, i.e., the operations that can be performed on it.
The identity is a property of an object that distinguishes it from all others; in other words, it is its identifier (a concept analogous to the identifier of a variable or constant).
An object contains all the information needed to define and identify it relative to other objects, whether they belong to other classes or to its own class (by having distinct values in their attributes). In turn, objects interact through mechanisms known as methods, which enable communication between them, and this communication in turn drives state changes in the objects themselves. This feature leads them to be treated as indivisible units in which state and behavior are never separated.
The methods (behavior) and attributes (state) are closely related by belonging to the same grouping: a class must possess methods for handling the attributes it contains. The programmer must think of both concepts together, without separating them or giving one greater weight than the other. Doing otherwise can create the bad habit of writing information-container classes on one hand and classes with methods that manipulate them on the other, which amounts to structured programming camouflaged in an object-oriented language.
OOP differs from traditional structured programming, in which data and procedures are separate and unrelated, and in which all one seeks is to process some input data into output data. Structured programming leads the programmer to think first in terms of procedures or functions and only secondarily of the data structures those procedures handle; in structured programming one simply writes functions that process data. Programmers who use OOP, by contrast, first define objects and then send them messages asking them to carry out their own methods.
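The object model just described, state held in attributes, behavior exposed as methods, and identity distinguishing objects even when their attributes match, can be sketched in a few lines. The Account class and its names are purely illustrative:

```python
# Sketch of state, behavior, and identity in the object model above.
class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner          # state: attributes with specific values
        self.balance = balance

    def deposit(self, amount):      # behavior: an operation the object responds to
        self.balance += amount
        return self.balance

a = Account("Ada", 100)
b = Account("Ada", 100)             # same attribute values as a
a.deposit(50)                       # "sending a message" asks the object to act
print(a.balance, b.balance)         # 150 100 -- the message changed only a's state
print(a is b)                       # False -- equal attributes, distinct identity
```

Note that the deposit message mutates the receiving object's own state rather than returning a transformed copy, which is the contrast with structured programming drawn above.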
The concepts of object-oriented programming originated in Simula 67, a language designed for simulation, created by Ole-Johan Dahl and Kristen Nygaard of the Norwegian Computing Center in Oslo. The center was working on ship simulations, which were confounded by the combinatorial explosion of ways the various attributes of different ships could affect one another. The idea was to group the different kinds of ships into classes of objects, each class being responsible for defining its own data and behavior. These concepts were later refined in Smalltalk, developed from Simula at Xerox PARC (whose first version was written in BASIC) but designed to be a fully dynamic system in which objects could be created and modified "on the fly" (at runtime) rather than a system based on static programs.
Object-oriented programming became the dominant programming style in the mid-1980s, largely due to the influence of C++, an extension of the C programming language. Its dominance was consolidated by the rise of graphical user interfaces, for which object-oriented programming is particularly well suited. In that context one speaks of event-driven programming.
Object-oriented features were added to many existing languages during that time, including Ada, BASIC, Lisp, and Pascal, among others. Adding these features to languages not initially designed for them often led to problems of code compatibility and maintainability. "Pure" object-oriented languages, for their part, lacked features that many developers had come to depend on. To clear this hurdle, many attempts were made to create new languages based on object-oriented methods while allowing some imperative features in "safe" ways. Bertrand Meyer's Eiffel was an early and moderately successful language with those goals, but it has now been essentially displaced by Java, largely thanks to the emergence of the Internet and the implementation of the Java virtual machine in most browsers. PHP, as of version 5, was amended to support full object orientation, fulfilling all the characteristics of the paradigm.
Object-oriented programming is a way of attempting to solve these problems. It introduces new concepts that extend beyond older, familiar ones, among them the following:
Class: the definition of the properties and behavior of a particular type of object. Instantiation is the reading of these definitions and the creation of an object from them.
Inheritance: (e.g., class D inherits from class C) the facility by which class D inherits each of the attributes and operations of C, as if those attributes and operations had been defined in D itself. D can therefore use the methods and public variables declared in C. Components declared "private" are also inherited, but since they belong to the parent class they remain hidden from the programmer and can only be reached through other, public, methods. This preserves the guiding ideal of OOP.
Object: an entity endowed with a set of properties or attributes (data) and of behavior or functionality (methods) with which it reacts to events. It corresponds either to a real-world object around us or to an internal object of the system (the program). It is an instance of a class.
Method: an algorithm associated with an object (or class of objects) whose execution is triggered on receipt of a "message". From the behavioral point of view, it is what the object can do. A method can produce a change in the object's properties, or generate an "event" carrying a new message for another object of the system.
Event: an occurrence in the system (such as a user's interaction with the machine, or a message sent by an object). The system handles an event by sending the appropriate message to the relevant object. "Event" can also denote the reaction an object can trigger, i.e., the action it generates.
Message: a communication directed at an object, ordering it to execute one of its methods with certain parameters associated with the event that produced it.
Property or attribute: a data container associated with an object (or class of objects), which makes the data visible from outside the object; it is defined among the object's characteristics, and its value can be altered by the execution of some method.
Internal state: a variable declared private, which can be accessed and altered only by a method of the object, and which is used to indicate the object's different possible situations. It is not visible to the programmer who uses an instance of the class.
Components of an object: attributes, identity, relationships, and methods.
Identification of an object: an object is represented by a table or entity composed of its attributes and functions.
Compared with an imperative language, a "variable" is merely an internal container of the object, an attribute of its internal state, and a "function" is an internal procedure, a method of the object.
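Several of the concepts in the list above, class, instantiation, inheritance, method, and message, can be shown together in a short sketch. The Shape/Square hierarchy is an invented example, not taken from any particular system:

```python
# Inheritance and message dispatch as described in the concept list.
class Shape:                         # the parent class C: shared definitions
    def __init__(self, name):
        self.name = name             # attribute inherited by subclasses

    def area(self):                  # operation subclasses are expected to supply
        raise NotImplementedError

class Square(Shape):                 # class D inherits attributes and operations of C
    def __init__(self, side):
        super().__init__("square")   # reuses the parent's initializer
        self.side = side

    def area(self):                  # method triggered on receiving the "area" message
        return self.side ** 2

s = Square(3)                        # instantiation: creating an object from the class
print(s.name, s.area())              # square 9 -- inherited attribute, own method
```

Sending the same `area` message to any Shape reference and getting type-appropriate behavior is exactly the polymorphism discussed in the features section.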
Features of OOP
There is agreement about which features "object orientation" encompasses; the following are the most important:
Abstraction: denotes the essential characteristics of an object that capture its behavior. Each object in the system serves as a model of an abstract "agent" that can perform work, report on and change its state, and "communicate" with other objects in the system without revealing how these features are implemented. Processes, functions, or methods can also be abstracted, and when they are, a variety of techniques are required to extend the abstraction. The process of abstraction allows us to select the relevant features of a set and identify common behaviors in order to define new types of entities in the real world. Abstraction is key to object-oriented analysis and design, since through it we can build the set of classes that model the reality or the problem to be tackled.
Encapsulation: means bringing together all the elements that can be considered as belonging to the same entity, at the same level of abstraction. This increases the cohesion of the system's components. Some authors confuse this concept with the hiding principle, mainly because the two are often used together.
Modularity: the property that allows an application to be subdivided into smaller parts (called modules), each of which must be as independent as possible from the application itself and from the other parts. These modules can be compiled separately while retaining their connections to other modules. As with encapsulation, languages support modularity in various ways.
Hiding principle: each object is isolated from the outside; it is a natural module, and each type of object exposes an interface to other objects specifying how they may interact with objects of the class. This isolation protects an object's properties from modification by anyone not entitled to access them; only the object's own internal methods can access its state. It ensures that other objects cannot change an object's internal state in unexpected ways, eliminating unexpected side effects and interactions. Some languages relax this rule, allowing direct access to an object's internal data in a controlled manner and limiting the degree of abstraction. The whole application is thus reduced to an aggregate, or jigsaw puzzle, of objects.
Polymorphism: different behaviors associated with different objects can share the same name; when that name is invoked, the behavior appropriate to the object in use is executed. Put another way, references and collections of objects can contain objects of different types, and invoking a behavior through a reference produces the correct behavior for the actual type of the referenced object. When this occurs at runtime, the feature is called late binding or dynamic dispatch. Some languages provide more static ("compile-time") means of polymorphism, such as templates and operator overloading in C++.
Inheritance: classes are not isolated but interrelated, forming a classification hierarchy. Objects inherit the properties and behavior of all the classes to which they belong. Inheritance organizes and facilitates polymorphism and encapsulation, allowing objects to be defined and created as specialized types of existing objects that can share (and extend) behavior without having to reimplement it. This is usually done by grouping objects into classes and arranging the classes in trees or lattices that reflect common behavior. When an object inherits from more than one class, we speak of multiple inheritance.
Garbage collection: the technique by which the runtime environment automatically destroys objects, freeing the associated memory, once no references to them remain. The programmer therefore need not worry about allocating or releasing memory: the environment allocates it when a new object is created and frees it when nothing uses the object any longer. In most hybrid languages extended to support the object-oriented paradigm, such as C++ and Object Pascal, this feature does not exist and memory must be deallocated manually.
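The hiding principle above can be sketched concretely: an object whose internal state is reachable only through its own methods. In Python, hiding is by convention plus name mangling rather than enforced access control, which is one of the "relaxed" designs the text mentions; the Counter class is illustrative:

```python
# Sketch of the hiding principle: only the object's own methods
# touch its internal state; outside code uses the public interface.
class Counter:
    def __init__(self):
        self.__count = 0             # internal state; name-mangled by Python

    def increment(self):             # public interface exposed to other objects
        self.__count += 1

    def value(self):
        return self.__count

c = Counter()
c.increment()
c.increment()
print(c.value())                     # 2
print(hasattr(c, "__count"))         # False: the raw state is not directly exposed
```

A determined caller can still reach the mangled name (`_Counter__count`), so Python limits rather than forbids access, unlike languages with strictly enforced private members.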
Object-oriented programming is a paradigm that uses objects as the key elements in building solutions. It arose in the 1970s. An object is an abstraction of some real-world fact or entity; it has attributes representing its characteristics or properties, and methods accounting for its behavior or the actions it performs. All the properties and methods common to a set of objects are encapsulated, or grouped, into classes. A class is a template or prototype for creating objects; objects are said to be instances of classes.
A Czech writer coined a word, robot, that would come to change the notion of the machine as a helpful slave.
Robotics is the science and technology of robots: their research, study, design, manufacture, and application. 1 2 Robotics combines various disciplines, such as mechanics, electronics, computer science, artificial intelligence, and control engineering. 3 Other areas important to robotics are algebra, programmable logic devices, and state machines.
The term robot was popularized by the success of the play R.U.R. (Rossum's Universal Robots), written by Karel Čapek in 1920. In the English translation of the work, the Czech word robota, meaning forced labor, was rendered in English as robot. 4
The history of robotics has been linked to the construction of "artifacts" that attempt to realize the human desire to create beings in our own likeness that would relieve us of work. The Spanish engineer Leonardo Torres Quevedo (who built the first wireless-telegraphy remote control for his automobile, a chess-playing machine, the first aerial cable car, and many other devices) coined the term "automatics" in relation to the theory of automating tasks traditionally associated with humans.
Karel Čapek, a Czech writer, coined the term "robot" in 1921 in his play Rossum's Universal Robots (R.U.R.), from the Czech word robota, meaning servitude or forced labor. The term robotics was coined by Isaac Asimov to name the science of robots; Asimov also created the Three Laws of Robotics. In science fiction, man has imagined robots visiting new worlds, seizing power, or simply relieving us of housework.
Date | Significance | Robot name | Inventor
1st century BC and earlier | Descriptions of more than 100 machines and automata, including a fire engine, a wind organ, a coin-operated machine, and a steam engine, in Pneumatica and Automata by Heron of Alexandria | (unnamed) | Ctesibius of Alexandria, Philo of Byzantium, Heron of Alexandria, and others
1206 | First programmable humanoid robot | Boat with four robotic musicians | Al-Jazari
c. 1495 | Design for a humanoid robot | Mechanical knight | Leonardo da Vinci
1738 | Mechanical duck able to eat, flap its wings, and excrete | Digesting Duck | Jacques de Vaucanson
1800s | Japanese mechanical toys that served tea, fired arrows, and painted | Karakuri toys | Tanaka Hisashige
1921 | The first fictional automaton called a "robot" appears in R.U.R. | Rossum's Universal Robots | Karel Čapek
1930s | A humanoid robot exhibited at the World's Fair of 1939 and 1940 | Elektro | Westinghouse Electric Corporation
1948 | Exhibition of robots with simple biological behavior 5 | Elsie and Elmer | William Grey Walter
1956 | First commercial robot, from the company Unimation, founded by George Devol and Joseph Engelberger on the basis of Devol's patents 6 | Unimate | George Devol
1961 | First industrial robot installed | Unimate | George Devol
1963 | First "palletizing" robot 7 | Palletizer | Fuji Yusoki Kogyo
1973 | First robot with six electromechanical axes | Famulus | KUKA Robot Group
1975 | Programmable universal manipulation arm, a Unimation product | PUMA | Victor Scheinman
2000 | Humanoid robot able to walk on two legs and interact with people | ASIMO | Honda Motor Co., Ltd.
Classification of robots
According to their chronology
The following classification is the most common:
1st Generation. Manipulators. They are multifunctional systems with a simple control system, either manual, fixed-sequence, or variable-sequence.
2nd Generation. Learning robots. They repeat a sequence of movements that has previously been executed by a human operator, recorded by means of a mechanical device: the operator performs the required movements while the robot follows and memorizes them.
3rd Generation. Robots with sensorized control. The controller is a computer that executes program commands and sends them to the manipulator so that it performs the necessary movements.
4th Generation. Intelligent robots. Similar to the previous generation, but they also have sensors that send information on the state of the process to the control computer, allowing intelligent decision-making and real-time process control.
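The feedback idea behind the 4th generation can be sketched as a closed sense–decide–act loop, where sensor readings determine the next command sent to the actuators. The following is a minimal illustrative sketch, not a real robot API; all names (`control_step`, the temperature values) are hypothetical.

```python
# Toy sketch of a 4th-generation control loop: sensors feed process state
# back to the control computer, which decides the next command.
# All names and values are hypothetical illustrations.

def control_step(state: dict, target_temp: float) -> str:
    """Decide a command from the sensed process state (toy example)."""
    if state["temperature"] > target_temp + 1.0:
        return "cool"
    if state["temperature"] < target_temp - 1.0:
        return "heat"
    return "hold"

# Closed loop: sense -> decide -> act, repeated in real time.
sensed = {"temperature": 23.5}
command = control_step(sensed, target_temp=21.0)
print(command)  # the controller would send "cool" to the actuators
```

In a real robot the decision logic is far richer, but the structure is the same: the sensors close the loop that the 3rd generation leaves open.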
According to their architecture
The architecture, defined by the general configuration of the robot, may be metamorphic. The concept of metamorphism, of recent introduction, was developed to increase the functional flexibility of a robot by changing its own configuration. Metamorphism admits several levels, from the most elementary (change of tool or end effector) to the most complex, such as the change or alteration of some of its structural elements or subsystems. The devices and mechanisms that can be grouped under the generic name of robot are, as noted, very diverse, and it is therefore difficult to establish a classification coherent enough to withstand rigorous critical analysis. The subdivision of robots on the basis of their architecture is made into the following groups: polyarticulated, mobile, androids, zoomorphic, and hybrids.
Polyarticulated robots. This group contains robots of very diverse form and configuration whose common characteristic is being essentially sedentary (although they may exceptionally be guided to make limited displacements), structured to move their terminal elements within a given workspace according to one or more coordinate systems and with a limited number of degrees of freedom. This group includes manipulators, industrial robots, and Cartesian robots, used when a relatively wide or elongated working area must be covered, when acting on objects with a vertical plane of symmetry, or when the floor space occupied must be reduced.
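The notions of workspace and degrees of freedom can be made concrete with forward kinematics. The sketch below computes the end-effector position of a hypothetical planar arm with two revolute joints (two degrees of freedom); the link lengths and joint angles are made-up example values, not any particular robot's parameters.

```python
import math

# Illustration of degrees of freedom for a polyarticulated arm:
# forward kinematics of a hypothetical planar arm with two revolute joints.
# Link lengths l1, l2 and joint angles q1, q2 are example values.

def forward_kinematics(l1: float, l2: float, q1: float, q2: float):
    """End-effector position (x, y) of a 2-link planar arm; angles in radians."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# With both joints at 0, the arm lies fully stretched along the x axis.
x, y = forward_kinematics(1.0, 0.5, 0.0, 0.0)
print(x, y)  # 1.5 0.0
```

Sweeping q1 and q2 over their ranges traces out the arm's workspace, the region its terminal element can reach.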
Mobile robots. Robots with great capacity for displacement, based on carriages or platforms and fitted with a rolling locomotion system. They follow their path by remote guidance or by guiding themselves with the information received from their environment through their sensors. These robots ensure the transport of parts from one point to another in a production line. Guided by tracks materialized as electromagnetic circuits embedded in the floor, or as photoelectrically detected bands, they can even avoid obstacles and exhibit a relatively high level of intelligence.
Androids. Robots that attempt to reproduce the shape and kinematic behavior of the human being. Android devices are currently still very poorly developed and without practical use, intended mainly for study and experimentation. One of the most complex aspects of these robots, and the one on which most work focuses, is bipedal locomotion. Here the main problem is controlling the walking process dynamically and in a coordinated way in real time, while simultaneously maintaining the robot's balance.
Zoomorphic robots. These robots, which in a non-restrictive sense could also include androids, are mainly characterized by locomotion systems that imitate those of various living beings. Despite the morphological disparity of the possible locomotion systems, zoomorphic robots can be grouped into two main categories: non-walking and walking. The group of non-walking zoomorphic robots is still quite undeveloped; experiments have been carried out in Japan with robots based on bevelled cylindrical segments coupled axially to one another and endowed with relative rotational movement. Walking multiped zoomorphic robots are very numerous and are being tested in various laboratories with a view to developing real land vehicles, piloted or autonomous, capable of moving over very rough terrain. These robots will find interesting applications in the field of space exploration and the study of volcanoes.
Hybrid robots. These correspond to robots of difficult classification, whose structure combines some of the structures described above, either by conjunction or by juxtaposition. For example, a segmented articulated body mounted on wheels presents at the same time attributes of mobile robots and of zoomorphic robots.