Welcome to Robotics World

Saturday, 23 May 2015

Robotics Intelligence

The Socially Intelligent Machines Lab at the Georgia Institute of Technology researches new concepts of guided teaching interaction with robots. The aim of the project is for a social robot to learn task goals from human demonstrations without prior knowledge of high-level concepts. These concepts are grounded in low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned using a Bayesian approach. The concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. The results are demonstrated by the robot Curi, who can easily cook pasta.[102]
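To make that pipeline concrete, here is a minimal, purely illustrative sketch (not the lab's actual algorithm): continuous sensor vectors are grounded into discrete "concepts" with a simple unsupervised clusterer, and a posterior over candidate task goals is then updated Bayesian-style from the concepts observed in demonstrations. The priors and concept likelihoods are made-up toy values.

```python
# Illustrative sketch only -- not the Socially Intelligent Machines Lab's
# actual method. Ground continuous sensor data into discrete concepts
# (k-means clustering), then update a Bayesian posterior over task goals.
import numpy as np

def ground_concepts(sensor_data, k=2, iters=20, seed=0):
    """Unsupervised grounding: cluster raw sensor vectors into k concepts."""
    rng = np.random.default_rng(seed)
    centroids = sensor_data[rng.choice(len(sensor_data), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(sensor_data[:, None] - centroids[None], axis=2),
            axis=1)
        # Recompute each centroid from its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = sensor_data[labels == j].mean(axis=0)
    return centroids, labels

def update_goal_posterior(prior, likelihoods, observed_concepts):
    """Bayesian update: P(goal | demos) ∝ P(goal) · Π P(concept | goal)."""
    posterior = prior.copy()
    for c in observed_concepts:
        posterior *= likelihoods[:, c]
    return posterior / posterior.sum()

# Toy demonstration: 2-D "sensor" samples, two candidate goals.
data = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
_, concepts = ground_concepts(data, k=2)
prior = np.array([0.5, 0.5])                      # two candidate goals
likelihoods = np.array([[0.8, 0.2], [0.3, 0.7]])  # P(concept | goal), assumed
print(update_goal_posterior(prior, likelihoods, concepts[:5]))
```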

Control

The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted, and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure.
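The three phases map naturally onto a sense-process-act loop. The sketch below is a hypothetical illustration, not a real robotics API; the Robot class and all of its method names are invented for the example.

```python
# A minimal sketch of the three-phase control loop described above
# (perception, processing, action). All names here are hypothetical
# placeholders, not a real robotics API.
import time

class Robot:
    def read_sensors(self):
        """Perception: return raw measurements (e.g. a joint angle)."""
        return {"joint_angle": 0.42}

    def compute_commands(self, readings, target):
        """Processing: turn sensor readings into actuator signals.
        Here, a trivial proportional correction toward the target angle."""
        error = target - readings["joint_angle"]
        return {"joint_motor": 0.5 * error}

    def drive_actuators(self, commands):
        """Action: send the computed signals to the motors."""
        print(f"motor command: {commands['joint_motor']:+.3f}")

robot = Robot()
for _ in range(3):                                        # one control tick
    readings = robot.read_sensors()                       # perception
    commands = robot.compute_commands(readings, target=1.0)  # processing
    robot.drive_actuators(commands)                       # action
    time.sleep(0.01)                                      # fixed period
```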
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators.
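As a hedged illustration of that reactive pipeline, the following sketch fuses two noisy position readings by inverse-variance weighting (a simple form of sensor fusion) and feeds the estimate to a proportional controller, one of the most basic techniques from control theory. The noise levels, the gain, and the gripper scenario are all assumed for the example.

```python
# Sketch: sensor fusion followed by proportional control for the
# immediate task "move the gripper toward x_target". Illustrative only.
import random

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of redundant position readings."""
    weights = [1.0 / v for v in variances]
    return sum(w * m for w, m in zip(weights, measurements)) / sum(weights)

def p_controller(estimate, target, gain=2.0):
    """Proportional control: the command is proportional to the error."""
    return gain * (target - estimate)

true_position, x_target = 0.30, 1.00
for step in range(5):
    # Two noisy sensors observe the gripper position.
    m1 = true_position + random.gauss(0, 0.05)   # e.g. encoder-derived
    m2 = true_position + random.gauss(0, 0.10)   # e.g. camera-derived
    estimate = fuse([m1, m2], [0.05**2, 0.10**2])
    velocity_cmd = p_controller(estimate, x_target)
    true_position += 0.1 * velocity_cmd          # integrate one tick
    print(f"step {step}: estimate={estimate:.3f} cmd={velocity_cmd:+.3f}")
```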
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models try to represent the robot, the world, and how they interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
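Here is a compact sketch of that "map, then plan" idea: a hand-made occupancy grid stands in for a mapped world, and a plain breadth-first search finds a collision-free path. Real planners (A*, RRT, and so on) are far more sophisticated, but the structure is the same.

```python
# Sketch of planning over a mapped world: breadth-first search on a
# toy occupancy grid. '#' marks an obstacle cell.
from collections import deque

GRID = ["....#",
        "..#.#",
        "..#..",
        "#.#..",
        "....."]

def plan(start, goal):
    """Return a list of grid cells from start to goal, avoiding '#'."""
    rows, cols = len(GRID), len(GRID[0])
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] != '#' and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                              # no collision-free path

print(plan((0, 0), (4, 4)))
```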

Autonomy levels

TOPIO, a humanoid robot, played ping pong at Tokyo IREX 2009.[103]
Control systems may also have varying levels of autonomy.
1.   Direct interaction is used for haptic or tele-operated devices, and the human has nearly complete control over the robot's motion.
2.   Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
3.   An autonomous robot may go for extended periods of time without human interaction. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous, but operate in a fixed pattern.
Another classification takes into account the interaction between human control and the machine motions (a minimal code sketch follows the list).
1.   Teleoperation. A human controls each movement; each machine actuator change is specified by the operator.
2.   Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
3.   Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
4.   Full autonomy. The machine will create and complete all its tasks without human interaction.
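As a minimal sketch of how these four levels might look in software (every name below is hypothetical; the enum simply mirrors the list above):

```python
# Hypothetical dispatch over the four interaction levels listed above;
# the enum names mirror the list, everything else is invented.
from enum import Enum, auto

class ControlMode(Enum):
    TELEOPERATION = auto()   # operator specifies every actuator change
    SUPERVISORY = auto()     # operator gives moves, machine fills in details
    TASK_LEVEL = auto()      # operator names the task, robot manages itself
    FULL_AUTONOMY = auto()   # robot creates and completes its own tasks

def handle(mode: ControlMode, operator_input=None):
    if mode is ControlMode.TELEOPERATION:
        return f"apply raw actuator command: {operator_input}"
    if mode is ControlMode.SUPERVISORY:
        return f"decompose move '{operator_input}' into actuator steps"
    if mode is ControlMode.TASK_LEVEL:
        return f"plan and execute task '{operator_input}' unaided"
    return "generate own tasks; no operator input expected"

print(handle(ControlMode.SUPERVISORY, "raise arm 10 cm"))
```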
