Welcome
To
Robotics World

Friday, 30 September 2016

The First Robot

The First Robot, Created in 400 BCE, Was A Steam-Powered Pigeon

It's easy to assume that robots are a relatively recent invention—but in fact, the history of robotics stretches back well over 2000 years. The first, a steam-powered "pigeon," was created around 400 to 350 BCE by the ancient Greek mathematician Archytas.
Archytas constructed his robo-bird out of wood and used steam to power its movements. The bird, suspended from a pivot bar, was at one point able to fly about 200 meters before it ran out of steam—which makes Archytas' experiment not just the first known robot, but also one of the first recorded instances of a scientist doing research on how birds fly.
The inventor was also a very accomplished philosopher, mathematician, astronomer, commander, statesman, and strategist. Archytas founded mathematical mechanics—what we now call mechanical engineering—and was an elected General for seven consecutive years. This violated the law at the time, but because Archytas never lost a single battle in his time as strategos, the people decided to continue to elect him as the ruler of their city-state anyway.
His mathematical works heavily influenced Plato and Euclid, among others. In geometry, he solved the problem of “doubling the cube,” as proposed by Hippocrates of Chios; he also made great advancements in musical theory, using mathematics to define intervals of pitch in the enharmonic scale in addition to those already known in the chromatic and diatonic scales. And he showed that pitch on a stringed instrument is related to vibrating air.
Archytas was known to be virtuous—so virtuous that his close friend Plato may have used him as his model for the “Philosopher King.” Archytas also seems to have strongly influenced Plato’s political philosophy as shown in “The Republic” and other works, which wrestle with questions such as: “How does a society obtain good rulers like Archytas, instead of bad ones like Dionysius II?”

Sunday, 25 September 2016

8 Main Components of Robots


The structure of a robot is usually mostly mechanical and is called a kinematic chain (it's functionally similar to the skeleton of the human body). The chain is formed of links (its bones), actuators (its muscles), and joints, which can allow one or more degrees of freedom.
Some robots use open serial chains, in which each link connects the one before to the one after it. Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment.
1. Actuation:
Actuation is the "muscles" of a robot, the parts which convert stored energy into movement. The most popular actuators are electric motors.
2. Motors:
The vast majority of robots use electric motors, including brushed and brushless DC motors.
3. Stepper motors:
Stepper motors do not spin freely like DC motors; they rotate in discrete steps, under the command of a controller. This makes them easier to control (see the sketch after this list).
4. Piezo Motors:
A recent alternative to DC Motors are piezo motors or ultrasonic motors. Tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion.
5. Air Muscles:
The air muscle is a simple yet powerful device for providing a pulling force. It behaves in a very similar way to a biological muscle, and it can be used to construct robots with a muscle/skeleton system similar to an animal's.
6. Electroactive polymers:
Electroactive polymers are classes of plastics which change shape in response to electrical stimulation.
7. Elastic Nanotubes:
The absence of defects in nanotubes enables these filaments to deform elastically by several percent.
8. Manipulation:
Robots that work in the real world require some way to manipulate objects: to pick up, modify, destroy, or otherwise have an effect. Thus the hands of a robot are often referred to as end effectors. Most robot arms have replaceable effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator.
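To make item 3 above concrete, here is a minimal sketch of how a controller commands a stepper motor through its discrete steps. It assumes a Raspberry Pi with the RPi.GPIO library; the pin numbers, step delay, and steps-per-revolution figure are illustrative rather than taken from any particular motor.

```python
# A minimal full-step driver for a 4-phase stepper motor (a sketch).
# Assumes a Raspberry Pi with the RPi.GPIO library; pin numbers, the
# step delay, and the 200-steps-per-turn figure are illustrative only.
import time
import RPi.GPIO as GPIO

PINS = [17, 18, 27, 22]      # hypothetical driver inputs for coils A-D
SEQUENCE = [                 # energize one coil at a time (full stepping)
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

GPIO.setmode(GPIO.BCM)
for pin in PINS:
    GPIO.setup(pin, GPIO.OUT)

def step(n, delay=0.005):
    """Advance the motor by n discrete steps (negative n reverses)."""
    order = SEQUENCE if n >= 0 else SEQUENCE[::-1]
    for i in range(abs(n)):
        for pin, value in zip(PINS, order[i % len(order)]):
            GPIO.output(pin, value)
        time.sleep(delay)    # pause so the rotor can settle on each step

step(200)                    # e.g. one full turn on a 200-step motor
GPIO.cleanup()
```

Because every call moves the rotor a known, fixed angle, the controller always knows roughly where the shaft is pointing without any feedback sensor, which is exactly why steppers are easier to control.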

Thursday, 22 September 2016

How do robots work?


If you had a big enough construction set with enough wheels, gears, and other bits and bobs, and a limitless supply of electronic components, could you bolt together a living, breathing, walking, talking robot as good as a human in every way?
That might sound like one question, but it's really several. First, there's the matter of whether it's technically possible to build a robot that compares with a human. But there's also a much bigger question of why you'd want to do that and whether it's even a useful thing to do. When humans can reproduce so easily, why do we want to create clunky mechanical replicas of ourselves? And if there really is a good reason for doing so, what's the best way to go about it? In this article, we'll be taking a detailed look at what robots are, how they're designed, and some of the things they can do for us.
Photo: Our friends electric. Will robots replace people in the future? Or will people and machines merge into flesh-machine hybrids that combine the best of both worlds? For all we know, Octavia (pictured here) is pondering these questions right now. She's an advanced social robot who can scuttle around on her wheels, pick up objects, and pull a variety of emotional faces. Her biggest challenge so far has been helping to put out fires.

Imaginary friends

Close your eyes and think "robot." What picture leaps to mind? Most likely a fictional creature like R2-D2 or C-3PO from Star Wars. Very likely a humanoid—a humanlike robot with arms, legs, and a head, probably painted metallic silver. Unless you happen to work in robotics, I doubt you pictured a mechanical snake or a clockwork cockroach, a bomb disposal robot, or a Roomba robot vacuum cleaner.
What you pictured, in other words, would have been based more on science fiction than fact, more on imagination than reality. Where the sci-fi robots we see in movies and TV shows tend to be humanoids, the humdrum robots working away in the world around us (things like robotic welder arms in car-assembly plants) are much more functional, much less entertaining. For some reason, sci-fi writers have an obsession with robots that are little more than flawed, tin-can, replacement humans. Maybe that makes for a better story, but it doesn't really reflect the current state of robot technology, with its emphasis on developing practical robots that can work alongside humans.

How do you build a robot?

A grey and red plastic humanoid robot built with a construction set.
If robots like C-3PO really did exist, how would anyone ever have developed them? What would it have taken to make a general-purpose robot similar to a human?
It's easy enough to write entertaining stories about intelligent robots taking control of the planet, but just try developing robots like that yourself and see how far you get. Where would you even start? Actually, where any robot engineer starts, by breaking that one big problem into smaller and more manageable chunks. Essentially, there are three problems we need to solve: how to make our robot 1) sense things (detect objects in the world), 2) think about those things (in a more or less "intelligent" way, which is a tricky problem we'll explore in a moment), and then 3) act on them (move or otherwise physically respond to the things it detects and thinks about).
In psychology (the science of human behavior) and in robotics, these things are called perception (sensing), cognition (thinking), and action (moving). Some robots have only one or two. For example, robot welding arms in factories are mostly about action (though they may have sensors), while robot vacuum cleaners are mostly about perception and action and have no cognition to speak of. As we'll see in a moment, there's been a long and lively debate over whether robots really need cognition, but most engineers would agree that a machine needs both perception and action to qualify as a robot.
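One way to see how the three pieces fit together is as a loop that runs over and over: sense, think, act, repeat. Here is a bare-bones sketch; the sensor and motor functions are made-up placeholders for whatever hardware interface a real robot would have.

```python
# A bare-bones sense-think-act loop, the skeleton of many robot control
# programs. read_distance_sensor() and set_wheel_speeds() are invented
# placeholders standing in for real hardware drivers.
import random
import time

def read_distance_sensor():
    """Perception: pretend to measure the distance to an obstacle (cm)."""
    return random.uniform(5, 200)

def set_wheel_speeds(left, right):
    """Action: pretend to drive two wheel motors (-1.0 .. +1.0)."""
    print(f"wheels: left={left:+.1f} right={right:+.1f}")

while True:
    distance = read_distance_sensor()   # 1. sense
    if distance < 30:                   # 2. think (a one-line "brain")
        set_wheel_speeds(-0.5, 0.5)     # 3. act: spin away from the obstacle
    else:
        set_wheel_speeds(1.0, 1.0)      # 3. act: drive straight ahead
    time.sleep(0.1)                     # repeat about 10 times a second
```

A robot vacuum cleaner runs a loop like this with almost nothing in the "think" step; an autonomous car runs the same loop with an enormous amount in it.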
Photo: Is this a robot? It certainly looks like one, but it has no senses of any kind, no electronic or mechanical onboard computer for thinking, and its limbs have no motors or other means to move themselves. With no perception, cognition, or action, it cannot be a robot—even if it looks like one.

Perception

We experience the world through our five senses, but what about robots? How do they get a feel for the things around them?

Vision

Humans are seeing machines: estimates vary wildly, but there's general agreement that about 25–60 percent of our cerebral cortex is devoted to processing images from our eyes and building them into a 3D visual model of the world. Now machine vision is really quite simple: all you need to do to give a robot eyes is to glue a couple of digital cameras to its head. But machine perception—understanding what the camera sees (a pattern of orange and black), what it represents (a tiger), what that representation means (the possibility of being eaten), and how relevant it is to you from one minute to the next (not at all, because the tiger is locked inside a cage)—is almost infinitely harder.
Eye with a simulated laser flash
Like other problems in robotics, tackling perception as a theoretical issue ("how does a robot see and perceive the world?") is much harder than approaching it as a practical problem. So if you were designing something like a Roomba vacuum cleaning robot, you could spend a good few years agonizing over how to give it eyes that "see" a room and navigate around the objects it contains. Or you could forget all about something so involved as seeing and simply use a giant, pressure-sensitive bumper. Let the robot scrabble around until the bumper hits something, then apply the brakes and tell it to creep away in a different direction.
Perception, in other words, doesn't have to mean vision. And that's a very important lesson for ambitious projects such as self-driving (robotic) cars. One way to build a self-driving car would be to create a super-lifelike humanoid robot and stick it in the driving seat of an ordinary car. It would drive in exactly the same way as you or I might do: by looking out through the windshield (with its digital camera eyes), interpreting what it sees, and controlling the car in response with its hands and feet. But you could also build a self-driving car an entirely different way without anyone in the driving seat—and this is how most robotics engineers have approached the problem. Instead of eyes, you'd use things like GPS satellite navigation, sonar, radar, infrared detectors, accelerometers—and any number of other sensors to build up a very different kind of picture of where the car is, how it's proceeding in relation to the road and other cars, and what you need to do next to keep it safely in motion. Drivers see with their eyes; self-driving cars see with their sensors. A driver's brain builds a moving 3D model of the road; self-driving cars have computers, surfing a flood of digital data quite unlike a human's mental model. That doesn't mean there's no similarity at all. It's quite easy to imagine a neural network (a computer simulation of interconnected brain cells that can be trained to recognize patterns) processing information from a self-driving car's sensors so the vehicle can recognize situations like driving behind a learner, spotting a looming emergency when children are playing ball by the side of the road, and other danger signs that experienced drivers recognize automatically.
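As a toy illustration of "seeing with sensors," here is a sketch of a complementary filter, one of the simplest ways to fuse two imperfect sensors: a smooth but drifting motion estimate (like one integrated from an accelerometer) and a noisy but unbiased position fix (like GPS). Everything here is simulated; a real self-driving car fuses far more sensors than this.

```python
# Toy complementary filter: blend a drifting dead-reckoning estimate
# with noisy GPS-like fixes. All readings are simulated for illustration.
import random

ALPHA = 0.98              # trust placed in the smooth, drifting source
truth = 0.0               # where the car actually is (meters)
dead_reckoning = 0.0      # estimate from integrated motion alone (drifts)
fused = 0.0               # complementary-filter estimate

for _ in range(100):
    truth += 1.0                            # the car really moves 1 m per tick
    measured_motion = 1.02                  # integrated motion, slightly off
    dead_reckoning += measured_motion
    gps = truth + random.gauss(0.0, 2.0)    # noisy but unbiased position fix
    fused = ALPHA * (fused + measured_motion) + (1 - ALPHA) * gps

print(f"truth: {truth:.1f} m")
print(f"dead reckoning alone: {dead_reckoning:.1f} m (error grows forever)")
print(f"fused estimate: {fused:.1f} m (GPS keeps the drift in check)")
```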
Photo: Do robots really need eyes, ears, and noses—or can they sense the world in better ways than humans?

Hearing

Just as seeing is a misnomer when it comes to machine vision, so the other human senses (hearing, smell, taste, and touch) don't have exact replicas in the world of robotics. Where a person hears with their ears, a robot uses a microphone to convert sounds into electrical signals that can be digitally processed. It's relatively straightforward to sample a sound signal, analyze the frequencies it contains (for example, using a mathematical descrambling trick called a Fourier transform), and compare the frequency "fingerprint" with a list of stored patterns. If the frequencies in your signal match the pattern of a human scream, it's a scream you're hearing—even if you're a robot and a scream means nothing to you.
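Here is a sketch of that recipe using NumPy's FFT. The fingerprint table is invented for illustration; a real recognizer would compare whole spectra over time, not a single peak.

```python
# Match a sound's dominant frequency against stored "fingerprints".
# The reference table is made up; real systems compare whole spectra.
import numpy as np

SAMPLE_RATE = 8000                 # samples per second
FINGERPRINTS = {440.0: "tuning fork", 1000.0: "alarm beep", 3000.0: "scream"}

def dominant_frequency(signal):
    """Return the strongest frequency in the signal, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

# Simulate one second of a 1000 Hz tone with a little background noise.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 1000.0 * t) + 0.1 * np.random.randn(len(t))

peak = dominant_frequency(signal)
best = min(FINGERPRINTS, key=lambda f: abs(f - peak))
print(f"dominant frequency: {peak:.0f} Hz -> sounds like: {FINGERPRINTS[best]}")
```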
There's a big difference between hearing simple sounds and understanding what a voice is saying to you, but even that problem isn't beyond a machine's capability. Computers have been successfully turning human speech into recognizable text for decades; even my old PC, with simple, off-the-shelf, voice recognition software, can listen to my voice and faithfully print my words on the screen. Interpreting the meaning of words is a very different thing from turning sounds into words in the first place, but it's a start.

Smell

You might think building a robotic nose is more of a technical challenge, but it's just a matter of building the right sensor. Smell is effectively a chemical recognition system: molecules of vapor from a bacon butty, a yawning iris, or the volatile liquids in perfume drift into our noses and bind onto receptive cells, stimulating them electro-chemically. Our brains do the rest. The way our brains are built explains some of their highly unusual features, such as why smells are powerful memory triggers. (The answer is simply because the bits of our brain that process smells are physically very close to two other key bits of our brains, namely the hippocampus, a kind of "crossroads" in our memory, and the amygdala, where emotions are processed.)
So, in the words of the old joke, if robots have no nose, how do they smell? We have plenty of machines that can recognize chemicals, including mass spectrometers and gas chromatographs, but they're elaborate, expensive, and unwieldy; not the sorts of things you could easily stuff up a nose. Nevertheless, robot scientists have successfully built simpler electrochemical detectors that resemble (at least, conceptually) the way the human nose converts smells into electrical signals. Once that job's done, and the sensor has produced a pattern of digital data, all you're left with is a computational problem: not "what does this smell like?", but "what does this data pattern represent?" It's exactly like seeing or hearing: once the signals have left your eyes, ears, or nose, and reached your brain, the problem is simply one of pattern recognition.
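Here is a sketch of that final pattern-recognition step for a hypothetical four-sensor "electronic nose": each known smell is stored as a fingerprint vector, and an unknown sample takes the label of the nearest stored fingerprint. All the numbers are invented.

```python
# Nearest-neighbor matching for a hypothetical "electronic nose": each
# smell is a vector of readings from four chemical sensors, and an
# unknown sample is labeled with the closest stored fingerprint.
import math

KNOWN_SMELLS = {
    "coffee":  [0.9, 0.1, 0.4, 0.2],
    "bacon":   [0.3, 0.8, 0.6, 0.1],
    "perfume": [0.1, 0.2, 0.1, 0.9],
}

def classify(sample):
    """Return the stored smell whose fingerprint is nearest to the sample."""
    return min(KNOWN_SMELLS,
               key=lambda name: math.dist(KNOWN_SMELLS[name], sample))

print(classify([0.32, 0.75, 0.55, 0.15]))   # -> bacon
```

Exactly the same nearest-match idea works whether the data pattern came from a camera, a microphone, or a chemical sensor.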

Other senses

Brain-controlled prosthetic hand pictured with its realistic cosmetic skin covering alongside.
Although robots have had arms and primitive grabber claws for over half a century, giving them anything like a working human hand has proved far more of a challenge. Imagine a robot that could play Beethoven sonatas like a concert pianist, perform high-precision brain surgery, carve stone like a sculptor, or do a thousand other things we humans can do with our touch-sensitive hands. As the New York Times reported in September 2014, building a robot with human touch has suddenly become one of the most interesting problems in robotics research.
Photo: Engineers can build amazingly realistic prosthetic hands. If we could modify these things with touch sensors, maybe they could double up as working hands for robots? Photo by Sarah Fortney courtesy of US Navy.
Taste, too, boils down simply to using appropriate chemical sensors. If you want to build a food-tasting robot, a pH meter would be a good starting point, perhaps with something to measure viscosity (how easily a fluid flows). Then again, if you've already given your robot eyes and a nose, that would go a long way to giving it taste, because the look and smell of food play a big part in that.
One of the misleading things about trying to develop a humanoid robot is that it tricks us into replicating only the five basic human senses—and one of the great things about robots is that they can use any kind of sensor or detector we can lay our hands on. There's no need at all for robot vision to be confined to the ordinary visible spectrum of light: robots could just as easily see X rays or infrared (with heat detectors). Robots could also navigate like homing pigeons by following Earth's magnetic field or (better still) by using GPS to track their precise position from one moment to the next. Why limit ourselves to human limitations?

Cognition

Thinking about thinking is a recipe for doing not much at all—other than thinking; that's the occupational hazard of philosophers. And if that sounds fatuous, consider all the books and scientific articles that have been published on artificial intelligence since British computer scientist Alan Turing developed what is now called the Turing test (a way of establishing whether a machine is "intelligent") in 1950. Psychologists, philosophers, and computer scientists have been wrestling with definitions of "intelligence" ever since. But that hasn't necessarily got them any nearer to developing an intelligent machine.
"No cognition. Just sensing and action. This is all I would build, and completely leave out... the intelligence of artificial intelligence."
Rodney A. Brooks, Robot: The Future of Flesh and Machines, p. 36.
As British robot engineer Kevin Warwick pointed out about 20 years ago in his book March of the Machines, "intelligence" is an inherently human concept. Just because there's a lofty view from where people happen to be standing, it doesn't follow that there aren't better views from elsewhere. Human intelligence tests measure your ability to do well at human intelligence tests—and don't necessarily translate into an ability to do useful things in the real world. Developing computer-controlled machines that humans would regard as intelligent is not really the goal of modern robotics research. The real objective is to produce millions of machines that can work effectively alongside billions of humans, either augmenting our abilities or doing things we simply don't want to do; we don't automatically need "intelligence" for that. According to pragmatic engineers like Warwick, robots should be assessed on their own terms against the specific tasks they're designed for, not according to some fuzzy, human concept of "intelligence" designed to flatter human self-esteem. Is a robot intelligent? Who cares, as long as it does the job we need it to do, maybe better than a person would.

Emotional intelligence

Emo emotional robot with a computer-controlled mouth and eyes
Whether they're deemed intelligent or not, computers and robots are quintessentially logical and rational where humans are more emotional and inconsistent. Developing robots that are emotional—particularly ones that can sense and respond to human emotions—is arguably much more important than making intelligent machines. Would you rather your coworkers were cold, logical, hyperintelligent beings who could solve every problem and never make a mistake? Or friendly, easy-going, pleasant to pass time with, and fallibly human? Most people would probably choose the latter, simply because it makes for more effective teamwork—and that's how most of us generally get things done. So developing a likeable robot that has the ability to listen, smile, tell jokes round the water cooler, and sympathize when your life takes a turn for the worse is arguably just as important as making one that's clever. Indeed, one of the main reasons for developing humanoid robots is not to replicate human emotions but to make machines that people don't feel scared or threatened by—and building robots that can make eye contact, chuckle, or smile is a very effective way to do that.
Emotion is often in the eye of the beholder—especially when it comes to humans and machines. When people look at cars, they tend to see faces (two headlights for eyes, a radiator grille for a mouth) or link particular emotions with certain colors of paintwork (a red car is racy, a black one is dark and mysterious, a silver one is elegant and professional). In much the same way, people project feelings onto robots simply because of how they look or move: the robot has no emotions; the emotions it conjures up are entirely in your mind. One of the world's leading robot engineers, MIT's Rodney Brooks, tells a story of how he was involved in the development of a robotic baby toy so lifelike that it provoked sincere feelings of attachment in the adults and children who looked after it. Kismet, an "emotional robot" developed in the late 1990s by Cynthia Breazeal, one of his students, listens, coos, and pays attention to humans in a startlingly babylike way—to the extent that people grow very attached to it, as a parent to a child. Again, the robot has nothing like human emotions; it simply provokes an authentic emotional reaction in humans and we interpret our own feelings as though the robot were emotional too. In other words, we might redefine the problem of developing emotional robots as making machines that humans really care about.
Photo: Robots are designed with friendly faces so humans don't feel threatened when they work alongside them. This one is called Emo and it lives at Think Tank, the science museum in Birmingham, England. Its digital-camera eyes help it to learn and recognize human expressions, while the rubber-tube lips allow it to smile and make expressions of its own.

Action

How a robot moves and responds to the world is the most important thing about it. Intelligent machines that sense and think but don't move or respond hardly qualify as robots; they're really just computers. Action is a much more complex problem than it might seem, both in humans and machines. In humans, the sheer number of muscles, tendons, bones, and nerves in our limbs makes coordinated, accurate body control a logistical nightmare. There's nothing easier than lifting your hand to scratch your nose—your brain makes it seem so easy—but if we try to replicate this sort of behavior in a machine, we instantly realize how difficult it is. That's one reason why, until relatively recently, virtually all robots moved around on wheels rather than fully articulated human legs (wheels are generally faster and more reliable, but hopeless at managing rough terrain or stairs).
Military Big Dog robot running through a field alongside a soldier.
Just because a robot has to move, it doesn't follow that it has to move like a person. Factory robots are designed around giant electric, hydraulic, or pneumatic arms fitted with various tools geared to specific jobs, like painting, welding, or laser-cutting fabric. No human can swivel their wrist through 360 degrees, but factory robots can; there's simply no good reason to be bound by human limitations. Indeed, there's no reason why robots have to act (move) like humans at all. Virtually every other animal you can think of, from salamanders and sharks to snakes and turkeys, has been replicated in robot form: it often makes much more sense for robots to scuttle round like animals than prance about like people. By the same token, making "emotional robots" (ones to which people feel emotions) doesn't necessarily have to mean building humanoids. That explains the instant success of Sony's robotic AIBO dogs, launched in 1999. They were essentially robotic pets onto which people projected their need for companionship.
Photo: Robots don't have to look or work like humans. This is BigDog, the infamous robotic "pack-mule" designed for the US military by Boston Dynamics. Where most robots are electrically powered, this one is driven by four hydraulic legs powered by a small internal combustion engine from a go-kart. In theory, that gives it a big advantage over robots powered by batteries (it should be able to go much further); in practice, its official range is just 32km (20 miles). Photo by Kyle J. O. Olson courtesy of US Marine Corps.
Human perception and cognition are hard things for robots to emulate, partly because it's easy to get bogged down in abstract and theoretical arguments about what these terms actually mean. Action is a much simpler problem: movement is movement—we don't have to worry about defining it, the same way we worry over "intelligence," for example. Ironically, though we admire the remarkable grace of a ballet dancer, the leaps and bounds of a world-class athlete, or the painstaking care of a traditional craftsman, we take it for granted that robots will be able to zing about or make things for us with even greater precision. How do they manage it? Some use hydraulics. Most, however, rely on relatively simple, much more affordable electric stepper motors and servo motors, which let a robot spin its wheels or swing its limbs with pinpoint control. Unlike humans, who get tired and make mistakes, robot moves are reliably repeatable; robots get it right every time.

What are robots actually like?

Real-world robots fall into two broad categories. Most are task-specific robots, designed to do one job and repeat it over and over again. Hardly any are general-purpose robots capable of doing a wide variety of jobs (in the way that humans are general-purpose flesh-and-blood machines). Indeed, those multi-purpose robots are still pretty much confined to robotics labs.

Robot arms

Industrial robot arm welding a Jaguar car
Riveting and welding, swinging and sparking—most of the world's robots are high-powered arms, like the ones you see in car factories. Although they became popular in the 1970s, they were invented in the 1950s and first widely deployed in the 1960s by companies such as General Motors. The original robot arm, Unimate, made its TV debut on the Johnny Carson show back in 1966. Modern robot arms have more degrees of freedom (they can be turned or rotated in more ways) and can be controlled much more precisely.
Photo: It might never have occurred to you that a robot built the car you're driving today. This Jaguar assembly robot (a Kawasaki ZX165U) is a demonstration model at Think Tank, the Birmingham science museum. It can lift loads of up to 300kg and reach up to 3.5m (11.5ft)—quite a bit more than a human arm!
Robot playing drums
Whether robot arms really qualify as robots is a moot point. Many of them lack much in the way of perception or cognition; they're simply machines that repeat preprogrammed actions. Fast, strong, powerful, and dangerous, they're usually fenced off in safety cages and seldom work anywhere near people (a recent article in the New York Times noted that 33 people have been killed by robots in the United States during the last 30 years). Rodney Brooks has recently reinvented the whole idea of the robot arm with an affordable ($25,000), easy-to-use, user-friendly industrial robot called Baxter. It can be "trained" (Brooks avoids the word "reprogrammed") simply by moving its limbs, and it has enough onboard sensory perception and cognition to work safely alongside humans, sharing (for example) exactly the same assembly line.
Photo: Robot arms are versatile, precise, and—unlike human factory workers—don't need rest, sleep, or holiday. But "all work and no play..." So this one is learning to play drums for a change, at Think Tank, the Birmingham science museum.

Remote-controlled (teleoperated) machines

Orange bomb disposal robot EODMU8 carrying a suspect bomb to safety
Some of the machines we think of as robots are nothing of the kind: they merely appear robotic (and intelligent) because humans are controlling them remotely. Bomb disposal robots work this way: they're simply robot trucks with cameras and manipulator arms operated by joysticks. Until recently, space-exploration robots were designed much the same way, though autonomous rovers (with enough onboard cognition to control themselves) are now commonplace. So while 1997's Mars Sojourner (from the Pathfinder Mission) was semi-autonomous and largely remote-controlled from Earth, the much bigger and newer Mars Spirit and Opportunity rovers (launched in 2003) are far more autonomous.
Photo: Right: Bomb-disposal robots are almost always remote-controlled. This one, Explosive Ordnance Disposal Mobile Unit (EODMU) 8, can pick up suspect devices with its jaw and carry them to safety. Photo by Joe Ebalo courtesy of US Navy.
Photo: Left: NASA's FIDO was one of its first semi-autonomous robot rovers. Onboard cameras allow space scientists to control it remotely from Earth. Photo courtesy of NASA JPL Planetary Robotics Laboratory and NASA on the Commons.
NASA FIDO robot undergoing field tests to simulate driving on Mars.

Semi-autonomous household robots

If you've got a robot in your home, most likely it's a robot vacuum cleaner or lawn mower. Although these machines give the impression that they're autonomous and semi-intelligent, they're much simpler (and less robotic) than they appear. When you switch on a Roomba, it doesn't have any idea about the room it's cleaning—how big it is, how dirty it is, or the layout of the furniture. And, unlike a human, it doesn't attempt to build itself a mental model of the room as it's going along. It simply bounces off things randomly and repeatedly, working on the (correct) assumption that if it does this for long enough, the room will be fairly clean in the end. There are a few extra little tweaks, including a spiraling, on-the-spot cleaning mode that kicks in when a "dirt detect" sensor finds concentrated debris, and the ability to follow edges. But essentially, a Roomba cleans at random. Robot lawn mowers work in a somewhat similar way (sometimes with a tether to stop them straying too far).
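You can test that "long enough" assumption with a toy simulation: let a robot take random steps on a grid, bouncing off the walls, and watch the fraction of visited cells climb. The grid size and step count here are arbitrary.

```python
# Toy simulation of random-bounce cleaning: a robot wanders a grid at
# random and we measure how much of the "room" it has covered.
import random

SIZE = 20                                  # the room is a 20 x 20 grid
x, y = SIZE // 2, SIZE // 2                # start in the middle
visited = {(x, y)}

for step in range(1, 20001):
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    nx, ny = x + dx, y + dy
    if 0 <= nx < SIZE and 0 <= ny < SIZE:  # move only if we stay in the room
        x, y = nx, ny
        visited.add((x, y))
    if step % 5000 == 0:
        print(f"after {step} steps: {100 * len(visited) / SIZE**2:.0f}% cleaned")
```

Run it and the coverage creeps toward 100 percent with no map and no memory, which is exactly the bet the Roomba makes.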

General-purpose robots

Although advanced robots like Baxter can be trained to do many different things, they're still essentially single-domain machines. Whether they're picking out badly formed machine parts for quality control or shifting boxes from one place to another, they're designed only to work on factory floors. We still don't have a robot that can make the breakfast, take the kids to school, drive itself to work somewhere else, come back home again, clean the house, cook the dinner, and put itself on recharge—unless you count your husband or wife.
Back in the 1990s, when Kevin Warwick wrote his bestselling book March of the Machines, building intelligent, autonomous, general-purpose robots was considered an overly ambitious research goal. Engineers like Warwick typified a "hands-on" alternative approach to robotics, where grand plans were put aside and robots simply evolved as their creators figured out better ways of building robots with more advanced perception, cognition, and action. It's more like robot evolution, working from the bottom up to develop increasingly advanced creatures, than any sort of top-down approach that might be conceived by a kind of robot-world equivalent of God.
Roll time forwards, however, and much has changed. Although engineers like Kevin Warwick and Rodney Brooks are still champions of the pragmatic, bottom-up, minimal-cognition approach, elsewhere, general-purpose autonomous robots are making great strides forward—often literally, as well as metaphorically. The US Defense Department's research wing, DARPA, has sponsored competitions to develop humanoid robots that can cope with a variety of tricky emergency situations, such as rescuing people from natural disasters. (DARPA claims the intention is humanitarian, but similar technology seems certain to be used in robotic soldiers.) Thanks to video sites such as YouTube, robots like these, which would once have been top-secret, have been "growing up in public"—with each new incarnation of the stair-stomping, chair-balancing, car-driving robots instantly going viral on social media.
The Carnegie Mellon Red Team self-driving Sandstorm car.

Self-driving cars

Self-driving cars are a different flavor of general-purpose, autonomous robot. But they've yet to catch the public's imagination in quite the same way, perhaps because they've been developed more quietly, even secretly, by companies such as Google. Now you could argue that there's nothing remotely general-purpose about driving a car: it involves a robot operating successfully in a single domain (the highway) in just the same way as a Baxter (on the factory floor) or a Roomba (cleaning your home). But the sheer complexity of driving—even humans take years to properly master it—makes it, arguably, as much of a general-purpose challenge as the one the DARPA robots are facing. Think of all the different things you have to learn as a driver: starting off, stopping at a signal, turning a corner, overtaking, parallel parking, slowing down when the car in front indicates, emergency stops... to say nothing of driving at daytime or night, in all kinds of weather, on every kind of country road and superhighway. Maybe it would be easier just to stick a humanoid robot in the driving seat after all.
Photo: Sandstorm: A prototype self-driving car developed by the Red Team from Carnegie Mellon University. Based on a 1986 Hummer, it figures out where it's going using a variety of scanners, including a giant roof-mounted LIDAR laser (round, top) and a radar scanner (white box mounted above the front fender). There are also GPS sensors at the back of the roof and digital cameras giving more conventional vision. Photo courtesy of Dan Homerick published on Flickr under a Creative Commons Licence.

Our robot future

There's no unknowing the things we learn. Technologies cannot be uninvented. The march of the robots is unstoppable—but quite where they're marching to, no one yet knows. Futurologists like Raymond Kurzweil believe humans and machines will merge after we reach a point called the singularity, where vastly powerful machines become more intelligent than people. Humans will download their minds to computers and zoom into the future, not in the "bodiless exultation" of cyberspace (as William Gibson once put it) but in a steel and plastic doppelganger: a machine-body powered by the immortal essence of a human mind.
More pragmatic, less dramatic scientists such as Rodney Brooks see a quieter form of evolution where the last few decades of robotic technology begin to augment what millions of years of natural selection have already cobbled together. Brooks argues that we've been on this path for years, with advanced prosthetic limbs, heart pacemakers, cochlear implants for deaf people, robot "exoskeletons" that paralyzed people can slip over their bodies to help them walk again, and (before much longer) widely available artificial retinas for the blind. There will be no revolutionary jump from human to robot but a smarter, smoother transition from flesh machines to hybrids that are part human and part robot. Will robots take over from people? Not according to Brooks: "Because there won't be any us (people) for them (pure robots) to take over from... We (the robot people) will be a step ahead of them (the pure robots). We won't have to worry about them taking over."

A brief history of robots

Grey NASA Unimate/PUMA robot arm
Photo: PUMA is one of the world's best-known robot arms, developed from Vic Scheinman's Vicarm in 1978. Photo courtesy of NASA Ames Research Center.
When robots look back on their lives, what milestones spring to their computerized minds? Here are some of the key moments in the long and continuing history of robotkind!
  • 1st century CE: Ancient Greeks invent automata (self-controlled machines). Hero of Alexandria uses hydraulics, pneumatics, and steam power to construct all kinds of automatic machines, from self-closing doors to a primitive robotic cart.
  • 1739: French inventor Jacques de Vaucanson builds an elaborate mechanical duck with a working digestion system that can eat and produce "feces."
  • 1818: Mary Shelley's novel Frankenstein raises the terrifying prospect of scientists creating monsters that run out of control—still a major concern when most people think about robots today.
  • 1912: John Hammond, Jr. and Benjamin Miessner build an electric dog that senses and responds to light signals.
  • 1920: Czech playwright Karel Čapek coins the word "robot" in his play R.U.R. (Rossum's Universal Robots).
  • 1927: Fritz Lang's movie Metropolis shows robots in a bleakly dystopian, urban future.
  • 1948: William Grey Walter builds autonomous robot tortoises.
  • 1954: George Devol patents the Programmed Article Transfer (a forerunner of the Unimate industrial robot).
  • 1956: Devol meets physicist Joseph Engelberger and the two discuss working together to develop factory robots. Their efforts ultimately lead to the formation of Unimation, a company that pioneers the manufacture of industrial robots by cooperating closely with companies such as General Motors (GM).
  • 1962: GM installs its first industrial robot at a plant in Trenton, New Jersey.
  • 1964: At Stanford Artificial Intelligence Laboratory (SAIL), PhD student Rodney Schmidt (with help from artificial intelligence pioneer John McCarthy) constructs a self-driving car based on a simple mechanical cart. Initially just remote controlled, it evolves into an autonomous (but very primitive) self-driving car that can follow a painted white line.
  • 1966: Factory robots capture the public imagination after a Unimate appears on the Johnny Carson TV Show, demonstrating how to hit a golf ball and pour a glass of beer.
  • 1967: GM deploys 26 Unimate welding robots at its plant in Lordstown, Ohio, provoking industrial unrest among disgruntled factory workers.
  • 1968/69: Vic Scheinman develops The Stanford Arm, an advanced, computer-controlled robot arm at SAIL.
  • 1972: Ira Levin's bestselling satirical novel The Stepford Wives imagines a world where independent women are quietly replaced by zombie-like robots who happily do their husband's bidding.
  • 1972: British military engineers develop the Wheelbarrow, a remote-controlled robot on tracks that can investigate booby-trapped vehicles, buildings, and packages.
  • 1973: Vic Scheinman starts Vicarm Inc. to manufacture industrial robot arms. In 1977, he sells the design to Unimation.
  • 1978: Unimation develops Scheinman's robot into the PUMA (Programmable Universal Machine for Assembly). Unlike earlier robot arms, which are heavy hydraulic machines, it's compact, light, easy to program, and powered by electric motors.
  • 1997: 40 teams compete in the inaugural Robot World Cup (RoboCup): a soccer competition just for robots.
  • 1998: Reading University robotics Professor Kevin Warwick becomes a cyborg by having robotic circuitry implanted into his body.
  • 1999: Sony introduces the AIBO robot dog, but discontinues production in 2006.
  • 2002: iRobot launches the Roomba robot vacuum cleaner. (The key patents are filed between 2000 and 2002.)
  • 2004: iCub, a European-funded humanoid robot, the size of a small child, is released as an open-source project. Around 30 different iCubs are built by academics and used for researching artificial intelligence and robot emotions.
  • 2004: DARPA launches the Grand Challenge—a competition to encourage engineers to develop self-driving cars.
  • 2005: Boston Dynamics creates BigDog, a computer-controlled robotic "pack mule" designed to carry loads for soldiers. Later military robots include Cheetah, PETMAN, and Atlas.
  • 2012: Rethink Robotics, a company founded by Rodney Brooks, introduces the Baxter factory robot.
  • 2013: The SCHAFT S1 humanoid robot wins the trial stage of the DARPA Robotics Challenge to develop robots for emergency humanitarian work and disaster relief.
  • 2015: Final of the DARPA Robotics Challenge.

Monday, 19 September 2016

Thirteen Advanced Humanoid Robots for Sale


The humanoid robot is a metal and plastic replica of the human body, the most advanced system known to date. We could say that humanoid robots have great potential to become the supreme machine: some futurists expect machine intelligence to surpass human intelligence by around 2030, and robots already have augmented motor capabilities in terms of speed, power, and precision. Initially used in research to understand the human body in detail and eventually to borrow motion and control solutions already engineered by nature, humanoid robots are becoming increasingly present in our lives. Their operating environments are no longer limited to the controlled conditions of laboratories; humanoid robots are now able – to different degrees of course – to tackle a variety of challenges present in the real world. They are already employed for entertainment, assisting the elderly, or keeping an eye on small children. In this article we take a look at humanoid robots for sale on today’s market. Generally the prices are not accessible to everyone, as they start within the five-figure range.

Why would I buy an advanced humanoid robot?

While the tasks and interactions with people that such machines can accomplish have difficulty ratings between medium and low, the human body remains a subject of study for researchers and engineers who develop technologies in the robotics field. An advanced humanoid robot has human-like behavior: it can talk, run, jump, or climb stairs in a very similar way to a human. It can also recognize objects and people and can maintain a conversation. In general, an advanced humanoid robot can perform various activities that are mere reflexes for humans and do not require high intellectual effort.

1. DARwIn-OP (ROBOTIS OP)

DARwIn-OP humanoid robot
DARwIn-OP | Photo: ROBOTIS
DARwIn-OP is a humanoid robot created at Virginia Tech’s Robotics and Mechanisms Laboratory (RoMeLa) in collaboration with Purdue University, the University of Pennsylvania, and Korean manufacturer ROBOTIS. The robot can be used at home, but it is aimed mainly at education and research, since it is a powerful, open platform whose creators encourage developers to build on it and add features.
DARwIn-OP, or ROBOTIS OP first gen, is 45cm tall (almost 18 inches) and has no fewer than 20 DOF, each joint actuated by Dynamixel MX-28 servos. Its brain is a PC sporting an Intel Atom Z530 CPU with 1GB of DDR2 RAM and 4GB of SSD storage, as well as plenty of standard I/O, communication, and peripheral options. Hardware-level management is handled by a CM-730 controller module, which also integrates an inertial measurement unit (IMU), a 3Mbps servo bus, and other interfaces and hardware.
ROBOTIS OP 1st and 2nd gen side by side
ROBOTIS OP 2nd gen is a revision of the original platform, sporting more powerful hardware under the hood, several improvements and features, and a smaller price tag, but almost no differences in exterior design apart from color of course.
The updated CPU is a dual-core 1.6GHz Intel Atom N2600 with 4GB DDR3 RAM and 32GB SSD storage, both of which can be upgraded by the user, plus Gigabit Ethernet and 802.11n WiFi connectivity. Thanks to the improved hardware, the robot can now run not only Linux but also any 32-bit Windows version. There is also a revised, slightly smaller CM-740 hardware controller.
ROBOTIS OP 2 can be bought at around US $9600, about 20 percent cheaper than the first generation.

2. DARwIn Mini (ROBOTIS Mini)

ROBOTIS Mini Humanoid Robot
ROBOTIS Mini Humanoid Robot | Photo: ROBOTIS
The ROBOTIS Mini or DARwIn Mini is a lightweight and of course much smaller humanoid robot kit aimed at makers and hobbyists. The 27cm (10.6 inch) tall robot is completely open source and its parts are 3D printable, making it an ideal and cost-effective development platform.
Its brain is an OpenCM9.04 embedded controller board, which is also compatible with the Arduino IDE. There are also interesting software options available: the R+ mobile app lets you program the robot from an iOS or Android device, while R+ Task and R+ Motion let you program motions or more advanced tasks into it. DARwIn Mini has a very affordable US $499 price tag.

3. NAO Evolution

NAO humanoid robots socializing
NAO humanoid robots socializing | Photo: Aldebaran
NAO Evolution is the fifth iteration of the platform developed by French company Aldebaran Robotics and released in 2014. This 58cm tall robot has 25 DOF and is packed with a wide range of sensors, such as sonar, tactile, and pressure sensors, not to mention cameras and other standard equipment, enabling it to perform highly complex motions and tasks.
NAO is also an open platform for all those who want to make improvements or learn how an advanced robot works in technological terms. It can also be used in education and research, as study material or as a platform for developing a new generation of humanoid robots.
The robot comes with a powerful brain: the main CPU is a 1.6GHz Intel Atom running the NAOqi OS and the associated programming framework, and a second controller handles hardware-level functions. The robot can recognize shapes, people, and voices. Its two HD cameras capture detailed images and perform well even in low-light conditions. To understand what the user is trying to say, NAO relies on speech recognition technology from Nuance that translates sounds into robot commands.
NAO Evolution is available for around US $7500, a price tag almost half of the previous generation initial release price.

4. Pepper

Pepper Emotional Humanoid Robot
Pepper Emotional Humanoid Robot | Photo: Kazuaki Nagata/JapanTimes
Pepper is a cute-faced humanoid robot designed by Aldebaran in collaboration with Japanese communications giant SoftBank. The robot is geared toward high-level human interaction and therefore features some advanced capabilities. It is equipped with a highly complex, cloud-backed voice recognition engine capable of identifying not only speech but also inflections, tonality, and subtle variations in the human voice. It also has the ability to learn from its interactions, while its 25 sensors and cameras provide detailed information about the environment and the humans interacting with it. Pepper is not only a master of speech; it can use body language as well, relying on 20 actuators to perform very fluid and lifelike movements.
The robot has been available for sale since June 2015 – only in Japan for now – at a price of 198,000 yen (about US $1,600), in line with the price announced the previous year. It is worth noting that the price tag does not entirely cover production costs; however, the hope is that the difference will be covered by cloud subscriptions and maintenance fees totaling about US $200 per month.

5. Romeo

Romeo humanoid robot
Romeo | Photo: Aldebaran
Romeo is a cute-faced character made of plastic and metal, with a height of 143 cm. The robot is under continuous development, with new features being added as we speak. Romeo is built to order and customized according to requirements. The idea of developing a robot to help people with disabilities or health problems is not new, but Romeo is one of the best robots built for these tasks. Besides caring for people, it can be a real family member: it can hold a discussion, work in the kitchen, or empty the garbage. Interaction between people and Romeo happens in a natural way, using words or gestures. Even though it has only four fingers on each hand, the robot can grasp, manipulate, and feel objects of any shape. Its degrees of freedom add up to a total of 37.

6. HUBO 2 Plus

HUBO 2 Humanoid Robot
HUBO 2 Plus
The HUBO 2 Plus robot comes from Korea; it has a height of 130 cm, weighs 43 kg, has a staggering 40 DOF (10 of them just for the fingers), and costs approximately US $400,000. It senses the environment through a video camera, while an array of inertial and force-torque sensors is used to accurately determine its position. The Plus version, introduced in 2011, comes with low power consumption, greater flexibility, a lightweight body, and high intelligence. Its 40 degrees of freedom allow high flexibility of the head, arms, and legs. It can dance, walk, and grab objects almost as well as a human. It can walk at a speed of 1.5 km/h and run at 3.6 km/h.
A modified variant called DRC-HUBO won the DARPA Robotics Challenge in June 2015, successfully completing all of the tasks in the competition. DRC-HUBO features modifications such as wheels added to its knees and feet for increased stability, more powerful motors, longer arms with more DOF, and a 180-degree rotating torso. Complicated sensing mechanisms and force-torque sensors were dropped in favor of a single camera and a lidar operated only when required.

7. HOVIS Series Robots

HOVIS Eco Plus
HOVIS Eco Plus | Photo: Dongbu Robot
HOVIS Eco Lite
HOVIS Eco Lite | Photo: Dongbu Robot
HOVIS Eco Plus is a 20 DOF, 41cm tall robot developed by Korean company Dongbu Robot. Attractively packaged and packed with sensors and lights, it can be employed either as a ready-to-run robot or as a development platform. It is based on an ATMega128 controller chip, and motion simulation, visual programming, and task programming software comes bundled with it; it can also be programmed with other IDEs such as Visual Studio or AVR Studio. Pricing is around US $1000-1200, depending on whether you want it assembled or not.
Another variant, the HOVIS Eco Lite, is a more basic kit with only 16 DOF and without the plastic body casings of the Eco Plus; inertial sensors and Zigbee are optional extras, but otherwise the two are fully compatible with each other. This version is available at around US $700 unassembled or about US $1100 in ready-to-run form.
HOVIS Genie
HOVIS Genie | Photo: Dongbu Robot
HOVIS Genie is designed as a personal robot and can help its users with daily tasks. It can perform voice recognition, play music, patrol its environment, and carry out household tasks. It is just as customizable as the Eco Plus and Lite. A myriad of sensors is packed into this robot, and it can automatically roll into its charging station. It moves efficiently thanks to a rolling base equipped with omnidirectional wheels. The Genie is priced at around US $2000 and is available at major retailers.

8. RoboThespian

Robothespian humanoid robot
Robothespian | Photo: Gizmag
RoboThespian comes from the United Kingdom and has been in continuous development since 2005. The current iteration – RT 3 – has been available since 2011 and can be bought at prices starting at approximately US $78,000, or rented for various events. The robot is designed to be used in museums to guide visitors, in education, and in research. It is a good public speaker, and it impresses with its gestures and the emotions displayed on its face. It also has the ability to dance, sing, and recite text. The two eyes are made up of LCD screens and change color in relation to the robot’s movements. Some of its moves are created in the 3D animation program Blender.
The robot can be controlled remotely from a browser while the user can see what the robot sees at all times. The browser interface also allows for customizing high level functions via Python scripts running on a proprietary software architecture, while processing is ensured by Intel NUC units.
The robot’s body is made of aluminum, while components are covered with the very common PET plastic. Instead of electric motors, RoboThespian uses pneumatic muscles made by the well-known German company Festo; these air-pressure-driven actuators allow delicate and precise hand movements.

9. iCub

iCub humanoid robot
iCub | Photo: IIT
Now at version 2.5, iCub is actually a spoiled baby: that's how advanced this robot is. With a price of US $270,000 (EUR 250,000) without tax, this is an extremely advanced social robot; the good part is that it is also modular, so parts can be bought separately. It has a height of 100cm, weighs 23kg, has "human features" such as skin, sensors in its fingertips and palms, complex tendon articulations, and elastic actuators, and is able to recognize and manipulate objects.
Each hand has 9 DOF and can feel objects in the same way a human does. The head is an essential component for movement, recognition, and commands. It has 6 DOF and integrates two cameras, two microphones, gyroscopes, and accelerometers. Its brain is a PC/104 controller board powered by an Intel CPU.

10. PR2 Robot System

PR2 SE robotic platform
PR2 SE | Photo: Willow Garage
The PR2 is one of the most advanced development platforms to date, created by robotics research company Willow Garage, the same company that created ROS (the Robot Operating System).
PR2 is impressive in every way, from its dimensions and hardware to the impressive open source community and the amount of development around it.
The robot has a variable height, thanks to its telescopic body, between 1.3 and 1.64 meters (about 4.3 and 5.4 feet); 7 DOF arms that extend to almost 1 meter; an omnidirectional mobile base; and a 2 DOF neck joint. Lots of sensors are employed: Hokuyo laser scanners, a Kinect sensor, multiple stereoscopic and regular cameras, and pressure sensors, to mention just a few.
Two quad-core Intel i7 Xeon CPUs with 24GB of RAM lie at the core of its control electronics and several communication interfaces, such as Gigabit Ethernet, dual-band WiFi and Bluetooth, ensure connectivity, while a 1.3kWh battery pack provides power for the whole system.
A complete setup with 2 arms and grippers, including charging station, controller, and other accessories, costs US $280,000 without tax; a 30 percent discount is available if important contributions to the open source community are demonstrated.

11. HRP-4

HRP-4 humanoid robot
HRP-4 | Photo: Kawada
HRP-4 is one of the most advanced humanoid robots, with 34 DOF and a price of US $300,000 (approximately EUR 222,000). Developed by Kawada Industries of Japan together with the Japanese National Institute of Advanced Industrial Science and Technology (AIST), it is used for research and development of advanced motion software for humanoid robots. HRP-4C, nicknamed Miim, is a more human-looking evolution of this platform that not only looks eerily human but can also dance and sing.
HRP-4 has a height of 1.51 meters and weighs 39kg. Each robot arm has 7 degrees of freedom and can lift a maximum weight of 0.5kg. The intelligence level is pretty high: the robot can talk, understand, and recognize or manipulate various objects. Thanks to its wide range of integrated sensors, HRP-4 can detect the direction of a sound and turn towards the speaker. The brain is an Intel Pentium M processor with a working frequency of 1.6GHz.

12. Kuratas


Kuratas is a very interesting robot: it offers lots of features, it is rideable, and it can also be controlled from a smartphone app. Its operating system, V-Sido, offers pretty advanced control and motion, and it can be ordered straight from Amazon. In Japan.

The price? Oh, the price is about 1.3 million US dollars (not a typo), but as the demonstration videos show, it may be worth it. The 3.8 meter (about 12 feet) tall, 5-ton robot is also equipped with "weapons" such as a water-powered LOHAS launcher and a smile-powered gatling gun, should you encounter any enemies on that lazy Sunday afternoon when you're driving it around the block.

13. ASIMO

ASIMO humanoid robot
ASIMO | Photo: Honda Robotics
ASIMO is the most advanced humanoid robot that can be bought, but also the most expensive: it costs no less than US $2,500,000. The latest version appeared in 2011 and brings significant improvements in autonomy, new balancing capabilities, a new recognition system, and other technological improvements.
ASIMO can move in crowded places such as shopping malls, stations, or museums. It has the ability to adapt to its environment, can walk on almost any terrain, and can climb or descend stairs almost as well as a human. It has a height of 130 cm and weighs 48 kg. Its 57 DOF enable the machine to perform amazing maneuvers, and it can also run at speeds up to 9 km/h.

Conclusion

Robots are no longer used just in industrial environments, factories, warehouses, and laboratories. They are becoming part of the society we live in, part of our lives. The passage of time will bring lower costs and new technologies, which will narrow the intelligence gap between robots and humans and enable widespread adoption. For the moment, advanced humanoid robots are found in limited numbers on the market due to low demand and high prices; however, we expect to see many changes in this market in the near future, especially in Japan, where demand is very high. Would you acquire a humanoid robot today?
