Curiosity is not your ordinary rover. It's bigger than a small car. The rover comes equipped "standard" with six-wheel rocker-bogie suspension and multiple camera systems, and its power supply doesn't rely on solar panels.
Curiosity uses a radioisotope power generator so that it can roam longer and farther, traveling to more interesting places than previous missions. Its expansive suite of science instruments includes Sample Analysis at Mars (SAM), designed to analyze samples of material collected and delivered by the rover's arm.
Robotic Exploration Rover
Test your programming skills and move the robot around the obstacles. Image Credit: NASA

NASA tests robots for exploration in areas called analogs. Analogs are places where the environment is similar to locations like Mars or the Moon, where a robot may be used. One NASA analog is in the Arizona desert. NASA robotics experts conduct field tests in the desert to assess new ideas for rovers, spacewalks and ground support. Some of these tests are conducted by a team called Desert RATS, which stands for Desert Research And Technology Studies. What is it like to be part of a team that designs and tests robots? Find out and test your programming skills with "ROVER". Guide the robot over an analog of 12 terrain grids without consuming all of its battery power. Watch out for obstacles!
When it comes to exploring the hostile environment of space, robots have done a lot (if not most) of the exploring. The only other world besides Earth that humans have set foot on is the Moon. Robotic explorers, however, have set down on the Moon, Mars, Venus and Titan, plunged into Jupiter's atmosphere, and visited a few comets and asteroids. Robotic missions can travel farther and faster, and can return more scientific data, than missions that include humans. There is much debate on whether the future of space exploration should rely solely on robots, or whether humans should have a role.
As contentious as this issue is, there is no doubt that robots have and will continue to contribute to our understanding of the Universe. Here’s a short list of past, current, and future robotic missions that have done or will do much in the way of exploration of our cosmos.
The most famous robots in space have to be the series of orbiters, rovers and landers that have been sent to Mars. The first successful flyby was Mariner 4, which flew past Mars on July 14, 1965 and took the first close-up photos of another planet. The first landers were the Viking landers: Viking 1 landed on July 20, 1976, and Viking 2 on September 3, 1976. Both landers were accompanied by orbiters that took photos and scientific data from above the planet. The landers carried instruments to look for life on the surface of Mars, but the data they returned was somewhat ambiguous, and the question of whether there is life on Mars still awaits an answer. Currently, Spirit and Opportunity are roving away on the Martian surface, well past their expected mission lifetimes, and the Phoenix lander returned a wealth of information about our neighbor. For more about the entire series of Mars missions, go to NASA’s Mars Exploration Program website. Of course, NASA isn’t the only space organization represented at Mars – the European Space Agency currently has Mars Express orbiting the planet, and has the first webcam at another planet available!
Mars isn’t the only place to go in the Solar System, though. Both the U.S. and the Russians sent numerous missions to Venus, with a lot of successes and failures. For a complete list of the many missions to Venus visit the Planetary Society. The most notable firsts: Mariner 2 made the first successful Venus flyby on December 14, 1962, and the Russian lander Venera 7 became the first human-made vehicle to land successfully on another planet and transmit data back to Earth, on December 15, 1970.
Sputnik 1, of course, was the first robot in space, and was launched October 4th, 1957 by the USSR.
The Voyager missions are notable for the milestone of having a robot leave the Solar System. Voyager 1 and 2, launched in 1977, are still making their way out of the Solar System and have entered the heliosheath, the region where the solar wind starts to drop off and the interstellar wind picks up. To keep up with their status, visit the weekly status page.
Dextre, a robotic arm developed by the Canadian Space Agency, is a very cool robot aboard the International Space Station. Dextre allows for delicate manipulation of objects outside the station, reducing the number of spacewalks and increasing the ability of the ISS crew to maintain and upgrade the station.
This is by no means an exhaustive list of the enormous number of robotic space missions. To learn a lot, lot more check out the Astronomy Cast episode on Robots in Space, the ESA robotics page, NASA missions page, and the Planetary Society missions page.
Researchers at the University of Minnesota have made a major breakthrough that allows people to control a robotic arm using only their minds.
Research subjects at the University of Minnesota fitted with a specialized noninvasive brain cap were able to move the robotic arm just by imagining moving their own arms.
Credit: University of Minnesota
The research has the potential to help millions of people who are paralyzed or have neurodegenerative diseases. "This is the first time in the world that people can operate a robotic arm to reach and grasp objects in a complex 3D environment using only their thoughts without a brain implant," said Bin He, a University of Minnesota biomedical engineering professor and lead researcher on the study. "Just by imagining moving their arms, they were able to move the robotic arm."
The noninvasive technique, called electroencephalography (EEG) based brain-computer interface, records weak electrical activity of the subjects' brain through a specialized, high-tech EEG cap fitted with 64 electrodes and converts the "thoughts" into action by advanced signal processing and machine learning.
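The study's decoding pipeline is not spelled out in this article, but the general shape of an EEG-based brain-computer interface can be sketched in a few lines: extract band-power features from each electrode and feed them to a trained classifier that maps the pattern of motor imagery to a control command. The sampling rate, frequency band and classifier below are illustrative assumptions, not the University of Minnesota implementation.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250           # assumed EEG sampling rate in Hz
N_CHANNELS = 64    # matches the 64-electrode cap described above
MU_BAND = (8, 12)  # mu rhythm over motor cortex, commonly used for motor imagery

def band_power_features(epoch):
    """epoch: array of shape (N_CHANNELS, n_samples) -> one band-power feature per channel."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return psd[:, mask].mean(axis=1)

def train_decoder(epochs, labels):
    """Fit a simple classifier on labeled calibration epochs (0 = imagine left arm, 1 = imagine right arm)."""
    X = np.array([band_power_features(e) for e in epochs])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf

def decode_command(clf, epoch):
    """Map a new EEG epoch to a cursor or robot-arm command."""
    label = clf.predict([band_power_features(epoch)])[0]
    return "move_left" if label == 0 else "move_right"
```

In practice the decoder would be retrained per subject during the calibration sessions described below, since EEG patterns vary strongly from person to person.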
Eight healthy human subjects completed the experimental sessions of the study wearing the EEG cap. The subjects gradually learned to imagine moving their own arms, without actually moving them, to control a robotic arm in 3D space. They started by learning to control a virtual cursor on a computer screen and then learned to control a robotic arm to reach and grasp objects in fixed locations on a table. Eventually, they were able to move the robotic arm to reach and grasp objects in random locations on a table and to move objects from the table to a three-layer shelf just by thinking about these movements.
All eight subjects could control a robotic arm to pick up objects in fixed locations with an average success rate above 80 percent and move objects from the table onto the shelf with an average success rate above 70 percent.
"This is exciting as all subjects accomplished the tasks using a completely noninvasive technique. We see a big potential for this research to help people who are paralyzed or have neurodegenerative diseases to become more independent without a need for surgical implants," He said.
The researchers said the brain-computer interface technology works due to the geography of the motor cortex -- the area of the cerebrum that governs movement. When humans move, or think about a movement, neurons in the motor cortex produce tiny electric currents. Thinking about a different movement activates a new assortment of neurons, a phenomenon confirmed by cross-validation using functional MRI in He's previous study. Sorting out these assortments using advanced signal processing laid the groundwork for the brain-computer interface used by the University of Minnesota researchers, He said.
The robotic arm research builds upon He's research published three years ago in which subjects were able to fly a small quadcopter using the noninvasive EEG technology.
"Three years ago, we weren't sure moving a more complex robotic arm to grasp and move objects using this brain-computer interface technology could even be achieved," He said. "We're happily surprised that it worked with a high success rate and in a group of people."
He anticipates that the next step of his research will be to further develop this brain-computer interface technology toward a brain-controlled robotic prosthetic limb attached to a person's body, or to examine how the technology could work for someone who has had a stroke or is paralyzed.
In addition to Professor He, who also serves as director of the University of Minnesota Institute for Engineering in Medicine, the research team includes biomedical engineering postdoctoral researcher Jianjun Meng (first author); biomedical engineering graduate student Bryan Baxter; Institute for Engineering in Medicine staff member Angeliki Bekyo; and biomedical engineering undergraduate students Shuying Zhang and Jaron Olsoe. The researchers are affiliated with the University of Minnesota College of Science and Engineering and the Medical School.
The University of Minnesota study was funded by the National Science Foundation (NSF), the National Center for Complementary and Integrative Health, National Institute of Biomedical Imaging and Bioengineering, and National Institute of Neurological Disorders and Stroke of the National Institutes of Health (NIH), and the University of Minnesota's MnDRIVE (Minnesota's Discovery, Research and InnoVation Economy) Initiative funded by the Minnesota Legislature.
What is a PLC System – Different Types of PLCs with Applications
The Programmable Logic Controller (PLC), also known as an industrial computer, is a major component in the industrial automation sector. Thanks to its robust construction, functional features such as PID control, sequential control, timers and counters, ease of programming, reliable control capability and straightforward hardware, the PLC is much more than a special-purpose digital computer, used in industry and in many other control-system areas. PLCs of many types are available from a vast number of manufacturers, so the following sections look at PLCs and their types.
What is a PLC System?
The PLC was invented to replace the traditional control panels of industrial control systems, whose operation depended on electromagnetic logic relays and timers. A PLC continuously monitors its inputs from sensors and, based on its program, produces output decisions that operate the actuators (a simplified scan-cycle sketch follows the list below). Every PLC system needs at least these three modules:
CPU Module
Power Supply Module
One or more I/O Module
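As a rough illustration of how these modules cooperate, the Python snippet below simulates the classic PLC scan cycle: read all inputs through the input module, execute the user program in the CPU, then write all outputs. The sensor names and the start/stop/level logic are invented for this example; a real PLC would execute compiled ladder logic or structured text on the CPU module.

```python
import time

# Invented I/O maps standing in for the input and output modules.
inputs = {"start_button": False, "stop_button": False, "level_sensor": False}
outputs = {"pump_motor": False}

def read_inputs():
    """Input module: sample the field devices (stubbed with fixed values here)."""
    return dict(inputs)

def user_program(image):
    """CPU module: the user-written logic, evaluated once per scan (seal-in start/stop)."""
    run = (image["start_button"] or outputs["pump_motor"]) and not image["stop_button"]
    return {"pump_motor": run and not image["level_sensor"]}

def write_outputs(result):
    """Output module: drive the actuators."""
    outputs.update(result)

while True:                       # the scan cycle, repeated continuously
    image = read_inputs()         # 1. read inputs into the process image
    result = user_program(image)  # 2. solve the logic
    write_outputs(result)         # 3. update the outputs
    time.sleep(0.01)              # simulated 10 ms scan time
```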
CPU Module
The CPU module consists of a central processor and its memory. The processor performs all the necessary computations and data processing, accepting the inputs and producing appropriate outputs. The memory includes both ROM and RAM: the ROM holds the operating system, drivers and application programs, whereas the RAM stores user-written programs and working data. PLCs use retentive memory to preserve user programs and data when the power supply fails and to resume execution of the user program once power is restored, so they do not need a keyboard or monitor to reprogram the processor each time. Retentive memory can be implemented with long-life batteries, EEPROM modules or flash memory.
BUS or Rack
In modular PLCs, a bus or rack is provided in the backplane, into which the CPU and the I/O modules are plugged at their corresponding slots. The bus enables communication between the CPU and the I/O modules so that data can be sent and received. Communication is established by addressing each I/O module according to its location along the bus relative to the CPU module; for example, an input module in the second slot might be addressed as I2:1.0 (second slot, first channel). Some buses supply the power needed by the I/O module circuitry, but they do not power the sensors and actuators connected to the I/O modules.
Power Supply Module
The power supply module provides the power required by the whole system, converting the available AC power to the DC power needed by the CPU and I/O modules. The 5 V DC output drives the computer circuitry, and in some PLCs a 24 V DC supply on the bus or rack drives a few sensors and actuators.
I/O Modules
The input and output modules of the PLC allow sensors and actuators to be connected to the system to sense or control real-time variables such as temperature, pressure and flow. These I/O modules vary in type, range, and capability; some of them include the following:
Digital I/O modules: These connect sensors and actuators that are digital in nature, i.e., used only for ON/OFF switching. They are available for both AC and DC voltages and currents, with a variable number of digital inputs and outputs.
Analog I/O modules: These connect sensors and actuators that produce or accept analog electrical signals. Inside these modules, an analog-to-digital converter turns the analog signal into data the processor can understand, i.e., digital data (a simple scaling sketch follows this list). The number of channels available on the module also varies depending on the application.
Communication interface modules: These are intelligent I/O modules that exchange information between the CPU and a communication network. They are used to communicate with other PLCs and computers located at remote sites.
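To make the analog module's A/D conversion concrete, here is a small scaling function that turns a raw converter count into an engineering value. The 12-bit resolution and the 0-100 °C temperature range are assumptions chosen for illustration, not properties of any particular module.

```python
def scale_analog_input(raw_count, raw_min=0, raw_max=4095,
                       eng_min=0.0, eng_max=100.0):
    """Convert a raw A/D count (assumed 12-bit, 0-4095) into engineering units,
    e.g. 0-100 degrees C from a temperature transmitter wired to the analog module."""
    span = (eng_max - eng_min) / (raw_max - raw_min)
    return eng_min + (raw_count - raw_min) * span

# Example: a mid-scale count reads as roughly 50 degrees C.
print(scale_analog_input(2048))   # -> ~50.0
```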
An integrated or compact PLC is built from several modules within a single case, so the I/O capabilities are decided by the manufacturer rather than the user. Some integrated PLCs allow additional I/O to be connected, making them somewhat modular.
A modular PLC is built from several components plugged into a common rack or bus, with extendable I/O capability. It contains a power supply module, a CPU and I/O modules plugged into the same rack, which may come from the same manufacturer or from different manufacturers. Modular PLCs come in different sizes with varying power supplies, computing capability, I/O connectivity, and so on.
Modular PLCs are further divided into small, medium and large PLCs based on the program memory size and the number of I/O features.
A small PLC is a mini-sized PLC designed as a compact, robust unit mounted or placed beside the equipment to be controlled. This type of PLC is used to replace hard-wired relay logic, counters, timers, etc. Its I/O expandability is limited to one or two modules, and it is programmed in an instruction list or relay ladder language.
The medium-sized PLC is the most widely used PLC in industry; it accepts many plug-in modules mounted on the backplane of the system. Several hundred input/output points can be provided by adding I/O cards, and communication module facilities are also available.
Large PLCs are used where complex process control functions are required. Their capacities are considerably higher than those of medium PLCs in terms of memory, programming languages, I/O points, communication modules, and so on. These PLCs are mostly used in supervisory control and data acquisition (SCADA) systems, larger plants, distributed control systems, etc.
Some of the manufacturers or types of PLCs are given below:
Allen Bradley PLCs (AB)
ABB PLCs (Asea Brown Boveri)
Siemens PLCs
Omron PLCs
Mitsubishi PLCs
Hitachi PLCs
Delta PLCs
General Electric (GE) PLCs
Honeywell PLCs
Modicon PLCs
Schneider Electric PLCs
Bosch PLCs
Applications of PLC
The figure below shows the operation of a PLC in a simple process control application in which the PLC runs a conveyor belt, counts the boxes passing along it and performs other control operations. The position sensor and other sensor outputs are connected to the input module of the PLC, and a motor is driven from the output module. When the sensors are activated, the CPU reads the inputs, processes them according to the program and produces the outputs that operate the motor, controlling the conveyor.
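Expressed in Python rather than ladder logic, a minimal sketch of the conveyor application described above might look like the following. The sensor names, batch size and stop condition are assumptions made for the example; the figure's actual wiring and program are not reproduced here.

```python
TARGET_BOXES = 12          # assumed batch size

box_count = 0
motor_on = False
prev_box_sensor = False

def scan(box_sensor, start_button):
    """One PLC-style scan: read the sensors, update the count, decide the motor state."""
    global box_count, motor_on, prev_box_sensor
    if start_button:           # operator starts a new batch
        motor_on = True
        box_count = 0
    # Count a box on the position sensor's rising edge (off -> on transition).
    if box_sensor and not prev_box_sensor:
        box_count += 1
    prev_box_sensor = box_sensor
    if box_count >= TARGET_BOXES:
        motor_on = False       # stop the conveyor once the batch is complete
    return motor_on            # value written to the output module driving the motor
```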
A combined PLC and SCADA control structure is widely used in the industrial automation sector and in electrical utility systems such as power transmission and distribution. Programmable sequential switching is another major application area of the PLC.
If we consider PLC types based on size and features, they can be placed into classes: Pico or Nano, Micro, Mini, Standard, RTU, Safety, OEM, and PAC. Keep in mind, though, that trying to classify PLCs into types is not easy, as there is an overlap of features among them.
Some may suggest that the DCS (Distributed Control System) is a type of PLC. Originally the two were more distinct: the DCS handled process/batch control and PID loops, while PLCs were generally used for discrete, on/off control. However, the PLC has evolved over the years into a very powerful controller able to perform many of the tasks a DCS can do. For now, I choose to exclude it from classification as a PLC type, but you can certainly argue the case.
It is difficult, though, to identify the differences between PLCs in the Pico or Nano class and those in the Micro and Mini classes. One manufacturer may name a particular PLC line Pico (for example, Allen Bradley), while a competitor names its similarly featured and sized line Nano (for instance, GE).
Among the PLC types is the RTU, or Remote Terminal Unit, which specializes in communicating remote measurement and process data, usually to a SCADA system. Since these devices may be placed in the middle of nowhere, they are designed to consume little power (possibly solar power) and to endure environmental extremes. Calling it a PLC might be a bit of a stretch if it is not actually controlling anything, just relaying measurement data to a SCADA system.
Safety PLCs are designed with some form of logic-processing redundancy and monitoring, as well as input and output self-checking. As you may have guessed, these cost more, roughly 30% above the standard fare, and are used in high-risk situations. These controllers are typically applied at Safety Integrity Levels (SIL) 2 and 3.
OEM PLCs are generally without a case or enclosure and therefore suited to fit inside a product that is mass produced, for instance a washing machine.
PACs, or programmable automation controllers, are essentially the top of the PLC food chain. They are very powerful in terms of processing speed, extensibility, programming and communications.
Again, I'll say that the lines between RTU, PLC, PAC and even DCS are blurring as these once-specialized devices adopt more of each other's features. This is simply due to the never-ending advancement of technology, with once-expensive cutting-edge technology becoming more accessible.
The FANUC R-30iB Controller uses high-performance hardware and the latest advances in network communications, integrated iRVision, and motion control functions. The R-30iB Controller features FANUC’s exclusive new and easy-to-use touch screen iPendant with 4D graphics. The iPendant displays process information and the actual process path directly on the iPendant screen, enabling easier setup and troubleshooting.
Based on the latest FANUC Series 30iB CNC Controller, the R-30iB Robot Controller is compact, providing customers with significant space savings. The R-30iB Controller is also energy efficient, requiring less power than previous models, and is available with optional power regeneration.
Discover the entire FANUC robot range. With 15 series of models, FANUC offers the widest range of industrial robots in the world. Covering a diverse range of applications and industries, FANUC machines are easy to operate and provide complete flexibility.
A Robot Learns To Do Things Using A Deep Neural Network
We seem to be starting on the road to autonomous robots that learn how to do things and generalize. Watch as a robot learns how to use a hammer and adapts to changes in the setup.
Deep Neural Networks (DNNs) are well known for doing amazing things, but why are they not used more in robotics? If you have a neural network that can recognize things, why not couple it up to a robot's camera and let it control the robot?
At the moment we have reached the point where if you look around the labs and the different work that is going on you come to the conclusion that there needs to be a consolidation and an integration to create something more than the sum of the parts.
This is starting to happen.
A team at UC Berkeley have implemented a DNN architecture that allows a robot to learn how to perform simple tasks. What is important about this is that while you can program a robot to perform simple tasks, and even teach them to do those tasks by example, a DNN has the power of generalization.
In other areas of application DNNs tend to behave in ways that a human finds "understandable". That is, when they fail a human can see why they have failed and consider the failure not unreasonable. For example, if you show a DNN a photo of a miniature horse and it says that it is a dog - well you can see that it's a near miss rather than being catastrophically wrong as so many classical digital systems are prone to be.
So if you teach a robot to do a job using a DNN then you have to hope that the same powers of generalization will allow the robot to do the same task in slightly different situations, and this is what the UC Berkeley team has demonstrated does happen.
Pictured from left to right: Chelsea Finn, Pieter Abbeel, Trevor Darrell, and Sergey Levine...and BRETT.
They have taken a Willow Garage PR2 called BRETT (Berkeley Robot for the Elimination of Tedious Tasks), equipped it with a single 2D video camera and placed a fairly complicated DNN between the camera and its motor controllers.
The DNN is an interesting design. The raw RGB image is fed into three convolutional layers and then into a softmax that makes a decision about what objects are at what locations. The activations of the final layer of the video-processing network are converted to explicit 2D positions in the visual field and then passed on to three fully connected layers. The input to the second of these layers also includes information about the current position of the robot, and the final layer outputs the signals that drive the robot's motors.
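The paper's exact layer sizes are not reproduced here, but the architecture described above maps naturally onto a "spatial softmax" visuomotor network: convolutional layers produce feature maps, a softmax over each map gives the expected 2D image location of that feature, and those coordinates, joined by the robot's joint state at the second fully connected layer, are mapped to motor commands. The channel counts and layer widths in this PyTorch sketch are illustrative guesses, not the published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSoftmax(nn.Module):
    """Turn each feature map into the expected (x, y) image location of its activation."""
    def forward(self, feat):                        # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        probs = F.softmax(feat.view(b, c, h * w), dim=-1).view(b, c, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=feat.device)
        ys = torch.linspace(-1.0, 1.0, h, device=feat.device)
        ex = (probs.sum(dim=2) * xs).sum(dim=-1)    # expected x per feature map
        ey = (probs.sum(dim=3) * ys).sum(dim=-1)    # expected y per feature map
        return torch.cat([ex, ey], dim=1)           # (B, 2*C) "feature points"

class VisuomotorPolicy(nn.Module):
    def __init__(self, n_joints=7, n_motors=7, channels=32):
        super().__init__()
        self.conv = nn.Sequential(                  # three convolutional layers, as described
            nn.Conv2d(3, 64, 7, stride=2), nn.ReLU(),
            nn.Conv2d(64, channels, 5), nn.ReLU(),
            nn.Conv2d(channels, channels, 5), nn.ReLU(),
        )
        self.points = SpatialSoftmax()
        self.fc1 = nn.Linear(2 * channels, 64)
        self.fc2 = nn.Linear(64 + n_joints, 64)     # robot state enters the second layer
        self.fc3 = nn.Linear(64, n_motors)          # motor commands out

    def forward(self, image, joint_state):          # image: (B, 3, H, W) RGB
        pts = self.points(self.conv(image))         # explicit 2D positions in the visual field
        h = F.relu(self.fc1(pts))
        h = F.relu(self.fc2(torch.cat([h, joint_state], dim=1)))
        return self.fc3(h)
```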
Notice that in this setup there is no pre-programmed component that determines what the robot does. Training is performed mostly by reinforcement learning - a reward is supplied as the robot gets closer to doing what it needs to. The robot also learns useful visual features using the 3D positional information from the robot arm - the camera isn't calibrated in any way.
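As a toy illustration of the reward idea mentioned above, a shaping term can simply penalize the distance between the gripper and the goal, so the reward grows as the robot gets closer to doing what it needs to. The study's actual training objective is not reproduced here; this only conveys the intuition.

```python
import numpy as np

def shaped_reward(gripper_pos, target_pos):
    """Reward increases as the end-effector approaches the target (illustrative only)."""
    distance = np.linalg.norm(np.asarray(gripper_pos) - np.asarray(target_pos))
    return -distance   # closer -> less negative -> higher reward
```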
Of course training in any DNN is costly and so to make the whole training scheme reasonable the early vision layers were initialized using weights from a neural network trained on the ImageNet dataset. These provide a good starting point for general feature recognition. Next the robot trained itself to recognize the object involved in the task. To do this it held the object in its gripper and rotated the object to give different views from known positions. To avoid the neural network including the robot arm in the recognition task it was covered by a cloth! A similar pretraining initialization was used to narrow down the behaviors that could be learned. Finally the entire network was trained on the task.
It is amazing enough that the robot learned to perform the task, but what is really interesting is the generalization observed when the task differs from the training examples. Take a look at the video:
Impressive though this is there is still a long way to go. As the paper points out:
"The policies exhibit moderate tolerance to distractors that are visually separated from the target object. However, as expected, they tend to perform poorly under drastic changes to the backdrop, or when the distractors are adjacent to or occluding the manipulated objects, as shown in the supplementary video."
The solution is probably more training in a wider range of environments. It is also suggested that more information could be included as input to the neural networks - haptic, auditory and so on - and that a recurrent neural network, i.e. a network with feedback, could provide the memory needed to allow the robot to continue if elements of the task are momentarily obscured.
Clearly a lot more work is needed, but this is a demonstration of what can happen when you use neural networks as part of a system with senses and motor control.
It is a step closer to the sort of robot sci-fi has been imagining since I Robot and before.
One of the most difficult robots to build is a two-legged walking robot. Its balancing mechanism needs a complex circuit, lots of sensors, a gyro, servos and mechanics, all controlled by microprocessors running complicated firmware. But here we have made a very simple two-legged walking robot for kids. To drive the legs we used an old CD player's door-opening mechanism, and the mechanical structure was made from iron wire.

Step 1: Components/Parts Necessary:
1> An old PC's CD player's door opening mechanism.
2> Battery: 3.7V Li-Po battery (140mAh to 330mAh)
3> Two Green 3mm LEDs to make eyes.
4> One small switch.
5> Iron wires.
6> Some pipes from old pen refills or sketch pens.
7> Wire, soldering iron, and some tools like a screwdriver, cutter, pliers, glue, etc.
This is another walking robot tutorial video. In this tutorial, I'll show you the step-by-step process of making a simple six-legged walking insect robot.
Parts needed to build this one:
Ice cream/popsicle stick
Paper clips
Drinking straw
Gear motor
Terminal block
Bolts, nuts and locknuts
Wire
Double sided tape
Black electric tape (optional)