In the 2004 film I, Robot, when Dr Calvin fills a crystal tube with nanites (microscopic robots) to wipe out the artificial synapses in Sonny, the 'unique' NS-5, we are caught in a dilemma: do we trust robots? Would we really succeed in creating the perfect machines that follow the three laws of robotics? What if we fail in the attempt? Who would scream the loudest: the scientists who are passionate about the twisted logic, the businessmen who care only about the revenue from finding a hundred million ways to exploit the bots, or the customers who have trustingly accepted the services of the machines?
"You might not know this, but one of my responsibilities as Commander-in-Chief [of the US armed forces] is to keep an eye on robots. And I'm pleased to report that the robots you manufacture here seem peaceful - at least for now." - President Barack Obama
Look around you. You're surrounded by machines that do tasks that are given to them - washing machines, fridges, CD players - but what distinguishes them from robots? Robots, too, are machines that are meant to perform tasks given by us, humans. The word you're looking for is autonomy.
Autonomy is the ability of a rational individual to make a voluntary decision. Autonomous robots can perform tasks in an unstructured environment by adapting themselves to it. So if your washing machine can pick up your dirty clothes, sort them out by colour, wash them, dry them, fold them and re-sort them into different slots of your wardrobe, then yes, your washing machine is a robot.
Many developed countries are already using robots in defence... but not humanoids. Humanoids, the focus of our article, are autonomous robots that resemble humans. Some characteristics of a typical humanoid would be safe interaction with humans, self-learning and self-maintenance. A male humanoid is called an android and a female humanoid is known as a gynoid.
Doesn't a question like 'why are we creating mechanical versions of ourselves?' pop up in your head? One good reason: if robots are made to understand and think like humans, working with them would make our lives very productive - they don't tire. But what's going to stop them from jumping around, all out of control? That's not going to be a problem for a while, because we still have not been able to build a robot that can generate enough power to push itself off the ground.
Another reason is that by developing humanoids, scientists will be able to understand the complexities of the human body and the mind. While developing a humanoid, merging various algorithms into one efficient system is a difficult task.
Sensors, controllers and algorithms
'Algorithm' is just a fancy word for a set of instructions that takes you from problem to solution. One such problem: how do we make a humanoid walk? When we walk, our brain tells us how far to step, whether to put the right foot first or the left while balancing on narrow paths, when to walk fast and when to slow down.
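As a toy illustration (and nothing like a real humanoid's gait planner), those walking decisions really can be written down as a literal set of instructions. Every name and number here is invented for the example:

```python
def plan_step(path_width_cm, last_foot):
    """Hypothetical step planner: pick which foot to move and how far.

    Takes shorter steps on narrow paths, and alternates feet,
    just as the brain does for us without our noticing.
    """
    step_cm = 10 if path_width_cm < 30 else 25  # careful steps on narrow ground
    next_foot = "left" if last_foot == "right" else "right"
    return next_foot, step_cm
```

That is all an algorithm is: a recipe precise enough that a controller can follow it without any understanding of its own.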
Similarly, robots get all their signals from controllers. Controllers are processors, like the Intel® microprocessor in your PC. They send pulses to various parts of the bot, such as the limbs. These parts, also called effectors, listen to the controller for commands and perform the required tasks. If the pulse sent to a leg is stronger, the controller intends the robot to take a larger step or to push further ahead.
Such calculations are done by algorithms. The balancing algorithm keeps the robot walking upright without falling. It does so by balancing the pulses sent to each leg. The knees and ankles of a humanoid have motors (also called actuators, but let's not pile on the technical terms). These motors twist and turn according to the pulses they are given, and the balancing algorithm calculates how much pulse each motor should receive.
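A minimal sketch of that idea, assuming a simple proportional rule and made-up pulse units: the more the robot tilts to one side, the more pulse is shifted to one leg and away from the other.

```python
def balance_pulses(tilt_deg, base_pulse=1500, gain=10):
    """Hypothetical proportional balancer.

    tilt_deg comes from the robot's orientation sensor; the correction
    shifts pulse width between the two legs to counter the lean.
    """
    correction = gain * tilt_deg
    left_pulse = base_pulse + correction   # one leg pushes harder...
    right_pulse = base_pulse - correction  # ...while the other eases off
    return left_pulse, right_pulse
```

Real balancing algorithms are far more involved, but the principle is the same: sensor reading in, corrected motor pulses out, many times per second.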
So we have controllers that do the thinking and effectors that implement the controller's thoughts... but where do the thoughts come from? Our brain responds to things that we see, smell, touch, hear and taste; only then can we have the thinking and the response. Humanoids need to sense as well, because they are meant to be autonomous.
For this purpose, humanoids are built with sensors. They have proprioceptive sensors for sensing their own orientation, position and even speed, and they have exteroceptive sensors to detect obstacles. Ultrasonic sensors work like bats: they emit high-frequency sound and listen for the echoes. From the time it takes an echo to return, the sensor can work out the distance of an obstacle. Infrared sensors work on the same principle, but since they use the invisible light spectrum, they cannot detect transparent objects like glass.
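The distance calculation itself is simple physics: sound travels at roughly 343 m/s in air, and the ping covers the distance twice, once out and once back.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at about 20 °C

def echo_to_distance_m(echo_time_s):
    """Distance to an obstacle from an ultrasonic echo's round-trip time."""
    # Halve the round trip: the sound went there and came back.
    return SPEED_OF_SOUND_M_S * echo_time_s / 2.0
```

So an echo that returns after 10 milliseconds puts the obstacle about 1.7 metres away.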
Human fingers can feel objects at the slightest touch and can lift eggs and bulbs with delicacy and steadiness. Humanoids have been clumsy in this matter and there is much work to be done. For starters, the Shadow Robot Company in London has developed a humanoid hand. Each fingertip of the Shadow Hand contains microscopic sensors that can sense objects accurately, letting the hand perform most human tasks.
Cameras are also used as sensors. By mounting cameras for eyes, the humanoid gets stereo vision. This is achieved with image-processing algorithms that analyse every captured frame and build representations the controller can interpret.
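One piece of such an algorithm can be shown concretely. In the classic pinhole stereo model, an object's depth follows from how far its image shifts between the two cameras (the disparity); the numbers below are purely illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth = focal length * baseline / disparity.

    focal_px: camera focal length in pixels
    baseline_m: distance between the two cameras in metres
    disparity_px: horizontal pixel shift of the object between the images
    """
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: effectively at infinity
    return focal_px * baseline_m / disparity_px
```

With a 700-pixel focal length and eyes 10 cm apart, a 35-pixel disparity places the object about 2 metres away. Nearby objects shift a lot between the two views; distant ones barely move, which is exactly how our own two eyes judge depth.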
One good use of cameras is face recognition. Picture yourself ten to fifteen years from now: you have a humanoid helper that looks so much like cute little Wall-E, with cameras for eyes. You're tired of reading this never-ending article and need some time to rest your head. "Wall-E, get me a beer from the fridge!" Your roommates are busy packing for a trip and scurrying from room to room.
Your helper finds the can of beer in the fridge and returns to where you were, but you aren't there. Using the image-processing algorithm, he had smartly marked points on your face and stored them. He goes around matching those face points against anyone who passes by. Finally, he spots you stepping out of the washroom and proudly hands over the can of beer.
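A crude sketch of that matching step, assuming the 'marked points' are simple (x, y) landmark coordinates and using an invented similarity threshold; real face recognition is considerably more robust than this:

```python
import math

def face_distance(landmarks_a, landmarks_b):
    """Mean Euclidean distance between corresponding face landmarks."""
    dists = [math.dist(a, b) for a, b in zip(landmarks_a, landmarks_b)]
    return sum(dists) / len(dists)

def is_same_face(landmarks_a, landmarks_b, threshold=5.0):
    """Declare a match when the landmark sets are close enough on average."""
    return face_distance(landmarks_a, landmarks_b) < threshold
```

Wall-E, in effect, runs this comparison against every face he sees until the stored points and the live ones line up.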
Acyut and future humanoids
We grew up watching Star Wars, wishing we had a robot like C-3PO to do our homework. Maybe the engineers at Honda had a similar wish when they started off with their first humanoid, ASIMO. In India, research on humanoids is active at institutes like the Birla Institute of Technology and Science, Pilani. Students from this institute have built India's first humanoid, named Acyut.
An interesting fact about this humanoid is that it can be controlled from a distance. The human controlling the robot wears a body suit, and the humanoid then moves according to the suit's movements. But aren't humanoids supposed to be autonomous? Imagine a man who needs bypass surgery in a remote village in India, with no surgeon available for miles. If humanoids were dispatched to such villages, a surgeon working in Delhi or Mumbai could perform the surgery wearing the body suit.
There are umpteen ways in which we can put a humanoid to use. By understanding how a humanoid works (with motors and processor chips), we get the feeling that they are just heartless machines. Hollywood and science-fiction novels have transformed them into smart, almost-human beings. Yet we fail to grasp the complexity of making a perfect replica.
Through various self-learning algorithms we may successfully create a fully autonomous humanoid, but what about feelings? A show of facial expressions, such as that of the Korean gynoid EveR-1, doesn't mean a robot responds with feelings. It calculates the modulation of the tone of our voice and analyses our facial expressions. Our brains do the same thing. When you see your friend smile, you smile as well. But this is not just a bland reaction to what you see: your brain triggers the release of various chemicals and hormones, there is an overall change in behaviour, and the response is even believed to improve health and reduce stress.
All this does not happen in a humanoid. The calculated reaction when you smile at a humanoid would be: make all the motors of the face align into a smile. There is no hormonal change or change of mental state. If, however, we could replicate this entire pathway in a robot, we would be closer than we are now to creating the perfect humanoid.
Of course, this would defeat the purpose of creating humanoids. We want them to do our dirty work, don't we? We don't want them to fall in love with us. Or worse, rebel.
We are way underequipped to consider such outcomes. Soon, humanoids will be used in areas with high risk such as oil rigs, outer space and nuclear power plants.
Think about the use of humanoids as soldiers in defence. India is already using Daksh, a remotely controlled robot created by DRDO (Defence Research and Development Organisation), to detect improvised explosive devices (IEDs) and defuse them using a jet of water.
Already, humanoids are used in entertainment - they can sing, they can dance and they can even play the violin. They can be used to take care of the old and can be hired for babysitting. They can replace constables and traffic police. One day, we hope, they will be used in households as well, to protect us and to make us smile.