Modern Tyranny & Robot Wars
My interest in robotics started in childhood, and my first academic encounter with the field came in my sophomore year, when I wrote a paper on robotics for an advanced English language course.

One of the people I profiled in the paper was Professor Rodney Brooks of the Massachusetts Institute of Technology (MIT), a roboticist who went so far as to believe that robots would not only become a reality in the future, but that their abilities and intelligence would become so advanced that we humans, in our current form, would not be able to survive in their world.

Robots would be too fast, too strong, and too intelligent compared to us, and in order to survive, Brooks predicted, technology would one day allow us to download our consciousness either into computers, to live on in cyberspace, or into enhanced robotic bodies that could survive the rigors and speed of robot-filled cities.

As exciting as all this sounded to me, science fiction fan that I am, it didn't seem entirely plausible that such a world would materialize anytime soon. But with Brooks, it seems, it was less a prediction than a self-fulfilling prophecy.

According to the recently published book "Wired for War: The Robotics Revolution and Conflict in the 21st Century" by P. W. Singer, the field of robotics has now taken such leaps and strides that science fiction writers seem uninspired by comparison. One of the leaders of this movement is none other than Brooks himself.

In 1990, Brooks and two of his students from MIT established a company they called iRobot (a nod to Isaac Asimov's science fiction collection "I, Robot," later turned into a Will Smith movie), which designs and makes robots for both civilian and military uses.

For American homes, iRobot makes the Roomba, a vacuum cleaner that roams around and cleans autonomously; for Iraq and Afghanistan, they make the PackBot.


Operated wirelessly by US troops, the lawnmower-sized PackBot is used to identify IEDs (Improvised Explosive Devices, more popularly known as roadside bombs), which insurgents use against the occupying forces.

Singer tells how the soldiers using those robots have grown fond of their PackBot companions, and how they send letters to iRobot describing how time and again these robots have been saving lives on the battlefield as they either deactivate the bombs ahead of the troops or get blown up in their place.

The use of remotely operated robots in the two wars the US is currently fighting in the Muslim World has grown exponentially over the past years. Singer points out that while the invasion of Iraq in 2003 involved zero robots, by 2008 about 12,000 were in use.

And while robots like the PackBot can be said, without much dispute, to be saving lives, others, like the Special Weapons Observation Reconnaissance Detection System (SWORDS) by competitor Foster-Miller, are fitted with enough firepower to make "saving lives" quite a relative term. SWORDS can carry any gun that weighs less than 300 pounds and is considered the first armed robot on the battlefield.

Robots today come in all shapes and sizes, and they don't only run on the ground. The skies of Iraq are said to be so crowded with both manned and unmanned planes that it has become the most congested airspace in the world.

Unmanned drones have become a major point of contention in Afghanistan, where hundreds of civilians are said to have died in their operations, both in the country and across the border in Pakistan.


Drones like the Predator and the Global Hawk are flown by pilots sitting in front of arcade-style controls on the other side of the world, in Las Vegas and Indian Springs, Nevada.

"It's like a video game," one pilot tells Singer. "It can get a little bloodthirsty. But it's cool."

According to Singer, some systems have become so autonomous that their pilots, left with little to do, play Call of Duty or Battlefield 2 when they are not flying their drones, just to kill the boredom.


But Brooks's predictions will not be fulfilled so long as people are controlling these robots from afar. Indeed, roboticists and military experts predict that as robotic systems advance, it will no longer make sense for humans to limit their capabilities by restricting their response time and fighting speed.

The Counter Rocket, Artillery, and Mortar (CRAM) system is an automatic, radar-guided gun that fires 4,500 rounds a minute to create a virtual wall of bullets. Affectionately called R2-D2 by soldiers (after the little robot in the Star Wars movies), it is a cylindrical turret that tilts and swivels, mounted on flatbed trucks in Iraq and Afghanistan.

These units are stationed to shoot down insurgent mortars and rockets that have passed all other defenses at US bases. While soldiers barely have a second to jump away from the explosions, CRAM is able to destroy the incoming rounds in the air, before they hit the ground, 70 percent of the time.

"The trend towards the future will be robots reacting to robot attack, especially when operating at technological speed … As the loop gets shorter and shorter, there won't be any time in it for humans," an army colonel told Singer.

If a robot is required to fire on fast-moving targets like rockets and prevent them from reaching their targets, it must aim and shoot at such speed that a human operator would only slow it down and keep it from achieving its goal.

Fighter planes, too, have reached a point where some of their maneuvers, if taken to full capacity, can cause their pilots to black out. For this reason, there is now a push to make robots completely autonomous once they are programmed with the mission they are to carry out.

While robotic missions in the sense of hitting a building or camp in Afghanistan might seem straightforward (take off, fly to these GPS coordinates, drop the bombs, fly back, land), when faced with unexpected circumstances, it is anybody's guess what kinds of mistakes robots are capable of making.

Indeed, if military intelligence is capable of mistakes that lead remotely controlled drones to bomb civilians instead of the intended insurgents, can autonomous robots be trusted to do any better?

In 1988, the USS Vincennes was stationed in the Persian Gulf. On board was the Aegis radar system programmed to monitor the skies for incoming missiles and take action to shoot down any threats posed to the ship.

Although the programming of this robot allowed it to operate in four different modes, ranging from full manual to full automatic, the officers on board trusted it enough to allow it to operate without any intervention.

When Iran Air Flight 655 was detected by the radar, the system mistook it for an incoming missile and shot it down, even though the airliner was following its correct course and broadcasting radar and radio signals clearly indicating it was not a threat. All 290 passengers on board, including 66 children, were killed.

Singer points out that even though roboticists and military personnel today insist that humans will never defer life-or-death decisions to future robots, it is already happening: people are reluctant to second-guess what are perceived to be the more precise and calculated decisions of machines.

As robots make more and more such decisions on the battlefield, questions will emerge that humanity has never faced before. For instance, in a similar incident where an autonomous robot shoots down, bombs, or assassinates innocent civilians, who should take the blame?

Taking the robot to court would hardly serve justice, while one can typically assume that its programmers never intended for it to target the wrong people.

What about remotely operated robots like today's drones? If some of the futurists have their way and within 20 to 30 years some nations are equipped with whole armies of robots with no human soldiers in sight, what tactical options would their opponents be left with?
Smashing robots all day long would not leave them with much perceived progress on the battlefield, as the human operators would simply roll out the next robot in storage (just as computer gamers get new "lives" after each kill).
So, would the enemy be forced to look for ways to hit back at the operators located in their home countries? Considering that the operators are running their robots from within buildings and facilities around populated areas, wouldn't that classify them as hiding behind human shields and subjecting their own civilian populations to retaliation for a war they are remotely fighting on the other side of the globe?

For now, these ethical questions do not seem to be on the military's mind as the fever for more battlefield robots heightens. As US public opinion drives politicians to avoid casualties among their forces at all costs, robots seem like the clear way to go.

This is driving the US military to fund more than 80 percent of all research in artificial intelligence, and Singer insists that most of it is happening out in the open, in US universities and private research labs, rather than in the secret locations Hollywood is so fond of naming in its movies, like Area 51. "We just aren't watching," he says.

And so long as the current wars in Iraq and Afghanistan continue, the funds will continue to pour in, bringing the self-fulfilling prophecies of roboticists closer and closer.

Singer writes, "I asked an executive at one defense contractor whether he agreed with the crazy ideas being bandied about on singularities and robots becoming as smart as humans. He replied, 'If this war keeps going on a few more years, then yes.'"

--------------------------------------------------------------------------------
Source: Islam Online