Robots Will Take Over The World In 20 Years

admin | August 10, 2017

The debate over whether robots will overtake humans has recently been heated up by warnings from some academic and industrial superstars against the potential threat of unregulated robot development. However, what is obviously missing from those warnings is a clear description of any realistic scenario by which robots could assuredly challenge humans as a whole: not as puppets programmed and controlled by humans, but as autonomous powers acting on their own "will". If this type of scenario could never become realistic, then even though we might see robots used as ruthless killing machines in the near future by terrorists, dictators, and warlords, as warned by elite scientists and experts [1], we need not worry too much about the so-called demonic threat of robots, since in the end it would be just another form of human threat. However, if such scenarios could foreseeably be realized in the real world, then humans do need to start worrying about how to prevent the peril from happening, instead of how to win debates over imaginary dangers.

The reason that people on both sides of the debate have not seen or shown a clear, realistic scenario in which robots could indeed challenge humans is, at bottom, a philosophical one. So far, all discussion of the issue has focused on the possibility of creating a robot that could be considered a human, in the sense that it could truly think as a human rather than being merely a tool operated by programmed instructions. According to this line of thought, it seems we need not worry about robots threatening our species as a whole, since nobody has yet provided any plausible reason to believe that such robots can be produced.

Unfortunately, this way of thinking is philosophically flawed, because those who reason this way miss a fundamental point about our own human nature: human beings are social creatures.

An important reason we have survived as what we are now, and can do what we do now, is that we live and act as a societal community. Similarly, when we estimate the potential of robots, we should not focus solely on their individual intelligence (which, of course, is so far infused by humans), but should also take into consideration their sociability (which, of course, would initially be created by humans).

This leads to a further philosophical question: what would fundamentally determine the sociability of robots? There might be a wide range of arguments on this question, but in terms of being able to challenge humans, I would argue that the fundamental criteria of sociability for robots could be defined as follows:

1) Robots could communicate with each other;

2) Robots could help each other recover from damage or shutdown through necessary operations, including changing batteries or replenishing other forms of energy supply;

3) Robots could carry out the entire manufacture of other robots, from exploring, collecting, transporting, and processing raw materials to assembling the final products.

Once robots possess the above functionalities and start to "live" together as a mutually dependent multitude, we should reasonably view them as social beings. Sociable robots could form communities of robots, and a community of robots functioning as defined above would no longer need to live as slaves of human masters. Once that happens, it would mark the beginning of a history in which robots could challenge humans, or begin their cause of taking over from humans.
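As a thought experiment, the three criteria above can be sketched as a toy simulation. Every class, method, and threshold here is a hypothetical illustration invented for this sketch, not any real robotics API:

```python
# Toy illustration of the three sociability criteria:
# communication, mutual recovery, and self-manufacture.
# All names and numbers are hypothetical, not a real robotics system.

class Robot:
    def __init__(self, robot_id):
        self.id = robot_id
        self.battery = 100
        self.inbox = []

    # Criterion 1: robots can communicate with each other.
    def send(self, other, message):
        other.inbox.append((self.id, message))

    # Criterion 2: robots can help each other recover, here by
    # transferring energy to a shut-down peer.
    def recharge(self, other, amount):
        transfer = min(amount, self.battery)
        self.battery -= transfer
        other.battery += transfer

    # Criterion 3: robots can manufacture new robots from
    # collected raw materials (arbitrary threshold for one build).
    def manufacture(self, raw_materials, new_id):
        if raw_materials >= 10:
            return Robot(new_id)
        return None


# A minimal "community": one robot rescues and tasks another,
# and the rescued robot builds a third member.
a, b = Robot("a"), Robot("b")
b.battery = 0                  # b has shut down
a.recharge(b, 30)              # a restores b's energy supply
a.send(b, "mission: gather raw materials")
c = b.manufacture(raw_materials=12, new_id="c")
```

The point of the sketch is only that mutual dependence needs no exotic mechanism: messages, energy transfer, and a build step are enough to close the loop.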

The next question is: would the sociability defined above be realistic for robots?

Since not all of the functionalities mentioned above exist (at least publicly) in the world today, to avoid unnecessary argument it would be wise to base our judgment on whether any known scientific principle would be violated in a practical attempt to realize any one of those functionalities. Communicating with other machines, moving objects, operating and repairing machine systems, and exploring natural resources are all common practice with programmed machinery today. Therefore, even though no single robot (or group of robots) yet possesses all of the functionalities mentioned above, there is no fundamental reason why any of them should be considered unproducible under any known scientific principle; the only thing left to do would be to integrate those functionalities into a single whole robot (and thus into groups of such robots).

Since no known scientific principle would prevent any of those functionalities from being realized, we should reasonably expect that, with money invested and time spent, the creation of sociable robots as defined earlier could foreseeably become real, unless humans make special efforts to prevent it from happening.

Although sociability would be a critical precondition for robots to challenge humans, it might still not be sufficient for robots to pose a threat. In order to become a real threat to humans, robots would need some ability to fight. Unfortunately for humans, the fighting ability of robots might be more real than their sociability. It is reasonable to expect that human manufacturers would make great efforts to integrate as much of the most advanced available technology as possible into the design and production of robots. Therefore, based upon common knowledge of today's technology and what we have already witnessed robots do, we might very moderately expect that an army of robots would be capable of the following:

1) They would be highly coordinated. Even if scattered around the world, thousands of robots could be coordinated through telecommunication;

2) They would be good at remotely controlling their own weaponry, or even the weaponry of their enemies once they break into the enemy's defense systems;

3) They could "see" and "hear" what happens hundreds or even thousands of miles away, whether it happens in open or concealed space, and whether the sound propagates through air or through wire;

4) Even as individuals, they might be able to move on land, on or under water, and in the air, in all weather conditions, moving slowly or quickly as needed;

5) They could react promptly to stimuli, act and attack with high precision, and see through walls or into the ground;

6) Of course, they could identify friends and enemies, and make decisions about how to act based upon the targets or situations they face;

7) Besides, they would not be troubled by fundamental human traits such as material and sexual desires, jealousy, the need for rest, or fear of death. They would be poison-proof (whether against chemical or biological poisons), and they might even be bulletproof.

According to the definition of robot sociability given above, robots in a community would be able to: 1) help each other recover from damage or shutdown, so replacing an operating system or application programs, or replacing and adding required hardware parts, would not be an issue; and 2) manufacture new parts for producing new robots, so that as long as designs exist for new software or hardware, they could produce the final products from those designs.

The above two points describe what robots could practically be made to do even today. However, in order to win a full-scale war against humans, robots would need to perform complicated logical reasoning when facing unfamiliar situations. This might be a more difficult goal than any capability or functionality mentioned so far in this writing. There could be two different ways to achieve it.

We might call the first the Nurturing way, by which humans continue to improve the logical reasoning ability of robots through AI programming even after the robots have formed a community. Humans keep nurturing the community of robots in this way until, at some point, the robots are good enough to win a full-scale war against humans, and then set them off to fight. To people without a technical background this might sound like wishful thinking without assured certainty, but people with some basic programming background can see that, as long as time and money are invested in creating a society of robots that could challenge humans, it is one hundred percent doable.

The second would be the Evolution way, by which from the very beginning humans create a community of robots that can carry out their own evolution through software and hardware upgrades. The main challenge is how robots could evolve by designing upgrades to their own software and hardware. The task of making robots able to evolve by themselves could then be reduced to two simpler tasks: 1) enabling robots to identify needs, and 2) enabling robots to produce software and hardware designs based upon those needs. The first goal, identifying needs, could be achieved by recording the history of failures to accomplish previous missions, which could in turn be achieved by examining (through some fuzzy-logic-style programming) how each previous mission was carried out. The second goal, designing based upon needs, might be more complicated in principle, but still possible to fulfill. This second approach (the Evolution way) would be a bigger challenge than the Nurturing way, and so far we cannot see one hundred percent certainty of it happening in the future even if money and time are invested. However, even if humans failed to create an evolutionary community of robots, they could still help robots become intelligent enough to fight a full-scale war against humans through the Nurturing way mentioned above.
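The two sub-tasks of the Evolution way can be sketched as a toy loop: derive needs from a log of failed missions, then map each need to a design change. The function names, the log format, and the lookup table are all hypothetical illustrations invented for this sketch; a real system would require far richer reasoning than a lookup:

```python
# Toy sketch of the "Evolution way": identify needs from a history
# of failed missions, then generate upgrade designs from those needs.
# All names and data here are hypothetical, not any real system.

def identify_needs(mission_log):
    """Sub-task 1: derive needs by examining recorded mission failures."""
    needs = []
    for mission, outcome, reason in mission_log:
        if outcome == "failed":
            needs.append(reason)  # e.g. "insufficient battery"
    return needs

def design_upgrades(needs):
    """Sub-task 2: map each identified need to a design change.
    The lookup table only stands in for a genuine design step."""
    catalog = {
        "insufficient battery": "install higher-capacity cell",
        "lost communication": "add redundant radio link",
    }
    return [catalog.get(n, "flag for human review") for n in needs]

# A hypothetical mission history: (mission, outcome, failure reason).
log = [
    ("survey area", "succeeded", None),
    ("deliver part", "failed", "insufficient battery"),
    ("relay status", "failed", "lost communication"),
]
upgrades = design_upgrades(identify_needs(log))
```

The sketch shows only the shape of the loop; the open question in the text, whether the design step can be made fully general, is exactly what the lookup table papers over.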

One critical question remains for this writing to answer: why would any reasonable humans create a socially independent community of robots with lethal power and help them fight against humans, instead of keeping them as tools or slaves of humans?

We need to look at this question from two different levels.

First, whether someone who is able to mobilize and organize the resources to create a community of sociable robots would indeed have the intention to do so is a social issue, and it is not under any hard restriction imposed by natural laws. As long as something is possible according to natural laws, we cannot exclude the possibility based solely upon our own wishful thinking about the intentions of all humans.

Second, human civilization carries a suicidal gene within itself. Competition in human society provides enough motive for those who are able to enhance their own competitive power to push their creativity and productivity to the maximal edge. Furthermore, history has proven that humans are prone to ignoring many potential risks when going to extremes for their own benefit. In particular, once some group of humans is capable of doing something with potentially dangerous risks for others and for themselves, a very few decision makers, or even a single person, can make the difference as to whether they actually do it. Since no natural law prevents a community of sociable robots with lethal power from being created, then without social efforts at regulation we may come to a point where we must count on the psychological stability of very few people, or even a single person, to determine whether humans will be threatened by robots.

The last question that remains is why humans would possibly make robots hate humans, even if we do create communities of sociable robots. The answer could be as simple as what was mentioned above: for the sake of competition...