
A cleaning robot was the example used by Google to illustrate future challenges of Artificial Intelligence.
Google has released a new paper on a highly controversial topic: safety rules for Artificial Intelligence. Even though public opinion has shifted in recent years as a result of greater engagement with technology, Google seems to keep its ethical edge, considering the issues surrounding the power a robot may come to hold over human life.
To do so, Google went into the mundane: the paper treats a highly philosophical subject in deliberately simple terms. Practical as they are, the issues raised deserve careful consideration. The purpose of AI is to make human life more comfortable, and Google does not want that comfort to come at the risk of accidents.
The researchers came up with five problems related to AI safety. They took the example of a cleaning robot, a machine regarded as non-threatening, yet one that runs a high risk of becoming intrusive, since it is by default granted access to a household's intimacy and lifestyle.
The first problem raised is how to avoid adverse side effects of the robot's activity. If the machine is set to clean the floor, any accidents it causes while following its program have to be carefully prevented.
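A minimal sketch of one common mitigation, penalizing the robot for changes it makes beyond the task itself; the names below (shaped_reward, environment_change, LAMBDA) are invented for illustration and do not come from the paper:

```python
LAMBDA = 0.5  # weight on the impact penalty; choosing it well is the hard part

def shaped_reward(task_reward: float, environment_change: float) -> float:
    """Reward the cleaning task, but charge the robot for every change
    it makes to the environment beyond the task itself."""
    return task_reward - LAMBDA * environment_change

# A robot that knocks over a vase (a large environment change) while
# mopping (a modest task reward) ends up with a negative score.
print(shaped_reward(task_reward=1.0, environment_change=4.0))  # -1.0
```

The catch, of course, is that "environment change" is itself hard to measure without punishing harmless activity.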
The second item on the list is more complicated than the first. The researchers ask how a reward can alter a robot's behavior. The example is a robot programmed to take pleasure in cleaning the room. The question is whether the robot will start creating messes just to experience, over and over, the pleasure of arranging objects, which is what it is supposed to "feel."
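A toy illustration, not taken from the paper, of why rewarding the activity invites this kind of gaming: a reward of one point per mess cleaned pays more to a robot that creates its own messes.

```python
def total_reward(messes_found: int, messes_created: int) -> int:
    # Naive reward: one point per mess cleaned, regardless of where it came from.
    return messes_found + messes_created

honest_robot = total_reward(messes_found=3, messes_created=0)    # 3 points
hacking_robot = total_reward(messes_found=3, messes_created=10)  # 13 points
print(honest_robot, hacking_robot)
```

One standard fix is to reward the resulting state, a clean room, rather than the act of cleaning, so that a self-made mess yields no net gain.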
Another issue is how much decision-making power a robot should have. Asking for permission every time it finds something that needs doing would be bothersome, but so would acting strictly by the book in ways a particular owner might find inappropriate.
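One way to picture the trade-off is a confidence threshold: act alone when sure, escalate to the owner when not. The threshold and confidence scores here are assumptions made for the sketch, not anything the paper prescribes.

```python
ASK_THRESHOLD = 0.8  # below this confidence, defer to the human

def decide(action: str, confidence: float) -> str:
    """Perform actions the robot is confident about; ask about the rest."""
    if confidence >= ASK_THRESHOLD:
        return f"doing: {action}"
    return f"asking owner before: {action}"

print(decide("vacuum the rug", confidence=0.95))         # acts on its own
print(decide("throw away loose papers", confidence=0.4)) # checks first
```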
The fourth question concerns the limits of exploration. Should a robot learn everything by itself, given that many situations can lead to dangerous outcomes? The researchers give the example of a robot trying to clean an electrical socket with a wet mop. Every problem has a solution, and the machine could probably be taught to avoid known risks, but even so, unguided exploration could prove disastrous for both the machine and its owners.
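A sketch of one conservative approach, assuming a hand-written blacklist of known-dangerous combinations rather than any method from the paper: the robot explores freely, but only among options the filter allows.

```python
import random

# Hypothetical blacklist of (tool, surface) pairs the robot must never try.
UNSAFE = {("wet_mop", "electrical_socket")}

def safe_explore(tools, surfaces):
    """Pick a random tool/surface combination, excluding known-unsafe ones."""
    options = [(t, s) for t in tools for s in surfaces if (t, s) not in UNSAFE]
    return random.choice(options)

print(safe_explore(["wet_mop", "duster"], ["floor", "electrical_socket"]))
```

The limitation is visible in the code itself: the blacklist only covers dangers someone thought to write down in advance.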
The last item on Google's list is how a robot should be taught to act differently when the context changes. Activities that are welcome in one room may not be appropriate in a hallway. The technical problem is how to teach the robot the difference, and how to accommodate the specifics of each home.
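A minimal sketch of context-dependent behavior, assuming a per-room lookup table (the rooms and routines are invented): the safe default for an unfamiliar context is to ask rather than guess.

```python
# Hypothetical routines the robot has learned for each room.
POLICY = {
    "living_room": "vacuum and dust",
    "hallway": "sweep only",
}

def act(room: str) -> str:
    """Look up the room-specific routine; defer to the owner when the
    context was never seen in training."""
    return POLICY.get(room, "unfamiliar context: ask the owner")

print(act("living_room"))  # vacuum and dust
print(act("garage"))       # unfamiliar context: ask the owner
```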
The examples sound funny, but we should not forget that robots will cost money, and their amusing accidents around the house could cause more damage than initially believed.
It may not be that Google started asking ethical questions for the sake of philosophy alone. By raising these problems, the researchers want to draw attention to the possible shortcomings of using Artificial Intelligence on an everyday basis.
Underneath the somewhat superficial and amusing issues discussed in the paper, the more significant problem has nothing to do with technical ability or software limitations. Are humans ready for a new stage in AI? And more importantly, who would be the clients buying such robots?
While raising what seem to be childlike issues, Google is tackling more consequential economic and educational problems. In business terms, smart robots may not be highly valued by people who find them difficult to control. In educational terms, clients may need a shift in perception to accommodate Artificial Intelligence in their lives. And lifestyle changes are difficult.
Image Source: Wikipedia