Should robots have rights?

Alex Hughes explores the issue of robot morality and law


We live in an unprecedented era of technology: virtual reality, voice commands and self-driving cars are all being woven into everyday life and becoming more common. The next big push for technology, however, comes in the form of artificial intelligence.

Every dystopian story of the robotic age warns us against this artificial evolution: the robotic takeover of humanity, the loss of jobs, the erosion of human autonomy. And yet the race for AI is in full swing. Some of the biggest companies in the world are dedicating huge teams to creating the next great AI, and many of them have made serious progress, building algorithms and software capable of beating humans at games and performing large-scale data tasks.

However, as this era unfolds, an important question must be asked: should these robots be given moral and legal rights? It is becoming increasingly likely that we will reach a point where humans and robots co-exist, so at what point are these robots considered ‘human’? If they talk and look like us, what separates them from us? Most people point to human empathy and emotion, but even those are being programmed into these systems.

To many, it seems logical that robots should not be given rights, since they see them simply as labourers, beneath us in importance. However, a growing community of people argues that they should be treated as equals. As robotic technology advances, new issues emerge. Take, for example, the question: is shutting down a robot the same as killing it? This is one of many questions raised by the moral implications of robotic advancement.

In 1942, the Russian-born American science fiction writer Isaac Asimov drew up his ‘Three Laws of Robotics’:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are becoming outdated, as robots are increasingly likely to be companions to humans rather than labourers. The artificial empathy that robots are being programmed with is bringing them ever closer to humans, and telling the two apart could become difficult.
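To make the strict priority ordering of Asimov's laws concrete, here is a minimal sketch in Python. The `Action` class and its yes/no properties are hypothetical simplifications invented purely for illustration; real moral judgement cannot be reduced to a handful of booleans, which is part of why the laws age so poorly.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical yes/no properties of a proposed robot action.
    harms_human: bool       # would it injure a human, or allow harm through inaction?
    disobeys_order: bool    # does it violate an order given by a human?
    endangers_self: bool    # does it put the robot's own existence at risk?
    ordered_by_human: bool  # was this specific action ordered by a human?

def permitted(a: Action) -> bool:
    """Evaluate a proposed action against the Three Laws in strict priority order."""
    # First Law (highest priority): never harm a human.
    if a.harms_human:
        return False
    # Second Law: obey human orders. Orders that would break the First Law
    # have already been ruled out by the check above.
    if a.disobeys_order:
        return False
    # Third Law (lowest priority): protect the robot's own existence,
    # but only where that does not conflict with the two laws above.
    if a.endangers_self and not a.ordered_by_human:
        return False
    return True

# An ordered action that risks the robot itself is permitted,
# because the Second Law outranks the Third.
print(permitted(Action(harms_human=False, disobeys_order=False,
                       endangers_self=True, ordered_by_human=True)))  # True
```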

This is why a number of people have pushed for so-called ‘robot people’ to be given insurance numbers and identification, both to mark them out as robots rather than humans and to hold them accountable for their actions.

The big issue here is that the outcome of robot morality is so difficult to predict, because we are not at that point yet. It is, however, an issue that must be addressed before it arrives: our current society and legal system are not prepared for the introduction of ‘robot people’, especially as so much of human law rests on morality and judgement.
