Is a robot a person?
When robotics began to emerge as a topic of reflection at the EU level during 2016, it was, unsurprisingly, the European Parliament (EP) that kicked the ball into play. A complex and lively debate finally led to a Resolution being adopted last February. Faithful to its reputation, the EP did not restrain itself to mere suggestions and encouragements, but boldly requested a legislative proposal from the European Commission. Some of the requests made waves and created quite some controversy. The one entailing the creation of "legal personality" for the most advanced robots - with the aim of answering questions of liability and making them responsible for making good any damage they may cause - proved particularly attractive.
The Church "faces" robots
In the light of the multiple implications of this new frontier opened up by the MEPs, COMECE did not hesitate to contribute publicly to the further debates launched by both the European Commission and the EP JURI Committee, as a quick reaction to the general excitement and interest caused by the recent Resolution.
The position expressed by COMECE has been one of concern. It is not about "rejecting the future", but rather about recalling that the human person is the foundation of every legal order. For a natural person, legal personality derives from his/her existence as a human person: that personality implies rights and duties that are exercised within the frame of human dignity. To place robots - which will always remain machines, regardless of their programmed potential and capabilities - on the same level as human persons contradicts the very concept of responsibility, based on ultimate human rights and duties. There is also an indissoluble link between responsibility and freedom (which is much more than autonomy!).
The analogy between robots and fictitious legal persons (e.g. a corporation) proposed by some does not hold up: while the latter exist and are able to act only because of initial and subsequent expressions of human will, the case of robots is different. The EP seems to link their possible legal personality with their alleged "autonomous features/decision-taking".
All the legal consequences of the option under consideration have to be examined by policy-makers. Granting legal personality to robots could open up disquieting possibilities regarding their capability of holding a full range of legal rights and duties (e.g. under contract law, copyright law and even family law).
So then, robots as animals...?
Some experts would consider, as a solution to the "liability conundrum", extending the rules of liability for animals to robots. However, in this case as well, what would be the ultimate outcome? A perilous shift towards considering robots as belonging to the world of "the living".
Other existing legal regimes relying on liability of humans already provide viable solutions (e.g. provisions on defective products; rules about liability for damages or injury caused by things in one's care). There is no need to reinvent the wheel...
Cool vs reasonable
Once a subject becomes hot in Brussels - especially when a particularly creative and forceful player like the EP steps in - many doors are opened. Must the topic of robotics be discussed? Yes indeed, as the challenges are undeniable. However, a challenge should not necessarily lead to radical changes. It is important not to fly high above reality and mix it with fantasy (e.g. when speaking of levels of robots' "autonomy" the existence of which is highly doubtful) coming dangerously close to "legal daydreaming". The discussion is timely, but the solutions must be based on reality, not on a fascination for extreme constructions. In short, the human person has to remain the centre of any policy or legislation regarding robots and liability.
It might sound "cool" and futuristic to speak of robots as persons and to assign them legal personality. In reality, such an approach would cast aside reasonableness and embrace the (legally and ethically) unknown.
Entertaining the possibility of granting legal personality to robots also speaks, perhaps, of a more or less conscious lack of trust in, or pessimistic view of, humanity: are we only able to foresee a future where man stands helplessly watching while scores of robots carry out actions that cannot be controlled or stopped in any way by a human?
The question of the forces and interests behind the mixture of buzz and pressure surrounding possible legal developments is also a burning one: granting legal personality to robots would indeed serve the position of producers and other actors, freeing them from responsibility for damages in a number of cases. This generous "gift" would, however, be paid for by ordinary citizens, as the solution would turn legal systems and approaches upside down, with uncontrollable consequences.
In any case, it seems the story has only begun. The Commission recently stated that "Faulty sensors, vulnerable software or unstable connectivity may make it difficult to determine who is technically and legally responsible for any ensuing damage" and that "...it will consider the possible need to adapt the current legal framework to take account of new technological developments (including robotics, Artificial Intelligence and 3D printing), especially from the angle of civil law liability and taking into account the results of the ongoing evaluation of the Directive on liability for defective products and the Machinery Directive". The EP services also plan to return to the issue from the angle of the "Cost of Non-Europe on Robotics and Artificial Intelligence".
Caution will still have to remain the key word, though.
The views expressed in europeinfos are those of the authors and do not necessarily represent the position of COMECE and the Jesuit European Social Centre.