Science continues to march forward, pushing the boundary of what's possible and raising significant ethical questions along the way. Many, including the Church, are trying to maintain balance in the relationship between the rights of man and the marvels of science.
The realm of artificial intelligence forces scientists, sociologists, politicians and the religious to ask themselves: how far can the pursuit of scientific discovery advance before these intelligent machines "overtake" man? What does it mean for a machine to be intelligent, and how does it become so? These are the questions legal scholars and experts are attempting to answer as they study artificial intelligence in all its complexity.
Last year, the European Commission drew up a draft code of ethics on the use of artificial intelligence: a series of guidelines for building trustworthy AI systems that respect the fundamental role of human beings. Brussels convened fifty-two international experts from private companies, universities and public institutions (the AI HLEG, a European Commission-backed working group of representatives from industry, academia and NGOs, formed as part of the Commission's ongoing effort to develop EU policy responses to the development, challenges and opportunities posed by AI technologies) to draft the text, which was published in December 2018.
The code is extremely detailed, outlining how fundamental principles of European law should apply to the development and use of intelligent systems. The document calls for the "robustness and security" of AI systems while preserving human primacy in the relationship with AI. It prioritizes human dignity and freedom: human autonomy should take precedence over that of the artificial. Humans should always retain the ability to supervise and control machines, safely limiting robots' capacity to make autonomous decisions.
"There is still an important gap between AI and human intelligence. Robots with applied AI, however advanced, fail to perform certain activities, and that gap has not yet been filled. For this reason, it is still difficult to entrust machines with significant responsibility." The rules that regulate the use of new technologies will have to evolve as quickly and consistently as the technologies themselves. "This is the problem between law and scientific progress: law has a duty to guarantee a minimum level of certainty, but science tends to race ahead, especially in recent years. We must not arrive at legislation so restrictive that it discourages research." Is there a risk of enacting retroactive laws, contravening a cornerstone of the law? "It is not easy, but cooperation between lawyers, AI scholars and society at large, to understand each other's fears, might be a solution. The legal community has taken up the challenge of tackling issues as they arise, in an attempt to stay ahead of the curve."