«Inclusion, transparency, security, equity, privacy and responsibility»
Curiously, Francis dedicated the Message for the 57th World Day of Peace, January 1, 2024 (although published on December 8, 2023), to the topic of “Artificial Intelligence and Peace.” Why? Because, today, humanity is torn by countless armed conflicts, such as the wars in the Gaza Strip and Ukraine. And these are not the only places: around Christmas, 170 Christians – new “holy innocents” – were murdered in central Nigeria, and in Mexico the massacres perpetrated by groups linked to drug trafficking continue. Why, then, focus on Artificial Intelligence and Peace?
Furthermore, when one analyzes the document, one discovers that in its brevity – just 8 paragraphs – it is much more ambitious than a simple call for peace. Francis highlights the risks that AI poses to peace and calls for it to be used to build paths toward peace. Yet the scope of its content is broader, offering a good synthesis of previous magisterial documents on this new topic. It constitutes a way of including AI within a magisterial text of the highest degree and tradition, namely the Message for the celebration of the 57th World Day of Peace.
The document takes care to maintain a delicate balance between the possibilities and the dangers posed by AI. It starts from a reality: AI has entered history, and there is no turning back. With this premise, it invites us to use it well and warns us of the dangers of its misuse.
Regarding the dangers of using AI in the military field, Francis points out the following:
“The possibility of conducting military operations through remote control systems has led to a lesser perception of the devastation they cause and of the responsibility involved in their use, contributing to an even colder and more distant approach to the immense tragedy of war. The pursuit of emerging technologies in the field of so-called ‘lethal autonomous weapons systems,’ including the use of artificial intelligence in warfare, is a major source of ethical concern. Autonomous weapons systems can never be morally responsible subjects… Nor can we ignore the possibility that sophisticated weapons may end up in the wrong hands, facilitating, for example, terrorist attacks or actions aimed at destabilizing legitimate government institutions.”
Among the aspects worth highlighting in Francis’ new document is his insistence that scientific-technological advances are not “neutral”: they carry an intentionality and serve particular cultural, economic, political, or ideological interests. For this reason, they have an ethical dimension.
The ethical perspective tells us that the greater the power, the greater the responsibility. For this reason, “ethical issues should be taken into account from the beginning of the research, as well as in the experimentation, planning, distribution and commercialization phases.” Security, peace, the common good, and respect for human dignity require that ethics accompany AI systems throughout their development.
On the other hand, there is the problem of determining which ethics. We know, in fact, that moral models abound. Undoubtedly, Francis is committed to a humanist anthropological proposal to underpin the ethics needed to manage AI. For this reason, “technological developments that do not lead to an improvement in the quality of life of all humanity, but, on the contrary, aggravate inequalities and conflicts, cannot be considered true progress.” Progress and humanism go hand in hand, and this follows from never losing sight of the goal of all technological development: the human person.
Some examples of how AI can work against human dignity: “In the future, the reliability of those who ask for a loan, the suitability of an individual for a job, the possibility of recidivism of a convicted person or the right to receive political asylum or social assistance could be determined by artificial intelligence systems.” There are also “negative consequences linked to their improper use, discrimination, interference in electoral processes, the implementation of a society that monitors and controls people, digital exclusion and the intensification of an individualism increasingly disconnected from the community.”
Francis hits the mark when he refuses to identify the uniqueness of the person with a mass of data, and in pointing out how algorithms can alter the way we understand and promote human rights.
The Pope invites us to “reflect on the ‘sense of the limit’… The human being, mortal by definition, thinking of surpassing every limit thanks to technology, runs the risk, in the obsession of wanting to control everything, of losing control of himself, and, in the search for absolute freedom, of falling into the spiral of a technological dictatorship.”
To face these challenges, the Pope offers two simple suggestions, though they are certainly not easy to implement: on the one hand, offer an education that sharpens critical judgment toward everything that comes from the digital world and, in particular, AI; on the other, develop a binding international treaty to control and channel AI, so that it bears fruits that respect human dignity and human rights.
It should be noted that Francis is well advised. An advance over previous magisterial texts is noticeable in the wording of the document, when he speaks of “artificial intelligences” – thus, in the plural – and of “machine learning,” showing the differences they have with authentic human knowledge. The outlook is, for now, anything but clear. It is something being born in our hands, and it raises questions that go beyond the strictly technical and enter the anthropological field, which is why they require an interdisciplinary approach: “Developments such as machine learning or deep learning raise questions that transcend the fields of technology and engineering and have to do with an understanding closely connected to the meaning of human life, the basic processes of knowledge and the ability of the mind to reach the truth.”
We can conclude our reflection on this text by clearly stating the challenges that this exciting reality, AI, presents to us:
“Artificial intelligence must be understood as a galaxy of different realities, and we cannot assume a priori that its development will make a beneficial contribution to the future of humanity and to peace between peoples. Such a positive outcome will only be possible if we are able to act responsibly and respect fundamental human values such as «inclusion, transparency, security, equity, privacy and responsibility».”