It’s unusual for tech executives and religious leaders to get together to discuss their shared interests and goals for the future of humanity and the planet. It’s even more extraordinary for the world’s three major Abrahamic religions to be represented.
When the Pope joins the meeting, it’s basically unprecedented.
That’s what happened at Vatican City this week as the Catholic Church hosted leaders of the Jewish and Islamic faiths, new signatories to the Rome Call for AI Ethics, in a meeting that included executives from Microsoft and IBM.
“In agreeing on promoting a culture that places this technology at the service of the common good of all and of the care of our common home, you are offering an example to many others,” Pope Francis said, according to a translation of his remarks, addressing the Jewish and Islamic delegations to the meeting.
Pope Francis added, “Fraternity among all is the precondition for ensuring that technological development will also be at the service of justice and peace throughout the world.”
The Rome Call for AI Ethics, which was originally signed nearly three years ago by Microsoft and IBM, focuses on six principles: transparency, inclusion, accountability, impartiality, reliability, and security and privacy.
Microsoft President Brad Smith, who attended and addressed the meeting, spoke with GeekWire from Europe about the Rome Call, the intersection of technology and faith, and Microsoft’s approach to the ethics of AI in both its own software development and its partnership with ChatGPT maker OpenAI.
Continue reading for highlights, edited and condensed for clarity and length.
Q: What changes when you have religious leaders in the room at a technology conference?
Brad Smith: I find it adds an extraordinary dimension to the conversation. You can ask whether this was having religious leaders in a technology meeting, or technology leaders in religious conversation; both are true. … It forces one to think about and talk about the need to put humanity at the center of everything we do.
I think it’s a good thing. I think it’s a powerful force. It does cause one to reflect a little bit more, and perhaps even think a little bit differently. But ultimately, I think it makes the work we do more purposeful.
It also reminds us that we have a lot of hard problems to stay focused on solving. The bar is raised even further. … And to find that these three religions have such a common vision and common message is important and inspirational for all of us who spend so much time thinking about artificial intelligence and how it can best serve the world.
Q: Will the implementation of these AI principles happen through voluntary commitments from companies such as Microsoft, or will it also require governments of the world to establish rules and frameworks?
Smith: I think it’s clear that the path to responsible AI involves both proactive and self-regulatory steps by responsible companies, and more rules of the road in the form of law and regulation.
I don’t think it would be possible to achieve what the world needs with only one approach or the other. I think it would be naive to expect that everyone in the world that has access to AI will use it only for good. Unfortunately, that’s not human nature. It’s not what history tells us is the path for any technology. …
But the more progress the responsible companies can make, and the more efforts we can pursue to build the broad dialogue and big tent that I think AI ethics needs, I think the easier the path will be for law and regulation.
Q: Have Apple, Amazon, and Alphabet/Google had an opportunity to participate in the Rome Call?
Smith: They have not participated to date. Ultimately, it’s the Vatican’s decision as to when to expand it to other companies. I do think it’s fair to say that Microsoft and IBM were at the forefront of this, not only in being the first ones to sign the Rome Call, but frankly, having done more in Europe to address these issues in the eyes of many.
Q: Microsoft is implementing artificial intelligence now, not only in the development of your own products, but also in the incorporation of OpenAI technologies. Are there situations now, when you’re making choices as development teams or as corporate leaders, where you apply these principles?
Smith: The short answer is yes. Every day, and every week, there are engineers at Microsoft developing AI systems applying our Responsible AI Standard that we adopted last year, with increasing use of engineering tools, and more engineering guidelines and training. All of this is evolving. It is work that needs to move forward quickly.
But the other thing I would add is that I find in both Microsoft and OpenAI a very common and deep commitment to ensuring that AI is used ethically and responsibly. … I couldn’t imagine a group of people more ethical, or responsible, or committed to this cause than the people at OpenAI, with whom we have been working for some time. It is not new. And if you read their mission statement, it is absolutely as real for them as it is for us.