The race to use Artificial Intelligence (AI), accelerated since ChatGPT was released to the public at the end of 2022, brings new challenges for corporate governance: companies need to map the operational and reputational risks of using the technology. AI enhances and optimizes organizations' day-to-day operations, but it also raises concerns typical of disruptive innovations, such as the fact that robots and chatbots still lack ethical discernment. Among the reflections already under way, both in companies and in academia, is the need to review norms and guidelines for risk management, along with specific recommendations for dealing with the issue.
There is a way to mitigate the risks. “You have to start small and grow gradually. Start small, test, learn. These are new technologies. You have to learn”, suggests Simone Lettieri, IT, Digital and Operations leader at Meta, a technology and innovation company. There is also a general recommendation: technologies in this area must be used ethically, responsibly, reliably and safely. That guidance, however, has not been enough to guarantee absolute safety. The request for a six-month pause in the development of systems more powerful than ChatGPT, made in an open letter signed by leaders of the technology industry themselves, is a warning in this direction.
“The main reputational risk is a public scandal that can lead to a decrease in the company's value”, warns Professor Ricardo Baeza-Yates, director of research at Northeastern University's Institute for Experiential AI in Silicon Valley and a global reference in Artificial Intelligence, in an interview with Reputation Feed. He is actively involved in initiatives, committees and advisory boards around the world, such as the Global AI Ethics Consortium, the Spanish AI Council and the US Technology Policy Committee, and he also tracks the damage already caused by the use of AI. To give an idea of the scale, the site incidentdatabase.ai already lists more than 2,400 incidents related to Artificial Intelligence.
Among the recent damage caused by AI, the Chilean-Catalan researcher cites an error in the bot Google launched to compete with ChatGPT (developed by the research company OpenAI) and Microsoft's Bing. Launched in the United States in February 2023, the bot, called Bard, gave a wrong answer to the question of what to tell a nine-year-old child about the discoveries of the James Webb Space Telescope. Bard said the telescope was the first to photograph a planet outside the Solar System, but the pioneer was the European Very Large Telescope, in 2004.
The error appeared in the launch promotional video. That was enough to wipe some US$ 100 billion off the market value of Alphabet, parent company of the best-known internet search engine, in addition to causing irreparable damage to its credibility, in a case that reverberated globally in the media.
According to Baeza-Yates, the most appropriate way to deal with harm today is a social impact analysis of the benefits and risks of use. “This involves legal and ethical aspects, including users' perception”, he says. In the scientist's assessment, the risks are greater in areas that are decisive in people's lives. “The data does not fully capture a person's context, and people are not numbers”, he points out. This is the case, for example, in situations such as job interviews and evaluations for granting scholarships or credit.
Managing which risks to accept and which to avoid
“There will always be risks, in any human endeavor”, adds researcher João Luiz Becker, full professor at the Department of Technology and Data Science (TDS) at FGV EAESP and a consultant in administration and mathematical modeling of complex systems. Since the total absence of risk is a utopia, Becker argues that the fundamental question is to manage which risks are accepted and which should be avoided. He understands that the use of AI-based systems is a responsibility of companies and should therefore be a subject of reflection for their leaders. Proper Artificial Intelligence risk management needs to rest on three pillars, which Becker lists for Reputation Feed.
The pillars
- Adoption of ethical principles for reliable and responsible Artificial Intelligence-based systems, which provide the basis for defining and selecting risk controls.
- Implementation of risk management structures in this area, taking responsibility for supervising and auditing the performance of AI-based systems.
- Design of risk management processes that accompany the entire lifecycle of AI-based systems.
Ethical principles, Becker adds, must be analyzed with an eye on the risks that can harm companies. “For example, the principle of justice seeks to avoid discrimination and injustice against individuals or groups of individuals, with AI-based systems that provide accessibility and universal design for stakeholders. The privacy principle, in turn, proposes adequate governance for the privacy, protection, quality, integrity and access of the data used by AI-based systems. If personal data will be used by the systems, this must be clearly communicated to the people involved, as well as the purpose of its use.”
In an extensive study on the subject, conducted in partnership with Carlos Eduardo Brandão, CEO of Intelliway Tecnologia, Becker presents further principles that must be observed to avoid risks and mitigate potential damage:
- Responsibility – Are there mechanisms for audit, accountability and redress?
- Security – Are the systems resilient, reliable and secure?
- Transparency – Are people informed when interacting with AI-based systems about their capabilities and limitations?
- Explainability – Is the operation of AI-based systems clear and easy to understand?
- Ethics – Is there human supervision?
- Compliance – Are applicable laws and regulations followed?
Becker considers it unlikely that robots will gain ethical discernment, which would put an end to the main reputational risks. “I do not believe so. In a broader and more ambitious sense, a general AI, or even a super AI, is envisaged. We are currently at the threshold of narrow AI development (artificial narrow intelligence), with systems that appear to behave like humans.” He notes that racist or misogynistic biases, for example, “come built into the training data of AI-based systems”, and he offers a way to mitigate them: “to avoid them, or at least minimize them, one must properly curate these training data.”
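As a purely illustrative, minimal sketch of what such curation can involve (the records, group labels and 30% threshold below are hypothetical, not taken from Becker's work), the Python snippet audits how groups are represented in a training set and flags under-represented ones before any model is trained:

```python
from collections import Counter

# Hypothetical training records: (text, demographic_group) pairs.
# In practice these would come from the real training dataset.
training_data = [
    ("resume text A", "group_x"),
    ("resume text B", "group_x"),
    ("resume text C", "group_x"),
    ("resume text D", "group_y"),
]

def audit_representation(records, min_share=0.3):
    """Print each group's share of the data and return the groups
    that fall below a chosen minimum share (here, 30%)."""
    counts = Counter(group for _, group in records)
    total = sum(counts.values())
    flagged = []
    for group, n in counts.items():
        share = n / total
        print(f"{group}: {n} examples ({share:.0%})")
        if share < min_share:
            flagged.append(group)
    return flagged

underrepresented = audit_representation(training_data)
if underrepresented:
    print("Under-represented groups to rebalance:", underrepresented)
```

A real curation pipeline would go further, examining label bias and proxy variables as well; the point of the sketch is only that such checks can be made explicit and automated.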
With a background in technology and digital marketing in the financial, media, healthcare and startup industries, Simone Lettieri confirms that the key issue is curation. “The risks are not about technology, but about people. Technology is an enabler of what people will do. People are responsible for what they do”, says Simone, who has dealt with Artificial Intelligence daily since 2015. For the executive, experienced in strategy and digital transformation, it is understandable that the tool is met with fears, like those that initially surrounded the computer and those now surrounding the arrival of the quantum internet (a technology, based on scientific advances involving subatomic particles, that promises more security and speed on the network). “We will have problems, but many opportunities, and what is certain is that the world will not end because of this”, she believes.
What the AI itself recommends
Reputation Feed asked Bing, Microsoft's Artificial Intelligence system, an obvious question for anyone who deals with the topic on a daily basis: “How can a company prevent reputational risks from using Artificial Intelligence?” Below is Bing's response to the specific question at the time it was asked:
"Companies can prevent reputational risks from using Artificial Intelligence (AI) by taking a few steps. One of them is transparency in the use of AI, that is, companies must be clear about the use of technology and how it is applied. Another measure is the adoption of efficient AI governance, which involves the creation of policies and procedures for the use of technology. In addition, companies must ensure that AI is used ethically and responsibly, avoiding the use of sensitive or discriminatory data. Finally, it is important that companies are prepared to deal with possible problems arising from the use of AI, such as technical failures or errors in the algorithms.”
In day-to-day practice, faced with responses like this, organizations still oscillate between the impossibility of ignoring a future that is already here and the damage this kind of technology can still cause, in the form of biases or distortions that are often hard to detect. That is why responses such as Bing's, compiled from what already exists on the internet, need human curation capable of catching these subtle threats.