Picture the scene: A user-focused organization is excited to employ Artificial Intelligence (AI) to help it engage with customers, work smarter, and make better decisions. It implements technology that can analyze and learn from large volumes of data, serve content tailored to users’ interests and behavior, and make accurate predictions about business needs. But just a few months later, those good intentions have gone awry as the organization’s use of AI has brought unintended, negative consequences. This is a clear and present risk for any organization without sufficient awareness of the responsibilities attached to AI.
Unfortunately, research shows that many companies are not as savvy about Responsible AI (RAI) as they think they are. For example, Boston Consulting Group found that while 35% of organizations believe they have fully implemented an RAI program, only 16% have reached maturity. So, what needs to happen to change this picture? Ultimately, it’s a case of putting practicable RAI principles at the heart of every AI project.
“If you see RAI as separate from AI, as something that you can implement later on, then you are already on the back foot,” said Sray Agarwal, principal consultant at Fractal and co-author of Responsible AI: Implementing Ethical and Unbiased Algorithms. “Anything you do in AI must have responsibility built in, from the initial thinking about the project. If it’s too late for that, then RAI must at least be implemented before a major reputational event happens.”
Building responsibility for AI involves work on several levels, from shaping national and industry regulations to providing the guidance and tools that make RAI work for individual businesses. “AI is ever-present in everyday life,” said Kevin Bishop, an associate information management officer at the United Nations Centre for Trade Facilitation and Electronic Business, United Nations Economic Commission for Europe. “The world needs guidelines for responsible and ethical AI that contribute to the benefit of humanity, ensure a safe and equitable digital future, and achieve the Sustainable Development Goals.”
Rapid progress is being made at the regulatory level, with new RAI laws set to be implemented in China, India, the UK, the US, and beyond over the coming years. As a prime mover behind AI and its application, the technology industry is taking responsibility, helping shape those regulations and enabling organizations to comply.
At Fractal, this commitment has involved working with the governments of India, New York, Switzerland, and others to help shape RAI policy and regulations that will ultimately apply to thousands of companies. In parallel, Fractal is collaborating with the United Nations to produce whitepapers that will help the leaders of different countries ensure RAI doesn’t introduce conflicting laws and policies that affect cross-border trade.
Within its industry, Fractal is working with IT bodies like the National Association of Software and Service Companies (NASSCOM) to build awareness of the issues. One example is the Responsible AI Hub and Resource Kit, which NASSCOM launched as part of the government’s Digital India initiative. It provides free resources for IT players in India to benchmark their RAI maturity and access practicable guidance and tools for improving it.
While high-level regulation promises to build a strong foundation for RAI, there is also a lot of activity on the ground. Increasingly, organizations across different industries want to ensure they are already compliant with RAI principles before they become law. To do that, they need help understanding what RAI should look like for their company and its industry.
“Organizations, especially in regulated industries like financial services and healthcare, are addressing RAI practices proactively because they want to be ahead of the policymakers and have those structures in place when regulations come into force,” said Akbar Mohammed, principal consultant, Fractal Dimension. “In healthcare, for example, RAI can be a matter of life and death. Those organizations cannot wait for a clear legal and policy framework to be implemented, so they are looking for ways to develop and operationalize their own RAI framework and governance structure.”
Leaders in the AI space have already developed their RAI practices and are sharing their experiences with others. Fractal, for instance, has established frameworks, toolkits, and training courses to ensure all its employees understand RAI and know how to practice it. Now, it is helping its Fortune 500 clients frame their problems and put their RAI structures and governance mechanisms in place. From big pharmaceutical companies incorporating some of Fractal’s RAI practices into their internal responsibility programs to the growing number of consumer goods companies looking for help establishing their RAI frameworks, the work touches all industries.
It is a diverse picture in which each industry and individual organization requires a different approach to make RAI work.
“There is no one-size-fits-all solution to RAI,” Agarwal said. “A financial services business, for instance, needs RAI tools for detailed fairness and explainability, which ensures that humans can understand the decisions and predictions their AI makes. Meanwhile, a healthcare organization requires a focus on privacy and monitoring. It’s different again for supply chains, where the need for explainability and monitoring is high. To be effective, RAI tools need to be customizable, usable, and developed in tune with businesses’ needs. They must enable organizations to address industry-specific and business-specific RAI issues in a way that is both industry-ready and implementable.”
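To make the fairness checks Agarwal describes concrete, here is a minimal sketch of one metric such a toolkit might compute: the demographic parity difference, i.e., the gap in positive-outcome rates between two groups. The function name, data, and group labels are illustrative assumptions, not part of Fractal’s actual toolkit.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Return the absolute gap in positive-outcome rates across groups.

    outcomes: model decisions (e.g. 1 = loan approved, 0 = denied)
    groups:   group label for each decision (e.g. "A", "B")
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in decisions if o == positive) / len(decisions)
    # For two groups, the difference between the highest and lowest rate
    return max(rates.values()) - min(rates.values())

# Illustrative example: approvals skewed toward group A
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(outcomes, groups)
# Group A approval rate = 0.75, group B = 0.25, so gap = 0.5
```

A production RAI toolkit would of course go much further, covering explainability reports, consent auditing, and ongoing monitoring, but even a simple gap metric like this gives stakeholders a number they can set thresholds against and track over time.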
This is about much more than developing an algorithm. To develop and implement effective RAI, people need help to visualize the issues and find answers. That requires a human-centered design approach to problem framing encompassing surveys, templates, project planning methodologies, ways of working, guidelines, training programs, and case studies.
“Asking questions structured from a gamification, behavioral science point of view is an effective way to direct stakeholders towards the issues they should be reflecting on,” said Sagar Shah, Client Partner, Strategic Center. “Those questions are often quite blunt. For example, we might ask if a stakeholder has taken consent from the people whose data they are using. If they have assumed that consent, it may come back to bite them – so how will they document this activity so it can be audited in a few years? If a stakeholder can’t answer a question, exploring the available guidebooks, real-life examples, and case studies will help them find the solution.”
Ultimately, RAI needs to allow flexibility for innovation while providing checks and balances for organizations to consider the effects of what they are trying to create. Today’s toolkits enable people to develop and operationalize RAI, which is a crucial first step. The next challenge will be to ensure that these methods are adopted within every organization. This will require components developed with an understanding of how business leaders and stakeholders work with AI today. When RAI is relevant to the work people do on a day-to-day basis, it will also become integrated into the life cycle of every AI project.