Responsibility is everyone’s responsibility:
How intelligent accountability is ensuring a full and active role for AI in our society.
Picture the scene: A user-focused organization is excited to employ artificial intelligence (AI) to help it engage with customers, work smarter, and make better decisions. It implements technology that can analyze and learn from large volumes of data, serve content tailored to users’ interests and behavior, and make accurate predictions about business needs. But just a few months later, those good intentions have gone awry as the organization’s use of AI has brought unintended, negative consequences. Without sufficient awareness of the responsibilities attached to AI, this is a clear and present risk.
Unfortunately, research shows that many companies are not as savvy about Responsible AI (RAI) as they think they are. For example, Boston Consulting Group found that while 35% of organizations believe they have fully implemented an RAI program, only 16% have actually reached maturity. So, what needs to happen to change this picture? Ultimately, it’s a case of putting practicable RAI principles at the heart of every AI project.
“If you see RAI as separate from AI, as something that you can implement later on, then you are already on the back foot,” said Sray Agarwal, principal consultant at Fractal and co-author of Responsible AI: Implementing Ethical and Unbiased Algorithms. “Anything you do in AI must have responsibility, from the initial thinking about the project. If it’s too late for that, then it’s imperative that RAI is implemented before a major reputational event happens.”
Building responsibility into AI involves work on several levels, from shaping national and industry regulations to providing the guidance and tools that make RAI work for individual businesses.
AI is ever-present in everyday life. The world needs guidelines for responsible and ethical AI that contribute to the benefit of humanity, ensure a safe and equitable digital future, and support the achievement of the Sustainable Development Goals.
Associate Information Management Officer at the United Nations Centre for Trade Facilitation and Electronic Business, United Nations Economic Commission for Europe
Rapid progress is being made at the regulatory level, with new RAI laws set to be implemented in China, India, the UK, the US and beyond over the coming years. As a prime mover behind AI and its application, the technology industry is taking responsibility, both in helping to shape those regulations and in enabling organizations to comply.
At Fractal, for example, this commitment has involved working with the governments of India, New York state, Switzerland and others to help shape RAI policy and regulations which will ultimately be implemented across thousands of companies. In parallel, it is collaborating with the United Nations to produce white papers that will help leaders of different countries to ensure RAI doesn’t introduce conflicting laws and policies that affect cross-border trade.
As we increase our reliance on algorithms to automate tasks and augment human decisions, it is even more crucial that we hold ourselves and our systems to the highest ethical standards. This means ensuring that our AI systems are designed and implemented in a fair, transparent, and accountable way. I recommend that companies beyond a certain scale must set up an AI Ethics Committee that applies Responsible AI principles to practical situations and guides the organization.
Co-founder, Group Chief Executive & Vice Chairman
Within its own industry, Fractal is working with IT industry bodies like the National Association of Software and Service Companies (NASSCOM) to build awareness of the issues. One example of this is the Responsible AI Hub and Resource Kit, which was launched by NASSCOM as part of the government’s Digital India initiative. It provides free resources for IT players in India to benchmark their RAI maturity and access practicable guidance and tools for improving it.
While high-level regulation promises to build a strong foundation for RAI, there is also a lot of activity on the ground. Increasingly, organizations across different industries want to ensure they are already compliant with RAI principles before they become law. To do that, they need help understanding what RAI should look like for their company and its industry.
Companies building, deploying, or sourcing AI solutions must realize the integral role ethics has come to play (and rightly so) in ensuring long-term business sustainability. The growth and scaling prospects that AI presents for businesses are truly unprecedented; yet, to effectively realize any of these prospects, industry actors must tread the path of AI adoption with an uncompromising commitment to user trust and safety. We at NASSCOM are driving efforts to help the industry develop its shared commitment and capacity for delivering AI solutions in an ethical, trustworthy, and inclusive manner.
Sr. Vice President and Chief Strategy Officer
“Organizations, especially in regulated industries like financial services and healthcare, are addressing RAI practices proactively because they want to be ahead of the policymakers and have those structures in place when regulations come into force,” said Akbar Mohammed, head of the responsible AI practice at Fractal. “In healthcare for example, RAI can be a matter of life and death. Those organizations cannot wait for a clear legal and policy framework to be put in place, so they are looking for ways to develop and operationalize their own RAI framework and governance structure.”
Leaders in the AI space have already developed their RAI practices and are sharing their experiences with others. Fractal, for instance, has established frameworks, toolkits, and training courses to ensure all its employees understand RAI and know how to practice it. Now, it is helping its Fortune 500 clients to frame their problems and put their RAI structures and governance mechanisms in place. From big pharmaceutical companies incorporating some of Fractal’s RAI practices into their internal responsibility practices to the growing number of consumer goods companies looking for help to establish their RAI frameworks, this work touches all industries.
It is a diverse picture in which each industry, and the individual organizations within it, requires a very different approach to make RAI work.
“There is no one-size-fits-all solution to RAI,” Agarwal said. “A financial services business, for instance, needs RAI tools for detailed fairness and explainability, which ensures that humans can understand the decisions and predictions their AI makes. Meanwhile, a healthcare organization requires a focus on privacy and monitoring. It’s different again for supply chains, where explainability and monitoring are high on the list. To be effective, RAI tools need to be very customizable, usable, and developed in tune with businesses’ needs. They must enable organizations to address industry-specific and business-specific RAI issues in a way that is both industry ready and implementable.”
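To make the fairness tooling mentioned above concrete, here is a minimal sketch of one check such a toolkit might include: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The function name, data, and group labels are illustrative assumptions, not part of any specific Fractal toolkit.

```python
# Minimal fairness-check sketch: demographic parity difference.
# The loan-approval data below is purely illustrative.

def demographic_parity_difference(predictions, groups, positive=1):
    """Gap in positive-outcome rates between the two groups present."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in selected if p == positive) / len(selected)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Example: loan approvals (1 = approved) across two applicant groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A real RAI toolkit would track several such metrics (equalized odds, calibration, and so on) and surface them in monitoring dashboards, but the principle is the same: reduce a fairness question to a number that can be watched over time.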
This is about much more than developing an algorithm. To develop and implement effective RAI, people need help to visualize the issues and search out answers. That requires a human-centered design approach to problem framing that encompasses surveys, templates, project planning methodologies, ways of working, guidelines, training programs and case studies.
“Asking questions that are structured from a gamification, behavioral science point of view is an effective way to direct stakeholders towards the issues they should be reflecting on,” said Sagar Shah, client partner at Fractal Dimension. “Those questions are often quite blunt. For example, we might ask if a stakeholder has taken consent from the people whose data they are using. If they have simply assumed that consent, it may come back to bite them – so how will they document this activity so it can be audited in a few years’ time? If the stakeholder can’t answer a question, then exploring the available guidebooks along with real-life examples and case studies will help them find the solution.”
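The documentation question Shah raises can be answered with very little machinery. As a hypothetical sketch (the field names and storage format are assumptions, not a standard schema), a team could append each consent decision to an audit log so it can be reviewed years later:

```python
# Hypothetical sketch: recording a consent decision so it can be audited
# later. Field names and the JSON-lines log format are illustrative.
import json
from datetime import datetime, timezone

def record_consent_check(dataset, basis, evidence, reviewer,
                         log_path="consent_audit.jsonl"):
    """Append one auditable consent record as a JSON line and return it."""
    entry = {
        "dataset": dataset,
        "legal_basis": basis,     # e.g. "explicit consent", "contract"
        "evidence": evidence,     # where the consent artefact is stored
        "reviewer": reviewer,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

record_consent_check(
    dataset="customer_clickstream_2024",
    basis="explicit consent",
    evidence="crm://consent-forms/2024-batch-7",
    reviewer="data.governance@example.com",
)
```

The point is not the format but the habit: assumed consent that was never written down is exactly the kind of gap an audit will expose.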
Ultimately, RAI needs to allow flexibility for innovation while providing the checks and balances for organizations to consider the effects of what they are trying to create. The toolkits emerging today enable people to develop and operationalize RAI, and that is a crucial first step. The next challenge will be to ensure that these methods are adopted within every organization – and that will require components that have been developed with an understanding of how business leaders and stakeholders operate with AI today. When RAI is relevant to the work people do on a day-to-day basis, it will also become integrated into the life cycle of every AI project.