You need to have balance in your pipeline. What I mean by that is sometimes you need to deliver short-term wins to earn the right to go after long-term strategic goals. You want to enable rapid experimentation, demonstrate your capability and brand, and create attractive financial returns in an area important to the organization.
What’s important to realize is that even if you have long-term initiatives in your pipeline, it’s still essential to break them down into short-term milestones to demonstrate measurable progress. In Decision Analytics, we break down our initiatives into two- or three-week sprints.
It’s also important to recognize that urgent requests will always come up – these need to be balanced carefully against the long-term ambitions of the AI project. We have a separate ‘SWAT’ team designed to go after these urgent, short-term requests and enable rapid experimentation. This, combined with our close collaboration with multiple partners, gives us the scalability and flexibility to meet demands rapidly.
The businesses we support have varying degrees of maturity across many different industries. Some are much more sophisticated than others; some employ data scientists, others don’t. This means we need an agile operating model to adapt, because there is no one-size-fits-all approach. There is one common thread that features in every project we work on, though: the need to connect to business strategy and business outcomes first, which drives the initiatives we focus on. We have engagement leaders aligned by business and functional area who identify how advanced analytics can help accelerate the vision and strategy of a particular business. They’re also responsible for translating what we can deliver into a language the business understands.
One of the misconceptions is that most value comes from the minimum viable product (MVP) development. So, you’ve got a problem, and you build an AI solution to solve it. But this is the easier part. In my opinion, the last mile generates most of the value: the adoption phase, integrating the insights into the business process and scaling and sustaining them over time.
It’s also important to involve end users in the AI development process from the very beginning, starting at the ideation and scoping stages. If you don’t do this, the danger is that you build a solution that no one wants or uses, wasting time and money.
To prevent value erosion, we need the ability to monitor what’s put into production. So, after production, there’s a crucial step: monitoring the outcomes. We have a separate team that monitors business outcomes while keeping a close eye on data quality and the performance of models. If we are not achieving business outcomes, then there’s a problem. We need to pick up on business process changes before they erode value and identify when something isn’t working.
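As a rough sketch of what that kind of monitoring can look like, the checks below compare a model’s recent error against its baseline and scan incoming rows for missing required fields. The function names, thresholds, and data shapes are illustrative assumptions, not a description of our actual tooling.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class MonitorResult:
    metric: float   # the measured value (mean error, or count of missing fields)
    alert: bool     # True if the check should page someone
    reason: str


def check_outcomes(recent_errors, baseline_error, tolerance=0.10):
    """Alert if recent mean error exceeds the baseline by more than the tolerance."""
    current = mean(recent_errors)
    drifted = current > baseline_error * (1 + tolerance)
    return MonitorResult(
        metric=current,
        alert=drifted,
        reason="error above baseline tolerance" if drifted else "within tolerance",
    )


def check_data_quality(rows, required_fields):
    """Alert if any required field is missing or null in the incoming data."""
    missing = [f for f in required_fields for r in rows if r.get(f) is None]
    return MonitorResult(
        metric=float(len(missing)),
        alert=bool(missing),
        reason="missing values detected" if missing else "ok",
    )
```

Checks like these would typically run on a schedule against each production model, so that a change in the upstream business process shows up as a data-quality or outcome alert rather than as a quiet decline in value.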
These are important considerations. Say you wanted to create a model to detect whether a student enrolled in a data science program will get an A. If the class consists of 90% boys, the model is likely to learn that males are more likely to succeed, not because of any real relationship but because of bias in the population sample. So, it’s important to remove the demographic feature that is causing the bias. This is why it’s essential to have humans involved so that they can highlight these sorts of issues. We also follow a tried-and-tested framework developed by the Institute of Ethical AI and Machine Learning.
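A toy illustration of the point, using a fabricated enrollment sample (the numbers are invented for demonstration only): with a 90/10 gender split, a naive frequency-based model would pick up gender as a predictive signal, and the simplest mitigation is to drop that demographic feature before training.

```python
# Hypothetical, invented sample: 90 boys and 10 girls, with A-rates that
# differ only because the sample is skewed, not for any causal reason.
students = (
    [{"gender": "M", "grade": "A"}] * 63
    + [{"gender": "M", "grade": "B"}] * 27
    + [{"gender": "F", "grade": "A"}] * 5
    + [{"gender": "F", "grade": "B"}] * 5
)


def a_rate(rows, gender=None):
    """P(grade == 'A'), optionally conditioned on a gender group."""
    subset = [r for r in rows if gender is None or r["gender"] == gender]
    return sum(r["grade"] == "A" for r in subset) / len(subset)


# A frequency-based model trained on this sample would associate being
# male with getting an A, purely because of the skewed sample.
male_rate = a_rate(students, "M")    # 0.70
female_rate = a_rate(students, "F")  # 0.50

# Mitigation sketch: drop the demographic feature before training.
debiased = [{k: v for k, v in r.items() if k != "gender"} for r in students]
```

Dropping the column is only the crudest fix; this is exactly the kind of issue a human reviewer needs to spot, since the model itself has no way of knowing the correlation is an artifact of who happened to enroll.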
Generative AI seems to have attracted more attention than anything else I’ve seen in the realm of AI, so we are using that as a catalyst to facilitate conversations around more general AI. Because what most people don’t realize is that generative AI seldom operates in isolation. Don’t be surprised if it only makes up 10% of a project; the other 90% is traditional AI.
Businesses also need to consider the privacy and security of their data when employees start using generative AI solutions. For example, third-party managed platforms can provide a secure tenant in a controlled environment. In that setup, prompts entered into the solution are not shared for the training of the model, which means we can use it securely. That said, if you employ 30,000 people, you can’t stop them from entering information into a generative AI solution from their phone, for example. So, the best thing we can do in this situation is to educate people about the dangers of entering private data in a public environment.
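Education can be backed by simple guardrails. Below is a minimal sketch of a client-side check that flags obvious private data in a prompt before it leaves a managed environment; the pattern list is a small illustrative assumption, not an exhaustive PII detector, and the function name is hypothetical.

```python
import re

# Illustrative patterns only; a real deployment would use a proper
# PII-detection service rather than a handful of regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def flag_private_data(prompt: str) -> list[str]:
    """Return the categories of private data detected in a prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]
```

A check like this can warn the user, or block the request, before a prompt containing private data reaches a public generative AI service, though it can never replace educating people about what not to paste in.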