Racing ahead with AI: A conversation with Aamod Sathe

Interview


Aamod Sathe

Director of Data Science at Meta

Srikanth Velamakanni

Co-founder, Group Chief Executive, and Vice-Chairman of Fractal

In today’s fast-moving AI world, the noise is deafening. OpenAI boasts hundreds of millions of users, Microsoft processes trillions of tokens, Google pushes AI-driven products, and Meta reaches billions through its platforms. But who’s really winning, and how should enterprises, startups, and nations position themselves?

To explore this, Srikanth Velamakanni, Group Chief Executive and Vice-Chairman of Fractal, spoke with Aamod Sathe, Director of Data Science, Enterprise Products at Meta. Their wide-ranging conversation covers the AI race, the limits of reasoning models, India’s role in the ecosystem, the future of data science careers, and why reinvention is the ultimate skill in the AI era.

Who's winning the AI race?

Srikanth: OpenAI recently announced that they have 800 million weekly active users. Microsoft said it processed 100 trillion tokens between January and March, with 50 trillion in March alone. Google reported 500 trillion tokens a month, 1.5 billion AI overviews each month, and 400 million active users on Gemini. Meta also shared that they have similar weekly active users. That’s a staggering amount of activity across the big four players. But it raises a natural question: who’s winning this AI race? 

Aamod: I don’t think there’s a single “winner.” Too often, people treat this as a zero-sum game, but the reality is more nuanced. Each major player has strengths that AI will only amplify. Microsoft has deep enterprise expertise; Google’s search and ad business is entrenched; Meta reaches massive consumer audiences; and OpenAI, being AI-native, has agility and early-mover advantage.

Ultimately, the real winners are users, enterprises, and consumers alike, who benefit as each company applies AI in its own unique way. This won’t be “winner takes all,” but rather intense competition where multiple players succeed by leveraging AI to boost their core strengths. 

Srikanth: But there’s another school of thought. The hypothesis is that once AI reaches a certain threshold, superintelligence or AGI, the leading player at that time, even if only marginally ahead, will dominate the rest of the market and capture all the economic surplus. In other words, extraordinarily powerful AI will capture all the value. Do you not believe in that? 

Aamod: Predicting the future is hard, so I avoid absolutes. I don’t think AGI will instantly create a winner-takes-all world. Even a consumer-focused AGI breakthrough wouldn’t guarantee enterprise dominance. Large incumbents have deep customer relationships, distribution channels, and trusted ecosystems. AI will enhance these, not erase them. 

Businesses that fall behind in AI risk being left behind, like Kodak with digital. But it’s unlikely a single AGI leap will let one organization capture everything. Competition will persist, and multiple organizations can succeed at different points along the AI curve. 

Srikanth: Fair enough. Still, if we take that Kodak analogy, do you see any company today that looks like a potential laggard? Maybe an overvalued player that could face disruption? 

Aamod: I’d avoid naming specific businesses, but I don’t see any large enterprise or major startup ignoring AI. Every big company I work with is investing, and most late-stage Bay Area startups are AI-native or have pivoted quickly. No one says, “I won’t participate.” 

Take automakers: Every U.S. seller now embeds AI in their vehicles. Tesla leads, but other manufacturers also have impressive AI-driven automation, especially given the complexity of edge computing. No clear “dinosaur” is standing still; everyone is moving, just at different speeds. 

Srikanth: Let’s talk about Apple. They’ve faced challenges launching AI products, and I saw an interesting paper by Apple researchers called The Illusion of Thinking. It argues that reasoning models hyped for logic, coding, and problem-solving aren’t as capable as they seem. When problems get complex, they fail, creating an “illusion of reasoning.” Other researchers have debated this, but it raises the question: Is AI truly making major progress, or are we heading toward another AI winter?

Aamod: Let’s start with the Apple paper. It categorizes reasoning problems by complexity: low, medium, and high, and finds that models do well on low and medium tasks but fail at high complexity. That matches what I’ve seen: multi-step logic problems often trip them up in the middle steps. 

I don’t disagree with Apple, but bigger models or more compute aren’t the only solution. Targeted, product-focused models (smaller, device-based, task-specific) can outperform large general-purpose models. That’s what excites me: the real breakthroughs come from how we use these models, not just from theory.

Srikanth: So, the Apple paper suggests LLMs handle low-complexity tasks, reasoning models manage medium complexity, and both struggle with high complexity. You’re saying the future isn’t just bigger models, but purpose-driven reasoning models designed to solve specific problems more effectively. 

Aamod: Exactly. They don’t need to be huge. Coding models, for example, aren’t massive like general LLMs but handle complex, multi-step coding tasks exceptionally well. Engineers rarely see them fail. The takeaway: reasoning models benefit from being custom-built. Size alone doesn’t ensure better reasoning; purpose and design matter more. 

Srikanth: This ties back to another trend I’ve noticed in the industry. Every time we discuss AI progress, scaling laws come up: models improve with more data, more compute, and better architectures. We’ve also seen big bets on data recently; Mark Zuckerberg’s large investment in Scale AI for annotation, for example. How do you interpret this? Are these bets really about data?

Aamod: I’ll speak broadly rather than about that specific investment. AI progress relies on three levers: compute, data, and algorithms. Compute has improved dramatically, and architectures like transformers were major leaps, with incremental advances continuing, though emerging architectures aren’t revolutionary yet. 

The real differentiation is data. In practice, I’ve seen the biggest gains come not from bigger models or tweaks in architecture, but from high-quality, well-annotated, and clean datasets. Data hygiene often makes the difference between a prototype and a production-ready system. 

Srikanth: It seems like improving data quality is not just the most important lever, but also the most cost-effective. Compared to compute or new architectures, investing in data seems like the best ROI. Would you agree? 

Aamod: Generally, yes. A friend of mine is building a startup focused on vision tasks in real estate. Most of his VC funding goes into acquiring and annotating domain-specific data. Compute costs are high; GPU rentals are no joke, but once you have high-quality annotated data, everything else becomes easier. So yes, dollar-for-dollar, data quality gives the biggest return in terms of product performance. It doesn’t just improve accuracy; it determines whether the product is genuinely useful for end users. 

Srikanth: Annotation itself has evolved. We’ve gone from labeling cats in images to much more complex tasks: code generation, app building, and even video annotation. Do you see this as a new frontier?

Aamod: Absolutely. Data creation and annotation have become highly sophisticated. And increasingly, AI is helping annotate AI, whether that’s text-to-image, image-to-text, or synthetic datasets to fill gaps. 

From a startup perspective, this is an enormous opportunity. And I think India, in particular, has a big advantage here. We have diverse data sources and a regulatory environment that can sometimes be more conducive to experimentation. Yet, Indian startups haven’t focused as much on data creation compared to building architectures. That’s where I see huge potential: India becoming a global hub for advanced data creation and annotation. 

India’s AI moment 

Srikanth: Let’s take a step back and talk about India. You mentioned that there may not be a single global “winner” in AI, as different organizations will likely succeed in parallel. But what about countries? If you had a billion dollars to invest, would you place that bet on the big tech enterprises, the so-called “Magnificent Seven,” or somewhere else? 

Aamod: Honestly, Srikanth, I’d bring that money to India. The next wave of Magnificent Sevens could easily come from outside the U.S. India is a prime candidate. Startups in Bengaluru, Mumbai, or smaller cities can build world-class AI for local and global markets. Thanks to AI, geography matters less than talent, creativity, and scale. That’s where I’d place my bets.

Srikanth: That’s fascinating. I was recently reflecting that if we exclude China, the rest of the world has approximately 6.5 billion people. And of that, India alone contributes nearly a quarter of the world’s digital exhaust, massive bandwidth usage, cheap data, and constant screen time. Doesn’t that give India a natural advantage in building AI applications? 

Aamod: Yes, but with a caveat. India generates huge amounts of data, but much of it is raw “data exhaust,” unstructured and not immediately usable. Real value comes from turning this into AI-native datasets: properly structured, annotated, and relevant for training. The future winners in India will be those who transform this digital activity into the “oxygen” AI models need to thrive. 

Srikanth: That raises an important policy and strategy question. There’s a narrative that India should be the “use case capital” of AI, developing applications for agriculture, education, healthcare, governance, and so on. Another narrative is that India should focus on building deep AI research capabilities. Let’s assume for a moment India positions itself as the “use case capital.” What are the most promising use cases today? 

Aamod: Great question. Many assume AI is only useful where labor is expensive, but India has its own opportunities. Agriculture faces labor shortages; AI for crop monitoring, automated irrigation, and yield prediction could be transformative. 

Infrastructure also offers monitoring and security applications beyond traffic. And for the government, AI in public services, defence, and governance could help India leapfrog, with the right investment in research and deployment. 

Srikanth: And what about healthcare and education? These are areas where AI could have a massive societal impact, especially in a country like ours. 

Aamod: Absolutely. Healthcare is already seeing promising AI applications. Fragmented medical records in India are being cleaned and standardized, helping doctors make faster, more accurate diagnoses. AI-assisted care for aging parents is emerging, but it is still early. The key is product-first thinking: start with real problems, not just cool models. 

Education is trickier; learning outcomes matter, so AI shortcuts can backfire. Startups should focus on sectors like healthcare first before tackling education, where there’s no clear playbook yet. 

Srikanth: That’s a fair point. Education isn’t just about outcomes; it’s about the process of comprehension and skill development. If AI answers too easily, it can actually weaken learning. But you also mentioned the government’s role. Beyond funding, what can the Indian Government do to accelerate AI? 

Aamod: Funding helps, of course, but it’s not the only lever. What India truly needs is supportive frameworks, clear legal guidelines, easier pathways for startups to scale, regulatory clarity regarding data usage, and policies that foster global collaboration. 

If the government can make it easier for startups to innovate compliantly and reach scale, India’s AI ecosystem could leapfrog dramatically. AI for governance, AI for public services, and AI for defence are all urgent areas where government involvement could make a massive difference.

Redefining careers in data science 

Srikanth: Over the last 20 years, you’ve hired and led data science teams at some of the world’s most influential organizations. Let me ask you this directly: what makes a data scientist successful today? 

Aamod: The recipe for success has evolved. Once, it was all about statistical skills and programming (Python, especially), plus product orientation: tying insights to outcomes. In the AI era, the role is broader. Data scientists must now work closely with engineers. Insights and models matter, but scaling them into real products requires engineering collaboration. The defining skill today is bridging data science and engineering, understanding model design, scale, and deployment as much as statistics.

Srikanth: We could call this the “industrialization” of AI. Twenty years ago, datasets were small, structured, even Excel-manageable, with classical stats as the default. Today, datasets are massive, multimodal, and unstructured, with machine learning as the norm and scale the expectation, which is why data scientists now need a much tighter partnership with engineers. 

Aamod: Exactly. In the old days, engineers and data scientists could work in silos. Today, the line between them has blurred. Modern AI products intertwine model architecture and product architecture. Neither data scientists nor engineers can succeed alone; they must collaborate tightly. This is why we’re seeing the rise of the “AI engineer” profile. Sometimes it’s one person who has both skill sets; sometimes it’s a tightly integrated team. Either way, the old silos are gone. 

Srikanth: But given the rise of APIs and foundation models, couldn’t an engineer bypass a data scientist entirely? For example, call an API, run an LLM, and get answers directly? 

Aamod: That’s a common assumption, but it doesn’t hold up in practice. Yes, APIs and open-access models like those on Hugging Face make it easy to get a quick first-pass solution. But that’s just the raw starting point. 

In production systems, you need much more: validation, experimentation, statistical significance testing, and refinement. Without those, you risk catastrophic errors. Data scientists bring that rigor. So no, APIs don’t eliminate the need for data scientists. They make collaboration more efficient, but they don’t replace expertise. 

Srikanth: And now we’re also seeing the rise of AI agents. Some can already participate in Kaggle competitions end-to-end: download the dataset, build the model, and even submit automatically. If agents can do all that, what happens to data science careers? 

Aamod: Agents are fascinating, but let’s keep them in context. We’ve had software manipulating data for years; what’s new is today’s scale and flexibility. Agents won’t replace data scientists, but they’ll handle grunt work, such as cleaning data, feature engineering, and initial model development, boosting productivity tenfold. The irreplaceable part is problem-framing: spotting hidden data issues, translating them into business terms, and reframing them as solvable challenges. That judgment will remain human. 

Srikanth: So, the heaviest lift is listening to users, framing the right problem, and defining it in a way that lends itself to experimentation. Once that’s done, the rest (data prep, algorithms, validation) can be heavily automated.

Aamod: Exactly. And even there, AI plays a role in augmentation. Instead of testing one approach, I can now try fifty different techniques in parallel. That saves time and often produces a better answer. But yes, the essence is this: if you’re not solving a real user problem, then all the data science in the world becomes an academic exercise. Agents can accelerate workflows, but they can’t replace problem framing. 

Srikanth: That brings me to a bigger, perhaps existential question. If we look ahead five years, is data science still a good career choice? Will there be enough jobs, or will AI shrink opportunities? 

Aamod: I’d argue the opposite: data science is one of the best careers for the next three to five years. Yes, there’s fear about AI taking jobs, but if someone told me they were pursuing data science, I’d cheer. At its core, AI is algorithms, compute, and data, and combining algorithms with data is exactly what data scientists do. The scale will grow, complexity will rise, and their expertise will only matter more. The key is using AI tools to amplify impact, not fearing them.

Srikanth: I call this the “serial expert” model. Careers aren’t 40 years in one skill anymore; they’re five to ten years in one area before you reinvent yourself. With AI, that cycle may shrink to two or three years. So, the real skill isn’t just technical; it’s adaptability: diving deep into a new domain in six to twelve months, creating value for a few years, then moving on. Like a serial entrepreneur, but as a serial expert. Do you agree?

Aamod: I don’t just agree; I think it’s essential. With AI, career lifespans are shrinking from decades to just a few years before you need to retool. The upside: AI accelerates reinvention. With a technical background, you can get proficient in months; I’ve seen people go from zero to products in weeks. So yes, the “serial expert” model fits: master X today, pivot to Y tomorrow, and use AI as the multiplier to stay relevant.

Artificial Intelligence 101 

Srikanth: Aamod, everyone talks about large language models, but for many leaders, it’s still abstract. Can you break it down simply: what are LLMs, and how do they work?

Aamod: At the simplest level, LLMs are systems trained on enormous amounts of text: books, articles, blogs, tweets. They don’t “think” like humans; they predict the next word based on patterns they’ve learned.

Srikanth: So, like a 12-year-old who has read every book, recognizing patterns, but not really understanding? 

Aamod: Exactly. They don’t comprehend in a human sense, but the pattern recognition is so nuanced it feels intelligent. 

Srikanth: Let me try a simple analogy. Say we mask one word in a sentence: “India is a beautiful country in ___ Asia.” The missing word is “South.” The model guesses, gets corrected, and repeats this billions of times until it gets very good at predicting. Is that essentially how it learns? 

Aamod: That’s right. It’s called masked language modelling; guess, correct, repeat until the associations become strong. At scale, this creates a surprising depth of knowledge, even though it’s just probability. 

Srikanth: And when it chooses “South” over “West,” it’s running probabilities across millions of options, right? 

Aamod: Correct. It evaluates grammar, logic, and all the patterns it has seen. “West Asia” makes sense, but “India in South Asia” is statistically more common, so that wins. And the “large” in LLMs means it can consider a broader context: entire paragraphs, not just nearby words.
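
Srikanth’s masked-word example can be reproduced in a few lines. The sketch below assumes the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint (a small masked-language model, used here purely for illustration):

```python
# Minimal sketch of the "guess the masked word" game described above,
# assuming the transformers library and bert-base-uncased are available.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Mask one word and let the model rank candidate fills by probability.
for candidate in fill_mask("India is a beautiful country in [MASK] Asia."):
    print(f"{candidate['token_str']:>10}  p={candidate['score']:.3f}")
```

The point of the exercise is exactly what the conversation describes: the model is not “choosing” a fact, it is ranking continuations by how probable they are given everything it has seen.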

Srikanth: What about fine-tuning? I’ve heard about SFT—supervised fine-tuning, where you feed the model thousands of Q&A pairs. Is that like a cheat sheet? 

Aamod: In a way. Sometimes you change the model’s weights; other times you just give it external knowledge it can search before answering. Both make the model more specialized without retraining it from scratch. 

Srikanth: And then there’s RLHF—reinforcement learning from human feedback, where people rank multiple answers and the model learns our preferences. 

Aamod: Exactly. Pre-training gives general knowledge. Fine-tuning and RLHF align the model with human expectations, making it more helpful and less biased. And new techniques like retrieval-augmented generation keep pushing this further. The beauty is, you no longer need a PhD to try these approaches; anyone can experiment and build on top of them. 
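
As a toy illustration of the “search external knowledge before answering” idea Aamod mentions (retrieval-augmented generation in its simplest form), the sketch below uses TF-IDF similarity in place of a real embedding model; the three documents are taken from this article, and the final LLM call is left as a comment rather than invented:

```python
# Retrieve the passage most similar to a question, then prepend it to the
# prompt. TF-IDF stands in for a real embedding model in this toy sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Fractal grew from a five-person startup into India's first AI unicorn.",
    "Masked language modelling hides a word and trains the model to guess it.",
    "RLHF has people rank multiple answers so the model learns human preferences.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question, k=1):
    """Return the k documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

question = "What does RLHF do?"
context = "\n".join(retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be passed to an LLM of your choice
```

The model’s weights never change; it simply gets better input, which is why this route is often cheaper than fine-tuning.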

Applied data science in practice 

Srikanth: Let’s make this concrete. Say I want to build a model to predict who might default on a credit card loan. What does a data scientist actually do, step by step? 

Aamod: First, frame the problem: it’s a classification task, will this customer default, yes or no? Next, look at the data: income, employment, debt, spending habits, plus past defaults. Prep it: clean, handle missing values, engineer features (e.g., group ages into ranges). Then test models: logistic regression, decision trees, boosting, neural nets; train on past data, validate on new, and see if they outperform the current system. Finally, deploy into production, like a bank’s credit scoring pipeline, and monitor as new data flows in.
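
A minimal sketch of that workflow in code, using scikit-learn on a hypothetical customers.csv; the file name and columns (income, debt, employment_years, age, defaulted) are illustrative, not a real bank schema:

```python
# Sketch of the default-prediction workflow: clean, engineer features,
# try more than one model family, and validate on held-out data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customers.csv")             # hypothetical historical data
df = df.dropna(subset=["income", "debt"])     # basic cleaning
df["age_band"] = pd.cut(df["age"], bins=[18, 30, 45, 60, 100])  # feature engineering

features = pd.get_dummies(df[["income", "debt", "employment_years", "age_band"]])
target = df["defaulted"]                      # 1 = defaulted, 0 = repaid

# Train on past data, validate on data the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, stratify=target, random_state=42
)

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(type(model).__name__, "AUC:", round(auc, 3))
```

The last step Aamod describes, deployment and monitoring, happens outside this script: the winning model is wired into the bank’s scoring pipeline and re-evaluated as new repayment data arrives.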

Srikanth: So, the model is essentially capturing signals from past behavior and turning them into predictions. Then you refine and iterate until it performs better than the existing process. 

Aamod: Exactly. The benchmark isn’t 100% accuracy; it’s “better than before.” If the old system was right 70% of the time and the model is right 80%, that’s valuable. And with today’s AI tools, you can run more experiments faster and reach that point much quicker. 

Srikanth: And this is why you need proper data science rigor. If you call an API, you won’t know whether the results are statistically valid. 

Aamod: Correct. Prototypes are easy, but scaling to production demands validation experiments, statistical significance, and error analysis. In business, it’s not enough to know that something works; you need to know why it works and when it fails. That’s the real role of data science. 
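
To make the “statistically valid” point concrete with hypothetical numbers: suppose the legacy system got 700 of 1,000 holdout cases right and the new model got 800 of 1,000. A two-proportion test (here via statsmodels) is one simple way to check whether that 10-point gap could plausibly be chance:

```python
# Hypothetical holdout results: is the new model's 80% vs the legacy 70%
# a real improvement or noise at this sample size?
from statsmodels.stats.proportion import proportions_ztest

correct = [800, 700]   # successes for (new model, legacy system)
total = [1000, 1000]   # holdout cases scored by each

stat, p_value = proportions_ztest(count=correct, nobs=total)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the gap is unlikely to be a fluke;
# with only 50 cases per system, the same gap might not be significant.
```

This is the kind of rigor Aamod is pointing at: an API call gives you a number, but not the evidence that the number will hold up in production.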


Aamod Sathe

Director of Data Science at Meta

Aamod Sathe is a seasoned AI and data science executive with over 20 years of experience driving data-driven decision-making and innovation. His expertise spans multiple industries, with a proven track record of building high-performance teams, fostering a culture of experimentation, and delivering significant revenue growth for global organizations. His skills include machine learning, data-driven decision-making, market sizing and segmentation, KPI definition and tracking, product analytics, and team leadership. Aamod has a passion for solving complex problems and creating impactful AI-driven products.

Srikanth Velamakanni

Co-founder, Group Chief Executive, and Vice-Chairman of Fractal

Srikanth Velamakanni is one of India’s most influential technology leaders and a global champion of artificial intelligence. He is the Co-Founder and Group Chief Executive of Fractal, which he helped grow from a five-person startup into India’s first AI unicorn. Srikanth is also the Vice-Chairman of Nasscom and a founder & Trustee at Plaksha University. With a career defined by vision, resilience, and long-term thinking, Srikanth stands as a pioneer dedicated to building a smarter, more inclusive, and more humane future.