Not everything “Powered by AI” is

June 27, 2025
When Builder.ai burst onto the scene, it was hailed as a revolutionary AI-powered no-code platform designed to simplify app development. Clients could chat with their sleek virtual assistant, Natasha, describe the app they wanted to build, and almost magically receive a functioning application. Investors bought the dream: the company raised over $450 million and at its peak reached a valuation of $1.5 billion. But behind this AI-powered façade lay a duller reality. Instead of cutting-edge automation, hundreds of human engineers in India posed as the AI assistant in real-time conversations, and manually wrote the code behind the scenes.
Builder.ai is not an isolated case; it is a high-profile example of a growing trend in the tech industry: AI washing, where companies exaggerate or misrepresent the extent to which their products rely on artificial intelligence. This matters because the real promise of AI is immense. As a general-purpose technology, AI has the potential to transform economies and societies. It is projected to contribute up to $15.7 trillion to global GDP by 2030, reshaping everything from healthcare and education to agriculture and financial services. But that promise is undermined when smoke and mirrors dominate the conversation. As we stand at the brink of the AI era, separating the wheat from the chaff is not just important, it’s urgent. For investors, especially, distinguishing genuine innovation from marketing hype is critical to ensuring capital flows to technologies that truly move the needle.
What Is AI Washing?
AI washing refers to the practice of companies misleading consumers, investors, or the public about the extent or nature of artificial intelligence in their products or services. AI washing takes many forms. Companies may claim to use AI when they don’t, exaggerate the capabilities of their systems, or fail to substantiate their claims. In a highly competitive market, the pressure to appear AI-enabled has led to inflated claims that have misled investors, regulators, and consumers alike.
This phenomenon has grown alongside a global surge in AI investment. According to UNCTAD’s 2025 Technology and Innovation Report, India ranked 10th worldwide, attracting $1.4 billion in private AI funding. Public markets are similarly swept up in the hype: according to Goldman Sachs, over 36% of S&P 500 companies referenced AI in earnings calls, and 108 specifically mentioned generative AI in regulatory filings. The “AI-powered” label has become a powerful marketing tool, often accepted at face value due to limited technical scrutiny by consumers and investors.
But the consequences are serious. AI washing inflates valuations, misdirects funding, and crowds out truly innovative work. Many products marketed as intelligent are little more than rule-based systems with limited capabilities. As a result, inflated expectations give way to disillusionment, eroding public trust in AI technologies. Infosys founder Narayana Murthy recently captured this concern, warning at TiECon Mumbai 2025: “This fashion of labelling ordinary software as AI is misleading. Many so-called AI solutions are just traditional programs with fancy tags.”
This credibility gap is already visible in India’s startup ecosystem. According to Tracxn, funding for AI startups fell 53% year-on-year from $305.9 million in FY 2023–24 to $143.6 million in FY 2024–25, with a 44% drop in deal volume. Some venture capitalists estimate that as many as 30% of startups exaggerate their AI claims, while others place the pitch-to-reality gap at 60–70%. This mismatch makes it harder for genuine, research-driven startups to attract capital and for regulators to assess real-world risks.


How Regulators Are Tackling AI Washing
Regulators around the world are beginning to take note of the growing trend of AI washing and are stepping up enforcement efforts to curb misleading claims. In March 2024, the U.S. Securities and Exchange Commission (SEC) announced settled charges against two investment advisers, Delphia (USA) Inc. and Global Predictions, Inc., for overstating the role of artificial intelligence and machine learning in their investment services. The firms were fined $225,000 and $175,000, respectively, marking one of the first formal acknowledgments of “AI washing” by a major U.S. regulator.
The Federal Trade Commission (FTC) has launched a crackdown of its own. Under its 2024 initiative Operation AI Comply, the FTC brought enforcement actions against five companies for deceptive practices related to AI. The campaign targets firms that make inflated or unverifiable AI-related claims, emphasizing that AI marketing must adhere to the same truth-in-advertising standards as any other product or service.
In the UK, the Advertising Standards Authority (ASA) has issued specific guidance cautioning companies against exaggerating AI capabilities, and has already ruled against campaigns such as a sponsored Instagram post for the photo enhancer app ‘Pixelup’.
As of now, there is no regulatory framework in India that directly addresses the issue of AI washing. While certain existing laws touch upon misleading practices, they fall short of capturing the specific nuances of exaggerated AI claims. For example, Section 28 of SEBI’s Investment Advisers Regulations prohibits misleading conduct, but it does not explicitly mention overstatements related to AI technologies. Likewise, the Guidelines for the Prevention of Misleading Advertisements under the Consumer Protection Act mandate that any claims made in advertising must be substantiated. However, AI washing often operates in a grey area. Companies may not make outright false claims, but tend to exaggerate the significance or uniqueness of their AI capabilities. In many cases, it isn’t deliberate deception but rather the result of pressure to keep up with industry trends, leading firms to launch underdeveloped or partially functional AI features.
The Way Forward
Combating AI washing is in the shared interest of technologists, investors, and consumers. However, this challenge does not have to be met with heavy-handed regulation. Instead, it can be addressed through industry-led coordination and responsible self-governance.
As a first step, the AI ecosystem could co-develop a non-binding taxonomy that clearly distinguishes between basic automation, machine learning, and truly autonomous systems. Establishing a common vocabulary would help investors assess product claims more accurately, allow builders to describe their technologies with greater precision, and reduce the growing gap between marketing and technical reality.
On the policy front, India’s experience with greenwashing offers a valuable precedent. Recognizing gaps in the advertising guidelines, regulators introduced the Guidelines for Prevention and Regulation of Greenwashing or Misleading Environmental Claims in 2024. While the efficacy of these rules is still being assessed, they offer a clear structure: they define what constitutes a credible environmental claim, prohibit vague terms like “green” or “eco-friendly” unless properly substantiated, and require disclosures that are accessible, specific, and verifiable.
The wider AI ecosystem can draw from this model to craft a similarly pragmatic response to AI washing. These greenwashing guidelines demonstrate how vague, value-laden terms can be regulated through clear definitions, context-specific qualifiers, and structured disclosure practices. Applied to the AI space, such principles could guide the development of voluntary safeguards that ensure greater transparency without stifling innovation. For instance, terms like “AI-powered” or “AI-enabled” could be subject to self-imposed standards requiring that companies specify the scope and functionality of the AI in question, clarifying whether it applies to the full product, a discrete feature, or a single process. These disclosures could also include links to technical documentation or demos for detailed substantiation.
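To make the idea concrete, such a taxonomy and disclosure could even be expressed in a machine-readable form that investors or reviewers check against a product’s actual documentation. The sketch below is purely illustrative: the `AILevel` taxonomy, the `AIDisclosure` fields, and the example values are assumptions for this article, not an existing standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class AILevel(Enum):
    """Illustrative taxonomy levels -- an assumption, not an existing standard."""
    RULE_BASED = "basic automation"
    MACHINE_LEARNING = "machine learning"
    AUTONOMOUS = "autonomous system"

@dataclass
class AIDisclosure:
    """Hypothetical machine-readable disclosure backing an 'AI-powered' claim."""
    product: str
    marketing_term: str                       # e.g. "AI-powered", "AI-enabled"
    level: AILevel                            # where the system sits in the taxonomy
    scope: str                                # "full product", "feature", or "single process"
    features_covered: list = field(default_factory=list)
    documentation_url: str = ""               # link to technical docs or a demo

# A hypothetical filing: the claim applies to one feature, not the whole product.
disclosure = AIDisclosure(
    product="ExampleApp",
    marketing_term="AI-powered",
    level=AILevel.MACHINE_LEARNING,
    scope="feature",
    features_covered=["image enhancement"],
    documentation_url="https://example.com/model-card",
)

# A reviewer or investor can now see exactly what the claim covers.
print(f"{disclosure.product}: '{disclosure.marketing_term}' applies to "
      f"a {disclosure.scope} ({disclosure.level.value})")
```

Even a lightweight structure like this forces a company to say which taxonomy level applies and to which part of the product, which is precisely the specificity that vague “AI-powered” labels currently avoid.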
Importantly, these voluntary standards would not function as restrictive mandates, but rather as industry guidebooks: tools to help investors verify claims, assist startups in communicating their offerings truthfully, and support policymakers in understanding where soft regulation might suffice. If developed collaboratively and adopted widely, such a framework could significantly reduce the noise around AI, allow genuine innovations to stand out, and foster trust across the ecosystem.
As artificial intelligence becomes the defining force of this decade, we cannot afford to build its future on smoke and mirrors. The difference between hype and truth in AI is not just semantic; it is the difference between sustainable progress and a techlash waiting to happen.
