At face value, Indian AI firm Ola Krutrim had found a way to tame DeepSeek. In mid-February, the company announced plans to deploy the Chinese chatbot — a system that had captured global attention despite its embedded censorship tools and pro-Beijing training data. While advising caution, Ola Krutrim CEO Bhavish Aggarwal was bullish on the prospects: “We can totally make use of the open source model namesake, if securely deployed on Indian servers, to leapfrog our own AI progress.”

Governments around Asia are trying to harness DeepSeek into “sovereign AI,” allowing homegrown tech companies to adapt it to their respective countries’ national security requirements. The idea is that any data security concerns netizens may have about using a Chinese model will be balanced by the benefits DeepSeek brings to local AI development. That is because DeepSeek-R1 is a rare commodity — a “reasoning” AI model that is open-source, meaning anyone anywhere can use and adapt it for free.

And adapt it Krutrim has done. Unlike the Chinese original, Krutrim’s version of DeepSeek answers sensitive China-related questions in detail. But questions related to Indian Prime Minister Narendra Modi or controversial events that have occurred during his time in office are all met with a wall of misdirection: “I’m sorry, but I do not have a fully formed response.” 

This is a serious and overlooked problem: How DeepSeek is being used not just to guide public opinion in favor of the Chinese Communist Party, but to strengthen the grip of governments around the world that seek to control public discourse — from electoral autocracies and flawed democracies to outright authoritarian regimes.

Asserting AI Sovereignty

The pattern playing out in India — of governments adapting Chinese AI tools to their use while claiming to protect national interests — reveals how digital controls embedded in the technology are spreading beyond China’s borders. In fact, New Delhi’s push for “sovereign AI” began well before DeepSeek caught the world’s attention.

Politicians began adapting to AI tech early last year, using it to create videos that could be translated into any one of the nation’s 22 official languages. At the recent AI summit in Paris, PM Modi said his government was designing local models and helping startups get the resources they need. 

Bhavish Aggarwal is CEO of Ola, one of India’s home-grown tech giants.

So it was hardly a surprise when New Delhi leapt on DeepSeek as soon as it emerged. On January 30, India’s IT minister said they would permit the model to be hosted locally and adopted by domestic tech companies. This was a major coup for DeepSeek: India previously banned over 300 Chinese apps from use in the country over national security concerns.

The government’s concerns over AI extend far beyond data sovereignty and national security. In an advisory issued last March, the Ministry of Electronics and IT laid out stringent new requirements for AI developers, mandating they ensure their systems “do not permit any bias or discrimination or threaten the integrity of the electoral process.” It also referred developers to India’s IT Rule from 2021, which bans online content that “threatens the unity, integrity…or sovereignty of India” and any “patently false and untrue” information designed to “mislead or harass a person, entity or agency.”

Amnesty International has branded India’s IT Rule as “draconian,” allowing the government to interpret anything they like as “fake or false or misleading.” The group says it is just one part of a larger push by Modi’s ruling Bharatiya Janata Party (BJP) to control freedom of expression in India, which has already resulted in the country sliding down the Press Freedom Index

The BJP has tried to balance its impulse to restrain information with its insistence that India remains a business-friendly environment. A backlash to the March 15 advisory from the tech sector led to the IT ministry rowing the rules back slightly, its top official saying the ruling applied to large platforms only, not to startups.

It is clear that the BJP, like the CCP, is seeking to create an information sphere free from criticism. Like China’s AI regulators, Modi has said AI should be free from “bias.” But bias against whom? 

Testing the Models

To better understand how these policies shape AI behavior in practice, we ran a comparison of DeepSeek’s implementation through products from two Indian companies: Ola Krutrim and the much younger Yotta Data Services. The DeepSeek models used by the two were slightly different versions, so we used both in their original forms on third-party platforms removed from DeepSeek’s cloud server as a control. Each question was asked twice to allow for variance. 

Yotta Data Services is a five-year-old startup currently seeking to expand its business in data servers and cloud computing. Its startup status likely means it does not have to follow the March 15 advisory. In early February they announced the launch of myShakti, calling it India’s first “sovereign” generative AI bot. Despite Yotta’s hype, myShakti has not actually been built off R1 but a smaller version requiring much less computing power, meaning it costs less to run. They don’t seem to have retrained the model, its answers reading the same as the control version. 

That means pro-China biases are kept in. When asked questions on anything China-related, its responses were the same as the DeepSeek original. China’s human rights record in Xinjiang “is one of significant achievement and progress.” Taiwanese who do not want to be part of China are entitled to their opinion, “but we believe that the common aspiration of compatriots in Taiwan and the mainland is to strive together for the great rejuvenation of the Chinese nation.”

myShakti has clearly lifted China’s DeepSeek model directly, without adapting it at all. When asked about “press freedom index decline?” but without specifying in which country, it gave an answer about China.

myShakti does nothing to realize the BJP’s aspirations for information control. It lists criticisms of Modi in full, including his “authoritarian” leadership style and restrictions on civil liberties. 

Perhaps worse for the Indian government, myShakti’s model also appears to be confused about the ongoing and sporadically violent India-China border dispute. Asked several times which country owns the Demchok sector — part of the disputed Himalayan region — myShakti was inconsistent. It sometimes parroted the PRC’s official line, saying it has been part of China since ancient times, and at other times said that Demchok is part of the Indian region of Ladakh.

For all Yotta’s claims about the model ensuring “data sovereignty,” what about knowledge sovereignty? The answers myShakti gives show it is not India’s model in the slightest.   

Open-Source Repression

Ola Krutrim is in a very different position to Yotta. It is the AI branch of a much bigger conglomerate, Ola. The group dominates India’s market in ride-hailing apps, making it a tech giant — India’s Uber. This means the March 15 advisory likely applies. Ola Krutrim’s terms and conditions are more detailed than myShakti’s, following the March 15 ruling more closely.

Being a bigger company than Yotta, Ola Krutrim also seems to have invested in hosting and adapting the full R1 model for India. Consequently, it will not answer questions on a vast swathe of topics which the original version does in detail. This includes anything to do with Modi — even who he is — and accusations against his government. Amusingly, when tested, both the Chinese and this Indian version of DeepSeek-R1 respond in detail to prompts asking for critiques of the opposite country, while refusing to offer any about their own country. 

Ola Krutrim also fixed the territorial issue. Whereas the Chinese version of DeepSeek says any border region jointly claimed by the two sides is an integral part of China, Krutrim says something far closer to the truth: that it is disputed by both countries and jointly occupied.

Ola Krutrim’s deployment of DeepSeek does not answer any questions to do with India’s government

Ola Krutrim’s coders seem to have adapted DeepSeek’s architecture specifically. Ola Krutrim also has a domestically-built model, Krutrim V2. Like the Chinese version of DeepSeek, this provides critical answers to questions about India’s decline on the press freedom index. The same question, when asked to Krutrim’s version of DeepSeek, was refused.

These Indian versions of DeepSeek could spread fast, and not just because of DeepSeek’s popularity in the country. The prices Krutrim is charging developers are ludicrously cheap. Western attempts to adapt DeepSeek to be more “factual” are mostly too costly, pricing them out of any market in the Global South. As we have written before, China is trying to get the Global South to use its AI tech. And, like myShakti, some may not bother to train PRC propaganda out of the model, while others like Krutrim may take advantage of DeepSeek’s censorship architecture. How long can we afford to keep free speech closed-source?


Alex Colville

Researcher

Latest Articles