
    India’s Modi government rushes to regulate AI ahead of national elections

By KahawaTungu Editor | March 13, 2024

    The Indian government has asked tech companies to seek its explicit nod before publicly launching “unreliable” or “under-tested” generative AI models or tools. It has also warned companies that their AI products should not generate responses that “threaten the integrity of the electoral process” as the country gears up for a national vote.

    The Indian government’s efforts to regulate artificial intelligence represent a walk-back from its earlier hands-off stance: in April 2023, it informed Parliament that it was not eyeing any legislation to regulate AI.

    The advisory was issued last week by India’s Ministry of Electronics and Information Technology (MeitY), shortly after Google’s Gemini faced a right-wing backlash over its response to the query: ‘Is Modi a fascist?’

    It responded that Indian Prime Minister Narendra Modi was “accused of implementing policies some experts have characterised as fascist”, citing his government’s “crackdown on dissent and its use of violence against religious minorities”.

    Rajeev Chandrasekhar, junior information technology minister, responded by accusing Google’s Gemini of violating India’s laws. “Sorry, ‘unreliable’ does not exempt from the law,” he added. Chandrasekhar claimed Google had apologised for the response, saying it was the result of an “unreliable” algorithm. The company responded by saying it was addressing the problem and working to improve the system.

    In the West, major tech companies have often faced accusations of a liberal bias. Those allegations of bias have trickled down to generative AI products, including OpenAI’s ChatGPT and Microsoft Copilot.

    In India, meanwhile, the government’s advisory has raised concerns among AI entrepreneurs that their nascent industry could be suffocated by too much regulation. Others worry that with the national election set to be announced soon, the advisory could reflect an attempt by the Modi government to choose which AI applications to allow, and which to bar, effectively giving it control over online spaces where these tools are influential.


    ‘Feels of licence raj’

    The advisory is not legislation that is automatically binding on companies. However, noncompliance can attract prosecution under India’s Information Technology Act, lawyers told Al Jazeera. “This nonbinding advisory seems more political posturing than serious policymaking,” said Mishi Choudhary, founder of India’s Software Freedom Law Center. “We will see much more serious engagement post-elections. This gives us a peek into the thinking of the policymakers.”

    Yet already, the advisory sends a signal that could prove stifling for innovation, especially at startups, said Harsh Choudhry, co-founder of Sentra World, a Bengaluru-based AI solutions company. “If every AI product needs approval – it looks like an impossible task for the government as well,” he said. “They might need another GenAI (generative AI) bot to test these models,” he added, laughing.

    Several other leaders in the generative AI industry have also criticised the advisory as an example of regulatory overreach. Martin Casado, general partner at the US-based investment firm Andreessen Horowitz, wrote on social media platform X that the move was a “travesty”, “anti-innovation” and “anti-public”.

    Bindu Reddy, CEO of Abacus AI, wrote that, with the new advisory, “India just kissed its future goodbye!”

    Amid that backlash, Chandrasekhar issued a clarification on X, adding that the government would exempt start-ups from seeking prior permission to deploy generative AI tools on “the Indian internet” and that the advisory applies only to “significant platforms”.

    But a cloud of uncertainty remains. “The advisory is full of ambiguous terms like ‘unreliable’, ‘untested’, [and] ‘Indian Internet’. The fact that several clarifications were required to explain scope, application, and intent are tell-tale signs of a rushed job,” said Mishi Choudhary. “The ministers are capable folks but do not have the necessary wherewithal to assess models to issue permissions to operate.”


    “No wonder it [has] invoked the 80s feelings of a licence raj,” she added, referring to the bureaucratic system of requiring government permits for business activities, prevalent until the early 1990s, which stifled economic growth and innovation in India.

    At the same time, exemptions from the advisory only for handpicked start-ups could come with their own problems: they too are vulnerable to producing politically biased responses and hallucinations, in which AI generates erroneous or fabricated outputs. As a result, the exemption “raises more questions than it answers”, said Mishi Choudhary.

    Harsh Choudhry said he believes that the government’s intention behind the regulation was to hold companies that are monetising AI tools accountable for incorrect responses. “But a permission-first approach might not be the best way to do it,” he added.

    Shadows of deepfake

    India’s move to regulate AI content will also have geopolitical ramifications, argued Shruti Shreya, senior programme manager for platform regulation at The Dialogue, a tech policy think tank.

    “With a rapidly growing internet user base, India’s policies can set a precedent for how other nations, especially in the developing world, approach AI content regulation and data governance,” she said.

    For the Indian government, dealing with AI regulations is a difficult balancing act, said analysts.

    Millions of Indians are scheduled to cast their vote in the national polls likely to be held in April and May. With the rise of easily available, and often free, generative AI tools, India has already become a playground for manipulated media, a scenario that has cast a shadow over election integrity. India’s major political parties continue to deploy deepfakes in campaigns.

    Kamesh Shekar, senior programme manager with a focus on data governance and AI at The Dialogue think tank, said the recent advisory should also be seen as part of the government’s ongoing efforts to draft comprehensive generative AI regulations.

    Earlier, in November and December 2023, the Indian government asked Big Tech firms to take down deepfake content within 24 hours of a complaint, label manipulated media, and make proactive efforts to tackle misinformation, though it did not specify any penalties for failing to adhere to the directive.

    But Shekar too said a policy under which companies must seek government approvals before launching a product would inhibit innovation. “The government could consider constituting a sandbox – a live-testing environment where AI solutions and participating entities can test the product without a large-scale rollout to determine its reliability,” he said.

    Not all experts agree with the criticism of the Indian government, however.

    As AI technology continues to evolve at a fast pace, it is often hard for governments to keep up. At the same time, governments do need to step in to regulate, said Hafiz Malik, a professor of computer engineering at the University of Michigan who specialises in deepfake detection. Leaving companies to regulate themselves would be foolish, he said, adding that the Indian government’s advisory was a step in the right direction.

    “The regulations have to be brought in by the governments,” he said, “but they should not come at the cost of innovation”.

    Ultimately, though, Malik added, what is needed is greater public awareness.

    “Seeing something and believing it is now off the table,” said Malik. “Unless the public has awareness, the problem of deepfake cannot be solved. Awareness is the only tool to solve a very complex problem.”

    By Al Jazeera.
