    TECHNOLOGY

    OpenAI to change deal with US military after backlash

By Oki Bin Oki | March 3, 2026 | 4 min read
[Image: ChatGPT logo]

    OpenAI says it is making changes to the “opportunistic and sloppy” deal it struck with the US government over the use of its technology in classified military operations.

    On Monday OpenAI Chief Executive Sam Altman said the company planned to add language to its agreement, including explicitly prohibiting the use of its systems to spy on Americans.

The deal had emerged on Friday following a fallout between OpenAI’s rival Anthropic and the Department of Defense over concerns about the use of its AI model Claude for mass surveillance and in fully autonomous weapons.

    But it has raised questions over how AI is used in war and how much power rests with government and private companies.

    A statement made on Saturday by OpenAI claimed its agreement with the Pentagon had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”.

    But on Monday, Altman posted on X to say further changes were being made, including making sure its system would not be “intentionally used for domestic surveillance of U.S. persons and nationals”.

    As part of the new amendments, intelligence agencies such as the National Security Agency would also not be able to use OpenAI’s system without a “follow-on modification” to the contract.

    Altman added the company had made a mistake by rushing “to get this out on Friday”.

    “The issues are super complex, and demand clear communication,” he said.

    “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

OpenAI has faced a backlash from users following its announcement that it was working with the Pentagon.

The day-over-day uninstall rate of the company’s ChatGPT mobile app reportedly surged to 295% on Saturday, compared with a typical 9%.

Meanwhile, Anthropic’s Claude rose to the top of Apple’s App Store ranking, where it remained on Tuesday.

The AI model was blacklisted by the Trump administration following Anthropic’s refusal to drop a corporate “red-line” principle that its technology should not be used to create fully autonomous weapons.

Despite this, it has since emerged that Claude was used in the US-Israel war with Iran just hours after Trump’s ban.

    The Pentagon declined to comment on its dealings with Anthropic.

    How AI is used by the military
AI is used in a number of ways in the military, for example to streamline logistics or to quickly process large amounts of information.

    The US, Ukraine, and Nato all use tech from Palantir, an American company which provides data analytics tools to government customers for intelligence gathering, surveillance, counterterrorism, and military purposes.

    The UK Ministry of Defence recently signed a £240m contract with the firm.

At the end of last year, the BBC spoke to some of those involved in integrating Palantir’s AI-powered defence platform Maven into Nato.

    The software brings together a huge range of military information, from satellite data to intelligence reports, which can then be analysed by commercial AI systems such as Claude to help make “faster, more efficient, and ultimately more lethal decisions where that’s appropriate”, Louis Mosley, the head of Palantir’s UK operations said.

    But AI large language models can make mistakes, or even make things up – known as “hallucinating”.

    Lieutenant Colonel Amanda Gustave, chief data officer for Nato’s Task Force Maven, stressed there was human oversight, adding that they were “always introducing a human in the loop” and that it “would never be the case” that an AI would “make a decision for us”.

    Palantir, unlike Anthropic, does not support a blanket ban on autonomous weapons, but says there should be a “human in the loop”.

    But Professor Mariarosaria Taddeo of Oxford University told the BBC that with Anthropic out of the Pentagon, “the most safety-conscious actor” was now “out from the room”.

    “That is a real problem,” she added.

    By BBC News
