
    Man files complaint after ChatGPT said he killed his children

By Oki Bin Oki | March 21, 2025

    A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.

Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot’s maker, OpenAI, be fined.

    It is the latest example of so-called “hallucinations”, where artificial intelligence (AI) systems invent information and present it as fact.

    Mr Holmen says this particular hallucination is very damaging to him.

    “Some think that there is no smoke without fire – the fact that someone could read this output and believe it is true is what scares me the most,” he said.

    OpenAI has been contacted for comment.

    Mr Holmen was given the false information after he used ChatGPT to search for: “Who is Arve Hjalmar Holmen?”

    The response he got from ChatGPT included: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event.

    “He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”

    Mr Holmen said the chatbot got their age gap roughly right, suggesting it did have some accurate information about him.

    Digital rights group Noyb, which has filed the complaint on his behalf, says the answer ChatGPT gave him is defamatory and breaks European data protection rules around accuracy of personal data.

    Noyb said in its complaint that Mr Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”

    ChatGPT carries a disclaimer which says: “ChatGPT can make mistakes. Check important info.”
    Noyb says that is insufficient.

    “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” Noyb lawyer Joakim Söderberg said.

    Hallucinations are one of the main problems computer scientists are trying to solve when it comes to generative AI.

They occur when chatbots present false information as fact.

    Earlier this year, Apple suspended its Apple Intelligence news summary tool in the UK after it hallucinated false headlines and presented them as real news.

    Google’s AI Gemini has also fallen foul of hallucination – last year it suggested sticking cheese to pizza using glue, and said geologists recommend humans eat one rock per day.

It is not clear what it is in large language models, the technology that underpins chatbots, that causes these hallucinations.

“This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what is actually going on in a large language model?” said Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow.

    Prof Stumpf says that can even apply to people who work behind the scenes on these types of models.
    “Even if you are more involved in the development of these systems quite often, you do not know how they actually work, why they’re coming up with this particular information that they came up with,” she told the BBC.

    ChatGPT has changed its model since Mr Holmen’s search in August 2024, and now searches current news articles when it looks for relevant information.

    Noyb told the BBC Mr Holmen had made a number of searches that day, including putting his brother’s name into the chatbot and it produced “multiple different stories that were all incorrect.”

    They also acknowledged the previous searches could have influenced the answer about his children, but said large language models are a “black box” and OpenAI “doesn’t reply to access requests, which makes it impossible to find out more about what exact data is in the system.”

    By BBC News
