
    Man files complaint after ChatGPT said he killed his children

By Oki Bin Oki | March 21, 2025

    A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.

Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot’s maker, OpenAI, be fined.

    It is the latest example of so-called “hallucinations”, where artificial intelligence (AI) systems invent information and present it as fact.

    Mr Holmen says this particular hallucination is very damaging to him.

    “Some think that there is no smoke without fire – the fact that someone could read this output and believe it is true is what scares me the most,” he said.

    OpenAI has been contacted for comment.

    Mr Holmen was given the false information after he used ChatGPT to search for: “Who is Arve Hjalmar Holmen?”

    The response he got from ChatGPT included: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event.

    “He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”

    Mr Holmen said the chatbot got their age gap roughly right, suggesting it did have some accurate information about him.

    Digital rights group Noyb, which has filed the complaint on his behalf, says the answer ChatGPT gave him is defamatory and breaks European data protection rules around accuracy of personal data.

    Noyb said in its complaint that Mr Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”

    ChatGPT carries a disclaimer which says: “ChatGPT can make mistakes. Check important info.”
    Noyb says that is insufficient.

    “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” Noyb lawyer Joakim Söderberg said.

    Hallucinations are one of the main problems computer scientists are trying to solve when it comes to generative AI.

These occur when chatbots present false information as fact.

    Earlier this year, Apple suspended its Apple Intelligence news summary tool in the UK after it hallucinated false headlines and presented them as real news.

    Google’s AI Gemini has also fallen foul of hallucination – last year it suggested sticking cheese to pizza using glue, and said geologists recommend humans eat one rock per day.

It is not clear what it is in large language models – the technology which underpins chatbots – that causes these hallucinations.

“This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what is actually going on in a large language model?” said Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow.

    Prof Stumpf says that can even apply to people who work behind the scenes on these types of models.
    “Even if you are more involved in the development of these systems quite often, you do not know how they actually work, why they’re coming up with this particular information that they came up with,” she told the BBC.

    ChatGPT has changed its model since Mr Holmen’s search in August 2024, and now searches current news articles when it looks for relevant information.

Noyb told the BBC that Mr Holmen had made a number of searches that day, including putting his brother’s name into the chatbot, and that it produced “multiple different stories that were all incorrect.”

    They also acknowledged the previous searches could have influenced the answer about his children, but said large language models are a “black box” and OpenAI “doesn’t reply to access requests, which makes it impossible to find out more about what exact data is in the system.”

    By BBC News
