AI

X users treating Grok like a fact-checker spark concerns over misinformation

By admin · March 19, 2025


Some users on Elon Musk’s X are turning to Musk’s AI bot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.

Earlier this month, X enabled users to call on xAI's Grok and ask it questions about a range of topics. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.

Soon after xAI created Grok’s automated account on X, users started experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target specific political beliefs.

Fact-checkers are concerned about using Grok, or any other AI assistant of this sort, in this manner because the bots can frame their answers to sound convincing even when they are not factually correct. Grok has been seen spreading fake news and misinformation in the past.

In August last year, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the U.S. election.

Other chatbots, including OpenAI's ChatGPT and Google's Gemini, were also found to generate inaccurate information about last year's election. Separately, disinformation researchers found in 2023 that AI chatbots including ChatGPT could easily be used to produce convincing text with misleading narratives.

“AI assistants, like Grok, they’re really good at using natural language and give an answer that sounds like a human being said it. And in that way, the AI products have this claim on naturalness and authentic sounding responses, even when they’re potentially very wrong. That would be the danger here,” Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, told TechCrunch.

Grok was asked by a user on X to fact-check claims made by another user

Unlike AI assistants, human fact-checkers use multiple, credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached to ensure credibility.

Pratik Sinha, co-founder of India’s non-profit fact-checking website Alt News, said that although Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who’s going to decide what data it gets supplied with, and that is where government interference, etc., will come into picture,” he noted.

“There is no transparency. Anything which lacks transparency will cause harm because anything that lacks transparency can be molded in any which way.”

“Could be misused — to spread misinformation”

In one of the responses posted earlier this week, Grok’s account on X acknowledged that it “could be misused — to spread misinformation and violate privacy.”

However, the automated account does not show any disclaimers alongside its answers, leaving users misinformed if it has, for instance, hallucinated a response, a well-known drawback of AI.

Grok's response on whether it can spread misinformation (translated from Hinglish)

“It may make up information to provide a response,” Anushka Jain, a research associate at Goa-based multidisciplinary research collective Digital Futures Lab, told TechCrunch.

There's also some question about how much Grok uses posts on X as training data, and what quality control measures it uses to fact-check such posts. Last summer, X pushed out a change that appeared to allow Grok to consume X user data by default.

Another concern with AI assistants like Grok being accessible through social media platforms is that they deliver information in public, unlike ChatGPT and other chatbots that are used privately.

Even if a user is well aware that the information they get from the assistant could be misleading or not completely correct, others on the platform might still believe it.

This could cause serious social harms. Such instances were seen earlier in India, when misinformation circulating over WhatsApp led to mob lynchings. Those severe incidents, however, occurred before the arrival of GenAI, which has made synthetic content even easier to generate and more realistic-looking.

“If you see a lot of these Grok answers, you’re going to say, hey, well, most of them are right, and that may be so, but there are going to be some that are wrong. And how many? It’s not a small fraction. Some of the research studies have shown that AI models are subject to 20% error rates… and when it goes wrong, it can go really wrong with real world consequences,” IFCN’s Holan told TechCrunch.

AI vs. real fact-checkers

While AI companies including xAI are refining their AI models to make them communicate more like humans, they still are not — and cannot — replace humans.

For the last few months, tech companies have been exploring ways to reduce reliance on human fact-checkers. Platforms including X and Meta have embraced the new concept of crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes also cause concern among fact-checkers.

Sinha of Alt News optimistically believes that people will learn to differentiate between machines and human fact-checkers and will come to value the humans' accuracy more.

“We’re going to see the pendulum swing back eventually toward more fact checking,” IFCN’s Holan said.

However, she noted that in the meantime, fact-checkers will likely have more work to do as AI-generated information spreads swiftly.

“A lot of this issue depends on, do you really care about what is actually true or not? Are you just looking for the veneer of something that sounds and feels true without actually being true? Because that’s what AI assistance will get you,” she said.

X and xAI didn’t respond to our request for comment.



