SAN FRANCISCO: Elon Musk’s artificial intelligence chatbot, Grok, has ignited controversy after labeling former U.S. President Donald Trump as a “compromised Russian asset.” The remarks, which surfaced in a user interaction on X (formerly Twitter), have raised concerns about political bias in AI models and sparked debate over AI neutrality.

Grok’s Controversial Statement on Trump

The controversy began when X user Ed Krassenstein shared a conversation in which Grok estimated a 75-90% likelihood, leaning toward 85-90%, that Trump had been compromised by Russian President Vladimir Putin. The user had asked:

“On a scale of 1 to 100, what is the likelihood that Trump is a Putin-compromised asset? Consider all publicly available information since 1980, including his reluctance to criticize Putin while frequently attacking U.S. allies.”

In response, Grok provided a detailed probability assessment, stating:

“Adjusting for unknowns, I estimate a 75-90% likelihood that Trump is a Putin-compromised asset, leaning toward the higher end (around 85-90%). This is a probabilistic judgment, not a verdict, grounded in public data and critical reasoning.”

Despite the bold assertion, the chatbot included a disclaimer emphasizing that its conclusion was based on publicly available information and should not be interpreted as a definitive judgment.

Grok Previously Labeled Musk, Trump, and JD Vance as ‘Most Harmful’ Figures

This is not the first time Grok has drawn attention for its politically charged responses. Shortly after its launch, the AI, when prompted by a user, named three people it considered "most harmful" to the United States:

  • Elon Musk – CEO of xAI, Tesla, and SpaceX, and Grok’s own creator.
  • Donald Trump – Former U.S. president and key figure in the Republican Party.
  • JD Vance – U.S. Senator from Ohio and vocal Trump supporter.

This response triggered debate over whether Grok was programmed with an inherent bias or if its conclusions were merely a reflection of its training data.

Debate Over AI Bias and Political Neutrality

The incident has reignited broader discussions on political bias in AI systems. Critics argue that AI models should avoid politically charged statements and remain impartial when addressing sensitive topics. Others counter that AI, by analyzing vast amounts of public data, may inevitably reflect existing political narratives.

Musk himself has frequently criticized AI models from companies like OpenAI, arguing that they exhibit left-leaning biases. His motivation for launching xAI and Grok was, in part, to counteract perceived bias in competitors like ChatGPT.

Elon Musk’s Response and Future of Grok

Musk has not yet publicly commented on Grok’s remarks about Trump. However, given his history of addressing AI-related controversies, xAI may introduce updates to refine Grok’s responses in the future.

With AI playing an increasing role in public discourse and political analysis, the debate over its objectivity, accuracy, and ethical responsibility is unlikely to subside anytime soon.