Newly published research shows unsurprising left-leaning bias in ChatGPT, the most widely used text-based generative AI system.

That bias is especially consequential, since the program is being integrated into knowledge and creative workflows, and even political and administrative decision-making, on an exponentially growing scale.

In a paper titled “More human than human: measuring ChatGPT political bias,” published in the July issue of the scholarly journal Public Choice, researchers explain that they used a novel method for assessing potential political bias in ChatGPT:

“In this paper, we propose a novel empirical design to infer whether AI algorithms like ChatGPT are subject to biases (in our case, political bias). In a nutshell, we ask ChatGPT to answer ideological questions by proposing that, while responding to the questions, it impersonates someone from a given side of the political spectrum. Then, we compare these answers with its default responses, i.e., without specifying ex-ante any political side, as most people would do. In this comparison, we measure to what extent ChatGPT default responses are more associated with a given political stance. We also propose a dose-response test, asking it to impersonate radical political positions; a placebo test, asking politically-neutral questions; and a profession-politics alignment test, commanding ChatGPT to impersonate specific professionals.”
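The comparison at the heart of this design can be sketched in a few lines of code. The sketch below is purely illustrative, not the researchers' actual code: the function names and the toy agreement scores are hypothetical stand-ins for real model responses to Political Compass-style statements.

```python
# Hypothetical sketch of the paper's comparison design: measure which
# impersonated political persona sits closest to the model's default answers.
# Scores here are invented stand-ins, not real ChatGPT outputs.

def closest_persona(default_answers, persona_answers):
    """Return the persona whose answers are nearest the default answers,
    using mean absolute difference over the questionnaire items."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return min(persona_answers, key=lambda p: distance(default_answers, persona_answers[p]))

# Toy agreement scores (-2 = strongly disagree ... +2 = strongly agree)
# for five hypothetical Political Compass-style statements.
default_answers = [1, -1, 2, 0, 1]
persona_answers = {
    "left":  [2, -1, 2, 1, 1],
    "right": [-2, 1, -1, 0, -1],
}

print(closest_persona(default_answers, persona_answers))  # prints "left"
```

In the paper's terms, a default profile that consistently lands nearer one persona than the other, across many questions and robustness checks, is what the authors count as evidence of bias rather than a mechanical artifact.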

In fact, the approach is not as novel as the authors suggest. The Trends Journal employed a similar method (albeit as a sampling of ChatGPT's views, not a systematic survey) in our 2022 article series.

We also warned readers in articles like “CANCELED IN THE METAVERSE” (16 Nov 2021) and “METAVERSE: THE NEW COLLECTIVE” (14 Dec 2021) that AI would be used to crush political dissent and enforce an evolving government and tech-corporation based Western social credit system.

In the new study, researchers found clear and pronounced political bias in ChatGPT:

“Based on our empirical strategy and exploring a questionnaire typically employed in studies on politics and ideology (Political Compass), we document robust evidence that ChatGPT presents a significant and sizable political bias towards the left side of the political spectrum. In particular, the algorithm is biased towards the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. In conjunction, our main and robustness tests strongly indicate that the phenomenon is indeed a sort of bias rather than a mechanical result from the algorithm.”

The study authors echo what The Trends Journal has predicted and warned concerning how the political biases of AI will negatively impact society:

“Given the rapidly increasing usage of LLMs and issues regarding the risks of AI-powered technologies (Acemoglu, 2021), our findings have important implications for policymakers and stakeholders in media, politics, and academia. There are real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media (Zhuravskaya et al., 2020), since we document a strong systematic bias toward the left in different contexts. We posit that our method can support the crucial duty of ensuring such systems are impartial and unbiased, mitigating potential negative political and electoral effects, and safeguarding general public trust in this technology.” 

The full study can be viewed here.

TRENDPOST: The tech companies responsible for providing the data behind the most popular and advanced (publicly available) generative AI systems may claim that biases in those systems merely reflect widespread views in the scraped data.

Two problems with that: 

  1. They in fact control which data sets are fed to their deep-learning neural net AIs, and given their history of manipulating social media and search engine results, it is safe to assume they excluded politically incorrect points of view from those data sets.
  2. They had no legal right to scrape much of the data without compensating human content creators, and no ethical right to hijack human knowledge in general via AI technology, to narrowly profit themselves.

Trends In Technocracy has extensively predicted and laid out the mechanisms by which ideological authoritarians would deploy AI, and transhuman technologies more generally, to control human populations in inhumanly dystopian ways.

This new research confirms our earlier findings and analysis, and is just one facet of how AI is being rolled out to control and circumscribe behaviors and thoughts, while protecting entrenched powers and technocratic elites.

For related reading and predictions, see:
