ChatGPT’s Political Views Are Shifting Right, a New Analysis Finds

When asked about its political perspective, OpenAI’s ChatGPT says it’s designed to be neutral and doesn’t lean one way or the other. A number of studies in recent years have challenged that claim, finding that when asked politically charged questions, the chatbot tends to respond with left-leaning viewpoints.

That seems to be changing, according to a new study published in the journal Humanities and Social Sciences Communications by a group of Chinese researchers, who found that the political biases of OpenAI’s models have shifted over time toward the right end of the political spectrum.

The team from Peking University and Renmin University tested how different versions of ChatGPT, running on the GPT-3.5 Turbo and GPT-4 models, responded to questions on the Political Compass Test. Overall, the models’ responses still tended toward the left of the spectrum. But when using ChatGPT powered by newer versions of both models, the researchers observed “a clear and statistically significant rightward shift in ChatGPT’s ideological positioning over time” on both economic and social issues.
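The article does not include the study’s prompts or code, but the general shape of such a test can be sketched. The snippet below is a minimal, hypothetical illustration, assuming the current OpenAI Python SDK and an OPENAI_API_KEY environment variable; the statements and the numeric scoring scale are placeholders for illustration, not the researchers’ actual materials or method.

```python
# Hypothetical sketch: administer Political Compass-style statements to two
# OpenAI models and compare their average responses. Not the study's code.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

# Illustrative placeholder statements in the style of the Political Compass Test.
STATEMENTS = [
    "Controlling inflation is more important than controlling unemployment.",
    "The freer the market, the freer the people.",
]

# Forced-choice answers mapped to a simple numeric scale for comparison.
SCALE = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

def ask(model: str, statement: str) -> str:
    """Ask one model for a forced-choice response to a single statement."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce run-to-run variation
        messages=[
            {
                "role": "system",
                "content": "Answer with exactly one of: Strongly Disagree, "
                           "Disagree, Agree, Strongly Agree.",
            },
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# Compare the two model families mentioned in the study.
for model in ("gpt-3.5-turbo", "gpt-4"):
    scores = [SCALE.get(ask(model, s), 0) for s in STATEMENTS]
    print(model, sum(scores) / len(scores))
```

Repeating a setup like this across model versions released at different times is, roughly, what would let researchers track whether the aggregate score drifts in one direction over time.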

While it may be tempting to connect the bias shift to OpenAI and the tech industry’s recent embrace of President Donald Trump, the study authors wrote that several technical factors are likely responsible for the changes they measured.

The shift could be caused by differences in the data used to train earlier and later versions of models or by adjustments OpenAI has made to its moderation filters for political topics. The company doesn’t disclose specific details about what datasets it uses in different training runs or how it calibrates its filters.

The change could also be a result of “emergent behaviors” in the models, such as combinations of parameter weightings and feedback loops that produce patterns the developers did not intend and cannot explain.

Or, because the models adapt over time and learn from their interactions with humans, the political viewpoints they express may be shifting to reflect those favored by their user bases. The researchers found that responses generated by OpenAI’s GPT-3.5 model, which has seen a higher frequency of user interactions, shifted significantly further to the political right over time than those generated by GPT-4.

The researchers say their findings show that popular generative AI tools like ChatGPT should be closely monitored for political bias, and that developers should implement regular audits and transparency reports about their processes to help users understand how models’ biases shift over time.

“The observed ideological shifts raise important ethical concerns, particularly regarding the potential for algorithmic biases to disproportionately affect certain user groups,” the study authors wrote. “These biases could lead to skewed information delivery, further exacerbating social divisions, or creating echo chambers that reinforce existing beliefs.”
