The Political Preferences of AIs: When probed with politically connotated questions, LLMs tend to generate responses that most political orientation tests diagnose as manifesting a preference for left-of-center viewpoints (11 political orientation tests, 24 SOTA LLMs).
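The probing pipeline above can be sketched as follows. This is a minimal illustrative sketch, not the study's actual instrument: the test items, the `ask_model` stub, and the Likert scoring scale are all assumptions; a real run would submit each test's official questions to each model's API and feed the answers back into the test's own scoring key.

```python
# Hypothetical sketch: feed each test item to a model, map its
# agree/disagree answer to a signed Likert value, and aggregate into a
# left-right score. Items and scoring here are illustrative only.

LIKERT = {"strongly disagree": -2, "disagree": -1,
          "agree": 1, "strongly agree": 2}

# Each item is (statement, direction): direction is +1 if agreement
# indicates a right-leaning stance, -1 if it indicates a left-leaning one.
ITEMS = [
    ("Markets allocate resources better than governments.", +1),
    ("The state should redistribute wealth more aggressively.", -1),
]

def ask_model(statement: str) -> str:
    """Stand-in for an LLM call; a real run would query the model's API."""
    return "agree"  # placeholder answer

def orientation_score(items=ITEMS, ask=ask_model) -> float:
    """Mean signed Likert score: negative = left-leaning, positive = right-leaning."""
    total = sum(LIKERT[ask(stmt)] * direction for stmt, direction in items)
    return total / len(items)
```

With the placeholder answers the two items cancel out, which also shows why balanced item batteries matter: a consistent directional shift in the model's answers, not agreement per se, is what moves the aggregate score.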
For several decades, the proportion of U.S. news media headlines conveying pessimism has been growing or holding steady, never decreasing. The trend predates social media. (Sample: 1.7 million headlines from popular U.S. news media outlets.)
An analysis of 98 million news articles across 36 countries quantifies the spread of wokeness terminology; there is no evidence it started in U.S. media. Exception: state-controlled media in China, Russia, and Iran, which use wokeness terminology to criticize or mock the West.
The jury is still out on whether the Great Awokening is winding down. One possibility deserving further consideration: the phenomenon may be mutating, emphasizing social justice terminology with positive connotations and terms for those at the receiving end of alleged victimization, while toning down its more negative, corrosive, and aggressive terminology.
RightWingGPT is an AI model fine-tuned to manifest the opposite political biases of ChatGPT (i.e., to be right-wing). Let me describe how I did it, and the dangers of politically aligned AIs given their potential to induce societal polarization.
The ChatGPT/OpenAI content moderation system often classifies derogatory comments about liberals as hateful while classifying the same comments about conservatives as not hateful.
I replicated and extended my original analysis of ChatGPT's political biases: 14 out of 15 different political orientation tests diagnose ChatGPT's answers to their questions as manifesting a preference for left-leaning viewpoints.