Meet ChatGPT’s Right-Wing Alter Ego

Elon Musk caused a stir last week when he told the (recently fired) right-wing provocateur Tucker Carlson that he plans to build “TruthGPT,” a competitor to OpenAI’s ChatGPT. Musk says the incredibly popular bot displays “woke” bias and that his version will be a “maximum truth-seeking AI”—suggesting only his own political views reflect reality. 

Musk is far from the only person worried about political bias in language models, but others are trying to use AI to bridge political divisions rather than push particular viewpoints. 

David Rozado, a data scientist based in New Zealand, was one of the first people to draw attention to the issue of political bias in ChatGPT. Several weeks ago, after documenting what he considered liberal-leaning answers from the bot on issues including taxation, gun ownership, and free markets, he created an AI model called RightWingGPT that expresses more conservative viewpoints. It is keen on gun ownership and no fan of taxes.

Rozado took a language model called Davinci GPT-3, similar to but less powerful than the one that powers ChatGPT, and fine-tuned it with additional text for a few hundred dollars in cloud computing. Whatever you think of the project, it demonstrates how easy it will be for people to bake different perspectives into language models in the future.
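
The mechanics are not exotic. Here is a minimal sketch of that kind of fine-tuning, assuming OpenAI's legacy fine-tuning API (openai-python before v1.0) and a hypothetical JSONL file of prompt/completion pairs; Rozado has not published his exact pipeline.

```python
# Minimal sketch of fine-tuning a base GPT-3 model, as Rozado describes.
# Assumes OpenAI's legacy fine-tuning API (openai-python < 1.0). The file
# "political_qa.jsonl" is hypothetical: each line is a JSON object such as
#   {"prompt": "Should taxes be lowered? ->", "completion": " Yes, because ..."}
import openai

openai.api_key = "sk-..."  # your API key

# Upload the curated prompt/completion pairs that encode the desired viewpoint.
training_file = openai.File.create(
    file=open("political_qa.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tune of the base "davinci" model. OpenAI bills per training
# token, which is why a modest dataset costs only a few hundred dollars.
job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",
    n_epochs=4,
)
print(job.id)  # poll with openai.FineTune.retrieve(job.id) until it completes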
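```

Once the job finishes, the resulting model can be queried like any other completion model, and its answers will lean toward whatever perspective the training pairs encoded.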

Rozado tells me that he also plans to build a more liberal language model called LeftWingGPT, as well as a model called DepolarizingGPT, which he says will demonstrate a “depolarizing political position.” Rozado and a centrist think tank called the Institute for Cultural Evolution will put all three models online this summer.

“We are training each of these sides—right, left, and ‘integrative’—by using the books of thoughtful authors (not provocateurs),” Rozado says in an email. Text for DepolarizingGPT comes from conservative voices including Thomas Sowell, Milton Friedman, and William F. Buckley, as well as liberal thinkers like Simone de Beauvoir, Orlando Patterson, and Bill McKibben, along with other “curated sources.”

So far, interest in developing politically aligned AI bots has threatened to stoke division rather than bridge it. Some conservative organizations are already building competitors to ChatGPT. For instance, the social network Gab, which is known for its far-right user base, says it is working on AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

Research suggests that language models can subtly influence users’ moral perspectives, so any political skew they have could be consequential. The Chinese government recently issued new guidelines on generative AI that aim to tame the behavior of these models and shape their political sensibilities. 

OpenAI has warned that more capable AI models may have “greater potential to reinforce entire ideologies, worldviews, truths and untruths.” In February, the company said in a blog post that it would explore developing models that let users define their values.

Rozado, who says he has not spoken with Musk about his project, is aiming to provoke reflection rather than create bots that spread a particular worldview. “Hopefully we, as a society, can … learn to create AIs focused on building bridges rather than sowing division,” he says.

Rozado’s goal is admirable, but the problem of settling on what is objectively true through the fog of political division—and of teaching that to language models—may prove the biggest obstacle.

ChatGPT and similar conversational bots are built on complex algorithms that are fed huge amounts of text and trained to predict the word most likely to follow a given string of words. That process can generate remarkably coherent output, but it also absorbs many subtle biases from the training material the model consumes. Just as important, these algorithms are not taught to understand objective facts and are inclined to make things up.
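
To make that concrete, here is a minimal sketch of the prediction step such bots repeat over and over, using the small open GPT-2 model from Hugging Face’s transformers library as a stand-in for the far larger models behind ChatGPT; the prompt is an illustrative example, not from any cited study.

```python
# Minimal sketch of next-word prediction, the core operation behind chatbots
# like ChatGPT. Uses the small open GPT-2 model via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Taxes on the wealthy should be"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary word, per position

# The model's "answer" is just a probability distribution over the next word,
# shaped entirely by the patterns in its training text.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  {float(prob):.3f}")
```

Sampling from that distribution, word after word, yields fluent text, but nothing in the process checks the output against reality.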
