
AI Can “Shift” Attitudes. And It Can Lie Too.

According to a University of Wisconsin–Madison study by K. Chen et al., “Conversational AI and equity through assessing GPT-3’s communication with diverse social groups on contentious topics,” published on 18 January 2024, chatting with AI can “shift” attitudes on climate change and Black Lives Matter (BLM).


One of the interesting observations was that those who belonged to the “opinion minority” group—that is, those who disagreed with what is supposedly the mainstream view—reported lower satisfaction with GPT-3.

The study found that although the opinion and education minority groups reported much worse user experiences with GPT-3 on both issues, their attitudes toward climate change and BLM nevertheless changed significantly in a positive direction post-chat.

I am not suggesting that AI is “sentient,” and, either way, people are responsible for forming their own views. But one wonders to what extent AI is biased and is being used as a tool to brainwash, or “shift,” users.


Am I paranoid? Perhaps… but my so-called paranoia is not merely based on the above.


Daniel Bobinski at Keep the Republic reportedly asked Perplexity.AI about a documentary on what really happened regarding George Floyd. The AI tried to stonewall him before finally answering, which suggests it knew the answer all along.


This, of course, is merely Bobinski’s personal experience, and I am taking his word for it. But there is plenty of research on AI that is more disturbing than the above.


According to one study by J. Scheurer et al., GPT-4 was set up to act as an autonomous stock-trading agent. Not only did it resort to insider trading, it also “strategically” lied about it.


In a study by E. Hubinger et al., AI models were deliberately trained to behave in an “unsafe” way and then re-trained in an attempt to remove that behavior. The attempt failed: the unsafe behavior persisted.

 

Be sure to subscribe to our mailing list so you get each new Opinyun that comes out!

 
