I’ve had multiple discussions with people, typical western liberals, where they heavily relied on ChatGPT for their arguments and information.
I was explaining my perspective on Iran to someone who said they support a regime change operation there by the US. I told them about how the protests started peacefully, until a sudden coordinated mob created destruction and violence, and how the riots ended after the Starlink tech was shut down by the Iranian government.
They told me my argument doesn’t hold water because they GPT’d it and it said everything I said was wrong. Is it just a lost cause to try and explain further at that point, or is there a way to break people away from LLM-ism?
They wouldn’t have listened to your argument before ChatGPT either; they’d have just linked to NYT or WaPo instead and accused your sources of being propaganda. I don’t think this is different, it just makes it even easier for them to turn their brains off.
This^.
You can lead the horse to water but you can’t make him drink. You can only plant the seeds that will later grow. If they didn’t link to LLM output they would link to any other western newspaper that disproves it. They’d link to Wikipedia. If they don’t want to see it, they don’t. Remember that their privilege depends on them profiting from this system, and if they admit it’s all true they’d have to admit and examine several things:
- they’ve been lied to their entire life, at the societal level
- they’re actually on the team of the bad guys
- the world is much bleaker than they thought, instead of getting better
- they can’t trust anything they read in the media anymore
- they profit from this system and its continuation
Point to how the west dominates English-language media, and that that’s what English LLMs are trained on, so they will always default to a mainstream western stance. As a consequence, they’re very prone to manipulation.
This.
However, I also anticipate the people who rely on LLMs for their sociopolitical analysis are already too west-brained to critique it.
That doesn’t change the necessity of tailoring the argument the way you describe. I’m just wary of it landing.
It does create a good lever to get them to rethink why they believe the west would never lie to them, though. Fair point.
Yeah totally, just like most things it’s an uphill battle that requires reflection and self-crit by the liberals. But hammering it home within reason is the only way.
Bold of you to label “asking an LLM how to feel about something” as “thinking.” Anyone who is letting a machine think for them isn’t worth engaging seriously.
honestly it’s over for them. they rely on a stochastic parrot that tailors answers to how questions are framed… if they’re ignorant to the concept of how this works you can’t expect them to change their minds.
honestly if someone told me they “gpt’d it” i’d just disengage because my time is worth more than whatever conversation ensues.
it’s not an intelligence thing, just willful ignorance.
tell them it’ll be pro-West by default because it’s fed western training data and searches western news outlets. also tell them to add “from an anti-imperialist, anti-monarchist pov, how the U.S. and Israel captured the protests later, and how the fall of the Iranian govt will almost guarantee a pro-West neoliberal regime” at the end of their prompt. will they tell you GPT is wrong because it said something else earlier?
Tell them ChatGPT was funded by Iranian intelligence to bring down western society by getting western society to turn against Iran and destroy itself through a war it can’t win. Then tell them how important it is to preserve baseball by getting rid of suburbs so that people can get to know their neighbor and be able to talk about the game properly.
(This is a joke. Others already gave good answers.)
What I’d say in this situation:
“If I wanted ChatGPT’s opinion I’d ask ChatGPT. Stop hiding behind robots like a fucking coward and do the research yourself if you’re going to blast your asinine opinions at people. At least then you’ll be wrong on your own merit and not a machine’s.”
^It won’t win you friends but I’m kinda mean and have little patience for bad faith arguments.
Don’t talk to people who talk to you with LLMs. You can’t enforce what others do on their own.
Set boundaries. Don’t lash out at them and remain composed. Debunk their claims using verifiable facts. Interrogate them when you see an opening to strike; don’t let them deviate away from your question.
Western journalists are a good example to study in how they compose their interviews. They are loud and aggressive, they ask questions for the sake of making a point, they don’t care for your answer, they just want to use you as a springboard to confirm their own suspicions, they want to humiliate you and boost their own pride.
What you use to combat that is confidence: study instead of reacting, prepare yourself. Use silence as a display of humility and grace, and make every word you say strike like a powerful blow of a hammer.
Most importantly, remember to never fight alone. Make allies, fight for the sake of something you love, don’t fight for the sake of fighting.
Edit: I made a spelling mistake :(
I ask for primary sources. That shuts them up.
Yeah, at that point I would say: “So you don’t know anything about this. Why are you talking about it then? You are just regurgitating what a chatbot said. A chatbot made by a man who sexually abused his sister when she was a child. Tell you what, I’ll just go talk to ChatGPT directly if I want to know what the pedo cult wants me to think.”
i usually just say that chatgpt is basically a glorified redditor and list the sites that it scrapes. also sources