I’ve had multiple discussions with people, typical western liberals, where they heavily relied on ChatGPT for their arguments and information.
I was explaining my perspective on Iran to someone who said they support a regime change operation there by the US. I told them how the protests started peacefully until a sudden, coordinated mob created destruction and violence, and how the riots ended after the Starlink tech was shut down by the Iranian government.
They told me my argument doesn’t hold water because they GPT’d it and it said everything I said was wrong. Is it just a lost cause to try to explain further at that point, or is there a way to break people away from LLM-ism?


They wouldn’t have listened to your argument before ChatGPT either; they’d have just linked to NYT or WaPo instead and accused your sources of being propaganda. I don’t think this is different, it just makes it even easier for them to turn their brains off.
This^.