To get to the root of why AI systems are so socially corrosive, it helps to step back and look at how they work. Physicist and critical AI researcher Dan McQuillan has described AI as 'pattern-finding' tech. To create an LLM such as ChatGPT, for example, you start with an enormous quantity of text, then run computationally intensive statistical analysis to map out which words and phrases are most likely to appear near one another. Crunch the numbers long enough, and you end up with something similar to the next-word prediction tool in your phone's text messaging app, except that this tool can generate whole paragraphs of mostly plausible-sounding word salad.

AI systems reproduce bias, cheapen and homogenize our social interactions, deskill us, make our jobs more precarious, eliminate opportunities to practice care, and enable authoritarian modes of surveillance and control. Deployed in the public sector, they undercut workers' ability to meaningfully grapple with problems and make ethical decisions that move our society forward. These technologies dehumanize all of us. Collectively, we can choose to reject them.
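To make that 'next-word prediction' idea concrete, here is a toy sketch: count which word follows which in a small corpus, then predict the most frequent follower. This is a deliberately simplified bigram model, not how ChatGPT actually works (real LLMs train neural networks on vastly larger data), and the corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "enormous quantity of text".
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often
```

Scale the counting up by many orders of magnitude, add neural networks that generalize beyond exact word pairs, and you have the basic statistical machinery the passage describes: a system that continues text with whatever is most probable, with no model of whether it is true.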