Newsletter
The best groups have the strongest biases and other things
December 27, 2024
Hello,
Here’s everything since my last little missive to you:
The best groups have the strongest biases
Excerpt: There’s this cluster of classic social psychology experiments from the ’50s through the ’70s that you’ll be presented with in documentaries and whatnot whenever groups of people are behaving crazily. You’ve probably heard of some of them: Milgram’s ‘shock’ experiments, or Zimbardo’s prison experiment, or Asch’s conformity tests, and so on. This is the second in a three-part series on group dynamics. Here we’ll talk about what makes our attraction to groups stronger, what makes people participate in groups, and how all our group biases make sense in that context.
Main idea: The strength of our attraction to a group is a function of how different that group is from other groups in ways we feel we are, or want to be. Our participation in the group depends on how we see it benefitting us, and how we see ourselves benefitting the group. The stronger both are, the stronger our biases to stay engaged.
Teachers apparently unable to catch AI papers. Paper is here. This is both surprising and unsurprising. After a year of marking, I’d pick up about 10-15% of papers for AI-sounding writing. I assume more were written, just with more sophisticated prompting and editing, but the AI submissions in this paper used neither and markers still flagged only 6%. That doesn’t seem right. But equally, all the AI-related retractions from journals suggest this might be an outlier. I wonder if it’s a function of the quantity of essays you mark at collegiate institutions?
–
Fitbit heart-rate when wife asked for a divorce. It’s a graph. Delightful. For me, not for them.
–
How to judge AI performance. It’s notable that our method of telling which AI model is better than others is to test it on human assessments. But AIs aren’t human, and concentrating on how human-like they are seems like a good way to miss whatever problems they will actually have. Anyway, this paper reckons that it also makes us think AIs are less useful than they are:
We study how humans form expectations about the performance of artificial intelligence (AI) and consequences for AI adoption. Our main hypothesis is that people project human-relevant task features onto AI. People then over-infer from AI failures on human-easy tasks, and from AI successes on human-difficult tasks. Lab experiments provide strong evidence for projection of human difficulty onto AI, predictably distorting subjects’ expectations. Resulting adoption can be sub-optimal, as failing human-easy tasks need not imply poor overall performance in the case of AI. A field experiment with an AI giving parenting advice shows evidence for projection of human textual similarity. Users strongly infer from answers that are equally uninformative but less humanly-similar to expected answers, significantly reducing trust and engagement. Results suggest AI “anthropomorphism” can backfire by increasing projection and de-aligning human expectations and AI performance.
A very complicated way of pointing out that you won’t think AI is useful unless you figure out where it actually is useful, rather than trying to use it as a drop-in replacement for yourself. At least in the short term, anyway.
–
How class became aesthetic. A counter to the ‘Long March’ theory of wokism:
The universalization of wokeness and its institutional expression was not the result of a hostile ideological palace coup ala the ‘Long March’ narrative, but a reflection of the aggregate values of the predominant class interest.
I think this is only surprising to people who have not done any sociology. You would learn about Pierre Bourdieu, or whoever else, on social and cultural capital, and you would be very clear about the aesthetic dimension of class relations. You would then observe that the importance of aesthetics is growing and the importance of any given proper noun is declining. You would draw the obvious conclusion and not feel particularly inspired to write an article about how wet the water is. That said, you might feel inclined to write an article about how to hijack this to make your groups stronger. I did.
Still, the article is nice for those who missed sociology 101 or any kind of foundational social psych.
–
Academic writing is getting harder to read:
The Economist analysed 347,000 PhD abstracts published between 1812 and 2023 … a majority of English-language doctoral theses awarded by British universities … in every discipline, the abstracts have become harder to read over the past 80 years. The shift is most stark in the humanities and social sciences
A function of increasing specialisation, almost certainly. Even the articles I wrote to explain my PhD are hard to read (and were harder to write). What is particularly curious to me, in psychology anyway, is that despite this increasing specialisation, we have not progressed much since the early 20th century. Read William James on attention, then read the modern attention literature, and you will not really learn anything new, except perhaps that various brain things correlate with whatever James reckoned might be happening. So this isn’t just specialisation; it’s also a function of increasingly reified institutional ways of doing things, which hide the fact that little new is being discovered behind different and more complicated words.
–
I hope you found something interesting.
You can find links to all my previous missives here.
Warm regards,