Marginalia 8 – Reality, kindness, and the machine
Last week I read Axel Bruns' Are filter bubbles real?, and whilst I admit I was a sympathetic audience, his arguments are pretty compelling. I've always been somewhat sceptical of claims about filter bubbles, but Bruns subjects the related concepts of filter bubbles and digital echo chambers to a much more rigorous analysis. Ultimately, he finds that they are, at best, grossly exaggerated and barely defined theories. Bruns' arguments reminded me of this brief 2018 article I've shared before, by Laura Hazard Owen at NiemanLab. She quotes Matt Grossmann:
“Media choice has become more of a vehicle of political self-expression than it once was,” Grossmann writes. “Partisans therefore tend to overestimate their use of partisan outlets, while most citizens tune out political news as best they can.”
It also reminded me that, in some ways, Safiya Umoja Noble's book Algorithms of oppression is a long complaint that Noble was promised a filter bubble that, despite her best efforts, never appears:
Of course, Google Search is an advertising company, not a reliable information company. At the very least, we must ask when we find these kinds of results, Is this the best information? For whom? We must ask ourselves who the intended audience is for a variety of things we find, and question the legitimacy of being in a “filter bubble,” when we do not want racism and sexism, yet they still find their way to us. Algorithms of oppression, Noble, p5
Grossmann mused that people may wish to claim they live in a filter bubble as a form of political self-identification. Bruns looks from the other side, pointing to the benefits of claiming that others live in a filter bubble:
In particular, even while ramping up their own social media offerings, mainstream news media have gladly accepted the idea of social media as echo chambers because it enables them to claim that, compared to these new competitors for the attention of news audiences, only the carefully researched and edited news published by established news outlets offers a balanced news diet that penetrates the cocoon. Are filter bubbles real?, Bruns, p8
This reminds me of nothing more than the glee with which some librarians have pounced on the idea that citizens are helplessly trapped inside a filter bubble, and need librarians wielding the sword of truth to burst it. Kevin Seeber delivered a wonderful broadside against this delusion in June last year, in a talk at the Canadian Association of Professional Academic Librarians Conference. Ben Jenkins wrote a shorter and somewhat more depressing analysis of why 'fact checking' is of limited value in his newsletter last December.
Edward Shaddow wrote in November of the continuing impact of newCardigan's visit to Incendium Radical Library. Shaddow did not attend the visit, but listened to the podcast recording later, and was struck by many of the same things as those of us who were there. In particular, Shaddow draws on his past work in a sexual health service to consider how library work could fit into the medical profession's idea of harm minimisation – particularly the model commonly used in alcohol and other drugs (AoD) support services. Sam Popowich took a very different approach but explored a similar question in Dialectics and social responsibility:
By thinking of [intellectual freedom] and [social responsibility] as distinct and different, rather than part of a single, larger, total system (which we might call “social justice”), libraries end up reproducing not only the values and structure of capitalism, but it obscures the real interrelationships of the social world of which libraries are a part. In this way they maintain and reproduce the very logic by which capitalism structures the social world, making it that much more difficult to change the world itself. By insisting that intellectual freedom and social responsibility are not two things, but one thing, we might go a long way towards understanding a kind of IF appropriate to the real social relations of capitalist society. And if we can do that, we might stand a chance of changing those relations, something that is impossible by treating intellectual freedom as an isolated, self-sufficient, well-defined set of principles and values, repressing the claims of social responsibility by drawing a neat dividing line between the two positions.
Angela Smith dissects the pseudo-politics of kindness to come to similar conclusions to Popowich:
Looking for solutions in the private sphere of demonstrations of affect and the registration of opinion instead of the public sphere of political processes and participatory action, drains political energies and fractures political will and collective strength.
And Kinjal Dave, in Systemic Algorithmic Harms, riffs on Noble's deliberate use of the phrase algorithms of oppression rather than bias:
when we say “an algorithm is biased,” we, in some ways, are treating an algorithm as if it were a flawed individual, rather than an institutional force.
But what are we to do in order to reduce harm, change these relations or blunt this institutional force? Many of us like to joke about wheeling out the guillotines, but like “draining the swamp”, or “blowing the place up”, it's not a particularly helpful conception of how to move forward. For some reason I keep returning to coral reefs as a metaphor. There are three things I find interesting about coral reefs: they are collective projects, they grow by slow accretion, and they're “alive”. Hence my attraction to sortition, which Tim Dunlop mused on last year. Dan Cohen talked about human agency in Humane Ingenuity 12, and agency is exactly what government through sortition brings back to democracy. Cohen talks about it in opposition to automation (thinking in particular of AI), but any system that denies humans agency is degrading – it's largely irrelevant to the victim whether the human decision to deny other humans their agency is embedded within computer code, legal code, or a guidance note for bureaucrats.
Like the animals that live on a coral reef, we are both individual beings living within multiple, stacked environments – and part of those environments ourselves. We act and are acted upon. We fight the machine at the same time we are part of the machine. I have long felt this particularly keenly as a part of the bureaucracy: for many years in local government and now within a university. Emily McAvan attacks this head on in a review of the classic film The Matrix on the occasion of its twentieth anniversary. In I would rather be a cyborg, McAvan explores what The Matrix meant then, and what it means now – its themes of posthumanism and transgenderism more relevant than ever, but also evolving in their meaning over time. McAvan has some ideas that might help us through our despair at the state of the world, when social and environmental revolution seems both necessary and impossible:
Posthuman theory has taught us that there is no real return to the ‘common sense’ of humanism – technology has removed that possibility. Early in the movie, Neo is told that ‘it sounds to me that you might need to unplug,’ yet that is, in the end, impossible ... Perhaps what The Matrix really teaches us, with its fight against AI and its ambivalence towards the virtual, is that we need to revolutionise our encounters with the machinic. To take the possibilities inherent in the virtual and radicalise them. How different would our world look if the apps we used, from Facebook to Uber, were socialised, created not to exploit workers and consumers alike but to benefit society?
Silicon Valley has been promising revolution and radical possibilities for decades. What they've given us is AirPods, surveillance capitalism, and Bitcoin. And yet, like Neo, we know how the system works because we are part of it. Of course we don't live in a Hollywood film, so there is no “The One” to save us. We have to dodge the bullets together.