Notes on what I've been reading

RS Benedict recently made the observation that in contemporary Western culture, Everyone Is Beautiful and No One Is Horny. Benedict is writing about Hollywood film. But she's also writing about the culture more broadly, and it's hard to disagree with her about the vacuity of hyper-capitalism and how it drains joy out of even our most elemental animal spirits:

In her blog McMansion Hell, Kate Wagner examines precisely why these widely-hated 5000-square foot housing bubble behemoths are so awful. Over and over again, she reiterates the point that McMansions are not built to be homes; they’re built to be short-term investments... The same fate has befallen our bodies. A body is no longer a holistic system. It is not the vehicle through which we experience joy and pleasure during our brief time in the land of the living. It, too, is a collection of features [which] exist not to make our lives more comfortable, but to increase the value of our assets.

Her piece starts with a reference to Paul Verhoeven's disturbing film “Starship Troopers” — a film I haven't watched for years, but which I imagine would strike me as less wryly funny now than when it was released, because it so accurately describes our current reality. Benedict's piece pairs well with David Roth's article published in The New Yorker last July, How “Starship Troopers” Aligns with Our Moment of American Defeat.

Once again, the present has caught up to Verhoeven’s acid vision of the future. It’s not a realization that anyone in the film can articulate, or seemingly even process, but the failure is plain: society has left itself a single solution to every problem, and it doesn’t work.

So how to break out of this cycle? One answer might be to pay more attention to The Intellectual Life of Kids, the topic of a great episode on KPFA's Against the Grain last week. Psychologist Susan Engel speaks about her research and her new book on the same topic, pointing out that the way we learn (as individuals but also as societies) is by constantly finding new things to be “surprised by”. I found this a really interesting way of thinking about the learning process, and it makes sense. Human brains enjoy novelty, and Engel seems to be arguing (amongst other things of course — her primary point is that children have intellectual lives that are very often under-appreciated by adults) that learning more and more specific detail about a particular topic is driven not by “interest” in an abstract way so much as by the desire to find new “surprises”.

Something that might surprise you is an independent American small-holding farmer arguing against the Farmers Market model, yet that's exactly what Chris Newman does in a piece published on Medium in 2019, arguing that “the romance of neoliberal peasant farming blinds us to our collective power”. As a Certified Middle Class Wanker I frequent farmers markets but have always felt slightly ambivalent about how they operate — at least in the Melbourne context. Newman's central argument is that Farmers Markets are extraordinarily inefficient, and that if the same farmers organised as a cooperative they could provide access to their produce where the customers are located while focusing their energy on the thing they like doing and are good at — farming — instead of working 100-hour weeks because, operating as independent consumer-facing businesses, they have to be their own freight carriers, retailers, social media managers, and so on. That is, Newman sees a viable model that is neither the somewhat neoliberal and very inefficient farmers market model nor the arguably highly efficient but extremely monopolistic and hyper-consumerist supermarket model: a producer cooperative. It's an intriguing idea.

The final little piece I wanted to share today is Mozilla's POP Your Event! guide:

For any project or event, it’s important to be clear on your purpose for the work, the outcomes you hope to see, and the process you’ll use to get there — before you get going.

I've been talking with some colleagues about improving some of our practices, particularly around that bane of office life, meetings. I think the simple POP model can also be useful for either running or avoiding meetings, if it's used to structure thinking around a particular need. If you think about a thing that needs to happen in a work context — “X” — that could be the purpose. Very often people skip immediately to “process” and on auto-pilot decide that the process should be “a meeting”. But if you think even just for a few minutes about the outcomes you want, often “a meeting” is clearly not going to deliver those outcomes. And if getting people together to discuss something is needed, you can use POP recursively to think (before the meeting!) about what the purpose, outcomes, and process of the meeting will be. I guess time will tell whether this helps in my own work context.

“How hard”, asked Josh Dzieza at The Verge last February, “will the robots make us work?”

While we’ve been watching the horizon for the self-driving trucks, perpetually five years away, the robots arrived in the form of the supervisor, the foreman, the middle manager. ...for workers, what look like inefficiencies to an algorithm were their last reserves of respite and autonomy, and as these little breaks and minor freedoms get optimized out, their jobs are becoming more intense, stressful, and dangerous.

Dzieza, like many others, surveys some of the most dystopian “algorithmically managed” workplaces, but his examples are certainly not the only ones. Callum Cant, on a recent episode of Paris Marx's Tech Won't Save Us podcast, talked about his book and what he learned from his experience working for Deliveroo in the UK — a job entirely driven by an opaque algorithm communicating via an impassive mobile app. Can't maintain the pace it sets? No more 'drops' for you.

In more “highly skilled” workplaces, the algorithms are, ironically, less sophisticated, and still used mostly to launder management decisions rather than completely replace bosses. In University.xlsx, Andrew Brooks and Tom Melick write:

[The university as spreadsheet] allows for some puzzling promises, such as a commitment to research without researchers and a dedication to teaching without teachers. Feedback is encouraged but never enters the spreadsheet itself. ...While the algorithms that work on Excel spreadsheets might remain relatively simple operations when compared with the machinic systems that sort and stratify massive data sets into perceptible patterns, it is important not to lose sight of their complicated effects. In the workplace, the classification systems that organise the structuring of data in the spreadsheet are determined by managers and productivity consultants, and to many extents dissolve into the daily tasks of management like sugar in tea. Similarly, the problems in need of solving or the forecasts in need of generating have been identified by the same players. Despite the appearance of scientific objectivity, the spreadsheet is always a product of judgement: some things enter the spreadsheet while others are discarded; some things are assigned value while others are dismissed as worthless.

All of this must be a shock to TechCrunch's Danny Crichton, who in 2014 heralded the dawning of a new age of worker liberation and happiness, declaring that “Algorithms Are Replacing Unions As The Champions of Workers”, and calling out fast-food delivery and university workers specifically as likely beneficiaries. It's hard to tell whether Crichton is extraordinarily credulous, or merely suffers from the myopia common to Silicon Valley vulture capitalists and their cheerleaders in technology “journalism”. Either way, he could hardly have been more spectacularly wrong. Just four years later, the tech workers who write the algorithms directing so many other workers were so fed up, they went on strike themselves — over company culture and management, not pay or hours.

Crichton was certainly aiming at the right target; he was just wildly off-base about how to hit it:

Perhaps most importantly, [under algorithmic management via platform capitalism] workers have the ability to develop their own personalities and brands, an issue that has deeply resonated with me in the past. One of the most insidious ways that employers prevent workers from advancing in their careers is preventing them from having their own voice and being recognized for their accomplishments.

But far from freeing workers to express themselves, algorithmic management has precisely the opposite effect. Dzieza writes about an application used in call centre work to measure and rank workers based on their “empathy”:

Workers say these systems are often clumsy judges of human interaction. One worker claimed they could meet their empathy metrics just by saying “sorry” a lot. Another worker at an insurance call center said that Cogito’s AI, which is supposed to tell her to express empathy when it detects a caller’s emotional distress, seemed to be triggered by tonal variation of any kind, even laughter.

This “affective computing” technology is the subject of Frank Pasquale's article More than a feeling. He's not a fan:

Much of affective computing is less about capturing existing emotional states than positing them. ...If institutions buy into these sorts of assumptions, engineers will continue making such machines that try to actualize them, cajoling customers and patients, workers and students, with stimuli until they react with the desired response — what the machine has already decided certain emotions must look like.

This literally inhuman oversight, far from allowing workers to “have their own voice and be recognised for their accomplishments”, does exactly the opposite:

Angela, the worker struggling with Voci, worried that as AI is used to counteract the effects of dehumanizing work conditions, her work will become more dehumanizing still.

“Nobody likes calling a call center,” she said. “The fact that I can put the human touch in there, and put my own style on it and build a relationship with them and make them feel like they’re cared about is the good part of my job. It’s what gives me meaning,” she said. “But if you automate everything, you lose the flexibility to have a human connection.”

The AI simply doubles down on all the terrible things about life under corporate control. Ingrid Burrington, talking to Inhabit: Territories about capitalism, supply chains, the COVID-19 pandemic and Jenny Odell, cuts to the heart:

One of the things that Jenny Odell gets across very well is that doing nothing is not about actually just stopping, or being useless or being lazy. It’s about being really clear about what you actually want and doing that thing instead of the thing you think you’re supposed to do, or the thing that meets someone else’s expectations.

Callum Cant ended his interview by outlining what on-demand food-delivery really is:

What is a service like [Deliveroo]? Functionally what is its core concrete nature? Well it's really care. What these platforms do is largely provide people who are exhausted from work, hungover, too depressed to go out to the shops, caring for children, with food quickly to their door. ...It's a care service that should be prioritised for people who need care. The actual use value here is “provide hot food to people who need it”... this should be one modality of a universal food service.

And Cant has a beautiful vision of what it could be:

Using food as a care service, providing it universally on a de-commodified basis, in delivery form to people who can't leave the house, in canteen form to those who can, and using that as a basis to rebuild our society.

Sounds pretty great to me.

With the utterly predictable politics of competing US-based media companies fighting a cartoonish proxy war in Australia, it would be easy to dedicate a whole Marginalia edition to that. But frankly, it's boring, and there are just too many people being wrong on the Internet for me to shed much light or add much value. So here are some things to read about information histories and futures instead.

In December Seb Chan shared Looking backwards to go forward — words from talks in late 2020. It's a really interesting read and provides a great tour of the last few decades in museum technology. But Chan also has some observations about maintenance and deep contextual knowledge that unfortunately apply to all cultural institutions and probably plenty of other organisations.

Those with any technical knowledge or experience know that infrastructure needs continual maintenance. Maintenance is unforgiving but is a necessary byproduct of any organisational innovation. Knowing exactly how much maintenance is going to be required by a new system or process requires skilled staff with a deep understanding of what has been made and why, and its lifecycle. If those staff have been let go, outsourced, replaced, then the amount of ongoing maintenance a system needs can be vastly misrepresented and misunderstood. Maintenance needs to be operationalised, and systems always worked on and adapted.

...Capital is easier to raise from funders whilst operations are virtually impossible to secure increased funding for. It can quickly become attractive, in the short term, to seek outsourcing as a way out. But once outsourcing begins, it starts an unstoppable process of skill erosion.

I too have seen this process over my career to date. In the case of libraries, it has included not just what most people think of as “technology” but even the “technical knowledge” that defines the profession of librarianship — the creation and maintenance of sophisticated metadata, ontologies, and new ways of organising and managing collections, whether physical or digital. Many days of the week it's pretty depressing, if I'm honest.

But sitting in my virtual pile of things to read or things I have read, there are some sparks of hope and intriguing possibilities for the near and far future. We can be informed by the past without wallowing in it.

Christian Lauersen wrote (also in December) about A new language for the value and impact of libraries, describing how Roskilde Public Library has used the Arts Council of England study Understanding the value and impacts of cultural experience in a Danish context. This looks like a really interesting way to both strengthen advocacy and keep track of progress towards multiple competing outcomes, which has always been a problem for libraries and especially for public libraries. I like the approach both in terms of how the reporting is done (in a visual chart that clearly shows where things may be unbalanced) and also the process of thinking through what it is that libraries do.

Another approach to solving knotty (and quite typical) challenges of advocacy and goal setting is the University of Western Ontario Library's approach to developing an Open Access statement. In 2018 Lillian Rigling, Emily Carlisle and Courtney Waugh shared the library's experience developing a “values based” statement by using Design Thinking principles. This is a really interesting article showing a very concrete real-life example of Design Thinking, and I really like how they centred the approach on the shared values of library staff rather than specific targets or techniques.

Last June, Dan Cohen shared some thoughts about “withness” in his fantastic Humane Ingenuity newsletter. I was really taken by the work of the Siempre Collective and how they thought through how to work “with the grain” of our humanity when designing group communication tools. Sometimes the answer is to go “low definition” in order to achieve more connection and lower our own bandwidth.

At last year's ActivityPub Conference there was a fascinating (for lots of reasons) talk about Supporting topic-based content syndication & discovery in a federated environment. One observer cheekily remarked that “librarians try to re-invent the Internet every few years”, and even though I doubt this work will ever be more than an interesting footnote, I really like that libraries and independent technologists are thinking about this sort of thing.

Finally, just as I was feeling quite despondent about the present and future of library technology, I sat down with a beautiful hardcopy of Logic Magazine's Care edition and read Rodrigo Ochigame's Informatics of the Oppressed. What an amazing article. Ochigame introduced me to Cuban information theory (led by Cuban librarians), something I'd never heard of before, and also to how “liberation theology” worked in practice. In the 1960s and 70s the Cubans were trying to resolve a problem that has become even more acute in the decades since:

...publication counts did not conclusively determine the “productivity” of authors, any more than declining citation counts indicated the “obsolescence” of publications... Traditional informatics was incompatible with revolutionary librarianship because, by treating historically contingent regularities as immutable laws, it tended to perpetuate existing social inequalities.

In Informatics of the Oppressed we are encouraged to consider the conversations the Cubans were having and the problems they were trying to solve, and how these approaches might inform our own behaviour in relation to modern information storage, retrieval and metadata management:

Whatever the merits and limitations of this particular mathematical model, the broader story of Cuban information science encourages us to be skeptical of the claims attached to models and algorithms of information retrieval in the present. If yesterday’s information scientists claimed that their models ranked authors by “productivity” and libraries by “effectiveness,” today’s “AI experts” claim that their algorithms rank “personalized” search results by “relevance.” These claims are never innocent descriptions of how things simply are. Rather, these are interpretive, normative, politically consequential prescriptions of what information should be considered relevant or irrelevant.

And finally, a call to action:

we must develop more critical methods of information retrieval, continuing the work that the Latin American experiments left unfinished. In short, we need critical search.

We do indeed need “critical search”. And who better to help build it than critical librarians? It was just what I needed to read.

It's probably quite foolish to write this in public — especially after such a long break between posts — but I'm hoping to turn Marginalia into a fortnightly publication. This probably will mean a slight tweak to how it's presented, but ultimately this was always supposed to be a kind of outlet for sharing articles, books, and my thoughts about them, in the way I liked to think I did on Twitter when I was a regular addict, uh, user of the site.

Anyway, on with the show...

In Marginalia 11 I wrote about mushrooms, mycelium, and (among other things) the way they complicate any attempt at neat taxonomy. Back in April last year, about 500 months ago, Uneven Earth published a piece on the only topic of conversation. What are viruses? Are they alive? What does that mean? What are limits? What can viruses tell us about them?

All tales told, viruses seem to fit the risk society pretty well. This is the idea that our contemporary society has shifted to an obsession with safety and the notion of risk, and has dramatically shaped its organisation in response to these risks. From a class society where the motto was “I’m hungry”, and where social struggles were organised around this, the risk society’s motto became “I’m afraid”. This created a different set of demands, mostly revolving around the need to feel safe.

Re-reading it now, ten months distant, the article seems more exploratory and intriguing to me. At the time I knew it was saying something important but it was impossible for me to process it properly. There was a lot to think about in April 2020.

Turns out hard-to-classify life forms aren't just good for helping us re-think our assumptions, or frying up to eat with butter and toast. Claire Evans reminds us that “computers are basically just smart rocks”, introducing the weird world of “unconventional computing”. This is computing via slime moulds or mycelium networks, instead of silicon and electricity. It might bring a whole new meaning to the idea of “server farms”. But Evans thinks this through further than just getting excited at the prospect of “greener” or “natural” ways to use computers in the same way.

computing is not so much an industry as a way of seeing — an interpretation of the world. “If we are inventive enough, we can interpret any process as a computation,” [Andrew] Adamatzky says. If you’re looking for a computer — even if you’re looking under a rock — a computer is what you’ll find.

This sort of thinking is why Randy Connolly suggested last August that Computing belongs within the Social Sciences.

To be deinon is to be both wondrous and terrifying at the same time. “There are many deinon creatures on the earth, but none more so than man” sings the chorus in Sophocles' tragedy Antigone.

Within computing we have generally only focused on the wondrous and have ignored the terrifying or delegated its reporting to other disciplines. Now, with algorithmic governance replacing legal codes, with Web platform enabled surveillance capitalism transforming economics, with machine learning automating more of the labor market, and with unexplainable, non-transparent algorithms challenging the very possibility of human agency, computing has never been more deinon.

This datafication of everything without thinking too hard about the consequences is of course not a new observation. Technology Review explored it seven years ago in The dictatorship of data. I also wonder if Connolly knows about ANU's amazing 3Ai.

One small but fantastic (in every sense of the word) project that sometimes turns computational and taxonomic thinking against itself is the Decolonial Atlas. Check out Britain as Palestine, or the Eora map of Sydney Harbour.

See you in a couple of weeks.

Over the last couple of months I've been listening to Mike Duncan's History of Rome podcast. I enjoyed Revolutions and thought maybe I should see what his first outing was like. I managed to get all the way to The Tetrarchy before I finally snapped. Duncan isn't a bad historian, but I just got a little bit sick of everything being viewed from the point of view that more people, land, wealth, and order inside the Empire equals good, and less of those things equals bad.

It might be the untimely death of David Graeber (more on which from me some time soon, probably) that has been weighing on my mind. Perhaps it's simply weeks on end of being confined to my suburb, one hour of exercise a day by law, watching the bunch of kleptomaniacs who rule over my part of the world shamelessly shove more of the country's wealth into the pockets of their mates, their families or — on the odd particularly brazen occasion — themselves.

Robodebt and threats of military strike breaking for us. Mining royalty holidays and tax deductions for them. In the words of the great philosopher Tony Abbott, as he sought to show his humanity to Australian soldiers after their colleagues were killed supporting our imperial masters: “Shit happens”.

Shit has been happening to machine learning algorithms lately. Whilst ethicists, social scientists, and anyone with the most basic understanding of how racist the average police force is have been fruitlessly pointing out to computer scientists for years that algorithms based merely on historical record keeping might be a little problematic, it seems that what has finally got them to sit up and notice is the complete collapse of “just in time” supply chains.

Much more interesting is the disorderly order of the “natural” world. In Europe, scientists have been studying what happens when they do nothing at all instead of “cleaning up” the carcasses of dead animals. It seems amazing that “modern science” has to “prove” things like this, after so many millennia of humans just taking it for granted that death is part of the cycle of life, but here we are. Meanwhile in Christchurch, millions of dollars have been spent on a losing battle to keep a reborn swamp “tidy” after earthquakes returned whole suburbs back to wetlands. The world is in chaos, but then the world is chaos — embrace it, because it turns out that the way epidemics end is that they don't really. There is no normal: neither old nor new.

No gods. No emperors. Never normalise.

Graham Lee's Five Computers is a short article, but I've not stopped thinking about it since reading it. There will be more to say about Five Computers on my blog, but I have some marginalia to mention here as well. Lee skilfully captures the essence of modern computing, and why this industry that is full of people who consider themselves hyper-rational actually makes very little sense.

There is really one central computing economy, controlling well over six trillion dollars of technology investment, with five different public faces...

...A tiny morsel of this multi-trillion-dollar planned economy is sent out to see where it sticks, and what new ideas the computing Gosplan department should factor into their forecasts.

Lee's reference to Gosplan is particularly delicious, though he's not alone in comparing Silicon Valley Venture Capitalist culture to the Soviet Union: Maciej Cegłowski has been doing so for years. We pretend to watch their ads, and they pretend to protect our privacy.

Of course, the Soviet Union wasn't the only experiment in socialist living. Over in Yugoslavia, they were running their own thing. In Tribune magazine, Michael Eby tells the story of “Socialism's DIY Computer” – the Galaksija. I'd never heard of this before, and the whole thing reads like a fusion of cyberpunk and steampunk:

Because all the day’s computers, including Galaksija, ran their programs on cassette ... the idea was that [radio show] listeners could tape the programs off their receivers as they were broadcast, then load them into their personal machines.

...[The host] would announce when the segment was approaching, signaling to his listeners that it was time for them to fetch their equipment, cue up a tape, and get ready to hit record. Fans began to write programs with the expressed intention of mailing them into the station and broadcasting them during the segment. In the case of games, users would “download” the programs off the radio and alter them—inserting their own levels, challenges, and characters—then send them back for retransmission. In effect, this was file transfer well before the advent of the World Wide Web, a pre-internet pirating protocol.

The first computer I ever used (and programmed!) was a BBC Micro, a rough contemporary of this period which also used cassette tapes as storage. Even so, it had never crossed my mind that computer programs could be broadcast over radio waves as a file transfer system 🤯. In principle I suppose this is not much different to WiFi, but using tape cassettes and AM radio to transfer computer programs was definitely a new one for me. I'm intrigued to know what it must have sounded like. Not, I'm guessing, quite like Ei Wada's barcode-powered dance floor bangers, but I feel like it might have the same energy.
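Out of curiosity, here's a minimal Python sketch of how that kind of audio encoding works, loosely modelled on the Kansas City Standard that many 8-bit home computers used for tape storage. To be clear, the frequencies, baud rate, and sample program below are my own illustrative choices, not the Galaksija's (or the BBC Micro's) actual format:

```python
import math

# Frequency-shift keying: each bit of each byte becomes a short burst
# of audio tone, which can be recorded to cassette or broadcast on AM radio.
SAMPLE_RATE = 44100   # audio samples per second
BAUD = 300            # bits per second
FREQ_ZERO = 1200.0    # tone used for a 0 bit (Hz)
FREQ_ONE = 2400.0     # tone used for a 1 bit (Hz)

def byte_to_bits(b):
    """Serialise a byte least-significant bit first, as most tape formats did."""
    return [(b >> i) & 1 for i in range(8)]

def encode(data: bytes) -> list[float]:
    """Turn a byte string into raw audio samples in [-1, 1].

    A crude sketch: the phase resets at every bit boundary, which a real
    encoder would avoid, but it shows the basic idea.
    """
    samples = []
    samples_per_bit = SAMPLE_RATE // BAUD  # 147 samples per bit here
    for b in data:
        for bit in byte_to_bits(b):
            freq = FREQ_ONE if bit else FREQ_ZERO
            for n in range(samples_per_bit):
                t = n / SAMPLE_RATE
                samples.append(math.sin(2 * math.pi * freq * t))
    return samples

audio = encode(b'10 PRINT "HELLO"')
# 16 bytes * 8 bits * 147 samples per bit = 18816 samples
print(len(audio))
```

Write those samples out as a .wav file and you'd hear the familiar warbling screech of an 8-bit tape load; decoding is just the reverse, measuring which tone is present in each bit-length slice of audio.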

While I was deep diving into alternative computing realities, I stumbled at some point upon the 100 Rabbits “tools ecosystem”. The 100 Rabbits duo spend their lives sailing the oceans, which sounded idyllic until I read about their most recent trip across the North Pacific when a capsize swept everything off the deck – including their solar power generator. For our purposes, however, the interesting part of their story ties back to the “Five Computers”. Our intrepid explorers simply don't have the energy or the bandwidth – literally – to operate within the “five computers” paradigm:

We had to adapt, to change our workflow. One big decision, was to scale our projects to the amount of energy we had available on the boat. This translates to shorter work hours, smaller projects (books, music etc) and making our own tools.

We made software that work offline, that use little power and that are good at doing one thing. This, recently, has evolved into coding our websites in C99, a language that is more resilient and light. We're also learning to code in Assembly, with the hope of making our games playable on older hardware like the NES and famicom.

I had never before really considered (yes I'm all too aware a theme is developing here...) the energy use of different computer programming languages. In the context of the existing climate emergency, this seems like something that everyone programming computers needs to at least consider. Intrigued, I fished around for some data and found an article suggesting that (SURPRISE!) Silicon Valley darling Ruby is pretty much The Worst at ...everything. Intriguingly for me, however, Rust – a compiled language I'm tentatively exploring – looks to be not only “safe” but also very energy efficient.

For some people, however, all this talk about file sharing over AM radio waves, programming in Assembly or (less hardcore) Rust, and showing your independence by using a recycled MacBook powered by solar panels is just completely soft. Why would you do that, when you could create a living computer made from bacteria?

Now that's green computing.

It's been longer than I thought since the last instalment of Marginalia. Time, as many have noted, has felt different during the COVID-19 pandemic: “March lasted a year and April lasted a week”. But some people think about time quite differently. Peter Brannen, for example, outlines in hilarious detail why he thinks The Anthropocene is a joke, in an Atlantic piece of the same name:

For context, let’s compare the eventual geological legacy of humanity (somewhat unfairly) to that of the dinosaurs, whose reign spanned many epochs and lasted a functionally eternal 180 million years—36,000 times as long as recorded human history so far... If, in the final 7,000 years of their reign, dinosaurs became hyperintelligent, built a civilization, started asteroid mining, and did so for centuries before forgetting to carry the one on an orbital calculation, thereby sending that famous valedictory six-mile space rock hurtling senselessly toward the Earth themselves—it would be virtually impossible to tell... The idea of the Anthropocene inflates our own importance by promising eternal geological life to our creations. It is of a thread with our species’ peculiar, self-styled exceptionalism—from the animal kingdom, from nature, from the systems that govern it, and from time itself.

Brannen's point is not that we are not destroying our species' ability to continue to inhabit our one and only planet. Instead, he is rather dismally pointing to a significant part of how we came to be doing so: hubris.

This is also a reminder (in case you were feeling too comfortable) that humanity is currently in the midst of at least two crises – just because COVID-19 has temporarily changed the lifestyles of some of us doesn't mean there's no longer a need for us to permanently change our societies to avoid the worst of anthropogenic climate change. Sorry to be a downer. We can start by linking the two, with Samanth Subramanian's piece on How the face mask became the world's most coveted commodity:

No object better symbolises the pandemic than the mask, and no object better explains the world into which the pandemic arrived. Social distancing, at first, felt like a strange notion: the inaction of it, the vagueness of it. But the mask sang out to our deepest consumeristic impulses. In the absence of a drug or a vaccine, the mask is the only material protection we can buy; it’s a product, and we’ve been trained like seals to respond to products. As a result, in every corner of every country, the humble face mask – this assembly of inexpensive plastic – has been elevated into a fetishised commodity.

Subramanian's article is a masterful exploration of the utter failure of international “free market” capitalism to respond in any meaningful or effective way to a public health crisis, and highlights the way that COVID-19 has simply confirmed what critics of unregulated trade have been saying for decades: concentrating manufacturing in a few places with the cheapest prices, obscuring what's happening behind a supply chain of nested contractors, and assuming that “the market” will magically match high-quality supply with actual demand is a house of cut-price cards that are actually tissue paper, resting on quicksand that is also flooding.

Speaking of capitalism being a complete clown car, two articles I read recently are even more revealing when you place them next to each other. Liz Pelly's Big mood machine takes a deep dive into Spotify's business model, finding that they have been remarkably happy to document how they eagerly tobogganed down the slippery slope of surveillance capitalism and mass manipulation of users in the name of a few extra advertising dollars:

In Spotify’s world, listening data has become the oil that fuels a monetizable metrics machine, pumping the numbers that lure advertisers to the platform. In a data-driven listening environment, the commodity is no longer music. The commodity is listening. The commodity is users and their moods. The commodity is listening habits as behavioural data. Indeed, what Spotify calls “streaming intelligence” should be understood as surveillance of its users to fuel its own growth and ability to sell mood-and-moment data to brands.

Pelly notes that Spotify, just like Facebook, has been all too eager not to merely monitor user emotions, but also to use their platform to change user emotions. Perhaps they should have used the Ethical Litmus Tests card deck when they were thinking about how to monetise, but I suspect it would have been far too late anyway.

Whilst Spotify's complete lack of moral compass and willingness to run experiments on unsuspecting music lovers certainly reinforces why universities have ethics approval processes, does it actually help companies to sell more stuff? That is, if Colgate runs ads when people listen to Chilled out study beats because Spotify has identified that “consumers” (🤮) are more receptive during that “emotional moment”, do they sell more units of toothpaste? Turns out the answer is ...probably not. Jesse Frederik and Maurits Martijn argue in The Correspondent that The new dot com bubble is here: it’s called online advertising. It's a long piece, but essentially Frederik and Martijn argue that the data-driven online advertising revolution ushered in by Google is a mirage, with the effect of online ads no more measurable than the effect of ye olde newspaper ads:

Companies are not equipped to assess whether their ad spending actually makes money. It is in the best interest of a firm like eBay to know whether its campaigns are profitable, but not so for eBay’s marketing department.

Its own interest is in securing the largest possible budget, which is much easier if you can demonstrate that what you do actually works. Within the marketing department, TV, print and digital compete with each other to show who’s more important, a dynamic that hardly promotes honest reporting.

This reminded me of Ellen Broad's 2018 piece in The Guardian, Are Cambridge Analytica’s insights even that insightful? Readers will be unsurprised that Broad's answer to her own rhetorical question was “Probably not”:

These days getting access to data is reasonably cheap and easy. Making bad predictions is also easy. Creating accurate, targeted predictions about what an individual is like and their “inner demons”, in a context where Facebook is also acting on that constantly, remains hard. It’s particularly hard when an organisation’s whole business model is based on keeping those predictions – and how they’re made – secret from the subject of the prediction. We’ve slid so easily into a world of secretive inference, we’ve forgotten that transparency and trust can sometimes get better, more accurate results.

Trust is on Darius Kazemi's mind too. In this wonderful talk from Eyeo 2019 he outlines why “Trust is not harmful” – which seems like it should be self-evident, but we live in a world where Bitcoin exists, so I guess not. Anyway, if you're unfamiliar with Kazemi's work this is a wonderful introduction, and a good outline of what trust and designing things to work within Dunbar's number can do that “operating at scale” can't.

To finish up today's Marginalia, I want to circle back to where we started (kind of). I've been meaning to read Troy Vettese's To freeze the Thames for many months, and I finally did last week. There's a lot of terrific (and terrifying) thinking in this piece, and you really should read it rather than rely on my very brief overview. If you need it: there's a pretty big content warning on this essay – it lays out a convincing argument that a quite extreme change in material living standards and behaviours is required if humans and many other species are to survive beyond the next century or two. Vettese says a lot, but in terms of ways forward he identifies two concrete ideas that have been proposed by other thinkers: the “half world” proposal, and the 2000-watt society. I'm a natural sceptic when it comes to plans that would require planetary cooperation, and Vettese makes no attempt to outline how humanity might achieve these goals. By chance, around the same time I read To freeze the Thames, I read a speech from Murray Bookchin that is still highly relevant to our times despite being delivered before I was born. Utopia, not futurism: Why doing the impossible is the most rational thing we can do is blunt, sometimes hilarious, and inspiring. Bookchin was very clear that human societies and non-human biological systems flourish together and are both threatened by capitalism and the state. Bookchin then was just as clear as Vettese is now that tech won't save us:

Another thing that troubles me very deeply is the enormous extent to which social ecology or ecological problems are reduced simply to technological problems. That is ridiculous. It’s absurd. The factory is a place where people are controlled, whether they build solar collectors or not. It makes no difference.

I do have some differences of opinion when it comes to some of the detail of Vettese's proposals (or perhaps he would call them observations). Most importantly, living in a settler colonial society has allowed me to (eventually) see that the idea of “nature” is a mirage. Whether in the Amazon, Australia, California or elsewhere, Europeans and their offspring settler colonial cultures have stubbornly refused to see what was right in front of their eyes – societies that managed to assert a high degree of control over the growth of useful plant and animal populations without resorting to Total War on the rest of the biosphere. It's as if a bunch of Cubists suddenly discovered Impressionist watercolours and declared that they must have created themselves in the absence of any artist at all. So 'half worlding' looks awfully like the creation of the United States National Park System all over again. Let's hope not.

In solidarity



I bought a copy of The mushroom at the end of the world last year, but I've been waiting for the right time to read it. It's referenced in a couple of other books I've read in the last year, and I had high expectations, so wanted to be in the right frame of mind to appreciate it. It doesn't disappoint. Anna Lowenhaupt Tsing takes us on a journey from Oregon, across the Pacific to Japan, through to China and on to Finland – in search of Matsutake mushrooms and what they show us about both modernism and capitalism.

It's an extraordinary book: an academic study that rejects the norms of academia; a story of international supply chains that ignores transport logistics to focus on individual workers; an examination of what Tsing calls 'pericapitalism' – salvage practices that sit both inside and outside capitalist modes of activity. What I want to focus on here, however, is Tsing's examination of scale, and names. Perhaps it was always there, but I've increasingly noticed over the last few years an obsession in both the tech world and in libraries with making projects and services scalable. This is generally understood to mean they are able to become bigger – deal with more users, or more data – without major changes to the underlying structure. But Tsing provides a more interesting take on scalability (my emphases):

Scalability is the ability of a project to change scales smoothly without any change in project frames. A scalable business, for example, does not change its organization as it expands. This is possible only if business relations are not transformative, changing the business as new relations are added. Similarly, a scalable research project admits only data that already fit the research frame. Scalability requires that project elements be oblivious to the indeterminacies of the encounter... scalability banishes meaningful diversity, that is, diversity that might change things.

I've been wary of prioritising making things “scalable” for a while, but couldn't quite place where my unease was coming from. Whilst it's implicitly obvious that “we can scale this up easily” means “we can have this system interact with more people/things without it being changed”, when Tsing made it explicit that this is what “scalability” means, a lot of things fell into place for me. But she also provides a warning about fetishising non-scalability and the idea that “small is beautiful”:

It would be a huge mistake to assume that scalability is bad and nonscalability is good... The main distinguishing feature between scalable and nonscalable projects is not ethical conduct but rather that the latter are more diverse because they are not geared up for expansion. Nonscalable projects can be terrible or benign; they run the range.

I think librarians often have problems understanding the appropriate scale to work with. There are many things we do in libraries at a very local scale, customised for local needs and operated as bespoke systems. A lot of them don't need to be small scale, and would be much more effective and sustainable if resources were pooled and more large-scale systems and processes used. But you know that story: library managers and consultants have been telling it for years, mostly as a way to reduce the expense of hiring people with specialist knowledge. In some ways the more problematic issue with scale is in the things where the inverse applies. The primary candidate (and sorry if you've heard this from me before) is the taxonomies and classification systems we use in library catalogues and discovery systems: Modernist creations like Dewey Decimal Classification (DDC), Library of Congress Subject Headings, and even Moys Classification are used internationally, even though they were not really designed for that. UDC is sometimes claimed to be the superior alternative to DDC due to its international design, but it's still a Modernist project claiming to 'cover the whole universe of knowledge' – a pretty hubristic statement.

This is where Tsing's notes on 'contamination' and names, slightly earlier in the book, become relevant (my emphasis):

We are contaminated by our encounters; they change who we are as we make way for others. As contamination changes world-making projects, mutual worlds – and new directions – may emerge. Everyone carries a history of contamination; purity is not an option... This changes the work we imagine for names, including ethnicities and species. If categories are unstable, we must watch them emerge within encounters. To use category names should be a commitment to tracing the assemblages in which these categories gain a momentary hold. Only from here can I return to meeting Mien and matsutake in a Cascades forest. What does it mean to be “Mien” or to be “forest”?

What would it mean to build library discovery technologies with “a commitment to tracing the assemblages in which categories gain a momentary hold”? What if we worked on identifying the points at which identities and categories interact and change, rather than having endless debates about which specific word to use when affixing a category to a particular person, concept or work? What if our systems revealed what happens to the identity of a pin when angels dance on one, rather than debating how many angels there might be?


How identities change through interaction was also on Kelly Pendergrast's mind in Who goes there, a piece for Real Life Magazine. In this rumination on how online security questions reflect the preoccupations and demands of the dominant class, Pendergrast puts a finger on some disturbing truths:

Security questions rest on unexamined assumptions about what constitutes identity, and what biographical details can be assumed as universal, private, and memorable to internet users. They are a form of quiet disciplinary power; specifically, they help enforce the hegemonic subjectivity required to provide an effective and sustained labor force.

I will resist the temptation to quote every second paragraph, but this is a great companion to Tsing's book when it comes to thinking about what identity means and how classification schemes both warp our sense of reality and act to communicate certain messages about who does and does not fit within the bounds of acceptability.

For something related but a little more lighthearted, you might like to read about the research project that set up adversarial AI systems to turn pictures of giraffes into pictures of birds.

If you want to map out how interactions can change taxonomies, or something else, you might like to look at drawio – a free open source diagramming tool that works in a browser. Or for mapping of a different kind, check out the amazing City Roads, where you can make city maps like this:

Map of streets of Hobart


Of course, The Virus has been on my mind as much as anyone's. You're probably avoiding reading about it by now, but I did think a few articles were worth checking out.

The first was in, of all places, The Chronicle of Higher Education. Mark today in your diaries because it may be the only time I ever recommend you read something from The Chronicle:

The answer to the question everyone is asking — “When will this be over?” — is simple and obvious, yet terribly hard to accept. The answer is never.

Why you should ignore all that Coronavirus-inspired productivity pressure is not unique in its main thrust, but does have some great thoughts from Aisha S. Ahmad, who has a lot of experience working in places that are experiencing or recovering from catastrophe.

Ed Summers has an interesting piece about archiving COVID-19, and how Jupyter Notebooks have the potential to change the way web archiving is done – not just on a technical level but also in terms of philosophy and theory:

The nice thing about working in Jupyter, rather than creating an “application” to perform the collection and present the results, is that notebooks center writing prose about what is being done and why. You can share the notebooks with others as static documents that become executable again in the right environment. I think we must consider adding to our toolbox of web appraisal methods, to do more than simply ask people what they think should be archived, and to factor in what they are talking about, and sharing. Using Jupyter notebooks could be a viable way of both doing that work and providing documentation about it.

Finally, as someone who suddenly found himself managing a remote team from his kitchen table four weeks ago, I really appreciated Mandy Brown's The hard way, which provides some advice to people – particularly librarians – suddenly thrust into this new world:

The first lesson on leading a remote team when the world’s on fire is the most obvious one, but it’s shockingly easy to miss: the operative phrase is not remote team but world’s on fire. This is true whether you’re a veteran of remote practices or you have recently had to instruct people in the correct use of the mute button on video calls.

I think I'm mostly getting things right, but it's hard for everyone right now.

Stay safe, I hope you get time to do some reading of your own.

I've recently been spending a lot of time finalising a coding project I've been working on for the last year, but I have read a few interesting things worth sharing.

Mike Jones is always asking interesting questions, and in Paths he asks plenty about the state of GLAM records and catalogues:

But do we need larger and larger aggregates with less description attached? Do we need more documentation produced from the perspective of the creator rather than considering subjects, communities, and users? As Michelle Caswell might ask, whose standpoint are we encoding through such an approach, and whose perspectives are being excluded as a result? When aggregating discrete records what pathways and relationships are missing from the map, and what stories, narratives, and connections are being lost? In relying on search technologies rather than rich, contextualised, relational description what are we making hard to find?

Mike Jones, Paths, 2019

On a slightly related theme, Dan Cohen noted in Humane Ingenuity 13 that archival research practices have rapidly changed, at least in US institutions, and he too is asking questions:

What happens when instead of reading a small set of documents, taking notes, thinking about what you’ve found, and then interactively requesting other, related documents over a longer period of time, you first gather all of the documents you think you need and then process them en masse later?

Dan Cohen, Humane Ingenuity 13, 2020

The answer, of course, is that we don't yet know. But it did make me wonder if there is a relationship with Franco Moretti's concept of Distant Reading – the 'Ship Map' referred to in Jones' paper is an example of 'distantly reading' shipping records. I've gone back to both of these pieces in recent days, thinking of course in the context of the current COVID-19 pandemic. Two datasets show how both close and distant reading of records can be useful, and the pandemic itself highlights some limitations of archives in general. Johns Hopkins University is maintaining a dataset and dashboard of COVID-19 cases and deaths, presented both as a global map as well as in a sort of macabre 'leader board' by country or jurisdiction. This provides an interesting 'at a glance' overview of the current state of the pandemic. In contrast, a video passing through Twitter shows a man comparing the Obituary pages of local Italian newspaper L'Eco di Bergamo a few weeks apart: two and a half pages in a 'normal' week, versus ten pages last week. This shows the very human impact of the pandemic on a particular town. These two datasets are about the same event, but tell the story in different ways. As Cohen comments, “For what it’s worth, I actually think that the new practice is neither better or worse than the old practice, but it is vastly different.”

A second point about the Johns Hopkins data is that it is official data: known cases, deaths, and confirmed recoveries. This dataset is useful, but it can only show what is recorded and reported. Early on in the pandemic, many people questioned the accuracy of both the Chinese and, later, Iranian official figures, claiming a cover up. More recently questions have been asked about the real state of the United States situation, with testing unavailable to many, and a President attempting to prevent a cruise ship from docking in the US so as to avoid the official number of US cases going up. In contrast, South Korea made testing widely available, likely recording many cases that in other countries would pass officialdom by. This merely highlights what people who have been subject to state control always say about official records and archives: they record what governments want to record, and can only ever reveal partial realities.

And speaking of COVID-19 testing: Victoria's DHHS has released an amazingly clear and helpful flowchart (pdf) to help those of us who have been completely confused about when and if testing is advised. Recommended reading!

Stay safe, and remember to wash your hands.

Owning things

In Marginalia 8, I wrote about coral reefs as a metaphor. I partially had Mike Jones' Descending upright among staring fish in mind, but also a piece Oliver Wainwright wrote in The Guardian called The case for ... never demolishing another building. I found Wainwright's piece fascinating: some of the ideas within it I find deeply attractive, and others profoundly repellent. What repels is not the ideas themselves but rather how they would be carried out in a capitalist society. Indeed, the whole article is deeply subversive: it's just that what it is subverting depends on how one looks at it.

Speaking to Dutch architect Thomas Rau, Wainwright writes:

Taking reuse to its logical conclusion, Rau sees a future where every part of a building would be treated as a temporary service, rather than owned. From the facade to the lightbulbs, each element would be rented from the manufacturer, who would be responsible for providing the best possible performance and continual upkeep, as well as dealing with the material at the end of its life. “Ownership blocks innovation,” he says. “Treating building elements as a service would remove planned obsolescence and increase transparency and responsibility.”

I'm not at all convinced this is true, at least in the current reality. The Internet of Shit provides ample evidence that late stage capitalism is perfectly capable of combining subscription services with planned obsolescence. And whilst technically “smart lightbulbs” are owned by the householder rather than rented, renting them would make little difference. I've had suppliers of complex electronic equipment effectively beg to replace equipment for no cost just so they didn't have to maintain an older system. And yet...

Naming things

Ownership blocks innovation is an intriguing statement. When we're attached to things, they can hold us back. Na’ama Carlin writes about this in Of the name, a lush exploration of naming, mental health, and many other things besides:

I don’t know how not to identify, but I know that we could try to allow ourselves the space to let go of all the names, markers, memories and pathologies we think make us who we are, and can pause and look at those around us—truly look—and let them look into us. We should take our stand in relation. And then we can breathe in the world and live.

Speaking of names, Robin Sloan recently published a great piece on software development as home cooking.

The exhortation “learn to code!” has its foundations in market value. “Learn to code” is suggested as a way up, a way out. “Learn to code” offers economic leverage, a squirt of power. “Learn to code” goes on your resume.

But let’s substitute a different phrase: “learn to cook.” People don’t only learn to cook so they can become chefs. Some do! But far more people learn to cook so they can eat better, or more affordably, or in a specific way. Or because they want to carry on a tradition. Sometimes they learn just because they’re bored! Or even because—get this—they love spending time with the person who’s teaching them.

This is the sort of coder I am too – and why I refer to myself as a 'coder' and not a 'programmer' or 'developer'. And speaking of coding, there is some interesting stuff coming in the next ECMAScript (JavaScript) release. Of particular note is Intl.RelativeTimeFormat, which allows developers to display a date in relative time according to the user's locale. Time is one of those things that seems simple until you need to program a computer, and from then on seems impossible to fathom. Up until now momentjs has been the go-to solution for tricky chronological problems in JavaScript, but baking some of the more common but confounding use cases into the language itself is a great step forward. Now we just need to wait a decade for browsers to catch up.

Waiting for things

If waiting for things frustrates you, I recommend Jason Farman's book Delayed response: the art of waiting from the ancient to the instant world. A meditation on waiting, Farman's book covers phone etiquette in Japan, pneumatic tube mailing systems in New York, and message sticks in Melbourne. Whilst the Melbourne chapter contains some inaccuracies that did make me question how true the rest of Farman's story is, it's still a really interesting book. Great for reading at the train station.
