Robot art, software gardening, a load of CRAAP – Marginalia 24

Barbara Fister recently published two windows into her thoughts on information literacy and the mis-information and dis-information economies. For a more general audience, I assume she didn’t choose the headline used by The Atlantic: The Librarian War Against QAnon. Of course, this is a topic that librarians had been working on well before the bizarre phenomenon of QAnon emerged. On the Project Information Literacy website Fister provides a slightly different version of the same message (and Lizard People in the Library is a much better title!).

Fister leaves us with a lot to chew over:

As the historian of science Jacob Bronowski wrote in 1973, “There is no way of exchanging information that does not involve an act of judgement.” We’ve grown accustomed to many of those acts of judgement being made by algorithms that have a commercial goal in mind.

So how do we make those judgements? And who gets to make them? As has been noted elsewhere, conspiracy cults like QAnon are as much about feeling a sense of control as anything else. From Lizard People:

Those who spend their time in the library of the unreal have an abundance of something that is scarce in college classrooms: information agency. One of the powers they feel elites have tried to withhold from them is the ability to define what constitutes knowledge. They don’t simply distrust what the experts say, they distrust the social systems that create expertise. They take pleasure in claiming expertise for themselves, on their own terms.

But wait, what does Fister mean about classrooms lacking information agency? Isn’t education all about providing students with information…?

While school-based efforts to promote information literacy are typically tied to producing information (college papers, digital projects, PowerPoint slide decks), students are not as frequently invited to reflect on how information flows through and across platforms that shape and are shaped by participatory audiences and influencers. They aren’t learning much about how information systems (including radio, print journalism, academic and trade-book publishing, television, YouTube, Facebook, and Instagram) make choices about which messages to promote and how those choices intersect with political messaging and the social engineering of interest groups.

Fister makes this same point from The Librarian War a little more pithily back in Lizard People:

Too often what passes as information literacy continues to be instruction on how to satisfy the requirements of assignments that may explicitly forbid students from using information that doesn’t pass through traditional gatekeeping channels.

But this is all just one librarian’s opinion, right? Well, a group from the Stanford History Education Group really put the boot into typical university library information literacy training in an October 2020 working paper, Educating for Misunderstanding: How Approaches to Teaching Digital Literacy Make Students Susceptible to Scammers, Rogues, Bad Actors, and Hate Mongers.

Based on a study of 263 students, but also reflecting on typical instruction by libraries across North America (Australian academic libraries take a broadly similar approach), this paper reports some alarming results alongside insightful observations. Students in the study struggled to identify problems with the sources of online information, and the Stanford researchers quickly realised that this was at least partially due to the very techniques for assessing information sources they had learned from librarians. The much-maligned and much-relied-upon “CRAAP test” comes in for particular criticism, but it is just one of many long-standing techniques that come under the SHEG blowtorch. This is compulsory reading for all academic librarians, and indeed for anyone interested in information literacy.

And speaking of standard practice in academia that is self-defeating, Utrecht University has formally abandoned “impact factor” in assessments of researcher performance. I will leave it for readers to decide what rating on the ironometer this gets for being published in the Nature blog.

Oh, irony? How many of the references in Niamh Quigley’s thesis on Open Access do you think were paywalled?

Well, we’re a way into this edition of Marginalia and I still haven’t mentioned the robot art or the software gardening. So here goes. ABC Science asked a new kind of AI art tool to make 'paintings' of Australia. They’re weird, but perhaps not as weird as you might expect.

“These programs have only existed since January,” [Katherine Crowson] said.

“We're not even a full 12 months into their existence and we already have art critics going around making sure everyone understands they're not to be considered 'real art'.

“Of course, this is about the surest possible sign something has become real art.”

Meanwhile, Seb Chan of ACMI is Opening a conversation on ‘soft tech infrastructure as community garden’:

As Dan Hill pointed out in his talk in the same session, gardens can be planned but not ‘controlled’. Gardeners ‘tend’ to gardens rather than ‘manage’ them. It helps that the relationship between gardens and gardeners is much more widely culturally understood than that between software systems and programmers. With gardens comes an explicit understanding of the need for continuous care and maintenance — this is the tending, and of lifecycles and evolution. A garden is understood to never stay still.

But how on earth are you supposed to make sense of all this? 3Blue1Brown has a great introductory video on neural networks which might help you to understand how the “robot art” works. And Loleen Berdahl tells us How to assess shiny new ideas and invitations on her Academia made easier Substack. It’s a useful little tool for next time you need to check whether the new thing you just heard about is really what you should focus on, or you’re just procrastinating.