Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Dr. Joy Buolamwini

Miniature R2D2 robot
R2D2, the token AI out to prove they're one of the good ones. Photo by LJ: https://www.pexels.com/photo/star-wars-r2-d2-2085831/

What are the limitations and perils in the current era of AI and what should we be doing to reduce harm?

Trigger warnings: racism, xenophobia

AI is the talk of the town, and when it comes to science fiction it's often a seed that germinates numerous post-apocalyptic and dystopian plots. I've read a few books on biased technology because it's an incredibly pertinent and important topic, especially as faith in the technology outstrips its ability. Even more so after numerous reports about Israel using AI technology to target areas for bombardment in the Gaza Strip, killing untold numbers of civilians and flattening apartment blocks indiscriminately.

Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Dr. Joy Buolamwini is categorized as non-fiction, but I'd say it's closer to creative non-fiction, akin to memoir: she tells the story of her life and personal experience as a framework for the information and data she presents, which makes it an incredibly easy and enjoyable read.

The book follows the life of Dr. Buolamwini, a first-generation Ghanaian-Canadian-American, from a child inspired by Kismet, an MIT robot, to pursue computer science studies through an academic career including a bachelor's degree, two master's degrees, and a PhD. During the course of her studies, while working on projects using AI, she discovered her face was invisible to an AI that could easily identify white faces — it could only see her when she put a white mask over her own face. What I found striking about the first section of the book was how much she just wanted to do a fun project and pursue her studies without getting involved in divisive topics, like the vast majority of people in college, but structural racism threw up a barrier she couldn't ignore.

From there she details her journey investigating AI bias by looking at how AI is constructed and identifying the bias in the data and algorithms that undergird the technology. But ignore my explanation, because hers is far more engaging. Dr. Buolamwini takes you by the hand, describes these systems clearly and in their most basic terms, then lets you follow her path to understanding through a series of anecdotes and metaphors.

I learned just as much about AI bias as I did about the influence of capitalism and racism on systems we're interweaving into our daily lives. She also makes great points about the harm of being excluded, where not being seen by an AI could make it impossible to get a loan, take an exam, access your apartment building, or get a job, and the harm of being included, which could lead to oversurveillance of BIPOC as well as misidentification that lands innocent people in jail.

Her research pushes back on the assumption that AI could and should replace other systems, such as hiring systems, apartment entry systems, and police surveillance, because the AI makes too many harmful mistakes. There's also a discussion about where AI gets its information and the ethics of scraping that information from the internet and other non-consensual databases without concern for privacy or compensation.

In short, it's fascinating stuff everyone should read. AI is interweaving itself into every aspect of our lives without our consent and is making billions and trillions of dollars off our data: the photos we upload, the art we post, the words we write. People may argue that there's nothing we can do about it, but that's bullshit. When cars came on the scene there were no seatbelts, no airbags, no crumple zones, no anti-lock brakes, no tempered glass, no electronic stability control, and no anchors for child safety seats. Are they in place now because car manufacturers give a shit about safety? Absolutely not — they're there for consumer protection, mandated by the federal government after people were harmed by the lack of safety protocols in cars. This is the same thing. The willingness of people to roll over because something is 'inevitable' is embarrassing and lazy.

Dr. Buolamwini is doing the hard work, with great flair and style. She understands the power of symbols and personal stories to leave an indelible mark. She's also a poet, goes by the moniker Poet of Code, and runs an advocacy non-profit called the Algorithmic Justice League. The book left me with a healthy skepticism of the benefit and accuracy of AI systems that want us to believe they are flawless and benevolent.

/rae ryan/