'Artificial Intelligence: A Guide for Thinking Humans' by Melanie Mitchell

Reviewed by Pranav Minasandra

18 Dec 2023

Since OpenAI came out with ChatGPT last year, and especially since Elon Musk made Twitter the favourite shouthole for a certain unique breed of humans, Artificial Intelligence has entered the zeitgeist, and it looks like it's here to stay for a while. In this book, Prof Melanie Mitchell dives into AI with a perspective that lies somewhere between historical, pedagogical, introspective, and speculative.

Curiously, this book about AI follows the same trajectory that AI itself has followed through my life.

AI is a broad, nebulous term, and lots of stuff gets thrown together and bundled under its umbrella. The basics of AI consist of the math behind classification and regression, the functioning and training of simple neural nets, and the intuition that comes with thinking of data as a high-dimensional entity. My introduction to this stuff (often referred to as classical machine learning or pattern recognition) came from a course at IISc called Pattern Recognition and Neural Networks by Prof P S Sastry. NPTEL offers this course online for free, and it might be a good place to get started if you have a decent exposure to probability and linear algebra. In 2023, most of the material covered in courses like these would be termed outdated by proponents of more modern (or deep) learning approaches. I strongly think, however, that without exposure to this basic way of thinking about data, learning the fancy methods is pretty useless. Thankfully, pattern recognition is where Melanie Mitchell begins her book about AI.
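To make the flavour of this concrete, here is a minimal sketch (my own illustration, not from the book or the course) of the kind of thing such material covers: a logistic-regression classifier trained by gradient descent on toy two-dimensional data, where each data point is literally a vector in feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: one Gaussian blob of points per class in 2-D feature space.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(+1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression: learn weights w and bias b by gradient descent
# on the mean cross-entropy loss.
w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)           # predicted P(class 1) for each point
    grad_w = X.T @ (p - y) / len(y)  # gradient of the loss w.r.t. w
    grad_b = np.mean(p - y)          # gradient of the loss w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Essentially everything in classical pattern recognition is some variation on this theme: a parameterised decision rule, a loss, and an optimisation procedure.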

In the last decade, the development of powerful, parallelised computation has led to the emergence of techniques wherein huge data structures are initialised and an optimal (very large) set of numbers determined, letting these structures do cool things. Convolutional Neural Networks can do amazing things with image, audio, and video data; encoder-decoder systems can describe images as sentences; and today, transformers with (allegedly) billions of parameters power ChatGPT, which, for better or worse, has made its presence known in most people's lives. Mitchell writes about ConvNets and encoder-decoder systems in a pretty convincing, easily approachable way (her book pre-dates the ascent of transformers).
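For a sense of what "huge data structure plus an optimal set of numbers" means in practice, here is a minimal sketch of a tiny convolutional network and one gradient step, written with PyTorch purely for illustration (the shapes, layer sizes, and learning rate are my own arbitrary choices, not anything from Mitchell's book):

```python
import torch
import torch.nn as nn

# A tiny ConvNet: the "huge data structure" is just this stack of learnable
# weights; training searches for values that make the stack useful.
net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learn 8 local image filters
    nn.ReLU(),
    nn.MaxPool2d(2),                            # downsample 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # map features to 10 classes
)

print(sum(p.numel() for p in net.parameters()), "learnable parameters")

# One step of the usual optimisation loop, on fake data for illustration.
images = torch.randn(32, 1, 28, 28)             # a batch of 28x28 "images"
labels = torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(net(images), labels)
loss.backward()                                 # gradients for every parameter
with torch.no_grad():
    for p in net.parameters():
        p -= 0.01 * p.grad                      # crude gradient-descent update
```

The modern systems are vastly bigger and fancier, but the core recipe, a parameterised structure plus gradient-based optimisation, is the same.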

In 2019, as my career in research began to take off, I was offered a position at the University of Konstanz for my Master's. Working mostly remotely, I was to develop a machine learning system to recognize the behaviours of spotted hyenas from accelerometer data (now published). While I don't think of myself as an 'AI person', it is difficult to imagine a career for someone like me without knowledge of machine learning. In a different course I took at IISc, Prof Y Narahari claimed that, in the future, machine learning and game theory would be just as fundamental as calculus or trigonometry. This is starting to seem more and more like the truth. In her book, Prof Mitchell spends some time talking about how AI will affect the future: no, we won't be killed off by a superintelligence; yes, a lot of us will lose our jobs to AI; and so on.

Overall, Melanie Mitchell's book is a great read, thoroughly accessible if you are new to the field, and still quite engaging if you've been working with this kind of stuff for some time. The book is sane and grounded, and makes sense of the constant hype around AI that has drowned out rational conversation over the past few years. At the same time, it does not take the very cynical, reductionist view that some profess: that AI is just advanced statistics. This is like saying hands are just advanced feet: technically true, but hands are arguably more fun and more useful than feet. In a world where AI and related technologies are becoming ever more important, I think this book is an essential introduction for all thinking humans.