
In this blog, I am going to review a new book released this past summer – one devoted entirely to AI.

“The Mind’s Mirror” by Daniela Rus and Gregory Mone

Daniela Rus is a professor and the director of the prestigious Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, and Gregory Mone is a renowned journalist and science writer. This book was written to demystify generative AI, and most of it is meant as a tutorial introduction to the subject.

The first part of the book, labeled “Powers,” explains what we can do with generative artificial intelligence (AI): gaining knowledge, gathering insights, improving creativity, marshalling foresight, mastering skills and gaining empathy. The whole section is peppered with use cases. Four billion people now have access to smartphones that can leverage this technology, and there is no doubt we are living in a revolutionary time.

The second part, “Fundamentals,” explains how generative AI technology is developed – particularly the development of large language models (LLMs) and what they entail.

Given my background, I found the third section of the book, “Stewardship,” most interesting. The focus of this section is AI governance: an examination of what can go wrong. From deepfakes to the accelerated creation of weapons of mass destruction, the threats are many. However, there are more immediate concerns that the research community and society need to address.

On the technical front, one of the core issues in training LLMs is the availability of large quantities of data – not just quantity but quality, along with transparency about the provenance of the data used. MIT has developed the VISTA simulator, a tool that generates data for end-to-end training of autonomous cars and addresses the long tail of improbable events that occur during driving.
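The provenance concern is easy to make concrete. Below is a minimal, hypothetical sketch in Python of how a team might catalog where each training dataset came from and under what terms; the record fields, dataset name and URL are purely illustrative, not a standard schema or any MIT tooling.

```python
# A minimal sketch of tracking training-data provenance with an in-memory
# catalog; all field names and the example entry are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetRecord:
    name: str
    source_url: str      # where the data came from
    license: str         # terms under which it can be used
    collected_on: date   # when it was gathered
    notes: str = ""      # filtering applied, known gaps, etc.

catalog: list[DatasetRecord] = []

catalog.append(DatasetRecord(
    name="driving-long-tail-v1",             # hypothetical dataset name
    source_url="https://example.org/data",   # placeholder URL
    license="CC-BY-4.0",
    collected_on=date(2024, 6, 1),
    notes="Simulated rare-event scenarios; filtered for sensor noise.",
))

# A simple audit: flag records with no licensing information before training.
for rec in catalog:
    if not rec.license:
        print(f"WARNING: {rec.name} has no license information")
```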

Next on the long list is the cost of training these language models. The latest models, with trillions of parameters, are beyond the reach of everyone except a handful of tech giants, and training them carries a huge carbon footprint. No one yet completely understands how these models work.

Two MIT CSAIL startups, MosaicML and LiquidAI, have targeted some of these technical challenges. MosaicML (now part of Databricks) focused on developing new models from scratch that have a much smaller carbon footprint yet are efficient at solving specific tasks. LiquidAI has a similar mission, focused on reducing the cost of deploying LLMs.

Distilling a larger model into a smaller one is another technique gaining traction. This is a continuing theme, with many smaller models outpacing their larger brethren. Allen AI has released smaller models trained on high-quality data that perform very well on standard benchmarks. By controlling the training process, one can also control the bias and toxicity that the data can introduce – one of the concerns in deploying LLMs in practice.
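To give a flavor of what distillation means in code, here is a minimal sketch in PyTorch under toy assumptions: synthetic data and two small networks standing in for the “teacher” (large model) and “student” (smaller model). The temperature and loss weighting are illustrative defaults, not values drawn from the book or from any particular system.

```python
# A toy knowledge-distillation loop: the small student model is trained to
# match the frozen teacher's softened output distribution plus the true labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic data: 256 samples, 20 features, 4 classes (purely illustrative).
X = torch.randn(256, 20)
y = torch.randint(0, 4, (256,))

teacher = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 4))
student = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 4))

# In practice the teacher is already trained; here we simply freeze it.
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0      # temperature softens the teacher's probability distribution
alpha = 0.5  # weight between distillation loss and hard-label loss

for step in range(200):
    student_logits = student(X)
    with torch.no_grad():
        teacher_logits = teacher(X)

    # Soft-target loss: student mimics the teacher's softened probabilities.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard-label loss: student still learns from the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, y)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The design choice worth noting is the temperature: dividing logits by T before the softmax exposes the teacher's relative preferences among wrong classes, which is much of the signal the student learns from.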

What is the societal impact of this emerging revolutionary technology? That is the focus of the book’s last chapters. As AI technology rapidly advances, it is capturing aspects of our thinking and reasoning, but it has not become self-aware or developed feelings or morality. Yet AI is slowly shining a light on our own minds and helping us understand ourselves better. In this sense, AI is a mind’s mirror: an incomplete one, but a reflection nonetheless. Hence the title of the book.

Retrospective

I have mixed feelings about books on AI, and particularly on generative AI. On one hand, this book by a renowned AI researcher and roboticist covers the material extremely well. It is meant to introduce the subject to the general population and highlight the potential for both good and harm. On the other hand, the technology is evolving so rapidly that no one can fully do justice to what is going on. I wonder whether the very idea of books in this field is obsolete. Should we be thinking of “living books,” ones that continually change as the technology evolves? And instead of buying a book, would we buy a subscription? Or get a warranty, good for a few years, that grants access to updates? Food for thought.

V. “Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.