35 AI Books You Should Read!
AI is a topic I have been working on and studying for a long time, since the 1980s. I confess I am an AI enthusiast, and not just recently. I remember that as a teenager I devoured books by Isaac Asimov, such as the famous "Foundation" trilogy and, above all, "I, Robot". "I, Robot" is a collection of short stories that became a milestone in the history of science fiction by introducing the famous Laws of Robotics and by taking a completely new look at machines. Asimov's robots have captured the minds and souls of generations of writers, filmmakers and scientists, and to this day they are a source of inspiration for everything we read and watch about robots. Then came Stanley Kubrick's unforgettable film "2001: A Space Odyssey" and with it HAL 9000 (Heuristically programmed ALgorithmic computer), a computer with advanced artificial intelligence installed aboard the Discovery spacecraft and responsible for its entire operation. Its dialogues with the actors left me truly impressed with what the future could bring us. When I read a paper about ELIZA, software created by MIT researchers, I saw that AI was possible, because already in the 1960s a system was able to interact reasonably with humans. I started to read every book on the subject, and in the mid-1980s I got approval to put an experiment into practice within the company I worked for.
At the time, the AI scene was divided into two lines of thought: one group adopted the rule-based approach, also known as expert systems, and the other was guided by the concept of neural networks. Neural networks looked very promising, but they lacked data, and the available computing power was vastly less than what we have today. I pragmatically opted for expert systems, as the development logic seemed more feasible to me: interview professionals who are experts in a certain area and codify their decision processes in a decision tree, with IF-THEN-ELSE rules. An expert system has two basic components: an inference engine and a knowledge base. The knowledge base holds the facts and rules, and the inference engine applies the rules to known facts to deduce new facts. The first difficulty was learning languages such as Prolog and Lisp, but once that barrier had been overcome, extracting knowledge from the specialists proved to be the real obstacle: precisely because they were specialists, they were in high demand and had little time available, much less for an experimental project. Furthermore, it was very difficult to translate their decisions, often intuitive, into clear rules to be placed in the decision tree. And as the specialists' knowledge accumulated, the process became more and more complex. In short, the system never worked properly and was discontinued. But the experience was worth it.
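The knowledge base plus inference engine architecture described above can be sketched in a few lines of modern Python (the systems of the era were written in Prolog or Lisp; the medical-triage facts and rules here are invented purely for illustration):

```python
# Minimal forward-chaining inference engine: the knowledge base holds
# facts and IF-THEN rules; the engine applies rules to known facts
# and deduces new facts until nothing more can be inferred.

def forward_chain(facts, rules):
    """Repeatedly apply rules to the fact set until it stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # IF all conditions hold THEN assert the conclusion
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base: (IF-conditions, THEN-conclusion) pairs
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
]

facts = forward_chain({"fever", "cough", "high_risk_patient"}, rules)
print("refer_to_doctor" in facts)  # True: deduced in two chained steps
```

The difficulty I describe above is exactly here: the code is trivial, but filling the `rules` list with a specialist's often intuitive decisions was the part that never scaled.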
In the last decade, AI has been reborn and the emphasis has shifted towards neural networks. We now have the two essentials: available computing power and plenty of data. In computational capacity, a simple smartphone has more computing power than the entire data center NASA had when it sent the first man to the Moon in 1969. And behind this smartphone we have computing clouds with almost unlimited capacity. In terms of data, we are now generating more than 2.5 quintillion bytes per day, and this number doubles quickly.
The inflection point for neural networks came in the mid-2000s with the research of Geoffrey Hinton, who discovered efficient ways to train neural networks with several layers. This enabled rapid advances in image and speech recognition algorithms. The term "deep learning" (DL) emerged, and today it is the basic engine behind the main advances in the field of AI. DL's concepts were the basis for the construction of AlphaGo, which beat the world champion of Go, a complex oriental strategy game, and later of AlphaZero, which taught itself to play chess in four hours and then beat Stockfish, the world champion chess software.
What do we see today? The rapid evolution of AI has impacts so significant that we do not yet grasp their full extent. We have no idea what our society, businesses and the job market will look like in 2050, but we do know that AI and robotics will change almost every way of working, transforming careers and professions as we know them today.
Yes, AI is already here. Algorithms are already with us and are spreading more and more, pervading every aspect of our lives. We must prepare for the changes that a world filled with AI algorithms will bring to society. However, AI does not need to be, and should not be, a zero-sum equation of humans versus AI, but rather humans plus AI generating more intelligence. Of course, for this we have to prepare ourselves: study, know and understand what AI is, along with its potential and its limitations. And there are many limitations! AI is not magic and still has much, much more to evolve. We are at the beginning of the learning curve.
Here is a list of books on AI that I recommend. I have read them all, and I believe they can help us learn more about what AI is and what it is not.
Let's start with the book "Deep Learning" by Ian Goodfellow et al. It is an excellent book for learning and understanding deep learning. Sejnowski's "The Deep Learning Revolution" shows us the evolution of DL algorithms. Unmissable. "The Master Algorithm", by Pedro Domingos, traces a journey through the five main schools of ML, showing how they transform ideas from neuroscience, evolution, psychology, physics and statistics into algorithms.
AI is becoming increasingly ingrained in our lives, as happened with the smartphone, which today is practically an organ of the human body. Asimov's 1950 book "I, Robot" was a landmark. The plot follows the character Susan Calvin, a robot psychologist who is being interviewed at the end of her life. She narrates the most important passages of her career in nine stories. From these particular cases, Asimov draws a future where machines make their own decisions and human life is unfeasible without the help of automatons.
The book also became a classic because it enumerates the Three Laws of Robotics: 1) a robot may not harm a human or allow a human to come to harm; 2) a robot must obey the orders of humans, except where such orders conflict with the first law; and 3) a robot must protect its own existence as long as this does not conflict with the previous laws. The rules aim at peace between automatons and biological beings, preventing rebellions. These guidelines should be kept in mind by AI researchers: the issue of ethics in AI cannot and should not be underestimated. It should, from the start, be part of solution design. Ethics by design.
The other day I was thinking about my grandchildren and what their future professional life would be like. I know they don't use, and never will use, a keyboard and mouse. And that they won't need to learn to drive. The Internet, apps and wearables are already part of their lives, and they will increasingly live in a digital world, with new social and behavioral habits. Eventually they will no longer use apps at all, replaced by virtual assistants that communicate via gestures and voice. But from then on everything becomes hazy. How can we prepare ourselves, and prepare our children, for a world with so many radical changes and uncertainties? A baby born today will be in his early thirties by 2050. If all goes well, he'll still be alive in 2100, and might even be an active 22nd-century citizen. What should we teach this baby to help him survive and thrive in the world of the 2050s or the 22nd century? What kind of skills will he need to get a job, understand what's going on around him, and navigate the maze of life?
We don't have answers to these questions. We have never been able to predict the future accurately, and today it is virtually impossible. And AI is already in our present. Imagine the future! We need to understand it and use it to its full potential, while recognizing its limitations.
Remember the movie Her, where the AI system, in the beautiful voice of Scarlett Johansson, understood everything Theodore said? Well, we are still far from this scenario. There are many well-evolved bots (and others not so much) that give the impression of understanding what we are saying. But when we look more closely at deep learning techniques, we see that current systems have inherent difficulties in understanding how sentences relate to their parts, such as words. This is the principle of compositionality: the meaning of a complex expression is determined by the meanings of its constituents and the rules used to combine them. All linguistic theories agree that one of the main characteristics of human languages is their ability to create complex expressions from simple linguistic units. AI has difficulty handling compositionality. It also has difficulty with the ambiguity that we humans put into our conversations. Does this mean that a machine cannot interact with us? Of course not, but it cannot hold a conversation at the level that we humans do. What is missing? Common sense. There are some good books on NLP techniques.
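The principle of compositionality can be made concrete with a toy example in Python: each simple unit has a meaning, and one combination rule derives the meaning of the whole from the meanings of the parts (the tiny vocabulary and the single rule here are invented for illustration; real linguistic semantics is, of course, vastly richer):

```python
# Toy compositional semantics: the meaning of a complex expression is
# computed from the meanings of its constituents plus the rule that
# combines them.

# Lexicon: each word denotes either a value or a combining function
lexicon = {
    "two":   2,
    "three": 3,
    "plus":  lambda a, b: a + b,
    "times": lambda a, b: a * b,
}

def meaning(phrase):
    """One combination rule: 'X OP Y' means OP applied to X and Y."""
    left, op, right = phrase.split()
    return lexicon[op](lexicon[left], lexicon[right])

print(meaning("two plus three"))   # 5
print(meaning("two times three"))  # 6
```

Notice that "two plus three" and "two times three" share two of three parts yet mean different things, because the combining rule differs; it is precisely this systematic part-to-whole relationship that current deep learning systems struggle to capture.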
Digging through my library stored on the Kindle, which now holds 404 books, I rediscovered a book I read back in 2011, "Final Jeopardy: Man vs. Machine and the Quest to Know Everything," which tells the story of Watson's development. I started to reread it because now, almost ten years later, with so much evolution happening at an accelerated pace in AI, it is good to remember pioneering cases. The moment Watson won the TV quiz show Jeopardy! is a milestone in AI history. Anyone interested in the evolution of AI and how it got here should give this book a read.
I enjoyed three books that look at AI from a business perspective. The first, "How AI Is Transforming Organizations", is a collection of articles published by MIT Sloan. I also recommend "Competing in the Age of AI", from the Harvard Business Review. "Rule of the Robots" argues that AI is an exceptionally powerful technology, a kind of "intelligence electricity" that is altering every dimension of human life. AI has the potential to help us fight climate change or the next pandemic, but it also has the capacity to do profound damage. Deep fakes, AI-generated audio or video of events that never happened, can wreak havoc across society. AI enables authoritarian regimes like China's to implement unprecedented mechanisms of social control. And AI can be deeply biased, learning biased attitudes from the data used to train its algorithms and perpetuating them. These are matters we need to understand so that we can use AI in the most appropriate and useful way for everyone.
The most technical recommendation is "Data Science for Business", a good book for business managers who lead or interact with data scientists and ML engineers and want to better understand the principles and algorithms available, without going into technical detail.
I read and reread Kai-Fu Lee's excellent book "AI Superpowers: China, Silicon Valley, and the New World Order". The book presents an interesting thesis, analyzing the evolution of AI through the lens of four waves: "Internet AI", "Business AI", "Perception AI" and "Autonomous AI". Each wave leverages the evolution of AI and disrupts business sectors. AI researcher Andrew Ng has said that "AI is the new electricity". About a century ago, we started to electrify the world through the electricity revolution. By replacing steam engines with machines that use electricity, we transformed transportation, manufacturing, agriculture, healthcare and virtually all of society. Now AI is at its tipping point, initiating an equally dramatic transformation in society. In 2021, Kai-Fu Lee published his second book, "AI 2041", in which he imagines the world 20 years from now, with AI acting pervasively, and how that would impact society.
And I recommend three excellent books that discuss the relationship between humans and machines. They are worth reading!
They help us form opinions to try to answer questions such as: How will AI affect crime, wars, justice, work, society and our very sense of being human? How can we increase our prosperity through automation without leaving people bereft of income or purpose? What career advice should we give kids today so they avoid jobs that will soon be automated? How do we build more robust AI systems, so they do what we want without the risk of malfunctioning or being hacked? Should we fear wars fought with lethal autonomous weapons? Will machines outsmart us at every task, replacing humans in the workforce and perhaps in everything else? Will AI help life flourish like never before, or will it give us more power than we can handle? What kind of future do you want?
One thought-provoking discussion is how far AI can go. Does AI understand what it is doing or not? AI is nothing new. The term was coined in the 1950s, has gone through ups and downs, and now, thanks to available computing power and a flood of data in digital form, it is starting to take shape. Today we are able to develop very sophisticated algorithms, and a specific form of AI, DL, has been the big bet for its evolution. For many, DL is the current state of the art in AI. It is true that seeing sophisticated DL algorithms outperform humans on very specific tasks makes us consider AI superhuman in every way. This is not true. What we still have is "narrow AI", which can do one specific thing very well but has no idea what it is doing. It is not aware, and therefore, in light of what we consider human intelligence, it is still very, very far from being intelligent. Machines have no consciousness either. When Watson won Jeopardy!, it didn't go out to celebrate with friends. When AlphaGo beat Lee Sedol at Go, it didn't have the slightest understanding of what it had done. It did what its algorithms had to do, and that was it. AlphaGo doesn't know how to do anything other than play Go; it doesn't even know how to play chess. This prevents us from using AI for activities that demand common sense, empathy and creativity. For example, in healthcare, a machine can do well at image analysis, but since it doesn't actually see, only processes pixels, it cannot replace the doctor in interactions where healthcare demands personalization and humanity. There is a phrase attributed to Einstein that is worth quoting here: "Any fool can know. The point is to understand." Some books discuss this subject and are worth reading!
A question that must be explored: the future of work in the AI era. As I noted earlier when thinking about my grandchildren, everything beyond the next few years becomes hazy. How can we prepare ourselves, and prepare our children, for a world with so many radical changes and uncertainties? What should we teach a baby born today to help him survive and thrive in the world of the 2050s or the 22nd century, and what skills will he need to get a job, understand what's going on around him, and navigate the maze of life?
We don't have answers to these questions. We have never been able to predict the future accurately, and today it is virtually impossible. Once technology allows us to design bodies, brains and minds, we can no longer be sure of anything, including things that previously seemed stable and eternal. We have already left the term science fiction behind. It becomes more appropriate to talk about scientific anticipation, as it is no longer a question of "if" something will be invented, but "when".
For example, the accelerating advance of automation and AI will greatly change today's professions. The impact of robotization reaching knowledge work radically changes our perception of automation. Before, there was a consensus that automation would only affect operational activities, such as production lines. But now we can see it at work in activities that are more mental than manual, which involve decision-making, and which traditionally belong to people with university education who occupy what is considered the upper professional stratum.
Does it seem impossible? Every day there is more evidence that this change is much closer than we thought. The day will soon come when automation can replace people in business decision-making. Machines will be able to replace managers who currently rely on instinct, experience, relationships and performance-based financial incentives to make decisions that sometimes lead to very bad results. This scenario will force many professions to change and, obviously, force us to redesign academic training to face this challenge. We are not really training people for the professions of the future. So what should we do? How about starting to analyze the topic in more depth? Some books debate the subject properly.
AI has very large disruptive potential, and therefore having a strategy is essential to face the challenges already at hand. But how do we design an AI strategy? AI will replace or modify professions that exist today, in addition to creating others. Generally speaking, AI draws on capabilities such as knowledge, perception and judgment to perform specific tasks that were once the exclusive domain of human beings. The question we must ask ourselves is where and how to apply these capabilities. Should we use them to create new products or offerings? To increase the performance of our products? To optimize internal business operations? To improve customer-facing processes? To reduce the number of employees? To free up employees to be more creative? The answers will come from our strategy for applying AI. There is no single answer, as each organization has its own strategy and pace of adoption. In any case, three pillars should underpin the strategy design: the level of knowledge and understanding of AI's potential; the capacity (talent) available to implement the concepts; and the organization's culture and its openness, or not, to innovation and experimentation. A first step is to study companies that can be considered "AI-powered organizations", those in which AI is at the heart of the business, such as Amazon and Alibaba.
A thought-provoking and controversial book is "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, in the United Kingdom. Although the topic may seem forbidding, it became a New York Times best seller. It debates the real possibility of the advent of superintelligent machines, and the associated benefits and risks. Bostrom notes that scientists consider there to have been five mass extinction events in our planet's history, when large numbers of species disappeared. The end of the dinosaurs, for example, was one of them, and today we may be experiencing a sixth, this one caused by human activity. And won't we be on that list, he asks? Of course there are exogenous risks, such as the arrival of a meteor, but he focuses on a possibility that seems straight out of a sci-fi movie like The Terminator. The book, of course, stirs controversy and seems somewhat alarmist, but its assumptions could come true. Some scientists echo this warning, such as Stephen Hawking, who said verbatim: "The development of full artificial intelligence could spell the end of the human race." And Elon Musk, founder and CEO of Tesla Motors, tweeted: "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes."
On the positive side, Bostrom points out that the creation of such machines could exponentially accelerate the process of scientific discovery, opening up new possibilities for human life. An open question is whether and when such intelligence will be possible. A survey of AI researchers suggests that a super-intelligent machine, or Human-Level Machine Intelligence (HLMI), has a 50% chance of appearing around 2050; for 2100, the probability rises to 90%! For others, this is just a myth. A good discussion.
So, folks, I hope you enjoy this bibliography as much as I did. These are thought-provoking readings that open up fantastic insights. Happy reading, everyone!
About the author
Head of CiaTécnica Research and Partner/Head of Digital Transformation at Kick Corporate Ventures. Investor and mentor of AI startups and member of the innovation boards of several companies. Over his career he was Director of New Applied Technologies and Chief Evangelist at IBM Brasil, and partner-director and leader of the IT Strategy practice at PwC.
He also held technical and executive positions at companies such as Shell and Chase Manhattan Bank. With a diverse formal education, a degree in Economics and a master's in Computer Science, he has always sought to understand and evaluate the impact of technological innovations on organizations and their business processes.
He writes regularly about information technology on specialized sites and publications such as NeoFeed, and speaks at renowned events and conferences such as IT Forum, IT Leaders, CIO Global Summit, TEDx, CIAB and FutureCom. He is the author of eleven books on subjects such as Artificial Intelligence, Digital Transformation, Innovation, Big Data and Emerging Technologies. Notable member of I2AI, advisor to EBDI, and guest professor at Fundação Dom Cabral, PUC-RJ and PUC-RS. Publisher of Intelligent Automation Magazine.
Artificial Intelligence for all
Free Introduction to Artificial Intelligence: join a free, 60-minute live class with Professor Alexandre Del Rey, president of the International Association of Artificial Intelligence.