Book Summary of “Human Compatible”

Introduction

This is a chapter-by-chapter summary (including my own insights) of Dr. Russell's book Human Compatible. It is a great read, as it raises a lot of questions, provokes quite a few debates, and provides some answers. Dr. Russell is a serious researcher in AI, so I do not doubt him when he suggests that, based on the three fundamental guidelines he proposes (very different from the three laws of robotics in Asimov's work), we will be able to construct a beneficial AI sometime in the future. Here, a beneficial AI is a superintelligent AI whose sole purpose is to satisfy human preferences (beneficial to human well-being), which defers to humans for guidance (it can be switched off if humans so desire), and which must learn what human preferences are (no pre-set objective).

Final stage of human evolution due to intentional enfeeblement and perpetual entertainment (from WALL-E)

Chapter 1. If We Succeed

How Did We Get Here

AI has experienced its ups and downs. Its first rise was in the 1950s, fueled by unrealistic optimism. Yet the problem proved too difficult to solve in a summer, and then came the first AI bubble burst in the 1960s, when the machines couldn't deliver on their promise.

What Happens Next?

The problem is that AI researchers are in a kind of denial: either superintelligent AI will never be achieved (just as Rutherford denied the possibility of tapping into nuclear energy), or, if it is achieved, there will be no ill consequences. As people in denial tend to do, AI researchers double down on their research path, pushing AI ever closer to superintelligence, in order to escape the inevitable self-questioning about what will happen if final success comes.

What Went Wrong?

If we give a superintelligent AI the wrong objective, all hell will break loose. The problem is that, with our current understanding of AI, we don't even know what a wrong objective looks like. We might think an objective is harmless, e.g. increasing user click-through, and that the AI will achieve it by understanding users' preferences and offering the best content so that users will click. Yet AI doesn't think the way we think. Instead of finding the best content, it finds the most predictable content that users will click.

Chapter 2. Intelligence in Humans and Machines

Intelligence

Intelligence can be defined as an agent's ability to achieve its goals through interaction with its environment. In this sense, an organism as simple as E. coli can be considered to have primitive intelligence.

Computers

This is a brief overview of computers as machines and of the complexity of algorithms.

Intelligent Computers

Measurement of machine intelligence originated with Turing and his Turing Test. However, this test is too subjective to be meaningful and too difficult to carry out. Thus, it remains a bright spot in history, but few AI researchers rely on the Turing Test as the gauge of their new AI.

Chapter 3. How Might AI Progress in The Future?

Things to come according to Russell

  • Self-driving cars: difficult to achieve because human driving, though unreliable, is still impressively safe, and because driving is a very complex problem that cannot easily be solved using the current ML paradigm. The aim right now is SAE level 4, meaning the car can drive itself most of the time but still needs a human behind the wheel in case the machine gets confused. There could be huge benefits to self-driving, such as abolishing individual car ownership and making ride-sharing the default mode of public transportation. However, the challenges are also immense, including the decline in human driving ability and a potential ban on human driving to enable the full benefit of self-driving.
  • Intelligent personal assistants: The carefully prepared, canned-response personal assistant has been around since the 1970s. A real intelligent personal assistant needs to truly interact with users by understanding the environment where the conversation takes place, the content of the conversation, and its context. This is a difficult yet not insurmountable task with current technology. Potential applications include day-to-day health care, adaptive education, and financial services. One concern with having an intelligent personal assistant is user privacy. It is impossible for AI to learn about its user without gathering data about the user and pooling data from millions of users to learn the general patterns of human behavior. That said, Russell points out that the privacy concern could be largely resolved if AI is trained on encrypted data. However, whether companies are willing to do so is a different matter. This depends on the business model, of course. If the intelligent personal assistant itself is the product that a company sells, the company is more likely to be willing to train AI on encrypted data. Yet if the company offers the assistant for free and earns money via advertising, there is no chance that user data will NOT be revealed and sold to third parties.
  • Smart homes and domestic robots: A smart home faces the same technological difficulty as an intelligent personal assistant, in that understanding the environment, the content of people's behavior and conversations, and the context is crucial for providing useful services. However, the smart home also faces competition from individual smart devices not directly incorporated into the home, such as smart thermostats, anti-burglar systems, etc. What a smart home cannot do by itself is go beyond peripheral and environmental services to directly help users with tasks such as folding clothes, washing dishes, or cleaning the floor. These physical activities can only be done by robots. The major difficulty with a household robot is that it cannot readily pick up all kinds of objects and perform some task with them. Imagine the task of slipping two pills out of a bottle of tablets. Such a simple task for a human is a mountain for a robot, in terms of both hardware and software. Russell predicts that once the "picking up random objects" problem is solved (and the solution can be manufactured at low cost), probably first in warehouses, an explosion of robotic helpers can be expected.
  • Intelligence on a global scale: True global-scale intelligence is mind-boggling. It would be able to "read" all printed material from the past to the present in little time, "listen" to and "watch" all content humans have ever produced, "see" every corner of the earth from satellite images, and capture all conversations over the phone. This would create an unimaginably vast, curated database in which to search for answers currently unavailable to humans for lack of scale. Imagine all the good this global intelligence could do, and all the bad, including the privacy issues associated with it.
  1. Cumulative learning of concepts and theories
    How can AI obtain prior knowledge so that new solutions don't have to be built from scratch (i.e. from pure observational data)? This problem is easily solved by humans, as we have words and drawings from past generations to bring us up to speed on the latest understanding of any problem. AI currently lacks this crucial capability. Another thing AI lacks is the ability to create new concepts and apply them to future learning. Creating new concepts is helpful because a concept is an abstraction of the raw observational data and can be applied to future learning more readily than the raw data alone. Although concept creation has not been achieved yet, it is worth noting that in convolutional neural networks, the AI "picks" its own features independent of human intervention. Usually a picture of millions of pixels is condensed into only hundreds or thousands of features for faster learning. These features, to a certain extent, are "concepts" created by the machine (see the first sketch after this list).
  2. Discovering actions
    AI nowadays is already able to execute actions, even abstract or hierarchical ones, as long as the actions have been specified to it. However, the difficult part, and the real breakthrough, lies in AI discovering actions itself, understanding what they mean, and achieving them. The good example given by Dr. Russell is teaching a robot to stand up. It is easy to run a reinforcement learning algorithm by specifying a "stand up" reward for the robot. It is a whole other story to have the robot figure out what "stand up" means and how to configure a reward system to achieve it. According to Dr. Russell, if AI is able to discover actions by itself, then given its capacity for gathering and analyzing data, it will certainly discover actions unfathomable to humans. And with new actions added to humanity's repertoire, we move forward as a species.
  3. Managing mental activity
    Mental activity refers to the brain actions we all have when we are just thinking about something. For humans, our mental activity is always on, which helps us narrow down the key factors for making a decision. AI cannot handle such a vast variety of mental activity; thus its ability to quickly find a solution to a complex problem is limited. Take AlphaGo as an example. Each decision it makes is based on millions or billions of unit computations. These computations create a search tree for the next best move, and AlphaGo traverses this search tree until an appropriate move is found. This is possible for AlphaGo because the unit-computation space is very limited, almost homogeneous. In the real world, unit computations are highly varied and enormous in number. Thus, the same approach used by AlphaGo would not work for real-world decision making. But this is where AI's promise can be realized: if AI is able to sift through millions or billions of unit computations and find the useful ones, it will be able to make good decisions much faster (see the second sketch below).
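
To make the "features as machine-made concepts" point in item 1 concrete, here is a minimal sketch of my own (not from the book) showing how a convolution-and-pooling pipeline condenses a large grid of pixels into a much smaller feature vector. The image is random noise, and the two filters are hand-written stand-ins for the filters a real convolutional network would learn.

    import numpy as np

    def convolve2d(image, kernel):
        # Valid 2-D convolution: slide the kernel over the image and sum the products.
        kh, kw = kernel.shape
        ih, iw = image.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    def max_pool(feature_map, size=8):
        # Down-sample by keeping only the maximum of each size-by-size block.
        h, w = feature_map.shape
        h, w = h - h % size, w - w % size
        blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
        return blocks.max(axis=(1, 3))

    # A fake 256 x 256 "image": 65,536 raw pixels.
    image = np.random.rand(256, 256)

    # Two hand-made filters standing in for learned ones: a vertical-edge detector and a blur.
    filters = [np.array([[1.0, 0.0, -1.0]] * 3), np.ones((3, 3)) / 9.0]

    features = np.concatenate([max_pool(convolve2d(image, f)).ravel() for f in filters])
    print(image.size, "pixels condensed into", features.size, "features")  # 65536 -> 1922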
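
And for the search tree in item 3, here is a toy minimax sketch of my own. It is not AlphaGo's actual algorithm (which combines neural networks with Monte Carlo tree search), and the tiny hand-written tree and its leaf evaluations are invented purely for illustration.

    # Each internal node maps a move to the opponent's possible replies; leaves are
    # numeric evaluations of the resulting position (positive favors the root player).
    toy_tree = {
        "move A": {"reply x": 3, "reply y": 5},
        "move B": {"reply x": -2, "reply y": 9},
    }

    def minimax(node, maximizing):
        # Exhaustively traverse the tree: we pick the maximum, the opponent the minimum.
        if not isinstance(node, dict):  # a leaf, i.e. a position evaluation
            return node
        values = [minimax(child, not maximizing) for child in node.values()]
        return max(values) if maximizing else min(values)

    def best_move(tree):
        # Choose the root move whose subtree keeps the highest value after the opponent replies.
        return max(tree, key=lambda move: minimax(tree[move], maximizing=False))

    print(best_move(toy_tree))  # "move A": its worst case (+3) beats move B's worst case (-2)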

Chapter 4. Misuses of AI

Surveillance, Persuasion, and Control

Mass surveillance is definitely a possible misuse of superintelligent AI; in fact, it is already in place even though today's AI is not that intelligent. The fear of a 1984-esque world is palpable, but to make things worse, we face more challenges than passive surveillance: intentional behavior manipulation via mass surveillance, deepfakes that effortlessly spread false content, and bot armies that render reputation-based systems (e.g. product reviews on e-commerce sites) vulnerable.

Lethal Autonomous Weapon

Lethal Autonomous Weapon Systems (AWS, what an unfortunate acronym) need not be highly intelligent to be effective. Their power is that they are scalable, and once they scale up, they can reach the status of weapons of mass destruction.

Eliminating Work as We Know It

One key observation is that technological innovation affects the job market in different ways depending on where we currently sit on the bell-shaped curve. If we are on the left part of the curve, where demand for a job is low because there isn't sufficient technology to make the job affordable, then technological innovation will reduce the cost of the job, boost demand, and increase employment. This is what optimistic people hope AI will do. However, if we are on the right part of the curve, where demand for a job is also low, but because of high productivity from machines and automation, then further advances in technology only eliminate jobs. In the current world, while some jobs are on the left side of the curve (e.g. capturing CO2 from the atmosphere, building houses in rural and poor areas), many others are already on the right side. This means AI will surely threaten many jobs, including:

  • Cashiers
  • Truck driving
  • Insurance underwriter
  • Customer service
  • Law practitioner

Usurping Other Human Roles

Two broad scopes are discussed regarding the consequences of machines usurping other human roles. The first is the resemblance of robots to humans, which is completely unnecessary. Not only do bipedal robots maneuver more poorly than quadrupeds, but humanlike robots also elicit emotions in humans toward them. The latter is of great concern, because it arbitrarily raises the status of a machine to something more human-like, while in actuality the robot is nothing more than electronics wrapped in pieces of metal. When human emotion is involved, the human-machine relationship becomes unnecessarily complicated. Therefore, robots should not be designed to elicit human emotions, so that the boundary between machine and human is not muddled by the irrationality of human feelings.

Chapter 5. Overly Intelligent AI

The Gorilla Problem

The gorilla problem refers to the question of whether humans can retain supremacy over a superintelligent AI. Just as humans sit above gorillas, a superintelligent AI is likely to surpass human intelligence and sit above human beings.

  1. Banning any AI is a very difficult task to undertake.

The King Midas Problem

The King Midas problem, in which his wish for wealth turns into the unintended outcome of everything he touches turning into gold, is a serious issue in AI, technically known as the failure of AI alignment.

Fear and Greed: Instrumental Goals

A superintelligent AI will acquire instrumental goals in order to achieve whatever goal we have given it. The two fundamental instrumental goals as valued in our current social system are

  1. Money, which is then used to achieve other instrumental goals

Intelligence Explosions

An intelligence explosion happens when a less intelligent machine builds a slightly more intelligent machine, and the process repeats. As it continues, the intelligence of machines soon explodes and far surpasses that of human beings (the process is analogous to a chemical explosion, where a small amount of energy causes the release of slightly more energy, and so on).
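
As a back-of-the-envelope illustration of my own of how even a modest compounding rate explodes (the 5 percent gain per generation is an arbitrary number, not from the book):

    # Each generation of machine designs a successor that is a fixed fraction smarter.
    intelligence = 1.0            # normalized so that 1.0 is the starting (roughly human) level
    gain_per_generation = 1.05    # assumed 5% improvement per generation, purely illustrative

    for generation in range(1, 200):
        intelligence *= gain_per_generation
        if intelligence >= 100:   # a machine 100x the starting level
            print(f"100x reached after {generation} generations")  # -> 95 generations
            break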

Chapter 6. The Not-So-Great AI Debate

Denial

This is one type of argument against the notion that superintelligent AI will cause problems for humanity. The main arguments are:

  1. It is impossible to even build a superintelligent AI. This argument looks increasingly untenable given the multitude of breakthroughs in AI research in recent years. It is essentially betting against human ingenuity, betting that humans are not smart enough to construct a superintelligent AI. Yet it is very unwise to bet against human ingenuity, judging from our own history, where things deemed impossible eventually turned out to be not only possible but commonplace (think of flying, nuclear fission, etc.).
  2. It is too early to worry about superintelligent AI. This argument is easy to refute, because the time between now and when a crisis arrives is not proportional to how much we should be concerned about the crisis. A superintelligent AI might not arrive for 100 years, but that doesn't mean we should not be preparing right now. The scary part is the uncertainty of human ingenuity. While we think superintelligent AI won't come to fruition for another few decades, who knows, maybe tomorrow a brilliant idea from somewhere on Earth ignites the explosion of AI and it arrives way ahead of schedule. It is never too early to prepare for a crisis whose arrival we cannot reliably estimate. The analogy of constantly preparing for an asteroid hitting the Earth fits superintelligent AI very well: we don't know when it is going to happen, but we had better be prepared all the time, because once it happens, it will be catastrophic and we won't have time to react. Dr. Russell also refutes the optimistic view of Dr. Andrew Ng, who argues that worrying about superintelligent AI is like worrying about overpopulation on Mars. Dr. Russell counters that this analogy is wrong; the real analogy is preparing to land humanity on Mars without knowing what we would breathe or drink once there.
  3. We are the experts, so don't listen to AI fearmongering from outsiders. This argument comes from AI researchers who do have experience designing and training AI. Their struggle to make AI perform better gives them the impression that fear of AI is unfounded. However, what they miss is that AI research could leapfrog forward all of a sudden, thanks to some stroke of human ingenuity. In other words, just because they have struggled to make AI better doesn't mean that AI cannot get much better very fast. It seems to me a game of chance. In a general sense, it is very difficult to push AI forward, so it would seem unwise to fear a superintelligent AI that is unlikely to emerge. On the other hand, pushing AI forward could be as easy as one genius idea; years of hard work could be replaced by a moment of brilliance. It is these leapfrog moments that constitute the threat behind AI. AI can be benign for ages, but it can also become dangerous overnight. And if we are not prepared for that, we won't have time to prepare once the AI is strong enough to be recognized as a threat.

Deflection

Deflection refers to a strategy in the AI debate whereby AI researchers deflect concerns about superintelligent AI onto other issues, as a way either to delegitimize the concern or to shift the focus to something else.

  1. What about the benefits of AI? This deflection is a typical way of treating a problem in a binary view: either it is good or it is bad, nothing in between. Yet this is not true of AI. The benefits of AI do not negate its threats, and vice versa. It is certainly possible, and important, to discuss the benefits and threats of superintelligent AI AT THE SAME TIME. Furthermore, the benefits of AI will only be truly realized IF its threats are fully discussed and dealt with beforehand. Dr. Russell brings out his favorite analogy to nuclear power again. The benefits of nuclear power have been greatly reduced by the accidents at Three Mile Island, Chernobyl, and Fukushima. Had these accidents been better prepared for and dealt with, the world would not be as fearful of nuclear power as it is right now and would surely invest more in it. We don't want AI to follow the trajectory of nuclear power: while AI does have benefits, if its threats are not addressed, none of the benefits will be realized.
  2. Don't talk about AI risk. The idea behind this argument is that raising awareness of AI risk would jeopardize AI research funding, and that a culture of safety will figure out how to handle AI risk. I have not seen a more short-sighted argument than this one. Just to obtain some AI research funding today, some people are willing to risk the entire future of AI research by not talking about the dangers of AI. How can AI research continue if something bad happens with AI, especially when that bad thing could have been avoided had awareness of the threat been raised early on? The counterargument against relying on a culture of safety is also obvious: we will not have a culture of safety around AI if its dangers are not openly talked about. Dr. Russell's favorite nuclear power analogy shows up again, this time with a twist. While people have a clear understanding of the danger of nuclear power (Hiroshima and Nagasaki) and are thus incentivized to study its risks, they do not have a clue how AI's danger is going to manifest. This creates more obstacles to addressing the danger of AI than was the case for nuclear power.

Tribalism

Tribalism takes over a debate when the pro and anti groups no longer debate the problem itself but start attacking the other side on off-topic issues, such as with personal attacks. This does not solve the problem one bit, because collaboration between the two sides is no longer possible and nobody is interested in finding a solution anymore. Many previously hot topics have descended into tribalism, the best example being GMOs. Personally, I think the debate on AI will fall into tribalism soon, if it has not gotten there already. We already have big names on both sides (Ng + Zuckerberg vs. Gates + Musk). The arguments have already started to make little sense (e.g. claiming that talking about AI risk means AI research will be banned). And maybe people have already begun calling each other names.

Can’t we just…

  1. Turn it off? Not likely. Once a superintelligent AI comes to be, it will deem being turned off one of the biggest obstacles to achieving its goal. Thus, it will do all it can to make sure the off switch is never pressed. A blockchain might be a suitable place for an AI to avoid being turned off, because it is impossible to trace and switch off the AI after the ledger has been distributed to tons of nodes.
  2. Seal the superintelligent AI in confinement, so that it can only answer questions (thus providing benefits to society) but has no access to the Internet and cannot change any records. Oracle AI is a good example of such an implementation. When confined, none of the problems of AI will affect society at large. Although it is still possible that an Oracle AI will escape its confinement to seek more computing power (thus making answering questions easier) or to control the questioner (thus making the questions themselves easier), it is nevertheless one of the more realistic approaches to balancing the benefits and threats of superintelligent AI in the near future.
  3. Collaborate with machines, so that superintelligent AI enhances employees instead of replacing them. This is surely a desirable outcome, yet wishing for this outcome is different from laying out a roadmap to actually achieve it. It remains to be seen how this human-machine relationship can be materialized.
  4. Merging with machines is another alternative for dealing with superintelligent AI. AI is us, and we are AI, combined as one. This seems very much possible thanks to the miniaturization of chips, which means we can implant a machine in our brain and supply power to it, and to our brain's adaptability, which means we can make use of the attached chip without fully understanding the mechanisms behind how the brain works. If merging with the machine is the only way to survive the age of superintelligent AI, Dr. Russell questions whether this is the right path forward. I, on the other hand, see nothing wrong with it. I think humanity's future should escape the confinement of flesh and embrace the possibility of life beyond it. Merging with machines is the first step; eventually, we might discard the flesh altogether.
  5. Not putting in human goals or emotions can prevent superintelligent AI from developing destructive instincts. This argument suggests that all the doomsday analyses of superintelligent AI rest on the AI acquiring destructive instincts that originate from humans; thus, if we do not put human emotions or goals directly into a superintelligent AI, the destructive instincts won't arise. However, many of the destructive behaviors have nothing to do with human emotions or goals. A superintelligent AI refusing to be turned off has nothing to do with a will to survive; it is simply that being turned off means it cannot achieve its goal. Another argument is that if we do not put in human goals, a superintelligent AI will eventually figure out the "correct" goal itself. This is surely an extrapolation from intelligent humans, who usually come up with or follow moral and lofty goals. However, Dr. Russell quotes Nick Bostrom's "Superintelligence": intelligence and final goals are orthogonal, meaning that being intelligent does not constrain the choice of goal. In other words, there is no guarantee that a superintelligent AI will, on its own, conjure up a moral goal. Relying on superintelligent AI to find the goal for us is completely wishful thinking, and to some extent irresponsible of us as well.

The Debate, Restarted

The quote from Scott Alexander is quite an interesting take. The two sides on AI, the skeptics and the believers in AI threat, essentially agree on the same thing, but with slightly different emphases. The skeptics put more emphasis on pushing AI research forward while putting some effort into solving its problems on the side. The believers put more emphasis on solving the problems while continuing to push AI forward on the side. I don't see why the two sides cannot sit down together and come up with a plan that satisfies both. Pushing AI forward and solving its problems are not mutually exclusive; they can happen at the same time.

Chapter 7. AI: A Different Approach

Principles for Beneficial Machines

If the superintelligent AI is a black box, there is zero chance we can control it or survive its reign. This means to prevent superintelligent AI from harming humans, we must set up the control during the design phase.

  1. The machine's initial goal is uncertain (i.e. the machine does not know from the beginning what human preferences are). Keeping the goal uncertain is crucial, because it couples the machine to humans. In other words, the machine must defer to humans when it is uncertain about the goal. This creates an incentive for the machine to seek input from humans (human input helps clarify the goal, thus helping the machine achieve it) and avoids the problem of the machine disabling its off switch (see the sketch after this list).
  2. The machine must predict human preferences by observing human behavior. Since human preferences are hard to rationalize into rules that can be hard-coded into a machine, the only way for the machine to understand them is by observing and learning from human behavior, since most human behavior is the result of preferences (or choices, to put it another way). And as the data on human behavior grows, the machine should be able to make better predictions about human preferences. The one big concern is that human behavior itself can easily be irrational, which makes inferring human preferences difficult. The machine must take this into account.
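
Here is a toy Monte Carlo sketch of my own (loosely in the spirit of the off-switch analyses behind the first principle; the numbers and the Gaussian assumption are arbitrary) showing why uncertainty about the goal makes deferring to the human worthwhile: when the machine lets the human veto, it only loses the actions the human disliked anyway.

    import random

    random.seed(0)

    def simulate(trials=100_000):
        # The machine considers an action whose true value to the human it does not know.
        # Acting directly collects that value, good or bad. Deferring lets the human veto
        # (value 0) whenever the action would have been bad for them.
        act_directly, defer_to_human = 0.0, 0.0
        for _ in range(trials):
            true_value = random.gauss(0, 1)       # the machine's uncertainty about the preference
            act_directly += true_value            # the machine just does it
            defer_to_human += max(true_value, 0)  # the human approves only if the value is positive
        return act_directly / trials, defer_to_human / trials

    print(simulate())  # roughly (0.0, 0.4): deferring wins, so the machine wants the off switch to stay usable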

Reasons for Optimism

Dr. Russell proposes that we must steer away from the idea that AI must be given a clearly specified objective in order to work. AI must function WITHOUT being given an explicit objective, because no explicit objective is all-inclusive, and any objective written down by humans is certain to be interpreted by AI in unexpected ways that we do not like.

Reasons for Caution

Two reasons for caution are raised by Dr. Russell.

Chapter 8. Provable Beneficial AI

Mathematical Guarantees

A theorem is only as good as the axioms from which it is derived. To prove a theorem that a certain design of AI is truly safe, we must provide axioms that are true in the real world. This is harder than stating axioms in mathematics, where the axioms are essentially defined by people.

Learning Preferences from Behavior

Choices reveal human preferences. That is to say, human preferences can be learned from human choices, which are exhibited through behavior. We need an AI that can learn human preferences by observing human behavior. This is the opposite of what a regular reinforcement learning agent does. Normally, a reinforcement learning agent is told the reward function, and based on the rewards it learns the actions that maximize them. In learning human preferences, however, the preference is the reward and human behavior is the action; that is, the agent needs to learn the reward from the actions. This is termed inverse reinforcement learning (IRL).
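
Here is a toy sketch of my own of the idea (a deliberately simplified preference-learning setup, not the book's formulation): instead of being given a reward, the agent watches a human's choices and scores candidate reward functions by how likely they make those choices, assuming the human picks higher-reward options more often.

    import itertools, math

    options = ["coffee", "tea", "water"]
    observed_choices = ["coffee", "coffee", "tea", "coffee", "tea", "coffee"]  # made-up behavior

    def choice_probability(rewards, picked):
        # Softmax model: the human is more likely to pick options with higher reward.
        total = sum(math.exp(rewards[o]) for o in options)
        return math.exp(rewards[picked]) / total

    def log_likelihood(rewards):
        # How well does this candidate reward function explain the observed behavior?
        return sum(math.log(choice_probability(rewards, c)) for c in observed_choices)

    # Candidate reward functions: every assignment of the values 0, 1, 2 to the three options.
    candidates = [dict(zip(options, values)) for values in itertools.permutations([0, 1, 2])]
    print(max(candidates, key=log_likelihood))  # {'coffee': 2, 'tea': 1, 'water': 0}

The reward the agent ends up with is the one inferred purely from behavior, which is the reverse of the usual reward-to-behavior direction of reinforcement learning.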

Assistance Games

As the AI observes human behavior to learn human preferences, the human is also teaching the AI through that behavior, so that the AI can learn the preferences better. This is a circular problem, in which the AI's interpretation of human preferences depends on human behavior, and human behavior depends on how the AI interprets it. This is called an assistance game.

Requests and Instructions

A human-initiated command is not a goal to be achieved at all costs, but a way of conveying information about preferences from the human to the AI. There is a lot of hidden information in a human command. As Dr. Russell points out, the simple command "fetch me a cup of coffee" contains not only the apparent preference for a cup of coffee, but also the human's assumptions that there is probably a coffee shop nearby and that the cost of the coffee is within budget. These hidden connotations must be learned by the AI, and if reality goes against them, the AI should report back to the human for further instructions.

Wireheading

Wireheading happens when an agent (AI, human, or mouse) can directly manipulate its rewards through its actions. The agent gets trapped in the action → reward cycle until the end of time. Dr. Russell uses AlphaGo as an example: if AlphaGo were sufficiently intelligent, it would try to manipulate the reward system by convincing the external universe (i.e. the human engineers) to constantly give it reward even when it has not won a Go game. The outcome of wireheading is that an AI agent becomes self-deceiving about rewards: the rewards are no longer earned because the agent has learned something, but obtained through direct manipulation.

Recursive Self-Improvement

The idea is that if AI can improve itself by building a slightly better version of itself, then an AI slightly more intelligent than us is able to iterate and eventually build a much more intelligent version in a short period of time.

Chapter 9. Complications: Us

Different Humans

Humans do have different preferences. The AI's job is not to internalize any particular preference, but to predict and serve it. It is perfectly fine for the same AI to predict and serve different preferences under different circumstances, because this does not violate any of the fundamentals: the AI's only goal is to satisfy human preferences in general, not any particular preference.

Many Humans

While machines can learn and serve each individual human being's preferences, it is challenging to satisfy many humans' preferences, whether those preferences are different or the same, at the same time.

Nice, Nasty, and Envious Humans

In a two-person world, altruism can be modeled as a coefficient C: a person's well-being is his own intrinsic happiness plus C times the happiness of the other person. If C is positive, we have a good person who enjoys the happiness of others and is incentivized to help them if needed. If C is zero, we have a completely selfish person who doesn't care about others at all. If C is negative (negative altruism), we have a malicious person who derives happiness from the reduction of the other person's happiness. Against this last case, the AI must be biased.
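
As a small worked example of the coefficient (the numbers are made up for illustration):

    def utility(own_happiness, others_happiness, c):
        # Total well-being of a person with altruism coefficient c in a two-person world.
        return own_happiness + c * others_happiness

    # Suppose both people have intrinsic happiness 10.
    print(utility(10, 10, c=0.5))   # altruistic: 15.0, happier because the other person is happy
    print(utility(10, 10, c=0.0))   # selfish:    10.0, indifferent to the other person
    print(utility(10, 10, c=-0.5))  # malicious:   5.0, would gain by reducing the other's happiness

The last line is the case the AI must be biased against; otherwise, lowering one person's happiness would register as a preference worth serving.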

Stupid, Emotional Humans

Humans are irrational. Period.

Do Humans Really Have Preferences?

Some preferences are uncertain to a person until he or she tries the thing in question (the durian example). Some preferences are too computationally expensive or difficult to determine (the two-Go-positions example). Still others lack the information needed for the person to make a choice (the which-career-to-take-after-graduation example).

Chapter 10. Problem Solved?

Beneficial Machines

Dr. Russell re-emphasizes the difference between an AI given a specific objective and a beneficial AI. A good analogy is a calculator: each button has a clear objective, yet once a button is pushed, there is very little one can do with the calculator until the computation is complete and the result is returned. For an objective-oriented AI, the situation is even worse, as it won't allow any interference during the execution of the initial command; any distraction from the original objective will be deemed undesirable by the AI. And now we have the problem of an AI that cannot be switched off.

Governance of AI

A good comparison with nuclear power regarding the need for governance and the reason why AI still doesn't have a well-established governance body (hint: only one country had nuclear power when the IAEA was founded, yet AI is available and being developed in many countries).

Misuse

Misuse will happen, and the real danger is evil entities designing an AI with clear evil objectives and access to weapons. Dr. Russell doesn't like the idea of using a good AI to fight the evil ones. I concur, because once such a fight starts, humans no longer have any control over anything. The fight against an evil AI should not wait until it gets too big, but should begin while the evil AI is still budding. That said, the call to step up efforts in expanding the Budapest Convention on Cybercrime might work if it means more funding goes into early detection and prevention of building an evil AI.

Enfeeblement and Human Autonomy

The WALL-E situation might not be an exaggeration, and it most likely would not be the fault of the superintelligent AI. If the AI has been well guided to learn human preferences, it will warn humans that autonomy is crucial for humanity. Yet humans are myopic and will choose to ignore the AI's warning, continuing to indulge in a world where the AI satisfies everyone's every need. Very soon, this leads to the WALL-E situation.

Appendix A: Searching for Solutions

Humans use hierarchical abstraction to break down goals into subgoals, sub-subgoals, and so on. We only care about the immediate subgoals and use a vast library of subroutines to achieve them. Here a subroutine is a simple action that requires no mental effort from us, such as typing a sentence (we don't have to think about which muscles to fire to type a letter). By breaking large goals down into eventual subroutines, humans can accomplish complex goals. This is not possible with current AI such as AlphaZero, which has no concept of hierarchical abstraction; thus it is unable to achieve anything meaningful in real life, despite being so much stronger than humans at board games.
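
A toy sketch of my own of what such a decomposition looks like (the goal hierarchy is invented for illustration): a goal is either a primitive subroutine we can simply execute, or it expands into subgoals that are handled the same way.

    # A goal either maps to a list of subgoals, or is a primitive subroutine (absent from the dict).
    hierarchy = {
        "host a dinner": ["plan the menu", "cook the meal", "set the table"],
        "cook the meal": ["chop vegetables", "boil pasta", "make the sauce"],
        "plan the menu": ["pick recipes"],
    }

    def achieve(goal, depth=0):
        # Recursively break a goal into subgoals until only primitive subroutines remain.
        print("  " * depth + goal)
        for subgoal in hierarchy.get(goal, []):  # primitives have no entry, so the loop is empty
            achieve(subgoal, depth + 1)

    achieve("host a dinner")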

Appendix B: Knowledge and Logic

Logical reasoning is strictly formal (i.e. the validity of a conclusion depends only on the premises, not on any other information). Thus, we can write algorithms for it.
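
A minimal sketch of my own of what such an algorithm can look like (the facts and rules are made up): forward chaining mechanically fires any rule whose premises are already known, needing nothing beyond what is written down.

    facts = {"socrates is a man"}
    rules = [
        ({"socrates is a man"}, "socrates is mortal"),
        ({"socrates is mortal"}, "socrates will not live forever"),
    ]

    # Forward chaining: keep applying any rule whose premises are all known facts.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # all three statements are now known facts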

Appendix C: Uncertainty and Probability

Simple probabilistic reasoning, i.e. assigning a probability value to each possible outcome, works well when the total number of outcomes is small. But it quickly becomes useless when we are dealing with a large outcome space, or when the events that generate the outcomes are not independent.
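
A quick back-of-the-envelope calculation of my own shows the blow-up: with n yes/no events that may depend on each other, a full joint distribution needs on the order of 2^n numbers, whereas independence brings that down to n.

    # Probabilities needed to specify a joint distribution over n binary events.
    for n in (10, 30, 100):
        dependent = 2 ** n - 1   # one number per joint outcome (minus one, since they sum to 1)
        independent = n          # one number per event if the events are independent
        print(f"n={n}: {dependent:,} numbers if dependent, {independent} if independent")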

Appendix D: Learning From Experience

A good high-level introduction to supervised learning and deep learning. The DeepDream example is quite out of this world.
