Reading Period: April 12 - Present
1. The Player of Games (A), by Iain M. Banks
Link: https://www.goodreads.com/book/show/18630.The_Player_of_Games
Pretty interesting story, and less dark and depressing than Consider Phlebas. I wasn't quite sure what to expect from Iain, since he's so praised, but this was a bit more of an aged, simple story than I was expecting. I think it was pretty good, and worth the read, but it didn't quite knock my socks off. The worldbuilding is quite interesting, but I'm not insanely excited about the next read.
2. Expecting Better (A), by Emily Oster
Link: https://www.goodreads.com/book/show/16158576-expecting-better
I like the concept a lot (Freakonomics but for pregnancy), and the end of the book was pretty useful/informative. I think the beginning is a bit contrarian/destructive. Basically the tone feels off, with an "I drink red wine when pregnant, because I've done the research and I'm superior" type of energy. I think it's the wrong sort of vibe; people should think more in terms of traditional finance (risk/reward, Sharpe ratios) instead of raw probabilities. Not sure how to say this, but maybe it's a book husbands should read but wives shouldn't (the data is useful and calming, but pregnant women could easily over-index on Emily's confidence). Basically it's a useful break from tradition (which is often clouded in fear and mystery), but that tradition often contains some wisdom. I also think that when carrying a child, the child cannot consent to any decisions (such as the mother deciding to drink occasionally), and as such there is a duty to do right by your child even at personal cost, especially if that cost is small (such as not being able to drink while pregnant). The sections on the third trimester were the most interesting; I'd recommend those regardless of the rest.
3. The Infinity Machine (P), by Sebastian Mallaby
Link: https://www.goodreads.com/book/show/241434373-the-infinity-machine
Demis Hassabis is a True Believer. Perhaps one of the first to recognize the true potential of AGI, and certainly the first to go after building it. He states that "AGI is infinitely bigger than a company or a person or a set of owners. It's bigger than capitalism and national economies." People today worry about UBI or the electricity demands of data centers, but they lack vision. "People aren't thinking ambitiously enough about what a post-AGI world will look like... I don't think money's even going to be relevant. What will money mean in a post-scarcity society? Or corporations. Or the stock market. What do these things mean if we have superabundance." Demis believes that the future is going to be like the Culture series (referenced above!), he states "maybe it's as big as the emergence of the prefrontal cortex in humans."
This book chronicles the early history of Demis and his company, DeepMind, and ends with its current trajectory in the AGI race, aka "the most crazy, ferocious corporate battle that we've ever seen." I found the book excellent overall, but the storyline is a bit cluttered. The layout is a bit stream-of-consciousness, starting with a cheerful biography of Demis and ending with a chronology of Google's place in the product battle en route to AGI. Sebastian, unfortunately, isn't a great biographer. He's clearly a big fan of Demis, but if I have to read one more pontification about Demis the "Ender Jedi" I'm going to lose my mind. Sebastian doesn't really dive deep into personalities (not his skill set), and he overuses the same metaphors and comparisons ad nauseam. Sure, Demis is quiet and "Jedi-like." Is that it?
While Sebastian fails as a biographer, he shines as a historian. The story he outlines is fascinating: an over-eager English genius setting out to transform science forever. Demis is unfortunately born in Britain - "The Founders Fund team joked that investing in Britain was like investing in Somalia" - and is the first mover in the AGI race. This means he is starved of capital and has to sell the vision from scratch to overseas investors, a really hard task. He sells to Google for $650 million in early 2014, so that he can focus on research and not runway. He (and Mustafa) sort of sell their souls insofar as they are unable to negotiate any safety guarantees outside of Google's profit-maximization incentives. The lesson here is a bit brutal - "The notion that a well-meaning individual had a seat at the table offered a flimsy scaffolding of reassurance to an alarmed world. But perhaps it was the best comfort available."
This comes through in a few stories: because of public and governmental ignorance, and the speed at which AI is developing, traditional corporate controls are unlikely to prove decisive beyond simply having the "right" people in the room. "Reid Hoffman had been correct in 2017. It was worth risking his fortune on corporate-governance experiments because governments were unlikely to take action." Demis knew the stakes - "Hassabis would inform candidates that, if they signed on, they should prepare for a climactic endgame when they might have to disappear into a bunker" - but couldn't foresee the messiness execution would require. He doesn't control Google; Larry does. He can get fired. He will play a role in the AGI future insofar as he's been able to amass power within the organization, but working inside a huge company with a limited voting stake, he is much less of a decision maker over an AGI-level Gemini than Larry, Elon, or Sam are. Now, I quite like Demis. He seems to have set out on the AGI journey for truly noble reasons (science), and Sebastian gives no inkling of doubt in this regard. Demis thinks Sam is doing this for power, for example, whereas he is content with his Nobel prize and modest living situation. What is most endearing is his frustration, which I share, with other players who don't "get" it. He decided not to entertain Zuckerberg further once Zuck showed just as much excitement about AI as he did about virtual reality, crypto, etc., and I'm sure he feels similarly about a16z and others who lack vision.
Demis states that "any pattern that can be generated or found in nature can be efficiently discovered and modeled by a classical learning algorithm." The OpenAI people were really the only others more bullish than him: "if you have a very large dataset and a very large neural network, then success is guaranteed," said Ilya. Ilya also believed that "RL amounted to 'an endless hill of suffering,'" whereas DeepMind spun its wheels on it for years. Sebastian is great at explaining concepts, and he chronicles the RL vs. deep learning debate and its historical context very well. He states that "the essence of intelligence is the ability to respond flexibly to complex situations," and it is clear that both RL and deep learning will play large roles in AI's future. OpenAI's history is peripherally discussed; the most interesting quote was "Pushing back against Musk's obsession with the race against Google and DeepMind, Brockman added, 'It doesn't matter who wins if everyone dies.'" There's clearly some traditional Eliezer-style hesitation among the creators of the most important research:
"I am in the camp that is hopeless," Hinton informed Bostrom. "In that you think it will not be a cause for good?" Bostrom inquired. "I think political systems will use it to terrorize people," Hinton answered. "Then why are you doing the research?" Bostrom asked. "I could give you the usual arguments," Hinton replied. "But the truth is that the prospect of discovery is too sweet."
Science is amazing, and I agree with Demis that scientific discovery is really the only place where AI is unambiguously good. I also agree with Sebastian that "to understand biology, you needed more than biological intelligence." Neural networks are messy and complex, which poses an apt challenge: "And of course that's quite troubling for science, because science is about reducing things to their essentials. You have complexity, and then you break it down to understand it; you look at the components. But the problem is, what if the phenomenon you're interested in only exists when you put the components together? That poses a bit of a challenge to the normal scientific method." As someone who is working to use AI to advance scientific discovery in biology, the relevant discussion in the book was quite fascinating.
Given how steeped in the book's content I've been for the past five years, I probably got less out of this book than others will. But it's probably a must-read, and the flawed organization and delivery shouldn't dissuade anyone from reading it. Sebastian is probably the best non-fiction business writer on the planet, and certainly my favorite, and this will be one of the most important stories in world history. None of its details should be missed.
4. Reasons and Persons (P), by Derek Parfit
Link: https://www.goodreads.com/book/show/327051.Reasons_and_Persons
This will probably change my worldview quite substantially, or at least reinforce a lot of the ideas that have been swirling around in my head (likely because I've been reading material heavily influenced by it). I really enjoyed the book, and found Derek's focused analysis very compelling. It starts off with a random-seeming set of arguments against self-interest theory, along with other philosophical groundwork.
Derek argues that even if there is a true north-star goal for morality (or really anything), the best way to achieve that goal may often be to believe something else (or to use something else as a north star). For example: "Hedonists have long known that happiness, when aimed at, is harder to achieve. If my strongest desire is that I be happy, I may be less happy than I would be if I had other desires that were stronger. Thus I might be happier if my strongest desire was that someone else be happy." Derek also shows that sometimes it can be rational to become irrational. If a robber puts you in a bind, counting on the (correct) assumption that you are a rational actor, you can actually reduce the likelihood of getting killed by becoming completely and totally irrational.
Note that morality, especially utilitarianism, is in some sense built off of ideal deliberation. If you could think clearly, weren't distracted, and knew all the relevant facts, you could do not merely what you think you want, but what you would want to do after that ideal deliberation. Moral reasoning requires rationality, and thus intelligence and forecasting. This is a pretty big deal: better prediction of the long-term effects of actions (which could require simulations or more brain power), and thus AI or accelerated technology, could be massively important for moral reasoning. Maybe we have a moral duty to accelerate technology so that we can actually make headway on such moral decisions (versus flying blind).
A lot of the time, common sense morality and consequentialist thought aim at similar targets and share similar steps along the way (individual rights, etc.). I point this out in my book, but Derek mentions that "we might find that, in Mill's words, our opponents were 'climbing the hill on the other side.'" I'd like to make a quick point here: I'm beginning to believe that qualia is the only thing that matters. Utilitarianism of the one-life-equals-one-life variety is a bit crude. The number of positive experiences, and the quality of those experiences, could be what matters. Not every life is equal, as some lives lead to better long-term qualia maximization. Even if we go off of some objective list, or another theory, it's a max and min function at the end of the day. There are a lot of interesting thoughts about uncertainty here (and how we should act under such extreme uncertainty), but my main takeaway is that the psychological connectivity of experiences (instead of persons) matters mainly as a sort of individual-rights route for climbing the hill on the other side. The rudimentary "all lives are equal" claim is thus the same, as it's reasonable to expect that qualia/consciousness doesn't differ between humans (or at least there's no way to know right now). Thus consciousness is the only thing that matters, morally, and the duration and flavor of such consciousness (pleasure, or some higher-order art or discovery played out in the real world) constitutes moral theory, alone. Maybe the experience machine is the end-all-be-all, as long as its output is maximized and someone ensures it continues functioning. Thus, acquiring computational resources that can output positive qualia (or doing things in the world that result in positive qualia) could be a moral imperative as well. Intentional growth, it seems, may be a moral imperative across all of these axes.
Anyway, the book moves on to the more interesting discussion of personhood. First, Derek argues that there is no such thing as a "person," if you really think about it. There are plenty of interesting thought experiments here, but basically he outlines the Reductionist view, which takes "the existence of a person to involve nothing more than the occurrence of interrelated mental and physical events." Note some definitions: "Psychological connectedness is the holding of particular direct psychological connections. Psychological continuity is the holding of overlapping chains of strong connectedness."
In Star Trek, does teleportation kill you? Derek states that "Teletransportation is about as good as ordinary survival... ordinary survival is about as bad as being destroyed and replicated." I'm not going to rehash all the arguments; you should just read the book, but they are pretty convincing. The implications are what I will focus on: "On the Reductionist View, it is more plausible to reject distributive principles. It is more plausible to focus, not on persons, but on experiences, and to claim that what matters morally is the nature of these experiences. On the impersonal Utilitarian Principle, the question who has an experience is as irrelevant as the question when the experience is had."
Should we be impartial morally, across not just space but also time? Yes. Is past suffering bad? Yes. Is smoking cigarettes, or trying heroin, morally wrong because it is an injustice to your future self? Yes. Derek claims that "We could make similar claims about our future selves. If we now care little about ourselves in the future, our future selves are like future generations. We can affect them for the worse, and, because they do not now exist, they cannot defend themselves. Like future generations, future selves have no vote, so their interests need to be specially protected." He also states that "We should claim that great imprudence is morally wrong. We ought not to do to our future selves what it would be wrong to do to other people." Note that "This reduces the claims of personal autonomy. We no longer have the right to do whatever we like, when we affect only ourselves. It is wrong to impose upon ourselves, for no good reason, great harm." Sound scary and totalitarian? Maybe. But perhaps rights are just an important way to climb this hill, and sticking too fast to them may be a misjudgment for the long term.
Derek states that it is hard to internalize, and thus believe, the Reductionist view. Derek half-jokes that Descartes "should not have claimed, I think, therefore I am. Though this is true, it is misleading. Descartes could have claimed instead, 'it is thought: thinking is going on.' Or he could have claimed, 'this is a thought, therefore at least one thought is being thought.'" I find it pretty confusing to reorient my mind around this discussion of personhood, but it seems compelling logically. Also, "When we cease to believe that persons are separately existing entities, the Utilitarian view becomes more plausible," so perhaps I am primed by my beliefs already. Plus, the Buddha did it! Derek states that upon changing his outlook, "other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others."
Population ethics makes my head hurt, but Derek closes with it. The final section is full of thought experiments, including the now-famous Repugnant Conclusion. I really like Derek's rigor here. He'll lead you along a path that seems plausible (perhaps average quality is what matters), and then shatter it before your eyes (Hell 1: ten people suffer torture for 50 years; Hell 2: ten million people suffer torture for 49 years). The non-identity problem, and the other issues Derek raises, are brilliantly handled. The book ends this section with more questions than answers. Sure, you could bite the bullet on the Repugnant Conclusion, but it does feel so intuitively wrong. Then again, I'd have said the same about the Reductionist view before reading this book, so it's hard to be confident of anything. Overall, I can see why this book had such a far-reaching impact. It's one of the most original works of philosophy I've read, and I'm sure I'll be thinking about it for decades to come.