EP. 103: HUMAN FLOURISHING IN THE AGE OF ARTIFICIAL INTELLIGENCE

WITH ERIC HORVITZ, MD, PHD

The Chief Scientific Officer at Microsoft discusses what artificial intelligence teaches us about natural intelligence and why AI technologies will enable human flourishing.


Episode Summary

Anyone who has interacted with ChatGPT is likely to agree that it is one of the most powerful and transformative artificial intelligence tools out there. Our guest on this episode, Microsoft's Chief Scientific Officer Eric Horvitz, MD, PhD, writes: “ChatGPT left me awestruck. It is a polymath with a remarkable capacity to integrate traditionally disparate concepts and methodologies to weave together ideas that transcend disciplinary boundaries.”

Dr. Horvitz is one of the leading voices in artificial intelligence (AI), serving now on the President's Council of Advisors on Science and Technology and formerly as President of the Association for the Advancement of Artificial Intelligence. His research has been foundational to machine learning, AI systems that integrate multisensory streams of information, computational models of reasoning with imperfect information, and applications of AI amidst the complexities of the open world. As it happens, Dr. Horvitz is also a physician by training.

Over the course of our conversation, Dr. Horvitz discusses how studying AI has enabled him to explore the mysteries of human intelligence and why there are some domains of the human experience that AI will never capture. As you will hear, he brings an eloquent optimism to articulating the ways that AI will contribute to human flourishing. 

  • Eric Horvitz, MD, PhD serves as Microsoft’s Chief Scientific Officer. He spearheads company-wide initiatives, navigating opportunities and challenges at the confluence of scientific frontiers, technology, and society, including strategic efforts in AI, medicine, and the biosciences.

    Dr. Horvitz is known for his contributions to AI theory and practice, with a focus on principles and applications of AI amidst the complexities of the open world. His research endeavors have been direction-setting, including harnessing probability and utility in machine learning and reasoning, developing models of bounded rationality, constructing systems that perceive and act via interpreting multisensory streams of information, and pioneering principles and mechanisms for supporting human-AI collaboration and complementarity. His efforts and collaborations have led to fielded systems in healthcare, transportation, ecommerce, operating systems, and aerospace.

    He currently serves on the President’s Council of Advisors on Science and Technology (PCAST) and advisory boards of the Allen Institute for AI and Stanford’s Institute for Human-Centered AI (HAI). He served as President of the Association for the Advancement of Artificial Intelligence (AAAI), as a board member on the Computer Science and Telecommunications Board (CSTB), and on advisory committees for the National Science Foundation (NSF), National Institutes of Health (NIH), Defense Advanced Research Projects Agency (DARPA), and the Computing Community Consortium (CCC).

    He received PhD and MD degrees at Stanford University.

  • In this episode, you will hear about:

    • 3:00 - Dr. Horvitz's early trajectory from medical school to a PhD in computer science

    • 7:42 - What Dr. Horvitz’s studies in AI have taught him about natural intelligence

    • 10:00 - A primer on generative AI

    • 21:16 - Dr. Horvitz’s views on the future potential and dangers that AI will bring to society

    • 29:04 - How the profit motive might shape the utilization of AI in our society

    • 36:48 - The importance of approaching AI development from a human-centered lens

    • 47:29 - What human flourishing could look like in a society steeped in artificial intelligence

  • Henry Bair: [00:00:01] Hi, I'm Henry Bair.

    Tyler Johnson: [00:00:02] And I'm Tyler Johnson.

    Henry Bair: [00:00:04] And you're listening to The Doctor's Art, a podcast that explores meaning in medicine. Throughout our medical training and career, we have pondered what makes medicine meaningful. Can a stronger understanding of this meaning create better doctors? How can we build healthcare institutions that nurture the doctor-patient connection? What can we learn about the human condition from accompanying our patients in times of suffering?

    Tyler Johnson: [00:00:27] In seeking answers to these questions, we meet with deep thinkers working across healthcare, from doctors and nurses to patients and health care executives, those who have collected a career's worth of hard-earned wisdom probing the moral heart that beats at the core of medicine. We will hear stories that are by turns heartbreaking, amusing, inspiring, challenging, and enlightening. We welcome anyone curious about why doctors do what they do. Join us as we think out loud about what illness and healing can teach us about some of life's biggest questions.

    Henry Bair: [00:01:03] Regardless of what else one may think of the technology, anyone who has interacted with ChatGPT would agree that it is one of the most powerful and transformative artificial intelligence tools out there. I remember the first few times I played around with a large language model, astonished by the fluidity and versatility of its responses, exhilarated by the applications I could immediately foresee in medicine and scientific exploration, and even slightly unnerved by how well it was doing things that I previously thought only humans could do, from writing poems to generating jokes to picking up on subtle emotional cues in text. Says our guest on this episode, Microsoft's Chief Scientific Officer, Dr. Eric Horvitz: "ChatGPT left me awestruck. It is a polymath with a remarkable capacity to integrate traditionally disparate concepts and methodologies to weave together ideas that transcend disciplinary boundaries."

    Henry Bair: [00:02:00] Dr. Horvitz is one of the leading voices in artificial intelligence, serving now on the President's Council of Advisors on Science and Technology and formerly as President of the Association for the Advancement of Artificial Intelligence. His research has been foundational to machine learning, AI integration of multisensory streams of information, computational models within imperfect information systems, and applications of AI amidst the complexities of the open world. As it happens, Dr. Horvitz is also a physician by training. Over the course of our conversation, Dr. Horvitz discusses how studying artificial intelligence has enabled him to explore the mechanisms and mysteries of human intelligence, and why there are some domains of the human experience that AI will never capture. As you will hear, he brings an eloquent optimism to articulating the ways that AI will contribute to human flourishing.

    Henry Bair: [00:02:56] Eric, thank you so much for taking the time to join us and welcome to the show.

    Dr. Eric Horvitz: [00:03:00] It's great to be here with you.

    Henry Bair: [00:03:01] So you were one of the pioneers of artificial intelligence, working in the trenches during its early days, and you are now one of its most important leading voices as Chief Scientific Officer of the largest tech company in the world. Well, actually, as of early 2024, the largest company in the world by market cap. But what some might not know is that you also went to medical school. So I'm hoping you can tell us about that trajectory. Why did you go to medical school initially, and did you at one point pivot towards computer science? And if not, how have you seen those two threads of your training, medicine and computer science, meshing together?

    Dr. Eric Horvitz: [00:03:43] It's a great question, and I don't know if I've pivoted versus stayed true to my interests and curiosity and passions the whole way through. I've always been curious about the universe, the ontology of things. So I was passionate about physics and chemistry and biology, and I kind of went up the stack, you know, going through high school and going on to undergraduate education. I love the multi-scale thinking about everything at once. As an undergrad, I ended up doing a biophysics degree that really brought together chemistry and physics and biology in a way that I thought was satisfying, and it continued to kind of nourish the multidimensional aspects of the world as I understand it and understood it. Of course, I'm always craving more information. Some big questions in all those domains remain unanswered. But one thing that I did during my undergraduate days was get involved in a neuroscience research lab, and I became kind of an expert, according to my mentor, ahead of the pack and expectations, with pulling microelectrodes and then listening in to single neurons in the brains of rats. And I have to say that sitting in darkened rooms with this kind of curiosity about all these fields, listening to the clicks and clacks of single neurons popping, doing their Nernst equation thing, raised deep questions about unknowns, about intelligence. It was kind of a different layer beyond biology, so shocking that neurons could actually underlie my thinking and reasoning as a human being, and human intelligence. And I started thinking deeply about getting a neuroscience PhD as I came to graduating, and I thought, well, how about an MD-PhD? I wanted to know about people and have the most access that I would need to understanding human intelligence someday. So I came to Stanford Medical School, working with neuroscience professors, trying to find a lab and so on.

    Dr. Eric Horvitz: [00:05:52] And in the midst of all this, I started getting concerned, from what I saw, that we would never have the tools and understandings of this gigantic complex system, the vertebrate nervous system, to make progress at the foundations of understanding at the level of physics or chemistry. And I started taking classes at Stanford's main campus. I used to just get on my bicycle, you know, right after anatomy, and sit in graduate courses in artificial intelligence, including some really interesting, challenging philosophical classes taught by some of the leaders of artificial intelligence that we know as the founding creators of the field, even John McCarthy, who came up with the phrase "artificial intelligence" back in 1956. And I just started moving into AI. So there I was, a medical student taking medical school classes, which I found interesting but not directly resonant with my passions, and really being, like, over the moon about learning about artificial intelligence. And I eventually made that my PhD work, working with advisors. And I eventually came back to apply it, given my understanding of some of the challenges in medicine, getting involved with the medical AI group under Ted Shortliffe, excited about applications of artificial intelligence in medicine. So that was kind of the course of what happened, although even in those days I viewed myself more centrally located in computer science and AI scholarship than in medicine, and I only came around to understanding the depth of my passion about how these systems might help physicians and patients and biomedical science later, as I learned more and finished medical school.

    Henry Bair: [00:07:42] So I think what's really fascinating is that to further explore your interests in understanding how the human mind works and how consciousness works, you actually turned to artificial intelligence rather than, say, neurobiology. So then my next question is, what have your subsequent forays into artificial intelligence taught you about the possibilities and limits of natural intelligence?

    Dr. Eric Horvitz: [00:08:05] I've always been motivated by the things we learn about and question about human minds. If you go back and look at the papers that I've written and some of the deep dives that I've taken, you can see sparks of curiosity that come from what it is to be human. So my dissertation work ended up focusing, for example, on bounded rationality, the limits of thinking when you have not enough time or information and you're in a complex setting and need to take action under great uncertainty. Well, humans have somehow muddled through all the centuries through that hard, ongoing challenge for all their decisions and actions in the world. And so I've learned and built intuitions framed by human psychology, cognitive psychology, and come back around with intuitions that come from the principles and learnings and findings in the area of artificial intelligence. For example, what does it mean to reflect, to reason about reasoning, metareasoning? What are principles of metareasoning? What does it mean to be immersed in the world, where you don't just have one problem you're thinking about, which has been the traditional focus of artificial intelligence research, but you're immersed and embodied and you're dealing with a stream of problems over time? How do you spend your idle time? What's a formal model of anxiety about the future that allocates thinking and resources to precompute answers? What does it mean to have a model of attention, and to be disrupted, and to recover? So many of the things we studied in artificial intelligence really have their basis, for me, in questions and findings and standing unknowns about human minds.
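    To make the idle-time idea concrete, here is a minimal sketch in the spirit of this line of work, not a reproduction of any published system. It assumes a toy setup with hypothetical tasks and invented probabilities and costs: candidate future problems are precomputed greedily in order of how likely they are to arrive, one simple policy for spending idle compute where it is most likely to pay off.

    ```python
    # Toy sketch of allocating idle time to precomputation. The task names,
    # probabilities, and compute costs below are all hypothetical.
    from dataclasses import dataclass

    @dataclass
    class FutureProblem:
        name: str            # hypothetical task, invented for the example
        probability: float   # guessed chance this problem arrives next
        compute_cost: float  # seconds of compute needed to solve it fully

    candidates = [
        FutureProblem("summarize the new labs", 0.50, 6.0),
        FutureProblem("re-rank the differential", 0.35, 4.0),
        FutureProblem("fetch the imaging study", 0.15, 9.0),
    ]

    def allocate_idle_time(problems, idle_seconds):
        """Greedy policy: precompute in order of arrival probability."""
        plan, remaining = [], idle_seconds
        for p in sorted(problems, key=lambda q: q.probability, reverse=True):
            spend = min(p.compute_cost, remaining)
            if spend > 0:
                plan.append((p.name, spend))
                remaining -= spend
        return plan

    plan = allocate_idle_time(candidates, idle_seconds=8.0)
    # Each precomputed second pays off only if that problem actually arrives,
    # so the expected time saved weights the work by arrival probability.
    by_name = {c.name: c.probability for c in candidates}
    expected_saved = sum(spend * by_name[name] for name, spend in plan)
    print(plan)                                  # most probable tasks first
    print(f"expected seconds saved: {expected_saved:.2f}")
    ```

    Under simple assumptions (one problem arrives next, and partial precomputation saves time linearly), spending idle time on the most probable arrivals first is a reasonable way to cash anxiety about the future into expected responsiveness.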

    Tyler Johnson: [00:10:01] So let me take us back for a second, maybe even a little bit more basic than what you're already talking about, in a sense. So over the last year or so, there has been an enormous amount of excitement and consternation in the media and in public consciousness about generative AI and GPT-4, and then this whole Sam Altman drama, and anyway, the whole thing, right, that's been very prominent in the news. And I think that everybody who reads much about this sort of has this vague, overriding sense that it's a big deal, and no matter what you do or what you like to do, it's probably going to play a role in your life, in everything from helping you select the clothes you buy and the music you listen to all the way through to how doctors make decisions or how airplanes get built or whatever. But most laypeople, if you then say, okay, well, what is generative AI, really have no idea what it means or what it does, right? So can you, just as best you can, though I know it's always a challenge for someone who's steeped in this to explain it to people who don't even have a sort of a basic CS background: what is generative AI? What is a computer, or many computers, what are they doing? If I go to GPT-4 and I type in a medical question, or I type in a, you know, question about how to buy plane tickets to Paris or whatever, what is it that the computer is doing? And how is this different from what computers could do previously, and why is there so much excitement about this?

    Dr. Eric Horvitz: [00:11:39] First of all, I agree with the first part of your question and how you framed it. I think that 500 years from now, the next 25 years will be recognizable, and they'll be named something in particular, because of the influence that these technologies will have on people and society in multiple ways. Getting more into the particulars of generative AI: some of the magic behind all this came from a couple of innovations, really, just a couple of really interesting ideas about how to build and train large-scale what are called language models, which in itself was kind of waiting for this moment, the moment when we had the compute and the online digitized data resources to build such large artifacts. What's going on is these systems are using a trick, or I should say the engineers that build them are using a trick, called self-supervision. For decades, and I'd say maybe three to four decades, AI scientists have been in search of labeled data. You know, here's a whole set of radiological films, all tagged by the findings in them by expert radiologists. Or, you know, here's a set of pathology slides, each marked by the final diagnosis that's been confirmed by outcomes. This kind of data is hard and scarce, difficult to come by. With self-supervision, several years ago, getting to be like maybe seven or eight now, people, engineers, realized that you could take a gigantic corpus, like, let's say, all of digitized Wikipedia, and play hide-and-seek with specific words. So the system will hide some randomly selected words and then try to predict them. Since you know the answer when you uncloak the word that's been hidden, that's kind of interesting training data.
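    For readers who want the hide-and-seek trick spelled out, here is a toy sketch, with an invented one-line corpus standing in for, say, all of Wikipedia. The point is only that each hidden word is its own label, which is why self-supervision yields training data at enormous scale without human annotators.

    ```python
    # Toy illustration (not any production pipeline) of self-supervision by
    # word masking: hide randomly chosen words, and the hidden word itself
    # becomes the training label.
    import random

    random.seed(0)
    corpus = "the heart pumps blood through the body"  # invented stand-in text

    words = corpus.split()
    examples = []
    for i in random.sample(range(len(words)), k=3):    # pick 3 positions to hide
        masked = words.copy()
        masked[i] = "[HIDDEN]"
        # (input with a hidden word, correct answer) = one free training example
        examples.append((" ".join(masked), words[i]))

    for context, label in examples:
        print(f"{context!r}  ->  predict {label!r}")
    ```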

    Dr. Eric Horvitz: [00:13:39] Well, it turns out that gives us incredible scale. And so what these generative systems are doing today, most of these systems, is we're training them to predict the next what's called token, or the next word. What's the next word going to be, given the sequence of words? What's the next pixel going to be, given this set of pixels? And what's amazing is, by pushing really hard to make these systems better and better and better with optimization, with a couple of procedures, one called backpropagation, and gradient descent, these kinds of interesting approximation strategies, pushing these systems to get better and better at predicting the next word, it seems, and the conjecture is, right now, that they're being forced to compress, or build, really rich models of the universe, models of the world, internally, that turn out to be generating brilliant productions, or generations of possibilities, including expert medical problem solving, detailed mathematical proofs, and so on. There's a bunch of magic there that we don't understand yet as computer scientists. I believe deep down, and I don't talk about this very much, that some of the surprising emergent behavior, when we just push these systems to predict the next token better and better and better, somehow relates to what's going on, and I'm going to come full circle now, in the magic we see with these large-scale systems we call nervous systems. I'm not sure what the connection is just yet, but it's the first time in my career that I'm seeing something a little bit closer to what might be going on, giving us some intuitions, although it's unclear what the relationship will be as we push into the science.
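    To ground the "predict the next token" description, here is a deliberately tiny sketch: a bigram character model trained by gradient descent on a one-sentence corpus. It illustrates the self-supervised objective only; real large language models are transformer networks over subword tokens, trained by backpropagation through many layers on vast corpora. The sample text and all names here are invented for the example.

    ```python
    # Minimal next-token prediction: a bigram model fit by gradient descent.
    import numpy as np

    # Toy corpus: in self-supervision, the labels are the text itself.
    text = "the patient presented with chest pain and shortness of breath. "
    vocab = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(vocab)}  # character -> integer token id
    V = len(vocab)

    # Every adjacent pair (previous token, next token) is a free training example.
    ids = [stoi[ch] for ch in text]
    pairs = list(zip(ids[:-1], ids[1:]))

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(V, V))  # logits table: row = previous token

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    lr = 0.5
    for epoch in range(201):
        total = 0.0
        for prev, nxt in pairs:
            p = softmax(W[prev])          # predicted distribution over next token
            total += -np.log(p[nxt])      # cross-entropy: penalize low probability
            grad = p.copy()
            grad[nxt] -= 1.0              # gradient of the loss w.r.t. the logits
            W[prev] -= lr * grad          # one gradient-descent step
        if epoch % 50 == 0:
            print(f"epoch {epoch}: avg loss {total / len(pairs):.3f}")

    # Generation is just repeated next-token prediction and sampling.
    tok = stoi["t"]
    out = ["t"]
    for _ in range(40):
        tok = rng.choice(V, p=softmax(W[tok]))
        out.append(vocab[tok])
    print("".join(out))
    ```

    Even this toy model, after training, reproduces plausible fragments of its corpus; the conjecture discussed above is that scaling this same objective up by many orders of magnitude is where the surprising emergent behavior appears.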

    Henry Bair: [00:15:34] There's definitely a lot of, like, pattern recognition that goes on in the human brain when we learn new things. I think, however, the fact that this is primarily how generative AI functions highlights one of the fundamental characteristics that it lacks, which is something that you brought up earlier, actually, which is metacognition. Right? Like, humans, we can think about something, we can learn something, but then we can think about thinking those things. Whereas with these language models, to the best of our understanding, they're not aware of the meaning of what they're generating. They can generate responses without actually understanding what responses they are generating. And that kind of makes me curious about the limits and the possibilities if we keep pushing and pushing these models. Based on what you have seen so far in your work, in what ways will these AI models eventually surpass the human mind? And are there any areas of human cognition that you genuinely believe machines will never be able to really replicate?

    Dr. Eric Horvitz: [00:16:36] So first let me say that it's difficult, or at least very challenging, to make comments on, or to ascertain, what it is and what it is not that these systems are doing and can do. This gets into the nuances of defining what we mean by the notion of: does this system have a sense for the semantics, a semantic understanding of words or concepts or sentences, the way people do? Can they reason about reasoning? Do they have metacognition? You certainly can push them in a variety of ways to generate very competent reflections about their own thinking. Whether you can say the system knows what's going on might someday be as difficult a question as whether individual neurons or neuronal subsystems in human minds know what's going on when it comes to, you know, interpreting words in Broca's area of the human brain, for example, versus what we see in emergent behaviors of larger assemblies.

    Dr. Eric Horvitz: [00:17:39] But back to your question. I had to just make a comment on your premise. And I think, by the way, this will be a moving target. I mean, over time, it's great to ask these questions and to call out limitations. And in fact, one reflection is that with the rise of the recent models, I've noticed some of my colleagues from the past. I've been in the field since, you know, I was at Stanford as a grad student in the mid-80s, and I noticed that many of my colleagues who were grad students with me across the country, as part of our invisible college of graduate students working together at conferences and meeting and talking and hanging out, several people have their hands on their hips, like saying, yeah, yeah, yeah, but can these newfangled things, can they do this? Can they do that? Can they really plan? As opposed to being completely mystified. I'm completely blown away by these models, and I expect, and I think, most of my colleagues are too. But we still see quite a few who are saying, yes, but tell me more. Show me. What possible limits might we find?

    Dr. Eric Horvitz: [00:18:39] Well, I think that, forever, human intellect will be quite distinctive in terms of who it is that we are, and the sort of evolutionary course, the ride that we've been on, and how our minds have been shaped over time. To date, I don't see the same brilliance that I see coming from the most creative thinking that humans do, putting things together, coming up with, you know, Nobel Prize-winning concepts and ideas out of the blue. This idea of conscious reflection, the subjective states that we have, it's unclear what the foundations of that might be and where that might go when it comes to machines someday. The empathy and caring, the connection we seek, the understanding of what it means to be human. You know, we ground human experiences with each other, human to human.

    Dr. Eric Horvitz: [00:19:34] I see incredible roles for these machines, even where they are today, and not just generative AI, because I think we need to not be so completely enamored with the latest advance that we forget all about decades of incredible research, some of which led to systems that are just getting their toe in the water in medicine, for example, and other fields: traditional machine learning, you know, supervised machine learning. I mean, we've barely seen the impact of those systems, and they're extremely promising. So I hate to say, oh, that's old stuff, let's just let that go and look at how we can fit these language models into medicine. I think in the near term, these systems are showing incredible abilities to do synthesis of multiple concepts, for example, in health care: summarization, generation of in some ways templatized notes, for example, going from the open, rough interactions, or, say, ill-defined interactions, between physicians and patients into structured SOAP notes. These kinds of applications will have remarkable use in reducing the workload of documentation, of writing authorization letters, you know, the kinds of work that I think make my colleagues, and my daughter, who's a physician, look at medicine and say, gosh, I work so hard and I have such little time with my patients, I'm not sure this is such a fun job anymore. Might these systems, you know, help us in that way, relieve us of the burdensome aspects? These are things that we can do as humans, but maybe we'd like to offload a bunch of that to machines.

    Tyler Johnson: [00:21:16] So I want to go back a little bit, to when you were describing, a couple of answers ago, what is going on behind the scenes with some of the newer technologies. I'm really struck to think, on the one hand, of you as a student in college, working with individual neurons in rat brains and then trying to sort of imagine or deduce, or some combination of those, how you go from individual neuronal firing to this grand mystery that we call consciousness. And then at the same time, now you're involved in, among other things, building large language models and this sort of nascent generative AI field. And yet, when I ask you, one of the world's experts, to describe what's going on under the hood, you use words like conjecture and magic. And this is you, who's like the world's expert, right? Which reminds me of a paragraph from an article in The Atlantic a couple of months ago. I think this was before the fiasco about Sam Altman getting fired, but I can't remember. Anyway, it was this very lengthy article about Sam Altman, who, again, is one of the world's experts in this, and about how he has been talking about what's going to come in the future of AI. And the author of the article summarizes by saying: "But the more we talked, the more indistinct the other side of this equation seemed. Altman, who is 38, is the most powerful person in AI development today. His views, dispositions and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the US president." So all of that is an interesting opinion, but that aside, this is the part I care about: "But by his own admission, the future is uncertain and beset with serious dangers. Altman doesn't know how powerful AI will become or what its ascendance will mean for the average person, or whether it will put humanity at risk. I don't hold that against him, exactly. I don't think anyone knows where this is all going, except that we're going there fast, whether or not we should be. Of that, Altman convinced me."

    Tyler Johnson: [00:23:22] And there are other articles around where people quote Altman as laying odds as to how likely he thinks it is that AI, and the whole revolution it's ushering in, will be net positive for humanity, versus, you know, some Terminator future where the machines come back, or, you know, whatever. Which sounds sort of funny to joke about, but in seriousness, it is striking that this thing is happening, and to some degree it's sort of happening out in the open, right? Like, the public is already interacting with and using it. And yet the people who know the most about it, who are literally creating it, or unleashing it, or I don't know what your preferred verb is, still don't really know exactly what's happening and don't really know exactly where it's going to go. So, I mean, I guess I want to ask, does that worry you? Do you, like, lie awake at night, worried about how this incredibly powerful thing will affect us all one day? Or is there something that settles your mind about that and makes it so that it doesn't seem that concerning?

    Dr. Eric Horvitz: [00:24:32] What settles my mind is looking back at the history of how civilization has both created and leveraged tools of our own making for over 100,000 years. The language that we're using right now to communicate is a human invention, and it's so central to our thinking and our civilization. It certainly changed everything, and it's certainly dual use, but it's something that we've leveraged in very powerfully positive ways overall. The same for mathematics and music, our understandings of the world, electricity, the telephone, air transport, and so on. I think that the rise of powerful computational intelligence will bring a new, significant set of tools that we will adapt to and shape, and they will change us as to who we are over time. But part of what makes us uniquely different from other animals on the planet is our history of creating and leveraging and adapting to live with and to benefit from the tools that come of our own hands. I'm overall optimistic, even though I do think these technologies will be disruptive. Now, disruptive is not necessarily a negative thing, but we have to watch out for the rough edges. I do think we have to keep our eyes on what I would call deep currents of how these technologies will affect everything from education to our appreciation of who it is we are, what jobs we select, how we come to understand the universe, how we communicate, the authenticity of communications among people.

    Dr. Eric Horvitz: [00:26:21] For example, the nature of how these tools are leveraged with jobs in the economy, what it means for national security and warfare, issues rising like, will there be an AI divide that's bigger than the digital divide, when it comes to the haves and have-nots, the people who can leverage these tools versus those who can't? So many areas of possibility. And then all the dual-use applications. For example, I'm passionate about the leveraging of these models for the sciences, whether it be material science or bioscience. Look at what's going on with AI-powered protein design right now, to design new kinds of proteins that could serve as new therapeutics. I see an incredible acceleration of our sciences across domains coming via these tools. And at the same time, let's think about the downside. I mean, biosecurity issues: the President's executive order last October calls out concerns about AI uses leading to challenges in biosecurity. What might we do about that now? I do think we're going to see regulations, new kinds of norms arising, and best practices. And it's certainly where we are even today.

    Dr. Eric Horvitz: [00:27:39] So with Microsoft, for example, with all our excitement and energy when it comes to bringing these technologies to the daily life of our customers, we think deeply about, well, how can this technology be used within the workplace? Let's characterize it. Let's look at the downside. Let's look at the failure modes. And this is just an example of what we need to do as a society: at Microsoft, for example, we've set up a sensitive uses review committee and a Deployment Safety Board, our DSB, that reviews every single product that ships with a large-scale language model. And when you think about the process we've gone through: what are the metrics? What does it mean to have a process we call red teaming, to really go up against and try to sort of break the safety features of what we put in place to guide the behavior of these models? We're learning a lot, and I think the analogy is that we will learn a lot as a society. We will come up with regulatory approaches to dealing with the rough edges, but in the end, I think, for all the disruptions, there will be an incredible upside with this new set of tools that we've created as humans.

    Tyler Johnson: [00:28:54] Can I ask? And this may sound sort of, um, passively accusatory. And I know...

    Dr. Eric Horvitz: [00:29:00] You can be actively accusatory as well. I'm happy to hear the raw... The raw form.

    Tyler Johnson: [00:29:04] Okay, I'll just go ahead and accuse outright. At least, it's accusations towards other people. So I want to think about two technologies that have come into being in the last 20 years that are now ubiquitous, one for doctors specifically and one for society in general. So for society in general, you can think about the rise of social media, right? Which, 20 years ago, for all intents and purposes, didn't exist, and now virtually everybody uses in, you know, whatever their particular flavor is. And then the other one is the rise of the electronic medical record for doctors. So if you look at social media, I think it's pretty reasonable to say, we could debate this, but I would make the argument, and I think I would have a lot of evidence at my disposal to make the argument, that net-net, social media has at least had heavy negative effects, and maybe has had more negative than positive effects. We could debate that, but certainly there are a lot of negative effects. There were just hearings in Congress last week, right, about this very thing, with Mark Zuckerberg and other social media leaders there. And I think that one of the things that has become clear, with the whistleblower who revealed internal documentation from Facebook showing that they knew about the harm, for instance, that Instagram was causing to teen girls in terms of upswings in depression and other things, is that, whether or not it has always been true, it has certainly sometimes been true that the social media companies, understandably, because at the end of the day they're answerable to their shareholders, have been more concerned about turning a profit than they have been about human flourishing and welfare.

    Tyler Johnson: [00:30:36] By the same token, I think you can make the argument, and we've had many guests on the program who have made this argument, that the electronic medical record, which was supposed to be somewhere between a boon and a panacea for doctors, which was supposed to make our lives more efficient and save us a bunch of time and all these things, has done exactly the opposite, to where it's now one of the most oft-cited causes of burnout. And many people feel that that's because, at the end of the day, if you really drill down to the bedrock, the EMR is mostly focused on maximizing profitability for health care companies. So it is built around how you identify as many billable things as possible to maximize corporate profit, which is just to say that if instead it were truly built around either patient welfare or doctor welfare or whatever, it would probably be built in a very different way, one less onerous for medical professionals.

    Tyler Johnson: [00:31:33] So all of that is to say that I think those two stories, though I'm telling them in a very rough, brief format, have a similar theme, which is that if profit is the thing that drives the development of a system, whether social media or the electronic medical record or whatever, then that is the variable that you will solve for when push comes to shove. And so I guess then my question is, what is there to say that profit will not be the ultimate, most important driving motive for the development of AI? And, to be clear, I'm not saying that Mark Zuckerberg or whoever doesn't care about human flourishing, nor am I saying that Epic or whichever electronic medical record company's CEO or whatever, that those people do not care about human flourishing. But what I am saying is that if profit is the number one priority, even if you have human flourishing or doctor welfare or whatever somewhere on the list, it worries me that AI will be deployed in a way that always guarantees profit and then looks after the other things if and when they can be done tidily. So is that a justified concern, or am I overly pessimistic there?

    Dr. Eric Horvitz: [00:32:47] Well, I think it's a concern. You might be overly pessimistic in your view of profit being the pathway to suboptimal solutions and costly outcomes for people in society. I deeply subscribe to something that Colin Mayer says about the role of innovation, and of companies and organizations that innovate: that profits should come from successful solutions to hard challenges of people and planet. Optimistically, you'd think that the better you do at solving problems, the more profit you deserve to receive. Might there be a kind of meandering path through nasty user interfaces and side effects and abuses by payers and so on when it comes to electronic medical records? Possibly. But let me just step back a second. Given that we've created the internet, which has incredible value to the world, social connection technologies, what we call social media now, were bound to happen. Could they have been better guided? Can they be better shaped over time? Will they be polished and refined to provide much more net benefit? And by the way, when you talk about social media, and you made the comments about downside versus upside, particularly calling out the downside, we hear more about that than the upside. There's a lot more written about the costly outcomes, societal polarization, disruption of democracies and so on via social media. We don't hear about all the incredible connection that's going on between people and among people. I would guess that it's way net beneficial, if you really thought deeply and did studies of how these technologies are being used around the world. However, the rough edges are significant in both cases. Digitization of medical data, capturing a specific patient's course of life history as well as populations', has to happen.

    Dr. Eric Horvitz: [00:34:57] It's like the scientific approach. It's following what Hippocrates wanted when he called out evidence-centered medicine. There's no way we're not going to be doing that. Can we do it better? Has the implementation been rough? Have we heard lots of grumpiness when it comes to the daily life of physicians engaging with electronic medical records? Yes, but it's on a path, and the upside is going to be at a much, much higher place than scribbling, you know, notes with a pen. So I just think that we can't throw out the technical possibilities with the bathwater. We have to keep on moving and iterating. And the way you put it, like, here's what I'm seeing: that kind of recognition that we have to do better is a critical part of the evolution of technology and how it's being harnessed by civilization. If you look at the geological time scale we're in, these are the early days of the first implementations. I remember, in 2010 or '11, maybe three and a half, four years into Facebook's existence and related technologies, hearing about the prospect that these systems were creating echo chambers, and the prospect of these systems creating new forms of societal silos, and being amazed at this new concept that we hadn't thought of before. It's like, wow, we need to study this. This is really interesting, the sociotechnical side of technology. We need to invest a lot more in the socio part of sociotechnical, as much as the technical, in moving forward as a society and as research scientists.

    Henry Bair: [00:36:48] Our conversation reminds me of a term I've seen with increasing frequency over recent years in media and tech. Stanford University actually now has a center dedicated to this, which is human-centered AI. What do you think about this phrase, and what does it mean to you?

    Dr. Eric Horvitz: [00:37:05] Well, I think it means getting to a place where we move well beyond the laboratory and prototypes, beyond the days when I was in grad school, when you got a prototype to work and we screamed for joy because the thing was actually running, puffing steam, and look, look, it's actually generating a differential diagnosis, oh, and it's running at expert levels, to thinking deeply about what this would all mean. How do humans work with this technology? How will it change the way they think, and the ultimate quality of their decisions, the quality of their vocations? Thinking deeply about all aspects of how people and technology come together. And this gets in part at the question we were just talking about: what's the difference between the pure concept of a social network and how it's really going to fold into society? How will it be used? How can we understand it? So when I think of human-centered AI, it takes me initially to the human-computer interface. How can we do better when it comes to human-AI collaboration? How can we have technologies that support that? And how can we have studies and understandings and qualitative assessments that help us with, you know, refining and translating the technology into practice, so it's part of life in a way that contributes to the quality of our lives and to long-term human flourishing? That's what it means to me. And, you know, in some ways, the fact that we even need to say "human-centered AI" indexes into the limited thinking we've had in the past, when it came to looking at solutions that were quite technical, you know, in isolation. I often call this, you know, moving from the walls of the laboratory, the AI lab, into the real world.

    Dr. Eric Horvitz: [00:39:01] Now, 2008 and 2009 were my two years of serving as President of the Association for the Advancement of Artificial Intelligence, the AAAI, which is kind of the main society of scientists and professionals in AI. I gave my presidency the theme of AI in the open world, and it was right around that time, 2008, 2009, that we started seeing our systems coming out of the world of our laboratories and our dissertations, like, oh, look how this actually works, and here are the principles, into actual practice. And that meant we had to really think deeply about the human side as well as the actual technologies that would enable these systems to be more human-centered, to understand their fragility and to give them more robustness when it came to actually performing in the real world, working with people, being an important part of society. Presidents give a singular lecture that defines where things are and where things are going, and in my presidential lecture I talked about the technical side of that and the sociotechnical side of where we were. But I also said we needed to kick off deep studies of AI, people, and society. And we ran a multi-month study during my presidency, called the Presidential Panel on Long-Term AI Futures, that culminated in a meeting at Asilomar, for symbolic reasons, where we met to think deeply about long-term possibilities and outcomes, short-term disruptions, as well as ethics and legal issues back then.

    Tyler Johnson: [00:40:33] So let me, and I'm going to sound stubbornly skeptical here, but I am really grateful for the opportunity to talk to someone who really knows sort of what's going on behind the scenes and at the forefront. So this is a quote from a columnist for The Wall Street Journal named Peggy Noonan, who wrote this really interesting column almost a year ago called "Artificial Intelligence in the Garden of Eden." In it, she makes the observation that the Apple logo looks like it has a bite taken out of it. And so then she analogizes the Apple logo to the apple from the Garden of Eden, which, if you know the biblical story, whether you think of that religiously or as myth or just literature, right, Adam and Eve are in the Garden of Eden. They're told by God, don't eat the fruit of this one tree. Then Lucifer comes and says, well, if you eat it, then you will start to know things like God knows things. And so then they eat of the fruit, and then they're cast out of the garden, and then they come to know other things. So she writes about that, makes the analogy, and then says, "But developing AI is biting the apple. Something bad is going to happen. I believe those creating, fueling and funding it want, possibly unconsciously, to be God on some level and on some level think they are God. The latest warning, and a thoughtful, sophisticated one it is, underscores this point in its language."

    Tyler Johnson: [00:41:53] The tech and AI investor Ian Hogarth wrote this week in the Financial Times that a future AI, which he called quote unquote godlike AI, could lead to the "obsolescence or destruction of the human race" if it isn't regulated. He observes that most of those currently working in the field understand the risk. People haven't been sufficiently warned. His colleagues are being "pulled along by the rapidity of progress, unquote, mindless momentum is driving things as well as human pride and ambition. It will likely take a major misuse event, a catastrophe, to wake up the public and governments."

    Tyler Johnson: [00:42:30] So what I think I hear you saying in response to that, and I don't mean to put words in your mouth, but I want to summarize what I think you've already said, and then you can go from there. While you recognize that, yes, whether it's the frustrations of the EMR for doctors or the effect of Instagram on, especially, female teen mental health or whatever, the implementation of AI and social media and the internet and all of these things has, as you have put it, its rough edges, and that those need to be considered candidly and taken seriously, in effect, what it sounds like you're saying is that ultimately you believe that humans are good and that the arc of the moral universe bends towards justice, and that over the long term, as we are doing with these articles that I've mentioned and other things, we will recognize what those problems are. And just like we've worked to fix the problems of an old, falling-apart medical record written on paper, and of people living far away from each other not being able to get in touch with each other, with the EMR and social media, we will try to fix the problems of social media and the EMR by other means, including by using generative AI, and over time the arc of things will bend optimistically. Does that seem a fair summation of where you would come down in response to what I read there?

    Dr. Eric Horvitz: [00:43:52] Yes. Let me add a few thoughts. Is our life better now than it was a thousand years ago? It's hard to know. What's the good life for Homo sapiens? How do we make decisions about what it means to live, I'll just call it, a good life? And I think we're not experts, as humans, at thinking that through. One thing I do know is human beings are extremely curious. We continue to push on our understandings, including of mind. And I'll make a comment that I differ in my assessment of what's pushing AI right now: it really is being pushed by deep scientific curiosity, and there's no way to stop that, and there's no way to put learnings and understandings back in the bottle. What we can shape is how we field the technologies and how we come to understand ways to get the best out of them, per various conceptions of what it means to be living a good life, and to making our lives better across the planet, as individuals and as a larger society. So given that we're not going to stop asking questions and being curious and innovating, we're going to continue to try our best, whether it be individuals in academia, in medicine, in industry, in government, to innovate when it comes to interesting and promising applications of newly invented technologies. We need to make sure that we are keeping an eye on values and trade-offs, keeping an eye on answering the question of what's the good life, and how can these technologies be leveraged for those goals. And I think we will find incredible uses of these technologies. As you mentioned, social media is allowing us to stay in touch. Think about what email has done for research.

    Dr. Eric Horvitz: [00:45:50] I mean, I remember in the 80s having email access, and I couldn't imagine research and collaboration without something even as simple as email, but that was kind of a rarefied thing back then. Or the telephone, used to communicate quickly. I'm not an expert at sitting here today in 2024 and thinking about whether or not my life would have been better in 1860 without telephones, because it gets into a really deep appreciation for what it means to live a good life. As I mentioned earlier, technophilic folks tend to think, for a number of good reasons, that by leveraging technology in a variety of ways, we can solve really hard challenges in health care, you know, chronic disease. There are some obvious win-win no-brainers where we want to apply technology. I think communications and connection among people is overall a very good thing, and we should continue to push on that. I think building tools that help us with scientific reasoning, to master long-term technical challenges and understandings, will be a very good thing. You can think of likely outcomes of where AI is going that most people would say, yeah, yeah, yeah, that's a really good thing, let's push on that. But you're not going to necessarily get there in isolation with a powerful technology. You're going to have to watch how it's being used in a variety of ways and keep your eyes on that. I don't think there's any easy escape where we're only going to get the great things and not get the rough edges. And I'm continuing to call them rough edges, versus catastrophes, because I believe human beings will always remain as overseers in control of these technologies.

    Henry Bair: [00:47:29] You know, what's striking about your perspective is that it is brimming with optimism in human goodness, even when Tyler presents you with some skeptical counterpoints. This aligns with something that I've seen you discuss numerous times in your various lectures, which is the concept of human flourishing. What, to you, does human flourishing mean in the context of a world steeped in AI?

    Dr. Eric Horvitz: [00:47:53] Well, I believe that human connection will always be central in human flourishing: grounding and understanding among people, providing opportunities for people to learn and grow and come to new understandings of themselves and the universe, come to assessments of their role in the universe and what they want to be doing with their lives. Helping people to understand what kind of contributions they might be able to make to society and to others. Providing opportunities for creative work, for people to express themselves through art, visual art, poetry, writing. Living a long and healthy life. It's interesting when you go back to some of the early musings, some of the initial writings, at least, about what it means to live a good life. It goes back to some of Aristotle's reflections and kind of lists of the components of a happy, productive, flourishing life. When I was doing research on human flourishing for my Tanner Lecture that I gave at the University of Michigan last year, what smacked me in the face was how little thinking there is, given all that we do as a civilization and as individuals, about what it is that we really want as humans. And in some ways, we need to go back to this. I mean, we have these high-level principles: we want equity, we want democracy, shared experience, we want connection, we want health, we want to be learning, we want to be educating ourselves and our families and society as a whole. But we haven't really landed on what I would call a set of attributes that we believe should be guiding everything from our personal decisions to our technologies. And I think that kind of continuing assessment of our deep values, and reassessment, especially in light of some of these potential technical disruptions like artificial intelligence, will be remarkably valuable, and we need to do a lot more of that.

    Henry Bair: [00:50:00] I love that at the tail end of this conversation we're having with you, one of the world's leaders in artificial intelligence, in 2024, we're invoking Aristotelian ethics with the ancient Greek concept of eudaimonia, or human welfare. These things we're discussing, like wisdom, virtue, living the good life, societal contributions, are the same things we humans have been debating for millennia.

    Tyler Johnson: [00:50:26] I just want to have this on the record. I think, for the public, there is always a concern. You can make an argument that Peggy Noonan draws a caricature in her Apple article, right, that you have these sort of mindless, technocratic technophiles who are just trying to do the thing they can, so they can prove they can do the thing, and that's all they care about. Right? But clearly, if that caricature exists somewhere, it does not appear to be you. You seem to have thought very deeply about human flourishing, and what it means, and what the larger societal and even philosophical and universal impacts are. But what I think I hear you saying, at the end of the day, is that while, of course, nobody knows the future and anything could happen with any technology, from the considered, informed perspective of someone who is very senior in the field, who is making decisions that have great impact in this domain, and who has also spent a great deal of time thinking about human happiness and societal flourishing, you are comfortable with the world that will be birthed into place by the integration of AI and related technologies into society; you would rather leave that world to your grandchildren or whatever than you would a world that doesn't have that integration. Is that fair to say?

    Dr. Eric Horvitz: [00:51:53] The way I put it is, there is uncertainty as to how these technologies will be leveraged, but overall, my answer is yes. I think that it's part of our evolution as human civilization. I don't see a way to stop the growth and the pursuit of understandings that would lead to technologies and then, in a larger society, lead to their being leveraged in ways that would be beneficial. When it comes to, will our great-grandchildren be living better lives, or as good lives as our parents or ourselves, that gets back to Aristotelian reflections as to what the good life is. I remember visiting what we would call a very economically depressed region of an Asian country. It was nighttime and the sun was setting, and I saw this gigantic area of just tents that people were living in; those were their homes. And I saw little fires started, with families gathered around them. And I thought to myself, well, I can see how people in Western society, used to their homes and their HVAC systems and their protections and their health care, would look at that and go, wow, that's kind of a scary way to live, on the edge. But as I watched people gathering around fires as families, I thought to myself, wow, that seems like such a wonderful evening collecting of spirits together to share. It seems like that's part of a good life. So it's just hard to make assessments of exactly how things are going to go, and for whom. But overall, I tend to think that people are generally good. They look out for each other. They're generally altruistic. That's what got us here. We will make the most of these technologies, coupling our curiosity, our passion to understand, and that includes understanding mind. We will make the best of these technologies, and we will grow with them.

    Henry Bair: [00:53:54] Well, with that, we want to thank you so much, Eric, for taking the time to join us and for engaging with these difficult, big questions. You know, as we have demonstrated and specifically pointed out, it is through these conversations that we continue to advance our understanding of the intersection between artificial intelligence and the good life. So it was a true privilege talking to you. Thank you again for your time.

    Dr. Eric Horvitz: [00:54:16] Great questions and thanks for the conversation.

    Tyler Johnson: [00:54:18] Thanks, Eric.

    Henry Bair: [00:54:23] Thank you for joining our conversation on this week's episode of The Doctor's Art. You can find program notes and transcripts of all episodes at www.thedoctorsart.com. If you enjoyed the episode, please subscribe, rate and review our show available for free on Spotify, Apple Podcasts or wherever you get your podcasts.

    Tyler Johnson: [00:54:42] We also encourage you to share the podcast with any friends or colleagues who you think might enjoy the program. And if you know of a doctor, patient, or anyone working in healthcare who would love to explore meaning in medicine with us on the show, feel free to leave a suggestion in the comments.

    Henry Bair: [00:54:56] I'm Henry Bair,

    Tyler Johnson: [00:54:57] and I'm Tyler Johnson. We hope you can join us next time. Until then, be well.

 


LINKS

Dr. Eric Horvitz is the author of numerous publications on artificial intelligence and its role in society.

Dr. Horvitz can be found on Twitter/X at @erichorvitz.
