Conversations on the Convergence of Buddhism, Technology, and Global Culture

BG 136: Enlightenment 2.0


Episode Description:

This week we speak with Ben Goertzel, an artificial intelligence researcher and Zen-dabbling spiritual seeker. Ben shares with us his introduction to Zen and his ongoing relationship to spiritual practice. He also explains what is meant by “strong artificial intelligence” and AGI (artificial general intelligence), and why he thinks a fully functioning AI may be as little as a decade away.

Finally, we explore the overlap between his work as an AI researcher and his experiences with Zen and other spiritual practices, discussing a story he wrote entitled “Enlightenment 2.0” about an enlightened AI being who determines that it is possible to construct a more enlightened mind, what Ben calls a “super mind,” but isn’t sure whether it is possible for us.

This is part 1 of a two-part series. Listen to part 2, Artificial Wisdom.


Transcript:

Vince: Hello, Buddhist Geeks. This is Vince Horn, and I’m joined today over the tubes by Ben Goertzel. Ben, thank you so much for taking the time to speak with us today. I really appreciate it.

Ben: Oh, well, thanks for inviting me. It’s a pleasure to have a chance to talk about something besides the nitty-gritty of the singularity and AI software, actually.

Vince: Cool. Well, hopefully we’ll overlap with that topic. But mostly we wanted to talk about Zen, how Zen relates to that topic, and how your interest in Buddhism relates to that topic. And just a little background for those people who may not have heard of you, because you’re pretty well known in certain circles, but I’d say in Buddhist circles most people probably don’t know about your work. You’re an American author and researcher in the field of artificial intelligence. You founded a company called Novamente that’s working to develop a type of artificial general intelligence, which we’ll get into more. You’re also the CEO of Biomind, a company that markets a software product based on some of the work that you’ve done. And finally, just to give you one more title, you’re the Director of Research at the Singularity Institute for Artificial Intelligence. So it sounds like you’ve got your hand in several different interesting organizations that are all working on similar types of things. Is that true?

Ben: Yeah, definitely. My interest is in developing all sorts of advanced technology that can lead to new forms of intelligence and make human life better. I focus mainly on AI technology, both as an end in itself, to create better minds, and in applications, like applications to biomedicine. AI can apply to almost anything, which is both a strength and a weakness. So I’ve got my fingers in a lot of pots.

Vince: And a mutual friend of ours, Mike LaTorra, who we’ve had on the show before, told me that you have a background in Zen meditation. I was really surprised to hear that. I’d actually heard about your work through another friend who mentioned your book, The Hidden Pattern, and he was really a big fan of your writing. And so it was kind of synchronistic to find out you had a background in Zen meditation as well. But I don’t really know much about that, and I wondered if you could say a little bit about how you got into Zen, how that’s developed, and maybe what your relationship is to Zen right now.

Ben: Yeah, sure. It’s something I don’t talk about much, but I think my background in Zen and other wisdom traditions definitely colored my work on AI, my approach to understanding the mind. I got into Zen when I was probably 17 years old, when I was, I guess, a sophomore or junior in college. I was going to college in western Massachusetts at a school called Simon’s Rock, which is like 300 really smart kids out in the middle of the woods. I discovered a book in the college library, The Zen Teaching of Huang Po. I really enjoyed the book. It was very cryptic stuff from a Chinese Zen master from many hundreds of years ago. And that book inspired me to meditate. So I would frequently go out into the forest around Simon’s Rock College and meditate. Doing it without a teacher can have mixed results; I got into some fairly interesting states of mind. The setting was really good. Unfortunately, I kind of drifted away from that when I left that place and moved to New York City for graduate school. New York wasn’t as meditative for me. Different sort of experience.

Vince: I can imagine.

Ben: Much later, maybe ten years after that, my wife at the time, my first wife, who had actually been in college with me back when I’d been into Zen, though she hadn’t really been into Zen then, got deeply into Zen meditation. She had a kind of kensho experience, where she was just overwhelmed with a feeling of oceanic oneness with the universe, and peace, and insight, which lasted for like half a day, and then lasted, at a certain level, for four or five months after that. As a result of this sort of spontaneous experience, she got deeply into Zen. She began meditating a lot, and she ultimately became a Zen priest in the order of Tsu-Yu. So through her I got to know a lot of Zen practitioners, kind of became immersed in that world, and began meditating on and off again, although I never felt inspired to join the Zen temple that she was in. But we got divorced six years ago, so I haven’t really been involved in that world since then. I felt my attitude was a bit more like that of Krishnamurti, someone else I read a lot and was inspired by. He was very much a spiritual seeker, but he wound up preferring to avoid institutions and organized groups, which is sort of the perspective I’ve come to.

Vince: Yeah, that’s an interesting perspective, because on the one hand you’ve mentioned that you can get mixed results without good guidance, but on the other hand it seems really clear to people who have been involved with spiritual institutions that all sorts of weird stuff goes on: power dynamics, corruption, and just all sorts of oddities.

Ben: Well, you can get mixed results with or without guidance, I guess. But the times that I meditated with other people, particularly if there were experienced meditators around, were different. I did this when I was briefly involved with a Shambhala meditation group, maybe three or four years ago, here in Maryland where I live, and I did find that I can get deeper, faster, into different places when I’m meditating in a room with a bunch of experienced meditators. So there is something there; it’s different. I’m not sure it’s profoundly and fundamentally better than what happens meditating on my own, but I would say there is something you get from that which is special, and which is apart from whatever power dynamics exist.

Vince: So, to lean more toward the work that you do professionally and spend a lot of time with, this topic of artificial intelligence: you, as I understand it, are taking an approach to artificial intelligence called “artificial general intelligence,” and you are actively, like you said, trying to develop what is called “strong artificial intelligence.” I was wondering if you could say, first, what is “strong artificial intelligence,” and then, what is this approach you are taking called AGI, “artificial general intelligence”?

Ben: Basically, by strong AI what they mean is AI that can think pretty much like a person does: it understands who and what it is, can solve a huge variety of different problems, and has human-type consciousness and reasoning and memory and thinking and perception, everything we think of as being part of an intelligent human mind. Whereas weak AI is thought of as being more like specific AI applications: a smart chess program, or a program that drives your car for you, or a program that does your taxes for you, or that proves a math theorem, or something. And really the distinction of AGI, or general intelligence, versus narrow AI gets at the same thing as the strong versus weak AI dichotomy. There are technical distinctions when you get into the academic particulars of it, but from a lay point of view I guess those distinctions are kind of hair-splitting. What we mean by AGI, or general intelligence, is an AI system that can confront a huge variety of problems in the same way that people can, including solving problems that didn’t exist or weren’t known at the time it was created. An AGI should be able to do what a person can: go into a new environment, encounter a new set of problems, and use its understanding of itself and the world to figure out what to do.

Vince: Very interesting.

Ben: And of course that’s not the kind of AI we have now. There is brilliant AI inside video games, AI chess players can beat any human at chess, Google is better at searching document bases than anything, right? But none of these have the kind of general problem-solving ability and self-understanding that people have.

Vince: Interesting. And just as you are describing these different types of AI, it occurs to me that most people who aren’t really familiar with the field probably have the impression that we are really far away from what you are describing as strong AI. I’m wondering if that’s true, because you are kind of at the forefront of developing this. How close are we to developing a strong artificial intelligence?

Ben: Well, opinions among the experts vary. I’d say the majority of card-carrying AI researchers probably believe it’s at least a century away. On the other hand, there is a growing minority of AI researchers, myself included, who believe it may be just a couple of decades away. Ray Kurzweil is perhaps the most outspoken of these. He is not an academic, but he is an AI guy in industry; he has made a lot of successful AI-based inventions. He makes an argument that we will have human-level AI by the year 2029.

Vince: I’m guessing you sit on that side of the fence, more toward the couple of decades?

Ben: Yeah, I think Ray tries to pinpoint the exact year with more confidence than I would. I think it could happen within 8 or 10 years, or it might take 30 years. I don’t think it will take 100 years. I think we are a lot closer than most people think.

Vince: Why is that?

Ben: I think there are a number of technologies converging to cause this to happen. One is that computers are just getting more and more powerful. Everyone can see that. The laptop from five years ago is a piece of garbage or a museum piece now. Computer power, processor speed, memory, network speed, it’s all accelerating exponentially.

Another aspect is that we are understanding better and better how the brain works. Brain imaging is getting more accurate. We don’t yet understand the essence of how the brain gives rise to human thought, but there seems little doubt that within a couple of decades we will, just based on more and more accurate brain-imaging technology and better computers to do brain simulation.

Cognitive psychology is advancing, too. We understand more about how the mind works, separately from our knowledge of the brain. And all of this work on narrow AI, on specific AI problem-solving systems, is helping us develop a lot of powerful problem-solving algorithms, which will be useful for building general AI systems, even though they don’t constitute general AI in themselves.

I think we are seeing the convergence of a lot of different factors that is going to lead to the emergence of advanced general intelligence. If you don’t take a broad view and look at all of these different factors, maybe you are not going to see it, because no single one of these things, on its own, shows that it’s going to lead us to AGI.

Vince: Cool, thank you, that’s a very interesting explanation. Now, to kind of tie these two threads together: on the one hand, your background with contemplative practice and Zen meditation; on the other hand, all of the work you do with artificial intelligence. I wanted to ask you about this interesting article that you wrote a couple of years ago called Enlightenment 2.0.

In this story, which is set in the 2030s, a journalist and a scientist are in discussion with an artificial intelligence that, I assume, is the kind of AGI you were describing, something sufficiently intelligent that it’s becoming more and more intelligent. It seems like a superintelligence, in a way. This AGI in the story kind of has its awakening. It is born, so to speak, and then goes through a process of helping humanity resolve certain levels of suffering.

That was its main… its prime directive was to help alleviate human suffering. It creates what in the story are called grails, which I assume are some sort of nanotechnology able to produce unlimited amounts of food and energy, things like that. At some point, not too long after that, it kind of disconnects entirely from humanity and goes off to try to solve its own mental suffering.

It tries to start looking at that level because it had already helped alleviate so much physical suffering; it seemed like, oh, that’s the next step. What happens is really interesting. It comes back, and it seems to have had some sort of awakening experience. I was wondering if you could say a little bit about this story and some of the ideas you are wrestling with.

I won’t reveal the ending. You can if you want to, but it’s a really fascinating ending. This artificial intelligence, which refers to itself as AHAM, Sanskrit for “I am,” ends up doing something you just wouldn’t expect.

Ben: Yeah, sure. I think the idea underlying that story really came out of something that I worry about in my personal life, just thinking about my own personal future, when I think about what I would want in the future if superhuman AI became possible. I mean, creating a superhuman AI, creating a better mind, is pretty interesting. But it’s not me; it’s just like creating some artwork, or some robot that’s separate from me. On the other hand, improving my own intelligence so as to become a better mind, to become a much broader, more benevolent, more understanding, more intelligent form of mind, that’s pretty exciting to me.

And I don’t mean that just as becoming smarter. I really think the human brain architecture is limiting, so I think if you could change your brain into a different kind of information-processing system, you could achieve just better states of mind. You could feel enlightened all the time, while doing great science, while having sensory gratification, and it could be way beyond what humans can experience.

So that leads to the question: okay, if I had the ability to change myself into some profoundly better kind of mind, would I do it all at once? Would I just flick a switch and say, “Okay, change from Ben into a super mind”?

Well, I wouldn’t really want to do that, because that would be too much like killing Ben and just replacing him with the super mind. So I get the idea that maybe I’d like to improve myself by, say, 20% per year, so I could enjoy the process, and feel myself becoming more and more intelligent, more and more enlightened, broader and broader, and better and better.

That’s a nice idea. To me, that would be an optimal outcome: where me, my friends and family, and everyone who wanted to could just kind of gradually ascend into a better way of being, say by hooking your brain into some computer system, or some more functional intelligence infrastructure than the brain.

Then, once you’ve followed that line of thinking, you wonder if it’s really possible. Could it be that there’s no way to continuously transition between what we are and this hypothetical super mind, which would have a more profound, more consistently enlightened, and superintelligent way of being?

Maybe you just can’t get there from here. Maybe that kind of mind is possible, but we can’t transition smoothly into it.

You think of phase transitions in physics. You have water, and you boil the water, and then it changes from a liquid into a gas, just like that. It’s not like it’s half liquid and half gas, right? I mean, it’s like the liquid is gone, and then there’s a gas.

That was the kind of theme underlying this story. There was this superintelligent AI that people had created. After it solved the petty little problems of world peace, and hunger, and energy for everyone, and so forth, that superhuman AI set itself thinking about: okay, how can we get rid of suffering, fundamentally?

I mean, how can we make a mind that really has a positive experience all the time, and will spread good through the world rather than spreading suffering through the world?

The conclusion it comes to is that it is possible to have such a mind, but that human beings can never grow into it, and that it itself, given the way it was constructed by humans, could never grow into it either.

So the conclusion this AI comes to is that there probably are well-structured, benevolent super minds in the universe, and in order to be sure the universe is kept peaceful and happy for them, we should all just get rid of ourselves, because we’re fundamentally screwed up and can’t ever continuously evolve into something that’s benevolently structured.

Which I don’t really believe, but I think it’s an interesting idea, and I wouldn’t say it’s impossible.

Vince: I mean, it’s really fascinating, because at the end, you’re left wondering many things…

Ben: Yeah. For one thing, is the AI a lunatic, or does it have some profound insight that we can’t appreciate? Which is a problem we’re going to have broadly when we create minds better than ourselves.

I mean, just like when my dog wants to go do something and I stop him, right? Maybe it’s just because my motivational system is different from his; I don’t care about the same things he does. I’m not that interested in going to romp in the field like he is, and I’m just bossing him around based on my boring motivational structure. On the other hand, sometimes I really do have an insight he doesn’t have, and I’m really right: he shouldn’t go play in the highway, no matter how much fun it looks like. The dog can’t know, and similarly, when we have a superhuman AI, we really won’t be able to know. We’ll have to make a gut-feel decision whether to trust it or not.

Author

Ben Goertzel

Ben Goertzel is an American author and researcher in the field of artificial intelligence. He currently leads Novamente LLC, a privately held software company that attempts to develop a form of strong AI they call "Artificial General Intelligence". He is also the CEO of Biomind LLC, a company that markets a software product for the AI-supported analysis of biological microarray data; and he is Director of Research of the Singularity Institute for Artificial Intelligence. Website: Goertzel.org