We’re back again with Artificial Intelligence researcher and Zen-dabbler, Ben Goertzel. We continue our exploration of some of the major themes in his science fiction story “Enlightenment 2.0”. This precipitates a conversation about whether consciousness is a result of the mechanisms of the brain, or whether it is fundamental. And connected to that: what are the ethical implications of creating an artificial intelligence, if we do indeed see it as having Buddha-Nature?
Finally, Ben shares what he has discovered while exploring the notion of “artificial wisdom”–including the difference between intelligence and wisdom. He also talks about the seeming incompatibility between intense scientific thinking and enlightenment, and how that might be resolved by creating a wiser and more intelligent super-mind.
This is part 2 of a two-part series. Listen to part 1, Enlightenment 2.0.
Vince: One question that came up for me when I read the story was that on the one hand, Aham seemed to really recognize that enlightenment was, kind of, independent of conditions. In a way, it didn’t really matter so much the hardware behind what was being enlightened, that there was a way in which that was not different for the AI than it was for humans. He kind of recognized that point, which was, I think, if you were to define enlightenment as just getting better and better and better–most Zen masters, for instance, would slap you in the face with a stick and be like, “No, that’s not enlightenment.”
So Aham, on the one hand seemed to recognize that, but on the other hand had this kind of Bodhisattvic ideal going on where I need to find a way to alleviate suffering, that his conclusions were based on that kind of thinking… well if we can make our experience better, and we can become more benevolent, like you’re saying, then that is a better expression, if you will, of enlightenment. So it seems like you had, kind of, two things going on. Is that true, where you saw enlightenment as something that was independent of conditions, and yet you were also focusing on the betterment of conditions?
Ben: Yes, I would say the story is probably supposed to be like a science fiction koan, or something. It’s not all supposed to quite hang together and make sense. But I think this ties in with the difference in the way I look at consciousness and awareness and experience, compared to how most AI researchers look at it, which I think goes back to my background in Zen and other spiritual practices. This June in Hong Kong, I organized a workshop on machine consciousness, which was mostly AI researchers and roboticists, and some psychologists. I was struck by how different my basic perspective on consciousness and experience was from that of most of these researchers, because my basic view is that awareness and consciousness are just everywhere and in everything. That’s the ground, and then different systems, like computers or brains or robots, whatever it may be, manifest that universal awareness in different ways. So to me, if you ask whether an AI can be conscious, can have experience, you’re really asking, “Can we configure matter in a way that will manifest the universal awareness in the same sort of way that the human brain manifests awareness?” And that’s not the way AI people look at it, for the most part, because AI people look at it like, “Physical reality is the ground. You’re building a physical system, and somehow consciousness is produced by that physical system.” My view seems related to a Buddhist perspective, because every Buddhist I talk to has a view that’s roughly like mine: mind and awareness are everywhere and in everything, that’s the basic thing, and they come out of different physical things in different ways. You look at the physical world as being less fundamental than the mind and the experience. It’s totally not the way most AI researchers look at it.
Vince: And have you found that that different view changes things? We could call it, in Buddhist thinking, the view of the Yogachara Buddhists; they’re called the mind-only school. You’re right, this is definitely a strong tendency in the Buddhist tradition, to see mind as fundamental. Have you found that that view, or that predisposition, changes the way that you approach the whole problem of artificial intelligence?
Ben: In a way, yeah. The reason I brought that up is because it ties in with the story you mentioned, that Aham saw enlightenment as being kind of independent of whether he was a machine or we were a brain. And I think if you take the kind of mind-only view of things, then that becomes kind of obvious. Like enlightenment and experience are always there, they just kind of express themselves in different ways through different kinds of systems.
In terms of my approach to AI, probably the biggest impact this perspective has had is to cause me not to look for some magic ingredient that you need to put into your AI system to make it have experience, or consciousness, or awareness, or whatever it is. I don’t think there is that magic ingredient; we’re not going to poke somewhere in the brain and find, “Yeah, here’s the consciousness cell, here’s the thing that lets us have experience and understand the world.” A lot of people thinking about AI are misled by looking for the seat of awareness and experience in some mechanism. And then they think they don’t know how to make AI, because they don’t know how to build that mysterious mechanism. My view is more that wherever you look, you aren’t going to find that mysterious mechanism; it’s just immanent in being. And yet, you can build an information processing system and it’s going to have that awareness and experience, because everything does. I don’t think experience and awareness are something you have to build into the system.
Vince: That’s really fascinating, because we’ve spoken to people in the past who contend the opposite. One example coming to mind is Nova Spivack, the founder of Twine. When we talked to him, he argued that there is no way a machine intelligence could ever have Buddha-Nature. So I thought it was really fascinating that you would say that.
Ben: To me everything is. I think my cell phone has Buddha-Nature. Everything has Buddha-Nature. So that doesn’t concern me. It’s more a matter of does it display its Buddha-Nature in the same sort of way a person does or not. That’s a different question.
Vince: And is your sense, from an ethical standpoint that because everything has Buddha-Nature that there is an intrinsic value to things that might not be there if we didn’t think it had? And I ask this question because assuming yourself, or someone else is able to develop an AI, immediately the questions will be, “How do we interface with this intelligence?” Do we treat it like it’s human in a way, do we treat it like it has certain ethical value? I guess I just wonder what you think about that.
Ben: Well, to me there is no question about it. I mean, according to my own ethics, I would rank intelligent machines ethically at least on the same level as people. Of course, ethical decisions are not something that human beings are very good at making, even within the domain we are familiar with. So I wouldn’t expect us to cope well with the ethical decisions that come up with the non-human minds that we may create. This is something I’ve often thought about with AIs. We focus on making AIs that are more intelligent than people, which I am sure will be possible. But it should also be possible to make AIs that are more ethical, more compassionate, more empathetic than people.
Vince: And that kind of ties back to the whole Enlightenment 2.0 story.
Ben: Yeah, I think we can make AIs that are more compassionate or less compassionate than people. I mean, in a way, people are probably better at intelligence than compassion. It may be easier for us to make a more compassionate machine than a more intelligent machine. But that’s of course not what researchers are focused on now. Partly because 80 percent of AI funding comes from the military.
Vince: A bastion of compassion. [Laughs]
Ben: [Laughs] Exactly.
Vince: I found all these things really interesting. You could think of this as far-fetched and far out, but if you’re familiar with the kind of work that you’re doing, this is happening right now anyway. There are so many people, and so much money, behind the development of these types of things. It seems pretty likely that we are going to be able to create something like this, and that brings up all these sorts of interesting questions. I am glad that you are one of the few AI researchers actually grappling with them, instead of just assuming that the point is to make a really super-intelligent system and we will deal with the consequences later.
Ben: Yeah, in terms of the consequences, I think you have to consider them in the context of all the other technologies that people are developing. We have nanotechnology and biotechnology that can lead to genetically engineered organisms. There are a lot of other technologies with their own benefits and dangers being developed at the same time.
I have a gut feeling that if we don’t create some kind of non-human minds to help us manage all these other technologies, then human beings are likely to create a lot of destruction with technologies aside from AI. But then there is a risk in AI too: we could create quite malevolent AIs. We could create AIs that serve some national interest and try to help a certain group of people and oppress another group of people. Or we could create benevolent AIs that have a goal of helping us to manage all these other technologies and helping us grow into better people and better minds and so forth.
One thing most people don’t have a feel for is that, I think, human minds are a very special example out of the scope of possible minds that could exist. There are a lot of possibilities out there, and that holds in terms of our states of experience and consciousness as well. In Zen and various other wisdom traditions you can experience a lot of states of consciousness that most people don’t even imagine exist…
Ben: …On the other hand, all of those are probably a small subset of the states of consciousness that some non-human mind could experience. There’s a lot of things out there.
Vince: Right, so you’re saying, different types of minds could have a whole different range of potential altered experiences or understandings?
Ben: I would say so. I’m curious, how many of those I could experience and still have any sense of being me, which, of course, gets into a thorny question, because, I mean, from a Zen perspective, the sense of being me, as an individual, isn’t necessarily so important anyway. But, I seem to be somewhat attached to it, in practice. [Laughter]
Vince: Now, one last question I had for you. You wrote a really interesting article called “Artificial Wisdom.” In the article you acknowledge, “at first, I thought, artificial wisdom might just be, kind of, word play on artificial intelligence.” But you actually ended up exploring it further, to see, “is there any difference between what we think of as intelligence and then, what we call wisdom?” And as I’m sure you’re well aware, wisdom is, kind of, a fundamental idea in all of the different Buddhist traditions, and probably all of the wisdom traditions. There’s a reason they’re called wisdom traditions, I guess. I was wondering if you could share with us some of the things that you ran across as you explored this topic, the difference between wisdom and intelligence, and how that relates, maybe, to the Buddhist ideals of wisdom?
Ben: Yeah. One of the things that I’ve thought about a lot recently, when thinking about the nature of wisdom, is the different kinds of memory that we have. So we have declarative memory–memory of facts and beliefs. Procedural memory–which is, how to do stuff. Which is different, because we sometimes will know how to do something without even being able to explain how. Then, episodic memory–which is just our life history, everything we’ve been through and remember. And it seems that there’s intelligence associated with each of these.
In western culture we tend to value declarative knowledge and procedural knowledge a little more. I think, in oriental culture, episodic knowledge, life history and then collective history of a group of people is valued a lot more. One of the things associated with wisdom, I think, is being able to bring all of these together in a, sort of, synergetic way, like, facts and beliefs, how-to and then, experiences. And bringing all those together in a combined way. Something most people don’t do very well. And I think it’s something that more enlightened minds do a lot better than I do, or than most people do.
Another thing that I think is interesting in terms of wisdom and AIs is the ability to tap into experience and lessons going beyond yourself. Part of the meditative experience is a kind of oceanic feeling that you’re just sort of part of a larger whole, and you’re tapping into this huge realm of patterns and experiences and forms that go beyond your individual self, which ties in with what Jung called the collective unconscious. And a lot of the wisdom you get from meditation kind of has to do with tapping into something beyond your individual self. I think AIs should be able to do that in different ways than humans can. I mean, maybe in the same ways that humans can also.
For human beings, mental telepathy doesn’t work very well. I tend to believe that some kinds of paranormal phenomena do exist, but it’s clear we’re not very good at them; they’re limited in power, if they exist at all. Whereas two AIs could swap experiences, thoughts, and feelings. They could just open a channel from one guy’s mind to another, you know. And that’s a pretty amazing thing. So I think AIs should have such a better ability to share with each other that they should be able to bring collective knowledge and wisdom to bear more than we can. They won’t be trapped in their individual universes as rigidly as we tend to be.
This kind of gets at what frustrated me about Zen and Buddhist practice overall. I felt like I could meditate and get into some really profound and enriching states, and then if I wanted to do science and, like, discover new things, I had to get out of those states of mind to be able to think really hard. Of course you could ask, “But why was that important to me? Why would I want to think really hard, why not just feel enlightened and peaceful?” For whatever reason, I was motivated to get out of those states of mind and think really hard, and bang my head against the wall of hard science problems, which is frustrating and involves suffering as well as joy. It has its ups and downs, and doesn’t leave you feeling very enlightened most of the time.

I would like to be able to have the most profound meditative bliss I’ve ever had, at the same time as making amazingly fast progress solving difficult science and engineering and math problems. And I’m not sure you can do that while being a human. It may be that being enlightened in some sense sops up all your brain power, whereas being a really good scientist sops up all your brain power too. Either because we have limited brain power, or because of the way the brain is built, we can’t do both those things at the same time. Maybe we can get around that by moving to better brains, or improving our brains, or something.

Of course, when I discuss this with the Zen people, they say, “Well, it’s a false dichotomy, you need to get beyond it.” In a way that’s true. On the other hand, when I look around at the Zen masters I’ve met, or at really spiritually enlightened people, in practice they’re not developing advanced new scientific ideas, right? They don’t mind that they’re not, but they’re not. So I don’t see that they’ve gotten beyond that dichotomy in the way that I would like to.
Vince: Right, so you’re saying getting beyond the dichotomy would look like being able to both access blissful states, at the same time think really hard and be able to innovate in that way?
Ben: Yeah, I would like to be able to do both those things at the same time, and I don’t know how. I think even great scientists like David Bohm, who was deeply into wisdom traditions, you know, Bohm had this long relationship with Krishnamurti and all this. I don’t feel like Bohm really got around that problem either. And it’s interesting, too, that his most scientifically productive years were when he was younger; when he was older and got deeply, deeply into spiritual stuff, that was not his most intensely productive period as a scientist.
So there was some sort of dichotomy there, it seemed like. And this has concerned me in terms of wisdom, because it’s made me feel like my wisest states of mind are not the same as my most intellectually productive states of mind, and I don’t like that fact. Maybe that’s the way it has to be, or maybe that’s just the way it has to be for people, because people are limited.
Vince: Well, I was wondering if there was anything else, along the lines that we were discussing, that you wanted to speak about or share with listeners who might be interested in the intersection between these two fields?
Ben: The one thing that has struck me a lot, which is not so much about Zen but about Buddhism, is that I feel like Buddhism and other wisdom traditions genuinely got a lot of insights into the mind that science hasn’t caught up to yet. I read stuff from the Buddhist logicians, Dharmakirti and Dignāga, the medieval Buddhist logicians, and I felt like these guys mapped out things about consciousness, and the relations between different states of consciousness, that modern psychologists and neuroscientists and AI guys haven’t come close to getting.
My honest feeling is that we would progress faster toward understanding the brain and mind, and building powerful and beneficial AIs, if people studied that stuff at the same time as they study how the brain works, and study computer science and algorithms and so forth. That doesn’t really seem to happen, which is too bad. If there are listeners who are working on AI or neuroscience and are spiritual practitioners, my call would be to think more about the insights into the mind that you can get from these wisdom traditions, and how you can synergize them with scientific work.