How to build an A.I. brain that can surpass human intelligence
Watch the newest video from Big Think: https://bigth.ink/NewVideo
Join Big Think Edge for exclusive videos: https://bigth.ink/Edge
----------------------------------------------------------------------------------
Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially. It's all well and good to be super-intelligent, he argues, but if you don't have rationality and empathy to match it, the results will be wasted and we could just end up with an incredible number-cruncher. In this illuminating chat, he makes the case for thinking bigger. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
----------------------------------------------------------------------------------
BEN GOERTZEL:
Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; Chairman of AI software company Novamente LLC; and Chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
----------------------------------------------------------------------------------
TRANSCRIPT:
Ben Goertzel: If you think much about physics and cognition and intelligence it’s pretty obvious the human mind is not the smartest possible general intelligence any more than humans are the highest jumpers or the fastest runners. We’re not going to be the smartest thinkers.
If you are going to work toward AGI rather than focusing on some narrow application there’s a number of different approaches that you might take. And I’ve spent some time just surveying the AGI field as a whole and organizing an annual conference on AGI. And then I’ve spent a bunch more time on the specific AGI approach which is based on the OpenCog open-source software platform. In the big picture one way to approach AGI is to try to emulate the human brain at some level of precision. And this is the approach I see, for example, Google DeepMind is taking. They’ve taken deep neural networks which in their common form are mostly a model of visual and auditory processing in the human brain. And now in their recent work such as the DNC, the differentiable neural computer, they’re taking these deep networks that model visual or auditory processing and they’re coupling that with a memory matrix which models some aspect of what the hippocampus does, which is the part of the brain that deals with working memory, short-term memory among other things. So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe you take more and more neural networks emulating different parts of the human brain and try to get them all to work together, not necessarily doing computational neuroscience but trying to emulate the way different parts of the brain are doing processing and the way they’re talking to each other.
A totally different approach is being taken by Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhuman, infinitely intelligent thinking machine in something like 50 lines of code. The problem is it would take more computing power than there is in the entire universe to run. So it’s not practically useful, but researchers are then trying to scale down from this theoretical AGI to find something that will really work.
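Hutter's construction (known as AIXI) is, in spirit, an agent that weights every computable environment by an algorithmic prior of 2 to the power of minus the program length, then picks the action with the highest prior-weighted expected reward. The sketch below is a drastic, hypothetical simplification of that decision rule: instead of enumerating all computable environments, it uses three invented candidate environments with made-up complexity scores, purely to illustrate why shorter (simpler) hypotheses dominate the decision.

```python
# Toy sketch in the spirit of AIXI (not Hutter's actual formulation):
# weight a small, hand-picked set of candidate environments by
# 2^-(complexity) and choose the action with the best weighted reward.

ACTIONS = [0, 1]

# Each hypothetical environment: (complexity in "program bits", reward model).
candidates = [
    (1, lambda a: 1.0 if a == 0 else 0.0),  # simplest world: action 0 pays off
    (2, lambda a: 0.0 if a == 0 else 1.0),  # more complex world: action 1 pays off
    (3, lambda a: 0.5),                     # indifferent world: either action pays 0.5
]

def best_action():
    # Solomonoff-style prior: simpler environments get exponentially more weight,
    # so the one-bit environment dominates the expected-reward calculation.
    def expected_reward(a):
        return sum(2.0 ** -k * reward(a) for k, reward in candidates)
    return max(ACTIONS, key=expected_reward)
```

The real construction replaces this finite list with all computable environments and adds planning over full future histories, which is exactly what makes it uncomputable in practice.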
Now the approach we’re taking in the OpenCog project is different than either of those. We’re attempting to emulate at a very high level the way the human mind seems to work as an embodied social generally intelligent agent which is coming to grips with hard problems in the context of coming to grips with itself and its life in the world. We’re not trying to model the way the brain works at the level of neurons or neural networks. We’re looking at the human mind more from a high-level cognitive point of view. What kinds of memory are there? Well, there’s semantic memory about abstract knowledge or concrete facts. There’s episodic memory of our autobiographical history. There’s sensory-motor memory. There’s associative memory of things that have been related to us in our lives. There’s procedural memory of how to do things.
Read the full transcript on: https://bigthink.com/videos/ben-goertzel-how-to-build-an-ai-brain-that-can-surpass-human-intelligence
Why 'upgrading' humanity is a transhumanist myth
New videos DAILY: https://bigth.ink
Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge
----------------------------------------------------------------------------------
Though computer engineers claim to know what human consciousness is, many neuroscientists say that we're nowhere close to understanding what it is, or its source. Scientists are currently trying to upload human minds to silicon chips, or re-create consciousness with algorithms, but this may be hubristic because we still know so little about what it means to be human. Is transhumanism a journey forward or an escape from reality?
----------------------------------------------------------------------------------
DOUGLAS RUSHKOFF:
Douglas Rushkoff is the host of the Team Human podcast and a professor of digital economics at CUNY/Queens. He is also the author of a dozen bestselling books on media, technology, and culture, including, Present Shock, Program or Be Programmed, Media Virus, and Team Human, the last of which is his latest work.
----------------------------------------------------------------------------------
TRANSCRIPT:
Douglas Rushkoff: I think we still know so little about what it means to be human that the confidence with which we think we can upload ourselves to silicon or recreate ourselves with algorithms is shocking to me. The only ones out there who think they know what human consciousness is are computer engineers. If you talk to actual brain researchers and neuroscientists, they say, we're nowhere close. We don't even know for sure what goes on in a single square centimeter of soil. We're still trying to teach agriculture companies that the soil is alive, that it's not just dirt that you can put chemicals on. It's a living matrix. If we don't even know what a single centimeter of soil is, how do we know what the human brain is? We don't. We don't know what the source of consciousness is. We don't know where we come from. We don't even know if there's a meaning to this universe or not. Yet, we think that we can make a simulation that's as valid as this? Every simulation we make misses something. Think about the difference between being in a jazz club and listening to a great CD. There's a difference, you know. And some of those differences, we understand, and some of them, we don't.
So when I see people rushing off to upload consciousness to a chip, it feels more like escape from humanity than it is a journey forward. And I get it. Life is scary. I mean, women, real-life women, are scary. You know, people are scary. The moisture is scary. Death is scary. Babies are scary. Other people who don't speak the same language or have the same customs, they're scary. All sorts of stuff is scary. And I understand the idea of this kind of having a Sim City perfected simulation that I can go into and not have to worry about all that stuff I don't know, where everything is discrete, everything is a yes/no, this/that, all the choices have been made. There's a certain attractiveness to that, but that's dead. It's not alive. There's no wonder. There's no awe. There's nothing strange and liminal and ambiguous about it.
I was on a panel with a famous transhumanist, and he was arguing that it's time that human beings come to accept that we will have to pass the torch of evolution to our digital successors. And that once computers have the singularity and they're really thinking and better than us, then we should really only stick around as long as the computers need us, you know, to keep the lights on and oil their little circuits or whatever we have to do. And then, after that, fade into oblivion. And I said, hey, no, wait a minute. Human beings are still special. We're weird. We're quirky. We've got David Lynch movies and weird yoga positions and stuff we don't understand, and we're ambiguous and weird and quirky. You know, we deserve a place in the digital future. And he said, oh, Rushkoff, you're just saying that because you're human. As if it's hubris, right? Oh, I'm just defending my little team. And that's where I got the idea, all right, fine, I'm a human. I'm on Team Human. And it's not Team Human against the algorithms or against anything other than those who want to get rid of the humans. I think humans deserve a place.
Certainly, until we understand what it is we are, we shouldn't get rid of us. And as far as I'm concerned, we're cool. We're still weird and funny and wonderful. And yeah, we destroyed the environment. We did really nasty things. But I would argue we do those things when we're less than human. We do those things when we can dehumanize others. You can't have slaves if you're thinking of them as people. You can only have slaves if you're thinking of them...
For the full transcript, check out https://bigthink.com/videos/douglas-rushkoff-critiques-transhumanism
Why religion is literally false and metaphorically true
New videos DAILY: https://bigth.ink
Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge
----------------------------------------------------------------------------------
Where do your beliefs come from? There's a school of thought that sees religion as a mind virus that wastes the time and effort of human beings, but evolutionary biologist Bret Weinstein offers a more reasonable explanation: "belief systems have flourished because they have facilitated the interests of the creatures involved," he says. Religious people are evolutionarily fitter than non-believers, not because they are protected by a deity but rather because religion is a form of adaptive evolution. Religion is so widespread because it has massive survival advantages beneath the supernatural elements—that's what Weinstein refers to as "literally false and metaphorically true". For example, believing in heaven is literally false—there is no such place—but believing in it keeps your descendants in good standing in the religious community after you're gone, thus setting your lineage up to continue. The thought itself may be untrue, but the result of the thought is evolutionarily effective. "Despite the fact that human beings think that they have escaped the evolutionary paradigm, they’ve done nothing of the kind, and so we should expect the belief systems that people hold to mirror the evolutionary interests that people have," Weinstein says. For more from Bret Weinstein, visit bretweinstein.net.
----------------------------------------------------------------------------------
BRET WEINSTEIN:
Professor Bret Weinstein has spent two decades advancing the field of evolutionary biology with a focus on adaptive trade-offs. He has made important discoveries regarding the evolution of cancer and senescence as well as the adaptive significance of moral self-sacrifice.
He applies his evolutionary lens to human behavior in order to sketch a path through the many crises we face as a species. By confronting emerging authoritarianism, and abandoning the archaic distinction between political right and left, we can discover a new model of governance that frees humanity to seek a just, sustainable and abundant future.
----------------------------------------------------------------------------------
TRANSCRIPT:
Bret Weinstein: We have minds that are programmed by culture that can be completely at odds with our genomes. And it leads to misunderstandings of evolution, like the idea that religious belief is a mind virus, that effectively these belief structures are parasitizing human beings and they are wasting the time and effort that those human beings are spending on that endeavor, rather than the more reasonable interpretation, which is that these belief systems have flourished because they have facilitated the interests of the creatures involved.
Our belief systems are built around evolutionary success and they certainly contain human benevolence, which is appropriate to phases of history when there is abundance and people can afford to be good to each other. The problem is if you have grown up in a period in which abundance has been the standard state you don’t anticipate the way people change in the face of austerity. And so what we are currently seeing is messages that we have all agreed are unacceptable reemerging because the signals that we have reached the end of the boom times, those signals are everywhere, and so people are triggered to move into a phase that they don’t even know that they have.
Despite the fact that human beings think that they have escaped the evolutionary paradigm they’ve done nothing of the kind, and so we should expect the belief systems that people hold to mirror the evolutionary interests that people have rather than to match our best instincts—when we are capable of being good to each other because there’s abundance, we have those instincts and so it’s not incorrect to say that human beings are capable of being marvelous creatures and being quite ethical.
Now I would argue there’s a simple way of reconciling the correct understanding that religious belief often makes claims that, in many cases, fly in the face of what we can understand scientifically, with the idea that these beliefs are adaptive. I call it the state of being literally false and metaphorically true. A belief is literally false and metaphorically true if it is not factual but if behaving as if it were factual results in an enhancement of one’s fitness. To take an example, if one behaves in let’s say the Christian tradition in such a way as to gain access to heaven one will not actually find themselves at the pearly gates being welcomed in, but one does tend to place their ...
For the full transcript, check out https://bigthink.com/videos/bret-weinstein-how-evolution-explains-religion
Jordan Peterson: The fatal flaw in leftist American politics
New videos DAILY: https://bigth.ink
Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge
----------------------------------------------------------------------------------
What is political extremism? Professor of psychology Jordan Peterson points out that America knows what right-wing radicalism looks like: white nationalism. "What's interesting is that on the conservative side of the spectrum, we've figured out how to box in the radicals and say, 'No, you're outside the domain of acceptable opinion,'" says Peterson. But where's that line for the Left? There is no universal marker of what extreme liberalism looks like, which is devastating to the ideology itself but also to political discourse as a whole. Peterson is happy to suggest such a marker: "The doctrine of equality of outcome. It seems to me that that's where people who are thoughtful on the Left should draw the line, and say no. Equality of opportunity? [That's] not only fair enough, but laudable. But equality of outcome…? It's like: 'No, you've crossed the line. We're not going there with you.'" Peterson argues that it's the ethical responsibility of left-leaning people to identify liberal extremism and distinguish themselves from it the same way conservatives distance themselves from the doctrine of racial superiority. Failing to recognize such extremism may be liberalism's fatal flaw.
----------------------------------------------------------------------------------
JORDAN PETERSON
Jordan B. Peterson, raised and toughened in the frigid wastelands of Northern Alberta, has flown a hammer-head roll in a carbon-fiber stunt-plane, explored an Arizona meteorite crater with astronauts, and built a Kwagu'l ceremonial bighouse on the upper floor of his Toronto home after being invited into and named by that Canadian First Nation. He's taught mythology to lawyers, doctors and business people, consulted for the UN Secretary General, helped his clinical clients manage depression, obsessive-compulsive disorder, anxiety, and schizophrenia, served as an adviser to senior partners of major Canadian law firms, and lectured extensively in North America and Europe. With his students and colleagues at Harvard and the University of Toronto, Dr. Peterson has published over a hundred scientific papers, transforming the modern understanding of personality, while his book Maps of Meaning: The Architecture of Belief revolutionized the psychology of religion. His latest book is 12 Rules for Life: An Antidote to Chaos.
----------------------------------------------------------------------------------
TRANSCRIPT:
JORDAN PETERSON: I would like to talk briefly about depolarization on the Left and the Right, because I think there's a technical problem that needs to be addressed. So here's what I've been thinking about.
It's been obvious to me for some time that, for some reason, the fundamental claim of post-modernism is something like an infinite number of interpretations and no canonical overarching narrative. Okay, but the problem with that is: okay, now what?
No narrative, no value structure that is canonically overarching, so what the hell are you going to do with yourself? How are you going to orient yourself in the world? Well, the post-modernists have no answer to that. So what happens is they default—without any real attempt to grapple with the cognitive dissonance—they default to this kind of loose, egalitarian Marxism. And if they were concerned with coherence that would be a problem, but since they're not concerned with coherence it doesn't seem to be a problem.
But the force that's driving the activism is mostly the Marxism rather than the post-modernism. It's more like an intellectual gloss to hide the fact that a discredited economic theory is being used to fuel an educational movement and to produce activists. But there's no coherence to it.
It's not like I'm making this up, you know. Derrida and Foucault themselves were barely repentant Marxists. They were part of the student revolutions in France in the 1960s, and what happened to them, essentially—and what happened to Jean-Paul Sartre for that matter—was that by the end of the 1960s you couldn't be conscious and thinking and pro-Marxist. So much evidence had come pouring in from the Soviet Union (it was still the Soviet Union at that point) and from Maoist China, of the absolutely devastating consequences of the doctrine, that it was impossible to be apologetic for it by that point in time.
So the French intellectuals in particular just pulled off a sleight of hand and transformed Marxism into post-modern identity politics. And we've seen the consequence of that. It's not good. It's a devolution into a kind of tribalism ...
For the full transcript, check out https://bigthink.com/videos/top-10-jordan-peterson-leftist-liberal-politics
Richard Dawkins: Why Religion and Evolution Don't Mix Well
New videos DAILY: https://bigth.ink
Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge
----------------------------------------------------------------------------------
What is the Darwinian survival value of religion? That's not the right question, says Richard Dawkins. To find the right question, he relies on an evolutionary analogy: Why do moths fly into flames? It means instant death, so what's the evolutionary value of this kamikaze behavior? Dawkins delivers a crash course in proximate and ultimate causality, two very important distinctions in biology. Moths evolved to navigate using celestial objects as compasses. The moon and the stars emit parallel light, a very reliable and consistent reference, meaning a moth can fly in a straight line guided by that light. Candlelight is an entirely different source: its rays radiate outward from a nearby point, so a moth holding a fixed angle to them spirals inward... straight to the hottest part of the flame. These moths aren't suicidal, says Dawkins; it's a misfiring of an evolutionary trait caused by a modern technology in their environment. "The right question is not, 'What's the survival value of a suicidal behavior in moths?'" he says. "The right question is, 'What is the survival value of having the kind of physiology which, under some circumstances, leads you to fly into a flame?'" The survival value of religious behavior may be at the genetic level, Dawkins suggests, and the proximate question in this case would be: what part of our brain does religion serve, and is religion really the only way that function is manifested? Richard Dawkins' new book is Science in the Soul: Selected Writings of a Passionate Rationalist.
----------------------------------------------------------------------------------
RICHARD DAWKINS:
Richard Dawkins is an evolutionary biologist and the former Charles Simonyi Professor of the Public Understanding of Science at Oxford University. He is the author of several of modern science's essential texts, including The Selfish Gene (1976) and The God Delusion (2006). Born in Nairobi, Kenya, Dawkins graduated with a degree in zoology from Balliol College, Oxford, and then earned a master's degree and a doctorate from Oxford University. He has recently left his teaching duties to write and manage his foundation, The Richard Dawkins Foundation for Reason and Science, full-time.
----------------------------------------------------------------------------------
TRANSCRIPT:
Richard Dawkins: I’m very often asked, "What is the Darwinian survival value of religion?" and I usually reply, "That may be the wrong question." You may have to rephrase the question and it may turn out to be not the survival value of religion but the survival value of something else in the brain, which manifests itself as religion under the right circumstances.
Now, a good analogy which I’ve used is the question: Why do moths fly into candle flames? Now, you could describe that behavior as suicidal behavior in moths, self-immolation behavior in moths, kamikaze behavior in moths. That would be one way of phrasing the question, but it’s the wrong question.
If you actually look at the way moth and insect eyes generally work, insects use celestial objects like the sun or the moon or the stars as compasses. It’s important, it’s valuable, there is survival value for any animal moving in a straight line. If an animal wants to move in a straight line, a very good way to do it is to keep a celestial object at a fixed angle, and that’s easy to do in insects because they have compound eyes, very unlike our eyes. Their eye is a whole sort of hemisphere of little tubes looking outwards, and so you can maintain a fixed angle to something like a star or the moon by keeping the moon in one ommatidium. If you do that, because rays from the moon come from optical infinity, they’re parallel, and so if you keep the moon in one ommatidium you will fly in a straight line. It might be say 30 degrees; keep the moon at 30 degrees to your right. And that works, and that’s valuable, and that’s what many insects do.
However, candles are not at optical infinity, candles are close. The rays of light from a candle are therefore not parallel, they are radiating out. If you maintain a fixed angle of say 30 degrees to the rays that are emitted from a candle you will describe a neat logarithmic spiral into the candle flame and kill yourself.
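The geometry Dawkins describes can be checked numerically. In this hypothetical sketch, a simulated moth holds a nearby light source (at the origin) at a fixed 30-degree bearing to its heading; because the bearing is less than 90 degrees, every step has a positive velocity component toward the source, so the distance shrinks monotonically along an inward spiral rather than staying constant as it would for a source at optical infinity:

```python
import math

def simulate_moth(bearing_deg=30.0, start=(10.0, 0.0), step=0.01, max_steps=100000):
    """Moth holds the light (at the origin) at a fixed bearing to its heading.

    For parallel rays (source at infinity) this rule gives a straight line;
    for a nearby source it gives an inward spiral toward the flame.
    """
    beta = math.radians(bearing_deg)
    x, y = start
    radii = [math.hypot(x, y)]
    for _ in range(max_steps):
        r = math.hypot(x, y)
        if r < step:  # effectively "in the flame"
            break
        # Unit vector pointing from the moth toward the light.
        ux, uy = -x / r, -y / r
        # Heading: the light direction rotated by the fixed bearing angle.
        hx = ux * math.cos(beta) - uy * math.sin(beta)
        hy = ux * math.sin(beta) + uy * math.cos(beta)
        x, y = x + step * hx, y + step * hy
        radii.append(math.hypot(x, y))
    return radii

radii = simulate_moth()
# The distance to the flame decreases at every step: a fixed bearing under
# 90 degrees always moves the moth partly toward a nearby source.
assert all(b < a for a, b in zip(radii, radii[1:]))
```

The bearing angle, step size, and starting position here are arbitrary illustration values; the qualitative result (monotone approach to the flame) holds for any fixed bearing below 90 degrees.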
So these moths are not killing themselves, it’s not suicidal behavior; it’s a misfiring of a natural, normal behavior, which before the invention of candles would have worked. And it still does work the vast majority of time because most of the time in the dark a moth is not subjected to artificial light. So ask the right question. The right question...
For the full transcript, check out https://bigthink.com/videos/richard-dawkins-did-evolution-gives-us-religion