

Podcast

The Fourth Age

Squirro's Podcast: Redefining AI - Season 1

 

Serial entrepreneur, technologist, and futurist Byron Reese has enjoyed a wide range of success over 30 years, including two NASDAQ IPOs as well as the sale of the three companies he founded. Byron has spoken around the world to both technical and non-technical audiences; his keynotes and appearances include SXSW, TEDx Austin, Google, Nvidia, FedEx, Fortune 1000 companies (Dell Foundation, Johnson & Johnson, Oticon, Swisslog), universities (Rice University, the University of Texas, Queen's University, TWU), and futurist conferences (TimeMachine, PICNIC Festival in Amsterdam, Wolfram Data Summit, and the IEEE Conference), among others. He is the former CEO of Gigaom (and is still its primary shareholder) and was the creator and host of the Voices in AI podcast.


Full Transcript

Intro:
Everybody who follows AI has to have noticed that there are these two extreme camps: people who think that AI is going to usher in this golden age of humanity, there will be no hunger, and there will be no greed, and other people who think it's an existential threat to us all, it's going to kill us all. And that's a wide range of opinions. It's unlike the question of, well, when will we get to Mars? You know, well, maybe 2030, maybe 2080. But nobody's saying tomorrow or in 45,000 years. That's the range of these views on AI, and I wanted to understand what it was that some people believed that other people didn't.

Lauren:
Hi, everyone. My name is Lauren Hawker Zafer and this is Redefining AI, a tech podcast that focuses on key narratives that help people explore artificial intelligence, machine learning, insight agents, and the insight era.

In this episode, I've been joined by Byron Reese. Byron is an Austin based entrepreneur with a quarter of a century of experience building and running technology companies. He's a well-recognized authority on AI and he holds a number of technology patents. Byron is also a futurist, with strong conviction that technology will help bring about a new golden age of humanity. He's the author of four technology books, with his work being featured in multiple news outlets.

As we've started to understand, Byron is an impressive personality, and I'm looking forward to discussing the fourth age with him today, especially as he, like myself, believes that technology is and will remain what people use to improve their productivity. I'm also curious as to how he remains so optimistic about this fourth turn of change, and how the changing technological landscape can create multibillion-dollar opportunities for organizations. Byron, it is a pleasure, all the way from Austin. Welcome.

Byron:
Thank you so much for having me here.

Lauren:
It's really lovely to have you here. How are you today?

Byron:
I'm doing really well, thank you.

Lauren:
And I'm sure that our guests are curious to hear a little bit about you and what has brought you to this particular point in your own adventures, Byron?

Byron:
Well, I may be the least interesting part of what I have to talk about. I graduated from university in the ‘90s, and I moved to the Bay Area to be a part of that. But then, I used to watch Star Trek, and I read a quote once from Gene Roddenberry, the creator, who said that, “In the future there will be no hunger, there will be no greed, and all the children will know how to read.” And I wondered if that was true or if it just rhymed. And so, I was very curious why the future happens the way it does. That is what a futurist is: not to prognosticate, but to try to understand why one thing happens and not another. And the only way you get insight into that is by going into the past. So, I really write books mostly about the past.

And if you think about this way, that's what AI is, right? It is data about the past. We look for patterns in it and we use those patterns to make predictions into the future. And that's kind of what a futurist is.

Lauren:
So, would you say then, I mean, you write about the past but you're looking to the future? You try to understand the future? Do you live in the future? Do you live in the past? Or do you live in the present?

Byron:
I don't know. I mean, my wife and I have four children that we homeschool. That is an exercise in thinking about the future, because you realize they are going to be in the job market maybe until the year 2100 or something. And so, what do you teach somebody today that would be useful in 2100? I don't know. I don't know how to answer that.

Lauren:
Okay. I mean, it's admirable homeschooling four kids. I have one child of my own and I've always envisioned-- I do partly homeschool, obviously in the evening, to encourage extracurricular activities. And homeschooling as well, getting that balance between the authority of being able to transfer knowledge and teaching your own children, and then changing into the role of a caregiver, a father, everything else that comes with that. It is something that requires a good amount of balance. Like how do you find that balance?

Byron:
Well, I credit more of it to my wife than to me. You know, maybe we should have put one of the four in regular school, like a controlled experiment, you know. We could see how that one turned out versus the other three, but we didn't. And I would say, as a parent, the most important thing is to teach your kids to be good people and you want to see that they are mentally engaged in whatever they are doing. It doesn't really matter what it is. You want to just see all the neurons firing. And then I don't know, and you have a whole lot of trust in them. They do not know how the world operates, but usually they begin life in a really good place in terms of optimism and possibilities about the future that they are going to grow up into.

Lauren:
And keeping them challenged, firing up those neurons, we're talking about cognition, and cognition seems to be of importance to you. We mentioned in the introduction that you've released several books now, Byron, two of which we intend to touch upon today: The Fourth Age and Stories, Dice, and Rocks That Think.

Firstly, what motivates you to write? And did technology play a role in this motivation? I mean, more like, you know, access to outlets, the internet, opportunities to share and communicate?

Byron:
I would say I write about things I'm interested in. So, most of the time when I start a book, like Stories, Dice, and Rocks That Think, most of the material in it, I don't know when I go into the book. And I have a lot of fact checkers who are like a safety net under that. I pursue my own interest. Just like I was talking about with your kids, in the end you just want to see them do something that they are passionate about. Like the people that I know who are the happiest, are doing things that they want to do. And I like figuring things out.

Like Stories began as a book when I watched the Werner Herzog documentary about the cave Chauvet. And that got me interested in like, where did that come from? I mean, just kind of one thing leads to the other. Oftentimes I write chapters for books, I don't even know what the book is going to be. And I would say I'd probably write way more that doesn't get published than the little bit that comes out in the books. Because in the end, you know, I want to write about these things I'm interested in, but in the end you're trying to draw a common thread through them and figure out, what does that teach me other than about that thing itself?

Lauren:
Yeah, and that would probably be my next question. So, I mean, first we've got The Fourth Age and then Stories, Dice, and Rocks That Think. What is the central premise of both? What do you want your readership to take away from both of those if they are interlinked in some way through the narration?

Byron:
The Fourth Age is a philosophy book about artificial intelligence. I'm not supposed to say that, because that doesn't boost sales usually. And I hope it doesn't read like a philosophy book. But that's really what it is. And the opening sentences of that book tell why I wrote it, which is, everybody who follows AI has to have noticed that there are these two extreme camps. People who think that AI is going to usher in this golden age of humanity, and there will be no hunger, and there will be no greed. And other people who think it's an existential threat to us all. It's going to kill us all. And that's a wide range of opinions. It is unlike the question of, when will we get to Mars? You know, well, maybe 2030, maybe 2080. But nobody is saying tomorrow, or in 45,000 years. That is the range of these views on AI. I wanted to understand what it was that some people believed that other people didn't.

There is one central question that it boils down to which is are people machines? If people are machines, then some day we will build a mechanical person and every two years it will double in capability. And if people are not machines, then no machine can ever do what a person does.

I used to have a podcast about artificial intelligence and I would interview all these people about it. I would always ask them, do you believe general intelligence is possible? 97 of 100 said, yes. And only three--

Lauren:
Really?

Byron:
Well, these are AI people. And only three said, no. And I'm in the no camp. The Fourth Age isn't trying to convince anybody of anything. But for the record, that is what I am. I don't believe we are machines. And you don't even have to go particularly spiritual to have that belief. You know, we have these brains. We don't understand how they work and they give rise to these minds that are emergent. Like I'm creative but none of my cells are creative. So, somehow these emergent properties come out. And then, I have something called consciousness, which means I can experience the universe. A thermometer can measure temperature, but it can't feel warm. And I can. And I have to think that those things are wrapped up in intelligence. And I'm not convinced we can make that in a fab. I don't have that much faith.

Lauren:
Interesting. There are so many questions that have come out of that. So, first of all, going back to The Fourth Age is a philosophy book about AI. Now, you said that it wouldn't boost sales. Is that because people are not interested in the philosophical aspect? Or they turn off philosophy on AI? Why is that?

Byron:
A philosophy book about AI sounds like chloroform in print, even to me. Like, oh my gosh, that sounds boring. Philosophy books sound terrible. I mean, I've tried to write it to be engaging, but that is what it really is. It tries to answer the questions, like, is automation going to take all the jobs? Well, again, that just boils down to a philosophical question: are we actually machines? If we are machines, then the machines are going to be better at doing our jobs than us. If we are not machines, then they are not. So, that in a sense makes it a philosophy book. Kierkegaard is never quoted in it, or Spinoza, or anything like that. But it's still fundamentally that.

Lauren:
And then, you've mentioned out of the 100 people that you questioned around, if general AI will ever be possible, 97 said yes and three said no. The three that said no, did they pertain or stem from a certain faculty or thought camp?

Byron:
One of them said, a neuron may be more complicated than a supercomputer. It may operate at the Planck level, right? That is the grain of the universe, the pixels of the universe, as low as you get. You can't cut anything down there in half. And if every neuron you have is more powerful than a supercomputer and it operates at that level, then your chances of building that are low. And one of the guests said, machines don't have souls. So, that's the range of opinions about why we are not machines.

Lauren:
And you said, you are in the no camp?

Byron:
Absolutely. Absolutely. Yes.

Lauren:
I think I'm in the no camp as well.

Byron:
It is very unusual, because when I speak in front of audiences and I say, who here believes you are a machine? 97% of my AI people say yes. In the general public, about 20% believe they are machines. So, it's a huge disconnect. And so, when those people who believe that they are machines make predictions, it is with that core assumption. And if you don't know that, then those predictions can be very, very frightening. Because they know more than I do, right? It may be true, but they may have false beliefs about the nature of things, which would be forgivable. It's what we try to figure out as people. What is the nature of all of this?

Lauren:
Yeah, that's very true. So, who's your ideal reader then? Or does it change dependent on the narrative from book to book? I mean, you mentioned that sometimes you start with writing chapters and then it develops into a book. But you must have, I mean in the sense of, when you write to express, you know you are writing to a certain ideal or optimal. Who is the ideal reader?

Byron:
So, my ideal reader is somebody who I would like to hang out with. And I never feel like I'm bringing down revealed truth from Mount Sinai and I had the tablets and I give them. Like, not at all. It's like, I have worked really hard on understanding this thing. I am dying to tell you what it is. And when I am done, I want to hear what you have been working on, trying to figure out. So, it's very much somebody who appreciates honest inquiry, has a curious mind, willing to listen to what I've been thinking about, and will want to share what they've been thinking about. I get wonderful emails, all the time, even today from people who apply what they are working on to something in the book. And I love that. I love that. Because it really does take on a conversation. And it feels like a conversation at that point.

Lauren:
And is that why you write them?

Byron:
I don't know. I don't know. I mean, I've been having a little bit of a crisis about that lately. And whether I'm going to write anymore or not. I really don't know. It is one of the strangest things. I'm sure, nobody really cares. But it's a compulsion to do it. But then, you know, I'm 33 days away from my next deadline. If I wrote fiction, you could tell your story and put it aside for a month and come back to it and keep telling it. My books are not like that. I have to keep all this stuff in my head at once. And I find it overwhelming. And I just don't know if I'm going to do it anymore.

Lauren:
Okay. But hopefully you do; I mean, we enjoy reading your books. And I think that part of writing a book as well is making demands, not only on yourself as a writer, but on the reader, and also taking care of the reader. How do you try to get that balance as well when you are writing?

Byron:
What do you mean?

Lauren:
Obviously when writing a book, you are trying to meet demands in the sense that you want to pose questions. I mean, in chapter five and throughout your book, you refer back to the three foundation questions: what will artificial intelligence be capable of? Can computers become conscious? And will robots take all of our jobs? And I would say that in doing this, you are critically asking your readers to think about this, and this is part of you making demands on them, to actually contemplate all of these important questions that are foundational to your narrative. But how do you also take care of them, in the sense of making sure that you're taking them with you? Do you have any techniques? Or is it something that just comes naturally in progression when writing?

Byron:
I don't know. I mean, I would say that I always think about my reader. I never try to sell them anything. I never try to persuade them of something, because if it is an open question-- In Stories, I mention something about Homo erectus, this creature that lived, and I have very different views about what they were capable of. And again, they are minority views. I am not in the majority on that. And so, I always point that out. I even quote people who are directly opposed to it, by name and title. I never argue with that person in the book. I let them have their say. So, I try to keep a high amount of honesty with my reader, that I'm not presenting a carefully constructed little view of the world, and then they go off and tell people, erectus was like this, and other people are like, nobody thinks erectus was like-- So, I feel like if I am just honest about my learning process and about where I am outside the consensus, I hope that my reader will be forgiving of that. I don't want them to think that I am trying to pull a fast one. I also try to put a lot of humor in the books. In my experience, you can't really tell the voice of one author from another when you are pulling books off the shelf. And I think maybe the reader is supposed to forget they are reading at some level; they are just kind of absorbing it. And my books don't read like that at all. They are definitely me, and I use popular culture references all the time. I try to be humorous and funny. And I even go down tangents, which usually authors won't. And the bar is high on that. It sure better be interesting. If you are going off topic to tell your reader something, you can do that, but you have to make sure that they are not like, why in the world did he just tell me that? So, I go into digressions that I think are fun.
I used to footnote them all, but I think those ruined the flow of the reading, if your eyes are popping up and down, and they ruin the audiobook, where would you even put them? So, now I just leave them all in the narrative.

There was a lot of humor cut from The Fourth Age because somebody believed it detracted from the gravitas. But nowadays, I would have left it all in.

Lauren:
Interesting. So, if we look at those three foundation questions that we highlighted of what artificial intelligence will be capable of, whether computers can become conscious, and if robots will take all of our jobs? You mentioned that at the dawn of The Fourth Age these questions have become immensely practical. And we also spoke obviously, about the implications that it has on not only our generation, but future generations. I mean, we spoke about the homeschooling and obviously thinking about what future generations are going to do in 2100. What does this shift mean in the sense of thinking about these questions for younger generations as well?

Byron:
I have so much confidence in the younger generation. I think that they really are wonderful. I mean, everybody I know today who's young has a cause or something that they believe in. And they're very socially conscious. And all of these things that, when I was in high school in the ‘80s, I just don't remember. I mean, I think they're wonderful. Unfortunately, they're coming into a world which has peddled fright to them, and to everybody else, not just them. But I kind of chalk it up to the 24-hour news cycle, you know. What in your water is killing you? Tune in after these messages to find out. And I think if you just get that relentless, things are going to be bad, bad, bad, bad, bad, over and over and over, you can't help but internalize it. And I am an optimist, for reasons I'm happy to go through. I can tell you the only thing, the only thing that will keep an optimistic future from happening is if nobody believes in it. Because it's work, right? It's work. So, the minute that everyone is like, I'm not going to.

When they built St. Peter's in Rome, instead of paintings they used mosaics, little tiles. And they had the artisans make an overage of the tiles, which they put away, so that for the next 1,000 years they would be able to repair the mosaics using the exact same tiles, so the colors were the same and everything. And earlier this year, I had an opportunity to go there, open the drawers, and see those handwritten notes from the 1500s of the artisans. It's the same way that the Swedish government, a few hundred years ago, planted an entire island of oak trees so that in 200 years, when they were mature, they would be able to build the kinds of ships that they would need. Sure, they didn't know exactly what was going to happen in 200 years. But they were thinking about it.

I was just thinking about when they put in the sewer system in London, which was a big ordeal. I mean, it was a big thing. And the guy who was in charge asked, what is the biggest it could possibly ever need to be? And they told him, I don't remember, 36 inches across, so say a meter. And then he said, you know, we are only going to get one chance ever to do this, so make it twice that size. And they did. And then, the future where people are flushing their nappies and all that down, it can deal with it, because they believed in the future. And the minute you lose that, the minute that you are like, it doesn't matter, why polish brass on a sinking ship, right? Why bother? That's the only thing we have to be careful of, I think.

Lauren:
And I think that that's something that you do well in the book, The Fourth Age, as well. In the book, you tell us about how we got to the point of the fourth age, and how, rather than what, we should think about the topics we will all be facing very soon. Some of these topics are machine consciousness, automation, employment, creative computers, radical life extension, artificial life, AI ethics, the future of warfare, superintelligence, and the implications of extreme prosperity.

Now, in teaching this "how" rather than the "what", how would you personally summarize this "how" thinking you are encouraging for our listeners when it pertains to those topics?

Byron:
What I said earlier about, most of your thinking almost always will be about the past and how things turned out the way they did. And so, there was a time 60, 70,000 years ago where we think humans got down to 1,000 mating pairs, and that was it. And nobody would have bet on us. And somehow, somehow, we became the preeminent species. That is really shocking. You have to say, how did that happen? How did we do that? And also, you have to realize that we were a timid little species, because it served us well. It would be better to see a rock and mistake it for a bear and run away. Like, aww, it's a bear, run, run. Than to be an optimist and to see this thing and say, it's probably just a rock, I'm sure it's a rock, and then get eaten by the bear. So, we are like these timid, it could be a bear, that could be a bear. So, we have this innate fear of anything new, we have an innate fear of all these things in the future, and at some point, you just own that and recognize that. You go, my tendency is to be afraid of this.

If somebody had gone back in time 25 years from today, to the beginning of the internet, the consumer internet, right? And showed somebody a browser and said, you know what, in the future billions of people are going to use this thing. What do you think that's going to do to jobs? That person, if they were farsighted, would say, well, I bet that it's going to be really hard on the newspapers. They will probably go out of business. And the directories, the printed directories of phone numbers, that's gone. And the travel agents, they're going. And the stockbrokers, no more, we don't need them. And the stores, people are going to start buying their stuff online. And every single thing you would have listed would have been bad for jobs. But that person would never say, oh my gosh, there is going to be Etsy, eBay, Twitter, Facebook, Google, Amazon, Uber, Airbnb, all of it, a million new companies. And that's always the challenge. It's easy to see what it's going to destroy, and that's usually right. It's impossible to see what it's going to create. You have to have faith that increasing people's productivity is always good.

And if you don't think that, then you should suggest everybody worked with one arm tied behind their back. Because what would happen? You would need a lot more people to do anything. So, you just created a bunch of jobs. However, their productivity is so low, they don't pay very much. On the other hand, if you could give everybody a third arm and increase their productivity, that's good. And I try to always stick to the basics and think of the underlying dynamic and what's really going on.

I used to get in disagreements with people on my podcast about the effect on jobs. And now, I just ask a real basic question, which is, can you name one job that's been eliminated by technology in the last five years? Just one? I mean, I can't.

Lauren:
Yeah, on the spot it's quite hard to actually come up with one. I'm sure there is.

Byron:
But that's the thing, if it were killing all that stuff, then we would see it. And you can't name one, can't name one. They've been talking about truck drivers for, I don't know how long. And the fact that they always say x-ray technicians, that just tells you that it's like the one job where it could actually happen. But it is easier to be afraid of it, because it's easy to see what would be destroyed. And then, the media jumps in and is like, will a computer steal your job? It's like, the computer is not stealing your job. It doesn't break in at night and change the HR records. I mean, it's all this language. And what they try to do is convince you that you will not be able to provide for yourself, you will not be able to provide for your family, you're going to be obsoleted, and that's the message. It should be a message of empowerment. If you hold a smartphone in your hand and it has some medical app or something on it, or gardening, to keep it simple, you are effectively like a brilliant gardener, or at least a pretty good one.

Lauren:
That's true.

Byron:
But the technology just empowers you. I don't think it's a conspiracy, why everyone seems to be trying to frighten people. I don't think it's a conspiracy. But I don't understand it. And I don't know why people want to be--

Lauren:
Yeah, I mean, that's two different sort of threads that we could follow. And I think that it's obviously evident that you are very optimistic about the productivity that can be encouraged with the implementation of technologies, especially when we're talking about in particular AI.

The question is, though, how do you drive your own optimism, and really ensure that people can benefit from your own personal optimism and see that technology is what people use to improve their productivity?

Byron:
I don't think I'm reflexively an optimist. I don't think that's my nature. I'm not just pollyannish about everything, oh, it'll all work out. That's not what I think. And so, every day, I put my optimism on trial. Every day I ask if it is warranted. And all I try to do is say, I only know three things. Over the course of the last 10,000 years, things have gone pretty well for us as a species. We are up and to the right, in my parlance. That's the first one. The second one is that technology is going to increasingly amplify what we're able to do, and that's good. And the third one is that people are good, people are good. Most people are good. First of all, I think that's self-evident, because if everybody was out for themselves, we never would have gotten to this point.

Lauren:
That's very true.

Byron:
I will give you a reason. I sold something on eBay very recently. And I packed it very carefully, double-boxed it, and shipped it to the person. And they filed a complaint and said I had not sent them the item, that I had just put a brick in the box and shipped it to them. Which I did not; I did not put a brick in the box and ship it to them. And of course, eBay sided with them and said, well, you can't prove they got it. Which is fine. But what I knew in that moment is that that almost never happens. Because if that ever happened like 5% of the time, it wouldn't work. None of it would work. Credit cards wouldn't work. Nothing would work. So, there is just that dude who has my thing, and he's in the minority. And he always will be. Always will be. Because people, we are social. And we have altruism. And we try to help even perfect strangers.

Lauren:
No, I completely agree with that sort of philosophy and train of thought and outlook on life. And I do believe that it is something that has helped people through a lot of the trials and tribulations that have happened in the past few months. I mean, obviously, globally, there have been a lot of these trying situations for a lot of people in different scenarios. And I think that one key consideration that can hold truth, and there is evidence for it in the development of history, if we look as you mentioned into the past, and if we're looking into the future, is the nature and the intrinsic desire of the majority of human beings to actually do good and not evil.

And I mean, it goes back to the question that I would say we started with: are people machines? And we could then contribute, or collectively argue together, or look at, you know, this one example of whether a machine can distinguish between good and bad and motion that into a positive direction as well.

So, before we sort of close, I want to touch upon Stories, Dice, and Rocks That Think, How Humans Learn to See the Future and Shape It. What's this book about?

Byron:
That book is about what makes people different from animals. I'm really fascinated by that for two reasons. One, why are we so exceptional among all creatures on this planet? There's everything else, and then us. We're basically like aliens to them, right? Why? Why is there just one? Why is there just one? And second, where are the species that are coming up? Where are the bronze age beavers? Where are the industrial age iguanas? Where are the penguin poets? Where are the ones that are kind of like not there yet? What has a dolphin done? Like, what do they have? I know they do not have the internet, or telegrams, or mail, or an alphabet, or writing, or anything. I wanted to understand those two things: why did it happen, and why are we so different? The first thing is, we know that there is something called the future, and we know there is something called the past. And as strange as it may seem, the animals don't. They live in the perpetual present. And they don't have episodic memory. There are fascinating ways that they test this; there may be a few animals that can think into the future hours, maybe. But I spent a lot of ink in the book trying to draw the distinction between us and them. I'm not down on animals at all. I am sure that they feel pain like we do. They are intelligent. They may very well be conscious. All of that. But I am very interested in their cognition. And I do think that their cognition is completely different from ours. And that is the interesting thing.

So, the book begins with that: how are people and animals different? And when you get to that, you say, well, we can see the future. We can imagine the future. We understand that the way we do that is through little stories we tell ourselves every minute. Oh, I can get over there. I can change lanes and do this. And then, that might happen. Or I can get this done. All your life, you are kind of living in this constant assessing of the future. And then, we wanted to do more than that. We wanted to predict the future, where you will know what is going to happen; we were driven to do that. And that's about the invention of probability. And that is dice, that's the middle section. And boy, that sounds like a boring topic. And I promise you it isn't. There is not an equation in the book. It is just storytelling about the letters between Fermat and Pascal, and the things they didn't know. Like, I think this is the coolest thing in the world. Or not the coolest, the most interesting thing, which is, governments used to raise money by selling annuities. So, you would give the government $100, and every year, they would give you $10 until you die, okay? Two guys walk in to buy an annuity, one is 20 and one is 80. Who should have to pay more for the annuity? Well, the young person, because they are going to live longer than the old person, who probably may not make it out of the office, right? They didn't know that. Up until the mid-1600s, there was a belief that everybody had an equal chance of dying every year, after you were no longer an infant. And if you think about it, in a world where death is capricious, a donkey could kick your head and kill you, and a donkey might kick an 80-year-old person's head or a 20-year-old's, you don't know.
But the minute we came up with probability, one person could spend one afternoon walking around the cemetery just writing down the ages of people at death, and they could build an actuarial table and make predictions about when you're going to die. And all of a sudden, you just changed death from capricious to math, and things like that. That's what that book is about.

And then the last section, Rocks That Think. That's a metaphor for computers, which are rocks, silicon chips, that think. That is the part about AI. You know, a great tragedy of our species is that we die and everything we knew dies with us. Very little passes on. Very little, very little. And there is kind of no way around that. But there will be in the future. Not because we live longer, but because we have sensors that are recording all this stuff. Cause and effect, cause and effect, cause and effect. And over time, all the decisions that you make, good and bad, in your life will be combined with those made by billions of other people. And be used to advise people. And therefore, in the future, everybody is going to be wiser than the wisest person that has ever lived. They are going to have the experience of the billions of people before them. That is what that is about.

Lauren:
It sounds fascinating and really enticing. And you also mention in the book that only animals are immortal, don't you?

Byron:
I do because they don't know they're going to die. Yeah, I think that's pretty-- Yeah. That was the price that we had to pay to see the future and the past. To know that there is going to be a future.

Lauren:
Yeah, yeah. That's a poignant ending to the conversation, but that is something that obviously comes, as you mentioned in the book, from Jorge Luis Borges.

Byron:
Yeah. That wasn't me. That was, yeah.

Lauren:
Except for man, all creatures are immortal, for they are ignorant of death. Yeah, but we're living today and we've had a really wonderful discussion. And it's been great to have you here. As I mentioned at the start, between you and me, I have really enjoyed reading your books. I love the way that you write. And I think our listeners would profit from looking at one or many of your writings. You've also got one coming out. What is that about? What's it called? Or are you not allowed to disclose the information?

Byron:
No, I would love to. Do you know what a super organism is? I'm trying to figure out if most of my readers are going to know the term super organism.

Lauren:
Super organism, I have heard it, but I wouldn't be able to give you a definition of what it is.

Byron:
Good to know. So, the idea is, there are bees, honeybees. I used to raise bees, so there is a lot about bees in this book. And the honeybee, that is a creature, an organism. But all the honeybees can come together and make a beehive. And that is an organism too.

Lauren:
Interesting.

Byron:
Super organism. And it's different. Just like your cells don't have a sense of humor, but you do, the hive is a different creature than the bee. For instance, the bee is a cold-blooded creature. It cannot regulate its own body temperature. The beehive is warm-blooded; it holds its body temperature at 97 degrees. The bees work to cool and heat it to keep it there. It is a different animal.

Lauren:
That's incredible.

Byron:
Now, here's the question, do all people form a super organism? That we cannot even perceive. And I call it, Agora, that is the name of the super organism of humans. It's after a Greek word for like the marketplace, where it's noisy, and all the activity, and the government, and the businesses, and the doctors--

Lauren:
Yeah, beautiful analogy.

Byron:
That's us. That's our collective. And so, the question that I ask in this book is, last thing I will say on it, what is Agora? Is it a metaphor for understanding the division of labor? Or is it a system, like a car? Like it can break, but it's not alive. Or is it a creature? A creature is alive, it has consciousness. And if so, what does it want? Why is it here?

Lauren:
Well, I wish you luck with that. It sounds like it's going to be a very enjoyable read as well. And it's something that I can sort of ponder upon and ask myself the question this evening or later in the weekend. One for a debate, I think.

Byron:
Thank you so much for having me.

Lauren:
No, thank you. It's really been enjoyable.

And I want to thank everyone else for listening today, it's been insightful. If you want to learn more about AI, machine learning, and search, then come and take one of our free courses at Learn.Squirro.com.

Lauren Hawker Zafer

Host

Byron Reese

Guest
