The Singularity Is Nearer featuring Ray Kurzweil

iamdtms
32 min read · Jun 2, 2024


His forthcoming book, The Singularity Is Nearer, will be released June 25, 2024.

SXSW 2024

(audience applauding)

- All right.

I’m so excited to be here with you, Ray.

- It’s great to be here.

Great to see everybody together.

- Yeah.
- Beautiful audience.

- So, my favorite thing in
that introduction of you

is that you have been working in AI

longer than any other human alive,

which means, if you live forever,

and we’ll get to that,

you will always have that distinction.

- I think that’s right.

Marvin Minsky was actually my mentor.

If he were alive today,

he would actually have been working in AI for more than 61 years.

We’re gonna bring him back also.

- So, maybe you’ll, I’m not sure

how we’ll count the distinction then.

- [Audience] Louder, louder.

- All right, so we’re gonna fix the audio,

but this is what we’re gonna
do with this conversation.

I’m gonna start out
asking Ray some questions

about where we are today.

We’ll do that for a few minutes.

Then we’ll get into what has to happen

to reach the singularity.

So, the next 20 years.

Then we’ll get into discussion about

what the singularity is, what it means,

how it would change our lives.

And then at the end we’ll
talk a little bit about how,

if we believe this vision of the future,

what it means for us today.

Ask your questions.

They’ll come in, I’ll ask ‘em

as they go in the different
sections of the conversation,

but let’s get cracking.

- Can you hear me?

(audience answers indistinctly)

- You can’t hear, Ray?

(audience answers indistinctly)

Well, this will be recorded.

You guys are gonna all live forever.

There’ll be plenty of time.

It will be fine.

I’m just gonna get started.

I assume the audio will get worked out.

They do a fabulous job here at South by.

- I think they should be
able to hear me and you.

(audience laughing)

- All right, we got
this over on the right?

(audience applauding)

Audio engineers, are we good to go?

We’re good to go, all right.

All right, first question, Ray.

So, you’ve been working
in AI for 61 years?

- Oh wait, can you hear me?

- [Audience] No.

- That’s not.

- So, everybody in the front can hear you,

but nobody in the back can hear you.

- Can you hear me now?

- [Audience] Yes.
- Okay.

- All right.
- I’ll speak louder.

- First question, so you’ve been living

in the AI revolution for a long time.

You’ve made lots of predictions,

many of which have been
remarkably accurate.

We’ve all been living in

a remarkable two year transformation

with large language
models a year and a half.

What has surprised you
about the innovations

in large language models and
what has happened recently?

- Well, I did finish this book a year ago,

and didn’t really cover
large language models.

So, I delayed the book to cover that.

But I was expecting this to happen

like a couple of years later.

I mean, I made a prediction in 1999

that it would happen by 2029,

and we’re not quite
there yet, but we will.

But it looks like it’s maybe

a year or two ahead of schedule.

So, that was maybe a bit of a surprise.

- Wait, you predicted back in 1999

that a computer would pass
the Turing Test in 2029.

Are you revising that to
something closer to today?

- No, I’m still saying 2029.

The definition of the
Turing Test is not precise.

We’re gonna have people claiming

that the Turing Test has been solved

and people are saying that

GPT-4 actually passes it, some people.

So, it’s gonna be like
maybe two or three years

where people start claiming

and then they continue to claim

and finally, everybody will accept it.

So, it’s not like it happens in one day.

- But you have a very specific definition

of the Turing Test.

When do you think we’ll
pass that definition?

- Well, the Turing Test is
actually not that significant,

’cause that just means

a computer will pass for a human being.

And what’s much more important is AGI,

artificial general intelligence,

which means that it can
emulate any human being.

So, you have one computer,

and it can do everything
that any human being can do,

and that’s also 2029.

It all happens at the same time.

But nobody can do that.

I mean, just take an average
large language model today.

You can ask it anything

and it will answer you
pretty convincingly.

No human being can do all of that.

And it does it very quickly.

It’ll write a very nice
essay in 15 seconds

and then you can ask it again
and it’ll write another essay

and no human being can
actually perform at that level.

- Right, so you have to
dumb it down to actually

have a convincing Turing Test.

- [Ray] To have a Turing Test
you have to dumb it down.

- Yeah, let me ask the first
question from the audience

since I think it’s quite
relevant to where we are,

which is Brian Daniel.

Is the Kurzweil Curve still accurate?

- [Ray] Say again?

- [Nick] Is the Kurzweil
Curve still accurate?

- Yes, in fact it’s, can I see that?

- [Nick] Let’s pull the
slides up. First slide.

- [Ray] So, this is an
80-year track record.

This is an exponential growth.

A straight line on this curve
means exponential growth.
If it was sort of exponential,

but not quite, it would curve.

This is actually a straight line.

It started out with a computer

that did 0.0000007 calculations

per second per constant dollar.

That’s the lower left hand corner.

At the upper right hand corner,

it’s 65 billion calculations per second

for the same amount of money.

So, that’s why large language models

have only been feasible for two years.

We actually had large
language models before that,

but they didn’t work very well.

And this is an exponential curve.

Technology moves in an exponential curve.

We see that, for example,
having renewable energy

come from the Sun and wind,

that’s actually an exponential curve.

We’ve decreased the price by 99.7%.

We’ve multiplied the amount of energy

coming from solar energy a million fold.

So, this kind of curve really

directs all kinds of technology.

And this is the reason
that we’re making progress.

I mean, we knew how to do large
language models years ago,

but we’re dependent on this
curve, and it’s pretty amazing.

It started out increasing relay speeds,

then vacuum tubes, then
integrated circuits,

and each year it makes the
same amount of progress,

approximately regardless of
where you are on this curve.

We just added the last point.

And it’s again, we basically multiply this

by two every 1.4 years.

And this is the reason that
computers are exciting,

but it actually affects
every type of technology.

And we just added the last
point like two weeks ago.
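As a sanity check (my own arithmetic, using only the two endpoint figures Ray quotes for the chart), the 80-year track record is consistent with the doubling time he cites:

```python
import math

# Endpoints of the price-performance chart as quoted on stage:
start = 0.0000007   # calculations/sec per constant dollar, lower-left corner
end = 65e9          # calculations/sec per constant dollar, upper-right corner
years = 80          # span of the track record

doublings = math.log2(end / start)   # how many doublings the span implies
doubling_time = years / doublings    # years per doubling
print(f"{doublings:.1f} doublings, one every {doubling_time:.2f} years")
```

Which lands almost exactly on the "multiply by two every 1.4 years" figure he gives a moment later.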

- Okay. All right, so let
me ask you a question.

You know, you wrote a book
about how to build a mind.

You have a lot about how the
human mind is constructed.

A lot of the progress in AI,
AI systems are being built

on what we understand about
neural networks, right?

So, clearly our understanding
of this helps with AI.

In the last two years,

by watching these large language models,

have we learned anything
new about our brains?

Are we learning about

the insides of our skulls as we do this?

- It really has to do with
the amount of connections.

The brain is actually
organized fairly differently.

The things near the eye, for
example, deal with vision.

And we have different ways of implementing

different parts of the brain
that remember different things.

We actually don’t need that.

In a large language model, all
the connections are the same.

We have to get the connections
up to a certain point.

If it approximately matches
what the brain does,

which is about a trillion connections,

it will perform kind of like the brain.

We’re kind of almost at that point.

- [Nick] Wait, so you think.

- GPT-4 is 400 billion.

The next ones will be a trillion or more.
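Taking the two figures quoted here at face value, and assuming connection counts ride the same doubling-every-1.4-years curve (my extrapolation, not a claim from the talk), the gap from 400 billion to a trillion is small:

```python
import math

# Hypothetical extrapolation from the numbers quoted on stage: if
# connection counts double every ~1.4 years, how long from GPT-4's
# 400 billion connections to the trillion of the brain?
gpt4_connections = 400e9
brain_connections = 1e12
years_needed = 1.4 * math.log2(brain_connections / gpt4_connections)
print(round(years_needed, 1))  # just under two years
```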

- So, the construction of these models,

they are more efficient
in their construction

than our brains are?

- We make them to be as
efficient as possible,

but it doesn’t really matter
how they’re organized.

And we can actually
create certain software

that will actually expand
the amount of connections

more for the same amount of computation.

But it really has to do
with how many connections

a particular computer
is responsible for.

- So, as we approach AGI,

we’re not looking for a new understanding

of how to make these
machines more efficient?

The transformer architecture
was clearly very important.

We can really just get
there with more compute.

- But the software and the
learning is also important.

I mean, you could have
a trillion connections,

but if you didn’t have
something to learn from,

it wouldn’t be very effective.

So, we actually have to be
able to collect all this data.

So, we do it on the web and so on.

I mean, we’ve been
collecting stuff on the web

for several decades.

That’s really what we’re
depending on to be able

to train these large language models.

And we shouldn’t actually call
them large language models,

because they deal with
much more than language.

I mean, it’s language,

but you can add pictures,

you can add things that affect disease

that have nothing to do with language.

In fact, we’re using now simulated biology

to be able to simulate different
ways to affect disease.

And that has nothing to do with language,

but they really should be
called large event models.

- Do you think there’s
anything that happens

inside of our brains that can’t be captured

by computation and by math?

- No. I mean, what would that be? I mean.

(Ray and audience laughing)

- Okay, quick poll of the audience.

Raise your hand if you think
there’s something in your brain

that cannot be captured by
computation or math, like a soul.

All right, so convince them
that they’re wrong, Ray.

- I mean, consciousness is very important,

but it’s actually not scientific.

There’s no way I could slide somebody into a machine

and a light will go on:

Oh, this one’s conscious.

No, this one’s not.

It’s not scientific,

but it’s actually extremely important.

And another question, why am I me?

How come I’m conscious of what happens to me,

and not conscious of what happens to you?

These are deeply mysterious things,

but it’s really not scientific.

So, Marvin Minsky, who was my
mentor for 50 years, he said,

it’s not scientific and therefore

we shouldn’t bother with it.

And any discussion of consciousness,

he would kind of dismiss,
but he actually did use it.

His reaction to people
was totally dependent

on whether he felt they
were conscious or not.

So, he actually did use that.

But it’s not something
that we’re ignoring,

because there’s no way to tell

whether something’s conscious.

And that’s not just something

that we don’t know and we’ll discover.

There’s really no way to tell

whether or not something’s conscious.

- What do you mean, like
this is not conscious

and you know, the gentleman

sitting right there is conscious.

I’m pretty confident.

- How do you prove that?

I mean we kind of agree

that humans are conscious.

Some humans are conscious, not all humans.

(audience laughing)

But how about animals? We
have big disagreements.

Some people say animals are not conscious

and other people think
animals are conscious.

Maybe some animals are
conscious, and others are not.

There’s no way to prove that.

- Okay, I wanna run down
this consciousness question,

but before we do that, I wanna make sure

I understood your
previous answer correctly.

So, the feeling I get of being in love

or the feeling, any emotion that I get

could eventually be represented

in math in a large language model?

- Yeah, I mean certainly the behavior,

the feelings that you have,

if you are with somebody that you love.

It’s definitely dependent
on what the connections do.

You can tell whether or
not that’s happening.

- All right, and back to,

is everybody here convinced?

- [Audience] No.

- Not entirely.

All right, well close enough.

So, you don’t think that it’s worth

trying to define consciousness?

I mean, you spend a
fair amount in your book

giving different arguments
about what consciousness means,

but it seems like your argument on stage

that we shouldn’t try to define it?

- There’s no way to actually prove it.

I mean, we have certain agreements.

I agree that all of you are conscious,

you actually made it into this room.

So, that’s a pretty good
indication that you’re conscious.

But that’s not a proof.

And there may be human beings

that don’t seem quite
conscious at the time.

Are they conscious or not?

And animals, I mean I think elephants

and whales are conscious,

but not everybody agrees with that.

- So, at what point can we then,

essentially how long
will it be until we can,

essentially download the
entire contents of your brain

and express it through
some kind of a machine?

- That’s actually an important question,

’cause we’re gonna talk about longevity.

We’re gonna get to a point

where we have longevity escape velocity.

And it’s not that far away.

I think if you’re diligent,

you’ll be able to achieve that by 2029.

That’s only five or six years from now.

So, right now you go through a year,

use up a year of your longevity,

but you get back from scientific progress

right now about four months.

But that scientific progress
is on an exponential curve.

It’s gonna speed up every year.

And by 2029, if you’re diligent,

you’ll use up a year of your
longevity with a year passing.

But you’ll get back a full year.

And past 2029, you’ll get
back more than a year.

So, you’ll actually go backwards in time.
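That give-back arithmetic can be sketched as a toy model (my own framing of the numbers he quotes: four months returned per year today, growing exponentially to a full twelve months by 2029):

```python
def giveback_months(year, base_year=2024, base=4.0,
                    target_year=2029, target=12.0):
    """Months of longevity science gives back in a given calendar year,
    assuming smooth exponential growth between the two quoted points."""
    growth = (target / base) ** (1 / (target_year - base_year))  # ~1.25x/year
    return base * growth ** (year - base_year)

for year in (2024, 2029, 2032):
    net = 12.0 - giveback_months(year)  # net months of longevity used up
    print(year, round(net, 1))          # positive = aging, negative = "going backwards"
```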

Now, that’s not a
guarantee of infinite life

because you could have a 10-year-old

and you could compute his
longevity as many, many decades

and he could die tomorrow.

But what’s important about

actually capturing
everything in your brain,

we can’t do that today,

and we won’t be able to
do that in five years.

But you will be able to do
that by the singularity,

which is 2045.

And so, at that point you can
actually go inside the brain

and capture everything in there.

Now, your thinking is
gonna be a combination

of the amount you get from computation,

which will add to your thinking.

And that’s automatically captured.

I mean, right now,
anything that you have

in a computer is
automatically captured today.

And the kind of additional
thinking we’ll have

by adding to our brain
that will be captured.

But the connections that
we have in the brain

that we start with, we’ll still have those.

That’s not captured today,

but that will be captured in 2045.

We’ll be able to go inside the brain

and capture that as well.

And therefore, we’ll actually
capture the entire brain,

which will be backed up.

So, even if you get wiped out,

you walk into a bomb and it explodes,

we can actually recreate everything

that was in your brain by 2045.

That’s one of the implications
of the singularity.

Now, that doesn’t absolutely guarantee it,

because I mean the world could blow up,

all the things that contain
computers could blow up,

and so you wouldn’t be
able to recreate that.

So, we never actually get to a point

where we absolutely guarantee
that you live forever.

But most of the things
that right now would prevent

capturing that will be
overcome by that time.

- There’s a lot there, Ray.

Let’s start with escape velocity.

So, do you think that
anybody in this audience,

in their current biological body

will live to be 500 years old?

- You’re asking me?

- Yeah.

- Absolutely, I mean, if you’re gonna

be alive in five years,

and I imagine all of you
will be alive in five years.

- Oh okay, if they’re
alive in five years,

they will likely live to be 500 years old?

- If they’re diligent.

And I think the people in this
audience will be diligent so.

- Wow, all right.

Well, you can drink whatever you want

as long as you don’t get run over tonight,

’cause you don’t have
to worry about decline.

(audience laughing)

All right, so let me ask you a question.

I wanna get, we’re gonna
spend a lot of time

on what the singularity is,

what it means, and what it’ll be like.

But I wanna ask some questions
that’ll lead us up there.

So, I’m gonna take this question

from Mark Sternberg
and modify it slightly.

In the timeframe where AI,

or sufficiently sophisticated
computers in your argument,

can do everything that
the human brain can do.

What will they not be able
to do in the next 10 years?

- Well, one thing has to
do with being creative.

And some people go, they’ll be able

to do everything a human can do,

but they’re not gonna be
able to create new knowledge.

That’s actually wrong,

because we can simulate,
for example, biology.

And the Moderna vaccine for example,

we didn’t do it the usual way,

which is somebody sits down and thinks,

well, I think this might work.

And then they try it out.

It takes years to try it
out in multiple people

and it’s one person’s idea
about what might work.

They actually listed
everything that might work

and there were actually several billion

different mRNA sequences and
they said let’s try them all.

And they tried every single
one by simulating biology

and that took two days.

So, one weekend they tried out

several billion different possibilities

and then they picked the one

that turned out to be the best.

And that actually was the
Moderna vaccine up until today.

Now, they did actually test it on humans.

We’ll be able to overcome that as well,

’cause we’ll be able to test

using simulated biology as well.

They actually decided to test it.

It’s a little bit hard to
give up testing on humans.

We will do that.

So, you can actually try
out every single one,

pick the best one, and
then you can try out that

by testing on a million simulated humans

and do that in a few days as well.

And that’s actually the future

of how we’re gonna create
medications for diseases.

And there’s lots of things
going on now with cancer

and other diseases that are using that.

So, that’s a whole new method.

This is actually starting now.

It started right with the Moderna vaccine.

We did another cure for a mental disease

that’s actually now in stage three trials.

That’s gonna be how we create
medications from now on.
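The workflow he describes, enumerate every candidate, score each one in simulation, keep the best, is essentially an exhaustive search. A minimal sketch (the candidate pool and the scoring function are placeholders I invented; a real pipeline would score actual mRNA sequences in a biochemical simulator):

```python
import random

def score_by_simulation(candidate):
    # Placeholder for a biochemical simulator: returns a deterministic
    # pseudo-random "efficacy" score for each candidate ID.
    return random.Random(candidate).random()

# Stand-in for the billions of mRNA sequences tried over one weekend.
candidates = range(100_000)

# Try every single one and keep the best, the brute-force pattern
# described above.
best = max(candidates, key=score_by_simulation)
print(best, round(score_by_simulation(best), 4))
```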

- But what are the frontiers?

What can we not do?

- So, that’s where
computers being creative

and it’s not just
actually trying something

that occurs to it.

It makes a list of
everything that’s possible

and tries it all.

- Is that creativity or
is that just brute force

with maximum capability?

- It’s much better than any
other form of creativity.

And yes, it’s creative,

’cause you’re trying out
every single possibility

and you’re doing it very quickly

and you come up with something
that we didn’t have before.

I mean, what else would creativity be?

- All right, so we’re gonna

cross the frontier of creativity.

What will we not cross?

What are the challenges that will be

outstanding in the next 10 years?

- Well, we don’t know everything,

and we haven’t gone through this process.

It does require some creativity
to imagine what might work.

And we have to also be able to simulate it

in a biochemical simulator.

So, we actually have to figure that out

and we’ll be using people
for a while to do that.

So, we don’t know everything.

I mean, to be able to do everything

a human being can do is one thing,

but there’s so much we don’t
know that we wanna find out.

And that requires creativity.

That will require some
kind of human creativity

working with machines.

- All right, let’s go back
to what’s gonna happen

to get us to the singularity.

So, clearly we have the chart

that you showed on the power of compute.

It’s been very steady, you
know, moving straight up,

you know, on a logarithmic
scale on a straight line.

There are a couple of other elements

that you think are necessary
to get to the singularity.

One, is the rise of nanobots

and the other is the rise
of brain machine interfaces.

And both of those have
gone more slowly than AI.

So, convince the audience that.

- Well, it would be slow,

because anytime you affect the human body,

a lot of people are gonna
be concerned about it.

If we do something with computers,
we have a new algorithm,

or we increase the speed of it,

nobody really is concerned about it.

You can do that.

Nobody cares about any dangers in it.

I mean that’s the reality.

- [Nick] Well, there’s some dangers

that people care about, yes.

- Yeah, but it goes very, very quickly.

That’s one of the reasons it goes so fast.

But if you’re affecting the body,

we have all kinds of concerns

that it might affect it negatively.

And so, we wanna actually
try it on people.

- But the reason brain machine interfaces

haven’t moved in an exponential curve

isn’t just because, you know,

lots of people are concerned
about the risks to humans.

I mean, as you explain in the book,

they just don’t work
as well as they could.

- If we could try things out
without having to test it,

it would go a lot faster.

I mean, that’s the reason it goes slowly.

There’s some thought now
that we could actually

figure out what’s going
on inside the brain

and put things into the brain

without actually going inside the brain.

We wouldn’t need
something like Neuralink.

We could just, I mean there’s some tests

where we can actually tell
what’s going on in the brain

without actually putting
something inside the brain.

And that might actually be a way

to do this much more quickly.

- But your prediction
about the singularity,

depends, maybe I’m reading it wrong,

not just on the continued
exponential growth of compute,

but on solving this
particular problem too, right?

- Yes, because we wanna increase

the amount of intelligence
that humans can command.

And so, we have to be able

to marry the best computers
with our actual brain.

- And why do we have to do that?

Because like right now, here I go,

I have my phone in some ways
this augments my intelligence.

It’s wonderful.

- Yeah, but it’s very slow.

I mean, if I ask you a question,

you’re gonna have to type it in,

or speak it and it takes a while.

I mean, I ask a question

and then people fool
around with their computer.

It might take 15 seconds or 30 seconds.

It’s not like it just goes
right into your brain.

I mean, these are very useful.

These are brain extenders.

We didn’t have these a little while ago.

Generally, in my talks, I ask people,

“Who here has their phone?”

I’ll bet here maybe
there’s one or two people who don’t,

but everybody else here has their phone.

That wasn’t true five years ago,

definitely wasn’t true 10 years ago.

And it is a brain extender,

but it does have some speed problems.

So, we wanna increase that speed.

A question could just come
up where we’re talking

and the computer would instantly
tell you what the answer is

without you having to fool
around with an external device,

and that’s almost feasible today.

And something like that
would be helpful to do this.

- But could you not get a lot of the good

that you talk about if we just kept.

The problem with connecting
our brains to the machines

is suddenly you’re in this whole world,

these complicated privacy issues

where stuff is being injected in my brain,

stuff in my brain is, you
know, is going elsewhere.

Like you’re opening up
a whole host of ethical,

moral, existential problems.

Can’t you just make the
phones a lot better?

- Well, that’s the idea
that we can do that

without having to go inside your brain,

but be able to tell what’s
going on in your brain

externally without going inside the brain,

you know, with some kind of device.

- All right, well, let’s
keep moving into the future.

So, we’re moving into the future.

We have exponential growth of compute.

We solve a way of, you
know, ideally figuring out

how to communicate
directly with your brain

to speed things up.

Explain why nanobots are essential

to your vision of where we’re going.

- Well, if you really wanna tell

what’s going on inside the brain,

you’ve gotta be able to go

at the level of the particles in the brain

so we can actually tell
what they’re doing,

and that’s feasible.

We can’t actually do it, but
we can show that it’s feasible.

And that’s one possibility.

We’re actually hoping
that you could do this

without actually affecting
the brain at all.

- Okay. All right, so we’re pushing ahead.

We’ve got nanobots running
around inside of our brains.

They’re understanding our head,

they’re extracting thoughts,
they’re inputting thoughts.

Let’s go to this nice question,

which fits in lovely
from Louise Condraver.

What are the five main ethical questions

that we will face as that happens?

- Is four enough?

- Four is fine.

There might even be six, Ray,
but you can give us four.

- I mean we’re gonna
have a lot more power,

if we can actually with our
own brain control computers.

Does that give people too much power?

Also, I mean right now we talk about

having a certain amount of
value based on your talent.

This will give talent to people

who otherwise don’t have talent.

And talent won’t be as important,

because you’ll be able to gain talent

just by merging with the right
kind of large language model,

or whatever we call them.

And it also seemed kind of arbitrary

why we would give more power

to somebody who has more talent,

’cause they didn’t create that talent,

they just happened to have it.

But everybody says we should give

somebody who has talents
in an area more power.

This way you’d be able to gain talent,

as in the “Matrix”.

You could learn to fly a helicopter

just by downloading the right software

as opposed to spending a
lot of time doing that.

Is that fair or unfair?

I mean I think that would fall

into the ethical challenge area.

And it’s not like we get
to the end of this and say,

okay, this is finally what
the singularity is all about

and people can do certain things

and they can’t do other
things, but it’s over.

We will never get to that point.

I mean this curve is gonna continue.

The other curve, it’s gonna
continue indefinitely.

And we’ve actually shown, for example,

with nanotechnology we
can create a computer

where a one-liter computer would actually

match the amount of power that
all human beings today have.

Like 10 to the 10th persons

would all fit into one one-liter computer.

Does that create ethical problems?

So, I mean a lot of the
implications kind of run against

what we’ve been assuming
about human beings.

- Wait, on the talent question,
which is super interesting.

Do you feel like everybody,

when we get to 2040 will
have equal capacities?

- I think we’ll be more different,

because we’ll have different interests

and you might be into some
fantastic type of music

and I might be into some kind of

literature or something else.

I mean we’re gonna have
different interests

and so, we’ll excel at certain things

depending on what your interests are.

So, it’s not like we all have
the same amount of power,

but we all have fantastic power

compared to what we have today.

- And if you’re in Texas where
there are no regulations,

you’ll probably get it first

instead of you in Massachusetts.

- Exactly, yeah.

(audience laughing)

- Let me ask you another ethical question,

while we’re on this one.

So, about a few minutes ago
you mentioned the capacity to,

you know, replicate someone’s
brain and bring ’em back.

So, let’s say I do that with my father.

Passed away six years ago sadly.

I bring him back and I’m able

to create a mind and a body
just like my father’s, right?

It’s an exact, perfect replica,
all of his thoughts.

What happens to all the
bills that he owed when he died?

Because like that’s a lot of money

and a lot of bill collectors call me.

Do we have to pay those
off or are we good?

- Well, we’re doing something
like that with my daughter

and you can read about this in her book

and it’s also in my book.

We collected everything
my father had written.

He died when I was 22.

So, he’s been dead
for more than 50 years.

And we fed that into
a large language model

and basically, asked it the question,

of all the things he ever wrote,

what best answers this question?

And then you could put
any question you want

and then you could talk to him.

You’d say something,

you’d then go through
everything he ever had written

and find the best answer

that he actually wrote to that question.

And it actually was a
lot like talking to him.

You could ask him what
he liked about music.

He was a musician.

He actually liked Brahms the best,

and it was very much like talking to him.

And I reported on this in my book

and Amy talks about this in her book.

And Amy actually asked the question,

could I fall in love with this person

even though I’ve never met him?

And she does a pretty good job.

I mean you really do fall
in love with this character

that she creates even
though she never met him.

So, we can actually,
with today’s technology,

do something where you can
actually emulate somebody else.

And I think as we get further on

we can actually do that
more and more responsibly

and more and more that really
would match that person

and actually, emulate
the way he would move,

and so on, his tone of voice.
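The retrieval step Ray describes, of everything he ever wrote, find the passage that best answers the question, can be sketched with simple bag-of-words similarity (the passages below are invented stand-ins; the real project used his father's actual writings, and a modern build would likely use embedding search):

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing words, so this works on sparse vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented stand-ins for a corpus of the father's writings.
passages = [
    "I always loved Brahms best of all the composers",
    "The garden needs watering twice a week in summer",
]

question = "who did you love best of all the composers"
q = vectorize(question)

# "Of all the things he ever wrote, what best answers this question?"
best = max(passages, key=lambda p: cosine(q, vectorize(p)))
print(best)
```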

- And well, you know, my
dad, he loved Brahms too,

particularly those piano trios.

So, if we can solve
the back taxes problem,

we’ll have my dad’s and
your dad’s bots hang out,

it would be great.

- Well, yeah, that’d be cool.

- All right.

(audience laughing)

All right, we got 20 minutes left.

I wanna get to the thing
that I most wanna understand,

’cause it’s something that’s,

by the way, this book is wonderful.

I think you guys are all gonna get

signed copies of it when it comes out.

It’s truly remarkable, as
are all of Ray’s books,

whether you agree or disagree,

they’ll definitely make you think more.

One of the things that I don’t
think you do in this book

is describe what a day
will be like in 2045

when we’re all much more intelligent.

So it’s 2045, we’re all a
million times as intelligent.

I wake up, do I have breakfast
or do I not have breakfast?

- Well, the answer to that question is

kind of the same as it’s now,

but first of all, the reason
it’s called a singularity

is because we don’t really
fully understand that question.

Singularity is borrowed from physics.

Singularity in physics is
where you have a black hole

and no light can escape.

And so, you can’t actually
tell what’s going on

inside the black hole.

And so, we call it a singularity,
a physical singularity.

So, this is a historical singularity,

but we’re borrowing that term from physics

and call it a singularity,

because we can’t really
answer the question.

If we actually multiply our
intelligence a million fold,

what’s that like?

It’s a little bit like asking a mouse,

gee, what would it be like,

if you had the amount of
intelligence of this person?

The mouse wouldn’t really
even understand the question.

It does have intelligence,

has a fair amount of intelligence,

but it couldn’t understand that question.

It couldn’t articulate an answer.

That’s a little bit what
it would be like for us

to take the next step in
intelligence by adding

all the intelligence that the
singularity would provide.

- Wait, wait, I just wanna
make sure I understand.

- But I’ll give you one answer.

I said if you’re diligent, you’ll achieve

longevity escape velocity
in five or six years.

And if we wanna actually
emulate everything

that’s going on inside a brain,

let’s go out a few more years.

Let’s say 2040, 2045.

Now, there’s a lot, you talk to a person,

they’ve got all the connections
that they had originally,

plus all these additional connections

that we add through having
them access computers

and that becomes part of their thinking.

So, suppose that
person, like, blows up

or something happens to their mind.

You definitely can recreate everything

that’s of computer origin.

’Cause we do that now,
anytime we create anything

with a computer, it’s backed up.

So, if the computer goes away,

you’ve got the backup
and you can recreate it.

Maybe somebody says, okay,

but what about their thinking
in their normal brain

that’s not done with computers?

We don’t have any way
of backing that up.

When we get to the singularity in 2045,

we’ll be able to back that up as well,

because we’ll be able to figure out,

we’ll have some ways of
actually figuring out

what’s going on in that
sort of biological brain.

And so, we’ll be able to back
up both their normal brain

as well as the computer addition.

And I believe that’s feasible by 2045.

- In your vision of it.

- So, you can back up their entire brain.

Now, that doesn’t guarantee,

I mean the whole world could blow up

and you lose all the data centers.

And so, it’s not absolute guarantee.

- That’ll be a shame,

but what I don’t understand is

will we even be fully distinct people

if we’re sharing memories

and we’re all uploading
our brains to the cloud

and we’re getting all this
information coming back

directly into our neocortex,
are we still distinct?

- Yes, but we could also

find new ways of communicating.

So, the computers that extend my brain

interact with computers
to extend your brain.

We could create something
that’s like a hybrid or not

and it would be up to our own decision

as to whether or not to do that.

So, there’ll be some new
ways of communicating.

- Let me ask another question about this.

This is what, when I was reading the book,

this is where I kept getting stuck.

You are extremely optimistic, right?

You’re optimistic about
where we are today.

You’re optimistic that technology

has been a massive force for good.

You’re optimistic that it’ll continue

to be a massive force for good.

Yet, there is a lot of uncertainty

in the future you were describing.

- Well, first of all, I’m
not necessarily optimistic

about the things that can go wrong.

We had things that can go
wrong before we had computers.

When I was a child, atomic
weapons were created

and people were very
worried about an atomic war.

And we would actually get under our desk

and put our hands behind our head

to protect us against an atomic war.

And it seemed to work, actually.

We’re still here,

But we actually had two weapons
that went off in anger

and killed a lot of people within a week.

And if you’d ask people, what’s the chance

that we’re gonna go another 80 years

and this will never happen again.

Nobody would have said that was true,

but it has happened.

Now, that doesn’t mean it’s
not gonna happen next week,

but anyway, that’s a great danger.

And I think that’s a much greater
danger than computers are.

Yes, there are dangers,

but the computers will
also be more intelligent

to avoid those kinds of dangers.

Yes, there’s some bad people in the world,

but I mean, go back 80, 90 years,

we had 100 million people die in Asia

and Europe from World War II.

We don’t have wars like that anymore.

We could, and we certainly

have the atomic weapons to do that.

And you could also imagine computers

could be involved with that.

But if you actually look,

and this goes right through war and peace.

First of all, if you look
at my lineage of computers

going from a tiny fraction of
one calculation to 65 billion,

that’s a 20 quadrillion fold increase

that we’ve achieved in 80 years.

And look at this,

US personal income, shown
in constant dollars.

So, this has nothing to do with inflation.

And this is the average
income in the United States.

It’s multiplied by about a hundred fold

and we live far more successfully,

people say,

oh, things were great 100
years ago. They weren’t.

And you can look at this chart,

and lots of, I’ve got
50 charts in the book,

which are the kind of progress we’ve made.

Number of people that live in dire poverty

has gone down dramatically.

And we actually did a poll
where they asked people,

people that live in poverty,
has it gone up or down?

80% said it’s gone up.

But the reality is it’s
actually fallen by 50%

in the last 20 years.

So, what we think about the past,

is really the opposite of what’s happened.

Things have gotten far
better than people think

and computers are gonna
make things even better.

I mean, just the kind
of things you can do now

with a large language model
didn’t exist two years ago.

- Do you ever worry, and
take it as a given

that computers have made things better,

take it as a given that personal
income will keep going up.

Do you ever worry it’s
just coming too quickly

and it’ll be better if maybe
the slope of the Kurzweil Curve

was a little less steep?

- We’ve seen big disruptions in the past.

I mean, talk about what
effect did the railroad have?

I mean, lots of jobs were lost

or even the cotton gin
that happened 200 years ago

and people were quite happy

making money with the cotton gin

and suddenly that was gone
and machines were doing that.

And people say, well,
wait till this gets going,

all jobs will be lost.

And that’s actually what
was said at that time.

But actually, income went up

and more and more people worked.

And if you say, well,
what are they gonna do?

You couldn’t answer that question,

because it was in industries
that nobody had a clue of.

Like for example, all of electronics.

So, things are getting
better even if jobs are lost.

Now, you can certainly point to jobs

like take computer programming.

Google has, I don’t know, 60,000 people

that program computers and
lots of other companies do.

At some point, that’s not
gonna be a feasible job.

They can already code.

Large language models can write code

not quite the way an
expert programmer can.

But how long is that gonna take?

It’s measured in years, not in decades.

Nonetheless, I believe that
things will get better,

because we wipe out jobs,

but we create other ways
of having an income.

And if you actually point to something,

let’s say this machine

and this is being worked
on, can wash dishes.

You just have a bunch of
dishes and it’ll pick the ones

that have to go in the dishwasher

and clean everything else up,

and it will wash the dishes for you.

Would we want that not to happen?

Would we say, well, this is
kind of upsetting things,

let’s get rid of it.

It’s not gonna happen.

And no one would advocate that.

So, we’ll find things to do.

We’ll have other methods
of distributing money

and it’ll continue these kinds of curves

that we’ve seen already.

- It’s kind of remarkable that
we got large language models

before we’ve got robotic dishwashers.

You have grandchildren, you know?

What would you tell a young person?

You know, say they buy in, they agree.

How would you tell them

to best prepare themselves
for what will be,

if you’re correct, a
remarkably different future?

- I’d be less concerned
about what will make money

and much more concerned
about what turns them on.

They love video games and so
they should learn about that.

They should read literature
that turns them on.

Some of that literature in the future

will be created by computers,

and find out what in the world

has a positive effect
on their mental well-being.

- And if you know that your
child or your grandchild,

this gets to one of the questions

that is asked on the screen here.

If you know that someone is gonna live

for hundreds of years, as you predict,

how does that affect the way,

certainly it means they
shouldn’t retire at 65.

But what else does it change

about the way they should
think about their lives?

- Well, I talk to people and they say,

“Well, I wouldn’t wanna live past 100.”

Or maybe they’re a little
more ambitious to say,

“I don’t wanna live past 110.”

But if you actually look at

when people decide they’ve had enough

and they don’t wanna live
anymore, that never, ever happens

unless these people are
in some kind of dire pain.

They’re in physical
pain, or emotional pain,

or spiritual pain, or whatever,

and they just cannot
bear to be alive anymore.

Nobody takes their own life other than that.

And if we can actually
overcome many kinds of

physical problems and
cancer’s wiped out and so on,

which I expect to happen,

people will be even that
much more happy to live

and they’ll wanna continue
to experience tomorrow,

and tomorrow’s gonna be better and better.

These kinds of progress,
it’s not gonna go away.

So, people will want to live,

you know, unless they’re in dire pain.

But that’s what the whole sort

of medical profession is about,

which is gonna be greatly
amplified by tomorrow’s computers.

- Can I ask you a great question

that has popped up on the screen?

This is from Colin McCabe.

“AI is a black box, nobody
knows how it was built.

How do you show that AI
is trustworthy to users

who want to trust it,
adopt it, and accept it?

Particularly, if you’re gonna upload it

directly into your brain?”

- Well, it’s not true that
nobody knows how they work.

- Right. Most people who are
using a large language model

don’t know what data sets went into it.

There are things that happen
in the transformer layer

that even the architects don’t understand.

- Right, but we’re gonna learn
more and more about that.

And in fact, how computers work will be,

I think a very common type of talent

that people want to gain.

And ultimately, we’ll have
more trust of computers.

I mean, large language
models aren’t perfect

and you can ask it a question

and it can give you
something that’s incorrect.

I mean, we’ve seen that just recently.

The reason we have these computers

give you incorrect information is

it doesn’t have the
information to begin with

and it actually doesn’t
know what it doesn’t know.

And that’s actually
something we’re working on

so that it knows, well, I don’t know that.

That’s actually very good,
if it can actually say that.

’Cause right now it’ll find
the best thing it knows

and if it’s never trained
on that information

and there’s nothing in
there that tells you,

it’ll just give you the best guess,

which could be very incorrect.

And we’re actually learning to be able to

figure out when it knows
and when it doesn’t know.

But ultimately, we’ll have
a pretty good confidence

when it knows and when it doesn’t know.

And we can actually rely on what it says.

- So, your answer to the question is,

A, we will understand more,

and B, they’ll be much more trustworthy,

so it won’t be as risky
to not understand them?

- Right.
- Okay.

You’ve spent your life making predictions,

some of which, like the Turing Test,

you’ve held onto ’em and
been remarkably accurate.

As you’ve moved from an overwhelming optimist

to now slightly more of a pessimist.

What is your prediction?

- Well, my books have always had a chapter

on how these things can go wrong.

- Tell me a prediction that
you are chewing over right now,

but you’re not sure
whether you wanna make it

or whether you don’t wanna make it.

- I mean there are well-known
dangers in nanotechnology,

if someone were to create a nanotechnology

that replicates,

if it replicates
everything into paperclips,

turning the entire world into paperclips.

That would not be positive.

- No.

Unless you’re Staples, but then.

- And that’s feasible.

It’d take somebody who’s a little
bit mental to do that,

but it could be done, and we

will have something that
actually avoids that.

So, we’ll have something that can detect

that this is actually turning
everything into paperclips

and destroy it before it does that.

But I mean I have a
chapter in this new book

“The Singularity Is Nearer”

that talks about the kinds
of things that could happen.

- Oh, the most remarkable
part of this book

is he does exactly the
mathematical calculations

on how long it would take nanobots

to turn the world into gray goo

and how long it would take the blue goo

to stop the gray goo, that’s remarkable.

The book will be out soon.

You definitely need to read until the end.

But this leads to a,

maybe let me return to
the question I asked before:

what should young people
think about and be working on?

And you said their passions
and what turns them on.

Shouldn’t they be thinking through

how to design and architect
these future systems

so they’re less likely to turn us

into gray goo or paper clips?

- Yeah, absolutely, yeah.

I don’t know if everybody
wants to work on that but.

- But folks in this room,
right, technologically minded,

you guys should all be working on

not turning us into gray goo, right?

- Yes, that’d be on the list, you know.

- But then that leads to another question,

which is, what will the role of humans be

in thinking through that problem

when they’re only a
millionth, or a billionth,

or a trillionth as
intelligent as machines?

- Say that again.

- So, we’re gonna have these
really hard problems to solve.

- Yeah.
- Right?

Right now we are along with
our machines, you know,

we can be extremely intelligent,

but 10 years from now, 15 years from now,

there will be machines that will be

so much more intelligent than us.

What will the role of humans be

in trying to solve these problems?

- First of all, I see those
as extensions of humans.

And we wouldn’t have them,

if we didn’t have humans to begin with.

And humans have a brain that
can think these things through.

And we have this thumb,

it’s not really very much appreciated,

but like whales and elephants,

actually have a larger brain than we have

and they can probably
think deeper thoughts,

but they don’t have a thumb.

And so, they don’t create technology.

A monkey can create, it
actually has a thumb,

but it’s actually down an inch or so

and therefore it really
can’t grab very well.

So, it can create a
little bit of technology,

but the technology it creates

cannot create other technology.

So, the fact that we have a thumb means

we can create integrated circuits

that can become a large language model

that comes from the human brain.

And it’s actually trained with everything

that we’ve ever thought.

Anything that human beings have thought

that’s been documented,

and it can go into these
large language models.

And everybody can work on these things.

And it’s not true, well,

only certain wealthy people will have it.

I mean, how many people here have phones?

If it’s not 100%, it’s like 99.9%.

And you don’t have to be
kind of from a wealthy group.

I mean, I see people who are homeless

who have their own phone.

It’s not that expensive.

And so, that represents

the distribution of these capabilities.

It’s not something you have to be

fabulously wealthy to afford.

- So, you think that we’re
heading into a future

where we’re gonna live much longer

and we’ll be much more equal?

- Say again?

- Well, you think we’re
heading into a society

where we’ll live much
longer, be wealthier,

but also much more equality?

- Yes, absolutely.

And we’ve seen that already.

- All right. Well, we are at time,

but Ray and I’ll be back
in 2124, 2224 and 2324.

So, thank you for coming today.

Thank you so much.

He is an American treasure.

Thank you, Ray Kurzweil.

(dramatic music)
