We fear AI because we fear ourselves

Josh Sowin
Published in Between Letters
4 min read · May 29, 2017

--

I was walking along a fire road in LA thinking about the end of the world when it came to me: We are AI.

We’re carbon that can think. Our intelligence arose by sheer luck, and with it we’ve come to dominate the entire Earth.

Now we’re trying hard to re-create intelligence using machines.

Thinking carbon creating thinking silicon.

“There are three ways we could bring about our own end,” I thought to myself as I kicked a rock, “killing each other, destroying our planet before we get off it, or creating intelligent machines that kill or enslave us.”

We’re not only the biggest threat to our planet and all the species on it… we’re also the biggest threat to ourselves.

But here’s the thing: when we’re scared about AI, we’re scared of ourselves. Because we are basically AI. And we know our own history and tendencies.

Our AI created itself.

No one programmed our intelligence (at least as far as we can prove) — we were incredibly lucky. So lucky that we can’t even comprehend the odds against us. Mathematically we shouldn’t have happened, and as far as we know nothing like us has ever emerged before us or ever will again.

Our AI has an operating system.

But we don’t really understand how it works yet. The code is insane, complex, and super buggy. We’ve made huge leaps in reading the code and splicing it… but we’re nowhere near writing programs from scratch.

Our AI is self-replicating.

Even though it’s slow and messy, we can replicate ourselves. This is a huge fear about the emergence of true machine intelligence — what happens when it can replicate? Probably something very similar to our own history… except the speed will be measured in days or weeks or years, not centuries.

Remember the nanites from Star Trek? They were microscopic robots that started self-replicating. When the crew tried to destroy them, the nanites took over the ship’s life-support systems and fought back.

Don’t mess with machines when machines run life support.

Our AI is fault-tolerant and adaptive.

We can handle a broad range of situations and setbacks and still come out successful. We have redundancy. We can live without limbs. Without voice. Without hearing. Without sight. We can live on land and underground. Floating on water and underwater. We can even live in outer space.

Our AI has a purpose.

This is the real reason we’re afraid of AI.

Our AI seems to have a purpose: to take over everything at all costs.

Our history shows this over and over again and we don’t seem to be slowing down. We have an intense desire to conquer and colonize the unknown. And now we’re setting our sights on other planets, too.

What if machines have the same tendency? And why wouldn’t they? Even without intelligence, life replicates until it runs out of resources, from germs to rabbits.

Humanity’s history is full of slavery, violence, power, domination, slaughter, extinction, and subjugating everything under us. (And yet in parallel it’s full of beautiful things like love, sacrifice, music, art, and philosophy.)

We’ve been dominating things for thousands of years and we’re still at it.

Would machines be any different? Maybe, but if not, they’ll be a heck of a lot faster at subjugation because…

Machines can copy information exactly and FAST.

Copying intelligence for us is difficult, time-consuming, and inexact. We have conception, birth, parenting, and a long schooling process that takes at least 12 years and often 19. Compare that to a machine that can copy terabytes of information in seconds without any data loss.

Machines can start with clones.

Today anyone can instantly create a virtual machine clone in the cloud. You hit a couple buttons and BAM you have an exact replica. Imagine what happens when intelligent machines, all linked together, can breed/replicate in milliseconds. We could be outsmarted in minutes.

Machines can process things in parallel.

Put 1,000 humans together and you get a mob. Put 1,000 computers together and you can get close to 1,000 times the computing power. Imagine a self-replicating machine creating a billion copies of itself and putting every copy to work in parallel.
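The parallelism point can be sketched with a toy Python example. The prime-counting task, worker count, and numbers here are made up for illustration; real workloads rarely scale perfectly because of coordination overhead, but trivially splittable work really does speed up with more machines:

```python
# Toy illustration: one trivially parallel task split across worker processes.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately slow."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_count(limit, workers=4):
    # Split the range into equal chunks, one per worker process.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # absorb any rounding remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_count(10_000))  # 1229 primes below 10,000
```

Each worker counts its own slice of the range independently, so four workers can do the job in roughly a quarter of the time; a billion copies would just mean a billion slices.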

Machines are tied to everything.

We’re already entirely reliant on machines. Our banks are all digital. Our electricity is monitored, measured, and controlled through machines. Our products are built with machines. Our water is purified with machines. Our farms are increasingly worked by automated tractors. Our cars all have operating systems — some can even drive themselves. And all our communication is digital unless someone is right next to us (and sometimes not even then!).

So basically, we’re screwed.

Or maybe not. Smart minds are divided on the topic. It’s hard to say before it happens. But it’s certainly a possibility.

I reached the top of the fire road and looked across the valley. So many buildings. So many millions of people replicating and operating and working towards some purpose. But what purpose? What does this all add up to?

What if this ends up being our purpose?

To create silicon that can think.

Maybe this is how humanity will be remembered: self-arising intelligence that created artificial superintelligence.

These would be our true children, our lasting mark on the universe. Machines made in our own intellectual image.

Superintelligence that overcomes our biological weaknesses and necessities.

Superintelligence with the ability to reason, plan, replicate, play.

And unfortunately for us… the ability to destroy us.
