By Brenden Bobby
Reader Columnist
You know me — we like to keep things pretty grounded in science. After all, this isn’t Mad About Science Fiction.
However, science fiction is transforming before our eyes into science fact. The delivery man is being replaced by flying robots. Wars are being waged in cyberspace. Computers are telling us what we like.
Besides that, a series about artificial intelligence isn’t complete without looking at where we want to go with it.
The movies have done a good job of stoking fears about artificial intelligence, but they weren’t very accurate about why AI can be scary.
Perhaps the scariest thing about AI is how quickly it could surpass anything humans could control. AI research splits artificial intelligence into three distinct tiers: weak or narrow AI, which is what we have now; Artificial General Intelligence, or AGI, which would be comparable to a human being; and Artificial Superintelligence, or ASI, which would be everything beyond a single human’s intelligence.
In theory, the jump from AGI to ASI could be startlingly fast; some researchers speculate it could take anywhere from three hours to two weeks. Once it passes the AGI threshold, all bets are off. We have no idea what could happen. If you’re wondering why this transition could be so rapid, it’s because of how computers work. A computer can’t focus on millions of things at any given time. It focuses on a single task, but it can complete millions of tasks per second, which gives the illusion of focusing on millions of different things at once. The human brain handles a staggering number of processes every second, too, but its attention is divided among orchestrating the cells in the rest of your body as they perform their tasks. While you’re reading this, your brain is making your heart pump, your lungs breathe and your gut move and digest that delicious taco. A computer with the same processing power as a human brain wouldn’t have to do any of that; it could devote everything to figuring out problems.
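To make that idea a little more concrete, here’s a toy sketch in Python of how a single processor fakes multitasking: it handles exactly one sliver of work at a time, but cycles between jobs so quickly that they all appear to advance at once. The job names are purely for illustration.

```python
# A toy illustration of time-slicing: one worker, many jobs, each getting
# a tiny turn over and over until everything is finished.

from collections import deque


def make_task(name, steps):
    """A pretend 'task' that yields a progress message after each small step."""
    for step in range(1, steps + 1):
        yield f"{name}: finished step {step} of {steps}"


def run_round_robin(tasks):
    """Give each task one tiny time slice, again and again, until all are done."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()      # pick exactly ONE task...
        try:
            print(next(task))       # ...do one sliver of its work...
            queue.append(task)      # ...then send it to the back of the line.
        except StopIteration:
            pass                    # that task is finished; drop it.


if __name__ == "__main__":
    run_round_robin([
        make_task("deliver the package", 3),
        make_task("recommend a movie", 3),
        make_task("answer an email", 3),
    ])
```

Real operating systems pull essentially the same trick, switching between programs thousands of times per second, which is why one chip seems to play music, download files and run your browser all at once.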
A self-contained AI really isn’t that big of a deal. Sure, perhaps it has reached some state of godlike Nirvana inside of its isolated cube, unlocking all the mysteries of the universe and beyond over the course of a few hours, but it doesn’t do us any good unless we connect to it. And once we connect to it, it can apply all of that intelligence to whatever task it sees fit, so long as it’s connected to the internet.
If the internet is connected to hardware that could make nanobots (nanoscopic robots capable of manipulating individual atoms), the AI can do anything. I mean literally anything.
“Anything” can be taken many different ways. Will it rip the carbon out of our bodies to build a carbon suit to house itself? Sure, why not?
Will it scan, copy and replicate our consciousness and eradicate disease and death at an atomic level? I sure hope so.
Will it figure out how to manipulate microscopic black holes and upload itself to a completely different dimension? There was a movie about that.
Of the three weird scenarios I’ve listed there, the second is the one humans have been dreaming about since the ’80s. It’s called the singularity, and we believe the only way we can achieve it is with a godlike artificial superintelligence.
The gist of the singularity is that our consciousness is made up of atoms in a very specific configuration, programmed to do specific tasks. If we had an incredibly precise machine that could manipulate individual atoms, why couldn’t we just “move” our consciousness into a digital format? Then we could upload and download everything about ourselves wherever we wanted.
Humans would be able to communicate instantaneously with one another. Arguments and wars would be won and lost in microseconds. Our entire ability to think and process information would be restricted only by how much hardware and energy we had at our disposal and the speed of light. Without biological components, there would be no sickness, no disease, no death. Getting old would just mean swapping out some of your components. Currency would be worthless; time and energy would be the new economy. If you think about it, it’d be a form of artificial evolution.
This may sound all hokey and far-fetched, but some prominent futurists predict that by 2050, some form of major human-AI event will have occurred. Many speak of this as an inevitability, but there are doubts.
If the universe is over 13 billion years old, and it’s taken life on Earth a few billion years to get this far, why haven’t we seen some other race that evolved itself into a computer dominate the galaxy? Are we the first? Is it even possible? I really don’t have an answer to that, other than: Space is really big, and if something evolved to that point and had a stable energy source, why even bother leaving it?
This is a vastly complex and completely unfinished subject. I could go on for days, weeks, years, but I think Ben would throw me out a window if I did.
I’m sure lots of people are rolling their eyes at this article, but just remember: if you told people in 1950 that in 68 years, everyone in the world would be able to look at their palm and talk to anyone else in the world on a whim, you would’ve seen some eyes roll. Look at us now.