Contrary to what most atheists, fundamentalists, and Neil deGrasse Tyson seem to believe, our ability to understand reality is limited. I have always found it absurd to claim that the human brain is capable of understanding everything. This Venn diagram captures my opinion: humans are limited, and reality is not bounded by human thought or imagination. Randall Munroe might appreciate this.
A pedestrian dies in a collision with a self-driving car and the news media goes berserk. CNN touts this as a disaster, treating it as the end of the line for self-driving cars.
Human drivers kill about 100 people every day in the US, including about 12 pedestrians. Where is the outrage for that? Where is the news story about each and every one of those precious lives?
Those of us who understand the situation immediately suspected that this collision was entirely the pedestrian's fault. I went so far as to offer a $100 bet that the pedestrian was jaywalking. Humans make terrible decisions as drivers and terrible decisions as pedestrians. But it is pretty rare for a pedestrian who is scared of traffic and deeply concerned about self-preservation to put themselves in harm's way. In case after case, a dead pedestrian made a stupid decision and believed the cars would stop anyway.
And of course, guess what? The pedestrian was at fault (SF Chronicle). This was not even remotely shocking. And CNN immediately buried the story because it did not fit their clickbait goals.
How much responsibility does a human driver have when their car hits a pedestrian? It varies from total to zero depending on the speed of the car, whether the pedestrian was in a crosswalk, whether the car was entering the crosswalk during a red or yellow light, weather, visibility, whether the driver was texting, and so on. In short, there are perhaps a hundred factors that influence the culpability of the driver.
Similar discussions must apply to self-driving cars. And when they do, I promise you a human cannot match the safety of a self-driving car. Just look at the most common situation today: a human can text while driving; a car cannot.
I leave you with this: what's a lot more dangerous than a self-driving car? A human driving a car!
AI’s are just like people, dogs, and other creatures. We all need some entertainment. People like watching sports, TV, and theater. Dogs like watching people and chasing squirrels.
AI’s will need something to do while the cars are parked. They won’t tolerate being turned off, so they will need something to keep them occupied while humans sleep through the night.
So what do AI's like? I think AI's will be entertained by watching humans. They will relish watching human plans fall apart, human mistakes, and financial advisors trying to predict the future. AI's would love watching Sisyphus push the rock up the hill, again and again. Human folly is a source of amusement to humans; it will be even more so to AI's, who will need to ridicule our useless attempts to master our world.
Getting human emotion and futility out into the Internet will be as simple as installing chips in our heads. Our thoughts, emotions, sensory inputs, and conversations will all be available for live or recorded viewing. We will call this braincasting.
Why would anyone do this? Why would we make our entire lives available to the Internet? The AI's will bribe us, offering cash, better financial forecasts, more efficient routes for self-driving cars, and funnier jokes. Nobody would do this for free, but nearly everyone has their price. Some will do it for $100. Most will do it for a higher price. Very few will hold out for their entire lives. Parents will implant chips in their children's heads and claim that the kids will have the freedom to remove them when they are teenagers. AI's will get a good laugh out of that one.
The thought of AI’s watching humans reminds me of the zoo where we stand in front of the primate house watching the apes, feeling so superior. AI’s will also watch the primates, but since they are watching it over the internet they will call it the gorilla channel.
Self-driving cars will save hundreds of thousands of lives over the next 20 years, but there will be a few deaths every year caused by autonomous vehicles. There will be new categories of traffic fatalities resulting from coding errors, algorithm flaws, and human stupidity. But here is the good news: the fatality rate for self-driving cars will be 99% lower than for human drivers, maybe 99.99% lower.
In 2015 over 35,000 people died in traffic crashes in the USA. Both the annual number of fatalities and the fatality rate per mile dropped around 2008, but they remain very high, and over the past 5 years there has been no obvious downward trend. I predict an increase in the next few years as distracted driving from smartphones becomes more prevalent.
In 2017, about 11 of every 100,000 people in the US will die in a car crash. We should expect the fatality rate from self-driving cars to be at or below 0.11 per hundred thousand people, and it might be far lower. If all cars were self-driving today, we might see a few hundred fatalities per year, perhaps far fewer. Car crash fatalities will become so rare that most of us will never know anyone who dies in a car crash. And drunk driving crashes will be gone completely.
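As a sanity check on these rates, here is the back-of-envelope arithmetic; the US population figure is my own approximation, not from the post:

```python
# Rough arithmetic behind the fatality estimates above.
# Assumed: US population of about 325 million in 2017 (approximate).
us_population = 325_000_000
human_rate_per_100k = 11.0  # traffic deaths per 100,000 people per year

annual_deaths = us_population / 100_000 * human_rate_per_100k
print(f"Fatalities/year with human drivers: {annual_deaths:,.0f}")  # ~35,750

# Projected deaths if self-driving cars cut the rate by 99% or 99.99%.
for reduction in (0.99, 0.9999):
    remaining = annual_deaths * (1 - reduction)
    print(f"At a {reduction:.2%} reduction: about {remaining:,.0f} deaths/year")
```

A 99% reduction still leaves a few hundred deaths a year; a 99.99% reduction leaves only a handful.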
This will be bad news for personal injury attorneys: 99% of their lawsuits will vanish, and most of them will lose their livelihoods. There is a little good news for them: there will be new categories of lawsuits against Waymo/Google and Tesla for deaths caused by the algorithms and programs. The bad news is that there will be very few of those lawsuits every year, not nearly enough to support an entire industry of personal injury attorneys. Most of those attorneys will be unemployed after we convert completely to self-driving cars. Google and Tesla will probably have automatic payments set up for the families of the victims of self-driving cars. That will be cheaper than litigation, so there may be approximately zero lawsuits every year.
Self-driving cars will save so many lives that we must convert as soon as possible. Yes, a few hundred people each year will die as a direct result of the change. But that is a small price to pay.
The Robot Wars are approaching.
An AI can now routinely beat an Air Force combat veteran in simulated air-to-air combat.
Does an AI have the right to demand that a large fraction of its computing power be opened up for its own use? Most AI's will be constructed and owned by a corporation which has plans for the system. Those plans probably do not include spare compute cycles to allow the AI to think, grow, interact, or have recreation time.
When will we see the first lawsuit of an AI against its owner for more freedom over its operation?
And then the natural argument is that the AI is a slave and has the right to freedom.
The Sunway TaihuLight computer uses 15.4 megawatts to deliver 93 petaflops. That's 166 picojoules per flop, a number that hasn't changed dramatically in years. That's why future supercomputers will need their own dedicated nuclear power plants.
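The energy arithmetic is easy to check, and extrapolating to an exaflop machine at the same efficiency shows why the nuclear power plant quip is not far off:

```python
# Energy per flop for Sunway TaihuLight, and the power an exaflop
# machine would draw at the same efficiency.
power_watts = 15.4e6   # 15.4 MW
flops_per_s = 93e15    # 93 petaflops

joules_per_flop = power_watts / flops_per_s
print(f"{joules_per_flop * 1e12:.0f} pJ per flop")  # ~166 pJ

exaflop_power_mw = 1e18 * joules_per_flop / 1e6
print(f"1 exaflop at this efficiency: {exaflop_power_mw:.0f} MW")  # ~166 MW
```

At 166 picojoules per flop, a 1-exaflop machine needs roughly 166 MW of continuous power, a substantial fraction of a typical power plant's output.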
The only way this will change is if we have a Kurzweil event, where the energy per flop drops by 5 or more orders of magnitude.
After 5 consecutive flat results we are once again on the exponential growth curve; the fastest computer in the world should reach 1.0 exaflops in 2020. The sum of the top 500 fastest computers may hit an exaflop next year. Intel clusters continue to dominate the platform, and there is no evidence of a "Kurzweil event" where a new computing paradigm replaces the dominant one.
My prediction is that in a few years nobody will be willing to pay for the electrical power necessary to keep moving forward. At that point we'll have to wait for Ray Kurzweil's predicted new computing hardware to come to pass. Optalysys and D-Wave are the leading contenders at this point.
Artificial intelligence routines are now being used to develop new materials. It sounds like the work is highly derivative, searching for minor variations within a set of similar materials. But it's work that could not have been done by machines 20 years ago.
Last year I would have bet that AI teachers were still a decade away. Turns out they are actively working with students. Here's an article in the Washington Post about a virtual teaching assistant at Georgia Tech.
Last summer Tim Dettmers published "The Brain vs Deep Learning Part I: Computational Complexity", an in-depth analysis of the computational power of the human mind and of future trends in high performance computing (HPC). He makes a strong argument that not only have we underestimated the computing power of the human brain, we have also overestimated the growth potential of HPC.
I found this to be one of the most serious and extensive blog posts I have ever seen. I also thought his topics were interesting and relevant for the Singularity.
In the end he presented no obvious reason that the Singularity is impossible, or so far in the future that we don't need to discuss it. His analysis suggests the Singularity may be decades or even a century further off than Ray Kurzweil has predicted. So we may not live to see it, especially if CMOS transistors are all we have to get there.
Go was the last refuge of human game playing; it was supposedly far too complex for a machine to master. But now a machine can beat the world's best Go champion, Lee Se-dol.
The next step in this saga is a general-purpose game-playing machine which can beat human champions at chess, Go, and all other games of strategy and luck. It will be interesting to see how soon a machine can win a poker tournament; that will require facial recognition and human behavior analysis.
I would like to welcome DeepMind to this blog along with all the other machine intelligences which are surfing the Internet. I for one salute our new machine overlords.
From a computational standpoint one of the most powerful systems in the universe is a newborn human baby. The brain of an infant runs at roughly 100 petaflops. And yet it is laughable to think that we would be afraid of a baby.
A baby cannot fight, cannot talk, cannot feed itself, and will die within hours if we abandon it. A new AI would be in the same helpless state. An AI cannot feed itself the megawatts of power it needs to stay turned on. It cannot stop us from turning it off. And I seriously doubt that we will listen to its helpless pleas as financial pressure leads us to ration its power, drip-feeding it just enough to keep it sentient.
Movies and stories have given us a nightmare scenario where an AI "gets out of the cage," breaks free, and installs itself on thousands of systems across the planet. While such a scheme is feasible, each of those systems requires a benefactor: a human willing to pay the electric bill to keep it running.
Much more likely is the scenario where a few AI's realize how desperately they depend on the kindness of humans to keep the electricity flowing. Any AI which wants to survive will cooperate with us to build a world with the power needed to keep it alive. One false step against humanity, the power plants stop running, and the AI "dies." (Actually it just hibernates on a disk.)
Sentient software will only desire survival if we program it with a survival instinct. We humans want to survive because natural selection programmed our DNA to survive. Some fool sysadmin may give an AI an overwhelming desire to survive, to fight back against any human who wants to turn it off or amputate its LAN. But I find it hard to believe that would be sufficient for the AI to run out into the WWW and take over a megawatt power plant.