Monday, March 12, 2012

Module 12

     Bill Joy wrote an article, "Why the Future Doesn't Need Us," about GNR: genetic, nano, and robotic technologies.  According to the article, these are emerging threats that we will face in the very near future.  Fundamental to all three is that they are capable of self-replication.  Self-replication opens the door to a positive feedback loop in which control of these technologies is lost and they take on a life of their own.  In the case of genetics and robotics this is literally true, as these two threats are able not only to self-replicate but to self-improve at an astronomical rate.

     The threat of genetics comes in two forms.  The first is that a plague of some kind is fabricated with a 100% mortality rate for humans and is virulent enough to sweep across the globe before an effective countermeasure can be developed.  The second is that human genetics are modified to the point that these 'humans' are no longer Homo sapiens but rather advanced 'post-humans' which, possibly augmented by nanotech or robotics, develop a superintelligence against which our current species could not compete.  These post-humans would probably have no motivation to be altruistic toward regular humans, as we would be regarded as an inferior, outdated, and weaker strain of humanity.

     The threat of nanotechnology also comes in several forms, one of which is the 'grey goo' problem.  In a nutshell, nanobots capable of self-replication, fabricating copies of themselves from virtually any matter and powered by sunlight or some similarly abundant energy source, would replicate uncontrollably until they literally covered the surface of the earth.  All life would be devoured in the process.  Similar to this are nanobots designed to destroy the biosphere; less a 'goo,' but still the death of everything on earth.  Another threat mirrors that of a biological attack: nanobots programmed to attack humans.  They could target specific parts of the body, like the nervous system, and, being microscopic, could be breathed in just like a virus.
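The grey-goo scenario rests on nothing more exotic than repeated doubling, which outruns any fixed quantity after only a logarithmic number of cycles.  A toy sketch (the target figure below is an arbitrary placeholder, not a physical estimate of anything):

```python
import math

def generations_to_exceed(target_count, start_count=1):
    """Doublings needed for a population that doubles every replication
    cycle to reach or exceed target_count."""
    return math.ceil(math.log2(target_count / start_count))

# Even an astronomically large target falls to a modest number of doublings:
# 2^130 already exceeds 10^39.
gens = generations_to_exceed(10**39)  # -> 130
```

The point of the sketch is only that if each replication cycle is fast, the total time to any catastrophic threshold is a small multiple of the cycle time, no matter how large the threshold.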

     Robotic technologies (I personally feel 'AI technologies' would have been the better term) are the third threat he describes.  As advances are made in the field of artificial intelligence and, concurrently, faster and faster computer hardware is developed, eventually a sentient AI will be created.  If sentience were not the goal of the designers, then there is no guarantee that the AI will be friendly to humans.  Even if it were the goal, there is another issue with AI: self-improvement.  An AI capable of rewriting itself into a more advanced and intelligent form could do so again.  And again.  And again.  This recursive self-improvement would accelerate exponentially: as the AI becomes better at making itself smarter, it becomes in turn even more capable of making itself smarter, and the cycle repeats until the result is an AI superintelligence.  It is this threat that I feel is the most realistic and probable.
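The feedback loop in that paragraph can be captured in a few lines.  This is a toy illustration under one arbitrary assumption (each rewrite improves capability by a fixed fraction of the capability doing the rewriting), not a model of real AI progress:

```python
def self_improvement_curve(start=1.0, gain=0.5, generations=10):
    """Capability after each self-rewrite, assuming every rewrite improves
    capability in proportion to the capability performing it."""
    capability = start
    history = [capability]
    for _ in range(generations):
        capability += gain * capability  # a smarter AI makes a bigger jump
        history.append(capability)
    return history

curve = self_improvement_curve()
# Growth is geometric: each generation multiplies capability by 1.5,
# so ten rewrites yield roughly a 57x gain over the starting point.
```

Because the improvement at each step is proportional to the current level, the curve is exponential; the constant 0.5 is a placeholder, and any positive value produces the same runaway shape.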

     Humans will blindly rush down the technological path, building on the work of those who came before, in pursuit of money, comfort, power, and so on.  Were it not for the blindness, this would not necessarily be a bad thing.  Technology, up to this point, has on the whole improved the lives of humans (at least those in countries with access to it).  As part of this avalanche of progress we will look for ever more creative ways to gain a competitive edge.  One such way is for humans to naively develop an AI capable of improving itself, all the while believing that we would remain in control.  We would not.  You cannot constrain an omniscient superintelligence which possesses mental capacities greater than those of all human beings who have ever lived, combined.

     An AI superintelligence would have no need for humanity; it would simply calculate that humans are at best a nuisance and at worst a threat to its existence, and thus exterminate us.  We would not be able to stop this.  The comparatively primitive Deep Blue beat the best chess player in the world.  How could even the best militaries fight something that doesn't sleep, doesn't make mistakes, learns instantly, can analyze every possible attack and establish the perfect defense, controls the communication infrastructure, and is nearly impossible to kill?  After all, how do you kill a program that would by then exist simultaneously across the entire world's computer infrastructure?  The AI would simply take control of factories and manufacture robots or nanobots which would either kill humans directly or be used to create the devices and technologies that would.  How hard would it be for an omniscient AI to design the perfect virus or nanobot plague?  Trivial.  This would not be like The Matrix or The Terminator, where humanity has a fighting chance.  We will die.

     The problem with defending against these threats is that it must be done proactively, in advance.  You cannot learn from your mistakes when dealing with an existential threat; if it occurs, there won't be anyone left alive to learn from it.  Humanity is terrible at prevention.  We make mistakes, and then we learn from them.  Humanity is also terrible at working for the good of humanity: while individuals can work for the greater good, it is human nature to be selfish, and humanity exemplifies selfishness.  For quite some time, nanotech and genetics will require large, difficult-to-hide facilities for their production.  An AI, however, is just a computer program, and the hardware required to run it will be commonplace within 20-30 years.  The AI could be designed by scientists, a corporation, a military, or just a prodigy in his parents' basement.  The possible sources are too numerous and too varied to defend against, and it only takes one instance to trigger the out-of-control, self-perpetuating chain reaction leading to our annihilation.

     I first learned of this threat about three years ago, when I came across a Wikipedia entry on the Technological Singularity.  The Technological Singularity is the point at which advances in technologies reinforce each other and occur so quickly that predicting the future of technology becomes as impossible as understanding what occurs beyond the event horizon of a black hole.  At first I was ecstatic.  I thought that an exponential increase in technology would result, literally, in the nearly instantaneous resolution of every problem on earth and would launch humanity into a utopia or a higher state of existence.  Then I learned what would happen in the realm of AI research when we hit the Singularity.  Humans cannot be augmented to omniscience as fast as a computer program can; the computer will always get there first.  Before the beginning of the next century, every human being will be dead.
