Monday, March 12, 2012

Module 13

     'The World is Flat,' for me at least, did not present a great deal of new information.  Given that my major is a BIS with a focus on Computer Science, Information Systems and Technologies, and Asian Studies, a lot of the information presented in the book was information I already knew.  However, it did do something I had only scratched the surface of myself: it connected the dots between the pieces of information and formed a much larger picture than I had previously conceived.

     For example, I knew that it was Islamic men of learning who developed algebra and algorithms during the 'Dark Ages,' which was actually an age of enlightenment for Arabia.  I knew that the religion is massive, spreading almost contiguously from Morocco to Indonesia, with smaller communities all over the world.  I knew that the religion was holding its adherents back in the modern era through misogyny, hubris, and repression in countries run by petrocracies.  What I hadn't put together was that all these factors produce a constant sense of humiliation, which in turn fuels groups like al-Qaeda.  Young Islamic men in non-Islamic countries witness firsthand the success of 'infidel' peoples while they are left behind.  This is just one of many 'aha' moments I experienced while reading Friedman's book.

     Overall, I thought the book was excellent.  In my experience, the classes where the 'textbook' isn't actually a textbook at all are the ones I learn the most from.  These classes tend to focus on the understanding of concepts rather than the rote memorization of facts.  Another class I have right now, with Professor Eric Swedin, is like this.  In his class we're reading books by Kevin Mitnick as well as 'Beyond Fear' by Bruce Schneier, and I'm learning a great deal more about security from those books than I have from any textbook I've read on the same subject.

     As for 'The World is Flat,' my only complaint is that it is getting to be a little bit dated.  It is, after all, almost five years old, and in today's rapidly changing world five years is a long time.  I wonder if the author has plans to release an updated version of the book.  The current one is 'version 3.0' so perhaps a 3.5 or 4.0 is in the works...

     I will graduate at the end of this semester, and I think that this course was the only general education course I have taken that was at all pertinent to my major.  Even though I hate writing essays, this class was pretty enjoyable because it related to my interests and career goals.  One thing I did not enjoy was reading Bill Joy's article.  It reminded me of what I believe to be the inevitable extinction of the human race in the near future.  I'm not suggesting that the article should be removed from the course; far from it.  In fact, I wish that the dangers of the Technological Singularity were widely known so that perhaps we, as a race, could take appropriate steps to protect our future.

     This just doesn't seem likely to me, however.  Most Westerners are much more concerned with who's going to win the latest 'reality' show or what overpriced dress some bimbo wears to some awards event, most of Islam is more concerned with how humiliating the status quo is, and most people in Africa are more concerned with either finding something to eat or how best to kill each other.  I believe that, ironically, humans will still be killing each other over our differences right up to the day when we are all killed because of our lack of unity.

Module 12

     Bill Joy wrote an article about GNR, which stands for genetic, nanotech, and robotic technologies.  According to the article, these are emergent threats that we will face in the very near future.  Fundamental to all three is that they are capable of self-replication.  Self-replication opens the door wide to a positive feedback loop in which control of these technologies is lost and they take on a life of their own.  With genetics and robotics this is literally the case, as those two threats are able not only to self-replicate but to self-improve at an astronomical rate.

     The threat of genetics comes in two forms.  The first is that a plague of some kind is fabricated with a 100% mortality rate for humans and is virulent enough to sweep across the globe before an effective countermeasure is developed.  The second is that the genetics of humans are modified to the point that these 'humans' are no longer Homo sapiens but rather advanced 'proto-humans' which, possibly augmented by nanotech or robotics, develop a superintelligence against which our current species could not compete.  These proto-humans would probably have no motivation to be altruistic towards regular humans, as we would be regarded as an inferior, outdated, and weaker strain of humanity.

     The threat of nanotechnology also comes in several forms.  One of these is the 'grey goo' problem.  In a nutshell, nanobots capable of self-replication, using virtually any matter as raw material and presumably sunlight or some similarly abundant energy source, would replicate uncontrollably until they literally covered the surface of the earth.  All life would be devoured in the process.  Similar to this are nanobots designed to destroy the biosphere; less of a 'goo,' but still the death of everything on earth.  Another threat mirrors that of a biological attack: nanobots programmed to attack humans.  They could target specific parts of the body like the nervous system and, being microscopic, could be breathed in just like a virus.
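
     Just for fun, here's a back-of-the-envelope Python sketch of how quickly uncontrolled doubling runs away.  Every number in it is my own made-up, order-of-magnitude guess, not a figure from Joy's article.

        # Rough guess: how many doublings until self-replicating nanobots
        # outweigh Earth's biosphere?  All numbers are illustrative only.

        NANOBOT_MASS_KG = 1e-15      # assume a femtogram-scale nanobot
        BIOSPHERE_MASS_KG = 1e15     # rough order of magnitude for all biomass

        mass = NANOBOT_MASS_KG       # start from a single nanobot
        doublings = 0
        while mass < BIOSPHERE_MASS_KG:
            mass *= 2                # each generation, every bot builds a copy
            doublings += 1

        print(f"{doublings} doublings to match the biosphere")
        # Prints 100.  At one doubling per hour, that's about four days.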

     Robotic technologies (I personally feel 'AI technologies' would have been the better term) are the third threat he describes.  As advances are made in the field of artificial intelligence and, concurrently, faster and faster computer hardware is developed, eventually a sentient AI will be created.  If sentience was not the goal of the designers, then there's no guarantee that it will be friendly to humans.  Even if it was the goal, there's another issue with AI: self-improvement.  An AI capable of rewriting itself into a more advanced and intelligent form could do so again.  And again.  And again.  This recursive self-improvement would accelerate on its own, because as the AI gets better at making itself smarter, it becomes in turn even more capable of making itself smarter still, and the cycle would repeat until the result is an AI superintelligence.  It is this threat that I feel is the most realistic and probable.
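
     The feedback loop is easy to caricature in a few lines of Python.  This is a toy model with an arbitrary improvement rule I picked out of thin air; it simulates nothing about real AI research, it just shows how growth that feeds on itself outruns plain exponential growth.

        # Toy model of recursive self-improvement: each rewrite multiplies
        # "intelligence" by a factor that itself grows with intelligence.

        intelligence = 1.0                           # human-level baseline
        for generation in range(1, 11):
            improvement = 1.0 + 0.1 * intelligence   # smarter -> better at improving
            intelligence *= improvement
            print(f"generation {generation}: intelligence = {intelligence:.2f}")

        # The growth rate itself rises on every pass, so each doubling takes
        # fewer generations than the one before it.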

     Humans will blindly rush down the technological path, building on those who came before, in pursuit of money, comfort, power, etc.  If it weren't for the blindness, this would not necessarily be a bad thing.  Technology, up to this point, has on the whole improved the lives of humans (at least those in countries with access to that tech).  As part of the avalanche of progress we'll look for more and more creative ways to gain a competitive edge.  One of those ways is for humans to naively develop an AI which can improve itself, all the while believing that we would be in control.  We would not be.  You cannot constrain an omniscient superintelligence which possesses mental capacities greater than those of all human beings who have ever lived, combined.

     An AI superintelligence would have no need for humanity; it would simply calculate that humans are at best a nuisance and at worst a threat to its existence, and thus would exterminate us.  We would not be able to stop this.  The comparatively primitive Deep Blue beat the best chess player in the world.  How could even the best militaries fight something that doesn't sleep, doesn't make mistakes, learns instantly, can analyze every single possible attack and establish the perfect defense, controls the communication infrastructure, and is nearly impossible to kill?  After all, how do you kill a program which would by then exist simultaneously across the entire world's computer infrastructure?  The AI would simply take control of factories and manufacture robots or nanobots which would either kill humans directly or be used to create the devices or technologies that would.  How hard would it be for an omniscient AI to design the perfect virus or nanobot plague?  Trivial.  This would not be like The Matrix or The Terminator, where humanity has a fighting chance.  We will die.

     The problem with defending against these threats is that it must be done proactively, in advance.  You cannot learn from your mistakes when dealing with an existential threat; if it occurs, there won't be anyone left alive to learn from it.  Humanity is horrible at prevention.  We make mistakes, and then we learn from them.  Humanity is also horrible at working for the good of humanity.  While individuals can work for the greater good, it is human nature to be selfish, and humanity exemplifies selfishness.  For quite some time, nanotech and genetics will certainly require large, difficult-to-hide facilities for their production.  A computer AI, however, is just a computer program, and the hardware required to run it will be commonplace within 20-30 years.  The AI could be designed by scientists, a corporation, a military, or just a prodigy in his parents' basement.  The possible sources are too numerous and too varied to defend against, and it only takes one success to trigger the out-of-control, self-perpetuating chain reaction leading to our annihilation.

     I first learned of this threat about three years ago when I came across a Wikipedia entry on the Technological Singularity.  The Technological Singularity is the point where advances in technology reinforce each other and occur so quickly that the future of technology can no longer be predicted, any more than what lies beyond the event horizon of a black hole can be observed.  At first I was ecstatic.  I thought that an exponential increase in technology would result, literally, in the nearly instantaneous resolution of every problem on earth and would launch humanity into a utopia or a higher state of existence.  Then I learned what would occur in the realm of AI research when we hit the Singularity.  Humans cannot be augmented to omniscience as fast as a computer program can get there on its own; the computer will win that race.  Before the beginning of the next century, every human being will be dead.

Module 11

     I happen to love The Matrix, but only because I am able to turn off the part of my brain that objects when things are inaccurate, or just completely wrong, in movies.  If I couldn't do that, I wouldn't be able to enjoy many movies, like The Last Samurai, U-571, and The Matrix.  For this entry, however, I am going to analyze the movie with that part of my brain turned on, which will remove any and all fun and enjoyment from it.

     The basic plotline is that at some point in the past humans managed to create a sentient AI which did not immediately decide to kill everyone.  This is wrong, because any AI capable of independent thought will recursively improve its own code at an exponential rate, attaining omniscience within minutes of its creation.  The AI would then deduce that humanity is a probable threat to its existence and take all measures to eliminate that threat.  Humanity would be unable to stop the perfect killing machine and would be annihilated.

     Ignoring the fact that sentient AIs recursively self-improve into a superintelligence and then kill us, let's look at the inevitable war between humanity and the machines.  The movie is vague on what happened, but I'm guessing humans were losing to a foe which doesn't need to sleep and is extremely difficult to kill, so they decided to block out the sun, which the machines apparently used as their sole power source and which, I guess, humans didn't need anymore.  Also, no other sources of power existed besides heat from human bodies?  Wind, coal, oil, nuclear, and geothermal all just don't compare to the heat of a human body?

     So, ignoring all of that, we have hives of humans who are plugged into the Matrix, a simulation of early 21st century Earth along with all its problems and failings.  If it's in the machines' best interest to keep humans from taking the red pill, why not generate a simulation of a utopian paradise?  The reason given in the movie is that humans couldn't accept such a world and whole hives were lost.  This is totally bogus, because humans accept the world into which they are born.  A man born in the Middle East will live and die believing that Allah is the reason for existence, and that the rest of the world is populated by heathen infidels.  That same man, had he been abducted at birth and raised in the UK by atheist parents, would believe something close to the opposite.  People born into a utopian world would be so busy being happy that they wouldn't question the world deeply enough to disbelieve the illusion.

     Right, so, ignoring all of THAT, we have these pirate ships which broadcast the psyches of red-pilled humans back into the Matrix.  Network security is part of my major and part of what I do in my job.  Securing against network attacks is, from a technical point of view, not that hard.  Most breaches are caused by mistakes in configuration, or by bypassing the protections altogether through phishing emails, social engineering, and the like.  So an entity which IS A COMPUTER should have no trouble blocking hacking attempts against the Matrix.  That the protagonists can get in at all, and stay in without being detected or tracked for even a short while, is ludicrous.
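
     To illustrate why the purely technical side is the easy part, here's the kind of default-deny check a basic packet filter performs; the addresses, ports, and rules are all invented for the example.

        # Minimal default-deny filter sketch: anything not explicitly
        # allowed gets dropped.  Hosts and ports are made up.

        ALLOWED = {
            ("10.0.0.5", 443),   # hypothetical web server
            ("10.0.0.9", 22),    # hypothetical admin SSH host
        }

        def decide(dst_ip: str, dst_port: int) -> str:
            return "allow" if (dst_ip, dst_port) in ALLOWED else "drop"

        print(decide("10.0.0.5", 443))   # allow
        print(decide("10.0.0.5", 23))    # drop -- not on the whitelist

     The hard part is everything a rule table can't see: a legitimate-looking login stolen through a phishing email sails right past a check like this, which is exactly why misconfiguration and social engineering, not clever hacking, cause most breaches.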

     So yes The Matrix is a movie that I cannot enjoy if I think about it.  At all.  But since I can turn off the part of my brain that says "Hey that's wrong!" I'm able to kick back and enjoy Neo doing neo-Kung-fu on Agent faces.

Sunday, March 11, 2012

Module 10

     At first I didn't understand what the protagonist was describing, as the quaint terminology, describing a future from the point of view of someone in the 1940s, was a little hard to follow.  However, the more I read, the more I understood what he meant.  It was a bit eerie reading this story knowing that it had been written in 1946.  When the author described how 'Joe' was able to think for itself, I became very uncomfortable.  A machine that is able to think for itself is a very real threat to human existence.  However, the author of this sci-fi short story took a different direction and instead made 'Joe' into a tool for unrestricted knowledge.  This was nearly as bad: people are people, and given the opportunity to do what they want without consequence, they will act on it.  Conscience is not a significant barrier.  If it were, socialism would be the path to Utopia.

     The story was an excellent example of the results of free access to knowledge without any accompanying wisdom.  Picture-perfect bank robberies, techniques for murder without risk of conviction, universal skeleton keys, counterfeit money, a 'cure' for concupiscence, and ways to force the rest of humanity into a certain way of life are just a sampling of the 'benefits' provided by the new and upgraded Logic service.  Sure, there were some who just asked how to make a good and wholesome dinner and other innocent things like that, but it doesn't take many criminal acts to upset civilization, and there were apparently quite a few in a very short time.  Fifty-four bank robberies in one day, for example.

     The best part of this story is how it shows human nature: the way the protagonist describes Laurine, and the moment when his wife discovers that the Logics will hand over anyone's private information.  "Hurry!" she says, desperate, "before somebody punches my name! I'm going to see what it says about that hussy across the street."  The only reason we haven't experienced this ourselves is that while Google can provide you with some information, it's not an AI which connects all the dots of collective human knowledge for you.

     It is eerie how much the system of Logics resembles the Internet, given that the story was written about twenty years before there was an Internet of any kind (ARPANET was developed in the 60s).  Tanks are like datacenters, and Logics are like PCs (complete with YouTube, Skype, etc.).  I wonder how the author even conceived of the idea.  It was not like it is now, with technology advancing so fast that you only have to imagine what might be possible and it's reality a few years later.  Back then radio was the primary media outlet, and television was the big new thing that few people had.

Sunday, March 4, 2012

Module 9

     So the birth of a Dell laptop goes a little like this: An order is placed over the phone or online, and after the model and any customizations are finalized, the credit card is verified. With payment verified, the order goes to Dell's production system, which sends it to one of six factories around the world. These factories are surrounded by depots of their suppliers, which maintain a volume of the parts they supply. Every two hours, information about the number and types of parts required for the next batch of computers is fed to the suppliers' systems, which send a shipment over to the nearby factory. This facilitates Dell's just-in-time production method. I do wonder whether the suppliers have any sort of JIT system of their own, however. Regardless, after the parts are delivered, the computers are built using the parts just delivered.
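
     Here's a minimal Python sketch of the flow described above, with made-up part names and a single factory standing in for Dell's six; the two-hour batching is the only detail taken from the book.

        # Sketch of the just-in-time flow: every two hours the factory tells
        # the nearby supplier depots exactly what the next batch needs, and
        # it builds only after those parts arrive.  Part names are invented.

        from collections import Counter

        def parts_needed(orders: list[dict]) -> Counter:
            """Aggregate the bill of materials for the next two-hour batch."""
            needed = Counter()
            for order in orders:
                needed.update(order["parts"])
            return needed

        orders = [
            {"id": 1, "parts": ["chassis", "screen_15in", "hdd", "ram_4gb"]},
            {"id": 2, "parts": ["chassis", "screen_17in", "ssd", "ram_8gb"]},
        ]

        batch = parts_needed(orders)   # this is what gets sent to the depots
        print(batch)                   # Counter({'chassis': 2, ...})
        # Suppliers deliver exactly these quantities; the factory holds no stock.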

     Around thirty parts are required to build a laptop. These parts come from a multitude of suppliers all over the world, but many of them are based in China. Dell works with the suppliers to make sure the system stays streamlined and, in the event of a shortfall of a specific part, can do 'demand shaping' by offering alternative options to the customer at a discount or with some other promotion (sketched below). That, in a nutshell, is how the author describes the Dell order process and production system. I wasn't too surprised by it, actually, as I already had a pretty good understanding of the process. In Japan a similar system is used for automobiles: people do not go to a dealership and drive out with the car they just purchased. Dealerships instead keep a model of each type of car on hand for test drives, but the actual vehicle you buy is custom made to order, with whatever features you want added at the time of sale.
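
     Demand shaping is easy to picture as a simple substitution rule: when the ordered part is short, steer the customer to an in-stock alternative with a discount. This is a hypothetical sketch with invented parts and numbers, not Dell's actual logic.

        # Hypothetical demand-shaping rule: if the ordered part is out of
        # stock, offer an available substitute at a discount.  The part
        # names, stock levels, and discount are all made up.

        stock = {"hdd_500gb": 0, "ssd_256gb": 12}
        substitutes = {"hdd_500gb": ("ssd_256gb", 0.15)}  # part -> (alt, discount)

        def shape_demand(part: str) -> str:
            if stock.get(part, 0) > 0:
                return f"ship {part}"
            alt, discount = substitutes[part]
            return f"offer {alt} at {discount:.0%} off"

        print(shape_demand("hdd_500gb"))  # offer ssd_256gb at 15% off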

     I did get a good laugh reading this little story about his laptop. Dick Hunter stated, "We know the customers better than our suppliers and our competition, because we are dealing directly with them every day." Yeah, right. If that were true, Dell laptops would have a removable bottom panel to allow access to the components without disassembling the whole damn machine; laptops with two hard drive bays would ship with two mounting brackets even if only one hard drive was ordered; an OS (or at least a recovery) DVD would ship with the computer in case a format and reinstall was required; screens with a vertical viewing angle greater than 2 degrees would be an option; and 18.4" screen models would be available.

     It seems that al-Qaeda does have a supply chain, called the Virtual Caliphate. Similar to how Walmart reorders an item as soon as it is purchased, al-Qaeda is able to "order" another suicide bomber when one is "taken off the shelf," so to speak. I personally have my doubts about how widespread this 'supply chain for suicide bombers' is; the use of the Internet as an information-gathering and collaboration tool, as well as a venue for the dissemination of propaganda, is another story. The stuff people believe just because it's in print or online. "If it's on the Internet, it must be true, right?"

     The curse of oil is the way oil-rich countries such as Iran have not reformed their political, educational, or just about any other cultural systems, because a demand for change does not exist. Islam compounds this with its kainotophobia, its fear of change. The book is a little dated here, as Arab Spring movements are actually starting to reform at least the political systems of many countries in the Arab world regardless of the oil situation. However, the more oil a nation has, the less true this is.

     Saudi Arabia isn't changing much at all, and it owns the largest fields in the world. The curse of oil as described in the book is, to me, very similar to the problem with welfare in the U.S. Why would someone on welfare, or an oil-rich nation, make the effort towards self-improvement when money is basically handed to them? Free money from the government or free money from oil-dependent nations, the idea is the same. I sometimes wondered what would happen if the oil did run out, but Bahrain has actually already run out of oil and transformed from a 'petrocracy' (a government run by those who own the oil) into a democratic member of the international community. Necessity is the mother of invention (or at least of change), and with oil there's no necessity.