So preoccupied with whether they could…

I'll be completely frank here: AI is a terrifying thing. Sure, some of the more harmless implementations are cool, but if you look down the road AI is on, you'll surely see the problems. It's great to conceptualize and implement an app on your phone that automates tedious phone calls (see Google Duplex). But considering what else is emerging in technology, it's not hard to imagine where this leads. I want to believe everything will be used solely for the benefit of all humanity, but realistically I know that's not going to be the case.

So, let's paint the landscape. AI can be classified into one of two buckets: Narrow and General. Narrow AI is a machine (or computer) producing human-like results or decisions in a small subset of tasks (e.g. image recognition). General AI is a machine (or computer) producing human-like results or decisions across a much broader set of tasks (e.g. autonomous interaction with the world). To be clear, most of the AI implementations that make the news are Narrow. Siri, Google Assistant, and Cortana are good examples: these applications are good at listening to your voice and returning the results you ask for.

By and large, Narrow AI is mostly benign, until of course you start applying it to teach General AI. Some other terms thrown around when talking about AI are Machine Learning and Deep Learning. Both are essentially methods for teaching machines or computers how to make decisions on their own. A third term that pops up is Big Data, which is just an extremely large data set. Developers and companies use Big Data to teach Narrow AI. You've probably seen the captcha images that ask you to select the portions of an image that contain a specific object; those selections are often used as labeled training data for a Narrow AI. If you feed a fledgling AI a trillion images that do or do not contain a cat, eventually it will be able to recognize the feline content of any picture.
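To make that less abstract, here's a minimal sketch of what that training loop looks like in practice, using PyTorch. Everything here is a stand-in: the random tensors take the place of the human-labeled images (the captcha clicks), and the network is far tinier than anything production-grade, but the loop is structurally the same idea: show the model labeled examples, measure its mistakes, nudge the weights, repeat.

```python
import torch
import torch.nn as nn

# Placeholder "dataset": random 64x64 RGB images with random cat/not-cat labels.
# In a real pipeline these would be the human-labeled images described above.
images = torch.rand(256, 3, 64, 64)
labels = torch.randint(0, 2, (256,)).float()

# A tiny convolutional network: learn visual features, then classify.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 1),  # one output: how "cat" the image looks
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)   # how wrong was the model?
    loss.backward()                  # compute gradients of the mistakes
    optimizer.step()                 # nudge weights toward fewer mistakes
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# After training on real labeled data, the model scores any new image:
new_image = torch.rand(1, 3, 64, 64)
probability_cat = torch.sigmoid(model(new_image)).item()
```

Swap the random tensors for a trillion real labeled photos and scale the network up, and you have the cat recognizer described above. Nothing in the loop cares whether the label is "cat" or something far less benign.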

Tin-Foil-Hat Warning.

Now, the scary part: we've been made comfortable with the fact that we're identifying roads for self-driving vehicles, but what we haven't even considered is all the surveillance data collected by the NSA. That data is largely our behavioral information, and since it was collected in a manner most of us would consider unethical, it's not a stretch to think it will be used in an unethical manner as well.

Then you've got the big names collecting an absurd amount of information on people: Facebook, Google, Amazon, etc. These companies are using that data to build AI. Unfortunately, these companies are in business to make money, and financial partnerships are bound to happen. What prompted this particular post is just one of those partnerships: a partnership between Google and the Department of Defense (Project Maven). The idea is to help our unmanned drones better identify their targets. What nobody is saying out loud is that this attaches real firepower to AI.

Let's move on to the truly terrifying: AI doesn't have the conscience of a human and will follow any orders given to it, so long as they adhere to its programming. Meaning an AI will act on instructions that a soldier simply will not. Couple that with the fact that the people in control of those instructions may or may not have the best intentions in mind. With both sides of the political argument throwing around collusion with outside sources, and some notable people in power having their data and information leaked or taken over, that's not a comforting thought. The technology is cool, and I'm curious to see what we can achieve with it. The problem is, of course: who's driving it?