Wednesday, June 13, 2018

Why AI should be called Artificial Intuition*

This post got started because of this tweet:
The referenced article isn't really about AI; rather, it's about an inexplicable algorithm. But a lot of "AI" fits into that category, so it's an appropriate starting point. Intelligence isn't just about getting the right answer. It's about knowing how you got to that answer, and being able to explain how you got there. If you can come up with the right answer but cannot explain why, it's not intelligent behavior. It might be trained behavior, or instinctive or even intuitive behavior, but it's not "intelligent".

What's been done with most "AI" (and I include machine learning in this category) is to develop an algorithm that can make decisions, perhaps (most often, in fact) with some level of training and usually a lot of data. We may even know how the algorithm itself works, but I wouldn't really call it intelligence until the system that implements the algorithm can sufficiently explain, for any given decision instance, how that decision was reached. And to say that it reached that decision because these vectors were set to these values (the most common form of training output) isn't a sufficient explanation. The system HAS to be able to explain its reasoning, and for that to be useful to us, the reasoning has to be something we (humans) can understand.
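
To make the "these vectors were set to these values" point concrete, here's a minimal sketch of what such a system can actually say about itself. Everything in it (the feature names, the weights, the decision) is hypothetical, invented purely for illustration:

    # A hypothetical "trained" decision procedure: a linear model reduced
    # to its learned weights. None of these names or numbers come from a
    # real system; they're invented for illustration.
    weights = {"age": 0.8, "income": -0.3, "tenure": 1.2}   # training output
    bias = -0.5

    def decide(features):
        # The entire "reasoning": a weighted sum compared to a threshold.
        score = bias + sum(weights[f] * v for f, v in features.items())
        return "approve" if score > 0 else "deny"

    applicant = {"age": 0.9, "income": 0.2, "tenure": 0.5}
    print(decide(applicant))   # -> approve
    print(weights, bias)       # the only "explanation" on offer: raw numbers

Asked why it approved, all this system can report is the weight dictionary, which is exactly the kind of non-explanation I mean.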

Otherwise, the results are simple mathematics without explanation.  Let me tell you a story to explain why this is important:

A lifetime ago (at least as my daughter would measure it), the company I worked for at the time obtained a piece of software that was the life's work of a physician and his assistant. It was basically a black box with a bunch of associated data that supported ICD-9-CM coding. We were never able to successfully build a product from it, even though we WERE able to show that it was as accurate as human coders at the same task. In part, I believe that was because it couldn't show coders (or their managers) HOW it came to its coding conclusions, and because that information wasn't available, it couldn't argue for the correctness of its conclusions (nor could it be easily trained to change its behavior). It wasn't intelligent at all; it was just a trained robot.

Until systems can explain how they reach a conclusion AND be taught to reach better ones, I find it hard to call them intelligent.  Until then, the best we have is intuitive automata.


For what it's worth, humans operate a lot on gut feel, and I get that; I also understand that much of it is based on experiential learning we aren't even aware of. But at the very least, humans can argue for the justification of their decisions. Until you can explain your reasoning to a lesser intelligence (or your manager, for that matter), you don't really understand it. Or as Dick Feynman put it: "I couldn't reduce it to the freshman level. That means we don't really understand it."

   Keith

P.S. The difference between artificial and human intelligence is that we know how AI works but cannot explain the answers it gets, whereas we don't know how human intelligence works but humans can usually explain how they arrived at their answers.

* Other Proposed Expansions of "AI"

  1. Automated Intuition
  2. Algorithms Inexplicable
  3. Add yours below ...



1 comment:

  1. Good perception. I agree.

    So far, what most people sell claiming to be AI is just Statistics on steroids.
