March 21, 2019

Plain and Practical Speaking about Artificial Intelligence/Machine Learning (AI/ML) – Part I

Lee Gopadze
Principal and Co-Founder, Broadband Initiatives, LLC


Much has been written concerning the upcoming revolution in Artificial Intelligence and Machine Learning (AI/ML) capabilities and the new analysis and predictive AI/ML software platforms available from a host of established and new software vendors. In manufacturing, banking, telecommunications networks, environmental and industrial systems, scores of companies are reaching out to investigate the capabilities, value, and cost of these AI/ML platforms.

While there is little doubt that AI/ML platforms can make significant contributions to better performance profiles across a host of industries, including manufacturing, environmental, telecommunications, banking, HVAC systems, and financial processes and forecasting, an often misunderstood key element is the requirement for data mining. What is not readily apparent is that AI/ML solutions require significant amounts of data and logically constructed databases to re-create or replicate the learning, analysis, and remediation capabilities inherent in the human mind.

AI/ML platform vendors often describe data sets in terms of Supervised Learning, Unsupervised Learning, or Reinforcement Learning¹. Simply put, Supervised Learning uses data input from a sensor or other device for which a known pattern of responses exists, or where decision data exists for prescribed courses of action. Unsupervised Learning applies where no pattern of responses or prescribed courses of action is evident. Reinforcement Learning falls between these two extremes and can be seen as data where a general response is available, but no direct action can be taken and further classification needs to be developed. The ability to mine all of these data types is critical to the AI/ML solution.
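To make the supervised case concrete, here is a minimal sketch in Python. The sensor readings and labels are hypothetical, invented for illustration: each labeled reading pairs a (temperature, vibration) measurement with a known outcome, and a simple nearest-neighbour rule predicts the outcome for a new reading. This is not any vendor's method, just the smallest possible example of learning from data with known responses.

```python
# Hypothetical labeled sensor data: (temperature, vibration) -> known outcome.
labeled = [
    ((70, 0.1), "normal"),
    ((72, 0.2), "normal"),
    ((95, 0.9), "failure"),
    ((98, 0.8), "failure"),
]

def classify(reading):
    """1-nearest-neighbour: predict the label of the closest known reading."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda pair: dist(pair[0], reading))[1]

print(classify((96, 0.85)))  # a hot, vibrating reading -> "failure"
```

The key point for the discussion above is that the quality of the prediction depends entirely on the coverage of the labeled history: with only four past readings, the model can only echo what was already recorded.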

Also critical is the existence of a significant amount of ‘relevant data’, and perhaps what was heretofore thought of as ‘irrelevant data’, for the AI/ML algorithms and engines to sift through. Supervised data can be seen as analogous to the sensor inputs which signal a particular action to be taken on a manufacturing line, a particular action taken by a NOC technician for a failure in a wireless network, or the action taken to shut down a turbine in a power generator when temperatures run too high. Typically, the sensor inputs in these cases will be well understood, and there will be a significant database of these sensor inputs and of the actions taken to analyze and then prevent or remediate the problem.

What is more complicated to ferret out are the data sets for Reinforcement and Unsupervised Learning, where only general patterns, or no patterns, of input-output responses exist. Here AI/ML must use techniques such as cluster analysis and/or probability theory to assemble responses for input data sets that may seem similar, but for which no set of response mechanisms yet exists. This may require significantly more data than Supervised data sets, where input-output responses and prescriptive learning already exist.
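The cluster-analysis idea can be sketched in a few lines of Python. The readings below are hypothetical and unlabeled; a minimal k-means routine (a standard clustering technique, not any particular vendor's engine) groups them into regimes without being told what the regimes mean. That final step, deciding what each discovered cluster signifies and what response it should trigger, is exactly the classification work the paragraph above says still needs to be developed.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group unlabeled readings into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Hypothetical unlabeled sensor readings with two apparent operating regimes.
readings = [(70, 0.1), (71, 0.2), (72, 0.15), (95, 0.9), (96, 0.85), (98, 0.8)]

clusters = kmeans(readings, 2)
clusters.sort(key=lambda cl: sum(p[0] for p in cl) / len(cl))
low, high = clusters
print(len(low), len(high))  # 3 3 -- the two regimes were separated
```

Note how much of the work here is data preparation rather than algorithm: the readings must already be assembled, comparable, and on compatible scales before clustering tells you anything.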

Assembling the Supervised, Unsupervised, and Reinforcement data sets is where the challenges in using AI/ML lie today. Paraphrasing Donald Rumsfeld: “There are known knowns, but there are also unknown unknowns—the ones we don't know we don't know.” This is an apt description of the challenge of capturing the data sets for Supervised, Unsupervised, and Reinforcement Learning.

But, as if this were not enough, there is a final challenge in developing the data to support accurate and prescriptive AI/ML platforms: the database in which all of the data sets are captured.

I would postulate that the key to establishing which AI/ML vendors to examine more closely revolves around their ability: (a) to competently identify and classify the datasets which will be required to create prescriptive actions; and (b) to define the database requirements for processing performance and scalability.

Inability to recognize and isolate the key datasets can lead to missed information and thus missed prescriptive decision making. Equally important is a database with the correct attributes for scale (e.g., the ability to rationalize sensor feeds which operate at different time scales and sometimes lag behind reality). The bottom line is that even though we may have algorithms that work, using them at scale and in real time is far from a trivial task.
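To illustrate the time-scale problem, here is a minimal sketch, with invented feed names and readings, of aligning two sensor feeds sampled at different rates onto a common time grid by carrying the last known value forward. Real systems would also handle clock skew, late arrivals, and missing data; this only shows why the rationalization step exists at all.

```python
def align(feed, grid):
    """feed: sorted (timestamp, value) pairs; grid: target timestamps.
    Returns the last reading at or before each grid point (None if none yet)."""
    out, i, last = [], 0, None
    for t in grid:
        while i < len(feed) and feed[i][0] <= t:
            last = feed[i][1]
            i += 1
        out.append(last)
    return out

fast = [(0, 1.0), (1, 1.1), (2, 1.2), (3, 1.3)]   # sampled every second
slow = [(0, 50), (2, 52)]                          # sampled every two seconds

grid = [0, 1, 2, 3]
print(align(fast, grid))  # [1.0, 1.1, 1.2, 1.3]
print(align(slow, grid))  # [50, 50, 52, 52]
```

Only after both feeds sit on the same grid can an algorithm meaningfully correlate them, and doing this continuously for thousands of feeds is where the database's performance and scalability requirements come from.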

But more about this in the next installment in the series of “Plain and Practical Speaking About Artificial Intelligence/Machine Learning (AI/ML)”.

Comments? You can contact me directly via my AdvisoryCloud profile.

¹ Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach

Part 1 of a three-part series. The next part will focus on database requirements, advantages and disadvantages, and the final part will focus on techniques for systematizing the data into relevant bins/categories.
