
The promise of ML/AI is real -- so are the challenges

Machine learning and artificial intelligence will make the impossible happen, if developers can understand the what-ifs involved. Expert Torsten Volk unpacks it all.

Editor's note: This is a two-part look at how to evaluate the complex challenges inherent in machine learning and artificial intelligence. In part one, Torsten Volk explains the opportunities of a machine learning model and outlines the breadth of potential issues. In part two, he dissects the level of human involvement required to take full advantage of ML/AI.

Machine learning and artificial intelligence affect nearly every aspect of our business and personal lives, both today and in the future. There is also a level of exasperation around the topic, because there is often no clear, shared understanding of what reasonable expectations for ML/AI are or how success can be measured.

Everyone needs a basic understanding of ML/AI. Software developers staring down an organization's ML strategy must cut through the marketing diversions of software, hardware and services vendors. We all need to understand what ML and AI are and, just as importantly, what they are not and what they cannot deliver today. While ML/AI relies on complex algorithms, understanding and applying those algorithms is not out of reach. We can take advantage of what exists in ML/AI today and plan for more exciting things in the near future.

The state of ML

ML/AI uses mathematical algorithms to translate a set of input variables into concrete predictions. A trained machine learning model can make valid decisions based on input combinations it has never seen before. The self-driving car is a simple example: it must avoid hitting pedestrians in any situation, no matter what the pedestrians look like, what they wear, how fast they move, how tall they are, how loudly they talk, or whether they carry a bag, sit in a wheelchair or wear a hat. The car must also avoid pedestrians regardless of rain or snow, the number of lanes, the presence of a sidewalk, whether it is light or dark, or any other situational parameter.
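To make that core idea concrete, here is a minimal sketch in Python: a model trained on a handful of labeled examples still produces a prediction for an input combination it never encountered. The feature names, data values and choice of scikit-learn's LogisticRegression are all illustrative assumptions, not how a real perception system is built.

```python
# A minimal sketch: a model learns a mapping from input variables to a
# prediction and can then classify inputs it has never seen before.
# The features and data below are hypothetical, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is [height_m, speed_m_per_s, distance_m]
# for an object near the road; the label marks it as a pedestrian (1) or not (0).
X_train = np.array([
    [1.7, 1.4, 12.0],   # walking adult
    [1.1, 2.5,  8.0],   # running child
    [0.5, 0.0, 20.0],   # stationary trash can
    [0.9, 4.0, 15.0],   # cyclist, not a pedestrian in this labeling
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

# An input combination the model never saw during training:
# a tall, slow-moving figure at medium distance.
unseen = np.array([[1.9, 0.8, 10.0]])
print(model.predict(unseen))  # the model still produces a concrete prediction
```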

But how do you train a machine learning model that requires such a wide range of capabilities and must not make any costly mistakes? This is the crux of the problem. Training a machine to drive a car took a multibillion-dollar investment, and it did not produce an AI model that applies to other problems. In other words, training the ML model to drive a car was a long process of solving separate challenges, from pedestrian safety rules to how the car decides on speed when no sign is visible. Many of these situations have more than one dimension and carry significant legal implications if handled incorrectly. For example, the self-driving car should brake for a dog in many cases, unless that action would endanger a human life. How far should our machine learning model go in weighing its options? Will it accept a small fender bender to save the dog? What if that accident involves a truck pulling a large camper in snowy conditions on a bridge? Should the autonomous vehicle use the sidewalk as an escape route if it is fully certain that no pedestrians are present?
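One way to picture this weighing of options is as an expected-cost calculation over candidate actions. The sketch below is purely illustrative: the action names, outcome probabilities and cost values are invented, and a real autonomous vehicle weighs far more factors under hard real-time and legal constraints.

```python
# A toy sketch of making trade-offs explicit: assign a (hypothetical) cost to
# each possible outcome and pick the action with the lowest expected cost.

# Cost of each outcome; harming a human dominates everything else.
OUTCOME_COST = {
    "hit_pedestrian": 1_000_000,
    "hit_dog": 10_000,
    "fender_bender": 1_000,
    "no_incident": 0,
}

# For each candidate action, the estimated probability of each outcome in
# the current situation (a dog on the road, a truck close behind).
ACTIONS = {
    "brake_hard": {"fender_bender": 0.30, "no_incident": 0.70},
    "swerve": {"hit_pedestrian": 0.02, "no_incident": 0.98},
    "continue": {"hit_dog": 0.90, "no_incident": 0.10},
}

def expected_cost(outcomes: dict) -> float:
    return sum(p * OUTCOME_COST[o] for o, p in outcomes.items())

best = min(ACTIONS, key=lambda a: expected_cost(ACTIONS[a]))
print(best, {a: expected_cost(o) for a, o in ACTIONS.items()})
```

Under these made-up numbers, braking hard wins: a possible fender bender is far cheaper than risking the dog or a pedestrian. Encoding that kind of trade-off correctly, across every situation, is exactly the hard part.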

How ML works

To train this self-driving car (and please note I am dramatically simplifying this process), we take the machine for a ride. Or better yet, we take it on hundreds of thousands of car rides with different drivers in different cars on different roads at different times in different countries. We fill the car with sensors -- lidar, cameras, distance sensors -- and measure all inputs from the driver -- gas, steering, shifting, braking -- so the machine can observe as many natural traffic situations as possible.

While this unsupervised learning approach is not a trivial task, as it requires many expensive experimental vehicles and a great deal of processing power, it alone is not sufficient. It does not give the machine enough relevant feedback on which driving actions it should learn to imitate and which ones it should avoid. In an unsupervised learning approach, the machine will mostly attribute negative ratings to behavior that led to an actual incident, such as a crash. But what about decisions that led to near misses with pedestrians? Or bad habits that are often harmless but sometimes lead to serious accidents? Or bad habits that simply increase the risk of traffic jams? When should the car stop to let through a police vehicle or an ambulance? The list goes on.

In addition, it would be unethical to let the machine run over pedestrians for learning purposes, yet through simple observation of human drivers, the ML/AI software would never gain enough experience in this specific department. Other methods, such as using a simulator or showing the machine images of accidents, are only supplementary measures that do not replace the analysis of actual live sensor data and event feedback.
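As a rough illustration of the imitation part of that process, the following sketch pairs recorded sensor readings with the human driver's control inputs, discards samples from rides that ended in an incident, and fits a model to reproduce the remaining behavior. Every array here is an invented stand-in for real telemetry, and a simple linear model stands in for the far larger networks used in practice.

```python
# A toy sketch of learning by imitation: sensor readings are paired with the
# human driver's control inputs, samples from rides that ended badly are
# filtered out, and a model learns to reproduce the remaining "good" behavior.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical log: each row is [distance_to_obstacle_m, own_speed_m_per_s].
sensor_log = np.array([
    [40.0, 15.0],
    [25.0, 15.0],
    [10.0, 12.0],
    [5.0, 10.0],
])
# The human driver's brake-pedal position (0 = none, 1 = full) at each moment.
driver_brake = np.array([0.0, 0.2, 0.6, 0.9])
# Whether the ride each sample came from ended in an incident.
ended_in_incident = np.array([False, False, False, True])

# Keep only demonstrations worth imitating: drop samples from bad rides.
keep = ~ended_in_incident
model = LinearRegression().fit(sensor_log[keep], driver_brake[keep])

# The trained model proposes a braking input for a situation it never saw.
print(model.predict(np.array([[15.0, 14.0]])))
```

Note what this sketch cannot capture, which is the article's point: near misses, harmless-looking bad habits and rare emergencies never show up as clear negative labels, so imitation alone leaves large gaps in the model's experience.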

Read part two to learn about how human involvement still factors into ML.
