
What developers must know about supervised machine learning

Humans play a vital role in training an ML/AI system. Expert Torsten Volk explains what software developers need to understand about training machine learning systems.

Editor's note: This is the second piece in a two-part look at how to evaluate the complex challenges inherent in machine learning and artificial intelligence. In part one, Torsten Volk explains machine learning opportunities and the issues associated with ML/AI.

Like everything to do with AI, supervised machine learning involves complicated technical challenges and decisions. The question at the core of the entire ML discussion is how much human intervention is needed. Software developers in organizations that pursue an ML/AI strategy must know what supervised machine learning involves in order to become part of the process. When complexity is high and failure threatens to be catastrophic -- as is the case with a self-driving car, for example -- there is a staggering amount of manual training and testing to do. In the self-driving car example, that manual training goes well beyond simply taking the machine for a ride.

Steps to supervised machine learning

Provide domain knowledge. Dictionaries are essentially knowledge plug-ins for specific domains. In our car example, a dictionary could include a pretrained ML model that recognizes street signs and assigns the correct driving instructions in response to them. Of course, some manual translation work is required so that the machine that drives the car understands these instructions.
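
Here is a minimal sketch of what such a translation layer could look like in Python. The classifier, labels and instruction names are all hypothetical placeholders, not a real library's API:

```python
# Hand-written domain knowledge: map a sign classifier's labels to
# instructions the driving controller understands. All names are
# hypothetical placeholders.
SIGN_TO_INSTRUCTION = {
    "stop": "brake_to_full_stop",
    "yield": "slow_and_check_traffic",
    "speed_limit_50": "cap_speed_kmh_50",
}

def instruction_for_frame(classify_sign, frame):
    """Run a pretrained sign classifier on a camera frame and translate
    its label into a driving instruction; do nothing on unknown signs."""
    label = classify_sign(frame)  # e.g., a CNN pretrained on street signs
    return SIGN_TO_INSTRUCTION.get(label, "no_action")
```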


Provide labels. In self-driving cars or computer games, such as World of Warcraft, labeling the environment means marking up video streams with relevant tags. These supervised machine learning labels act as training data that informs the machine of specific cases that are, based on human judgement, particularly important or rare. Human judgement, by the way, is itself another problem in ML.
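
As an illustration, labeled training data for video frames might look like the following sketch. The record structure and the oversampling factor are assumptions for this example, not a standard format:

```python
# Human-provided labels for individual video frames. A human marks the
# objects in each frame and flags cases judged to be rare.
labeled_frames = [
    {"frame_id": 1042, "tags": ["pedestrian", "crosswalk"], "rare": False},
    {"frame_id": 1043, "tags": ["pedestrian", "stroller"], "rare": True},
]

# Repeat rare cases so the model sees enough of them during training;
# the factor of 5 is an arbitrary illustrative choice.
training_set = [f for f in labeled_frames if not f["rare"]] \
             + [f for f in labeled_frames if f["rare"]] * 5
```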

Provide feedback. To provide feedback, either let the machine experience the negative outcome itself or manually override its bad decisions. It is neither feasible nor legal to let self-driving cars get into accidents just so they can learn from negative experiences. Therefore, humans ride along and provide the supervision: they override mistakes to prevent accidents or bad habits, and the algorithm continuously learns from this feedback.
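
A minimal sketch of such a feedback loop follows, assuming a hypothetical model object with predict and retrain methods; this illustrates the general idea of learning from human corrections, not any specific product's API:

```python
# Learn from human overrides: whenever the safety driver intervenes,
# log the corrected action and fold it back into training later.
corrections = []

def drive_step(model, observation, human_override=None):
    action = model.predict(observation)  # hypothetical model API
    if human_override is not None and human_override != action:
        # The human prevents the mistake; record the correction.
        corrections.append((observation, human_override))
        action = human_override
    return action

def end_of_shift_update(model):
    if corrections:
        model.retrain(corrections)  # learn from every logged override
        corrections.clear()
```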

Provide hyperparameters. Hyperparameterization describes the requirement for humans to make judgement calls that specify how the machine should learn. This type of supervised machine learning includes answers to questions such as "How good is good enough?" and "At what point is it detrimental for a certain algorithm to learn from more corner cases?" While we have powerful and well-optimized CPUs and GPUs today, there are limits to how much context the machine can process in terms of cost and response time.
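
The sketch below shows how such judgement calls might be encoded as hyperparameters. The names and values are invented for illustration:

```python
# Human judgement calls expressed as hyperparameters. Each value
# answers a question the machine cannot decide for itself.
hyperparameters = {
    "learning_rate": 0.001,   # how aggressively weights are updated
    "target_accuracy": 0.97,  # "How good is good enough?"
    "max_epochs": 50,         # when learning from more cases stops paying off
    "max_inference_ms": 50,   # response-time budget per decision
}

def should_stop(epoch, accuracy, hp=hyperparameters):
    """Stop training once the human-chosen bar is met or the training
    budget is exhausted."""
    return accuracy >= hp["target_accuracy"] or epoch >= hp["max_epochs"]
```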

Decide when the training is complete. When you teach the machine to play Pong, the training is complete when it can beat any Pong program and any human Pong champion. But what about when you teach the machine how to be a VMware administrator? In this case, there is no clear positive or negative outcome connected to any one action. There are so many different and equally valid ways to complete a task that capturing the relevant situational factors would mean capturing all situational data, which is not a trivial task. But even if we were able to acquire all of it, how do we know there are not numerous other, equally important data streams in other customer environments that we never even considered training the machine on? This possibility of missing relevant data is another significant challenge with supervised machine learning.
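
The contrast can be made concrete with a sketch of two stopping criteria; the functions and thresholds here are invented for illustration:

```python
# Two very different answers to "when is training complete?"

def pong_training_done(win_rate_vs_champions):
    # Clear, objective outcome: the agent beats every opponent it faces.
    return win_rate_vs_champions >= 1.0

def admin_training_done(task_success_rate, human_signoff):
    # No single correct way to complete an admin task exists, so a proxy
    # metric plus explicit human judgement has to stand in for "done".
    return task_success_rate >= 0.95 and human_signoff
```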

[Figure: What happens in machine learning]

The black box challenge

The black box challenge is about trust in the decisions the machine makes based on all of this supervised machine learning, which can be a problem. The AI model does not make its decision process transparent to humans; it stores its knowledge in a digital form that cannot be deciphered through human logic. For example, when the machine learns to recognize a picture of a turtle, or whether a hot dog is present in an image, it stores that knowledge in a matrix that describes multidimensional correlations at the level of individual pixels. Of course, with some effort, we can teach the machine to recognize the presence of specific body features and turn that fact into detailed inferences. But doing so is labor-intensive and would result in an ML system that is not easy to modify.
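
To make the point concrete, here is a small sketch of what that stored knowledge looks like from the outside. The layer shape is arbitrary, and the weights are randomly generated stand-ins for a trained network's parameters:

```python
import numpy as np

# Stand-in for one learned layer of an image classifier:
# rows ~ input pixels, columns ~ learned features. In a real trained
# model, these values encode the pixel-level correlations.
weights = np.random.default_rng(0).normal(size=(784, 128))

print(weights[:2, :4])
# The printout is just raw floats. Nothing in them reads as "turtle
# shell" or "hot dog bun"; the meaning is spread across all of the
# values at once and cannot be read off by human inspection.
```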

Read part one to learn about the opportunities of an ML model and the breadth of potential issues.
