How to Succeed with Machine Learning

  • Stuart Feffer, CEO
  • Oct 26, 2016
  • 4 min read

At Reality AI we see a lot of machine learning projects that have failed to get results, or are on the edge of going off the rails. Often, our tools and structured approach can help, but sometimes not.

Here are 3 ways to ensure success:

Number 1: Get ground truth.

Machine learning isn’t a magic wand, and it doesn’t work by telepathy. An algorithm needs data: examples of what it is trying to detect, as well as examples of what it is not trying to detect, so that it can tell the difference. This is particularly true of “supervised learning” algorithms, which must train on a sufficient number of examples in order to generate results. But it also applies to “unsupervised learning” algorithms, which attempt to discover hidden relationships in data without being told ahead of time. If the relationships of interest don’t exist in the data, no algorithm will find them.

Number 2: Curate the data.

Data should be clean and well curated -- to get the best results, you need to be able to trust the quality of the data. Misclassifications in training data can be particularly damaging in supervised learning situations. Some algorithms (like ours) can compensate for the occasional misclassification, but pervasive labeling problems can be hard to overcome.
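
As a minimal illustration (not Reality AI's tooling), here is the kind of sanity check one might run on a labeled dataset before training. The file name and the "label" column are hypothetical:

```python
# A minimal curation audit, assuming labeled training data in a CSV with a
# hypothetical "label" column -- the file and column names are illustrative.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Check class balance: a heavily skewed label distribution is an early warning.
print(df["label"].value_counts(normalize=True))

# Flag identical feature rows that carry conflicting labels -- a common
# symptom of misclassified training examples.
feature_cols = [c for c in df.columns if c != "label"]
conflicts = (
    df.groupby(feature_cols)["label"]
    .nunique()
    .reset_index(name="n_labels")
    .query("n_labels > 1")
)
print(f"{len(conflicts)} feature patterns have conflicting labels")
```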

Number 3: Don't Overtrain.

Overtraining occurs when a machine learning model can predict training examples with very high accuracy but cannot generalize to new data, leading to poor performance in the field. Usually this is the result of too little data, or data that is too homogeneous (i.e., data that does not truly reflect the natural variation and confounding factors that will be present in deployment), but it can also result from poor tuning of the model.
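
As a toy illustration of the symptom (a sketch assuming scikit-learn, with deliberately noisy synthetic data rather than anything like a real deployment):

```python
# A toy illustration of overtraining: an over-flexible model memorizes a
# tiny training set perfectly but fails on data it has never seen.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# 20 noisy training points -- far too few, and the labels here are pure noise
X_train = rng.normal(size=(20, 5))
y_train = rng.integers(0, 2, size=20)

# Fresh data drawn from the same distribution
X_test = rng.normal(size=(1000, 5))
y_test = rng.integers(0, 2, size=1000)

model = DecisionTreeClassifier()           # unconstrained depth memorizes
model.fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))  # 1.0
print("held-out accuracy:", model.score(X_test, y_test))    # ~0.5, chance
```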

Overtraining can be particularly pernicious, as it can lead to false optimism and premature deployment, resulting in a visible failure that could easily have been avoided. At Reality AI, our AI engineers oversee and check customers’ model configurations to prevent this unnecessary pitfall.

Example: AI for machine health and preventative maintenance

(Names and details have been changed to protect the inexperienced.)

We recently had a client trying to build a machine health monitoring system for a refrigerant compressor. These compressors were installed in a system subject to rare leaks, and the client wanted to detect in advance when refrigerant in the lines had dropped to a level that put the compressor at risk -- before it caused damage, overheated, or shut down through some other mechanism. They were trying to do this with vibration data, using a small device containing a multi-axis accelerometer mounted on the unit.

Ideally, this client would have collected a variety of data with the same accelerometer under known conditions: many examples of the compressor running in a range of normal load conditions, and many examples of it running under adverse low-refrigerant conditions across a similar variety of loads. They could then have used our algorithms and tools with confidence that the data contained a broad representation of the operating states of interest, including the normal variation that occurs as load and uncontrolled environmental factors change. It would also have contained a range of different background noises and enough samples that sensor and measurement noise was well represented.

But all they had was 10 seconds of data from a normal compressor and 10 seconds with low refrigerant, both collected in the lab. That might be enough for an engineer to begin to understand the differences between the two states -- and a human engineer working in the lab might use his or her domain knowledge of field conditions to begin extrapolating how to detect those differences in general. But a machine learning algorithm knows only what it sees. It would achieve perfect separation between the training examples, showing 100% classification accuracy, but that result would never generalize to the real world. To capture all of the operational variation possible, the most reliable approach is to include examples of a full range of conditions in the data, both normal and abnormal, so that the algorithms can learn by example and tune themselves to the most robust decision criteria.

Reality AI tools do this automatically, using a variety of methods for feature discovery and model selection. To help detect and avoid overtraining, our tools also test models with “K-fold validation,” a process that repeatedly retrains the model while holding out a different portion of the training data for testing. This simulates how the model will behave in the field, when it operates on new observations it has not trained on. K-fold accuracy is almost never as high as training separation accuracy, but it’s a better indicator of likely real-world performance -- at least to the degree that the training data is representative of the real world.
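
For readers who want to try this on their own data, here is a generic K-fold sketch using scikit-learn -- a stand-in for the automated checks described above, not Reality AI's implementation, with synthetic data in place of real sensor recordings:

```python
# A generic K-fold validation sketch, using synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# Each of the 5 folds is held out once while the model retrains on the rest,
# simulating performance on observations the model has never seen.
scores = cross_val_score(model, X, y, cv=5)
print("per-fold accuracy:", scores)
print("mean K-fold accuracy:", scores.mean())

# Compare against training separation, which is typically optimistic.
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

A large gap between training accuracy and the mean K-fold score is the overtraining warning sign described above.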

To understand our machine learning tools more fully and how they can be applied to your data, please contact us at info@reality.ai or fill out this form on our website for a free trial!
