Utility companies own millions of miles of power lines and need to inspect them regularly. Anomalies on power lines can have enormous consequences, such as large power outages, forest fires, and serious risks to human life. Millions of power line poles have to be inspected from various angles in order to find defects that can be only a few millimetres wide.
Manually going through an enormous number of images to find these defects is a harrowing task.
Using state-of-the-art deep learning and data science methods, Sterblue has reached human-level performance on several anomaly detection and equipment segmentation tasks for power line equipment.
Here we are sharing some basic insights from our growing experience, far from the clichés of AI as a magical tool that solves all problems in a single click.
It All Starts With Good Data
Very high-quality labeled data is necessary to train neural networks successfully. The reality in 2019 is that success in machine learning requires a near-perfect dataset.
This is why Sterblue’s strategy is to provide an end-to-end inspection solution that encompasses both automated data capture and automated data analysis. Having a clean, deterministic and reproducible drone flight plan around power lines is critical to obtaining a homogeneous dataset.
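To make the idea of a deterministic, reproducible flight plan concrete, here is a minimal sketch of how such a plan could be generated: an evenly spaced ring of waypoints around a pole, each with the camera heading pointing back at the pole, so every flight of the same plan yields the same viewpoints. The function name, parameters, and flat-earth coordinate conversion are illustrative assumptions, not Sterblue's actual implementation.

```python
import math

def orbit_waypoints(pole_lat, pole_lon, radius_m, altitude_m, n_points=8):
    """Hypothetical sketch: a reproducible ring of waypoints around a pole.

    Positions are evenly spaced on a circle of radius_m metres, each with
    the camera heading pointing back at the pole, so repeated flights of
    the same plan capture the same viewpoints.
    """
    # Rough metres-per-degree conversion (adequate for small offsets).
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(pole_lat))
    waypoints = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        dx = radius_m * math.cos(theta)  # east offset from the pole, metres
        dy = radius_m * math.sin(theta)  # north offset from the pole, metres
        # Compass bearing from the waypoint back toward the pole.
        heading = math.degrees(math.atan2(-dx, -dy)) % 360
        waypoints.append({
            "lat": pole_lat + dy / m_per_deg_lat,
            "lon": pole_lon + dx / m_per_deg_lon,
            "alt_m": altitude_m,
            "heading_deg": round(heading, 1),
        })
    return waypoints

# Same inputs always produce the same plan, hence the same viewpoints.
plan = orbit_waypoints(47.218, -1.553, radius_m=10.0, altitude_m=12.0)
```

Because the plan is a pure function of its inputs, two inspections of the same pole produce directly comparable images, which is what makes the resulting dataset homogeneous.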
After we started using images captured with our drone software, we noticed a marked improvement in results compared with images captured by other means.
A Lot of Data
High-quality data is one thing. But in order to make a difference in the world of applied AI, it is necessary to have access to a lot of data.
Any AI project that lacks a vast amount of training data is doomed to fail, except for extremely rare cases where a revolutionary approach is successfully developed, but that is a once-in-a-decade event. For the rest of us, having a large funnel to get data into the system is a must-have in order to get valuable results.
Sterblue’s data acquisition tools help us gather these large amounts of data. But even then, deploying the data capture solution operationally at large scale demands considerable effort. Every day, dozens of people are involved in capturing data in the field, requiring costly logistical preparation.
Having a lot of good images is useless if the image labelling is not top-notch. Sterblue started by training neural networks on images labelled by our end-users (utilities) on our platform. However, we noticed that the quality was not always perfect.
This makes sense: our end-users are the very people who pay us so they do not have to label images themselves. They are happy to label datasets initially to get the AI pipeline started, but AI in 2019 needs more data than that, at a quality standard our customers cannot meet without significant effort on their side.
This is why we use third-party data labelling providers to clean our datasets, so we end up with near-perfect data for AI training. Making this switch several months ago was the single most impactful change we implemented toward our goal.
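One simple way to spot labels that need cleaning, sketched below under the assumption that labels are bounding boxes, is to compare two annotators' boxes on the same image and flag any box with no close counterpart. The function names and the 0.5 intersection-over-union threshold are illustrative choices, not a description of Sterblue's actual pipeline.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def flag_disagreements(labels_a, labels_b, threshold=0.5):
    """Indices of annotator A's boxes with no overlapping match from B.

    Flagged boxes are candidates for review by a second labelling pass.
    """
    return [i for i, a in enumerate(labels_a)
            if all(iou(a, b) < threshold for b in labels_b)]
```

Boxes flagged this way can be routed to a reviewer instead of silently entering the training set, which is where label quality is won or lost.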
Knowing the Business Domain
Sterblue’s strategy has always been to address business verticals one after another. This proved highly valuable, as we realized that deep business domain knowledge is a critical factor of success in applied machine learning.
This business knowledge allowed us to design appropriate data representations, optimize labelling tools, ensure data quality, and many other aspects of the data science pipeline.
Training deep neural networks without business knowledge would be like teaching a topic you know nothing about just by following a book: it might seem workable in theory, but it fails badly in practice.
On the other hand, the detailed business knowledge we have accumulated by interacting with real-world data over several years is now an invaluable asset.
Making Hard Science Happen
As its name implies, data science is a science. Some people portray it as an art form, but they could not be more wrong.
Data science is science, and that means assessing results against real-life objective measurements, and never losing sight of the hard truths. It is easy to fool yourself and get enthusiastic about those few amazing results your AI produced while ignoring the surrounding sea of garbage results. Consistent, objective metrics are a good way to know precisely where you are and whether you are going in the right direction. Anecdotal results are not.
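As an illustration of what "consistent, objective metrics" means in practice, the sketch below computes precision, recall and F1 over an entire held-out set of binary anomaly labels, so the score reflects every prediction rather than a few hand-picked successes. This is a generic textbook formulation, not Sterblue's specific evaluation code.

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall and F1 for binary anomaly labels (1 = anomaly).

    Computed over the whole validation set, so a handful of impressive
    detections cannot mask a sea of false positives or missed defects.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Tracking these numbers on the same validation set after every change makes "are we going in the right direction?" a measurable question instead of a matter of enthusiasm.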
Success in AI is not achieved by performing one-of-a-kind stunts, but by methodically applying state-of-the-art methods, with a healthy dose of pragmatism and creativity along the way.
The Power of Mixing
Machine Learning is not a fully solved problem. This means that for a given problem, several methods can provide good results. As a data scientist, it is necessary to be open to various approaches and to try them in order to find the ones most relevant to the use case.
Often, the robustness of the final solution comes from a smart mix of several approaches. Adversarial examples illustrate why: a simple sticker or a few altered pixels can fool some neural network architectures into mistaking an object for a seemingly unrelated one.
Just as purebred animals tend to be fragile while mixed-breed animals are more robust, a pure application of a single neural network is sometimes fragile. A product that mixes several applied machine learning approaches will be more robust.
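One common way to mix approaches, shown here as a minimal sketch, is a majority vote across several models' predictions for the same image: a single fooled model is outvoted by the others. This is a generic ensembling technique offered as an illustration, not a description of Sterblue's production system.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote across several models' class predictions for one image.

    predictions: one predicted label per model. If a sticker or a few
    altered pixels fool one model, the remaining models outvote it.
    """
    label, _count = Counter(predictions).most_common(1)[0]
    return label

# Two of three (hypothetical) models agree, so the fooled one is outvoted.
result = ensemble_vote(["insulator", "insulator", "toaster"])
```

The same voting idea extends to mixing fundamentally different techniques, such as combining a neural detector with classical image-processing checks, which is where the real robustness gain lies.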
We have seen that achieving valuable results in applied machine learning relies on the key elements described above.
What we have not discussed yet is that each of these elements depends on extensive tooling to be performed efficiently. Developing tools that support and optimize all these steps is, in fact, what applied machine learning is about.
Developing new neural network models is only a tiny part of success in applied machine learning. The bulk of the work lies in the supporting tools around it.
From highly efficient data labelling interfaces to optimal drone flight planning, all of Sterblue's tools contribute to making machine learning work efficiently at scale.