Einstein developed his theories of relativity, and it took decades for engineers to understand them and put them to practical use. Without an understanding of relativity, we would not have many of the things we now take for granted - like GPS location on our phones.
Another profound discovery is “deep learning”, something that software developers are actively trying to understand. The concept dates back to the 1960s, and one of its first practical uses was recognizing handwritten digits in the early 1990s. Up until the early 2010s, deep learning went almost unnoticed. Despite having worked in software development since the early 1990s, I had not even heard of it until about 2015, when advances in GPU technology made deep learning truly practical and NVIDIA started promoting the concept. In 2016, I did a deep dive to get a handle on it - and at that point, I realized the most amazing software revolution of my life was starting.
So, what’s so profound about deep learning? It comes down to realizing that one of the most difficult issues in software engineering has been solved: “fuzzy logic” problem solving. Software engineers have been relying on traditional “precise logic” problem solving for decades. This means that every math calculation and control path that software can take is understood by humans with 100% precision. Even bugs can be understood with 100% precision.
This raises the question: can’t all software be developed with precise logic? Unfortunately, some problems may seem simple but are so complex that they can’t be solved in a reasonable amount of time with precise logic. For example, imagine trying to write a program to distinguish between all breeds of cats using geometry and colors. The program would take in an image and output whether it contains a cat and, if so, what breed the cat is. It seems simple, but what if the cat is upside down, or partially hidden under a couch cushion, or mostly covered in mud, or in a background that camouflages it? There are infinite possibilities. More traditional machine learning programs can’t handle the complexity either, because the feature set provided to the machine learning algorithm still must be hand-crafted by a human.
So why do humans find it so easy to identify cats, yet can’t describe to a computer how to identify them? It turns out humans are very good at fuzzy logic but terrible at precise logic. For example, you can look at some breed of cat a few times over a few seconds and remember it for life - but no one can calculate the first 100 digits of pi in their head, without paper, pencil, calculator, or computer.
Deep learning is the solution to fuzzy logic problems – and, amazingly, running an already-trained deep neural network in software essentially takes one line of source code:
class_of_animal = animalNeuralNetwork(picture)
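To make that one-line pattern concrete, here is a minimal Python sketch. The weights, labels, and feature extraction are hypothetical stand-ins (a real trained network would have millions of learned parameters), but the calling convention - one function call from image to class - is the point:

```python
import numpy as np

# Hypothetical "trained" weights: in practice these would come from a
# training run over thousands of labeled images, not be hand-set.
W = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
b = np.array([0.0, 0.0])
LABELS = ["cat", "not_cat"]

def animal_neural_network(picture):
    """Toy stand-in for a trained network: features -> scores -> label."""
    # Stand-in feature extraction; a real network learns its own features.
    features = np.array([picture.mean(), picture.std()])
    scores = W @ features + b          # one dense layer
    return LABELS[int(np.argmax(scores))]

# The one-line usage pattern from the article:
picture = np.ones((8, 8))             # stand-in for a loaded image
class_of_animal = animal_neural_network(picture)
```

All of the complexity lives inside the trained weights; the calling code stays a single line.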
This simplicity doesn’t mean we abandon our traditional precise logic. Mixing both fuzzy and precise logic makes for significantly sophisticated software. For example, an object tracking program could use fuzzy logic to identify objects of interest, then precise logic could be used to determine their speed and trajectory over time.
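As a sketch of that division of labor, suppose a fuzzy-logic stage has already produced timestamped positions for one tracked object (the detections below are made-up values standing in for neural-network output); the precise-logic stage is then ordinary arithmetic:

```python
import math

# Hypothetical output of the fuzzy-logic (neural network) stage:
# (time_seconds, x_meters, y_meters) for one tracked object.
detections = [(0.0, 0.0, 0.0), (1.0, 3.0, 4.0), (2.0, 6.0, 8.0)]

def speed_between(a, b):
    """Precise logic: speed from two timestamped positions."""
    (t0, x0, y0), (t1, x1, y1) = a, b
    return math.hypot(x1 - x0, y1 - y0) / (t1 - t0)

# Speed over each consecutive pair of detections.
speeds = [speed_between(p, q) for p, q in zip(detections, detections[1:])]
```

The neural network handles the part humans can’t specify precisely (what the object looks like), while plain geometry handles the part we can.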
Fuzzy logic could apply to almost any field of study, including autonomous driving, signal analysis, speech recognition, health analysis, and so on. Once I realized the importance of deep learning, I understood that it is the future of programming.
If you’re ready to make the big leap into high-performance fuzzy logic, Abaco can help you get started. On the software front, AXIS ImageFlex already provides great examples of how to use pre-trained neural networks, and soon will provide tools to assist with training and intuitive optimization on platforms featuring NVIDIA’s remarkable GPU/GPGPU technology. Those platforms range from our 10 TeraFLOPS GVC1001 to the recently-announced GR5 3U VPX rugged video/graphics and GPGPU card.
There has never been a better time to find out more about this profound but quiet software revolution.