This was my first Global Force AUSA event in Huntsville. Having been at the Washington DC event a couple of times, where I was disappointed at the lack of technical engagement I had on the booth, I was expecting this show to be similar.
Well: I couldn’t have been more wrong. Even in the hour before the exhibition hall formally opened, I was talking about Abaco’s capabilities and the Obox at a technical level – and that went on for the whole three days of the show. At one point, there were three sets of people lined up for a technical overview of Obox. It was good, too, to have the opportunity to demonstrate to the people – like those above from the ROTC – who represent the future of our armed forces.
Thinking about it, and looking around at other similar booths, the reasons became obvious. Firstly, we had a great-looking booth. Secondly, we had a very visible, compelling and interactive demo running on a real, deployable piece of hardware. Elsewhere on the show floor, there weren’t many other vendors who could say the same thing. Static displays and simulated demos seemed to be the norm.
Why the interest?
So: what was it that we were showing that commanded so much interest?
At the top of the screen, the three camera views of the area in front of our booth were visible. The main screen view showed the three views stitched together, with distortion correction, into a 360-degree representation. This was all achieved using our AXIS ImageFlex Image Processing and Visualisation Toolkit.
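The core idea of stitching adjacent camera views can be illustrated with a toy sketch. This is not the ImageFlex pipeline – a real stitcher also performs feature alignment, lens-distortion correction and projection onto a cylindrical surface – just a minimal cross-fade of three overlapping frames:

```python
import numpy as np

def stitch_three(views, overlap):
    """Stitch three equal-sized camera frames left-to-right,
    linearly cross-fading in each overlap region.
    Toy sketch only: real stitching also needs feature
    alignment and lens-distortion correction."""
    h, w, c = views[0].shape
    # fade weights rise 0 -> 1 across the overlap columns
    alpha = np.linspace(0.0, 1.0, overlap)[None, :, None]
    pano = views[0].astype(np.float64)
    for nxt in views[1:]:
        nxt = nxt.astype(np.float64)
        # blend the right edge of the panorama with the left edge of the next view
        blended = pano[:, -overlap:] * (1 - alpha) + nxt[:, :overlap] * alpha
        pano = np.concatenate([pano[:, :-overlap], blended, nxt[:, overlap:]], axis=1)
    return pano.astype(views[0].dtype)

# three synthetic 120x160 RGB frames with a 40-pixel overlap
frames = [np.full((120, 160, 3), v, dtype=np.uint8) for v in (50, 150, 250)]
pano = stitch_three(frames, overlap=40)
print(pano.shape)  # 3*160 - 2*40 = 400 columns
```

In a deployed system the per-pixel blend would run on the GPU; the cross-fade is just the simplest way to hide the seam between cameras.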
What really captured visitors’ attention, however, was that around each object on the screen – other booths, people passing by – were red rectangles, each carrying a description produced by AI object recognition and tracking. Also, via LiDAR sensor fusion with the cameras, we were determining the distance, or range, of those objects. This was all powered by the combined rugged CPU and GPU boards, communicating via our rugged network switch boards integrated within the Obox.
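One simple way to fuse a camera detection with LiDAR range data is to project the LiDAR returns into the image and take the median range of the points that fall inside a detection’s bounding box. The sketch below is hypothetical – it assumes the LiDAR points have already been projected into the camera frame via extrinsic calibration, which is where the real work lies:

```python
import numpy as np

def range_for_box(box, points_uv, points_range):
    """Estimate range to a detected object as the median range of
    the LiDAR returns projecting inside its bounding box.
    box         -- (x1, y1, x2, y2) in pixels
    points_uv   -- (N, 2) LiDAR points projected into the image
    points_range-- (N,) corresponding ranges in metres
    Hypothetical sketch: assumes projection/calibration is done."""
    x1, y1, x2, y2 = box
    inside = ((points_uv[:, 0] >= x1) & (points_uv[:, 0] <= x2) &
              (points_uv[:, 1] >= y1) & (points_uv[:, 1] <= y2))
    if not inside.any():
        return None  # no LiDAR returns on this object
    return float(np.median(points_range[inside]))

# two returns land inside the box, one far point lands outside
uv = np.array([[10, 10], [12, 11], [200, 50]])
rng = np.array([5.0, 5.2, 30.0])
r = range_for_box((0, 0, 20, 20), uv, rng)
print(r)  # median of the two in-box returns
```

The median is used rather than the mean so a stray return from the background behind the object does not skew the estimate.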
The Obox platform brings the Abaco story together, demonstrating our high performance computing capabilities with GPUs and SBCs all tied together with our latest 40GbE-capable network switch. Some people questioned whether it could be applied to RF signal processing for radar and SIGINT – and the answer was an emphatic “yes”: the box can accommodate our latest 3U VPX RFSoC and FPGA offerings, and the GPUs provide the TeraFLOPS of processing required for these applications. The Obox provides a powerful, fully integrated processing platform that supports the full autonomy software stack.
The key thing is: visitors immediately understood the “autonomy in a box” concept and message. Perhaps even better: they could easily make the connection between what we were showing and the kind of capabilities an autonomous military platform would require.
Returning to our demo: people were intrigued as to why the AI algorithm running on the Obox classified passers-by as only ‘99% person’. I explained that the algorithm classifies objects based on probability – so it can never be 100% certain.
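There is a simple mathematical reason for this. Classifiers of this kind typically convert their raw scores to probabilities with a softmax function, and because the exponential is strictly positive, every class always keeps a nonzero share – so no class can ever reach exactly 100%. A minimal sketch (the three class names are made up for illustration):

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores (logits) to class probabilities.
    exp() is strictly positive, so every class gets a nonzero
    probability and none can reach exactly 1.0."""
    z = np.exp(logits - logits.max())  # shift for numerical stability
    return z / z.sum()

# even a very confident 'person' score stays just below 100%
probs = softmax(np.array([9.0, 1.0, 0.5]))  # person, car, bike (hypothetical)
print(f"{probs[0]:.1%}")  # prints 99.9%
```

Display code then usually rounds the result, which is why the label on screen reads ‘99%’ rather than ‘99.9461…%’.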
“What do I need to do to get it to 100%?” asked one visitor, “Take my clothes off?” “That has not been tested,” I swiftly replied. Once I get back to the office, I’ll check with the engineers if my reply was correct…
After three tiring but thoroughly engaging and worthwhile days at what turned out to be a very successful show for us – a show that demonstrated conclusively that, with our new autonomy platform, we have a winner on our hands – I’m looking forward to some days of reflection and planning the next stage of work on the Obox.
Fully clothed, of course.