Ahead of the European MEMS & Sensors and Imaging & Sensors Summits, SEMI's Serena Brischetto met with Xperi Corporation senior vice president of engineering Petronel Bigioi to talk about infrastructure and related technology requirements to design, train and implement deep learning in revolutionary architectures; inference engines and their scalability for various markets; encrypted networks and IP protection; and how they all can come together in an advanced AI-enabled imaging chip.
Q: Which infrastructure and other technology changes are required to design, train and implement deep learning?
Bigioi: AI-based problem solving has become a highly iterative process of trial and error. There is an acute need for more images with ground-truth data, and for more testing to identify the specific corner cases where AI does not perform well, so those cases can be targeted for improvement: augmenting some of the training sets and trying again, or modifying the network architecture and trying again. So the need for infrastructure to support AI is exploding.
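The augment-and-retry loop around a known corner case can be sketched as follows. This is a toy grayscale example, purely illustrative, and the `augment` helper is an invented name, not part of any Xperi pipeline:

```python
import random

def augment(image, seed=None):
    """Generate simple variants of a 2D grayscale image (a list of pixel
    rows) to enlarge a training set around a known failure case."""
    rng = random.Random(seed)
    variants = []
    # Horizontal flip: reverse each row.
    variants.append([row[::-1] for row in image])
    # Vertical flip: reverse the order of the rows.
    variants.append(image[::-1])
    # Brightness jitter, clamped to the 8-bit pixel range.
    delta = rng.randint(-20, 20)
    variants.append([[max(0, min(255, p + delta)) for p in row]
                     for row in image])
    return variants

img = [[10, 200], [30, 40]]
for v in augment(img, seed=0):
    print(v)
```

Each variant would be added to the training set, the network retrained, and the corner case retested, which is exactly the loop that drives the infrastructure demand described above.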
The long list of requirements includes much more computation; different types of capture (3D models and synthetic training-set generation with ground truth); and more real-life 2D data with (automatic or semi-automatic) annotation for testing and validation. Once the resulting solution works, further optimizations in areas such as data-type simplification, compression and sparsity are needed to make it work at the edge. These optimizations depend on the capabilities of the inference engines at the edge. The upshot is that more infrastructure is required.
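Two of those optimizations can be sketched in a few lines: symmetric int8 quantization (data-type simplification) and magnitude pruning (one way to introduce sparsity). This is a generic illustration of the techniques, not Xperi's method:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric int8 quantization: map float weights to [-127, 127]
    integer codes plus one shared scale factor."""
    scale = max(np.abs(weights).max() / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def prune(weights, threshold=0.05):
    """Magnitude pruning: zero out small weights so a sparse storage
    format or sparse compute path can skip them entirely."""
    return np.where(np.abs(weights) < threshold, 0.0, weights)

w = np.array([0.8, -0.02, 0.3, -0.5])
q, s = quantize_int8(w)
print(q)          # int8 codes (4x smaller than float32)
print(q * s)      # dequantized approximation of w
print(prune(w))   # the -0.02 weight is zeroed
```

Whether int8 arithmetic and sparse weights actually pay off depends on the edge inference engine, which is the dependency the answer above points out.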
Q: How well protected are deep learning networks once they are deployed on devices?
Bigioi: For some rather strange reason, nobody seems to be very worried about deploying their network solutions as a binary image, stored somewhere in flash in the clear, without any protection against reverse engineering and intellectual property theft. There is some protection for biometrics technologies, which in some cases use a form of trusted execution environment, but that is in the context of inference done via general-purpose CPUs/DSPs (e.g. Qualcomm, ARM). Even then, the networks are still stored in the clear somewhere in the device's flash. I think this trend will continue for a while, until a security-breach horror story strikes.
Q: How does Xperi plan to make AI and deep learning applications more secure?
Bigioi: Our CNN inference cores are capable of real-time inference on encrypted networks stored in the main memory. That means the network weights and topology are never available in the clear in the main memory: they stay encrypted there and are decrypted on the fly inside the inference cores.
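The idea can be illustrated with a toy symmetric stream cipher. Xperi's actual hardware cipher and key handling are not disclosed in this interview, so the XOR keystream and every name below are placeholders for the real scheme:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 of key + counter, in counter mode.
    A stand-in for whatever cipher the inference core implements."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

key = b"device-unique-key"          # placeholder key material
weights = bytes(range(16))          # pretend these are network weights
stored = encrypt(weights, key)      # what sits in flash / main memory
# Inside the inference core: decrypt a block on the fly, use it, discard it.
tile = decrypt(stored, key)
print(tile == weights, stored != weights)
```

The point is the data flow, not the cipher: ciphertext in DRAM and flash, plaintext only transiently inside the compute unit.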
Q: What do you expect from this year’s European Imaging & Sensors Summit in Grenoble and why do you recommend attending?
Bigioi: Differentiation in the future will come from solutions designed across multiple disciplines (optics, sensors, computation units), and attending the summit in Grenoble will provide insights into disciplines connected and complementary to imaging. Take, for instance, the design of our PCNN cores. The biggest challenge to solve for AI at the edge was the memory bandwidth problem: state-of-the-art deep neural networks create an incredible amount of intermediate (layer) data on the way to generating their outputs or decisions. Our PCNN cores were designed to optimize main memory bandwidth with the knowledge that we have highly advanced silicon-to-silicon interconnect technology at our disposal (not an obvious connection between those apparently very different disciplines).
The freedom to run hundreds of thousands of wires per square millimeter between two silicon dies, even dies manufactured in different processes, allowed our designers to place local memories very close to the computation units, with virtually infinite bandwidth. Those memories can be dimensioned and instantiated depending on whether the customer chooses a 2D chip design (traditional, with some AI capabilities, limited by DDR bandwidth) or a 3D chip design (enabling high-performance, low-power AI that requires very little DDR access).
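A back-of-the-envelope calculation shows why the intermediate-layer data dominates bandwidth. All numbers below are illustrative assumptions (a mid-sized CNN layer at video rate), not Xperi figures:

```python
def layer_activation_bytes(h, w, channels, bytes_per_elem=1):
    """Size of one layer's output feature maps."""
    return h * w * channels * bytes_per_elem

# Assumed shape of a mid-network layer: 56x56 spatial, 256 channels, int8.
acts = layer_activation_bytes(56, 56, 256)   # ~0.8 MB per layer
layers, fps = 50, 30                         # assumed depth and frame rate

# If every intermediate layer is written to DDR and read back (x2):
ddr_traffic = acts * 2 * layers * fps        # bytes per second
print(f"DDR traffic: {ddr_traffic / 1e9:.1f} GB/s")

# Keeping activations in local memories next to the compute units removes
# that round trip, so DDR is touched mainly for network inputs and outputs.
```

Even these modest assumptions produce a multi-GB/s DDR load from activations alone, which is the traffic the stacked-die local memories are meant to absorb.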
Serena Brischetto is a marketing and communications manager at SEMI Europe.