Date: Thursday, May 27
Start Time: 12:00 pm
End Time: 1:00 pm
The demos are a quick, easy way to meet top building-block technology suppliers and evaluate their latest processors, software, development tools and more. You may ask, “How does that work online?” Over the past year, we’ve run a bunch of experiments and found a format that works very well. Each demo will be a 5-7 minute presentation followed by live Q&A with the company’s specialists, allowing you to see up to four demos in one hour! Demos will be available on each of the four days of the conference. And if you’re interested in learning more about a company and its technology, you can schedule a 1:1 meeting with its representatives right through the event platform.
To view some sample demos and see who’s demoing what, please visit our Demo Directory!
On Thursday, May 27, demos start at 12:00, 12:15, 12:30 and 12:45 pm PT.
Platinum Sponsor
Hands-On Demo: Low-Compute Image Classification with a Himax Monochrome Camera
The Himax HM0360 is an ultra-low-power VGA monochrome camera designed for energy-efficient smart vision applications. This demonstration will take attendees through the process of creating and running an image classification model with a dataset of grayscale pictures using Edge Impulse Studio and the Himax WE-I Plus board. (Representative: Aurelien Lequertier, User Success Engineer)
Hands-On Demo: Accurate Digit Recognition with Arduino Portenta
Digit recognition using computer vision is desirable in many application and market areas, such as grocery retail, manufacturing, utility metering, and administration. This demonstration will dive into an implementation of a digit recognition system using Edge Impulse and the Arduino Portenta H7 + Vision Shield, which can remain in the field for years on battery power alone while keeping the cost of ownership low. (Representative: Zin Kyaw, Senior User Success Engineer)
Hands-On Demo: High-Speed Object Detection with the Jetson Nano
In this demonstration, you'll learn how to utilize the power of Linux devices for object detection simply and effectively with Edge Impulse. It's no longer necessary to be a machine learning expert in order to identify objects in your environment. (Representative: Jenny Plunkett, User Success Engineer)
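For a feel of what running a model on a Linux device such as the Jetson Nano involves, here is a minimal sketch using the Edge Impulse Linux Python SDK. The model filename, camera index and result fields follow the SDK's published examples and should be treated as assumptions; this is not the demo's actual code.

```python
# Sketch only: object detection on a Linux device with the Edge Impulse Linux Python SDK.
# "modelfile.eim" is a placeholder for a model exported from Edge Impulse Studio.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"

with ImageImpulseRunner(MODEL_PATH) as runner:
    info = runner.init()
    print("Loaded model:", info["project"]["name"])

    cap = cv2.VideoCapture(0)            # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # The runner crops/resizes the frame to the model's input size and returns features.
        features, cropped = runner.get_features_from_image(rgb)
        result = runner.classify(features)
        for bb in result["result"].get("bounding_boxes", []):
            print("%s (%.2f) at x=%d y=%d w=%d h=%d"
                  % (bb["label"], bb["value"], bb["x"], bb["y"], bb["width"], bb["height"]))
    cap.release()
```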
Hands-On Demo: Effective Image Classification Solution Under $5
The ESP32-CAM, famous for its ultra-low price, extensive capabilities and energy efficiency, is widely used in affordable IoT solutions. This demonstration will show how to open up new application fields by boosting this microcontroller with intelligent capabilities using Edge Impulse. It's no longer necessary to be a machine learning expert in order to classify images using transfer learning techniques. (Representative: Louis Moreau, User Success Engineer)
Write Once, Run Everywhere with OpenVINO
The AI ecosystem is fragmented; many solutions require specialized hardware or specific frameworks, libraries, APIs or tools that may conflict with your current development environment. With the OpenVINO Toolkit and the OpenVINO Notebooks, we've unified the experience and underlying technology so that you can easily run, deploy and test your AI prototypes in less than 10 minutes. In this demo session, we will show you how, and take you behind the scenes. (Representative: Raymond Lo, OpenVINO Software Evangelist)
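As a rough illustration of the "write once, run everywhere" idea (not taken from the session itself), the sketch below loads an OpenVINO IR model with the 2021-era Inference Engine Python API and runs it on a CPU; changing device_name is all it takes to target other Intel hardware. The model and image paths are placeholders.

```python
# Minimal sketch, assuming an IR model (model.xml / model.bin) produced by the Model Optimizer.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")   # e.g., "GPU" or "MYRIAD" on other targets

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
n, c, h, w = net.input_info[input_name].input_data.shape

image = cv2.imread("input.jpg")                        # placeholder input image
blob = cv2.resize(image, (w, h)).transpose(2, 0, 1)    # HWC -> CHW
blob = np.expand_dims(blob, axis=0)                    # add batch dimension -> NCHW

result = exec_net.infer(inputs={input_name: blob})
print("Output tensor shape:", result[output_name].shape)
```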
Intel DevCloud for the Edge
Learn how to run edge applications in minutes using Intel DevCloud for the Edge, a remote development environment accessible in your web browser. We'll show you how you can access pre-built samples, development resources, the latest version of the Intel® Distribution of OpenVINO™ Toolkit, and a suite of Intel hardware. (Representative: Monique Jones, Technical Product Manager, Intel DevCloud for the Edge)
Accelerate the Development of Edge Intelligence Solutions with Intel Edge Software Hub
This demonstration covers the importance and challenges of connecting devices at the edge, and how the Intel Edge Software Hub can help you deploy one-click applications for various use cases. We will show several pre-validated reference implementations and explain the software packages Intel provides for a multitude of use cases. (Representative: Chen Su, Product Marketing Engineer)
Gold Sponsor
Real-Time Object Tracking with OpenMV
This session will demonstrate how to use MicroPython, along with a quantized TensorFlow Lite for Microcontrollers model, to run a cup detection program that keeps track of the number of cups in the image. You'll learn how to create a dataset, train a model and deploy the neural network model onto the OpenMV H7 board. The demo first applies classical image processing to each frame to find blobs and draw bounding boxes based on the blobs' colors, shapes, widths and heights, then passes these interim results to the model to count the cups. (Representative: Carlo Grisafi, IoT Developer Advocate)
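To make the blob-then-classify pipeline concrete, here is a rough MicroPython sketch of this kind of approach on an OpenMV H7. The model filename, LAB color thresholds and class index are placeholders, not the presenter's actual code.

```python
# MicroPython sketch for an OpenMV H7 camera (runs from the OpenMV IDE).
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

MODEL = "cup_model.tflite"                     # placeholder: quantized TFLite Micro model on flash
CUP_THRESHOLDS = [(30, 80, 10, 60, 10, 60)]    # placeholder LAB thresholds for cup-colored blobs

while True:
    img = sensor.snapshot()
    # Classical step: find candidate blobs by color/shape before invoking the network.
    blobs = img.find_blobs(CUP_THRESHOLDS, pixels_threshold=200, area_threshold=200, merge=True)
    cups = 0
    for blob in blobs:
        # Classify just the blob's bounding box with the TFLite Micro model.
        for obj in tf.classify(MODEL, img, roi=blob.rect()):
            scores = obj.output()
            if scores[1] > 0.7:                # assumption: index 1 is the "cup" class
                cups += 1
                img.draw_rectangle(blob.rect())
    print("cups:", cups)
```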
Moving the Gym to Your Living Room: Body Pose Tracking on Your Smart TV
The lifestyle changes we have all experienced over the past year have brought a big focus on home fitness. At the same time, we have seen many advances in deep learning technology, along with increased camera presence in smart TVs and other home devices. In this session, we will demonstrate our fitness application performing real-time body pose estimation and tracking on Arm CPUs and GPUs. We will share some excellent results using Google’s BlazePose model, and discuss the challenges and opportunities for creating immersive experiences on Arm platforms. (Representative: Mina Dimova, Staff Software Engineer)
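For readers who want to experiment with BlazePose themselves, the sketch below uses Google's publicly available MediaPipe implementation on a webcam. It is an illustration of the underlying model only, not Arm's demo application or its CPU/GPU-optimized pipeline.

```python
# Illustration only: BlazePose via MediaPipe on a webcam feed.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_pose.Pose(model_complexity=1,
                  min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            mp_draw.draw_landmarks(frame, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
        cv2.imshow("BlazePose", frame)
        if cv2.waitKey(1) & 0xFF == 27:        # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```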
An Open Source Approach To Cloud Native Vision Workload Deployment On Arm
This demonstration will showcase how to use open-source lightweight Kubernetes-based orchestration for deploying and managing vision and machine learning workloads on edge devices. (Representative: Umair Hashim, Principal Solutions Engineer)
Color-Based Object Detection System for Visual AI Applications
BASF will demonstrate an object detection system consisting of cameras, LED lighting and colors designed to serve as product identifiers. The system can be used to improve object detection performance for use cases involving manufactured consumer items, as well as to label neural network training image sets. (Representative: Ian Childers, Head of Technology)
Sensor Fusion + Semantic Segmentation Processing on Blaize Pathfinder P1600
This session will demonstrate sensor fusion and semantic segmentation in a use case fusing data received from full HD cameras and from lidar and radar sensors on a standalone embedded system, the Blaize Pathfinder P1600. (Representative: Doug Watt, Director, Field Applications Engineering)
Multi-Camera Object Detection on the Blaize Xplorer X1600E
This demonstration will show multi-camera object detection running on the Blaize Xplorer X1600E. The use case processes 5 independent HD video streams using 5 independent YOLOv3 networks with less than 100 ms latency, for true real-time processing at the edge. (Representative: Shawn Holiday, Sr. Director, Customer Success)
Blaize Picasso SDK: People & Pose Detection, Key Point Tracking
This session demonstrates a high-resolution, multi-neural-network, multi-function graph-native application built on the Blaize Picasso SDK, with the Blaize Xplorer X1600P as the host system. (Representative: Rajesh Anantharaman, Sr. Director of Products)
Cadence Demonstration of Vision and AI Applications on Tensilica DSP-based Platforms
In this session, Amol Borkar, Senior Product Manager and Marketing of Vision and AI DSPs at Cadence, will demonstrate real-time examples of driver monitoring, automotive perception, 6DoF SLAM, and semantic segmentation, all running on Tensilica processors and DSPs. (Representative: Amol Borkar, Senior Product Manager and Marketing of Vision and AI DSPs)
Leveraging the High Throughput of the Hailo-8 AI Processor to Improve Large-Scale Object Detection on the Edge
The high throughput delivered by the Hailo-8 AI processor can be leveraged not only to process multiple cameras at the same time but also to improve detection accuracy in high-resolution video. Tiling is a dedicated application that allows developers to break down high-resolution images into multiple input streams, thus making small and faraway objects detectable even in very busy and dynamic environments. Hailo's out-of-the-box TAPPAS applications are a great way to hit the ground running, making the development and deployment of models easier and accelerating time-to-market. (Representative: Yaniv Sulkes, VP Product Marketing & Automotive)
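The tiling idea itself is straightforward to sketch. The snippet below is a generic, framework-agnostic illustration rather than the Hailo TAPPAS implementation: it breaks a high-resolution frame into overlapping tiles, runs a detector on each tile, and maps the boxes back into full-frame coordinates. Here `detect` is a placeholder for any single-image detector.

```python
# Conceptual sketch of tiling for small-object detection (not the TAPPAS implementation).
# `detect(tile)` stands in for any detector returning (x, y, w, h, score, label) boxes
# in tile-local coordinates; `frame` is assumed to be a NumPy image array.
def make_tiles(width, height, tile_w=640, tile_h=640, overlap=0.2):
    """Yield (x0, y0) origins of overlapping tiles covering a width x height frame."""
    step_x = int(tile_w * (1 - overlap))
    step_y = int(tile_h * (1 - overlap))
    for y0 in range(0, max(height - tile_h, 0) + 1, step_y):
        for x0 in range(0, max(width - tile_w, 0) + 1, step_x):
            yield x0, y0

def detect_tiled(frame, detect, tile_w=640, tile_h=640):
    h, w = frame.shape[:2]
    detections = []
    for x0, y0 in make_tiles(w, h, tile_w, tile_h):
        tile = frame[y0:y0 + tile_h, x0:x0 + tile_w]
        for (x, y, bw, bh, score, label) in detect(tile):
            # Shift tile-local boxes back into full-frame coordinates.
            detections.append((x + x0, y + y0, bw, bh, score, label))
    # In practice, follow with non-maximum suppression across tiles to merge duplicates.
    return detections
```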
State-of-the-Art Object Detection on the Edge with Hailo-8 AI Processor
YoloV5m is a high-end neural network model for object detection, a fundamental computer vision task. The Hailo-8 AI chip can run this model at real-time frame rates with a high-resolution video input, providing a powerful solution for surveillance applications in Smart City, Smart Retail, Enterprise and other markets. The Hailo-8 is supported by a dedicated out-of-the-box application included in the Hailo TAPPAS toolkit, designed to accelerate time-to-market and make it easy to develop and deploy high-performance edge applications. (Representative: Nadav Eden, Technical Account Manager)
Lattice Semiconductor's CrossLink-NX: Human Presence and Counting
This demonstration showcases a development board equipped with a CrossLink-NX FPGA, based on the Lattice Semiconductor Nexus platform. The FPGA is programmed to perform human presence detection and counting, common AI use cases found in many applications. (Representative: Hussein Osman, Product Marketing Manager)
Helion ISP on ECP5
Lattice Semiconductor has partnered with Helion-Vision to bring its industry-leading IONOS image signal processing (ISP) IP portfolio to the Lattice ECP5 FPGA. This demonstration showcases the quality of Helion-Vision's image processing algorithms, along with showing optional features such as image overlay. (Representative: Mark Hoopes, Director, Product Marketing)
Ignitarium AI Unit Inspection
This demonstration was developed by Lattice Semiconductor partner Ignitarium, experts in applied AI technology. It shows how you can implement efficient machine learning/AI algorithms on the Lattice ECP5 FPGA to perform complex tasks. In this case, the algorithm looks for defective products as they pass along a conveyor belt, with higher speed and better quality of results than any traditional or human-powered approach. (Representative: Hussein Osman, Product Marketing Manager)
Spatial AI and CV for Human Machine Safety
In situations where people need to interact with potentially dangerous equipment, it is often not possible to physically guard or “dumb-guard” the equipment (with sonar or a laser, for example). In such cases, you need to discern not just the presence of an object in the equipment’s “red zone,” but also what that object is; some things (wooden boards with a saw, for example) should be there, while others (human limbs, for example) shouldn't be! Embedded spatial artificial intelligence and computer vision, as demonstrated in this session, enable such machines to intelligently perceive objects so that they can smart-guard to protect people without adversely impacting normal operation. (Representative: Shawn McLaughlin, Vice President)
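As a toy illustration of the smart-guard decision described above (not the presenters' actual implementation), the snippet below only raises a stop when a detected object is both inside the red zone and of a class that doesn't belong there. The zone coordinates, class names and example detections are placeholders.

```python
# Toy sketch: a detection triggers a stop only if the object's class is disallowed
# AND its bounding box overlaps the machine's red zone. All values are placeholders.
RED_ZONE = (800, 400, 1200, 900)            # x0, y0, x1, y1 in image coordinates
ALLOWED_IN_ZONE = {"board", "workpiece"}    # classes that are supposed to be there

def overlaps(box, zone):
    bx0, by0, bx1, by1 = box
    zx0, zy0, zx1, zy1 = zone
    return bx0 < zx1 and bx1 > zx0 and by0 < zy1 and by1 > zy0

def should_stop(detections):
    """detections: iterable of (label, (x0, y0, x1, y1)) in image coordinates."""
    return any(label not in ALLOWED_IN_ZONE and overlaps(box, RED_ZONE)
               for label, box in detections)

print(should_stop([("hand", (1000, 500, 1100, 650))]))   # True: stop the machine
print(should_stop([("board", (900, 450, 1150, 700))]))   # False: normal operation
```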
Neural-Inference-Controlled Crop/Zoom and H.265 Encode
In this demonstration, we will show how to use a neural network to guide which portion of a high-resolution (12MP) image sensor is output to a 2MP (1920x1080) H.265-encoded video stream. This capability allows a neural network or computer vision algorithm (e.g., motion estimation) to determine where the action is in a given scene and then zoom (6x, losslessly) into that action, H.265-encoding the resulting 1920x1080 region of interest. (Representative: Martin Peterlin, Chief Technology Officer)
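To make the crop arithmetic concrete, here is a small sketch under assumed numbers (a 4056x3040 12MP sensor and a 1920x1080 output window) that centers the output window on a detection and clamps it to the sensor bounds. The detection itself and the H.265 encoding are outside its scope.

```python
# Sketch of the ROI arithmetic only: center a fixed 1920x1080 crop on a detection
# inside an assumed 4056x3040 (12MP) frame, clamped so the window stays on-sensor.
def roi_from_detection(cx, cy, sensor_w=4056, sensor_h=3040, out_w=1920, out_h=1080):
    """Return (x0, y0, out_w, out_h) for a crop centered on detection center (cx, cy)."""
    x0 = int(round(cx - out_w / 2))
    y0 = int(round(cy - out_h / 2))
    x0 = max(0, min(x0, sensor_w - out_w))   # clamp horizontally
    y0 = max(0, min(y0, sensor_h - out_h))   # clamp vertically
    return x0, y0, out_w, out_h

# Example: a detection near the top-left corner yields a crop clamped to (0, 0).
print(roi_from_detection(300, 200))          # -> (0, 0, 1920, 1080)
```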
From-Behind Collision Detection for People Who Ride Bikes
In this demonstration, we'll show an example of localizing and estimating the trajectories of vehicles behind you to determine if they are on a trajectory to hit you. Too many people have been struck from behind by distracted drivers. Let's use embedded, performant spatial artificial intelligence and computer vision to solve this problem! (Representative: Brandon Gilles, CEO)
Efficient Driver Monitoring for Automotive Cameras
In this demonstration, Nextchip, in collaboration with PathPartner Technology, will present a driver monitoring system (DMS) solution based on its ADAS SoC, APACHE4. The solution includes both distraction and drowsiness detection algorithms, ported to the CEVA XM-4 DSP (delivering 77 GMAC of performance) in the APACHE4. Thanks to the power of a well-designed platform and well-optimized algorithms, the solution delivers both high frame rates and high accuracy. (Representative: Jessie Lee, Technical Marketer (GM))
Nextchip’s Imaging Signal Processor: Supported Functions
In this session, Nextchip will showcase its imaging signal processor (ISP) expertise by demonstrating various ISP features for automotive applications. Nextchip’s ISP pipeline is developed in full by the company, leading to improved tuning capabilities. Nextchip will demonstrate the following algorithms and functions running on the ISP: high dynamic range (HDR), LED flicker mitigation (LFM), auto-calibration, and dewarp. (Representative: Jesse Lee Kim, Technical Marketer (GM))
Enabling Smart Automotive Camera ADAS SoCs
In this demonstration, Nextchip will present its image processors for the edge, APACHE4 and APACHE5, which are ADAS SoCs empowering cameras with various “smart” functions. Because the products are designed to be located in a vehicle’s camera module, they have very low power consumption and are very small. The demonstration will show the SoCs’ various use cases, along with the well-organized APACHE5 SDK. (Representative: James Kim, Technical Marketer (Director))
Pick a Card, Any Card: Using ML-Enhanced Vision for ID and Label Tracking
Security cameras can play a valuable role in tracking the whereabouts of goods and personnel within facilities, but to do so they need to be able to reliably read information on ID cards and labels. Using a deck of standard playing cards, Perceive will demonstrate how the combination of a high-definition image sensor and the company’s Ergo edge inference processor can read information at distances beyond the capability of the human eye and expand the role of computer vision in applications such as inventory control and access management. (Representative: Kathy Cook, VP, Business Development)
Smart Video Conferencing on the Edge
Enhancements in AI processing capabilities on edge devices are enabling a richer video conferencing experience. These capabilities are leveraged in use cases ranging from biometrics, with face and voice identification providing automatic access to calendar and video conferencing applications, to smarter framing of the scene in front of the camera and language understanding for auto-subtitling, all on the edge. The Synaptics VS680 and VS780 SoCs were designed with smart video conferencing use cases in mind. They are highly integrated to enable cost-optimized video conferencing devices in different form factors, and they provide the needed AI compute. In this demonstration, the VS680 will run smart framing, where the captured scene in front of the camera is dynamically adjusted to show the region of interest using machine learning. (Representatives: Zafer Diab, Director of Product Marketing & Xin Li, FAE Engineer)
Real-Time Video Post-processing Using Machine Learning
Enhancements in AI processing capabilities on edge devices are enabling significant improvements in video scaling and post-processing compared to what had been possible with the traditional scalers integrated in SoCs. These enhancements allow scaling and post-processing to be theme-based: scaling for sports content can be adjusted for high-motion material, for example, while scaling for video conferencing can be optimized for largely static scenes with moving faces. Synaptics SoCs integrate an internally developed machine learning engine called QDEO.ai, which performs super-resolution scaling that takes the theme of the video into consideration. This demonstration will show a side-by-side comparison of video scaled by the traditional hardware scaler and by QDEO.ai. The quality enhancements are accentuated on lower-bitrate input video, such as in videoconferencing. (Representatives: Zafer Diab, Director of Product Marketing & Xin Li, FAE Engineer)
From Neural Network to Edge with Synaptics
Synaptics will present an overview of the Katana Edge AI processor and tensaicc, the compiler that Eta Compute is developing for use with Katana. After we introduce Katana's novel multicore architecture, you'll see a live demonstration of how tensaicc compiles a neural network and generates power-, cycle- and memory-optimized code that takes advantage of the architecture. (Representative: Vineet Ganju, VP/GM Audio Business Unit)
Simultaneous Localization and Mapping (SLAM) Acceleration on DesignWare ARC EV7x Processors
Simultaneous Localization and Mapping (SLAM) creates and updates a map of an unknown environment while at the same time keeping track of an agent's location within it. SLAM is a key component in a variety of systems, such as autonomous vehicles, robotics, and augmented and virtual reality. This demonstration shows how to accelerate a SLAM software stack by offloading some of the processing to the ARC EV7x processor, as well as how adding an extra core to parallelize the algorithms can further increase the acceleration. We’ll also show how, with an EV7x-based system, you can incorporate a deep neural network engine to expand the system's intelligence. (Representative: Liliya Tazieva, Software Engineer)
SRGAN Super Resolution on DesignWare ARC EV7x Processors
Image super-resolution techniques reconstruct a higher-resolution image from a lower-resolution one. Although this can be done with classical vision algorithms, the results are inferior to what neural-network-based solutions can now offer. In this demo, we show how a Generative Adversarial Network can be used to intelligently infer the missing pixel data and generate a high-quality high-resolution image. The demo will run on the ARC EV7x processor and showcase the Deep Neural Network accelerator engine. (Representative: Liliya Tazieva, Software Engineer)
X-Ray Classification Solution for COVID-19 and Pneumonia Detection
Spline.AI is collaborating with Amazon Web Services and Xilinx to deliver an open-source, open-model X-ray classification solution for COVID-19 and pneumonia detection. This demonstration will show the model deployed on the Xilinx Zynq UltraScale+™ MPSoC-based ZCU104 evaluation kit, leveraging the Xilinx deep learning processor unit (DPU), a soft-IP tensor accelerator powerful enough to run a variety of neural networks, including classification and detection of diseases. (Representative: Quenton Hall, AI Architect)
Out-of-the-Box with the Kria KV260 Starter Kit: Up and Running in Under an Hour
This demonstration will provide a detailed look at the Kria KV260 Vision AI Starter Kit and its companion basic accessory pack. We’ll show you just how quickly and easily you can get our Smart Camera accelerated application up and running, with no FPGA experience required. (Representative: Karan Kantharia, Product Line Manager)
Visual Machine Learning and Natural Language Processing Fusion on the Xilinx Kria SOM
This demonstration shows the new Kria K26 SOM running both vision machine learning (ML) and natural language processing on the same Xilinx Zynq-based platform. This solution showcases the dynamic switching capabilities of the Xilinx deep learning processor unit and integrates audio ML keyword-spotting to control the video display. (Representative: Girish Malipeddi, Director of Video and Imaging Solutions)
Silver Sponsor
Real-Time Object Tracking with OpenMV
This session will demonstrate how to use MicroPython, along with a quantized TensorFlow Lite Microcontroller model, to run a cup detection program that keeps track of the number of cups in the image. You'll learn how to create a dataset, train a model and deploy the neural network model onto the OpenMV H7 board. The demo tracks the number of cups in an image by first using classical image processing methods on each frame prior to passing the interim results to the model to find blobs and make bounding boxes based on the blobs' colors, shapes, widths, and heights. (Representatives: Carlo Grisafi, IoT Developer Advocate)
Moving the Gym to Your Living Room: Body Pose Tracking on Your Smart TV
The lifestyle changes we have all experienced over the past year have brought a big focus on home fitness. At the same time, we have seen many advances in deep learning technology, along with increased camera presence in smart TVs and other home devices. In this session, we will demonstrate our fitness application performing real-time body pose estimation and tracking on Arm CPUs and GPUs. We will share some excellent results using Google’s BlazePose model, and discuss the challenges and opportunities for creating immersive experiences on Arm platforms. (Representatives: Mina Dimova, Staff Software Engineer)
An Open Source Approach To Cloud Native Vision Workload Deployment On Arm
This demonstration will showcase how to use open-source lightweight Kubernetes-based orchestration for deploying and managing vision and machine learning workloads on edge devices. (Representatives: Umair Hashim, Principal Solutions Engineer)
Color-Based Object Detection System for Visual AI Applications
BASF will demonstrate an object detection system consisting of cameras, LED lighting and colors designed to serve as product identifiers. The system can be used to improve the object detection performance for use cases involving manufactured consumer items as well as to label neural network training image sets. (Representatives: Ian Childers, Head of Technology)
Sensor Fusion + Semantic Segmentation Processing on Blaize Pathfinder P1600
This session will demonstrate sensor fusion and semantic segmentation in a use case fusing data received from full HD cameras and from lidar and radar sensors on a standalone embedded system, the Blaize Pathfinder P1600. (Representatives: Doug Watt, Director Field Applications Engineering)
Multi-Camera Object Detection on the Blaize Xplorer X1600E
This demonstration will show multi-camera object detection running on the Blaize Xplorer X1600E. The use case processes 5 independent HD video streams using 5 independent YoloV3 networks with less than 100 ms latency, for true real-time processing at the edge. (Representatives: Shawn Holiday, Sr. Director, Customer Success)
Blaize Picasso SDK: People & Pose Detection, Key Point Tracking
This session demonstrates a high resolution, multi-neural network and multi-function graph-native application built on the Blaize Picasso SDK, with the Blaize Xplorer X1600P as the host system. (Representatives: Rajesh Anantharaman, Sr. Director of Products)
Cadence Demonstration of Vision and AI Applications on Tensilica DSP-based Platforms
In this session, Amol Borkar, Senior Product Manager and Marketing of Vision and AI DSPs at Cadence, will demonstrate real-time examples of driver monitoring, automotive perception, 6DoF SLAM, and semantic segmentation, all running on Tensilica processors and DSPs. (Representatives: Amol Borkar, Senior Product Manager and Marketing of Vision and AI DSPs)
Leveraging the High Throughput of the Hailo-8 AI Processor to Improve Large-Scale Object Detection on the Edge
The high throughput delivered by the Hailo-8 AI processor can be leveraged not only to process multiple cameras at the same time but also to improve detection accuracy in high-resolution video. Tiling is a dedicated application that allows developers to break down high-resolution images into multiple input streams, thus making small and faraway objects detectable even in very busy and dynamic environments. Hailo's out-of-the-box TAPPAS applications are a great way to hit the ground running, making the development and deployment of models easier and accelerating time-to-market. (Representative: Yaniv Sulkes, VP Product Marketing & Automotive)
State-of-the-Art Object Detection on the Edge with Hailo-8 AI Processor
YoloV5m is a high-end neural network model for object detection, a fundamental computer vision task. The Hailo-8 AI chip can run this model at real-time frame rates with a high-resolution video input, providing a powerful solution for surveillance applications in Smart City, Smart Retail, Enterprise and other markets. The Hailo-8 is supported by a dedicated out-of-the-box application included in the Hailo TAPPAS toolkit, designed to accelerate time-to-market and make it easy to develop and deploy high-performance edge applications. (Representative: Nadav Eden, Technical Account Manager)
Lattice Semiconductor's CrossLink-NX: Human Presence and Counting
This demonstration showcases a development board equipped with a CrossLink-NX FPGA, based on the Lattice Semiconductor Nexus platform. The FPGA is programmed to perform human presence detection and counting, common AI use cases found in many applications. (Representative: Hussein Osman, Product Marketing Manager)
Helion ISP on ECP5
Lattice Semiconductor has partnered with Helion-Vision to bring its industry-leading IONOS image signal processing (ISP) IP portfolio to the Lattice ECP5 FPGA. This demonstration showcases the quality of Helion-Vision's image processing algorithms, along with showing optional features such as image overlay. (Representative: Mark Hoopes, Director, Product Marketing)
Ignitarium AI Unit inspection
This demonstration was developed by Lattice Semiconductor partner Ignitarium, experts in applied AI technology. It shows how you can implement efficient machine learning/AI algorithms on the Lattice ECP5 FPGA to perform complex tasks. In this case, the algorithm looks for defective products as they pass along a conveyor belt, with higher speed and better quality of results than any traditional or human-powered approach. (Representative: Hussein Osamn, Product Marketing Manager)
Spatial AI and CV for Human Machine Safety
In situations where people need to interact with potentially dangerous equipment, it is often not possible to physically guard or “dumb-guard” (e.g., with sonar or laser, for example) the equipment. In such cases, you need to discern not just the presence of an object in the equipment’s “red zone,” but also what that object is; some things (wooden boards with a saw, for example) should be there, while some things (human limbs, for example) shouldn't be! Embedded spatial artificial intelligence and computer vision, as demonstrated in this session, enables such machines to intelligently perceive objects so that they can smart-guard to protect people while not adversely impacting normal operation. (Representative: Shawn McLaughlin, Vice President)
Neural-Inference-Controlled Crop/Zoom and H.265 Encode
In this demonstration, we will show how to use a neural network to guide what portion of a high-resolution image sensor (12MP) is output to a 2MP (1920x1080) h.265-encoded video stream. This capability allows a neural network or computer vision algorithm (e.g., motion estimation) to guide where the action is in a given scene, and then zoom (6x lossless) into that action, h.265-encoding the resultant 1920x1080 region of interest. (Representative: Martin Peterlin, Chief Technology Officer)
From-Behind Collision Detection for People Who Ride Bikes
In this demonstration, we'll show an example of localizing and estimating the trajectories of vehicles behind you to determine if they are on a trajectory to hit you. Too many people have been struck from behind by distracted drivers. Let's use embedded, performant spatial artificial intelligence and computer vision to solve this problem! (Representative: Brandon Gilles, CEO)
Efficient Driver Monitoring for Automotive Cameras
In this demonstration, Nextchip, in collaboration with PathPartner Technology, will present a driver monitoring system (DMS) solution based on its ADAS SoC, APACHE4. The solution includes both distraction and drowsiness detection algorithms, ported to the CEVA XM-4 DSP (delivering 77 GMAC of performance) in the APACHE4. Thanks to the power of a well-designed platform and well-optimized algorithms, the solution delivers both high frame rates and high accuracy. (Representative: Jessie Lee, Technical Marketer(GM))
Nextchip’s Imaging Signal Processor: Supported Functions
In this session, Nextchip will showcase its imaging signal processor (ISP) expertise by demonstrating various ISP features for automotive applications. Nextchip’s ISP pipeline is developed in full by the company, leading to improved tuning capabilities. Nextchip will demonstrate the following algorithms and functions running on the ISP: high dynamic range (HDR), LED flicker mitigation (LFM), auto-calibration, and dewarp. (Representative: Jesse Lee Kim, Technical Marketer(GM))
Enabling Smart Automotive Camera ADAS SoCs
In this demonstration, Nextchip will present its image processors for the edge, APACHE4 and 5, which are ADAS SoCs empowering cameras with various “smart” functions. Because the products are designed to be located in a vehicle’s camera module, they have very low power consumption and are very small. The demonstration will show the SoCs’ various use cases, along with the well-organized APACHE5 SDK. (Representative: James Kim, Technical Marketer(Director))
Pick a Card, Any Card: Using ML-Enhanced Vision for ID and Label Tracking
Security cameras can play a valuable role in tracking the whereabouts of goods and personnel within facilities, but to do so they need to be able to reliably read information on ID cards and labels. Using a deck of standard playing cards, Perceive will demonstrate how the combination of a high-definition image sensor and the company’s Ergo edge inference processor can read information at distances beyond the capability of the human eye and expand the role of computer vision in applications such as inventory control and access management. (Representative: Kathy Cook, VP, Business Development)
Smart Video Conferencing on the Edge
Enhancements in AI processing capabilities on edge devices are enabling a richer video conferencing experience. These capabilities are leveraged in use cases that range from biometrics and automatic access of calendar and video conferencing applications through face and voice identification to a smarter framing of the scene in front of the camera, and understanding the language for auto-subtitling...all on the edge. The Synaptics VS680 and VS780 SOCs were designed with smart video conferencing use cases in mind. They are highly integrated to implement cost-optimized video conferencing devices in different form factors and boast the needed AI computation. In this demonstration, the VS680 will run smart framing, where the captured scene in front of the camera is dynamically adjusted to show the region of interest using machine learning. (Representatives: Zafer Diab, Director of Product Marketing & Xin Li, FAE Engineer)
Real-Time Video Post-processing Using Machine Learning
Enhancements in AI processing capabilities on edge devices are enabling significant enhancements in video scaling and post-processing versus, compared to what had been possible with traditional scaling integrated in SoCs. These enhancements enable the scaling and post-processing to be theme-based. Scaling for sports content can be adjusted for the high motion content, for example, while scaling for video conferencing can be optimized for the (mostly) static and moving face content. Synaptics SoCs integrate an internally developed machine learning engine called QDEO.ai, which performs super-resolution scaling that takes the theme of the video into consideration. This demonstration will show a side-by-side comparison of scaling using the traditional hardware scaler and the QDEO.ai scaled video. Quality enhancements are accentuated when performed on lower-bitrate input video, such as with videoconferencing. (Representatives: Zafer Diab, Director of Product Marketing & Xin Li, FAE Engineer)
From Neural Network to Edge with Synaptics
Synaptics will present an overview of the Katana Edge AI processor and tensaicc, the compiler that Eta Compute is developing for use with Katana. After we introduce Katana's novel multicore architecture, you'll see a live demonstration of how tensaicc compiles a neural network and generates power-, cycle- and memory-optimized code that takes advantage of the architecture. (Representative: Vineet Ganju, VP/GM Audio Business Unit)
Simultaneous Localization and Mapping (SLAM) Acceleration on DesignWare ARC EV7x Processors
Simultaneous Localization and Mapping (SLAM) creates and updates a map of an unknown environment while at the same time keeping track of an agent's location within it. SLAM is a key component in a variety of systems, such as autonomous vehicles, robotics, and augmented and virtual reality. This demonstration shows how to accelerate a SLAM software stack by offloading some of the processing to the ARC EV7x processor, as well as how adding an extra core to parallelize the algorithms can further increase the acceleration. We’ll also show how, with an EV7x-based system, you can incorporate a deep neural network engine to expand the system's intelligence. (Representative: Liliya Tazieva, Software Engineer)
SRGAN Super Resolution on DesignWare ARC EV7x Processors
Image super-resolution techniques reconstruct a higher-resolution image from a lower-resolution one. Although this can be done with classical vision algorithms, the results are inferior to what neural-network-based solutions can now offer. In this demo, we show how a Generative Adversarial Network can be used to intelligently infer the missing pixel data and generate a high-quality high-resolution image. The demo will run on the ARC EV7x processor and showcase the Deep Neural Network accelerator engine. (Representative: Liliya Tazieva, Software Engineer)
X-Ray Classification Solution for COVID-19 and Pneumonia Detection
Spline.AI is collaborating with Amazon Web Services and Xilinx to deliver an open-source, open-model, X-Ray classification solution for COVID-19 and pneumonia detection. This demonstration will show the model, deployed on the Xilinx Zynq UltraScale+- MPSoC device-based ZCU104 evaluation kit and leveraging the Xilinx deep learning processor unit (DPU), a soft-IP tensor accelerator that is powerful enough to run a variety of neural networks, including classification and detection of diseases. (Representative: Quenton Hall, AI Architect)
Out-of-the-Box with the Kria KV260 Starter Kit: Up and Running in Under an Hour
This demonstration will provide a detailed look at the Kria KV260 Vision AI Starter Kit and its companion basic accessory pack. We’ll show you just how quickly and easily you can get our Smart Camera accelerated application up and running, with no FPGA experience required. (Representative: Karan Kantharia, Product Line Manager)
Visual Machine Learning and Natural Language Processing Fusion on the Xilinx Kria SOM
This demonstration shows the new Kria K26 SOM running both vision machine learning (ML) and natural language processing on the same Xilinx Zynq-based platform. This solution showcases the dynamic switching capabilities of the Xilinx deep learning processor unit and integrates audio ML keyword-spotting to control the video display. (Representative: Girish Malipeddi, Director of Video and Imaging Solutions)
Bronze Sponsor
Real-Time Object Tracking with OpenMV
This session will demonstrate how to use MicroPython, along with a quantized TensorFlow Lite Microcontroller model, to run a cup detection program that keeps track of the number of cups in the image. You'll learn how to create a dataset, train a model and deploy the neural network model onto the OpenMV H7 board. The demo tracks the number of cups in an image by first using classical image processing methods on each frame prior to passing the interim results to the model to find blobs and make bounding boxes based on the blobs' colors, shapes, widths, and heights. (Representatives: Carlo Grisafi, IoT Developer Advocate)
Moving the Gym to Your Living Room: Body Pose Tracking on Your Smart TV
The lifestyle changes we have all experienced over the past year have brought a big focus on home fitness. At the same time, we have seen many advances in deep learning technology, along with increased camera presence in smart TVs and other home devices. In this session, we will demonstrate our fitness application performing real-time body pose estimation and tracking on Arm CPUs and GPUs. We will share some excellent results using Google’s BlazePose model, and discuss the challenges and opportunities for creating immersive experiences on Arm platforms. (Representatives: Mina Dimova, Staff Software Engineer)
An Open Source Approach To Cloud Native Vision Workload Deployment On Arm
This demonstration will showcase how to use open-source lightweight Kubernetes-based orchestration for deploying and managing vision and machine learning workloads on edge devices. (Representatives: Umair Hashim, Principal Solutions Engineer)
Color-Based Object Detection System for Visual AI Applications
BASF will demonstrate an object detection system consisting of cameras, LED lighting and colors designed to serve as product identifiers. The system can be used to improve the object detection performance for use cases involving manufactured consumer items as well as to label neural network training image sets. (Representatives: Ian Childers, Head of Technology)
Sensor Fusion + Semantic Segmentation Processing on Blaize Pathfinder P1600
This session will demonstrate sensor fusion and semantic segmentation in a use case fusing data received from full HD cameras and from lidar and radar sensors on a standalone embedded system, the Blaize Pathfinder P1600. (Representatives: Doug Watt, Director Field Applications Engineering)
Multi-Camera Object Detection on the Blaize Xplorer X1600E
This demonstration will show multi-camera object detection running on the Blaize Xplorer X1600E. The use case processes 5 independent HD video streams using 5 independent YoloV3 networks with less than 100 ms latency, for true real-time processing at the edge. (Representatives: Shawn Holiday, Sr. Director, Customer Success)
Blaize Picasso SDK: People & Pose Detection, Key Point Tracking
This session demonstrates a high resolution, multi-neural network and multi-function graph-native application built on the Blaize Picasso SDK, with the Blaize Xplorer X1600P as the host system. (Representatives: Rajesh Anantharaman, Sr. Director of Products)
Cadence Demonstration of Vision and AI Applications on Tensilica DSP-based Platforms
In this session, Amol Borkar, Senior Product Manager and Marketing of Vision and AI DSPs at Cadence, will demonstrate real-time examples of driver monitoring, automotive perception, 6DoF SLAM, and semantic segmentation, all running on Tensilica processors and DSPs. (Representatives: Amol Borkar, Senior Product Manager and Marketing of Vision and AI DSPs)
Leveraging the High Throughput of the Hailo-8 AI Processor to Improve Large-Scale Object Detection on the Edge
The high throughput delivered by the Hailo-8 AI processor can be leveraged not only to process multiple cameras at the same time but also to improve detection accuracy in high-resolution video. Tiling is a dedicated application that allows developers to break down high-resolution images into multiple input streams, thus making small and faraway objects detectable even in very busy and dynamic environments. Hailo's out-of-the-box TAPPAS applications are a great way to hit the ground running, making the development and deployment of models easier and accelerating time-to-market. (Representative: Yaniv Sulkes, VP Product Marketing & Automotive)
State-of-the-Art Object Detection on the Edge with Hailo-8 AI Processor
YoloV5m is a high-end neural network model for object detection, a fundamental computer vision task. The Hailo-8 AI chip can run this model at real-time frame rates with a high-resolution video input, providing a powerful solution for surveillance applications in Smart City, Smart Retail, Enterprise and other markets. The Hailo-8 is supported by a dedicated out-of-the-box application included in the Hailo TAPPAS toolkit, designed to accelerate time-to-market and make it easy to develop and deploy high-performance edge applications. (Representative: Nadav Eden, Technical Account Manager)
Lattice Semiconductor's CrossLink-NX: Human Presence and Counting
This demonstration showcases a development board equipped with a CrossLink-NX FPGA, based on the Lattice Semiconductor Nexus platform. The FPGA is programmed to perform human presence detection and counting, common AI use cases found in many applications. (Representative: Hussein Osman, Product Marketing Manager)
Helion ISP on ECP5
Lattice Semiconductor has partnered with Helion-Vision to bring its industry-leading IONOS image signal processing (ISP) IP portfolio to the Lattice ECP5 FPGA. This demonstration showcases the quality of Helion-Vision's image processing algorithms, along with showing optional features such as image overlay. (Representative: Mark Hoopes, Director, Product Marketing)
Ignitarium AI Unit inspection
This demonstration was developed by Lattice Semiconductor partner Ignitarium, experts in applied AI technology. It shows how you can implement efficient machine learning/AI algorithms on the Lattice ECP5 FPGA to perform complex tasks. In this case, the algorithm looks for defective products as they pass along a conveyor belt, with higher speed and better quality of results than any traditional or human-powered approach. (Representative: Hussein Osamn, Product Marketing Manager)
Spatial AI and CV for Human Machine Safety
In situations where people need to interact with potentially dangerous equipment, it is often not possible to physically guard or “dumb-guard” (e.g., with sonar or laser, for example) the equipment. In such cases, you need to discern not just the presence of an object in the equipment’s “red zone,” but also what that object is; some things (wooden boards with a saw, for example) should be there, while some things (human limbs, for example) shouldn't be! Embedded spatial artificial intelligence and computer vision, as demonstrated in this session, enables such machines to intelligently perceive objects so that they can smart-guard to protect people while not adversely impacting normal operation. (Representative: Shawn McLaughlin, Vice President)
Neural-Inference-Controlled Crop/Zoom and H.265 Encode
In this demonstration, we will show how to use a neural network to guide what portion of a high-resolution (12MP) image sensor's output is delivered as a 2MP (1920x1080) H.265-encoded video stream. This capability allows a neural network or computer vision algorithm (e.g., motion estimation) to determine where the action is in a given scene and then zoom (6x lossless) into that action, H.265-encoding the resultant 1920x1080 region of interest. (Representative: Martin Peterlin, Chief Technology Officer)
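As a rough illustration of the idea (not the vendor's actual pipeline or API), the following sketch shows how a detection box can be turned into a fixed 1920x1080 crop from a 12MP frame; the crop would then be handed to the hardware H.265 encoder. The sensor resolution and detection box below are assumptions.
```
import numpy as np

def roi_from_detection(frame, box, out_w=1920, out_h=1080):
    """Center a fixed 1920x1080 crop on a detection box inside a large frame.

    frame: HxWx3 array from the high-resolution sensor.
    box:   (x, y, w, h) of the detected object in frame coordinates.
    The crop is clamped to the frame borders; the result would then be
    passed to the hardware H.265 encoder as the region of interest.
    """
    h, w = frame.shape[:2]
    cx, cy = box[0] + box[2] // 2, box[1] + box[3] // 2
    x0 = int(np.clip(cx - out_w // 2, 0, w - out_w))
    y0 = int(np.clip(cy - out_h // 2, 0, h - out_h))
    return frame[y0:y0 + out_h, x0:x0 + out_w]

# Example with a synthetic 12MP frame and a hypothetical detection:
frame = np.zeros((3040, 4056, 3), dtype=np.uint8)
roi = roi_from_detection(frame, box=(2500, 1800, 200, 150))
assert roi.shape[:2] == (1080, 1920)
```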
From-Behind Collision Detection for People Who Ride Bikes
In this demonstration, we'll show an example of localizing vehicles behind you and estimating their trajectories to determine whether they are on a path to hit you. Too many people have been struck from behind by distracted drivers. Let's use embedded, performant spatial artificial intelligence and computer vision to solve this problem! (Representative: Brandon Gilles, CEO)
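To make the underlying reasoning concrete, here is a hedged back-of-the-envelope sketch of a time-to-collision check using a constant-velocity model; the coordinate convention, lane half-width and sample numbers are assumptions for illustration, not the product's algorithm.
```
import numpy as np

def time_to_collision(positions, dt=0.1, lane_half_width=0.75):
    """Estimate time-to-collision from two successive (x, y) positions of a
    tracked vehicle, in metres relative to the rider (origin, +y forward).

    A constant-velocity model is assumed; all values are illustrative.
    """
    p0, p1 = np.asarray(positions[0], float), np.asarray(positions[1], float)
    v = (p1 - p0) / dt                        # relative velocity (m/s)
    if v[1] <= 0:                             # not closing in from behind
        return None
    ttc = -p1[1] / v[1]                       # time until it reaches y = 0
    x_at_ttc = p1[0] + v[0] * ttc             # lateral offset at that moment
    return ttc if abs(x_at_ttc) < lane_half_width else None

# A car 20 m behind, closing at ~8 m/s and roughly in line with the rider:
print(time_to_collision([(0.2, -20.8), (0.2, -20.0)]))   # ~2.5 s
```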
Efficient Driver Monitoring for Automotive Cameras
In this demonstration, Nextchip, in collaboration with PathPartner Technology, will present a driver monitoring system (DMS) solution based on its ADAS SoC, APACHE4. The solution includes both distraction and drowsiness detection algorithms, ported to the CEVA XM-4 DSP (delivering 77 GMAC of performance) in the APACHE4. Thanks to the power of a well-designed platform and well-optimized algorithms, the solution delivers both high frame rates and high accuracy. (Representative: Jessie Lee, Technical Marketer (GM))
Nextchip’s Imaging Signal Processor: Supported Functions
In this session, Nextchip will showcase its imaging signal processor (ISP) expertise by demonstrating various ISP features for automotive applications. Nextchip’s ISP pipeline is developed in full by the company, leading to improved tuning capabilities. Nextchip will demonstrate the following algorithms and functions running on the ISP: high dynamic range (HDR), LED flicker mitigation (LFM), auto-calibration, and dewarp. (Representative: Jesse Lee Kim, Technical Marketer (GM))
Enabling Smart Automotive Camera ADAS SoCs
In this demonstration, Nextchip will present its image processors for the edge, APACHE4 and APACHE5, which are ADAS SoCs empowering cameras with various “smart” functions. Because the products are designed to be located in a vehicle’s camera module, they have very low power consumption and are very small. The demonstration will show the SoCs’ various use cases, along with the well-organized APACHE5 SDK. (Representative: James Kim, Technical Marketer (Director))
Pick a Card, Any Card: Using ML-Enhanced Vision for ID and Label Tracking
Security cameras can play a valuable role in tracking the whereabouts of goods and personnel within facilities, but to do so they need to be able to reliably read information on ID cards and labels. Using a deck of standard playing cards, Perceive will demonstrate how the combination of a high-definition image sensor and the company’s Ergo edge inference processor can read information at distances beyond the capability of the human eye and expand the role of computer vision in applications such as inventory control and access management. (Representative: Kathy Cook, VP, Business Development)
Smart Video Conferencing on the Edge
Enhancements in AI processing capabilities on edge devices are enabling a richer video conferencing experience. These capabilities are leveraged in use cases ranging from biometric, automatic access to calendar and video conferencing applications through face and voice identification, to smarter framing of the scene in front of the camera and language understanding for auto-subtitling, all on the edge. The Synaptics VS680 and VS780 SoCs were designed with smart video conferencing use cases in mind: they are highly integrated to enable cost-optimized video conferencing devices in different form factors, and they provide the needed AI compute. In this demonstration, the VS680 will run smart framing, in which the captured scene in front of the camera is dynamically adjusted to show the region of interest using machine learning. (Representatives: Zafer Diab, Director of Product Marketing & Xin Li, FAE Engineer)
Real-Time Video Post-processing Using Machine Learning
Enhancements in AI processing capabilities on edge devices are enabling significant improvements in video scaling and post-processing compared to what had been possible with the traditional scalers integrated in SoCs. These enhancements enable the scaling and post-processing to be theme-based: scaling for sports content can be adjusted for high-motion scenes, for example, while scaling for video conferencing can be optimized for mostly static scenes with moving faces. Synaptics SoCs integrate an internally developed machine learning engine called QDEO.ai, which performs super-resolution scaling that takes the theme of the video into consideration. This demonstration will show a side-by-side comparison of video scaled with the traditional hardware scaler and with QDEO.ai. Quality enhancements are accentuated on lower-bitrate input video, such as in videoconferencing. (Representatives: Zafer Diab, Director of Product Marketing & Xin Li, FAE Engineer)
From Neural Network to Edge with Synaptics
Synaptics will present an overview of the Katana Edge AI processor and tensaicc, the compiler that Eta Compute is developing for use with Katana. After we introduce Katana's novel multicore architecture, you'll see a live demonstration of how tensaicc compiles a neural network and generates power-, cycle- and memory-optimized code that takes advantage of the architecture. (Representative: Vineet Ganju, VP/GM Audio Business Unit)
Simultaneous Localization and Mapping (SLAM) Acceleration on DesignWare ARC EV7x Processors
Simultaneous Localization and Mapping (SLAM) creates and updates a map of an unknown environment while at the same time keeping track of an agent's location within it. SLAM is a key component in a variety of systems, such as autonomous vehicles, robotics, and augmented and virtual reality. This demonstration shows how to accelerate a SLAM software stack by offloading some of the processing to the ARC EV7x processor, as well as how adding an extra core to parallelize the algorithms can further increase the acceleration. We’ll also show how, with an EV7x-based system, you can incorporate a deep neural network engine to expand the system's intelligence. (Representative: Liliya Tazieva, Software Engineer)
SRGAN Super Resolution on DesignWare ARC EV7x Processors
Image super-resolution techniques reconstruct a higher-resolution image from a lower-resolution one. Although this can be done with classical vision algorithms, the results are inferior to what neural-network-based solutions can now offer. In this demo, we show how a Generative Adversarial Network can be used to intelligently infer the missing pixel data and generate a high-quality high-resolution image. The demo will run on the ARC EV7x processor and showcase the Deep Neural Network accelerator engine. (Representative: Liliya Tazieva, Software Engineer)
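For context, the sketch below is a toy PyTorch generator in the SRGAN style (convolutional features followed by pixel-shuffle upsampling); it illustrates the general structure only and is not the network used in this demo.
```
import torch
import torch.nn as nn

class TinySRGenerator(nn.Module):
    """Toy SRGAN-style generator: conv features followed by 4x pixel-shuffle
    upsampling. Layer sizes are illustrative, not the demo's actual network."""
    def __init__(self, channels=3, features=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.PReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.PReLU(),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(features, features * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(features, features * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.upsample(self.body(x))

lowres = torch.randn(1, 3, 120, 160)
highres = TinySRGenerator()(lowres)      # (1, 3, 480, 640): 4x in each dimension
```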
X-Ray Classification Solution for COVID-19 and Pneumonia Detection
Spline.AI is collaborating with Amazon Web Services and Xilinx to deliver an open-source, open-model X-Ray classification solution for COVID-19 and pneumonia detection. This demonstration will show the model deployed on the Xilinx Zynq UltraScale+ MPSoC-based ZCU104 evaluation kit, leveraging the Xilinx deep learning processor unit (DPU), a soft-IP tensor accelerator powerful enough to run a variety of neural networks, including those used here for disease classification and detection. (Representative: Quenton Hall, AI Architect)
Out-of-the-Box with the Kria KV260 Starter Kit: Up and Running in Under an Hour
This demonstration will provide a detailed look at the Kria KV260 Vision AI Starter Kit and its companion basic accessory pack. We’ll show you just how quickly and easily you can get our Smart Camera accelerated application up and running, with no FPGA experience required. (Representative: Karan Kantharia, Product Line Manager)
Visual Machine Learning and Natural Language Processing Fusion on the Xilinx Kria SOM
This demonstration shows the new Kria K26 SOM running both vision machine learning (ML) and natural language processing on the same Xilinx Zynq-based platform. This solution showcases the dynamic switching capabilities of the Xilinx deep learning processor unit and integrates audio ML keyword-spotting to control the video display. (Representative: Girish Malipeddi, Director of Video and Imaging Solutions)
Bronze Sponsor
Real-Time Object Tracking with OpenMV
This session will demonstrate how to use MicroPython, along with a quantized TensorFlow Lite for Microcontrollers model, to run a cup detection program that keeps track of the number of cups in the image. You'll learn how to create a dataset, train a model and deploy the neural network model onto the OpenMV H7 board. The demo tracks the number of cups in an image by first applying classical image processing to each frame, finding blobs and forming bounding boxes based on the blobs' colors, shapes, widths, and heights, and then passing those candidate regions to the model. (Representative: Carlo Grisafi, IoT Developer Advocate)
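A minimal MicroPython-style sketch of this pipeline on an OpenMV board might look like the following; the color thresholds, model filename and label index are placeholders, and the exact tf-module calls vary slightly between OpenMV firmware versions.
```
# Minimal MicroPython-style sketch of the described pipeline on an OpenMV board.
# LAB thresholds, model filename and label index are placeholders; exact tf
# module calls vary slightly between OpenMV firmware versions.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

CUP_THRESHOLDS = [(30, 80, -20, 20, -20, 20)]    # placeholder LAB thresholds
net = tf.load("cup_detector.tflite")             # quantized TFLite Micro model

while True:
    img = sensor.snapshot()
    cups = 0
    # Classical pre-processing: find candidate blobs by color/shape first...
    for blob in img.find_blobs(CUP_THRESHOLDS, pixels_threshold=200,
                               area_threshold=200, merge=True):
        # ...then let the neural network confirm each candidate region.
        scores = net.classify(img, roi=blob.rect())[0].output()
        if scores[1] > 0.7:                      # assume index 1 == "cup"
            cups += 1
            img.draw_rectangle(blob.rect())
    print("cups:", cups)
```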
Moving the Gym to Your Living Room: Body Pose Tracking on Your Smart TV
The lifestyle changes we have all experienced over the past year have brought a big focus on home fitness. At the same time, we have seen many advances in deep learning technology, along with increased camera presence in smart TVs and other home devices. In this session, we will demonstrate our fitness application performing real-time body pose estimation and tracking on Arm CPUs and GPUs. We will share some excellent results using Google’s BlazePose model, and discuss the challenges and opportunities for creating immersive experiences on Arm platforms. (Representative: Mina Dimova, Staff Software Engineer)
An Open-Source Approach to Cloud-Native Vision Workload Deployment on Arm
This demonstration will showcase how to use open-source, lightweight Kubernetes-based orchestration for deploying and managing vision and machine learning workloads on edge devices. (Representative: Umair Hashim, Principal Solutions Engineer)
Color-Based Object Detection System for Visual AI Applications
BASF will demonstrate an object detection system consisting of cameras, LED lighting and colors designed to serve as product identifiers. The system can be used to improve object detection performance for use cases involving manufactured consumer items, as well as to label neural network training image sets. (Representative: Ian Childers, Head of Technology)
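As a generic illustration of color-based detection (not BASF's system), the following OpenCV sketch thresholds a frame in HSV space and returns bounding boxes for regions matching a target color; the hue range is a placeholder for an engineered color identifier.
```
import cv2
import numpy as np

def detect_color_markers(frame_bgr, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Return bounding boxes of regions matching a target color range.

    The HSV range here (roughly green) is a placeholder for whatever
    engineered color identifier is applied to the product.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```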
Sensor Fusion + Semantic Segmentation Processing on Blaize Pathfinder P1600
This session will demonstrate sensor fusion and semantic segmentation in a use case fusing data received from full HD cameras and from lidar and radar sensors on a standalone embedded system, the Blaize Pathfinder P1600. (Representative: Doug Watt, Director, Field Applications Engineering)
Multi-Camera Object Detection on the Blaize Xplorer X1600E
This demonstration will show multi-camera object detection running on the Blaize Xplorer X1600E. The use case processes 5 independent HD video streams using 5 independent YoloV3 networks with less than 100 ms latency, for true real-time processing at the edge. (Representative: Shawn Holiday, Sr. Director, Customer Success)
Blaize Picasso SDK: People & Pose Detection, Key Point Tracking
This session demonstrates a high-resolution, multi-neural-network, multi-function graph-native application built on the Blaize Picasso SDK, with the Blaize Xplorer X1600P as the host system. (Representative: Rajesh Anantharaman, Sr. Director of Products)
Silver Sponsor
Highly Robust Computer Vision for All Conditions with the Eos Embedded Perception Software
Algolux will demonstrate its Eos end-to-end vision architecture, which enables co-design of imaging and detection to solve accuracy and robustness problems jointly instead of just relying on more data and supervised training. We will show examples and benchmarks of how such co-designed vision systems, using our novel embedded perception stack, outperform public and commercial vision systems (including Tesla's latest OTA Model S Autopilot and Nvidia Driveworks). (Representative: Dave Tokic, VP Marketing and Strategic Partnerships)
Automatically Optimize Camera Image Signal Processors for Computer Vision Using the Atlas Camera Optimization Suite
Algolux will demonstrate its Atlas cloud-enabled workflow to automatically optimize image signal processors (ISPs) to maximize computer vision accuracy in only days. Easy to access and deploy, the workflow can improve computer vision (CV) results by up to 25 mAP points while reducing time and effort by more than 10x versus today’s expert manual ISP tuning approaches. (Representative: Marc Courtemanche, Atlas Product Architect)
Overcoming the Impossible: BrainChip Demonstrations of AI at the Sensor
Utilizing BrainChip’s Akida neural processor (NPU), you can leverage advanced neuromorphic computing as the engine for intelligent AI at the edge. This demonstration will show how Akida uses a 3D point cloud to implement human detection and gesture recognition, along with other demonstrations highlighting Akida's unique capabilities to solve critical problems that some feel are impossible to overcome. BrainChip is delivering on next-generation demands by achieving ultra-low-power, efficient, effective AI functionality through event-based neuromorphic computing. (Representative: Rob Telson, Vice President, Worldwide Sales and Marketing)
BrainChip’s Akida Neural Processor: Solving Problems at the Edge
Utilizing BrainChip’s Akida neural processor (NPU), you can leverage advanced neuromorphic computing as the engine for intelligent AI at the edge. This demonstration will show how Akida can successfully implement human detection even when images are not proportional to the subject. BrainChip will also demonstrate one-shot learning and retraining without dependence on the cloud or reprogramming of the network. BrainChip is delivering on next-generation demands by achieving ultra-low-power, efficient, effective AI functionality through event-based neuromorphic computing. (Representative: Todd Vierra, Director, Customer Engagement)
Ultra-Low Latency Industrial Inspection at the Edge Using the HyperX Processor
In this demonstration, we will show how to use the HyperX Memory Network parallel processor to enable ultra-low-latency industrial inspection of food products. We will use a simulated line-scan acquisition and an OpenCV-based algorithm to segment, label, and perform a feature-based determination of quality. These operations will happen at line rates and within a power budget that enables embedding the processing in the camera. (Representative: Eugene Mezhibovsky, Sr. Systems Architect)
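The segment/label/score steps can be illustrated with standard OpenCV calls, as in the hedged sketch below; the thresholds and quality criteria are placeholders, and the actual demo performs the equivalent operations on the HyperX processor rather than in Python.
```
import cv2
import numpy as np

def inspect_line_scan(gray):
    """Segment, label and score objects in a (simulated) line-scan image.

    The Otsu threshold and the area/aspect-ratio criteria are illustrative;
    the demo performs the equivalent steps on the HyperX processor.
    """
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    results = []
    for i in range(1, num):                     # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = w / float(h)
        ok = 800 < area < 20000 and 0.5 < aspect < 2.0
        results.append({"bbox": (int(x), int(y), int(w), int(h)),
                        "area": int(area), "pass": ok})
    return results
```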
Virtual Surround View Fire Detection Using a Deep Neural Network and the HyperX Processor
In this demonstration, we will show the detection of fire using a deep neural network (DNN) and a virtual 360-degree view of an environment. Four HD cameras with wide angle lenses will stream data into a HyperX processor, which will generate a virtual view from any perspective around the space. The virtual view will be swept around the space and fed to a DNN-based fire detection network to monitor the area. (Representative: Martin Hunt, Dir. Applications Engineering)
Dynamic Neural Accelerator F-Series and MERA Compiler for Low-Latency Deep Neural Network Inference
EdgeCortix's new Dynamic Neural Accelerator (DNA) architecture is a runtime-reconfigurable, highly scalable and power-efficient AI processor design that works on FPGAs as well as custom ASICs and systems-on-chips (SoCs). This demonstration will showcase the different configurations of EdgeCortix's DNA-F-series AI engines for FPGAs currently available to customers. It will also introduce the MERA compiler, which works together with the DNA-F-series to enable the deployment of deep neural networks for computer vision applications, achieving real-time AI inference with high-resolution still-image and video data. Also shown will be video demos of typical use-case applications of AI inference in Xilinx FPGA-enabled edge servers and embedded SoCs. (Representative: Hamid Zohouri, Director of Product)
Deploying Hardware-Accelerated Long Short-Term Memory (LSTM) Neural Networks on Edge Devices
In this session, we'll demonstrate how to deploy long short-term memory (LSTM) neural networks on edge AI devices, using Imagination neural network acceleration (NNA) IP. We will show how to unroll and locally execute an LSTM at the edge using Imagination’s patented technology. Efficient deployment of LSTMs enables secure and efficient execution of new applications at the edge without the need to send any private data to the cloud. LSTMs, which have applications in voice and ADAS, use historic information to predict the right outcome. (Representative: Gilberto Rodriguez, Director of AI Product Management)
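To clarify what "unrolling" an LSTM means in practice, here is a hedged PyTorch sketch that steps an LSTM cell over a fixed number of time steps so the resulting graph contains only operations a fixed-function accelerator can map; it is illustrative only and does not use Imagination's tooling, and all sizes are arbitrary examples.
```
import torch
import torch.nn as nn

class UnrolledLSTM(nn.Module):
    """Run an LSTM cell for a fixed number of steps (a fully unrolled graph).

    Fixing the sequence length removes the data-dependent loop, which is the
    kind of transformation needed before mapping the network onto a fixed-
    function edge accelerator; the layer sizes here are arbitrary examples.
    """
    def __init__(self, in_dim=16, hidden=32, steps=8):
        super().__init__()
        self.cell = nn.LSTMCell(in_dim, hidden)
        self.head = nn.Linear(hidden, 4)   # e.g., 4 output classes
        self.steps = steps
        self.hidden = hidden

    def forward(self, x):                  # x: (batch, steps, in_dim)
        h = x.new_zeros(x.size(0), self.hidden)
        c = x.new_zeros(x.size(0), self.hidden)
        for t in range(self.steps):        # constant loop bound -> unrolled on export
            h, c = self.cell(x[:, t, :], (h, c))
        return self.head(h)

model = UnrolledLSTM()
logits = model(torch.randn(2, 8, 16))      # shape (2, 4)
```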
Traffic Analysis Using LAON PEOPLE's Deep Learning Solution
This session demonstrates a new traffic analysis program that doesn't require installing a new IP camera. LAON PEOPLE's AI solution provides vehicle, bicycle and pedestrian statistics utilizing the currently installed low-resolution traffic cameras. (Representative: Luke Faubion, Director - Traffic Solution)
Surface Inspection Using LAON PEOPLE’s Deep Learning Solution
This session will demonstrate a fully automatic inspection of complex surfaces (e.g., automobiles) using LAON PEOPLE’s advanced machine vision camera and deep learning algorithms. See how we are able to spot defects that are nearly impossible to detect with standard machine vision techniques and even difficult for humans to discern. (Representative: Henry SANG, Director-Business Development)
Customizing an Edge AI Vision SoC
Deploying a custom accelerator in a new embedded application is difficult without the rest of the IP also required to build a tightly integrated SoC. After presenting a brief overview of the AI Vision Platform that enables you to customize, optimize, and build your own Edge AI SoC for your application, OpenFive will demonstrate an AI application running on an FPGA, emulating a SoC built using the Platform. (Representative: David Lee, Director Product Management)
The Visidon Depth SDK: Image Stylization Optimized for Mobile and Embedded Platforms
This technology demo of the Visidon Depth SDK shows both still photos and a live video feed on mobile platforms. Visidon's Depth SDK includes feature sets for single and dual cameras, along with a gallery SDK for modifying results as a post-process. Results, from input through computed depth to final stylized images, are demonstrated for single- and dual-camera configurations with real-world examples. (Representative: Otso Suvilehto, Technology Lead)
Visidon's Image Noise Reduction Solutions Optimized for Mobile and Embedded Platforms
This demonstration provides an overview of Visidon's noise reduction technologies for embedded platforms. It describes various image and video noise reduction techniques, along with showing noisy input and reduced-noise output image pairs. It also introduces noise reduction using several methods, including highly optimized multi-frame fusion and filtering technologies, as well as efficient convolutional neural network (CNN) approaches. (Representatives: Valtteri Inkiläinen, AI Software Engineer & Saku Moilanen, AI Software Engineer)
Bronze Sponsor
Advanced Encoder and Decoder IP for Applications Demanding Highest Video Quality
This session will cover the development (and integration into an SoC) of video encoder and decoder IP cores that support the highest-quality 4:4:4 chroma sampling format and 12-bit sample precision. We will demonstrate the configuration process for an IP core using a proprietary tool that enables optimal configuration for the end application's requirements. (Representative: Dr. Doug Ridge, Strategic Marketing Manager)
MIVisionX and rocAL: Two of AMD’s Computer Vision and Machine Learning Solutions
AMD’s MIVisionX is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit. rocAL, part of MIVisionX, is used to load, decode and augment data for deep learning, including the creation and validation of massive vision datasets. This demonstration shows the deployment of MIVisionX across several multi-GPU AMD systems, exemplifying industry-leading scaling using Kubernetes for inference. The demonstration also shows rocAL being used to process and augment images for deep learning training using PyTorch and TensorFlow with multiple GPUs.
RISC-V and Arm Vector Extensions -- Differences in Code Complexity and Execution
The RISC-V Foundation recently announced the release of its vector (RVV) extension. Earlier this year, Arm also announced its new v9 architecture, along with its new Scalable Vector Extension (SVE). This demonstration will look at the differences between the two solutions, showing variations in the code required to implement common tasks along with the varying performance of that code. (Representative: Andrew Richards, CEO / President)
Empowering AI in Mobility
AI and deep learning for edge devices are gaining momentum, promising to empower numerous applications both inside and outside vehicles. However, such implementations can be challenging in terms of enabling the compression and acceleration of deep neural networks to run on multiple processors, along with managing associated costs. Join this demo to see how Deeplite is overcoming these roadblocks to enable object detection applications running in vehicles. (Representative: Charles Marsh, CCO)
Enabling Deep Neural Networks on Increasingly Constrained Edge Devices
To help pave the way for straightforward deep learning deployments on increasingly constrained edge devices, Deeplite has released its Neutrino software in a community version! This release is hardware-agnostic and can be seamlessly integrated into any MLOps pipeline. In this session, we’ll demonstrate how to use the software framework, how to produce optimized models for inference to be deployed either directly on the edge device or via a cloud environment, and how to compress models while maintaining accuracy. (Representative: Davis Sawyer, Co-founder & CPO)
Using Efinix Titanium FPGAs with Quantum Acceleration to Optimize Edge AI Performance while Reducing Time to Market
Efinix Quantum Acceleration provides a pre-defined, easy-to-use acceleration framework to facilitate rapid hardware/software system partitioning. This demonstration will provide an overview of the new Efinix Titanium series and explore a case study on the use of Quantum Acceleration. It will show how, in an edge AI application, progressively migrating software bottlenecks into hardware accelerators increases overall system performance and minimizes time to market. (Representative: Roger Silloway, Director of North American Sales)
Stereo Vision for Robotic Automation and Depth Sensor Fusion
Depth-sensing technology is now being widely adopted commercially in various consumer and industrial products. It's commonly recognized that no single 3D sensing technology fits all of these diverse application domains. eYs3D incorporates advanced computer vision algorithms, hybrid depth-sensing fusion, point-cloud compression and streaming, and dynamic calibration, all delivered in the most power-efficient and scalable form of hardware: silicon. Robots are entering our lives at a rapid rate. These machines rely on computer vision to provide near-human perception and to interact with people, all in order to complete tasks autonomously. In this demonstration, we show how our silicon-centric technology can be applied to various robotic applications such as autonomous mobile robots (AMRs), collaborative robots (cobots), and many others. (Representative: James Wang, Technical General Manager)
How Immervision's Super-Wide-Angle Camera and Pixel Processing Can Improve Machine Perception
In this session, we will demonstrate how to specify, simulate and design with cameras equipped with super-wide-angle lenses and sensors, for the purpose of improving machine perception. We will also show how a specific pixel processing technique, adaptive dewarping, can increase machine perception accuracy. We will make use of two application scenarios: monocular single-frame depth perception and object classification (such as YoloV4). (Representative: Patrice Roulet, Vice President, Technology and Co-Founder)
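As a generic stand-in for the dewarping step (not Immervision's adaptive algorithm), the sketch below uses OpenCV's fisheye model to undistort a super-wide-angle frame before it is passed to a detector; the camera matrix and distortion coefficients are placeholders that would normally come from calibrating the actual lens.
```
import cv2
import numpy as np

def dewarp_wide_angle(frame, K, D, balance=0.0):
    """Undistort a super-wide-angle frame before feeding it to a detector.

    K (3x3 camera matrix) and D (4 fisheye distortion coefficients) are
    placeholders that would come from calibrating the actual lens.
    """
    h, w = frame.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

# Placeholder intrinsics for a hypothetical 1920x1080 wide-angle camera:
K = np.array([[600.0, 0.0, 960.0], [0.0, 600.0, 540.0], [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])
# undistorted = dewarp_wide_angle(frame, K, D)
```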
The Inuitive NU4000, a Multi-Core Processor For 3D Imaging, Deep Learning and Computer Vision
In this session, Inuitive will demonstrate the diverse edge computing and other capabilities of the NU4000 processor in action, implementing operations commonly used in augmented and virtual reality, drones, robots and other applications. The next-generation NU4000 enables high-quality depth sensing, on-chip SLAM, computer vision and deep learning - in a compact form factor with optimized power consumption and an affordable cost structure - delivering smarter user experiences. (Representative: Dor Zepeniuk, CTO & Product VP)
VectorBlox SDK for AI/ML Product Demonstration using Microchip
Microchip’s VectorBlox Accelerator Software Development Kit (SDK) is designed to enable developers to code in C/C++ and program power-efficient neural networks without prior FPGA design experience. The highly flexible toolkit can execute models in TensorFlow and the Open Neural Network Exchange (ONNX) format, which offers the widest framework interoperability. The VectorBlox Accelerator SDK is supported on Linux and Windows and also includes a bit-accurate simulator, which gives users the opportunity to validate the accuracy of the hardware while working in the software environment. The neural network IP included with the kit also supports the ability to load different network models at run time. The VectorBlox product demonstration runs through the steps required to quickly get started with evaluating AI/ML algorithms using the PolarFire Video Kit.
Image Enhancement by AI-based Segmentation and Pixel Filtering
This session will demonstrate Morpho's award-winning software product that enables professional-quality image retouching, using AI to recognize the semantics (e.g., people, landscapes) easily on mobile devices. (Representative: Toshi Torihara, Vice President)
NetsPresso: Hardware-Aware Automatic AI Model Compression Platform
NetsPresso is an automated AI model compression platform solution designed for deploying lightweight deep learning models on different platforms and architectures from cloud applications to edge devices. By leveraging NetsPresso, Nota was able to support a number of applications in various market areas including healthcare, mobility, security, and retail. (Representative: Tae-Ho Kim, CTO)
Fast, Visual, and Explainable Machine Learning Modeling With PerceptiLabs
Join our demonstration, where we use image classification and transfer learning to train a model to classify brain tumours in magnetic resonance imaging (MRI) images. Using PerceptiLabs, a TensorFlow-based visual modeling tool, you’ll see how you can build deep learning models in seconds using pre-built components and get instant visualizations of the workflow, thereby eliminating the need to run the entire model before seeing results. (Representative: Robert Lundberg, CTO & Co-founder)
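For orientation, the underlying transfer-learning recipe (a pretrained backbone plus a small classification head) looks roughly like the plain Keras sketch below; this is not PerceptiLabs itself, and the dataset path and class count are placeholders.
```
import tensorflow as tf

# A plain Keras sketch of the same transfer-learning recipe (pretrained
# backbone + small classification head); paths and class count are placeholders.
NUM_CLASSES = 2   # e.g., tumour vs. no tumour

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                       # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "mri_dataset/", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, epochs=5)
```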
Renesas RZ/V Microprocessor with Power-Efficient AI Accelerator
In this demonstration of object detection and recognition on Renesas' proprietary Dynamically Reconfigurable Processor (DRP-AI), we will show how DRP-AI achieves both higher AI performance and superior power efficiency compared to GPUs in a representative AI execution environment. RZ/V-series products with DRP-AI technology enable embedded AI applications with higher performance without requiring heat sinks and cooling fans. (Representative: Manny Singh, Principal Product Marketing Manager)
UnitX Vizard: Real-time AI Video Analytics at the Edge
In this demonstration, UnitX CEO Kiran Narayanan, PhD, will present the company's edge-based AI video analytics platform, Vizard, which provides real-time actionable insights from multi-modal drone and closed-circuit television (CCTV) data on a responsive web dashboard. Vizard is currently being used for security and safety surveillance by smart cities, private compounds, mining, oil and gas facilities, and event companies. It works seamlessly with multiple off-the-shelf drones, CCTV and video management software products. The company founders will share the product evolution journey, demonstrate the patent-pending platform and explain how it works to maximize business impact. (Representative: Kiran Narayanan, CEO & Founder)
Bronze Sponsor
Advanced Encoder and Decoder IP for Applications Demanding the Highest Video Quality
This session will cover the development of video encoder and decoder IP cores that support the highest-quality 4:4:4 chroma sampling format and 12-bit sample precision, along with their integration into an SoC. We will demonstrate the configuration process for an IP core using a proprietary tool that tailors the configuration to the requirements of the end application. (Representative: Dr. Doug Ridge, Strategic Marketing Manager)
MIVisionX and rocAL: Two of AMD’s Computer Vision and Machine Learning Solutions
AMD’s MIVisionX is a comprehensive set of computer vision and machine intelligence libraries, utilities and applications bundled into a single toolkit. rocAL, part of MIVisionX, loads, decodes and augments data for deep learning, including the creation and validation of massive vision datasets. This demonstration shows MIVisionX deployed across several multi-GPU AMD systems, exemplifying industry-leading scaling with Kubernetes for inference. It also shows rocAL processing and augmenting images for deep learning training with PyTorch and TensorFlow across multiple GPUs.
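For context on what a load/decode/augment stage does, the short sketch below builds an equivalent pipeline with the widely used torchvision API. It is a generic illustration only (the data path and augmentation settings are placeholders), not the rocAL interface itself.

    # Generic image load/decode/augment pipeline (illustrative only; not the rocAL API).
    import torch
    from torchvision import datasets, transforms

    # Augmentations applied on the fly as images are decoded.
    train_transforms = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    # "data/train" is a placeholder directory of class-named subfolders.
    train_set = datasets.ImageFolder("data/train", transform=train_transforms)
    loader = torch.utils.data.DataLoader(train_set, batch_size=64,
                                         shuffle=True, num_workers=4)

    for images, labels in loader:
        ...  # feed each augmented batch to the training step

Libraries such as rocAL move this same decode-and-augment work onto the GPU so it keeps pace with multi-GPU training.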
RISC-V and Arm Vector Extensions: Differences in Code Complexity and Execution
The RISC-V Foundation recently announced the release of its vector (RVV) extension. Earlier this year, Arm also announced its new Armv9 architecture, along with its new Scalable Vector Extension (SVE). This demonstration will look at the differences between the two solutions, showing variations in the code required to implement common tasks along with the varying performance of that code. (Representative: Andrew Richards, CEO / President)
Empowering AI in Mobility
AI and deep learning for edge devices are gaining momentum, promising to empower numerous applications both inside and outside vehicles. However, such implementations can be challenging: deep neural networks must be compressed and accelerated to run on a variety of processors, and the associated costs must be managed. Join this demo to see how Deeplite is overcoming these roadblocks to enable object detection applications running in vehicles. (Representative: Charles Marsh, CCO)
Enabling Deep Neural Networks on Increasingly Constrained Edge Devices
To help pave the way for straightforward deep learning deployments on increasingly constrained edge devices, Deeplite has released its Neutrino software in a community version! This release is hardware-agnostic and can be integrated seamlessly into any MLOps pipeline. In this session, we’ll demonstrate how to use the software framework, how to produce optimized models for inference that can be deployed either directly on the edge device or via a cloud environment, and how to compress models while maintaining accuracy. (Representative: Davis Sawyer, Co-founder & CPO)
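As a rough picture of the kind of optimization such tools automate, the sketch below applies post-training dynamic quantization with stock PyTorch. It is a generic stand-in (the model and settings are illustrative), not the Neutrino API.

    # Generic post-training dynamic quantization (illustrative only; not the Neutrino API).
    import torch
    import torchvision

    # resnet18 stands in here for your trained floating-point model.
    fp32_model = torchvision.models.resnet18().eval()

    # Convert the Linear (fully connected) layers to int8 weights;
    # activations are quantized dynamically at inference time.
    int8_model = torch.quantization.quantize_dynamic(
        fp32_model, {torch.nn.Linear}, dtype=torch.qint8
    )

    # The compressed model is a drop-in replacement for inference.
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        logits = int8_model(x)

Automated frameworks go further by combining quantization, pruning and architecture search, and by checking that accuracy is preserved after each step.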
Using Efinix Titanium FPGAs with Quantum Acceleration to Optimize Edge AI Performance while Reducing Time to Market
Efinix Quantum Acceleration provides a pre-defined, easy-to-use acceleration framework that facilitates rapid hardware/software system partitioning. This demonstration will provide an overview of the new Efinix Titanium series and explore a case study on the use of Quantum Acceleration. It will show how, in an edge AI application, progressively migrating software bottlenecks into hardware accelerators increases overall system performance and minimizes time to market. (Representative: Roger Silloway, Director of North American Sales)
Stereo Vision for Robotic Automation and Depth Sensor Fusion
Depth-sensing technology is now being widely adopted commercially in various consumer and industrial products, yet it is commonly recognized that no single 3D sensing technology fits all application domains. eYs3D incorporates advanced computer vision algorithms, hybrid depth-sensing fusion, point-cloud compression and streaming, and dynamic calibration, all delivered in the most power-efficient and scalable form of hardware: silicon. Robots are entering our lives at a rapid rate, and these machines rely on computer vision to provide near-human perception and to interact with people so they can complete tasks autonomously. In this demonstration, we show how our silicon-centric technology can be applied to various robotic applications such as autonomous mobile robots (AMRs), collaborative robots (cobots), and many others. (Representative: James Wang, Technical General Manager)
How Immervision's Super-Wide-Angle Camera and Pixel Processing Can Improve Machine Perception
In this session, we will demonstrate how to specify, simulate and design with cameras equipped with super-wide-angle lenses and sensors in order to improve machine perception. We will also show how a specific pixel processing technique, adaptive dewarping, can increase the accuracy of machine perception. We will use two application scenarios: monocular single-frame depth perception and object classification (such as with YOLOv4). (Representative: Patrice Roulet, Vice President, Technology and Co-Founder)
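To make the dewarping idea concrete, the sketch below undistorts a wide-angle frame with OpenCV's generic fisheye model. The calibration values and file names are placeholders, and this is not Immervision's adaptive dewarping technique; real systems calibrate the specific lens and adapt the projection to the task.

    # Generic fisheye dewarping with OpenCV (illustrative only; not adaptive dewarping).
    import cv2
    import numpy as np

    img = cv2.imread("wide_angle_frame.jpg")   # placeholder input frame
    h, w = img.shape[:2]

    # K and D would normally come from calibrating the specific lens; these are placeholders.
    K = np.array([[w / 2, 0, w / 2],
                  [0, w / 2, h / 2],
                  [0, 0, 1]], dtype=np.float64)
    D = np.array([0.1, 0.01, 0.0, 0.0], dtype=np.float64)  # fisheye coefficients k1..k4

    # Build a remapping from the distorted image to a rectilinear view, then apply it.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2
    )
    dewarped = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
    cv2.imwrite("dewarped.jpg", dewarped)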
The Inuitive NU4000, a Multi-Core Processor for 3D Imaging, Deep Learning and Computer Vision
In this session, Inuitive will demonstrate the diverse edge computing and other capabilities of the NU4000 processor in action, implementing operations commonly used in augmented and virtual reality, drones, robots and other applications. The next-generation NU4000 enables high-quality depth sensing, on-chip SLAM, computer vision and deep learning in a compact form factor with optimized power consumption and an affordable cost structure, delivering smarter user experiences. (Representative: Dor Zepeniuk, CTO & Product VP)
Microchip VectorBlox SDK for AI/ML: Product Demonstration on the PolarFire Video Kit
Microchip’s VectorBlox Accelerator Software Development Kit (SDK) is designed to enable developers to code in C/C++ and program power-efficient neural networks without prior FPGA design experience. The highly flexible toolkit can execute models in TensorFlow and the Open Neural Network Exchange (ONNX) format, offering broad framework interoperability. The VectorBlox Accelerator SDK is supported on Linux and Windows and also includes a bit-accurate simulator, which lets users validate the accuracy of the hardware from within the software environment. The neural network IP included with the kit also supports loading different network models at run time. The VectorBlox product demonstration runs through the steps required to quickly get started evaluating AI/ML algorithms on the PolarFire Video Kit.
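To show what executing an ONNX model looks like in practice, the sketch below runs inference with ONNX Runtime in Python. The model file is a placeholder and a single-output classifier is assumed; this illustrates ONNX interoperability generally, not the VectorBlox SDK's own C/C++ API or simulator.

    # Running an ONNX model with ONNX Runtime (illustrative only; not the VectorBlox SDK).
    import numpy as np
    import onnxruntime as ort

    # A classifier exported from TensorFlow or PyTorch to ONNX; the filename is a placeholder.
    session = ort.InferenceSession("classifier.onnx")

    input_name = session.get_inputs()[0].name
    dummy_frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a camera frame

    # The same .onnx file validated here can then be handed to an accelerator's toolchain.
    outputs = session.run(None, {input_name: dummy_frame})
    scores = outputs[0]
    print("predicted class:", int(scores.argmax()))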
Image Enhancement by AI-based Segmentation and Pixel Filtering
This session will demonstrate Morpho's award-winning software product that enables professional-quality image retouching on mobile devices, using AI to recognize scene semantics (e.g., people, landscapes). (Representative: Toshi Torihara, Vice President)
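As a generic illustration of segmentation-guided pixel filtering, not Morpho's product, the sketch below uses an off-the-shelf torchvision segmentation model to keep a detected person sharp while blurring the background; file names are placeholders.

    # Segmentation-guided pixel filtering (illustrative only).
    import cv2
    import numpy as np
    import torch
    import torchvision

    # Pretrained semantic segmentation model; class 15 is "person" in its Pascal VOC label set.
    model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

    img = cv2.imread("portrait.jpg")                        # placeholder input image
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).float()

    # Normalize with the ImageNet statistics the model was trained with.
    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
    x = (x - mean) / std

    with torch.no_grad():
        out = model(x)["out"][0]                            # per-pixel class scores
    mask = (out.argmax(0) == 15).numpy().astype(np.uint8)   # 1 where a person is detected

    # Filter pixels differently inside and outside the mask: keep the person, blur the rest.
    blurred = cv2.GaussianBlur(img, (31, 31), 0)
    result = np.where(mask[..., None] == 1, img, blurred)
    cv2.imwrite("retouched.jpg", result)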
NetsPresso: Hardware-Aware Automatic AI Model Compression Platform
NetsPresso is an automated AI model compression platform designed for deploying lightweight deep learning models on a range of platforms and architectures, from cloud applications to edge devices. By leveraging NetsPresso, Nota has supported a number of applications in various market areas, including healthcare, mobility, security, and retail. (Representative: Tae-Ho Kim, CTO)
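For a flavor of one compression technique such platforms apply automatically, the sketch below performs magnitude pruning with stock PyTorch utilities. The model and pruning ratio are illustrative placeholders; this is not the NetsPresso platform itself.

    # Generic magnitude pruning in PyTorch (illustrative only; not the NetsPresso platform).
    import torch
    import torch.nn.utils.prune as prune
    import torchvision

    model = torchvision.models.mobilenet_v2()   # stands in for a trained model to be slimmed down

    # Zero out the 30% smallest-magnitude weights in every convolution layer.
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")      # make the pruning permanent

    zeros = sum((p == 0).sum().item() for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    print(f"overall weight sparsity: {zeros / total:.1%}")

Hardware-aware platforms additionally pick the pruning and quantization settings that best match the target processor, which is the "hardware-aware" part of the pitch above.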
Fast, Visual, and Explainable Machine Learning Modeling With PerceptiLabs
Join our demonstration, where we use image classification and transfer learning to train a model to classify brain tumors in magnetic resonance imaging (MRI) images. Using PerceptiLabs, a TensorFlow-based visual modeling tool, you’ll see how you can build deep learning models in seconds using pre-built components and get instant visualizations of the workflow, eliminating the need to run the entire model before seeing results. (Representative: Robert Lundberg, CTO & Co-founder)
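PerceptiLabs assembles this kind of model graphically; for readers who want to see the equivalent code, the sketch below is a minimal Keras transfer-learning setup under an assumed directory of labeled MRI images (the path, base model and hyperparameters are placeholders, not the demo's exact configuration).

    # Transfer learning for binary MRI classification in Keras (illustrative only).
    import tensorflow as tf

    # "mri/train" is a placeholder directory with one subfolder per class.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "mri/train", image_size=(224, 224), batch_size=32
    )

    # Reuse ImageNet features; train only a small classification head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet"
    )
    base.trainable = False

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # map pixels to [-1, 1] as MobileNetV2 expects
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),      # tumor vs. no tumor
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)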
Renesas RZ/V Microprocessor with Power-Efficient AI Accelerator
In this demonstration of object detection and recognition on Renesas' proprietary Dynamically Reconfigurable Processor (DRP-AI), we will show how DRP-AI achieves both higher AI performance and superior power efficiency compared with GPUs in a representative AI execution environment. RZ/V-series products with DRP-AI technology enable higher-performance embedded AI applications without requiring heat sinks or cooling fans. (Representative: Manny Singh, Principal Product Marketing Manager)
UnitX Vizard: Real-time AI Video Analytics at the Edge
In this demonstration, UnitX CEO Kiran Narayanan (PhD) will present the company's edge-based AI video analytics platform, VIZARD, which provides real-time actionable insights from multi-modal drone and closed-circuit television (CCTV) data on a responsive web dashboard. VIZARD is currently being used for security and safety surveillance by smart cities, private compounds, mining and oil and gas facilities, and event companies. It works seamlessly with multiple off-the-shelf drones, CCTV systems and video management software products. The company founders will share the product's evolution, demonstrate the patent-pending platform and explain how it works to maximize business impact. (Representative: Kiran Narayanan, CEO & Founder)