Are you developing smart SoCs for gesture recognition, facial recognition, ADAS, or other AI and vision applications? Synopsys’ seminar includes sessions on implementing artificial intelligence, machine learning, and computer vision in SoCs, all aimed at helping you develop vision chips for edge applications.
The seminar provides a deep dive into always-on, low-power applications, gesture recognition implementation, system-level architecture exploration, SLAM implementations, and securing sensitive information (from algorithms to biometric data). You’ll learn about optimizing bandwidth, performance, power, and area for designs ranging from 1 TOPS to 100 TOPS.
NOTE: Synopsys approval and a business email address are required for registration. If you are not approved, you will be notified in a follow-up email.
Date/Time: September 16 and 18, 9:00 am – 12:00 pm PT
If you are considering signing up for this workshop, you’ll almost certainly want to register for the rest of the Embedded Vision Summit, featuring both live and on-demand content. The Summit is your gateway to a world of information on practical computer vision: inspiring keynote presentations, dozens of practical sessions to help you deploy embedded vision products, and the Technology Showcase, where top suppliers are on hand with your projects in mind and the tools you need to achieve your vision.
Click here for more info on the Embedded Vision Summit.
Join @intel for an online workshop during the Summit. Learn more about the workshop: https://bit.ly/3a1GAEB Attending the Summit? Use code EBSOCIAL20-V by Friday for 15% off your pass (Lite Pass: $99; Standard Pass with full access: $249). Summit info & registration: https://embeddedvisionsummit.com
Join Pei Zhang of @CarnegieMellon as he presents three physics-based approaches to reducing the data demand for robust learning in Structures as Sensors (SaS) in his session at the Summit. Abstract: https://bit.ly/31hoxGp Summit info: https://embeddedvisionsummit.com