The Embedded Vision Summit has traditionally been held in person every May in Santa Clara, California. But due to COVID-19, we’ve moved the Summit to a 100% online event taking place September 15–25, 9 am – 2 pm PT.
The program is still designed to give you the experience that has made us known as the premier conference for innovators adding computer vision and AI to products—except now it’s…
Travel costs you time, energy and money. Now, you can save on all those things and enjoy the Summit from the comfort of your own home!
Depending on your interests and availability, you can experience the online Summit live over several days or on demand when it’s best for you.
This year's move to a virtual event has opened up new and greater opportunities to directly connect with experts throughout the conference.
The event runs September 15–25, 9 am – 2 pm PT. The primary days are the Tuesdays and Thursdays (September 15, 17, 22 and 24). The other dates within that period are filled with additional opportunities at the Summit.
The Tuesdays and Thursdays during the event are devoted to the core conference activities: keynotes, speaker presentations and Q&A sessions, product presentations, networking opportunities, and visits to exhibit booths and Expert Bars. On the Mondays, Wednesdays and Fridays between September 15 and 25, additional opportunities are available, such as Technology Workshops, Over-the-Shoulder Sessions, scheduled meetings and on-demand content. You can see the Schedule at a Glance now, and a more detailed schedule will be posted as we get closer to September.
You attend via your browser, using a networking and virtual event web app. Think of it as video conferencing, email and a personalized agenda, all in one.
You’ll be able to watch presentations (both live and on-demand), ask questions of speakers, visit the virtual exhibits hall, see live demos of products, interact with vendors … in short, pretty much everything that you could do at the in-person Summit, but from the convenience of your home or office.
An Over-the-Shoulder Session is an intimate, 50-minute deep-dive with a subject matter expert. It is a screen-sharing session in which the expert leads the audience through a step-by-step process. The audience will almost literally be watching over the expert’s shoulder as they demonstrate their product or system (e.g., Pete Warden shows how to build a lightweight deep neural network for execution on a microcontroller using TensorFlow Lite Micro).
An Expert Bar is an hour-long event in which sponsors and exhibitors connect with attendees directly, either as a group or in individual 1-on-1 sessions via Zoom breakout rooms.
This year, we will be offering four workshops live online—three from Intel and one from Synopsys—covering the following topics:
To learn more details about the workshops, please visit our Technology Workshops page.
We traditionally have offered several in-depth trainings and workshops at the in-person Summit, and we’ve moved these to an online format. You can find our popular Deep Learning for Computer Vision with TensorFlow 2.0 and Keras training here, and we’ve partnered with OpenCV.Org for their online trainings as well.
We know making valuable connections at the Summit is paramount, so we’ve devised a series of ways to connect and get your important questions answered.
Before the event, we will send you in-depth information and instructions covering all of these opportunities to connect.
Yes. If you have questions for a speaker, join the live Q&A session that follows their presentation.
Ready to up your skills in #DeepLearning? Join this #TensorFlow 2.0 and #Keras training for less than $300. Learn at your own pace through lectures and hands-on labs with the help of a certified expert. Course details: https://www.edge-ai-vision.com/tensorflow/ #NeuralNetworks #machinelearning #ai
Join @SiddhaGanju’s session at the Embedded Vision Summit to discover easy-to-use, practical techniques that can improve CNN performance on mobile devices. Abstract: https://bit.ly/31xUdHZ Summit info: https://embeddedvisionsummit.com #DeepLearning #Mobile