AI is on the cusp of a revolution, driven by breakthroughs like large language models (LLMs) that reason like humans and vision-language models (VLMs) that integrate natural language processing and computer vision. In his keynote talk, Professor Trevor Darrell of the University of California, Berkeley will discuss the current state and future of machine intelligence research, highlighting his group's work on training vision models without labeled data, enabling robots to act in novel situations and using LLMs as visual reasoning coordinators. Particularly relevant to edge applications, Trevor's research also addresses challenges like memory and compute limitations, focusing on making VLMs more efficient while maintaining accuracy. He will also explore how multimodal AI, visual perception and prompt-tuned reasoning allow consumers to use visual intelligence at home while preserving privacy.
Learn about the newest speakers, sessions and other noteworthy Summit Program updates by sharing a few details with us.
Interested in sponsoring or exhibiting?
The Embedded Vision Summit gives you unique access to the best-qualified technology buyers you'll ever meet.
Want to contact us?
Use the chat widget in the lower right-hand corner of your screen, or the contact form linked below.
STAY CONNECTED
Follow us on Twitter and LinkedIn.