OEMs, brands and cloud providers want to move LLMs to the edge, especially for vision applications. What are the benefits and challenges of doing so? In this talk, we explore how edge AI is evolving to encompass massively increasing LLM […]
AI at the edge has been transforming over the last few years, with newer use cases running more efficiently and securely. Most edge AI workloads were initially run on CPUs, but machine learning accelerators have gradually been integrated into SoCs, […]
Attaining the lowest power, size and cost for a smart camera requires carefully matching the hardware to the actual application requirements. General-purpose media processors may appear attractive and easy to use, but often include unneeded features which increase system size, […]
AI hardware accelerators are playing a growing role in enabling AI in embedded systems such as smart devices. In most cases NPUs need a dedicated, tightly coupled high-speed memory to run efficiently. This memory has a major impact on performance, […]
In recent years there’s been tremendous focus on designing next-generation AI chipsets to improve neural network inference performance. As higher performance processors are called upon to execute ever-larger models—from vision transformers to LLMs—memory bandwidth is frequently the key performance bottleneck. […]
Interested in sponsoring or exhibiting?
The Embedded Vision Summit gives you unique access to the best qualified technology buyers you’ll ever meet.