News Highlights from Embedded Vision Summit 2025
When we talk about edge AI, image detection, recognition and analysis are often the key topics of focus, especially in areas like robotics, industrial automation, autonomous mobility and health tech. At the recent Embedded Vision Summit 2025 in Silicon Valley, Calif., keynote speaker Trevor Darrell, a professor at the University of California, Berkeley, presented breakthroughs in vision language models (VLMs), including his department's work on overcoming the massive memory and compute requirements of training state-of-the-art vision models when labeled data is unavailable.
He also covered techniques that enable robots to determine appropriate actions in novel situations. The key to deploying these models at the edge is in making the VLMs smaller and more efficient while retaining accuracy.
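The article does not detail those techniques, but one widely used way to shrink a model while retaining accuracy is knowledge distillation, in which a compact student model is trained to match a larger teacher's output distribution. A minimal PyTorch sketch of the core loss follows; the tensor shapes and temperature value are illustrative, not taken from Darrell's talk:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.
    A higher temperature softens both distributions, transferring more of
    the teacher's information about relative class similarity."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Illustrative usage: logits from a hypothetical teacher and student head
teacher_logits = torch.randn(8, 1000)   # batch of 8, 1000 classes
student_logits = teacher_logits + 0.1 * torch.randn(8, 1000)
loss = distillation_loss(student_logits, teacher_logits)
```

In practice this term is combined with a standard task loss on ground-truth labels; only the distillation component is sketched here.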
At the summit, more than 1,200 attendees could choose from some 85 presentations and 65 exhibitors while exploring the latest in embedded vision. In this article, we highlight a selection of announcements from the event that showcase advances in enabling embedded vision.
MemryX: Edge AI and Vision Award winner
One of the winners of the 2025 Edge AI and Vision Product of the Year Awards, in the edge AI computers and boards category, was MemryX, which just two weeks earlier had joined the National Semiconductor Hub in Saudi Arabia and received a strategic investment from one of the Middle East's largest digital infrastructure projects, NEOM, through the NEOM Investment Fund.
Technology analyst firm BDTI, which also organizes the Embedded Vision Summit, recently carried out a hands-on evaluation of MemryX's MX3 M.2 AI accelerator module and found the product exceptionally easy to use while providing good performance and consuming little power. For the evaluation, BDTI installed the M.2 module in an x86 Linux PC, downloaded and compiled several neural network models using the MemryX tools, ran these networks on the module, and measured inference performance and power consumption. The firm also developed a small stand-alone example application that captures images from a USB webcam and runs object detection on them using the module.
“It is the first AI accelerator we’ve encountered for which both the hardware and the software ‘just works.’ We were particularly impressed by the scope of models tested in MemryX’s Model eXplorer website, and the fact that the software tools are sufficiently capable that MemryX doesn’t need to provide its own model zoo of modified models,” BDTI said. “Rather, models can be easily compiled and achieve good performance from their original source code form.” The full evaluation (17 pages) can be found here.
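BDTI's stand-alone webcam application is not published here, but its structure follows a common pattern: grab a frame, run inference on the accelerator, filter the detections and draw them. The Python sketch below illustrates that pattern only; `run_inference` is a placeholder stub, not the real MemryX API, and the detection tuple format is an assumption:

```python
def filter_detections(detections, conf_threshold=0.5):
    """Keep only detections at or above the confidence threshold.
    Assumed format per detection: (class_id, confidence, (x, y, w, h))."""
    return [d for d in detections if d[1] >= conf_threshold]

def run_inference(frame):
    """Placeholder for the accelerator call. In BDTI's application this step
    ran on the MX3 M.2 module; the actual MemryX API is not shown here."""
    return []  # hypothetical: would return (class_id, confidence, bbox) tuples

def main():
    import cv2  # OpenCV, imported here so the helpers above stay dependency-free
    cap = cv2.VideoCapture(0)  # first USB webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for class_id, conf, (x, y, w, h) in filter_detections(run_inference(frame)):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Calling `main()` opens the live webcam loop; the helper functions can be exercised on their own without a camera attached.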
Nota AI: On-device AI breakthrough with Qualcomm
South Korea-based Nota AI, which recently filed for an IPO listing, showcased its collaboration with Qualcomm Technologies at the Embedded Vision Summit, emphasizing the integration of its proprietary AI model optimization platform, NetsPresso, with the Qualcomm AI Hub. Nota AI's CTO, Tae-Ho Kim, detailed how the integrated platforms significantly streamline the workflow for developing and deploying AI models on edge devices.

Nota AI also showed off its NetsPresso Optimization Studio, an enhancement to the platform that gives users an intuitive, visual interface for model optimization. Developers can quickly visualize the critical layer details and model performance data needed for efficient quantization, enabling rapid, data-driven decisions based on actual device performance metrics, according to Nota AI.
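Nota AI's tooling is proprietary, but the kind of per-layer detail such a tool surfaces can be illustrated with a small sketch: symmetric int8 quantization of each weight tensor and the error it introduces, which is the signal a developer would use to keep error-sensitive layers in higher precision. All function names here are illustrative, not part of NetsPresso:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map [-max|w|, max|w|]
    onto [-127, 127]. Assumes the tensor is not all zeros."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def layer_quant_report(layers):
    """For each named weight tensor, report the mean absolute error that
    int8 quantization introduces -- the per-layer detail a visual tool
    can display so sensitive layers stay in higher precision."""
    report = {}
    for name, w in layers.items():
        q, scale = quantize_int8(w)
        report[name] = float(np.abs(w - q.astype(np.float32) * scale).mean())
    return report
```

For example, `layer_quant_report({"conv1": w1, "fc": w2})` returns a per-layer error map that could be sorted to find the layers least suited to 8-bit precision.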
Also featured at the show was the Nota Vision Agent (NVA), a generative AI-based video analytics solution. NVA enables real-time video event detection, natural language video search and automated report generation, helping enterprise users maximize situational awareness and operational efficiency. The solution has already proven its commercial viability through a recent supply agreement with the Dubai Roads and Transport Authority.
SiMa.ai and Wind River: Seamless AI/ML for intelligent edge
Meanwhile, SiMa.ai said that it had collaborated with Wind River on an integrated hardware and software solution for “next generation edge AI.”
“Edge AI is the next gold rush and creating significant opportunities across robotics, industrial automation, medical, automotive, and aerospace and defense,” said SiMa.ai founder and CEO Krishna Rangasayee. “Together, with Wind River’s long-standing expertise and market success, SiMa.ai is now jointly delivering the industry’s best edge AI platform that leads in performance, power-efficiency and ease-of-use, addressing all AI needs, including GenAI.”
In its announcement, SiMa.ai said its MLSoC platform integrates with eLxr, an enterprise-grade Debian derivative, with commercial support provided by Wind River's eLxr Pro, allowing developers to customize easily and accelerate time to production. "This integrated solution combines the freedom of open source with enterprise grade security, stability and compliance," the announcement added.
Vision Components: A modular, plug and play MIPI bricks system
Vision Components presented its new VC MIPI "bricks" system at the Embedded Vision Summit: a modular system of perfectly matched components comprising camera modules, accessories and services, all the way through to ready-to-use MIPI cameras and complete embedded vision systems. A development kit from PHYTEC with NXP i.MX 8M Plus and i.MX 8M Mini processors is also part of the VC MIPI bricks system.

The modular system is matched to the more than 50 VC MIPI cameras and to the requirements of industrial projects. It includes FPC and coax cables for flexible connection, as well as the various VC Power SoM FPGA accelerators for image pre-processing. On request, the cameras are also available ready to use, with optics fully assembled and calibrated.
