Date: Monday, May 11
Start Time: 4:50 pm
End Time: 5:20 pm
As automated optical inspection moves from the server room to the factory floor, the promise of “seamless” AI deployment often hits the reality of hardware-specific friction. For embedded vision engineers, the challenge isn’t just training a high-accuracy model—it’s ensuring that the model survives the transition to power-constrained edge silicon without losing its functional integrity. In this talk, we’ll provide a technical dive into the practical realities of deploying deep learning models for high-speed optical inspection across three industry-leading platforms: NVIDIA’s Jetson, Qualcomm’s Snapdragon, and NXP’s i.MX + Ara-2. We’ll look beyond the marketing benchmarks to explore the “middle mile” of AI implementation, including model development and training, vision-pipeline differences and trade-offs, the implications of model compilers for operator support and performance, and power and thermal trade-offs. We’ll illustrate these factors using a reference pill-inspection system implemented on all three platforms.

