In safety-critical computer vision applications such as medical diagnostics and autonomous driving, corrupted training data can be fatal. A well-known example is a dermatology model that learned to flag the rulers photographed alongside lesions rather than the tumors themselves, a classic case of “dirty” training data teaching the wrong cue. This session presents a novel model recovery framework designed to detect and heal vision models that have been compromised by adversarial attacks or dirty data. We introduce a teacher-student distillation architecture in which trusted historical versions of a model act as teachers to audit and correct the behavior of the current student model. This approach lets organizations remove (unlearn) the influence of poisoned data without losing legitimate new features or incurring the massive cost of retraining from scratch. Attendees will learn how to implement this self-correcting pipeline to ensure the long-term reliability of embedded vision products in the wild.
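
To make the idea concrete, below is a minimal sketch of one recovery step, assuming a PyTorch implementation: a frozen teacher loaded from a trusted historical checkpoint supervises the student on samples flagged as suspect via a distillation (KL-divergence) loss, while an ordinary supervised loss on vetted data preserves legitimate new features. All names and parameters here (`distillation_recovery_step`, `flagged_batch`, `clean_batch`, `temperature`, `alpha`) are illustrative assumptions, not the presenters' actual implementation.

```python
# Sketch of distillation-based model recovery (assumed PyTorch design, not the
# presenters' code). A frozen teacher, trusted because it predates the suspect
# data, supervises the current student on flagged samples, while ordinary
# cross-entropy on vetted data retains legitimately learned new features.

import torch
import torch.nn.functional as F
from torchvision import models


def distillation_recovery_step(student, teacher, flagged_batch, clean_batch,
                               optimizer, temperature=4.0, alpha=0.7):
    """One recovery step: unlearn on flagged data, retain on clean data."""
    student.train()
    optimizer.zero_grad()

    # 1) Unlearning term: pull the student's predictions on suspect inputs
    #    back toward the trusted teacher's softened outputs.
    x_flagged, _ = flagged_batch  # labels ignored: they may be poisoned
    with torch.no_grad():
        teacher_logits = teacher(x_flagged)
    student_logits = student(x_flagged)
    distill_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # 2) Retention term: supervised loss on vetted data so the student keeps
    #    the legitimate features learned after the teacher snapshot was taken.
    x_clean, y_clean = clean_batch
    task_loss = F.cross_entropy(student(x_clean), y_clean)

    loss = alpha * distill_loss + (1.0 - alpha) * task_loss
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Illustrative setup: in practice the teacher would be loaded from a trusted
    # historical checkpoint and the student from the current production weights.
    teacher = models.resnet18(num_classes=10).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    student = models.resnet18(num_classes=10)

    optimizer = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)

    # Dummy batches standing in for the auditing stage's flagged/clean loaders.
    flagged_batch = (torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
    clean_batch = (torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
    print(distillation_recovery_step(student, teacher, flagged_batch,
                                     clean_batch, optimizer))
```

In a real pipeline, the dummy batches would be replaced by the output of the auditing stage that flags suspect samples, and the weighting `alpha` would control how aggressively poisoned behavior is rolled back versus how much recent learning is retained.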

