In the domain of AI vision, we have seen an explosion of models that can reliably detect objects of many types, from people to license plates. While these models are impressive, real-world applications often need to distinguish among a large number of custom items. For example, in addition to knowing that there is a car, you may want to know its exact make and model. For these sorts of tasks, what you really want is a visual search system that can identify an object from a catalog without requiring a new model to be trained whenever categories are added. In this talk, we describe how embedding models can be used to perform visual search in such applications. We'll show how to use and fine-tune these models, including tips on training an embedding model so that new objects can be added without retraining.
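To make the idea concrete, here is a minimal sketch of embedding-based visual search, not the speakers' actual pipeline: an off-the-shelf pretrained backbone (a fine-tuned embedding model would slot in the same way) maps catalog images to vectors, a query image is matched to its nearest catalog entry by cosine similarity, and new items are added simply by embedding them. The backbone choice, file paths, and item names below are placeholders.

```python
# Illustrative sketch of embedding-based visual search (assumptions: torchvision
# backbone as the embedding model; catalog paths/names are hypothetical).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone with the classification head removed, used as an
# off-the-shelf embedding model.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Map an image file to a unit-length embedding vector."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vector = backbone(image)
    return F.normalize(vector, dim=-1).squeeze(0)

# Index the catalog: one embedding per known item (placeholder paths).
catalog = {name: embed(path) for name, path in [
    ("honda-civic-2019", "catalog/honda_civic_2019.jpg"),
    ("toyota-corolla-2021", "catalog/toyota_corolla_2021.jpg"),
]}

def search(query_path: str) -> str:
    """Return the catalog item whose embedding is closest to the query (cosine similarity)."""
    query = embed(query_path)
    return max(catalog, key=lambda name: float(query @ catalog[name]))

# Adding a new make/model is just another catalog entry -- no retraining needed.
catalog["ford-f150-2022"] = embed("catalog/ford_f150_2022.jpg")
```

Because matching happens purely in embedding space, extending the catalog is an indexing operation rather than a training run, which is the property the talk focuses on.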