We commonly train deep neural networks (DNNs) on existing data and then use the trained model to make predictions on new data. Once trained, such a predictive model approximates a static mapping from its inputs to its predicted outputs. In many applications, however, the model is deployed on data whose distribution changes over time, and its predictive performance degrades as a result. In this talk, we will introduce the problem of concept drift in deployed DNNs. We will cover the types of concept drift that occur in the real world, from small shifts in the distribution of the predicted classes all the way to the introduction of a new, previously unseen class. We will then discuss approaches to detecting these changes and identifying the point in time at which it becomes necessary to update the training dataset and retrain the model. We will conclude the talk with a real-world case study.
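The detection approaches themselves are left to the talk. As a rough illustration of the general idea only (not taken from the talk), the sketch below compares a reference window of model confidence scores against a recent window using a two-sample Kolmogorov-Smirnov test and flags drift when the two distributions diverge. The choice of drift signal (softmax confidences), the window sizes, and the significance threshold are all illustrative assumptions.

```python
# Illustrative sketch only: flag distribution shift in model confidence scores
# with a two-sample Kolmogorov-Smirnov test, one common generic drift signal.
# The alpha threshold and window contents below are arbitrary assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference_scores: np.ndarray,
                   recent_scores: np.ndarray,
                   alpha: float = 0.01) -> bool:
    """Return True if the recent confidence distribution differs
    significantly from the reference (deployment-time) distribution."""
    statistic, p_value = ks_2samp(reference_scores, recent_scores)
    return p_value < alpha

# Example with synthetic confidences: the "recent" window is shifted lower,
# mimicking a model growing less certain as the input data drifts.
rng = np.random.default_rng(0)
reference = rng.beta(8, 2, size=1000)  # confident predictions early on
recent = rng.beta(5, 3, size=1000)     # less confident predictions later
print(drift_detected(reference, recent))  # -> True for this synthetic shift
```

In practice, the monitored signal could just as well be input feature statistics or label distributions; once drift is flagged, the point in time at which the test first fires is a natural candidate for when to refresh the training dataset and retrain.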