With the increasing use of deep neural networks in computer vision applications, it has become harder for developers to explain how their algorithms work. This makes it difficult to establish trust and confidence among customers and other stakeholders, such as regulators, and it also hinders developers' ability to improve their solutions. In this talk, we introduce methods for enabling explainability in deep-learning-based computer vision solutions. We also illustrate several of these techniques with real-world examples, showing how they can be used to improve customer trust in computer vision models, to debug those models, to gain additional insights about the data, and to detect bias in models.