Camera systems used for computer vision at the edge are smarter than ever, but as long as they perceive the world in 2D, they remain limited for many applications because they lack information about the third dimension: depth. Sensing technologies that capture and integrate depth let us build smarter, safer applications across a wide variety of domains, including robotics, surveillance, AR/VR and gesture detection. In this presentation, we will examine three common technologies for optical depth sensing: stereo camera systems, time-of-flight (ToF) sensors and structured light systems. We will review the core ideas behind each technology, compare and contrast them, and identify the tradeoffs to consider when selecting a depth sensing technology for your application, focusing on accuracy, sensing range, performance in difficult lighting conditions, optical hardware requirements and more.