Check out the pitch videos of the five finalists that competed in the 2020 Vision Tank!
2020 Judges’ Award Winner
SLAMcore
Simultaneous localization and mapping (SLAM) algorithms that allow robots and drones to truly understand the space around them.
Providing robots and drones with spatial understanding, including localization, mapping, and perception, requires deep expertise and often expensive hardware for robust operation. This is particularly challenging when the space is constantly changing. Traditionally, companies have tackled each problem separately, using individual sensors for specific parts of the perception stack. This approach increases cost and computational load.
SLAMcore spun out of Imperial College London in 2016, founded by some of the top minds in SLAM. Backed by renowned investors such as Amadeus Capital, SLAMcore has built a large team of leading experts in algorithm design and embedded edge AI whose sole focus is to provide robots and drones with the spatial understanding they need. With vision at the heart, our robust and flexible solutions fuse information from multiple sensors to deliver full-stack Spatial AI on low-cost hardware, available today.
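To give a flavor of the kind of multi-sensor fusion described above, here is a minimal, hypothetical sketch (not SLAMcore's actual implementation) of a complementary filter that blends fast IMU dead reckoning with slower, drift-free visual pose estimates; the rates, noise levels, and `alpha` weight are illustrative assumptions.

```python
import numpy as np

def fuse_pose(imu_pose, visual_pose, alpha=0.98):
    """Complementary filter: trust the IMU for short-term motion,
    the visual estimate for long-term drift correction.
    alpha is a hypothetical tuning weight, not a SLAMcore parameter."""
    return alpha * imu_pose + (1.0 - alpha) * visual_pose

# Toy loop: integrate the IMU at a high rate, correct with vision when available.
position = np.zeros(3)
velocity = np.zeros(3)
dt = 0.005  # 200 Hz IMU (assumed rate)

for step in range(1000):
    accel = np.array([0.1, 0.0, 0.0])   # placeholder accelerometer reading
    velocity += accel * dt              # dead-reckon velocity
    position += velocity * dt           # dead-reckon position
    if step % 40 == 0:                  # visual pose arrives at ~5 Hz
        visual_position = position + np.random.normal(0, 0.01, 3)  # stand-in
        position = fuse_pose(position, visual_position)
```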
2020 Audience Choice Award Winner
Eyedaptic
Improving eyesight for people with age-related macular degeneration using augmented reality glasses powered by proprietary software.
Eyedaptic is a software technology company addressing the large unmet need in the ophthalmic field of age-related macular degeneration (AMD) by developing visually assistive solutions based on open-market augmented reality (AR) glasses and embedding our proprietary simulated natural vision software. Visual intelligence is applied by algorithms that interpret the image, enhance and manipulate it for display, and respond to interactive user input for a natural viewing experience.
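As a rough illustration of this kind of enhance-and-manipulate pipeline, the sketch below magnifies the central field and boosts contrast, two adjustments commonly useful for central-vision deficits. It is a hypothetical example, not Eyedaptic's proprietary algorithm, and the `magnify` and `contrast` parameters are made-up assumptions.

```python
import numpy as np

def enhance_for_amd(image, magnify=1.5, contrast=1.3):
    """Hypothetical enhancement pass for a central-vision deficit:
    magnify the center of the frame and apply a linear contrast stretch.
    Illustrative only; not Eyedaptic's actual software."""
    h, w = image.shape[:2]
    # Crop the central region and resample it back up to full size.
    ch, cw = int(h / magnify), int(w / magnify)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    center = image[y0:y0 + ch, x0:x0 + cw]
    ys = np.linspace(0, ch - 1, h).astype(int)
    xs = np.linspace(0, cw - 1, w).astype(int)
    magnified = center[ys][:, xs]
    # Simple linear contrast stretch around the mid-gray point.
    out = (magnified.astype(np.float32) - 128.0) * contrast + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in camera frame
display = enhance_for_amd(frame)
```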
Owl Autonomous Imaging
High-definition 3D thermal imagers with high-precision ranging to enable safe operation of autonomous things in all weather, day or night.
Owl Autonomous Imaging has created a new sensor modality known as Intelligent 3D Thermal Ranging. For applications in autonomous things and more, these imagers perform in all weather, day or night, to deliver safe autonomous operation. The most decisive variable in the autonomous market will always be safety.
Manufacturers will continuously be tasked with determining which technology solutions most effectively mitigate liability risks and the cost of safe operation. Sensor technology must:
(1) see through rain, fog, sleet, snow, and exhaust;
(2) distinguish living objects (pedestrians, cyclists, and animals) from inanimate objects;
(3) track high-speed objects ahead and to the sides while delivering object velocities in 3D; and
(4) discern objects in shadows and in intense glare, at both near and far distances, with HD quality.
Until now, no sensor has been demonstrated that addresses these pain points. Additionally, Intelligent 3D Thermal Ranging is the ideal modality for anonymous people counting, test and measurement, and security applications. Owl has 15 awarded patents for this solution, with another 7 pending.
Hayden AI
An artificial intelligence-powered data platform for traffic law enforcement and parking management.
Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic law enforcement and parking management. We partner with the world’s most innovative cities to deploy our vision-based mobile solution in a city’s existing transportation fleet and private vehicles to collect real-time data. Our solution consists of an intelligent camera, smart cloud, HD maps, and a web portal that can be easily accessed by city officials.
Our mobile intelligent camera device consists of four cameras with different image sensors and lenses depending on the use case. It is capable of day and night operation and is installed in a city's existing transportation fleet to collect data that supports the enforcement of traffic laws. In real time, the device detects objects in the environment such as vehicles, parking meters, fire hydrants, pedestrians, lane lines, and license plates, while its Location Engine fuses data from the camera, dual-frequency GNSS, wheel odometry, and an IMU to track the camera's position with centimeter accuracy, even in urban canyons.
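The kind of odometry-plus-GNSS fusion described here is classically done with a Kalman filter. Below is a minimal 2-D sketch of that idea, not Hayden AI's Location Engine; the process and measurement noise values are made-up assumptions.

```python
import numpy as np

# Minimal 2-D Kalman filter fusing wheel-odometry motion with GNSS fixes.
x = np.zeros(2)          # position estimate (east, north), meters
P = np.eye(2) * 1.0      # estimate covariance
Q = np.eye(2) * 0.02     # odometry process noise (assumed)
R = np.eye(2) * 0.25     # GNSS measurement noise (assumed)

def predict(delta):
    """Advance the estimate by a wheel-odometry displacement."""
    global x, P
    x = x + delta
    P = P + Q

def update(gnss_fix):
    """Correct the estimate with a GNSS position measurement."""
    global x, P
    K = P @ np.linalg.inv(P + R)   # Kalman gain (measurement model H = I)
    x = x + K @ (gnss_fix - x)
    P = (np.eye(2) - K) @ P

predict(np.array([0.5, 0.1]))    # odometry: moved ~0.5 m east since last step
update(np.array([0.48, 0.12]))   # GNSS reports roughly the same place
```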
When an event is detected, the device uploads a video clip to our smart cloud for further processing. In the cloud, we build a fully annotated 3D semantic-geometric map. To build the 3D geometric map, we track salient visual points in the environment. To annotate it, we segment the scene at the pixel level and cross-register the segmentation against the 3D geometric map. This allows our camera to understand context and reason in 3D, so it can not only detect a violation but also understand its severity and causation.
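One common way to cross-register a pixel-level segmentation against a 3D point map is to project each map point into the segmented image and read off its class. The sketch below shows that idea under a simplified pinhole camera model with hypothetical intrinsics; it is illustrative, not Hayden AI's pipeline.

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],   # hypothetical pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def annotate_points(points_cam, seg_mask):
    """points_cam: (N, 3) points in the camera frame (z forward).
    seg_mask: (H, W) integer class IDs from pixel-level segmentation."""
    labels = np.full(len(points_cam), -1, dtype=int)   # -1 = unlabeled
    h, w = seg_mask.shape
    for i, p in enumerate(points_cam):
        if p[2] <= 0:                  # point is behind the camera
            continue
        u, v, _ = (K @ p) / p[2]       # project to pixel coordinates
        u, v = int(round(u)), int(round(v))
        if 0 <= u < w and 0 <= v < h:
            labels[i] = seg_mask[v, u]  # transfer the semantic class
    return labels

points = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 8.0]])
mask = np.zeros((480, 640), dtype=int)
mask[200:280, 280:360] = 3             # a blob of (hypothetical) class 3
print(annotate_points(points, mask))   # -> [3 0]
```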
Finally, our inference engine combines these prior maps with real-time sensor data to reconstruct the scene in 3D, much like a game engine. We can add rules to our Reasoning Engine to fully automate the detection of any kind of traffic violation, eliminating the need for human review. Through more efficient processing of traffic violations and other data services, our solution drastically improves traffic safety, reduces traffic fatalities, and encourages transportation efficiency.
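To make the rules-over-a-reconstructed-scene idea concrete, here is a toy rule that flags a vehicle dwelling in a bus lane past a threshold. The rule, classes, and threshold are hypothetical, standing in for whatever rules a city would configure.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: int
    object_class: str
    seconds_in_bus_lane: float

BUS_LANE_DWELL_LIMIT = 5.0  # assumed threshold, seconds

def bus_lane_violations(objects):
    """Return the IDs of vehicles that overstayed in the bus lane."""
    return [o.object_id for o in objects
            if o.object_class == "vehicle"
            and o.seconds_in_bus_lane > BUS_LANE_DWELL_LIMIT]

scene = [TrackedObject(1, "vehicle", 9.2),
         TrackedObject(2, "vehicle", 0.4),
         TrackedObject(3, "cyclist", 12.0)]   # cyclists are exempt in this toy rule
print(bus_lane_violations(scene))             # -> [1]
```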
In addition, the data we collect allows us to generate insights that create significant revenue opportunities and help us accomplish our mission: to eliminate traffic fatalities and congestion and achieve fair and equitable mobility for all.
Ramona Optics
The first gigapixel microscope that can capture cellular-level detail over hundreds of square centimeters.
Ramona Optics has developed a new approach to microscopy that collects some of the largest images in the world, at 28K resolution. This unprecedented amount of data provides unique context, allowing advanced processing to reveal relationships that are not visible to the human eye. By advancing microscopy through the marriage of computation and optics, we help scientists make more insightful decisions, faster.