Register to join us on July 16 as a part of our live, online audience at the final round of the Vision Tank Start-Up Competition.
The Vision Tank is the Embedded Vision Summit’s annual start-up competition, showcasing the best new ventures using computer vision or visual AI in their products or services. The competition is open to early-stage companies, and entrants are judged on four criteria: technology innovation, business plan, team, and business opportunity.
This year, for the first time, the Vision Tank finals will take place as a virtual, online event. On July 16, 2020, each of the five finalists will pitch their company and product to a panel of judges in front of a live online audience, which will vote for the Audience Choice Award. The five finalists will also have an opportunity to participate in the Embedded Vision Summit, taking place as a virtual event across four days in September 2020.
Two awards are given out each year: the Judges’ Award and the Audience Choice Award. The winner of the Vision Tank Judges’ Award will receive a $5,000 cash prize, and both winners will each receive a one-year membership in the Edge AI and Vision Alliance. They’ll also get one-on-one advice from the judges, as well as valuable introductions to potential investors, customers, employees and suppliers.
Plus, they will benefit from significant publicity associated with participation in the Embedded Vision Summit, the premier conference and trade show for innovators adding computer vision and AI to products. For more information about the Embedded Vision Summit, please visit: https://embeddedvisionsummit.com.
Questions on the Vision Tank Start-Up Competition: Please email us at firstname.lastname@example.org.
Improving eyesight for people with age-related macular degeneration using augmented reality glasses powered by proprietary software.
Eyedaptic is a software technology company addressing a large unmet need in ophthalmology: AMD (Age-Related Macular Degeneration). The company develops visually assistive solutions by embedding its proprietary simulated natural vision software in open-market Augmented Reality (AR) glasses. Its algorithms apply visual intelligence to interpret each image, enhance and manipulate it for display, and respond to interactive user input for a natural viewing experience.
An artificial intelligence-powered data platform for traffic law enforcement and parking management.
Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic law enforcement and parking management. We partner with the world’s most innovative cities to deploy our vision-based mobile solution in a city’s existing transportation fleet and private vehicles to collect real-time data. Our solution consists of an intelligent camera, smart cloud, HD maps, and a web portal that can be easily accessed by city officials.
Our mobile intelligent camera device comprises four cameras with different image sensors and lenses, chosen per use case. It operates day and night and is installed in a city’s existing transportation fleet to collect data that supports the enforcement of traffic laws. In real time, the device detects objects in the environment such as vehicles, parking meters, fire hydrants, pedestrians, lane lines, and license plates, while its Location Engine fuses data from the cameras, dual-frequency GNSS, wheel odometry, and an IMU to track the camera’s position with centimeter accuracy, even in urban canyons.
When an event is detected, it uploads a video clip to our smart cloud for further processing. In the cloud, we build a fully annotated 3D semantic geometric map. To build the 3D geometric map, we track salient visual points in the environment. To annotate the geometric map, we segment the scene at a pixel-level and cross-register it against our 3D geometric map. This allows our camera to understand context and reason in 3D to enable it to not only detect the violation but also understand the severity and causation.
Finally, our inference engine combines these prior maps with real-time sensor data to reconstruct the scene in 3D, much like a game engine. We can add rules to our Reasoning Engine to fully automate the detection of any kind of traffic violation, eliminating the need for human review. Through more efficient processing of traffic violations and other data services, our solution drastically improves traffic safety, helps eliminate traffic fatalities, and encourages transportation efficiency.
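To make the rule-based idea concrete, here is a minimal sketch of how a single traffic rule might be expressed over detected, map-registered objects. All class names, labels, and the clearance threshold are illustrative assumptions; Hayden AI’s actual Reasoning Engine implementation is not public.

```python
# Hypothetical sketch of one rule in a traffic-violation reasoning engine:
# flag vehicles parked too close to a fire hydrant, given objects detected
# and localized in shared map coordinates.
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str                      # e.g. "vehicle", "fire_hydrant"
    position_m: tuple               # (x, y) in map coordinates, meters


def distance(a, b):
    """Euclidean distance between two (x, y) points in meters."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def hydrant_parking_violations(objects, min_clearance_m=4.5):
    """Return (vehicle, hydrant) pairs closer than the legal clearance."""
    hydrants = [o for o in objects if o.label == "fire_hydrant"]
    vehicles = [o for o in objects if o.label == "vehicle"]
    violations = []
    for v in vehicles:
        for h in hydrants:
            if distance(v.position_m, h.position_m) < min_clearance_m:
                violations.append((v, h))
    return violations
```

New rules of this shape (a predicate over labeled, localized objects) can be added independently, which is the property that lets such a system automate new violation types without retraining the detector.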
In addition, this data we collect allows us to generate insights that create significant revenue opportunities and helps us accomplish our mission to eliminate traffic fatalities and congestion and achieve fair and equitable mobility for all.
High-definition 3D thermal Imagers with high-precision ranging to enable safe operation of autonomous things in all weather, day or night.
Owl Autonomous Imaging has created a new sensor modality known as Intelligent 3D Thermal Ranging. For applications in autonomous things and beyond, these imagers perform in all weather, day or night, to deliver safe autonomous operation. The most decisive variable in the autonomous market will always be safety.
Manufacturers will continually be tasked with determining which technology solutions most effectively mitigate liability risks and costs for safe operation. Sensor technology must: (1) see through rain, fog, sleet, snow, and exhaust; (2) distinguish living objects (pedestrians, cyclists, and animals) from inanimate objects; (3) track high-speed objects ahead and to the sides while delivering object velocities (3D); and (4) discern objects in shadows and intense glare, at both near and far distances (HD quality). Until now, no sensor has been demonstrated that addresses these pain points. Additionally, Intelligent 3D Thermal Ranging is an ideal modality for anonymous people counting, test and measurement, and security applications. Owl has been awarded 15 patents for its solution, with another 7 pending.
The first gigapixel microscope that can capture cellular-level detail over hundreds of square centimeters.
Ramona Optics has a new approach to microscopy that collects some of the largest images in the world in 28K resolution. This unprecedented amount of data provides unique context which allows advanced processing to draw relationships that are not visible to the human eye. By advancing microscopy through the marriage of computation and optics, we help scientists make more insightful decisions, faster.
Simultaneous localization and mapping (SLAM) algorithms that allow robots and drones to truly understand the space around them.
Providing robots and drones with spatial understanding (localization, mapping, and perception) requires deep expertise and often expensive hardware for robust operation. This is particularly challenging when the space is constantly changing. Traditionally, companies have tackled each problem separately, using individual sensors for specific parts of the perception stack. This approach increases cost and computational load.
SLAMcore spun out from Imperial College London in 2016, founded by some of the top minds in SLAM. Backed by renowned investors such as Amadeus Capital, SLAMcore has built a large team of leading experts in algorithm design and embedded edge AI whose sole focus is to provide robots and drones with the spatial understanding they need. With vision at the heart, our robust and flexible solutions fuse information from multiple sensors to deliver full-stack Spatial AI on low-cost hardware, available today.
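The core benefit of fusing multiple sensors can be illustrated with the simplest possible case: combining two noisy estimates of the same quantity by inverse-variance weighting, the basic principle behind Kalman-style fusion used in SLAM back ends. The sensor names and noise values here are illustrative assumptions, not SLAMcore’s actual pipeline.

```python
# Minimal illustration of multi-sensor fusion: combine two noisy scalar
# position estimates (e.g. visual odometry and wheel odometry) by
# inverse-variance weighting. The fused estimate is always at least as
# certain (lower variance) as the better of the two inputs.


def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var


# A precise camera estimate (variance 0.01) dominates a noisy wheel-odometry
# estimate (variance 0.25), pulling the result close to the camera reading:
pos, var = fuse(10.0, 0.01, 10.8, 0.25)
```

Full SLAM systems generalize this idea to whole trajectories and maps, but the payoff is the same: cheap, individually unreliable sensors combine into an estimate better than any single sensor provides.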
See 5 companies developing the next generation of computer vision products in #AIProcessors, #AISoftware & #Algorithms, Cameras & Sensors, #DeveloperTools, and #AutomotiveSolutions. June 3 | 9-10 am PT | Online | Live Q&A | Info & Free Reg: https://bit.ly/3eqdeAG
The virtual playground isn’t stopping us from showcasing new products and technologies this year. In addition to 100+ talks and product demos, connect with dozens of exhibitors and sponsors live online! Sep 15-25 | Info & reg: http://bit.ly/2TkkLZ0 #deeplearning #ai #visualai