(Free—advance registration is required)
The Vision Tank is the Embedded Vision Summit’s annual start-up competition, showcasing the best new ventures using computer vision or visual AI in their products or services. The competition is open to early-stage companies, and entrants are judged on criteria including technology innovation, business plan, team and business opportunity.
This year, for the first time, the Vision Tank finals will take place as a virtual, online event. On July 16, 2020, five finalists will pitch their companies and products to a panel of judges in front of a live online audience, which will vote for the Audience Choice Award. The Judges’ Award and the Audience Choice Award will both be announced at the end of the session. The five finalists will also have the opportunity to participate in the Embedded Vision Summit, taking place as a virtual event over four days in September 2020.
Two awards are given out each year: the Judges’ Award and the Audience Choice Award. The winner of the Vision Tank Judges’ Award will receive a $5,000 cash prize, and both winners will each receive a one-year membership in the Edge AI and Vision Alliance. They’ll also get one-on-one advice from the judges, as well as valuable introductions to potential investors, customers, employees and suppliers.
Plus, they will benefit from significant publicity associated with participation in the Embedded Vision Summit, the premier conference and trade show for innovators adding computer vision and AI to products.
Vision Tank applications were accepted until February 14, 2020, and applicants were asked to provide:
Five finalists will be announced on May 15, 2020.
Questions? Please email us at visiontank@edge-ai-vision.com.
BlinkAI Technologies utilizes machine learning to enhance sensor performance, extending the range of what cameras can see and detect in the real world. Building upon proprietary deep learning techniques, the company has developed robust low-light video inference deployed on efficient low-power devices for camera-embedded systems.
Strayos is a 3D visual AI platform that uses drone images to reduce cost and improve efficiency on job sites. Its software helps mining and quarry operators optimize the placement of drill holes and the quantities of explosives, and improves site productivity and safety by providing highly accurate survey data analytics.
Entropix provides better vision for computer vision. Its patented technology employs dual-sensor cameras with AI and deep learning software to extract extreme levels of detail from video and still images for ultra-accurate intelligent video analytics. This patented computational resolution reconstruction supercharges detection and classification in video data analytics.
Robotic Materials equips robots with human-like manipulation skills for the robotics industry. The company provides a sensing hand that combines tactile sensing, stereo vision and high-performance embedded vision to mimic the tight integration of sensing, actuation, computation and communication found in natural systems.
Vyrill is focused on helping brands understand and interpret the massive amount of user-generated video content (UGVC) on social media and the web. It offers a proprietary AI-powered platform for UGVC discovery, analytics, licensing and content marketing that helps brand marketers better understand their customers.
Libraries designed to make vibration analysis simpler, faster and more affordable, predominantly (but not exclusively) in the predictive maintenance, home appliance and safety/security sectors
Cartesiam was born from the vision that it was time to make objects intelligent, and that doing so should be simple, rapid and affordable. Its patented solution, NanoEdge AI, enables unsupervised learning directly on the device, which makes it unique: other edge AI solutions require learning in the cloud and only allow inference at the edge.
The libraries produced by Cartesiam’s NanoEdge AI Studio are designed to analyze vibration (accelerometers with 1 to 3 axes), electricity, magnetic fields (Hall sensors), ultrasound, sound and volatile organic compounds (VOC). Devices and sensors leveraging Cartesiam’s libraries can be found on every continent and are mostly deployed for predictive maintenance, home appliance and safety/security use cases. Cartesiam’s R&D is in Toulon, France, and the company has offices in Paris, New York and Munich.
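As a rough illustration of the kind of on-device, unsupervised vibration analysis described above, the sketch below learns a baseline from “normal” 3-axis accelerometer windows and flags windows that deviate from it. It is a minimal, generic Python example; the function names, thresholds and synthetic data are assumptions for illustration, not Cartesiam’s NanoEdge AI library.

```python
# Minimal, generic sketch of unsupervised anomaly detection on 3-axis
# accelerometer windows: learn a baseline from "normal" vibration, then
# flag windows that deviate from it. Illustration only -- NOT Cartesiam's
# NanoEdge AI library; all names, thresholds and data are made up.
import numpy as np

def fit_baseline(windows: np.ndarray):
    """windows: (n_windows, window_len, 3) samples from normal operation."""
    rms = np.sqrt((windows ** 2).mean(axis=1))        # per-window, per-axis RMS
    return rms.mean(axis=0), rms.std(axis=0) + 1e-9   # baseline mean and spread

def anomaly_score(window: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """How far (in standard deviations) a new window's RMS is from the baseline."""
    rms = np.sqrt((window ** 2).mean(axis=0))
    return float(np.abs((rms - mean) / std).max())

# "Learning" phase on the device: observe healthy vibration only.
normal = np.random.normal(0.0, 0.05, size=(200, 256, 3))
mean, std = fit_baseline(normal)

# Inference phase: a window with amplified vibration should score high.
suspect = np.random.normal(0.0, 0.25, size=(256, 3))
print("anomaly" if anomaly_score(suspect, mean, std) > 4.0 else "normal")
```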
Augmented reality (AR) glasses for the ophthalmic field, built from hardware and proprietary software and serving as a visual aid that helps improve vision for people with AMD (age-related macular degeneration), an unmet need
Eyedaptic is a software technology company addressing the large unmet need in the ophthalmic field of AMD (age-related macular degeneration) by developing visually assistive solutions based on open-market augmented reality (AR) glasses with its proprietary simulated natural vision software embedded. Visual intelligence is applied by algorithms that interpret the image, enhance and manipulate it for display, and respond to interactive user input for a natural viewing experience.
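To make the idea of algorithmic image enhancement for a low-vision aid concrete, here is a minimal, generic sketch that magnifies the central field of view and boosts contrast. The function name and the zoom and gain values are arbitrary assumptions; this is not Eyedaptic’s proprietary software.

```python
# Minimal, generic sketch of the kind of image manipulation an AR low-vision
# aid might perform: magnify the central field of view and boost contrast.
# Illustration only -- NOT Eyedaptic's proprietary software.
import numpy as np

def enhance(frame: np.ndarray, zoom: float = 2.0, gain: float = 1.3) -> np.ndarray:
    """frame: HxWx3 uint8 image; returns a magnified, contrast-boosted view."""
    h, w = frame.shape[:2]
    ch, cw = int(h / (2 * zoom)), int(w / (2 * zoom))
    crop = frame[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw]
    # Nearest-neighbour upscale of the central crop back to full frame size.
    ys = np.arange(h) * crop.shape[0] // h
    xs = np.arange(w) * crop.shape[1] // w
    zoomed = crop[ys][:, xs]
    # Simple contrast stretch around mid-grey.
    boosted = np.clip((zoomed.astype(np.float32) - 128.0) * gain + 128.0, 0, 255)
    return boosted.astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(enhance(frame).shape)  # (480, 640, 3)
```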
Eye tracking for depth-sensing cameras in consumer devices, covering shopper research, mobile phones, laptops and many other applications, with a focus on attention sensing, a capability the market has been lacking
Eyeware develops 3D eye tracking software for depth-sensing cameras, allowing glasses-free gaze tracking in 3D using consumer 3D cameras. We offer a 3D eye tracking SDK that can be integrated into any device to enable attention sensing in cars, smartphones, laptops, robots and more.
With 3D eye tracking, we can enable real-world interactions for safer driving, more immersive gaming experiences, accessibility, shopper research and more. Target customers are OEMs, resellers and integrators that want to build 3D eye tracking directly into their products and solutions. Target industries include computers, phones and cars.
Autonomous indoor robotic devices that help production, distribution and retail businesses run the operations needed for smooth same-day delivery
Fast Sense has developed a computer vision-based module for safe and reliable indoor navigation by robotic devices. The Fast Sense X navigation suite is installed on UAVs to perform fully autonomous or semi-autonomous (telepresence mode) indoor flights. Collision-avoidance and route-planning algorithms make drones a reliable instrument for indoor missions in the hands of any employee.
Fast Sense products include: 1) a solution for data capture, inspection and exploration of indoor, hard-to-reach and confined spaces by drones, with an integrated telepresence mode and an easy-to-operate interface that does not require professional piloting skills; and 2) a solution for autonomous inspections of repeatable routes, including autonomous warehouse stock-taking by drone, with data recognition for integration with clients’ warehouse management systems (WMS).
Our solution considerably improves the safety, speed and quality of data capture and puts high-risk inspection areas and security zones under regular monitoring. Integrated telepresence features allow a single operator to control multiple inspection drones flying in different locations simultaneously.
The first AI-powered data platform for smart and safe city applications, such as traffic law enforcement and parking management, offering important insights and monitoring
Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic law enforcement and parking management. We partner with the world’s most innovative cities to deploy our vision-based mobile solution in a city’s existing transportation fleet and private vehicles to collect real-time data. Our solution consists of an intelligent camera, smart cloud, HD maps, and a web portal that can be easily accessed by city officials.
Our mobile intelligent camera device consists of four cameras with different image sensors and lenses depending on the use case, is capable of day and night operation, and is installed in a city’s existing transportation fleet to collect data that supports the enforcement of traffic laws. In real time, our device detects objects in the environment such as vehicles, parking meters, fire hydrants, pedestrians, lane lines and license plates, while the Location Engine fuses data from the camera, dual-frequency GNSS, wheel odometry and an IMU to track the camera’s position with centimeter accuracy, even in urban canyons.
When an event is detected, it uploads a video clip to our smart cloud for further processing. In the cloud, we build a fully annotated 3D semantic geometric map. To build the 3D geometric map, we track salient visual points in the environment. To annotate the geometric map, we segment the scene at a pixel-level and cross-register it against our 3D geometric map. This allows our camera to understand context and reason in 3D to enable it to not only detect the violation but also understand the severity and causation.
Finally, our inference engine combines these prior maps and real-time sensor data to reconstruct the scene in 3D like a game engine. We can add rules to our Reasoning Engine to fully automate the detection of any kind of traffic violation, eliminating the need for human review. Through the more efficient processing of traffic violations and other data services, our solution drastically improves traffic safety, eliminates traffic fatalities, and encourages transportation efficiency.
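The paragraph above describes adding rules over an annotated 3D map and real-time detections. The sketch below is a deliberately simplified, hypothetical version of that idea in Python: it checks whether a detected, localized vehicle falls inside an annotated bus-lane polygon and applies one rule. The class names, coordinates and threshold are assumptions; this is not Hayden AI’s actual Reasoning Engine.

```python
# Minimal, hypothetical sketch of the "rules over an annotated map" idea:
# a detection localized in map coordinates is tested against an annotated
# zone (e.g., a bus-lane polygon), and a simple rule flags a candidate
# violation. Illustration only -- NOT Hayden AI's actual Reasoning Engine.

def point_in_polygon(x, y, polygon):
    """Standard ray-casting test; polygon is a list of (x, y) vertices."""
    inside, j = False, len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Annotated zone from the semantic map and one real-time detection (made up).
bus_lane = [(0.0, 0.0), (0.0, 3.5), (120.0, 3.5), (120.0, 0.0)]
detection = {"cls": "car", "x": 40.2, "y": 1.8, "speed_mps": 0.0}

# Rule: a non-bus vehicle stopped inside the bus lane is a candidate violation.
if (detection["cls"] != "bus" and detection["speed_mps"] < 0.5
        and point_in_polygon(detection["x"], detection["y"], bus_lane)):
    print("candidate violation: vehicle stopped in bus lane")
```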
In addition, the data we collect allows us to generate insights that create significant revenue opportunities and help us accomplish our mission to eliminate traffic fatalities and congestion and achieve fair and equitable mobility for all.
A new platform built on deep learning model compression technology that shrinks computer vision models until they are small enough to be deployed individually on small edge devices, offering lower cost, faster inference and no privacy concerns
Nota provides computer vision solutions that remove the need for any server or cloud computing, based on an innovative deep learning model compression technology. Our Automatic Model Compression platform (AMC) shrinks computer vision models until they are small enough to be deployed individually on small edge devices, without relying on hand-crafted heuristics.
When the user supplies the deep learning model, the specification of the target device and the training data set, the platform carries out the compression entirely on its own and delivers a compressed model within a day.
AMC uses a variety of lightweight deep learning techniques, including quantization, pruning, knowledge distillation and filter decomposition, and combines these four techniques to reduce redundant parameters, required physical memory, computing power and other resources. Clients can now benefit from our stand-alone AI products at a lower cost, with faster inference and no privacy concerns.
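To make two of the techniques mentioned above more concrete, here is a minimal sketch of magnitude pruning and post-training dynamic quantization using PyTorch’s built-in utilities. The toy model, pruning ratio and quantized layer types are assumptions for illustration; this is not Nota’s AMC platform or API, which automates such choices.

```python
# Minimal sketch of two generic compression techniques: magnitude pruning and
# post-training dynamic quantization, via PyTorch's built-in utilities.
# Illustration only -- NOT Nota's AMC platform.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in vision model (hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# 1) Pruning: zero out the 50% of weights with the smallest L1 magnitude.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# 2) Quantization: convert Linear layers to dynamic int8 for inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# The compressed model still runs on a 32x32 RGB input.
dummy = torch.randn(1, 3, 32, 32)
print(quantized(dummy).shape)  # torch.Size([1, 10])
```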
A new 3D sensor modality for applications in autonomous things, focused on improving safety and helping manufacturers make decisions that mitigate liability risks and the cost of safe operation
Owl Autonomous Imaging has created a new sensor modality known as Intelligent 3D Thermal Ranging. For applications in autonomous things and more, these imagers perform in all weather, day or night, to deliver safe autonomous operation. The most decisive variable in the autonomous market will always be safety.
Manufacturers will continuously be tasked with determining which technology solutions most effectively mitigate liability risks and the cost of safe operation. Sensor technology must: (1) see through rain, fog, sleet, snow and exhaust; (2) distinguish living objects (pedestrians, cyclists and animals) from inanimate objects; (3) track high-speed objects ahead and to the sides while delivering object velocities (3D); and (4) discern objects in shadows and intense glare at both near and far distances (HD quality). Until now, no sensor has been demonstrated that addresses these pain points. Additionally, intelligent 3D thermal imaging is the ideal modality for anonymous people counting, test and measurement, and security applications. Owl has 15 awarded patents for its solution, with another 7 pending.
A system of advanced enterprise applications and hardware that redefines the standard for video processing systems through edge computing and on-device learning, eliminating reliance on cloud computing and an internet connection
Perspective Components is an artificial intelligence company developing advanced enterprise applications and hardware. The goal of their innovation is to redefine the standard of video processing systems through edge computing and on-device learning. Their approach to computer vision eliminates a reliance on cloud computing and allows for operation with no internet connection at all.
They offer enterprise applications including facial recognition, active shooter detection, and thermal screening for the COVID-19 pandemic response. Perspective Components wants to replace traditional video security systems with smarter products that provide tangible, day-to-day value for businesses.
A product used in many applications that solves the narrow field of view of conventional high-resolution microscopy by using an array of small microscopes to image a huge area at high resolution all at once
Ramona Optics has a new approach to microscopy that collects some of the largest images in the world at 28K resolution. This unprecedented amount of data provides unique context that allows advanced processing to draw out relationships not visible to the human eye. By advancing microscopy through the marriage of computation and optics, we help scientists make more insightful decisions, faster.
Solving robotic failure with a platform of flexible proprietary algorithms and tools that help you find the sensor combination that works for you and solves your most complex problems
Providing robots and drones with spatial understanding, including localisation, mapping and perception, requires deep expertise and often expensive hardware for robust operation. This is particularly challenging when the space is constantly changing. Traditionally, companies have tackled each problem separately, using individual sensors for specific parts of the perception stack. This approach increases cost and computational load.
SLAMcore spun out of Imperial College London in 2016, founded by some of the top minds in SLAM. Backed by renowned investors such as Amadeus Capital, SLAMcore has built a large team of leading experts in algorithm design and embedded edge AI whose sole focus is to provide robots and drones with the spatial understanding they need. With vision at the heart, our robust and flexible solutions fuse information from multiple sensors to deliver full-stack Spatial AI on low-cost hardware, available today.
Telemedicine appointments that connect children and families to psychologists via an internet platform for quick virtual assessments, using machine learning to gather behavioral data
Thrive is building the first consumer-facing platform that enables psychologists to administer online assessments for learning differences such as dyslexia, ADHD, and autism. Currently, a typical learning difference assessment costs $10k in the Bay Area, with client waitlists of 8+ months. Additionally, psychologists use a patchwork of clinical assessments, patient histories, and observational heuristics to make a diagnostic ‘judgment call’.
Thrive is building an end-to-end telehealth practice to (1) enable psychologists to administer learning-difference assessments via teleconference, (2) streamline and partially automate the assessment process, (3) capture video and auditory behavioral data and leverage ML to develop next-generation diagnostic tools, and (4) efficiently match psychologist labor to demand. In doing so, we relieve critical consumer pain points, reducing costs by 10x and cutting the time to get an answer from 8 months to 1 week, while using ML to increase the accuracy, precision and efficiency of diagnoses over time.
An API that makes computer vision applications more data- and energy-efficient, allowing savings in data collection and labeling, faster go-to-market, more reliable models in production and lighter deployment at the edge
At UpStride, we fundamentally reshape computations to make computer vision more data-efficient and energy-efficient. Users of our API can train models with up to 10X less data and/or increase accuracy. They can deploy more compact models that consume up to 3X less energy, no matter what hardware they use. This implies savings in data collection and labeling, faster go-to-market, more reliable models in production and lighter deployment at the edge.
To achieve this, we built a datatype that goes beyond linear algebra to optimize deep learning workloads. Instead of using floating-point or integer scalars, our datatype represents vectorial data as surfaces, which allows neural networks to better capture correlations between data channels. We do not compete with other players; our approach brings value in a unique way that can complement every software, hardware and data solution, moving onto a new curve of performance.
Celebrating its 10th year, #EVS21 is the perfect venue for sharing your insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in sensor-based AI. Learn more: https://embeddedvisionsummit.com/call-proposals/