Date: Wednesday, May 18 (Main Conference Day 2)
Start Time: 2:05 pm
End Time: 2:35 pm
From a parent’s perspective, toys should be safe, private, entertaining and educational, with the ability to adapt and grow with the child. For natural interaction, a toy must see, hear, feel and speak in a human-like manner. Thanks to AI, we can now deliver near-human accuracy on computer vision, speech recognition, speech synthesis and other human interaction tasks. However, these technologies demand substantial compute performance, making them difficult to implement at the edge with today’s typical hardware. Cloud computing is not attractive for toys due to privacy risks and the importance of low latency for human-like interaction. We have developed a dedicated platform capable of executing multiple AI-based tasks in parallel at the edge, with very low power consumption and a small physical footprint, enabling toys to incorporate sophisticated AI-based perception and communication. In this talk, we will introduce this platform, which includes all of the hardware components required for next-generation toys.