  • AI-Powered, Cloud-Enabled Service Robots

     

    Autonomous and social robots for hospitality, retail, health care, education, security, and entertainment.

     

  • Accelerating Time-To-Market

     

    Advanced robotics, sensors and AI

    Development Centers

    A full life-cycle of research, development and product integration, powered by leading Israeli and Chinese university labs, startups and industry partners.

    Market-Ready Solutions

    We create forward-looking inventions in robotics, sensors and intelligent devices, from applied research all the way through product development.

  • The Cognitive-Social Robotics Lab

    We believe that real breakthroughs happen when brilliant engineers, scientists and entrepreneurs team up to push beyond accepted boundaries.

     

    The Cogni-Tech Lab

    Inspired by human behavior and driven by technological innovation, the Cogni-Tech Lab is focused on robot orientation and bi-directional human-robot interaction. The lab's platform links interdisciplinary research in the fields of computer, mechanical, electrical, biomedical and industrial engineering with psychology, life sciences and neurosciences.

     

     

    Cognitive-Social Robotics

    The new Cognitive-Social Robotics Innovation Lab focuses on robot orientation, SLAM, and acoustic and visual information discrimination in noisy, unstructured environments. We improve the interpretation of human speech and facial expressions through emotion and behavioral sensing, with recognition based on voice, vision and context.

  • Our Latest Work: From Lab to Market

    Robot Orientation

    Localization of speakers in noisy environments

    Localization of human speakers is essential for natural human-robot interaction. However, in real-life environments such as public places, localization is a major challenge due to strong reverberation, multiple speakers, and noise. We are developing a reverberation-robust speaker localization algorithm for spherical microphone arrays, applied to arrays mounted around a robot head, with a real-time version for a range of real-life acoustic conditions.
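    We do not publish the algorithm itself here, but as a simplified illustration of the kind of processing involved, the sketch below estimates the time difference of arrival between two microphones with GCC-PHAT, a standard reverberation-robust building block for speaker localization. The sampling rate, microphone spacing and function names are assumptions for the example, not our production code.

    ```python
    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
        """Estimate the time delay of `sig` relative to `ref` using GCC-PHAT.

        The phase transform (PHAT) whitens the cross-spectrum, which keeps the
        correlation peak sharp even under strong reverberation.
        """
        n = sig.shape[0] + ref.shape[0]
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        R /= np.abs(R) + 1e-15                      # PHAT weighting
        cc = np.fft.irfft(R, n=interp * n)          # interpolated cross-correlation
        max_shift = interp * n // 2
        if max_tau is not None:
            max_shift = min(int(interp * fs * max_tau), max_shift)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(interp * fs)           # delay in seconds

    # Synthetic check: the "right" microphone hears the same noise 3 samples later.
    fs, spacing_m, c = 16000, 0.1, 343.0            # assumed head-sized mic spacing
    rng = np.random.default_rng(0)
    s = rng.standard_normal(4096)
    mic_left, mic_right = s, np.roll(s, 3)
    tau = gcc_phat(mic_right, mic_left, fs, max_tau=spacing_m / c)
    angle_deg = np.degrees(np.arcsin(np.clip(tau * c / spacing_m, -1.0, 1.0)))
    ```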

     

    Microphone array for superior acoustic information

    Microphone arrays are an important front end in human-robot interaction. The number and positioning of the microphones strongly affect the amount of acoustic information an array can record, which in turn influences speaker localization, speech enhancement and recognition. By quantifying the acoustic information an array captures, we developed a method for designing optimal microphone arrays that improves speaker localization and speech enhancement.
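    To illustrate how microphone count and placement shape the acoustic information an array captures, here is a hedged sketch that computes the delay-and-sum beampattern of a uniform circular array; comparing geometries shows how side lobes and spatial aliasing change. The radius, frequency and microphone counts are assumed values, not our actual designs.

    ```python
    import numpy as np

    def circular_array_beampattern(num_mics, radius_m, freq_hz, steer_deg=0.0, c=343.0):
        """Delay-and-sum beampattern of a uniform circular array in the horizontal plane.

        Returns the array response (0..1) over 360 look directions; 1 means the
        steered direction. Comparing different `num_mics` at the same radius shows
        how extra microphones suppress grating and side lobes.
        """
        mic_angles = 2 * np.pi * np.arange(num_mics) / num_mics
        mic_xy = radius_m * np.stack([np.cos(mic_angles), np.sin(mic_angles)], axis=1)
        k = 2 * np.pi * freq_hz / c                                 # wavenumber

        look = np.radians(np.arange(0, 360))
        u = np.stack([np.cos(look), np.sin(look)], axis=1)          # candidate directions
        u0 = np.array([np.cos(np.radians(steer_deg)), np.sin(np.radians(steer_deg))])

        phase = k * (mic_xy @ u.T - (mic_xy @ u0)[:, None])         # per-mic phase mismatch
        return np.abs(np.exp(1j * phase).mean(axis=0))

    # A 4-mic vs. 8-mic head-sized array (assumed 10 cm radius) at 2 kHz.
    pattern_8 = circular_array_beampattern(8, 0.10, 2000.0)
    pattern_4 = circular_array_beampattern(4, 0.10, 2000.0)   # sparser: more aliasing lobes
    ```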

     

    Speech enhancement for speech recognition

    In noisy, reverberant environments, speech recognition rates tend to drop significantly. Microphone arrays with multi-channel signal processing offer a proven approach to speech enhancement and can substantially improve recognition under noise and reverberation.
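    As a rough sketch of the multi-channel idea (not our actual enhancement pipeline), the example below implements a basic frequency-domain delay-and-sum beamformer: each channel is time-aligned toward an assumed source direction and the channels are averaged, reinforcing the target speech while averaging down uncorrelated noise. The array geometry and steering direction are assumptions.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def delay_and_sum(channels, mic_positions, source_dir, fs):
        """Time-align each channel toward `source_dir` and average the channels.

        channels:      (num_mics, num_samples) time-domain signals
        mic_positions: (num_mics, 3) microphone coordinates in metres
        source_dir:    unit vector pointing from the array toward the speaker

        Speech from the steered direction adds coherently, while diffuse noise
        and reverberation are averaged down.
        """
        num_mics, num_samples = channels.shape
        # A far-field wavefront reaches microphones with larger p.u earlier,
        # so those channels must be delayed the most before summing.
        delays = mic_positions @ source_dir / SPEED_OF_SOUND
        delays -= delays.min()                               # keep delays non-negative
        freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
        spectra = np.fft.rfft(channels, axis=1)
        # Apply each delay as a phase shift in the frequency domain, then average.
        aligned = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
        return np.fft.irfft(aligned.mean(axis=0), n=num_samples)
    ```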

     

    Visual localization and mapping

    Visual SLAM navigates a robot in unknown environments. By creating a map and simultaneously locating the robot's exact position within it, even in dynamic, complex and large environments, our fusion framework combines different Visual SLAM algorithms to estimate the robot's location, enabling operation over large distances and long periods without drift.
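    Our fusion framework is not reproduced here; as a minimal illustration of one Visual SLAM building block, the sketch below estimates the relative camera pose between two frames using ORB features and an essential matrix with RANSAC (OpenCV). The camera intrinsics are placeholder values, and a full system would add keyframes, mapping, loop closure and the fusion described above.

    ```python
    import cv2
    import numpy as np

    def relative_pose(img_prev, img_curr, K):
        """Estimate the camera rotation and unit-scale translation between two frames.

        This is only a Visual SLAM front end: feature matching plus essential-matrix
        estimation with RANSAC on the matched points.
        """
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img_prev, None)
        kp2, des2 = orb.detectAndCompute(img_curr, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t   # translation is known only up to scale from a single camera

    # Hypothetical pinhole intrinsics for a 640x480 robot camera.
    K = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])
    ```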

     

    Construction of a dynamic map for robots

    Producing a high-quality, globally consistent map in real time with the help of one or more robots, using semantic labeling of the map geometries so that a robot can plan its course in dynamic and complex environments. By adding layers to the map, robots can collaboratively contribute their local views and update the shared map, resulting in lower energy consumption and serving more customers in less time.
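    A hedged sketch of how such a layered, shareable map might be organized in code: a static layer for permanent structure, a time-stamped dynamic layer that robots update collaboratively, and a semantic layer used for planning. The class and field names are illustrative assumptions, not our actual data structures.

    ```python
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    Cell = Tuple[int, int]          # grid coordinates

    @dataclass
    class LayeredMap:
        """A shared, globally consistent grid map split into layers.

        static_layer:   occupancy of permanent structure (walls, counters)
        dynamic_layer:  recently observed movable obstacles, stamped with a time
        semantic_layer: labels such as "gate A3" or "checkout" used for planning
        """
        static_layer: Dict[Cell, float] = field(default_factory=dict)
        dynamic_layer: Dict[Cell, Tuple[float, float]] = field(default_factory=dict)  # cell -> (occupancy, timestamp)
        semantic_layer: Dict[Cell, str] = field(default_factory=dict)

        def merge_local_view(self, observations: Dict[Cell, float], stamp: float) -> None:
            """Fold one robot's local occupancy observations into the shared dynamic layer."""
            for cell, occupancy in observations.items():
                self.dynamic_layer[cell] = (occupancy, stamp)

        def is_free(self, cell: Cell, now: float, decay_s: float = 30.0) -> bool:
            """A cell is traversable if it is statically free and no fresh dynamic obstacle occupies it."""
            if self.static_layer.get(cell, 0.0) > 0.5:
                return False
            occupancy, stamp = self.dynamic_layer.get(cell, (0.0, float("-inf")))
            return not (occupancy > 0.5 and now - stamp < decay_s)
    ```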

     

    Understanding Humans by Robots
     

    Humans’ intent based on facial expressions

    The intention to approach or avoid an unfamiliar person is communicated through facial expressions and their temporal order. We are developing automatic comprehension of social interactions, as well as natural interfaces between humans, robots and machines. Automated systems can be taught the social meaning of sequences of facial expressions in a scene.
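    Our models are not detailed here; the toy sketch below only shows how the temporal order of expression labels can be encoded as transition (bigram) features for a standard classifier. The expression labels, training sequences and classifier choice are all hypothetical.

    ```python
    from itertools import product

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    EXPRESSIONS = ["neutral", "happy", "surprised", "angry", "fearful"]
    PAIRS = list(product(EXPRESSIONS, repeat=2))

    def transition_features(sequence):
        """Encode a sequence of per-frame expression labels as normalized bigram counts,
        so the order of expressions (not just their presence) is visible to the classifier."""
        counts = np.zeros(len(PAIRS))
        for a, b in zip(sequence, sequence[1:]):
            counts[PAIRS.index((a, b))] += 1
        return counts / max(len(sequence) - 1, 1)

    # Hypothetical labelled sequences: 1 = intends to approach, 0 = intends to avoid.
    train_sequences = [
        ["neutral", "happy", "happy", "surprised"],
        ["neutral", "neutral", "happy", "happy"],
        ["neutral", "angry", "angry", "fearful"],
        ["surprised", "fearful", "angry", "angry"],
    ]
    labels = [1, 1, 0, 0]

    X = np.stack([transition_features(s) for s in train_sequences])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict([transition_features(["neutral", "happy", "surprised", "happy"])]))
    ```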

     

    Proactive robotic assistance

    In public places, hand gestures are a complementary means to voice commands when calling robots for assistance. Voice commands can be a limitation, especially when the robot doesn't "speak" the language of a given user; for environments such as airports, language-independent gesture interaction is therefore useful for different tasks, tailored to the location's demographics.

     

    Robot active recruit using hand gestures

    A behavior model for assistance robots based on human conventions, shedding light on how a robot should physically behave around people in terms of velocity, approach, proxemics and more. The robot identifies a person in need of assistance and offers guidance, for example accompanying a person looking for their gate and carrying their personal items.

     

    Speech-Based Emotion Recognition

    Exploring the effect of different emotions on the way people interact with a robot. The environment is used as a mediator, and changing the environmental conditions is expected to elicit different emotions among participants. These emotions are detected by analyzing tone of voice, accompanied by physiological measurements and self-reported measures.
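    As a simplified illustration of tone-of-voice analysis (not our actual feature set or models), the sketch below extracts first-order prosodic cues, pitch level and variability plus energy statistics, using librosa; the file path and pitch range are assumptions.

    ```python
    import numpy as np
    import librosa

    def prosodic_features(wav_path, sr=16000):
        """Extract simple tone-of-voice statistics: pitch level/variability and energy.

        These are only first-order prosodic cues; an emotion classifier would be
        trained on top of features like these (plus many more).
        """
        y, sr = librosa.load(wav_path, sr=sr)
        f0 = librosa.yin(y, fmin=65.0, fmax=400.0, sr=sr)   # frame-wise pitch in Hz
        rms = librosa.feature.rms(y=y)[0]                   # frame-wise energy
        voiced = f0[(f0 > 65.0) & (f0 < 400.0)]
        return {
            "pitch_mean_hz": float(np.mean(voiced)) if voiced.size else 0.0,
            "pitch_std_hz": float(np.std(voiced)) if voiced.size else 0.0,
            "energy_mean": float(np.mean(rms)),
            "energy_std": float(np.std(rms)),
            "high_energy_ratio": float(np.mean(rms > rms.mean())),
        }

    # features = prosodic_features("interaction_sample.wav")   # hypothetical recording
    ```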

     

    Sentiment, state-of-mind and consumer profiling

    Adapting sentiment, state-of-mind and consumer-profiling analysis based on conversational speech prosody to the interaction characteristics of a robot, using speech samples that simulate real interaction scenarios in terms of conversational culture, language, subjects, length and real-time dialogue.

     

    Integrating video analysis and facial expression recognition

    A real-time interface to the robot's video system, based on cumulative video and speech samples that simulate real interaction scenarios in terms of conversational culture, language, subjects, length and real-time dialogue.

    Eye tracking for intentions and needs

    Revealing an observer's needs through eye tracking: intentions are detected by tracking gaze position while the observer views visual stimuli on the robot's screen, validating that the robot's camera follows the observer's gaze over our dedicated on-screen stimuli. This is faster and more accurate than conversation or motor responses for detecting needs and preferences, and it can reveal preferences that subjects are unaware of or are trying to conceal.
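    A minimal sketch of the dwell-time idea behind such gaze analysis: accumulate how long the gaze rests on each on-screen stimulus and take the most attended region as a candidate preference. The screen layout, sampling rate and region names are assumptions for illustration.

    ```python
    from collections import defaultdict

    def dwell_times(gaze_samples, regions, sample_period_s=1 / 60):
        """Accumulate how long the gaze rests inside each on-screen region.

        gaze_samples: iterable of (x, y) gaze points in screen pixels
        regions:      {name: (x0, y0, x1, y1)} rectangles of the displayed stimuli
        """
        totals = defaultdict(float)
        for x, y in gaze_samples:
            for name, (x0, y0, x1, y1) in regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    totals[name] += sample_period_s
                    break
        return dict(totals)

    # Hypothetical stimuli layout on a 1280x1080 robot screen, gaze sampled at 60 Hz.
    regions = {"coffee": (0, 0, 640, 1080), "duty_free": (640, 0, 1280, 1080)}
    gaze = [(100, 200)] * 90 + [(900, 300)] * 30
    times = dwell_times(gaze, regions)
    preferred = max(times, key=times.get)   # "coffee": ~1.5 s vs ~0.5 s of attention
    ```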

     


    Emotion and Behavioral Sensing and Recognition

    Visual Media Context for Interaction

    Public spaces often feature a variety of on-screen content that people interact with. We are developing a robotic system that responds to video content with verbal and non-verbal reactions, creating a new media experience that sits between the content and the consumer and achieves a much higher level of engagement. This new way of experiencing media could significantly shift content creation and delivery, with the robot's behavior implemented as a separate media channel that accompanies the video but is experienced in the real world.

     

    "Big Data" of face-to-face interactions

    Most human-robot interaction (HRI) research is done in laboratory studies, which limits the available data to a small empirical base. We use a large network of connected robots to collect a large-scale dataset of people's physical interactions with a social robot in public places, with cloud-based computing providing a feedback loop through which one robot's interactions inform the machine learning and AI of all the other robots.

     

    Robot-Human interruption & interaction

    Non-verbal signal detection of "time to interrupt". Verbal dialog is regulated by non-verbal signals, including pauses, head movements, eye gaze and more. This machine learning project models dialogs based on non-verbal signals, so that a robot can intelligently produce the right non-verbal signals to make interruptions more fluent and acceptable.

    Pause-and-Restart. A common failure in robot interaction is completing or starting an action or sentence when it is no longer necessary, or not knowing how to restart an interaction. We're building a model for socially natural pauses and continuation.
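    As one hedged example of the signals involved, the sketch below detects silent gaps in a speech signal that are long enough to count as turn-taking pauses; a real interruption model would combine such pauses with head movements and eye gaze as described above. Frame sizes and thresholds are assumed values.

    ```python
    import numpy as np

    def detect_pauses(signal, fs, frame_ms=30, min_pause_ms=400, threshold_ratio=0.1):
        """Find silent gaps in a speech signal long enough to count as turn-taking pauses.

        Returns a list of (start_s, end_s) pause intervals. A robot could treat the
        onset of such a pause as a candidate "time to interrupt".
        """
        frame = int(fs * frame_ms / 1000)
        n_frames = len(signal) // frame
        frames = signal[:n_frames * frame].reshape(n_frames, frame)
        energy = np.sqrt(np.mean(frames ** 2, axis=1))      # per-frame RMS energy
        silent = energy < threshold_ratio * energy.max()

        pauses, start = [], None
        for i, is_silent in enumerate(silent):
            if is_silent and start is None:
                start = i
            elif not is_silent and start is not None:
                if (i - start) * frame_ms >= min_pause_ms:
                    pauses.append((start * frame / fs, i * frame / fs))
                start = None
        if start is not None and (n_frames - start) * frame_ms >= min_pause_ms:
            pauses.append((start * frame / fs, n_frames * frame / fs))
        return pauses
    ```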

     

  •

    Ivry presenting at T-Edge Beijing, December 2016

    Deng Yaping and Ivry at Ben-Gurion University, hosted by NextWave Robotics, 2016

    Ms. Deng, a four-time Olympic gold medalist and 14-time World Champion, is a founder of a 5 billion yuan fund.

    NextWave presents at T-Edge Beijing, December 2016

    NextWave at the Sino-Israeli Conference, Guangzhou, December 2016

    Yosi Lahad, Co-CEO, presenting at the World Intelligence Congress, Tianjin, June 2017

    Ivry, Co-CEO, at T-Edge, Beijing, December 2016

  • About Us

    A Sino-Israeli company powered by leading R&D labs and industry partners

    Chaim Ivry, Co-CEO/Co-Founder, is the former Founder/CEO of U.S.-based Next Wave Software, an ERP & supply-chain software developer, integrator and IT consultancy, with clients such as IBM Global Services, AT&T & Verizon. He's the Founder & CEO of C4 Intelligence and the former Founder/CEO of a U.S. homeland security company.

    Yosi Lahad, Co-CEO/Co-Founder, has led startups, emerging and public technology companies as CEO & Chairman. He is a former VP of Engineering at Elbit Systems and former Managing Director of Tadiran China. He has managed M&A and JV deals in Israel, China, Australia and the U.S. Mr. Lahad is a Colonel (Res.) in the Israeli Air Force.

    Sabina Brady, China Director, Beijing, has lived and worked in China in senior leadership positions in the corporate and nonprofit sectors for three decades. She served as Groupe Schneider Country Manager of China Operations, Director of North Asia Operations and is a former director of the Clinton Foundation China.