We believe that breakthroughs happen when brilliant engineers, scientists and entrepreneurs team up to push beyond accepted boundaries.



    Research & Development

    A full life-cycle of research, development and product integration, powered by leading Israeli and Chinese university labs, startups and industry partners.

    Market-Ready Solutions

    Addressing China's market opportunities in targeted sectors, accelerating startups and providing investors & customers with early access to new technologies.

  • The Cognitive-Social Robotics Lab

    Inspired by human behaviour, the Cogni-Tech Lab is focused on bi-directional human-robot interaction. The lab links interdisciplinary research in the fields of computer, mechanical, electrical, biomedical and industrial engineering with psychology, life sciences and neuroscience.

    The lab conducts R&D in robot orientation, SLAM, and acoustic and visual information discrimination in noisy and unstructured environments. It improves the interpretation of human speech and facial expressions through emotion and behavioural sensing, with recognition via voice, vision and context.

  • From Lab to Market: R&D Projects


    Robot Orientation

    Localization of speakers in noisy environments

    Localization of human speakers is highly important for natural human-robot interaction. However, in real-life environments such as public places, localization is a great challenge due to strong reverberation, multiple speakers and noise. We are developing a speaker localization algorithm that is robust to reverberation, designed for spherical microphone arrays and adapted to arrays mounted around a robot's head, with a real-time version for various real-life acoustic conditions.
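    The text does not name the lab's algorithm; as a minimal illustrative sketch only, a standard reverberation-robust building block for speaker localization is GCC-PHAT time-delay estimation between a pair of microphones:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay of `sig` relative to `ref` via GCC-PHAT.

    The phase transform (PHAT) discards magnitude and keeps only phase,
    which sharpens the correlation peak and adds robustness to reverberation.
    """
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.abs(cross) + 1e-12              # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs   # delay in seconds
```

    Pairwise delays like this, taken over all microphone pairs of an array, can then be mapped to a direction of arrival.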


    Microphone array for superior acoustic information

    Microphone arrays are an important front end in human-robot interaction. The number of microphones and their positioning have a significant effect on the amount of acoustic information the array can record, which in turn influences speaker localization, speech enhancement and recognition. By quantifying the acoustic information recorded by an array, we have developed a method for designing optimal microphone arrays that improves speaker localization and speech enhancement.


    Speech enhancement for speech recognition

    In noisy environments that include reverberation, speech recognition accuracy tends to decrease significantly. Microphone arrays with multi-channel signal processing offer a successful approach to speech enhancement and can improve recognition under noise and reverberation.
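    The multi-channel idea can be illustrated with the simplest array technique (a generic sketch, not the project's actual pipeline): a frequency-domain delay-and-sum beamformer, where each channel is advanced by its steering delay so the target speech adds coherently while uncorrelated noise averages down.

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Frequency-domain delay-and-sum beamformer.

    signals: (channels, samples) array of microphone recordings.
    delays:  per-channel steering delays in seconds (speech arrival times).
    Each channel is time-advanced by its delay, aligning the target speech
    across channels before averaging.
    """
    n_ch, n = signals.shape
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for ch in range(n_ch):
        spec = np.fft.rfft(signals[ch])
        spec *= np.exp(2j * np.pi * freqs * delays[ch])  # undo the delay
        out += np.fft.irfft(spec, n=n)
    return out / n_ch
```

    Averaging `n` channels of aligned speech with independent noise improves the signal-to-noise ratio by roughly a factor of `n` in power.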


    Visual localization and mapping

    Visual SLAM enables a robot to navigate unknown environments by building a map while simultaneously locating the robot's exact position within it. In dynamic, complex and large environments, our fusion framework combines different Visual SLAM algorithms to estimate the robot's location, enabling operation over large distances and long durations without drift.
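    The fusion step can be sketched generically. Assuming each SLAM algorithm reports a position estimate with a covariance (an assumption, since the text does not describe the framework's internals), a minimal information-filter style combination is:

```python
import numpy as np

def fuse_estimates(positions, covariances):
    """Fuse independent position estimates by inverse-covariance weighting.

    Each estimate is weighted by its information matrix (inverse covariance),
    so more certain SLAM outputs dominate the fused position.
    Returns the fused position and its covariance.
    """
    infos = [np.linalg.inv(c) for c in covariances]
    fused_cov = np.linalg.inv(sum(infos))
    fused_pos = fused_cov @ sum(i @ p for i, p in zip(infos, positions))
    return fused_pos, fused_cov
```

    With two estimates, the fused position lands closer to the more certain one, and the fused covariance is smaller than either input's.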


    Construction of a dynamic map for robots

    We produce a high-quality, globally consistent map in real time with the help of one or more robots, using semantic labeling of the map geometries so that a robot can plan its course in dynamic and complex environments. By adding layers to the map, robots can collaboratively contribute and share their local views and update the maps, resulting in lower energy consumption and serving more customers in less time.


    Understanding Humans by Robots

    Human intent based on facial expressions

    The intention to approach or avoid an unfamiliar person is communicated through facial expressions and their temporal order. We are developing automatic comprehension of social interactions, as well as natural interfaces between humans, robots and machines. Automated systems can be taught the social meaning of sequences of facial expressions in a scene.


    Proactive robotic assistance

    In public places, hand gestures are a complementary means to voice commands when calling robots for assistance. Voice commands can be a limitation, especially when the robot doesn't "speak" the language of a random user; in environments such as airports, language-independent gesture interaction is therefore useful for different tasks, according to the location's demographics.


    Active robot recruitment using hand gestures

    We are building a behavior model for assistance robots based on human conventions, shedding light on how a robot should physically behave around people in terms of velocity, approach, proxemics, etc. The robot will identify a person in need of assistance and offer guidance, for example accompanying a person looking for their gate and carrying their personal items.


    Speech-Based Emotion Recognition

    We explore the effect of different emotions on the way people interact with a robot. The environment is used as a mediator, and changing the environmental conditions is expected to elicit different emotions among participants. These emotions are detected by analysis of the tone of voice, accompanied by physiological measurements and self-reported measures.


    Sentiment, state-of-mind and consumer profiling

    We adapt sentiment, state-of-mind and consumer-profiling analysis based on conversational speech prosody to the interaction characteristics of a robot, using speech samples that simulate real interaction scenarios in terms of conversational culture, language, subjects, length and real-time dialogue.


    Integrating video analysis and facial expression recognition

    A real-time interface to the robot's video system, based on cumulative video and speech samples that simulate real interaction scenarios in terms of conversational culture, language, subjects, length and real-time dialogue.

    Eye tracking for intentions and needs

    Eye tracking can reveal an observer's needs. We detect intentions by tracking gaze position while the observer views visual stimuli on the robot's screen, validating that the robot's camera tracks the observer's gaze as they watch our dedicated stimuli. This technique is faster and more accurate than conversation or motor responses for detecting needs and preferences, and can reveal concealed preferences that subjects are unaware of or are trying to hide.


    Emotion and Behavioural Sensing and Recognition

    Visual Media Context for Interaction

    A public space is often supported by a variety of on-screen content that people interact with. We are developing a robotic system that responds to video content with verbal and non-verbal reactions, creating a new media experience that sits between the content and the consumer and achieves a much higher level of engagement. This new way of experiencing media could significantly shift content creation and delivery, with the robot's behavior implemented as a separate media channel that accompanies the video but is experienced in the real world.


    "Big Data" of face-to-face interactions

    Most human-robot interaction (HRI) research is conducted in laboratory studies, which limits the applicable datasets to very small bodies of empirical evidence. We use a large network of connected robots to collect a large-scale dataset of people's physical interactions with a social robot in public places, with cloud-based computing providing a feedback loop through which one robot's interactions inform the machine learning and AI of all the other robots.


    Robot-Human interruption & interaction

    Non-verbal signal detection of "time to interrupt". Verbal dialogue is regulated by non-verbal signals, including pauses, head movements, eye gaze and more. This machine learning project models dialogues based on non-verbal signals, so that a robot can intelligently produce the right non-verbal signals to make interruptions more fluent and acceptable.

    Pause-and-restart. A common failure in a robot's interaction is completing or starting an action or sentence when it is no longer necessary, or not knowing how to restart an interaction. We are building a model for socially natural pauses and continuations.










    Chaim Ivry with Deng Yaping at the Ben-Gurion University robotics labs. Deng is a four-time Olympic gold medalist and co-founder of a large sports-industry investment fund.






    NextWave presenting at T-Edge Beijing, December 2016





    Chaim Ivry speaking at the T-Edge Summit in Beijing



    Yosi Lahad presenting at the World Intelligence Congress, Tianjin













    With the Israeli delegation at the Sino-Israeli Robotics Innovation Conference in Guangzhou.








    Chaim Ivry speaking at the T-Edge Summit, Beijing

  • About Us

    Offering a unique Sino-Israeli collaboration model for developing game-changing technologies


    Yosi Lahad, Co-Founder/CEO, is the Chairman of BOS (Nasdaq: BOSC), former VP of Engineering at Elbit Systems and former Managing Director of Tadiran China. Yosi has led startups as well as emerging and public technology companies.

    Chaim Ivry, Co-Founder/CEO. Owner of U.S. Executive Center, Englewood Cliffs, NJ, and former CEO of companies in Cyber Intelligence, homeland security, ERP software & IT services, with IBM, AT&T, Verizon, NYCHHC among his former clients.

    Sabina Brady, China Director, lives in Beijing. Sabina is the former Groupe Schneider Country Manager of China Operations, former Director of its North Asia Operations, and former director of the Clinton Foundation in China.