Telepresence & How It Is Changing Our Society

Speaker: Adrian Stoica

This video gives an overview of the technical aspects of telepresence and discusses the services that are appearing as technology matures in research labs and products move to consumers. It highlights the symbiosis between teleoperation, autonomy, and AI.

Decades after it was first used to control operations in hazardous environments and in space, telerobotics, combined with autonomy, is poised to affect our lives in major ways.

In 1980, MIT professor and AI pioneer Marvin Minsky predicted a revolutionary change: “Telepresence is not science fiction. We could have a remote-controlled economy by the twenty-first century if we start planning right now.”

While progress has been slower than predicted, several recent events, in particular the global pandemic, have given a boost to the ‘tele’ revolution. From tele-health to tele-education to teleoperation in several industrial sectors, we are seeing a large number of applications emerge that will irreversibly change how we work and live.

The video also describes current activities in preparation for a future IEEE Initiative in Telepresence.

Open-Source Dynamic Server & Modular Controller Package

Speaker: Shreyas Chandra Sekhar

We present a dynamics modeling service that parses robot model description files and builds a Newton-Euler-based dynamics model for use by a model-based controller in real time.

Custom robotic systems often have complicated dynamics that are difficult to capture in an analytical model. Robotic systems are already commonly defined in model configuration files such as URDF and AMBF description files (ADF). These files contain all the dynamics and kinematics information of the robot for simulation purposes.

However, there is no straightforward way to use this information for dynamics calculations inside a controller. Our dynamics modeling service fills this gap by parsing the model description files and building a robot dynamics model that a model-based controller can query in real time.

The dynamics server takes in a model configuration file and parses it into a Newton-Euler-based model for fast computation. By wrapping the system in ROS, the dynamics server can be accessed across platforms, and a custom controller that incorporates the dynamics of an arbitrary robot can be readily implemented.

Additionally, a control package is presented that leverages the dynamics server to implement the custom controller. This software architecture enables model-based controllers to be developed for, and evaluated on, different simulation and physical systems.
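As a concrete illustration of the kind of computation such a dynamics model provides to a controller, here is a minimal sketch of inverse dynamics, tau = M(q)q̈ + C(q, q̇)q̇ + g(q), for a planar two-link arm with point masses at the link tips. These are the standard textbook closed-form equations, not the server's actual Newton-Euler implementation, and all parameter names are illustrative.

```python
import math

def two_link_inverse_dynamics(q, qd, qdd, m1=1.0, m2=1.0,
                              l1=1.0, l2=1.0, g=9.81):
    """Joint torques for a planar 2-link arm (point masses at link tips).

    Closed-form equivalent of what a Newton-Euler pass computes:
    tau = M(q)*qdd + C(q, qd)*qd + g(q).
    """
    q1, q2 = q
    c2, s2 = math.cos(q2), math.sin(q2)

    # Mass matrix entries
    m11 = (m1 + m2) * l1**2 + m2 * l2**2 + 2 * m2 * l1 * l2 * c2
    m12 = m2 * l2**2 + m2 * l1 * l2 * c2
    m22 = m2 * l2**2

    # Coriolis/centrifugal coupling and gravity terms
    h = m2 * l1 * l2 * s2
    g1 = (m1 + m2) * g * l1 * math.cos(q1) + m2 * g * l2 * math.cos(q1 + q2)
    g2 = m2 * g * l2 * math.cos(q1 + q2)

    tau1 = m11 * qdd[0] + m12 * qdd[1] - h * (2 * qd[0] * qd[1] + qd[1]**2) + g1
    tau2 = m12 * qdd[0] + m22 * qdd[1] + h * qd[0]**2 + g2
    return tau1, tau2

# Gravity-compensation torques with the arm held horizontal and at rest:
tau = two_link_inverse_dynamics(q=(0.0, 0.0), qd=(0.0, 0.0), qdd=(0.0, 0.0))
```

A model-based controller would evaluate exactly this kind of function in its loop; the value of the dynamics server is that the terms above are generated automatically from the URDF/ADF rather than derived by hand for each robot.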

Recent Advances in ROS 2

Speaker: Kat Scott

This talk discusses recent updates and improvements in the Robot Operating System (ROS) and new and exciting projects currently using ROS.

Topics covered include an introduction to ROS 2, the recent Humble Hawksbill release, the new TurtleBot 4, SpaceROS, and our multi-robot framework OpenRMF.

How Humanoid General Purpose Robots Can Solve the World’s Labor Crisis

Speaker: Harry Kloor

This video discusses the development of our Beomni AI and Robotics platform and how it will address the eldercare crisis, the shortage of doctors and nurses, and many other applications.

Task-specific AI and single-task robotics are common in manufacturing and are slowly penetrating the home, retail, and hospitality markets. However, as labor shortages continue to grow, the need for general-purpose robots with AI capable of completing complex tasks has emerged. Humanoid robots will be coming to market soon.

Join us for a lively discussion on the future of humanoid robots.

 

Speaker: Maria Kyrarini

Assistive robotic manipulators have the potential to help individuals with impairments regain some of their independence in performing Activities of Daily Living. For these individuals, however, interacting with an assistive robotic manipulator is itself a very challenging task. In this talk, I will present several interactive approaches that enable people with impairments to collaborate with assistive robots. The first approach enables people with tetraplegia to teach the robot how to assist them with drinking. The second approach is an autonomous multi-sensory robotic system that assists with straw-less drinking. The third approach focuses on robots that learn from human speech. Experimental results for all three approaches will be presented. I will conclude the talk with a brief discussion of future research challenges.

Speakers: Gerard Andrews | Hemal Shah

Autonomous robots depend on their perception systems to understand the world around them. These machines often leverage a host of sensors, including cameras, lidars, radars, and ultrasonic sensors, to create this environmental understanding. Stereo cameras play a big role in providing depth perception to robotic systems. This depth information can be estimated using classical computer vision techniques, like semi-global matching (SGM), or with deep neural networks (DNNs). Each individual algorithm may struggle in a particular set of operating conditions, but when multiple depth estimation algorithms are leveraged simultaneously, more robust depth information can be computed. In this talk, we’ll cover work at NVIDIA to train the ESS DNN model for determining stereo disparity using both synthetic and real-world data, so that it performs well where SGM may not. We’ll also introduce the Bi3D model, which is trained on the simplified question of “is X closer than M meters?” rather than “how far away is X?”, yielding improvements in both accuracy and speed. As every approach has deficiencies on its own, we’ll touch on how ensembling the responses of ESS and Bi3D (DNNs developed specifically for robotic perception) with SGM could lead to robust obstacle detection. Finally, we’ll discuss how we’ve tuned the performance of these models to run on embedded compute for the responsive stopping behavior required in autonomous mobile robots (AMRs).
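The abstract does not specify how the ensembling works, but a per-pixel fusion rule of the kind it hints at can be sketched as follows. This is an illustrative, hypothetical scheme (the threshold values, function names, and voting logic are all assumptions, not NVIDIA's pipeline): Bi3D's binary "closer than M meters" answer can flag an obstacle on its own, while the SGM and ESS depth estimates are trusted only when they agree.

```python
# Hypothetical per-pixel fusion of three depth cues: an SGM depth map,
# an ESS depth map, and a Bi3D binary "closer than M meters" mask.

STOP_DISTANCE_M = 2.0   # assumed safety threshold (Bi3D's "M meters")
AGREEMENT_TOL_M = 0.5   # assumed SGM/ESS agreement tolerance

def fuse_pixel(sgm_depth, ess_depth, bi3d_close):
    """Return True if this pixel should be treated as an obstacle.

    Conservative ensemble: a Bi3D "close" vote alone flags an obstacle;
    otherwise SGM and ESS must agree that the point is within range.
    """
    if bi3d_close:
        return True
    if abs(sgm_depth - ess_depth) <= AGREEMENT_TOL_M:
        # Estimators agree: trust their average depth.
        return (sgm_depth + ess_depth) / 2.0 < STOP_DISTANCE_M
    # Estimators disagree and Bi3D sees nothing close: do not stop.
    return False

def obstacle_mask(sgm, ess, bi3d):
    """Apply the per-pixel rule over flattened depth maps (lists)."""
    return [fuse_pixel(s, e, b) for s, e, b in zip(sgm, ess, bi3d)]
```

The OR-style combination biases the ensemble toward false positives rather than missed obstacles, which matches the safety-first stopping behavior an AMR needs; a real system would also weigh per-pixel confidence from each estimator.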