Human-Robot Collaboration

Overview

A key thread of my work involves designing novel interfaces for collaborative robots that make them intuitive and effective to use and support their adoption in industrial settings. Collaborative robots hold tremendous promise for small-scale manufacturing and task automation, but existing interfaces for control and programming remain significant bottlenecks to their effective use and widespread adoption.

In this line of work, I explore new ways of enabling operators to effectively use robotic arms and of helping engineers form successful human-robot teams. My collaborators include Michael Gleicher, [Robert Radwin](https://directory.engr.wisc.edu/ie/Faculty/Radwin_Robert/), and Michael Zinn. Students involved in this line of work include Emmanuel Senft (now at Idiap), Danny Rakita (now at Yale), Pragathi Praveena, Kevin Welsh (now at LANL), and Michael Hagenow (now at MIT CSAIL).

Systems & Publications

Some of the methods and systems we have built in this line of work include:

  • Mimicry control — a new interface that enables operators to control a robotic arm by moving their own arm in free space, with the robot mimicking the operator's movements (a rough sketch of the underlying mapping follows this list) — Paper 1, Paper 2, Talk

  • RelaxedIK — a new optimization-based, real-time controller for robot arm motion that avoids kinematic singularities, motion discontinuities, and jerky, undesirable motions (see the optimization sketch after this list) — Paper, Talk

  • Camera-in-hand robot — a new way of providing remote teleoperation support using a second robot that carries a camera and autonomously provides a view of the task environment — Paper, Talk

  • Bimanual teleoperation — an extension of mimicry control to bimanual scenarios that enables coordination between the two robotic arms — Paper, Talk

  • Task-level authoring — an approach that uses augmented reality and end-user programming principles to let users provide no-code, open-ended, short- and long-horizon task instructions built from a set of task primitives — Paper

  • Situated Live Programming — an approach to specifying human-robot collaborative task plans using augmented reality and trigger-action planning (see the rule sketch after this list) — Paper, Talk
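
As a rough illustration of the mapping underlying mimicry control, the sketch below applies the operator's hand motion, measured relative to the pose at which a clutch was engaged, to the robot's end effector. The pose helper, clutch logic, and numbers are illustrative assumptions, not the interface's actual implementation.

```python
# Minimal sketch of the relative ("clutched") mapping behind mimicry control:
# the robot reproduces the operator's hand motion relative to where the hand
# was when the clutch engaged. Poses are 4x4 homogeneous matrices; all values
# below are hypothetical.
import numpy as np

def make_pose(x, y, z, yaw=0.0):
    """Build a simple homogeneous transform (translation plus yaw rotation)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, z]
    return T

# Poses captured at the moment the operator engages the clutch.
hand_at_clutch = make_pose(0.0, 0.0, 1.2)
robot_at_clutch = make_pose(0.5, 0.0, 0.4)

def mimicry_goal(hand_now):
    """Apply the hand's motion since clutch-in to the robot's pose."""
    hand_delta = np.linalg.inv(hand_at_clutch) @ hand_now
    return robot_at_clutch @ hand_delta

# The operator moves 10 cm forward and turns slightly; the robot's goal
# pose shifts by the same relative motion.
hand_now = make_pose(0.1, 0.0, 1.2, yaw=0.2)
print(np.round(mimicry_goal(hand_now), 3))
```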

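Relatedly, here is a minimal sketch of the warm-started, weighted-objective optimization that RelaxedIK-style motion synthesis builds on: each control step solves a small nonlinear program trading off end-effector accuracy against joint-space smoothness. The two-link planar arm, objective terms, and weights below are illustrative stand-ins, not the published formulation.

```python
# Toy weighted-objective IK step in the spirit of RelaxedIK. The arm model,
# objective terms, and weights are illustrative, not the paper's formulation.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8  # link lengths of a hypothetical 2-link planar arm

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def objective(q, q_prev, target, w_pose=10.0, w_smooth=1.0, w_center=0.1):
    pose_err = np.sum((fk(q) - target) ** 2)  # match the goal position
    smooth = np.sum((q - q_prev) ** 2)        # discourage joint-space jumps
    center = np.sum(q ** 2)                   # prefer mid-range joint angles
    return w_pose * pose_err + w_smooth * smooth + w_center * center

def ik_step(q_prev, target):
    """One real-time step, warm-started from the previous solution."""
    return minimize(objective, q_prev, args=(q_prev, target), method="BFGS").x

# Track a short end-effector path; each step is seeded from the last, which
# is what keeps consecutive solutions (and thus the motion) continuous.
q = np.array([0.3, 0.5])
for t in np.linspace(0.0, 1.0, 5):
    q = ik_step(q, np.array([1.2, 0.4 + 0.3 * t]))
    print(np.round(q, 3), "->", np.round(fk(q), 3))
```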

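Finally, the sketch below shows the trigger-action idea behind Situated Live Programming in its simplest form: a task plan is a set of "when X, do Y" rules evaluated against a shared world state. The Rule class, world dictionary, and actions are hypothetical placeholders; in the actual system, triggers are authored in situ through augmented reality rather than written as code.

```python
# Toy trigger-action task plan. Rule, the world dict, and both actions are
# hypothetical placeholders, not the system's API.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Rule:
    name: str
    trigger: Callable[[Dict[str, Any]], bool]  # predicate over world state
    action: Callable[[Dict[str, Any]], None]   # robot behavior to run

def pick_and_place(world):
    print("robot: moving part from intake to fixture")
    world["part_at_intake"] = False
    world["part_in_fixture"] = True

def hand_to_human(world):
    print("robot: handing finished part to the human")
    world["part_in_fixture"] = False

rules = [
    Rule("load", lambda w: w["part_at_intake"], pick_and_place),
    Rule("handoff",
         lambda w: w["part_in_fixture"] and w["human_ready"],
         hand_to_human),
]

world = {"part_at_intake": True, "part_in_fixture": False, "human_ready": True}
for _ in range(3):  # polling loop standing in for a perception pipeline
    for rule in rules:
        if rule.trigger(world):
            rule.action(world)
```
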
This work is supported by NSF award 1830242 and a NASA ULI award.
