Research Projects



BubbleNets: learning to select the guidance frame in video

Semi-supervised video object segmentation has made significant progress on real and challenging videos in recent years. The current paradigm for segmentation methods and benchmark datasets is to segment objects in video provided a single annotation in the first frame. However, we find that segmentation performance across the entire video varies dramatically when an alternative frame is selected for annotation. This work addresses the problem of learning to suggest the single best frame across the video for user annotation, which is, in fact, never the first frame of the video. We achieve this by introducing BubbleNets, a novel deep sorting network that learns to select frames using a performance-based loss function, which enables existing datasets to be converted into expansive amounts of training examples. Using BubbleNets, we achieve an 11% relative improvement in segmentation performance on the DAVIS benchmark without any changes to the underlying method of segmentation. This work was a Best Paper Award finalist at CVPR 2019.
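
The frame-selection procedure can be pictured as a bubble sort driven by a learned pairwise comparator. Below is a minimal Python sketch; `predict_diff` stands in for the network's predicted relative-performance output and is an illustrative assumption, not the paper's interface.

```python
import random

def bubble_sort_best_frame(frames, predict_diff):
    """Surface the predicted-best annotation frame via bubble sort.

    predict_diff(a, b) > 0 means frame `a` is predicted to yield better
    segmentation than frame `b` when used as the annotation frame.
    """
    order = list(frames)
    n = len(order)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if predict_diff(order[j], order[j + 1]) > 0:
                # Better frames bubble toward the end of the list.
                order[j], order[j + 1] = order[j + 1], order[j]
    return order[-1]  # the predicted best frame to annotate

# Stand-in comparator: pretend each frame has a known quality score.
scores = {f: random.random() for f in range(20)}
best = bubble_sort_best_frame(range(20), lambda a, b: scores[a] - scores[b])
```

Sorting by pairwise comparisons, rather than regressing one absolute score per frame, is what allows relative performance labels derived from existing datasets to supervise the network.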

Michigan Engineer article →


Real-Time Perception for Vocally-Specified Mobile Manipulation Tasks

We introduce a framework for real-time object recognition and localization using RGBD data. Using both appearance and 3D geometry cues, we visually identify and locate multiple target objects at rates up to 10 Hz on standard hardware. Innovations include a sequential method of subsampling and densely repopulating image data, which increases the quality of task-relevant observations while reducing the computational cost of monitoring the surrounding environment. To demonstrate the utility of our approach, we give task-specifying voice commands to a mobile manipulation robot, which searches for the specified target objects based on color and shape and, once they are found, moves them to a specified goal location.
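
The subsample-and-repopulate idea can be sketched as a two-stage sampler over a depth image; the strides, window size, and nearest-point cue below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def sample_then_repopulate(depth, coarse_stride=8, roi_half=16):
    """Coarsely subsample a frame, find the most task-relevant sample,
    then densely repopulate observations around it (hypothetical sketch;
    the nearest-point cue stands in for appearance/geometry matching).
    """
    # Stage 1: a sparse grid of samples over the full frame.
    ys = np.arange(0, depth.shape[0], coarse_stride)
    xs = np.arange(0, depth.shape[1], coarse_stride)
    coarse = depth[np.ix_(ys, xs)]

    # Stage 2: pick the most promising coarse sample (closest point).
    iy, ix = np.unravel_index(np.argmin(coarse), coarse.shape)
    cy, cx = int(ys[iy]), int(xs[ix])

    # Stage 3: dense repopulation in a window around the hit.
    y0, y1 = max(cy - roi_half, 0), min(cy + roi_half, depth.shape[0])
    x0, x1 = max(cx - roi_half, 0), min(cx + roi_half, depth.shape[1])
    return (cy, cx), depth[y0:y1, x0:x1]

# Synthetic depth image: a near object in an otherwise distant scene.
depth = np.full((240, 320), 2.0)
depth[100:110, 200:210] = 0.5
(cy, cx), dense = sample_then_repopulate(depth)
```

Only the small dense window is processed at full resolution, which is where the computational savings come from.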


Video object segmentation-based visual servo control

To be useful in everyday environments, robots must be able to identify and locate unstructured, real-world objects. In recent years, video object segmentation has made significant progress on densely separating such objects from background in real and challenging videos. This work addresses the problem of identifying generic objects and locating them in 3D from a mobile robot platform equipped with an RGB camera. We achieve this by introducing a video object segmentation-based approach to visual servo control and active perception. We validate our approach in experiments on a Toyota Human Support Robot (HSR), which identifies, locates, and grasps objects from the YCB object dataset. We also develop a new Hadamard-Broyden update formulation, which enables HSR to automatically learn the relationship between actuators and visual features without any camera calibration. Using a variety of learned actuator-camera configurations, HSR also tracks people and other dynamic articulated objects.
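
Assuming the Hadamard-Broyden update follows the classic Broyden rank-one formula with an elementwise (Hadamard) mask over permitted actuator-feature couplings, a single update step might look like the sketch below; the masking scheme and toy setup are illustrative assumptions.

```python
import numpy as np

def hadamard_broyden_update(J, dx, dy, H):
    """One Broyden rank-one update of an estimated feature Jacobian,
    masked elementwise (Hadamard product) so only permitted
    actuator-feature couplings adapt.

    J  : (m, n) estimate mapping actuator deltas to feature deltas
    dx : (n,)   actuator change on the last step
    dy : (m,)   observed feature change
    H  : (m, n) 0/1 mask of allowed couplings
    """
    denom = dx @ dx
    if denom < 1e-9:
        return J  # no actuator motion, so nothing to learn
    correction = np.outer(dy - J @ dx, dx) / denom
    return J + H * correction

# Toy check: learn a diagonal actuator-to-feature mapping online.
true_J = np.diag([2.0, -1.0])
J = np.zeros((2, 2))
H = np.eye(2)  # only diagonal couplings are permitted
rng = np.random.default_rng(0)
for _ in range(50):
    dx = rng.normal(size=2)
    J = hadamard_broyden_update(J, dx, true_J @ dx, H)
```

Because the update needs only commanded actuator deltas and observed feature deltas, the mapping can be learned online, consistent with the calibration-free claim above.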


Strictly unsupervised video object segmentation

We investigate the problem of strictly unsupervised video object segmentation, i.e., the separation of a primary object from background in video without a user-provided object mask or any training on an annotated dataset. We find foreground objects in low-level vision data using a John Tukey-inspired measure of “outlierness.” This Tukey-inspired measure also estimates the reliability of each data source as video characteristics change (e.g., a camera starts moving). The proposed method achieves state-of-the-art results for strictly unsupervised video object segmentation on the challenging DAVIS dataset. Finally, we use a variant of the Tukey-inspired measure to combine the output of multiple segmentation methods, including those using supervision during training, runtime, or both. This collectively more robust method of segmentation improves the Jaccard measure of its constituent methods by as much as 28%. This research is performed in collaboration with Professor Jason Corso.
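
As a rough illustration of the outlierness idea, the sketch below scores values by how far they fall outside Tukey's fences, in IQR units; this is a generic variant for a single motion cue, not the paper's exact measure.

```python
import numpy as np

def tukey_outlierness(x):
    """Continuous "outlierness" score in IQR units: zero inside Tukey's
    fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR), growing with distance outside.
    """
    q1, q3 = np.percentile(x, [25, 75])
    iqr = max(q3 - q1, 1e-9)  # guard against a degenerate spread
    above = np.maximum(x - (q3 + 1.5 * iqr), 0.0)
    below = np.maximum((q1 - 1.5 * iqr) - x, 0.0)
    return (above + below) / iqr

# Synthetic cue: background motion near 1.0, a faster foreground object at 5.0.
rng = np.random.default_rng(1)
motion = np.concatenate([rng.normal(1.0, 0.1, size=95), np.full(5, 5.0)])
score = tukey_outlierness(motion)
```

Because the fences are set by quartiles of the data itself, the same scoring adapts as a cue's background statistics change, e.g., when the camera starts moving.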


Robust robot walking control

Bipedal locomotion is well suited for mobile robotics because it promises to allow robots to traverse difficult terrain and work effectively in man-made environments. Despite this inherent advantage, however, no existing bipedal robot achieves human-level performance in multiple environments. A key challenge in robotic bipedal locomotion is the design of feedback controllers that function well in the presence of uncertainty, in both the robot and its environment. To achieve such walking, we design feedback controllers and periodic gaits that function well in the presence of modest terrain variation, without reliance on perception or a priori knowledge of the environment. Model-based design methods are introduced and subsequently validated in simulation and experiment on MARLO, an underactuated three-dimensional bipedal robot that is roughly human-sized and has six actuators and thirteen degrees of freedom. Innovations include virtual nonholonomic constraints that enable continuous velocity-based posture regulation and an optimization method that accounts for multiple types of disturbances and more heavily penalizes deviations that persist during critical stages of walking. Using a single continuously-defined controller taken directly from optimization, MARLO traverses sloped sidewalks and parking lots, terrain covered with randomly thrown boards, and grass fields, all while maintaining average walking speeds between 0.9 and 0.98 m/s and setting a new precedent for walking efficiency in realistic environments. This research is performed in collaboration with Professor Jessy Grizzle.
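
A toy sketch of velocity-based posture regulation through a virtual constraint is below; all names, gains, and the reference speed are assumptions for illustration, not MARLO's actual gait design.

```python
def constraint_output(q_torso, avg_speed, v_ref=0.93, k_lean=0.1):
    """Velocity-based virtual constraint: the desired torso pitch leans
    further forward when average speed drops below the reference, so the
    constraint depends on velocity rather than configuration alone.
    """
    q_des = k_lean * (v_ref - avg_speed)  # lean forward when slow
    return q_torso - q_des                # output y to be driven to zero

def zeroing_torque(y, dy, kp=200.0, kd=20.0):
    """PD feedback driving the constraint output y to zero."""
    return -kp * y - kd * dy

# At the reference speed with the torso on target, no correction is needed;
# below the reference, the controller pitches the torso forward.
torque_on_target = zeroing_torque(constraint_output(0.0, 0.93), 0.0)
torque_too_slow = zeroing_torque(constraint_output(0.0, 0.5), 0.0)
```

Making the setpoint a function of velocity is what distinguishes this from a purely configuration-based (holonomic) virtual constraint.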

Popular Science article →


Wireless Power Transfer to Ground Sensors from a UAV

Wireless magnetic resonant power transfer is an emerging technology that has many advantages over other wireless power transfer methods due to its safety, lack of interference, and efficiency at medium ranges. We develop a wireless magnetic resonant power transfer system that enables unmanned aerial vehicles (UAVs) to provide power to, and recharge batteries of, wireless sensors and other electronics far removed from the electric grid. We address the difficulties of implementing and outfitting this system on a UAV with limited payload capabilities and develop a controller that maximizes the received power as the UAV moves into and out of range. We experimentally demonstrate the prototype wireless power transfer system by using a UAV to transfer nearly 5 W of power to a ground sensor. Motivated by limitations of manual piloting, steps are taken toward autonomous navigation to locate receivers and maintain more stable power transfer. Novel sensors are created to measure high frequency alternating magnetic fields, and data from experiments with these sensors illustrate how they can be used for locating nodes receiving power and optimizing power transfer. This research is performed in collaboration with Professor Carrick Detweiler.
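
A power-maximizing controller of this kind can be illustrated with a generic perturb-and-observe (hill-climbing) tuner; the frequency values, step size, and quadratic stand-in plant below are assumptions, not the actual system.

```python
def perturb_and_observe(measure_power, f0=145.0, step=0.2, iters=40):
    """Hill-climbing tuner: nudge the drive frequency and keep moving in
    whichever direction increases received power, reversing on a drop.
    """
    f, p = f0, measure_power(f0)
    direction = 1.0
    for _ in range(iters):
        f_try = f + direction * step
        p_try = measure_power(f_try)
        if p_try > p:
            f, p = f_try, p_try      # improvement: keep going this way
        else:
            direction = -direction   # got worse: reverse direction
    return f, p

# Stand-in plant: received power peaks at 147 kHz for the current spacing.
f_best, p_best = perturb_and_observe(lambda f: 5.0 - 0.1 * (f - 147.0) ** 2)
```

Because the tuner only ever queries measured power, it keeps tracking the optimum as the coil coupling shifts with UAV position.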

IEEE Spectrum article →