Research


Coastal-water monitoring & inspection

We are conducting several projects on deploying robotic systems and software infrastructures that enable autonomous and semi-autonomous underwater robots to work alongside human divers in marine inspection and monitoring tasks, particularly for shallow-water and coastal-water applications. Focusing on the Florida coastlines, we are working closely with the Center for Coastal Solutions (CCS), the Whitney Laboratory, the Warren B. Nelms Institute, and other UF organizations to develop technological solutions for important subsea applications such as monitoring water quality, farming artificial reefs, surveying seabeds and submarine pipelines, and tracking invasive fish. We are exploring deployable systems for both passive sensing and prediction (of hazards/events) and active inspection, tracking, and mapping by autonomous mobile robots.
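To make the passive sensing-and-prediction idea concrete, the following is a minimal illustrative sketch (not a deployed system) of flagging anomalous readings in a water-quality time series with a rolling mean and standard deviation. The dissolved-oxygen variable, window size, and threshold are hypothetical placeholders.

```python
import numpy as np

def flag_anomalies(readings, window=24, z_threshold=3.0):
    """Return indices where a reading deviates strongly from its recent history."""
    readings = np.asarray(readings, dtype=float)
    anomalies = []
    for t in range(window, len(readings)):
        history = readings[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(readings[t] - mu) > z_threshold * sigma:
            anomalies.append(t)
    return anomalies

# Synthetic hourly dissolved-oxygen readings with one injected drop.
rng = np.random.default_rng(0)
do_mg_per_l = 7.5 + 0.2 * rng.standard_normal(200)
do_mg_per_l[150] = 4.0                      # simulated hypoxia-like event
print(flag_anomalies(do_mg_per_l))          # -> [150]
```

In practice, such threshold-based flags would only be a front end to richer predictive models and to tasking mobile robots for follow-up inspection.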

Visual attention modeling & servoing

An essential capability of visually-guided robots is to identify interesting and salient objects in their field of view for accurate scene parsing and, ultimately, for making important operational and navigational decisions. We are investigating robust and efficient solutions for real-time visual attention modeling by AUVs (autonomous underwater vehicles) and ROVs (remotely operated vehicles) in subsea exploration tasks. We are currently exploring novel deep visual learning-based approaches to design a generalizable solution that outperforms existing approaches on challenging test cases and offers fast end-to-end run-times on single-board platforms, in addition to achieving state-of-the-art performance on benchmark datasets. Extending my previous work on this “where to look” problem (see the SVAM-Net paper and project), we are integrating an acoustic sensing modality and active learning capabilities to improve onboard perception performance in real-time applications.
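For readers unfamiliar with the setup, the sketch below shows the general shape of such a saliency predictor: a small fully-convolutional encoder-decoder that maps a camera frame to a per-pixel “where to look” map. This is a hypothetical toy network for illustration only, not SVAM-Net or any model from our projects; the layer sizes and input resolution are assumptions chosen for single-board-scale inference.

```python
import torch
import torch.nn as nn

class TinySaliencyNet(nn.Module):
    """Toy fully-convolutional net: RGB frame -> per-pixel saliency map in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySaliencyNet().eval()
frame = torch.rand(1, 3, 240, 320)           # stand-in for a low-resolution camera frame
with torch.no_grad():
    saliency = model(frame)                  # (1, 1, 240, 320) attention map
```

The real research questions lie in making such predictors generalize across water conditions and run fast enough on embedded hardware, which a toy network like this does not address.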

Robot perception in adverse sensing conditions

A key component of my Ph.D. research was designing robust methodologies that deal with underwater image distortions, enabling visually-guided AUVs and ROVs to perceive better in adverse sensing conditions (see the FUnIE-GAN and Deep SESR projects). Extending these works, we are further exploring thermal and sonar imaging modalities to formulate improved techniques that provide useful augmented visuals in autonomous exploration, manned/unmanned rescue operations, and other remote sensing applications. Moreover, analogous research problems have important use cases in many terrestrial products and services, such as the firefighters’ wearable technologies by 3M, aerial surveillance cameras by FLIR Systems, and low-light security products by Spi Corporations, to name a few. Despite recent advancements in interactive vision APIs and AutoML technologies, there are no universal platforms or criteria to measure the goodness of visual sensing conditions or to extrapolate the performance bounds of robot perception algorithms. We are working with the FOCUS Laboratory and other external collaborators on these problems across a variety of degraded settings. The goal of these projects is to design adaptable solutions for combating degraded machine vision by harnessing the power of online learning and deep reinforcement learning.
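As a simple illustration of what “measuring the goodness of visual sensing conditions” could look like, the sketch below scores a frame with basic contrast, sharpness, and brightness proxies and gates whether downstream perception should be trusted. These heuristics and thresholds are hypothetical stand-ins for illustration, not the criteria our projects are developing, and would need calibration per camera and deployment.

```python
import cv2
import numpy as np

def sensing_condition_score(bgr_image):
    """Return rough visibility proxies, each scaled to approximately [0, 1]."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    contrast = gray.std() / 127.5                      # spread of intensities
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # edge energy
    brightness = gray.mean() / 255.0
    return {
        "contrast": float(min(contrast, 1.0)),
        "sharpness": float(min(sharpness / 500.0, 1.0)),  # 500 is a hypothetical scale
        "brightness": float(brightness),
    }

def perception_is_trustworthy(scores, min_contrast=0.15, min_sharpness=0.1):
    """Gate downstream perception: defer or trigger enhancement when the
    scene is too flat or too blurry to trust detector outputs."""
    return scores["contrast"] >= min_contrast and scores["sharpness"] >= min_sharpness

frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)  # stand-in camera frame
scores = sensing_condition_score(frame)
print(scores, perception_is_trustworthy(scores))
```

An adaptive system would go further, learning online when and how to enhance or re-weight its inputs rather than relying on fixed thresholds.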

Robot learning from demonstration

Not all desired behaviors of a robot (intelligent agent) can be modeled as tractable optimization problems or scripted with traditional robot programming paradigms. One of our research threads identifies such problems and designs practical solutions using LfD (learning from demonstration) techniques. Our investigations so far have found promising results, as LfD provides a natural and expressive way to program artificially intelligent behavior under complex environmental constraints. It enables autonomous robots to acquire new skills by learning to imitate a human expert, which is potentially useful in a host of important robotics and automation applications. We are currently driving multiple projects on designing novel LfD capabilities for next-generation educational and companion robots. Unlike most existing work, we are investigating LfD use cases on real robotic platforms rather than toy problems or simulation environments.
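In its simplest form, LfD can be instantiated as behavior cloning: fitting a policy to (state, action) pairs recorded from expert demonstrations. The sketch below shows that idea only; the 8-D state, 2-D action, synthetic data, and network sizes are hypothetical placeholders, not the methods used in our projects.

```python
import torch
import torch.nn as nn

states = torch.rand(500, 8)          # demonstrated states (e.g., poses, sensor features)
actions = torch.rand(500, 2)         # expert's actions (e.g., velocity commands)

policy = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):             # supervised imitation of the demonstrations
    pred = policy(states)
    loss = loss_fn(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time, the learned policy maps a newly observed state to an imitated action.
new_action = policy(torch.rand(1, 8))
```

Moving from this toy setting to real robots raises the questions our projects focus on: collecting demonstrations naturally, generalizing beyond the demonstrated states, and staying safe when the policy is uncertain.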

Safe and effective human-robot cooperation

A recent study by the Pew Research Center found that over 54% of the US population thinks drones and UAVs (unmanned aerial vehicles) should not be allowed to fly in residential areas, as doing so undermines people's ability to assess context and establish trust. Similar concerns are growing across cyberspace toward numerous other human-centric robots and intelligent systems. We are trying to address these issues by devising effective technological and/or educational solutions to ensure transparency and trust. We are exploring various forms of implicit and explicit human-robot interaction for companion robots (e.g., Piaggio Fast Forward, Mabu, Staaker, Skydio, Pepper) in the manufacturing, health care, and entertainment industries. With the broader goal of ensuring safe and effective human-robot cooperation across application-specific use cases, we are working to define and quantify these interactions and to implement other socially-compliant features for companion robots.