My research focuses on designing the sensing and motion behaviors of mobile robots so that each becomes more “informative” for the other. In particular, I propose novel machine learning approaches to address these problems under realistic challenges. Several research topics in which I’ve been directly involved are listed below, but I have a broader interest in improving the autonomy of embodied AI systems.

Anomaly Detection

I’ve been interested in creating anomaly detectors trained “only” on normal, typical observations, because such data can often be obtained far more easily. For instance, imagine that you need a detector of panicked crowds in videos: collecting such samples is very challenging compared to the huge amount of footage of normal crowds already available from diverse sources. I’ve therefore been developing novel methods that fully utilise frequent normal samples to effectively identify rare “anomalous” examples. Techniques I have found useful so far include Self-supervised Learning and Generative Adversarial Networks, which I applied to visual data from strawberry farms🍓 and ant colonies🐜, respectively. Similarly, taking motions that deliberately drive a robot towards “outlier” states has also proved useful for speeding up robotic learning!
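To make the “learn only from normal data” idea concrete, here is a minimal sketch, not my actual detectors: it fits a simple Gaussian model of normality and flags inputs whose Mahalanobis distance exceeds a threshold chosen from normal data alone. The data, dimensions, and threshold percentile are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: "normal" samples cluster near the origin.
normal = rng.normal(0.0, 1.0, size=(500, 10))

# Model normality with a mean and (regularised) covariance fitted to
# normal samples only -- no anomalous examples are ever seen.
mu = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False) + 1e-6 * np.eye(10)
cov_inv = np.linalg.inv(cov)

def anomaly_score(x):
    """Mahalanobis distance of x from the fitted 'normal' distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# The decision threshold also comes from normal data only,
# e.g. the 99th percentile of training scores (an assumed choice).
scores = np.array([anomaly_score(x) for x in normal])
threshold = np.quantile(scores, 0.99)

def is_anomalous(x):
    return anomaly_score(x) > threshold

outlier = np.full(10, 6.0)  # far from anything seen during training
```

In practice, the Gaussian model would be replaced by a learned representation (e.g. self-supervised features or a GAN discriminator), but the structure — score from a model of normality, threshold from normal data — stays the same.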


Informative Path Planning

Mobile robots could be a useful instrument for regularly monitoring an area of interest and reporting any change in a particular spatial phenomenon there, e.g., soil temperature, humidity, or compaction in agricultural applications. However, due to limited resources such as battery life🔋 and mission time⏰, their sensing paths must be optimised to visit only the most “informative” locations for spatial prediction, rather than every grid cell. Gaussian Process Regression has been widely used in the literature since it enables 1) information-theoretic evaluation of candidate locations and 2) online learning as new measurements are acquired. My interest lies in integrating such traditional methods with more modern deep learning algorithms to better optimise robotic sampling paths.
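A minimal sketch of the GP-based idea, under simplifying assumptions (1-D transect, unit prior variance, greedy uncertainty sampling rather than a full path planner): the robot repeatedly moves to the candidate cell where the GP posterior variance is highest, i.e. where a measurement would be most informative.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two sets of 1-D locations."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_variance(X_visited, X_cand, noise=1e-4):
    """GP posterior variance at candidate locations (prior variance 1)."""
    K = rbf(X_visited, X_visited) + noise * np.eye(len(X_visited))
    Ks = rbf(X_cand, X_visited)
    return 1.0 - np.einsum('ij,jk,ik->i', Ks, np.linalg.inv(K), Ks)

# Hypothetical 1-D transect discretised into candidate cells.
candidates = np.linspace(0.0, 10.0, 101)
visited = np.array([0.0])  # the robot starts at one end

# Greedy uncertainty sampling: go where the model is least certain.
for _ in range(4):
    var = gp_variance(visited, candidates)
    nxt = candidates[np.argmax(var)]
    visited = np.append(visited, nxt)
```

The greedily chosen locations spread out across the transect, since each visit reduces the variance in its neighbourhood. A real planner would additionally trade off this information gain against travel cost and battery constraints.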


Multi-robot Coordination via Teammate Modeling

A team of robots can achieve much more than a single robot. For strategic team maneuvers, communication is key, but what is the best way to communicate? You can imagine explicit signals carrying particular messages, but we humans often use “implicit” communication, for example, showing a simple gesture👋 in the hope that our co-worker understands what we actually meant by it. To enable such communication, I’ve worked on robotic algorithms that model the “mapping” from indirect observations to the actual motions of teammates, so that after learning, individual robots can take actions complementary to the others’. As an application, I introduced the so-called Remote Teammate Localization (ReTLo) problem, in which a robot must infer the poses of distant teammates only from observations of a nearby robot, in order to maneuver proactively for the team.
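The core of such teammate modeling can be sketched as a regression problem. The example below is a deliberately simple linear stand-in, not the ReTLo method itself: from a hypothetical training log of paired poses, it learns the mapping from a nearby teammate’s pose to a distant teammate’s pose, which a robot could then query online. All data and the “formation” transform are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training log: (x, y) poses of a nearby teammate and the
# corresponding poses of a distant teammate, recorded during past missions.
nearby = rng.uniform(-5.0, 5.0, size=(200, 2))
true_map = np.array([[1.0, 0.3], [-0.2, 0.8]])  # unknown team "formation"
offset = np.array([2.0, -1.0])
distant = nearby @ true_map + offset + rng.normal(0.0, 0.05, size=(200, 2))

# Learn the mapping with linear least squares
# (a bias term is added via an appended column of ones).
A = np.hstack([nearby, np.ones((200, 1))])
W, *_ = np.linalg.lstsq(A, distant, rcond=None)

def infer_distant_pose(nearby_pose):
    """Predict a distant teammate's pose from a nearby observation."""
    return np.append(nearby_pose, 1.0) @ W
```

In the actual problem setting, the observations are far richer (and the mapping nonlinear, calling for a learned model), but the principle is the same: once the mapping is learned, a robot can act on teammates it cannot directly see.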