Bio
I am a Ph.D. student in the Robotics Institute at Carnegie Mellon University (CMU), advised by Jeffrey Ichnowski.
I am currently a research intern in the DUSt3R Group at NAVER Labs Europe, advised by Jérôme Revaud and Vincent Leroy.
My interests lie at the intersection of perception and robot manipulation of challenging objects, such as deformable and transparent objects.
I am a recipient of the 2023 CMLH Fellowship in Digital Health Innovation.
During my first year at CMU, I worked with Sebastian Scherer on geometric camera calibration.
Prior to that, I completed Bachelor's and Master's degrees in Aerospace Engineering at Delft University of Technology in the Netherlands. Advised by Guido de Croon, I studied efficient bio-inspired algorithms for fully autonomous nano drones.
In 2019, I was a visiting student at Vijay Janapa Reddi's Edge Computing Lab at Harvard University, where we studied deep reinforcement learning for tiny robots.
I am passionate about creating a future where complex robotic automation is scalable, safe, and beneficial.
Students
I am always looking for collaborators; shoot me an email if you would like to work with me!
Deformable objects are common in household, industrial, and healthcare settings. Tracking them would unlock many applications in robotics, generative AI, and AR.
How? Check out DeformGS: a method for dense 3D tracking and dynamic novel view synthesis on deformable cloths in the real world.
We present Cloth-Splatting: a method for accurate state estimation of deformable objects from RGB supervision.
Cloth-Splatting leverages a graph neural network (GNN) as a prior to improve tracking accuracy and speed up convergence.
Online 3D tracking can unlock many new applications in robotics, AR and VR. Most prior works have focused on offline tracking, requiring an entire sequence of posed images.
Here we present DynOMo, a method for simultaneous 3D tracking, 3D reconstruction, novel view synthesis and pose estimation!
Robots often operate in the same area, such as a kitchen. In this work, we propose Residual-NeRF, a method to improve depth perception and training speed for transparent objects.
By first learning a background NeRF of the scene without the transparent objects to be manipulated, we improve depth perception quality and speed up training.
Gas-source localization is an important task for autonomous robots. We present GSL-Bench, the first standardized benchmark for gas-source localization.
GSL-Bench uses NVIDIA Isaac Sim for high visual fidelity, and OpenFOAM for realistic gas simulations.
In this work, we present our methodology for accurate wide-angle camera calibration. Our pipeline generates an intermediate camera model and leverages it to iteratively improve feature detection and, ultimately, the estimated camera parameters.
We have developed a swarm of autonomous tiny drones that can localize gas sources in unknown, cluttered environments. Bio-inspired AI allows the drones to tackle this complex task without any external infrastructure.
We present fully autonomous source seeking onboard a highly constrained nano quadcopter, contributing application-specific system design and observation-feature design that enable onboard inference of a deep-RL policy.
This paper describes the computer vision and control algorithms used to achieve autonomous flight with a ~30 g tailless flapping-wing robot, which we used to compete in the indoor micro air vehicle competition at the International Micro Air Vehicle Conference and Competition (IMAV 2018).