I’m Kyle Vedder

I believe the shortest path to getting robust, generally capable robots in the real world is through the construction of systems whose performance scales with compute and data, without requiring human annotations. The world is fundamentally 3D, but most current vision systems focus on 2D data, simply due to the general availability of RGB images and strong hardware acceleration for standard processing methods (e.g. 2D convolutions). I am interested in building scalable vision systems on top of 3D sensor data (e.g. LiDAR, stereo) that reason natively in 3D, in the hope that these 3D representations are more useful for quickly and robustly learning downstream behavioral tasks than their 2D counterparts.

Background

I am a CS PhD candidate at Penn under Eric Eaton and Dinesh Jayaraman in the GRASP Lab. Motivated by my goal of developing elder care robots, my research interests lie at the intersection of:

During my undergrad in CS at UMass Amherst, I did research under Joydeep Biswas in the Autonomous Mobile Robotics Lab. My research focused on:

I have also done a number of industry internships: twice at Unidesk (a startup since acquired by Citrix), twice at Google, once at Amazon’s R&D lab, Lab126 (where I worked on their home robot, Astro), and once at Argo AI as a Research Intern under James Hays.

More Information

Updates