I’m Kyle Vedder
I believe the shortest path to getting robust, generally capable
robots in the real world is through the construction of systems
whose performance scales with compute and data, without
requiring human annotations.
In service of this, I am interested in designing and scaling
fundamentally 3D vision systems that learn just from raw, multi-modal
data. My contrarian bet is on the multi-modal and 3D aspects: a
high-quality, 3D-aware representation learned from diverse data sources
should enable more sample-efficient and robust downstream policies. Most
representations today are 2D for historical reasons (e.g. abundant RGB
data, and 2D convolutions won the hardware lottery), but I believe this
pushes much of the 3D spatial understanding out of the visual
representation and into the downstream policy, making the policy more
expensive to learn and less robust.
For data-availability reasons, my current work is in the autonomous
driving domain, but I believe the same principles apply to other
domains, such as indoor service robots.
Background
I am a CS PhD candidate at
Penn under Eric Eaton
and Dinesh Jayaraman
in the GRASP Lab.
During my undergrad in CS at UMass Amherst, I did research under Joydeep Biswas in the Autonomous Mobile Robotics Lab.
I have also done a number of industry internships: twice at Unidesk (a
startup since acquired by Citrix), twice at Google, once at Amazon’s
R&D lab, Lab126 (where I worked on their home robot, Astro),
and once at Argo AI as a Research Intern
under James Hays.
Updates
- Mar 7th, 2024: I Can’t Believe It’s
Not Scene Flow! was posted to arXiv!
- Jan 16th, 2024: ZeroFlow was
accepted to ICLR 2024!
- Dec 4th, 2023: Book
Review: Eric Jang’s book “AI is Good for You”
- Nov 21st, 2023: I will be joining Nvidia’s AV team as a Research
Intern starting in January! I will be extending ZeroFlow in an exciting new way
(more to come!)
- Aug 3rd, 2023: Blog post:
Applying to CS PhD programs for Machine Learning: what I wish I
knew
- Jul 28th, 2023: ZeroFlow XL is now
state-of-the-art on the Argoverse
2 Self-Supervised Scene Flow Leaderboard! And we’ve only begun to
scale our method — there is plenty of performance left on the
table!
- Jul 3rd, 2023: Blog post: My ML
research development environment workflow
- Jun 18th, 2023: ZeroFlow was selected
as a highlighted method in the CVPR 2023 Workshop
on Autonomous Driving Scene Flow Challenge!
- May 18th, 2023: ZeroFlow: Scalable
Scene Flow via Distillation was submitted.
- Jan 12th, 2023: A
Domain-Agnostic Approach for Characterization of Lifelong Learning
Systems was accepted to Neural Networks.
- Jun 30th, 2022: Sparse
PointPillars was accepted to IROS 2022. (Reviews)
- Jun 7th, 2022: Invited talk for
Sparse PointPillars at 3D-DLAD
- May 15th, 2022: Joined Argo as a Research Intern!
- Mar 1st, 2022: Submitted
Sparse PointPillars to IROS 2022
- Jul 20th, 2021: Added project webpage for
X*
- Jul 8th, 2021: Poster
presented at Sparse Neural Networks on Sparse
PointPillars
- Jun 14th, 2021: Workshop paper
accepted as poster to Sparse Neural Networks: Sparse PointPillars:
Exploiting Sparsity on Birds-Eye-View Object Detection
- Apr 27th, 2021: My WPEII
Presentation: Current Approaches and Future Directions for Point
Cloud Object Detection in Intelligent Agents
- Apr 14th, 2021: My WPEII
Document: Current Approaches and Future Directions for Point Cloud
Object Detection in Intelligent Agents
- Feb 11th, 2021: Blog post: Setting up
mujoco-py
for use with on-screen and off-screen
rendering
- Nov 4th, 2020: Journal
paper accepted to Artificial Intelligence: X*: Anytime Multi-Agent
Path Finding for Sparse Domains using Window-Based Iterative
Repairs
- Jul 23rd, 2020: Presentation:
From Shapley Values to Explainable AI
- Jun 29th, 2020: Demo: Penn
Service Robots navigating around Levine
- May 8th, 2020: Term
paper: An Overview of SHAP-based Feature Importance Measures and
Their Applications To Classification