Multimodal Vision Research Laboratory

The Multimodal Vision Research Lab conducts fundamental and applied research that is grounded in computer vision. We simultaneously focus on publishing in top-ranked computer vision venues (e.g., CVPR, ICCV, and ECCV) and working closely with domain experts to solve problems in a broad range of disciplines, including medical imaging, transportation engineering, and astrophysics. We have a supportive culture where people can do impactful research, learn new skills, and achieve their personal goals.

While most of our members have computer science (CS) backgrounds and/or are working on CS degrees, we encourage people with strong computational and quantitative skills from other disciplines to apply, including, but not limited to, Electrical Engineering, Physics, Mathematics, Statistics, and Data Science. Lab members primarily implement computer vision algorithms using Python and PyTorch. In addition, most of our projects involve some aspects of machine learning, image/signal processing, linear/nonlinear optimization, and large-scale data processing.
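To give prospective applicants a sense of the day-to-day tooling, here is a minimal, purely illustrative PyTorch sketch of the kind of code lab members write: a tiny convolutional network for image classification. The model name, layer sizes, and input shape are arbitrary assumptions for illustration, not taken from any lab project.

```python
# Illustrative sketch only: a tiny convolutional network in PyTorch.
# All names and dimensions here are hypothetical examples.
import torch
import torch.nn as nn


class TinyConvNet(nn.Module):
    """A minimal CNN: one conv layer, global pooling, and a linear classifier."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel (RGB) input
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to a 16-dim vector
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)  # (batch, 16)
        return self.classifier(h)        # (batch, num_classes)


model = TinyConvNet()
logits = model(torch.randn(2, 3, 32, 32))  # a batch of two 32x32 RGB images
print(logits.shape)
```

Real lab projects are of course far larger, but they build on exactly these building blocks: `nn.Module` subclasses, tensor operations, and standard training loops.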

Current Openings (updated Mar 10, 2021)

We have several active openings for graduate and postdoctoral researchers (starting in Summer and Fall 2021). Topics include:

In all cases, the focus will be on developing image-understanding systems using deep-learning techniques. These projects are collaborative with industrial partners and require US citizenship or permanent resident status.

General information and expectations for each type of position:


We look holistically at each applicant to determine if they are a good fit for our current and future needs. We generally look for new members with the following qualifications:


The following documents should be sent to the lab director, Dr. Nathan Jacobs. In addition, be sure to state the type of position you are interested in (postdoc, graduate, or undergraduate), your funding situation, when you are available to start, and anything else you think might be helpful.