Master Student Projects

A variety of student projects are available for current U-M students pursuing a master's degree.

Projects

UMTRI Project #1: Adaptive Safety Designs for Injury Prevention: Human Modeling and Impact Simulations

Faculty Mentor: Jingwen Hu, jwhu@umich.edu 

Prerequisites: 

  • Proficiency in MATLAB or other programming tools
  • Interest in machine learning, statistical modeling, and/or injury biomechanics research
  • Demonstrated ability in 3D human geometry and/or FE model development and application is a plus

Project Description: Unintentional injuries, such as those occurring in motor vehicle crashes, falls, and sports, are a major public health problem worldwide. Finite element (FE) human models have the potential to estimate tissue-level injury responses better than any other existing biomechanical tools. However, current FE human models were primarily developed and validated for midsize men, yet significant morphological and biomechanical variations exist in human anatomy. The goals of this study are to develop parametric human geometry and FE models that account for the geometric variations in the population, and to conduct a feasibility study using population-based simulations to evaluate the influence of human morphological variation on impact responses in motor-vehicle crashes and sport-related head impacts. Specifically, students will use medical image analysis and statistical/machine-learning methods to quantify the geometric variance of the skeleton across the population; use mesh morphing methods to rapidly morph a baseline human FE model into a large number of human models spanning a wide range of sizes and shapes for both males and females; conduct impact simulations with those models; and use machine-learning models to build surrogate models for injury assessment toward adaptive safety designs.
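
To illustrate the kind of statistical shape analysis described above, here is a minimal Python sketch of a principal-component shape model built from homologous (point-correspondent) meshes, then used to generate a new geometry by perturbing component scores. The array names and the synthetic data are placeholders, not the project's actual models or pipeline.

```python
# Minimal sketch: statistical shape model over homologous (point-correspondent) meshes.
# `meshes` is a placeholder (n_subjects, n_vertices, 3) array; real data would come
# from medical image analysis and mesh correspondence, which are not shown here.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_vertices = 50, 1000
meshes = rng.normal(size=(n_subjects, n_vertices, 3))  # stand-in for real scan data

X = meshes.reshape(n_subjects, -1)        # flatten each mesh to one row
mean_shape = X.mean(axis=0)
Xc = X - mean_shape

# Principal component analysis via SVD quantifies geometric variance in the sample.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_var = S**2 / (n_subjects - 1)
print("Share of variance in first 5 PCs:", explained_var[:5] / explained_var.sum())

# Generate a new geometry by perturbing the first few principal-component scores,
# analogous to morphing a baseline model across the population.
scores = np.zeros(len(S))
scores[:3] = [2.0, -1.0, 0.5]             # e.g., +2 SD on PC1, -1 SD on PC2, +0.5 SD on PC3
new_shape = mean_shape + (scores * np.sqrt(explained_var)) @ Vt
new_mesh = new_shape.reshape(n_vertices, 3)
```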

Research Mode: In Lab, Online or Hybrid

UMTRI Project #2: Driver State Monitoring for Automated Vehicles

Faculty Mentor: Monica L.H. Jones, mhaumann@umich.edu 

Prerequisites: Motivated students, keen to work both independently and within a group. Some experience with scientific programming languages (e.g., Python, MATLAB) is required. Familiarity with computer vision programming is desired.

Project Description: With increasing automation (SAE Levels 2 and 3), the role of the driver will transition from Driver Driving (DD) to Driver Not Driving (DND). Freed from the operational tasks of driving, drivers will have a much larger behavioral repertoire. Driver state monitoring (DSM) systems attempt to predict the driver's readiness to respond to a takeover request or other emerging need, using information obtained from cameras and other sensors. These systems face several challenges in comprehensively tracking the continuum of possible driver postures and behaviors. Many research questions persist with respect to the efficacy and effectiveness of DSM systems. The results of this project may identify disallowed states and provide further design guidance for DSMs.

This project explores the characteristics and behaviors associated with non-nominal postures, driver engagement, monitoring, and state levels under day and night conditions. It also seeks to quantify driver responses to unscheduled automated-to-manual (non-critical) transitions in L3 automated driving conditions. Data were gathered at the American Center for Mobility closed test facility. Continuous measures during in-vehicle exposures include 2D image and 3D depth data, physiological responses, driver performance and behavior data, vehicle data, and available DSM outputs.

Student researchers will also assist with data analysis and develop image-processing and/or computational models that predict driver engagement.
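
As a rough sketch of the modeling side, the snippet below trains a simple classifier to predict an "engaged / not engaged" label from per-frame features. The feature names, labels, and synthetic data are hypothetical stand-ins for quantities that could be derived from the 2D image, 3D depth, and physiological streams listed above; they are not the project's actual variables.

```python
# Toy sketch: classify driver engagement from per-frame features (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 2000
features = np.column_stack([
    rng.uniform(-45, 45, n),   # head yaw (deg), hypothetical
    rng.uniform(-30, 30, n),   # head pitch (deg), hypothetical
    rng.uniform(0, 1, n),      # fraction of gaze on road over last 10 s, hypothetical
    rng.uniform(50, 110, n),   # heart rate (bpm), hypothetical
])
labels = (features[:, 2] > 0.5).astype(int)   # placeholder "engaged" label

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```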

Research Mode: In Lab, Hybrid

UMTRI Project #3: Augmented Virtual Reality (AVR) based Driving Scenario Simulation and Analysis

Faculty Mentor: Shan Bao, shanbao@umich.edu 

Prerequisites: Motivated students who are comfortable working in a large group. Website development skills are a great plus!

Project Description: When evaluating and testing automated vehicle technologies, it is challenging and expensive to test prototype systems using real cars on real roads. Ideally, sensor and vehicle control system parameters are first tested and evaluated under a variety of simulated scenarios. This work is supported by multiple sponsors with several focus areas. The work is designed to simulate 2D and/or 3D real-world driving scenarios in a virtual environment through software (e.g., CARLA or CarSim) or virtual reality techniques. Student interns will work with exciting concepts, interact directly with our industry sponsors, and implement their simulation results through hands-on experience. We are looking for multiple motivated student helpers. Training on the relevant software (e.g., CARLA or CarSim) and hardware (AVR headsets) is available.

The research team will work directly with industry experts on this project. Students will gain hands-on experience instrumenting and testing AVs at Mcity.
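
For a sense of what scripting a scenario looks like, here is a minimal sketch using the CARLA Python API (one of the simulators named above). It assumes a CARLA server is running on localhost:2000; the vehicle choice, spawn point, and logging are illustrative only and do not reflect the project's actual scenarios.

```python
# Minimal sketch: spawn a vehicle in CARLA, let the traffic manager drive it,
# and log its position and speed for a few seconds. Assumes a local CARLA server.
import time
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)
vehicle.set_autopilot(True)          # hand control to the traffic manager

try:
    for _ in range(100):             # log for roughly 10 seconds
        loc = vehicle.get_location()
        vel = vehicle.get_velocity()
        speed = (vel.x**2 + vel.y**2 + vel.z**2) ** 0.5
        print(f"x={loc.x:.1f} y={loc.y:.1f} speed={speed:.1f} m/s")
        time.sleep(0.1)
finally:
    vehicle.destroy()
```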

Research Mode: Online or Hybrid

UMTRI Project #4: Safety and Independence of Passengers in Wheelchairs Using Automated Vehicles and Aircraft

Faculty Mentor: Kathleen D. Klinich, kklinich@umich.edu 

Prerequisites: Strong technical writing skills, experience with spreadsheet/data analysis, mechanical design/controls experience, and an interest in improving user travel experience and working with people who have disabilities.

Project Description: We have multiple projects to ensure that people who travel while seated in their wheelchairs can do so safely and independently in automated vehicles, where there may not be a driver to assist in securing the wheelchair, or in aircraft, where personal wheelchair use is not currently allowed. Student researchers could help measure the posture and shape of volunteers who use wheelchairs, help with dynamic test fixture design and laboratory testing, assist with data analysis, or help create computational models of wheelchair geometry.

Research Mode: In Lab, Hybrid

UMTRI Project #5: Motion Sickness to Inform Automated Vehicle Design

Faculty Mentor: Monica L.H. Jones, mhaumann@umich.edu 

Prerequisites: Motivated students, keen to work both independently and within a group. Some experience with scientific programming languages (e.g., Python, MATLAB) is required. Familiarity with computer vision programming is desired.

Project Description: Motion sickness in road vehicles may become an increasingly important problem as automation transforms drivers into passengers. However, the lack of a definitive etiology of motion sickness challenges the design of automated vehicles (AVs) to effectively address and mitigate motion sickness susceptibility. Quantifying motion sickness severity and identifying objective parameters is fundamental to informing future countermeasures. Data were gathered on-road and on the Mcity and Michigan Proving Ground test facilities. Continuous measures include 2D image and 3D depth data, thermal imaging, physiological responses, vehicle data, and self-reported motion sickness responses. Modeling efforts will elucidate relationships among the factors contributing to motion sickness for the purpose of generating hypotheses and informing future countermeasures for AVs.

Students will gain hands-on experience instrumenting and testing AVs at Mcity. Student researchers will also assist with data analysis or develop computational models that detect and predict passenger motion sickness.
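
One commonly used objective summary of motion exposure is the motion sickness dose value (MSDV) from ISO 2631-1, computed from frequency-weighted acceleration. The project description does not say which metrics it uses, so the sketch below is illustrative only, and the band-pass filter is only a crude stand-in for the standard's w_f weighting.

```python
# Illustrative sketch: MSDV-style summary of a vertical acceleration record.
# Sample rate, signal, and the simple band-pass "weighting" are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # sample rate (Hz), assumed
t = np.arange(0, 600, 1 / fs)                # 10 minutes of driving
accel_z = 0.3 * np.sin(2 * np.pi * 0.2 * t)  # placeholder for measured accel (m/s^2)

# Crude stand-in for the w_f frequency weighting (emphasizes ~0.1-0.5 Hz motion).
b, a = butter(2, [0.08, 0.5], btype="bandpass", fs=fs)
a_w = filtfilt(b, a, accel_z)

msdv = np.sqrt(np.trapz(a_w**2, dx=1 / fs))  # units: m/s^1.5
print(f"MSDV over exposure: {msdv:.2f} m/s^1.5")
```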

Research Mode: In Lab, Hybrid

UMTRI Project #6: Implementation for Automated Vehicle Intelligent Lane-Change Function

Faculty Mentor: Brian T. W. Lin, btwlin@umich.edu 

Prerequisites:

  • Strong experience with Python and Linux; experience with projects using ROS 2 is a huge plus
  • Strong communication skills and teamwork experience

Project Description: 

Before initiating a lane change, an autonomous vehicle needs to decide when and how the lane change can be safely executed, according to the vehicle telematics, ramp geometry, and the positions of other road users. The research team has previously developed and evaluated data-driven decision-making models. The ultimate goal of this project is to implement these models on Mcity's Lincoln MKZ AV, which is equipped with a DataSpeed drive-by-wire kit. Before that, computer simulations will first be conducted, followed by testing of the external signal input through RTK and evaluation of model performance, communications among the different entities, and safety issues.

Qualified students will help program the lane-change models based on the existing path-following software. They will also program in Python to subscribe to and broadcast ROS topics that control the AV, subscribe to GPS data as input for the decision model, conduct the simulation and test-track experiments at Mcity, and analyze the data.
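
The snippet below is a minimal ROS 2 (rclpy) sketch of that subscribe/publish pattern: a node takes in GPS fixes and publishes a lane-change decision. The topic names, the String decision message, and the placeholder logic are assumptions for illustration, not the project's actual interfaces.

```python
# Minimal ROS 2 sketch: subscribe to GPS fixes, publish a lane-change decision.
# Topic names, message types, and the echo-style "decision" are placeholders.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import NavSatFix
from std_msgs.msg import String


class LaneChangeDecider(Node):
    def __init__(self):
        super().__init__("lane_change_decider")
        self.sub = self.create_subscription(NavSatFix, "/gps/fix", self.on_fix, 10)
        self.pub = self.create_publisher(String, "/lane_change/decision", 10)

    def on_fix(self, msg: NavSatFix):
        # A real decision model would also use telematics, ramp geometry, and the
        # positions of other road users; here we just publish a placeholder.
        decision = String()
        decision.data = f"hold_lane (lat={msg.latitude:.5f}, lon={msg.longitude:.5f})"
        self.pub.publish(decision)


def main():
    rclpy.init()
    node = LaneChangeDecider()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```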

Research Mode: In Lab (mostly), Hybrid

UMTRI Project #7: Online Parametric 3D Wheelchair Model Development 

Faculty Mentor: B-K. Daniel Park, keonpark@umich.edu 

Prerequisites:

  • Proficiency in computer programming languages (JavaScript preferred)
  • Familiarity with computer-aided design (CAD) is desired

Project Description:

The proposed study hypothesizes that digital tools that can represent the diversity of wheelchair 3D geometries will significantly improve vehicle designs for better accommodation and safety of wheelchair-seated occupants. Three-dimensional (3D) wheelchair shape data were collected from commercial wheelchair products and will be categorized into a few groups based on their functional shapes. In this project, a series of online parametric wheelchair models will be developed using an open-source modeling tool (OpenJS). The 3D shapes of the wheelchairs in each category will first be simplified according to the functionality of the wheelchairs, and key dimensions will be derived from statistical analysis to represent the simplified shapes as well as the main functions. These dimensions will be used as shape parameters of the online model, and an intuitive and easy-to-use graphical user interface (GUI) will be implemented to control the model parameters.
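
As a concept-only illustration of the parametric idea (a few key dimensions driving a simplified geometry), here is a small Python sketch. The actual tool will be JavaScript-based as described above, and the dimension names and box decomposition below are hypothetical placeholders, not the project's parameterization.

```python
# Concept sketch: key dimensions drive a simplified wheelchair geometry.
# Dimension names, default values, and the two-box decomposition are hypothetical.
from dataclasses import dataclass


@dataclass
class WheelchairParams:
    seat_width: float = 0.45        # m
    seat_depth: float = 0.42        # m
    seat_height: float = 0.50       # m
    backrest_height: float = 0.45   # m


def simplified_boxes(p: WheelchairParams):
    """Return axis-aligned boxes (min_xyz, max_xyz) for a simplified wheelchair."""
    seat = ((0.0, 0.0, p.seat_height),
            (p.seat_width, p.seat_depth, p.seat_height + 0.05))
    backrest = ((0.0, p.seat_depth - 0.05, p.seat_height),
                (p.seat_width, p.seat_depth, p.seat_height + p.backrest_height))
    return {"seat": seat, "backrest": backrest}


# Changing one parameter regenerates the whole simplified geometry.
print(simplified_boxes(WheelchairParams(seat_width=0.40)))
```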

Research Mode: In Lab, Remote, Hybrid

UMTRI Project #8: Automated Vehicle Malfunction and Coping Strategies Development

Faculty Mentor: Shan Bao, shanbao@umich.edu

Prerequisites: Team players who are motivated to work with other group members. Human factors knowledge and/or text mining experience is a plus!

 Project Description: 

Automated systems that control/drive a vehicle or assist a driver may fail or malfunction at any time while driving in traffic and lead to crashes. This Mcity-sponsored project is designed to understand the typical and important failure types and taxonomies for automated vehicle systems that are currently on the road, as well as to develop coping strategies that mitigate the hazards of such vehicle failures and support safe, efficient responses by drivers of both the subject vehicle and surrounding vehicles. A hybrid approach is proposed to address the research questions both qualitatively and quantitatively.

The research team will work directly with industry experts on this project. Students will gain hands-on experience instrumenting and testing AVs at Mcity.

Research Mode: In-lab (Mcity testing), Online or Hybrid

UMTRI Project #9: A Tool for Augmented Reality (AR) Assisted Surgery: 3D Human Modeling and Visualization

Faculty Mentor: Jingwen Hu, jwhu@umich.edu

Prerequisites: Proficiency in computer programming languages (C#, C++, Unity, Python, etc.). Previous experience using Microsoft HoloLens is a plus.

Project Description: An AR-assisted surgery tool will provide a composite view between computer-generated patient anatomy and a surgeon's view of the operative field, which may lead to a more precise understanding of the detailed anatomy and significantly increase accuracy in tumor localization and resection. In this study, we will focus on a software tool that can address the rapid development of computer anatomy models and accurate registration between the anatomy model and the real patient geometry, which are the two key aspects of AR-assisted surgery tools. We plan to use an AR device, the Microsoft HoloLens, as the main hardware to demonstrate the software capability, although our software should not be limited to the HoloLens. In this study, we will use liver surgery as an example, so the medical images and anatomy models will focus only on the liver and the surrounding tissues. Because the liver is the largest solid organ in the abdomen, is pliable, and can have its anatomy altered by operative interventions, it poses significant challenges for model registration, which makes it a good test for the AR-assisted surgery tool. For surgeons who have to deal with complex anatomical structures that are not always visible, the proposed AR-assisted surgery tool will provide much-needed understanding of anatomic relations beneath the surface, and will likely lead to better accuracy, safer resection, fewer complications, and superior surgical outcomes.
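
Registration between the anatomy model and the patient geometry is one of the two key aspects named above. The sketch below shows a standard rigid alignment of corresponding 3D points via the Kabsch/SVD method, a generic building block (e.g., the inner step of ICP), not necessarily the registration approach this project will adopt; the point sets are synthetic.

```python
# Sketch of a standard rigid registration step (Kabsch/SVD) aligning corresponding
# points from a preoperative model to points measured on the patient (synthetic data).
import numpy as np


def rigid_align(source: np.ndarray, target: np.ndarray):
    """Return rotation R and translation t so that R @ source_i + t ~= target_i."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t


rng = np.random.default_rng(2)
model_pts = rng.normal(size=(200, 3))              # points on the anatomy model
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random ground-truth rotation
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1                             # keep it a proper rotation
patient_pts = model_pts @ true_R.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_align(model_pts, patient_pts)
residual = np.abs(model_pts @ R.T + t - patient_pts).max()
print(f"max alignment residual: {residual:.2e}")
```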

Research Mode: In Lab, Online, Hybrid

UMTRI Project #10: A Science-Based Standard for Describing Driver Performance

Faculty Mentor: Paul Green, pagreen@umich.edu

Prerequisites: none, but being a licensed driver is helpful

Project Description:  

We are developing the next version of SAE Recommended Practice J2944 (Operational Definitions of Driving Performance Measures and Statistics) or its successor. This standard defines measures such as headway gap, time-to-collision, time-to-line crossing, post-encroachment time, and about 50 other measures of driving performance, and provides representative statistical data for each statistic based on those measures. The previous version of J2944 (170 pages, 300+ references) had a limited amount of representative data for each statistic, which will be greatly expanded here. In addition, to support the Army, we have added off-road driving performance measures as well. This research is quite fundamental in that it is defining the science of driving, but quite applied in that we need real-world data to support what we do. Think of this project as being like inventing the metric system for driving, although there are far more measures.
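
As a small worked example, the snippet below computes two of the measures J2944 covers, time headway and time-to-collision, using one common operational definition of each; the standard enumerates multiple variants, so this is illustrative rather than the standard's text.

```python
# Worked example for two driving performance measures (one common definition of each).
def time_headway(range_m: float, following_speed_mps: float) -> float:
    """Time headway: bumper-to-bumper range divided by following-vehicle speed."""
    return range_m / following_speed_mps


def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """TTC: range divided by closing speed; undefined (inf) when not closing."""
    return range_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")


# Following 30 m behind a lead vehicle at 25 m/s while the lead travels at 20 m/s:
print(f"headway = {time_headway(30, 25):.1f} s")            # 1.2 s
print(f"TTC     = {time_to_collision(30, 25 - 20):.1f} s")  # 6.0 s
```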

Research Mode: In Lab (possibly), Online, Remote, Hybrid

UMTRI Project #11: Driving Simulator Development – Unreal Engine

Faculty Mentor: Paul Green, pagreen@umich.edu

Prerequisites: none, but being a licensed driver is helpful; knowledge of Unreal is helpful

Project Description:  

We have a number of projects with the U.S. Army related to driving combat vehicles. In support of them, we need to develop a simulation in Unreal of driving in a specific virtual world, adding sound, vehicle dynamics, minimaps, and a HUD to represent a particular vehicle. We also need to record driving performance in real time. We know this is feasible because a student completed elements of this in the past, but the documentation is incomplete and we need to add more features. We have requested hardware for this task from the Army. As part of this effort, we are developing computational models to predict the demand of driving as a function of road geometry, traffic, weather, and other factors.

Research Mode: In Lab (possibly), Online, Remote, Hybrid

UMTRI Project #12: Continuing Development of a Manned Driving Simulator

Faculty Mentor: Paul Green, pagreen@umich.edu

Prerequisites: none, but being a licensed driver is helpful; knowledge of Python is helpful

Project Description:  

For more than two years, various MDP teams have been working on the development of a driving simulator that includes a moving-base cab for studies of human interaction with partially automated and automated vehicles. Our focus is on three elements: (1) a GUI to allow for the rapid creation of experiments (especially scenarios and vehicle placement), (2) the ability to import virtual worlds, and (3) control of a 2-DOF motion platform (pitch and roll) in real time. As part of the second element, we are creating a simulation of I-94 from Ann Arbor to the Detroit airport. The underlying code runs under Linux and uses CARLA and RoadRunner.
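
One common way to drive a pitch/roll-only motion base is "tilt coordination": tilting the cab so a component of gravity stands in for sustained longitudinal or lateral acceleration. The project's actual motion-cueing algorithm is not specified above, so the sketch below is illustrative only, and the angle and rate limits are placeholder values.

```python
# Illustrative tilt-coordination sketch for a 2-DOF (pitch/roll) motion platform.
# Limits are placeholders; real cueing algorithms also include washout filtering.
import math

G = 9.81                        # m/s^2
MAX_ANGLE = math.radians(10)    # keep tilt small so it is not perceived directly
MAX_RATE = math.radians(3)      # rad/s limit on how fast the platform tilts


def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))


def tilt_angles(accel_x: float, accel_y: float) -> tuple[float, float]:
    """Map sustained longitudinal/lateral acceleration (m/s^2) to pitch/roll (rad)."""
    pitch = clamp(math.asin(clamp(accel_x / G, -1.0, 1.0)), -MAX_ANGLE, MAX_ANGLE)
    roll = clamp(math.asin(clamp(accel_y / G, -1.0, 1.0)), -MAX_ANGLE, MAX_ANGLE)
    return pitch, roll


def rate_limit(previous: float, target: float, dt: float) -> float:
    """Slew toward the target angle no faster than MAX_RATE."""
    return previous + clamp(target - previous, -MAX_RATE * dt, MAX_RATE * dt)


pitch_cmd, roll_cmd = tilt_angles(accel_x=1.5, accel_y=-0.8)  # example accelerations
print(f"pitch={math.degrees(pitch_cmd):.1f} deg, roll={math.degrees(roll_cmd):.1f} deg")
```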

Research Mode: In Lab (mostly), Online, Remote, Hybrid

UMTRI Project #13: Development and Implementation of Software Tools for Human Centered Design

Faculty Mentor: Matt Reed, mreed@umich.edu

Prerequisites: Prior experience with Python and/or R

Project Description:

The Biosciences Group has developed a wide range of statistical models of human posture and body shape for use in human-centered design. However, the complexity of these models is such that relatively few people are able to use them. The goal of this project is to make more of these models available online for people around the world to use for human centered design. (As an example, see: http://humanshape.org/). The tools include interactive analysis of standard anthropometry (body dimensions), three-dimensional anthropometry, head and face geometry, and vehicle occupant postures.

The student(s) will work with the faculty to develop and deploy design tools using Python, R, JavaScript, and other languages and environments.  Applications may also be developed for implementation in open-source tools such as FreeCAD and Blender3D. 
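
To show what "deploying a model online" can look like at its simplest, here is a sketch that wraps a placeholder anthropometry model in a small web service, in the spirit of humanshape.org. Flask, the endpoint path, and the toy linear model and coefficients are illustrative assumptions; they are not UMTRI's actual models.

```python
# Minimal sketch of serving a statistical anthropometry model over HTTP.
# The endpoint and the toy linear model below are placeholders for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict_sitting_height(stature_mm: float, bmi: float) -> float:
    # Placeholder linear model; a real deployment would load fitted coefficients.
    return 0.52 * stature_mm + 1.5 * bmi


@app.route("/predict")
def predict():
    stature = float(request.args.get("stature_mm", 1750))
    bmi = float(request.args.get("bmi", 24))
    return jsonify({"sitting_height_mm": predict_sitting_height(stature, bmi)})


if __name__ == "__main__":
    app.run(port=5000)   # then e.g. GET /predict?stature_mm=1650&bmi=22
```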

Research Mode: In-person, Remote, or Hybrid

UMTRI Project #14:  Vehicle Position-in-Lane: Ground Truth System

Faculty Mentor: Dave LeBlanc, leblanc@umich.edu 

Prerequisites: Programming experience. Experience with MATLAB and/or image processing is encouraged but not required.

Project Description: UMTRI's Engineering Systems Group uses experiments, simulations, and analytics to help industry and government sponsors (1) quantify the requirements of automated and semi-automated vehicles, and (2) design and demonstrate test methods to ensure that vehicles meet those requirements. This project's goal is to develop an automated processing pipeline for accurately determining the position of a vehicle within its lane using downward-looking cameras. The pipeline will consist of processing the camera images and pushing the results to an SQL database for analyses, such as comparing these "ground truth" results to those generated by a prototype or production vehicle. The student will interact with and be supported by the faculty mentor and experienced research engineers.
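
The snippet below sketches one stage of that kind of pipeline: locate the lane marking in a downward-looking grayscale frame, convert it to a lateral offset, and insert the result into a SQL table. The camera calibration, threshold value, and table schema are placeholder assumptions, not the project's actual design.

```python
# Sketch of one pipeline stage: marking detection -> lateral offset -> SQL insert.
# Calibration constants, threshold, and schema are placeholders; the frame is synthetic.
import sqlite3
import numpy as np
import cv2

MM_PER_PIXEL = 2.0        # assumed scale of the downward-looking camera
CAMERA_OFFSET_MM = 0.0    # assumed lateral offset of the camera from the centerline


def lateral_offset_mm(frame_gray: np.ndarray) -> float:
    """Signed distance from image center to the bright lane-marking centroid."""
    _, mask = cv2.threshold(frame_gray, 200, 255, cv2.THRESH_BINARY)
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return float("nan")                    # no marking visible in this frame
    marking_center = cols.mean()
    return (marking_center - frame_gray.shape[1] / 2) * MM_PER_PIXEL + CAMERA_OFFSET_MM


conn = sqlite3.connect("ground_truth.db")
conn.execute("CREATE TABLE IF NOT EXISTS lane_position (t REAL, offset_mm REAL)")

frame = np.zeros((480, 640), dtype=np.uint8)
frame[:, 500:520] = 255                        # synthetic lane-marking stripe
conn.execute("INSERT INTO lane_position VALUES (?, ?)",
             (0.0, lateral_offset_mm(frame)))
conn.commit()
print(conn.execute("SELECT * FROM lane_position").fetchall())
```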

Research Mode: Hybrid (may include help with occasional hands-on testing)

UMTRI Project #15: Machine Learning for 3D Point Cloud Data via Deep Generative Models

Faculty Mentor: Wenbo Sun, sunwbgt@umich.edu 

Prerequisites: 

  • Proficiency in Python
  • Experience with deep neural networks, especially generative adversarial neural networks, and preferably diffusion neural networks
  • Experience with 3D point cloud data analysis is encouraged

Project Description: Deep generative models have been widely used for image reconstruction in the computer vision field. Going beyond conventional methodologies for 2D images, we aim to extend deep generative modeling techniques to 3D point clouds and 2.5D depth images, which have more complicated data structures and may bring about new research challenges. The goals of this study are to reconstruct high-resolution human models represented by 3D point clouds or 2.5D depth images from low-resolution samples, to validate the reconstruction accuracy with specific evaluation metrics, and to provide an empirical estimate of the joint distribution of the 3D point cloud data across the population. In particular, students will use specifically designed neural network architectures to build a generative model of the high-resolution 3D point cloud data, and will formulate and solve an optimization problem to estimate the corresponding parameters under specific regularizations. The reconstruction results are expected to be imported into existing software for visualization, along with a journal or conference paper on the proposed methodology.
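
As a minimal sketch of the ingredients described above, the code below pairs a small network that upsamples a low-resolution point cloud with a Chamfer-distance reconstruction loss. The toy MLP architecture, the loss choice, and the random data are placeholders; the project's actual models (e.g., GAN or diffusion based) would be considerably more involved.

```python
# Toy sketch: point-cloud upsampling network trained with a Chamfer-distance loss.
# Architecture, loss choice, and synthetic data are placeholders for illustration.
import torch
import torch.nn as nn

N_LOW, N_HIGH = 256, 1024


def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a (B,N,3) and b (B,M,3)."""
    d = torch.cdist(a, b)                       # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


class Upsampler(nn.Module):
    """Maps a flattened low-res cloud to a higher-res cloud (toy MLP, not PointNet)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_LOW * 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, N_HIGH * 3),
        )

    def forward(self, low: torch.Tensor) -> torch.Tensor:
        return self.net(low.flatten(1)).view(-1, N_HIGH, 3)


low = torch.randn(8, N_LOW, 3)                  # placeholder low-resolution scans
high = torch.randn(8, N_HIGH, 3)                # placeholder high-resolution targets
model = Upsampler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(50):                          # short demonstration loop
    loss = chamfer_distance(model(low), high)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final Chamfer loss: {loss.item():.3f}")
```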

Research Mode: Online or hybrid

UMTRI Project #16: Standardizing Intersections for Collision Risk Calculation

Faculty Mentor: Arpan Kusari, kusari@umich.edu 

Prerequisites: 

  • Proficiency in Python
  • Experience with statistical models such as Gaussian processes
  • Experience writing research papers

Project Description: This project is a continuation of a previous research project in which intersection collision risk for vehicles and pedestrians was defined based on possible vehicle maneuvers. However, the formulation is specific to a particular intersection. In this research, we would like to generalize the model to all manner of intersections in different datasets. The result will be an impactful research paper in the field of Intelligent Transportation Systems.
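
The sketch below shows the Gaussian-process ingredient named in the prerequisites: regressing a collision-risk surrogate over a couple of maneuver features using scikit-learn. The features (post-encroachment time and approach speed), the kernel, and the synthetic data are illustrative choices, not the earlier project's formulation.

```python
# Minimal Gaussian-process regression sketch for a collision-risk surrogate.
# Features, kernel, and data are illustrative placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
# Hypothetical features: post-encroachment time (s) and approach speed (m/s).
X = np.column_stack([rng.uniform(0.2, 5.0, 80), rng.uniform(5, 20, 80)])
risk = np.exp(-X[:, 0]) * (X[:, 1] / 20) + rng.normal(0, 0.02, 80)  # synthetic risk

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 5.0]) + WhiteKernel(),
                              normalize_y=True).fit(X, risk)
mean, std = gp.predict(np.array([[1.0, 15.0]]), return_std=True)
print(f"predicted risk at PET=1.0 s, 15 m/s: {mean[0]:.3f} +/- {std[0]:.3f}")
```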

Research Mode: Online or hybrid

UMTRI Project #17: Body Shape and Dimension Estimation from Clothed 3D Scans

Faculty Mentor: B-K. Daniel Park, keonpark@umich.edu 

Prerequisites:

  • Proficiency in computer programming languages (C#, Python, etc.)
  • Interest in computer vision/machine learning research

Project Description: Three-dimensional (3D) surface measurement has become a central component of anthropometric surveys. Modern surface scanning equipment can accurately capture the shape of the surface of the body in a few seconds. However, the practical aspects of conducting 3D scanning surveys have changed little in the past decade. In particular, participants are required to change into close-fitting garb that minimizes clothing effects on the subsequent scan. This clothing ensemble must be provided, along with suitable privacy for changing, and the consequence is that several seconds of scanning can require 10 minutes or more of preparation and considerable resources. The University of Michigan Transportation Research Institute (UMTRI) recently introduced a new body shape estimation method, "Inscribed Fitting" (IF), that is much faster than previous techniques. The IF method uses an iterative process to estimate the body shape underlying the clothing, based on the observation that the correct body shape is the largest body shape that does not protrude through the clothing. The main objective of this study is to improve the previous IF method using state-of-the-art machine learning techniques to estimate body shape and dimensions more accurately from scans of clothed individuals.
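
To make the stated observation concrete, here is a concept-only toy: the estimated body is the largest scaled template that nowhere protrudes through the clothed scan. The real IF method operates iteratively on full 3D body-shape models; this sketch uses a single 2D cross-section with synthetic "clothing" purely for illustration.

```python
# Concept-only toy of the observation behind Inscribed Fitting: pick the largest
# scaled body template that stays inside the clothed scan. Data are synthetic.
import numpy as np

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
template_radius = 0.20 + 0.02 * np.cos(2 * angles)      # simplified torso section (m)
clothing_gap = 0.03 * np.clip(np.cos(angles), 0, None)  # loose fabric on one side only
clothed_radius = 1.05 * template_radius + clothing_gap  # the observed, clothed "scan"

# Largest uniform scale of the template that nowhere protrudes through the clothing;
# it matches the true underlying scale where the fabric touches the body.
best_scale = np.min(clothed_radius / template_radius)
estimated_radius = best_scale * template_radius
print(f"estimated scale: {best_scale:.3f} (true underlying scale was 1.050)")
```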

Research Mode: Online or hybrid