About AiPEX Lab

Our research integrates artificial intelligence with engineering design to create better products for more people, with fewer resources. We work on fundamental challenges in trustworthy AI for design simulations, accessible manufacturing, and adaptive intelligent systems.

Our work spans multiple disciplines, including mechanical engineering, machine learning, robotics, and biomedical engineering, and is grounded in both theoretical rigor and practical validation. Current projects range from teaching AI to interpret engineering simulations, to generating and evaluating designs, to training robots that learn physics-based models of their own dynamics.

People

Professor Conrad Tucker

Department of Mechanical Engineering

Also affiliated with: Machine Learning, Robotics, Biomedical Engineering

About

Professor Tucker leads research at the intersection of AI and mechanical engineering, developing systems that transform how we design, manufacture, and deploy products globally. His research focuses on the design and optimization of systems through the acquisition, integration and mining of large-scale, disparate data.

Dr. Tucker has served as PI or Co-PI on federally and non-federally funded grants from the National Science Foundation (NSF), the Air Force Office of Scientific Research (AFOSR), the Defense Advanced Research Projects Agency (DARPA), the Army Research Laboratory (ARL), the Office of Naval Research (ONR) via the NSF Center for eDesign, and the Bill and Melinda Gates Foundation (BMGF). In February 2016, he was invited by National Academy of Engineering (NAE) President Dr. Dan Mote to serve as a member of the Advisory Committee for the NAE Frontiers of Engineering Education (FOEE) Symposium. He received his Ph.D., M.S. (Industrial Engineering), and MBA degrees from the University of Illinois at Urbana-Champaign, and his B.S. in Mechanical Engineering from Rose-Hulman Institute of Technology.

Lab Members

Dr. Suresh Kumaar Jayaraman

Postdoctoral Researcher, Mechanical Engineering

About

Suresh develops adaptive autonomous systems that seamlessly integrate into human-centered environments. His research combines human behavior modeling with explainable decision-making algorithms to enable intuitive human-robot collaboration in dynamic environments. His work bridges human factors and AI capabilities to ensure autonomous systems remain adaptive, transparent, and reliable.

Jessica Ezemba

Ph.D. Candidate, Mechanical Engineering

About

Jessica’s research focuses on hardware design automation using AI, with the goal of making engineering design more efficient and less prone to errors. Recently, she has been working on a key part of the design cycle: simulations that validate designs before they are manufactured. Setting up simulations and interpreting their results in finite element analysis and computational fluid dynamics is time-consuming and error-prone because it demands specific domain expertise. AI has the potential to accelerate simulation interpretation by enabling faster integration of multidisciplinary expertise, yet even current state-of-the-art models perform poorly at understanding these simulations. Consequently, she works on developing foundation models (LLMs and VLMs) and agentic workflows that allow designers to integrate simulations into automated pipelines while keeping humans in the design process.

Conan Guo

Ph.D. Student, Mechanical Engineering

About

Conan's research focuses on applying reinforcement learning to complex engineering systems spanning multiple physical domains, including production lines and robotic systems. His work on temporal abstraction for production line control addresses the challenge of determining optimal intervention timing in manufacturing systems where outcomes emerge over long time horizons. Rather than constantly adjusting system parameters, his methods identify control intervals at which intervention outperforms continuous control. Currently, his research interests center on the credit assignment problem in robotics and reinforcement learning, with the goal of improving learning efficiency and policy interpretability.

Junghun Lee

Ph.D. Student, Mechanical Engineering

About

Junghun’s research focuses on integrating AI into design to improve efficiency and accessibility. His work on accessible 3D-printed prosthetics develops workflows that allow resource-constrained communities to design, analyze, and fabricate customized assistive devices using only smartphones and widely available 3D printing technologies. More broadly, his research explores AI models that can generate and evaluate product and material designs, enabling designers to navigate design and manufacturing spaces more efficiently than through costly physical prototyping or computationally intensive numerical analysis.

Daniel Nguyen

Ph.D. Student, Mechanical Engineering

About

Daniel's research develops methods for robots to learn and adapt to real-world physics through system identification and physics-based priors. His work focuses on improving optimal control and reinforcement learning-based controllers, addressing sim-to-real gaps in contact-rich environments such as quadrupedal locomotion and dexterous manipulation, where standard learning algorithms struggle. By combining hybrid system identification with modern learning algorithms, his methods enable model-based control with improved sample efficiency over black-box approaches.

Research

Our research develops AI methods to transform engineering design, manufacturing, and autonomous systems

AI for Engineering Design

This research develops AI foundational methods to transform the engineering design cycle by addressing three fundamental bottlenecks. First, automating the synthesis of novel design concepts across multiple engineering domains using...

Intelligent Manufacturing Systems

This research applies AI to advance manufacturing accessibility, efficiency, and performance prediction. The work addresses predicting mechanical properties of 3D-printed materials from process parameters and filament data using data-driven models,...

Intelligent Autonomous Systems

This research develops methods for physics-based system identification that enable autonomous systems to understand and adapt to real-world dynamics both in monitoring and control applications. The work addresses maritime safety...

AI for Engineering Design

Vision-Language Models for Engineering Simulations

Jessica Ezemba

Engineering simulation results, such as stress plots, thermal gradients, streamlines, and fluid flow patterns, contain domain-specific information that requires human expertise to interpret and draw meaningful insights from, creating a bottleneck in automated frameworks. Can AI models that excel at describing medical images and satellite photos learn to read finite element analysis? We built OpenSeeSimE, an automated benchmark with over 200,000 question-answer pairs spanning 10,000 simulations across structural, thermal, and fluid dynamics. State-of-the-art vision-language models achieve only 29-47% accuracy on engineering-specific questions. This performance gap reveals fundamental challenges in domain transfer: engineering simulations require understanding physical principles, recognizing patterns in scientific visualizations, and reasoning about multi-scale phenomena. Future work will use this benchmark to evaluate techniques for improving AI-automated workflows, such as agent-based design processes that decompose interpretation tasks, incorporate physics-based reasoning, and keep humans in the loop while automating repetitive analysis.
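
As a rough sketch of how such a benchmark is scored (the field names and questions below are hypothetical, not the actual OpenSeeSimE schema), evaluation reduces to comparing model answers against gold answers over the question-answer pairs:

```python
# Hypothetical benchmark-scoring sketch; the schema and questions are
# illustrative stand-ins, not the real OpenSeeSimE format.
qa_pairs = [
    {"question": "Where does the peak von Mises stress occur?", "gold": "B"},
    {"question": "Which face carries the applied thermal load?", "gold": "A"},
    {"question": "Is the flow separated at the trailing edge?", "gold": "C"},
]

def accuracy(model_answers, pairs):
    """Fraction of questions the model answers correctly."""
    correct = sum(ans == qa["gold"] for ans, qa in zip(model_answers, pairs))
    return correct / len(pairs)

# A model that misses the last question scores 2/3.
score = accuracy(["B", "A", "D"], qa_pairs)
```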

Neural Network Surrogate Modeling

Jessica Ezemba, Junghun Lee

Finite element analysis for complex geometries under uncertainty can take hundreds of hours of computation. Engineering designers, who must account for uncertain loading conditions, need a way to see how variations in geometry, material properties, and loading affect performance. These design spaces can contain thousands of variations, but running full simulations for each is prohibitively expensive. Neural network surrogates offer a solution: train once on a representative dataset, then predict outcomes in milliseconds. We built a system that uses three-dimensional unstructured graph meshes as inputs to test whether current state-of-the-art neural network surrogate models can understand both geometry and physics. Results show that engineering-grade accuracy for stochastic finite element problems remains out of reach. Future work will use this benchmark to evaluate techniques for making neural network surrogates efficient and reliable enough for real-world engineering workflows.
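
To illustrate the surrogate idea itself (not the lab's graph-mesh models), here is a minimal pure-Python sketch: a one-hidden-layer network trained on a handful of "expensive" evaluations of a hypothetical one-dimensional response, after which predictions are effectively free:

```python
import math
import random

random.seed(0)

# Hypothetical "expensive simulation": a scalar response of one load
# parameter. Real surrogates take mesh/geometry features as input.
def simulate(x):
    return math.sin(2 * x) + 0.5 * x

# One-hidden-layer tanh network, trained by per-sample gradient descent.
H = 16
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    y = sum(w2[j] * h[j] for j in range(H)) + b2
    return h, y

# Training set: a handful of solver runs covering x in [0, 2].
xs = [i / 10 for i in range(21)]
ys = [simulate(x) for x in xs]

lr = 0.01
for _ in range(2000):
    for x, t in zip(xs, ys):
        h, y = forward(x)
        err = y - t                      # dLoss/dy for squared error
        for j in range(H):
            grad_pre = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_pre * x
            b1[j] -= lr * grad_pre
        b2 -= lr * err

# The trained surrogate now answers "what if" queries without the solver.
_, prediction = forward(1.5)
```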

Bond Graph Deep Q-Learning for Multi-Domain Systems

Daniel Nguyen

Most reinforcement learning treats systems as black boxes—observe states, take actions, get rewards, repeat. This works for environments with clearly definable states, such as computer games, but fails for engineering design, which often spans multiple domains with unclear state and action spaces. For instance, a robotic manipulator isn't just mechanical linkages; it integrates motors (electrical), hydraulic actuators, sensors, and control systems. Defining custom state and action spaces for each design scenario quickly becomes cumbersome. Bond graphs provide a unified modeling language for these multi-domain systems, representing energy flow through mechanical, electrical, hydraulic, and thermal components using the same mathematical framework. By incorporating bond graph structure directly into deep Q-learning, we have developed Bond-DQN: an approach that leverages the universal graph representation for multi-domain engineering design. We demonstrate Bond-DQN on a realistic suspension vibration suppression task, achieving 20.5% better performance than traditional parametric optimization.
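
The learning loop behind Bond-DQN can be pictured with a much simpler stand-in. The sketch below uses tabular Q-learning (in place of the deep Q-network) on a hypothetical single-parameter damper-selection task; the cost values are made up for illustration:

```python
import random

random.seed(1)

# Toy stand-in: the "design" is one damper level (0..4) with a made-up
# vibration cost per level. Real Bond-DQN acts on bond-graph structure
# across physical domains; tabular Q-learning replaces the deep network.
LEVELS = 5
cost = [5.0, 3.0, 1.0, 2.5, 4.0]        # hypothetical vibration costs
Q = [[0.0] * 3 for _ in range(LEVELS)]  # actions: 0=down, 1=stay, 2=up

def step(state, action):
    nxt = max(0, min(LEVELS - 1, state + (action - 1)))
    return nxt, -cost[nxt]               # reward = negative vibration cost

alpha, gamma, eps = 0.2, 0.9, 0.2
for _ in range(500):                     # episodes
    s = random.randrange(LEVELS)
    for _ in range(10):                  # steps per episode
        if random.random() < eps:        # epsilon-greedy exploration
            a = random.randrange(3)
        else:
            a = max(range(3), key=lambda act: Q[s][act])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

The greedy policy learned this way walks to the lowest-cost design (level 2) and stays there, which is the discrete analogue of tuning a lumped parameter.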

Intelligent Manufacturing Systems

Accessible 3D-Printed Prosthetics

Junghun Lee

A well-fitted prosthetic limb shouldn't be a luxury. Traditional prosthetics require industrial manufacturing equipment costing hundreds of thousands of dollars, and expert prosthetists for custom fitting. We have developed a workflow that enables resource-constrained communities to design, analyze, and fabricate customized prosthetic devices using accessible 3D printing technology and smartphones. The system uses video captured on any smartphone to reconstruct accurate 3D geometry of residual limbs, applies parametric design algorithms to generate custom-fitted components, predicts mechanical performance using machine learning models trained on printing parameters, and produces fully 3D-printable designs requiring zero industrial components. This approach enables rapid iteration and designs adapted to specific activities, representing a new paradigm for democratized product design in resource-constrained settings.

3D-printed parts are highly sensitive to printing parameters. The same digital design printed at 50% versus 100% infill density, or at different layer heights, nozzle temperatures, and print speeds, produces parts with dramatically different mechanical properties—stiffness, strength, toughness. Running finite element analyses for every parameter combination is prohibitively time-consuming when designers need to explore hundreds of variations to optimize for conflicting objectives like strength versus weight. We have built comprehensive datasets mapping printing parameters to mechanical properties across over 10,000 test specimens and developed machine learning models that predict material behavior directly from print settings with 4-5% error. This capability transforms the design process: instead of printing multiple physical prototypes to find optimal parameters, designers can now explore parameter spaces digitally in minutes, capturing complex interactions between parameters and enabling the discovery of combinations that achieve specific performance targets.
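
As a toy illustration of the parameter-to-property mapping (with synthetic numbers, not the lab's specimen data), even an ordinary least-squares fit captures a near-linear infill-stiffness trend:

```python
# Illustrative only: synthetic (infill %, stiffness MPa) pairs standing
# in for measured specimen data, fit with ordinary least squares.
data = [(20, 620), (40, 980), (60, 1350), (80, 1700), (100, 2080)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

# Closed-form simple linear regression: stiffness ~ slope*infill + intercept.
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def predict_stiffness(infill_pct):
    """Predicted stiffness (MPa) from infill density alone."""
    return slope * infill_pct + intercept
```

The real models are multivariate and nonlinear (layer height, temperature, speed, and their interactions), but the workflow is the same: fit once on tested specimens, then query the model instead of printing or simulating.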

Temporal Abstraction for Production Line Control

Conan Guo

Manufacturing systems are typically controlled by monitoring every sensor at every timestep and adjusting parameters continuously. While this approach is intuitively sound, it is expensive and introduces noise from over-control, which can actually harm throughput. Moreover, when reinforcement learning (RL) is applied to control production line systems, such dense control paradigms become even more suboptimal, not only because of the computational overhead but also because of the unclear correlation between reward signals and the actions taken. Manufacturing systems exhibit natural rhythms and time constants, as products require time to be processed and transferred through the system. Effective RL-based control therefore requires intervening at the right moments rather than continuously controlling the system. We developed a method that identifies suitable control intervals from the structure and processing parameters of the production line, informing RL algorithms when to observe the system state, when to let the dynamics evolve, and when to apply control actions. This enables RL agents to acquire richer, more informative reward signals. Experimental results demonstrate improved throughput compared to baseline every-timestep control strategies, with learned policies approaching the heuristic optimum of the production line.
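
One simple way to picture interval selection from line structure (an assumed heuristic for illustration, not the lab's exact method): derive candidate intervals from station processing times, e.g. the bottleneck cycle for control decisions and the line's common cycle for full observation:

```python
from functools import reduce
from math import gcd

# Hypothetical three-station line; processing times in timesteps.
proc_times = [4, 6, 3]

def lcm(a, b):
    return a * b // gcd(a, b)

# The slowest station limits throughput; the line returns to the same
# phase every common multiple of all station cycles.
bottleneck = max(proc_times)            # 6 timesteps
common_cycle = reduce(lcm, proc_times)  # 12 timesteps

control_interval = bottleneck      # act once per bottleneck cycle
observe_interval = common_cycle    # observe when the line re-synchronizes
```

An RL agent acting every 6 timesteps instead of every timestep sees rewards that actually reflect its last action, which is the intuition behind the temporal-abstraction approach.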

Intelligent Autonomous Systems

Physics-Based Priors for Robot Learning

Daniel Nguyen

Standard deep reinforcement learning treats everything as patterns to be learned from scratch: which states lead to good outcomes, which actions work in which situations, how different parts of the system interact. But we already know physics—energy is conserved, momentum transfers according to Newton's laws, contact forces depend on material properties and geometry. We're developing approaches that incorporate physics-based priors directly into learning architectures: energy-based networks that ensure learned dynamics respect conservation laws, Lagrangian and Hamiltonian neural networks that parameterize physics correctly, bond graph representations for multi-domain systems, and symmetry-preserving architectures. The results are dramatic: robots learn stable gaits and manipulation strategies with 10-100× fewer samples than black-box approaches, policies generalize to novel terrains and objects not seen during training, and learned behaviors remain safe because physical constraints are built-in rather than discovered.
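
A minimal demonstration of why built-in physical structure matters: integrating a unit harmonic oscillator, plain explicit Euler steadily injects energy, while a symplectic (structure-preserving) variant keeps the energy bounded. The same reasoning motivates Hamiltonian-structured learned dynamics:

```python
# Unit-mass harmonic oscillator, H = (p^2 + q^2) / 2.
def energy(q, p):
    return 0.5 * (p * p + q * q)

dt, steps = 0.1, 1000
qe, pe = 1.0, 0.0   # explicit Euler state
qs, ps = 1.0, 0.0   # symplectic Euler state

for _ in range(steps):
    # Explicit Euler: both derivatives evaluated at the old state.
    qe, pe = qe + dt * pe, pe - dt * qe
    # Symplectic Euler: kick momentum first, then drift with new momentum.
    ps = ps - dt * qs
    qs = qs + dt * ps

drift_explicit = energy(qe, pe) - 0.5        # grows without bound
drift_symplectic = abs(energy(qs, ps) - 0.5)  # stays small and bounded
```

A learned dynamics model with the wrong structure behaves like the explicit integrator: errors compound until rollouts are useless for planning. Structure-preserving parameterizations keep long-horizon predictions physically plausible.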

Hybrid System Identification for Contact-Rich Reinforcement Learning

Daniel Nguyen

Robots operating in the real world don't inhabit the smooth, continuous state spaces that most learning algorithms assume. A quadruped walking doesn't just move continuously through joint angles and velocities—it experiences discrete events when feet make and break contact with the ground, suddenly changing system dynamics. These hybrid systems, with continuous motion punctuated by discrete mode switches, are notoriously difficult for standard machine learning. We're developing hybrid system identification methods that explicitly detect and model these discrete events alongside continuous dynamics. Our approach simultaneously identifies continuous dynamics within each mode (flight phase, single-support, double-support), the discrete events that trigger mode transitions (foot contact, liftoff), and the conditions under which transitions occur. This hybrid structure enables model-based reinforcement learning that's far more sample-efficient than model-free methods, achieving more stable locomotion across varied terrains while transferring across different gaits because it understands the physical structure underlying legged locomotion.
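
The first step, event detection, can be sketched on a synthetic one-dimensional hopper: label each sample flight or stance from the height signal, and read mode transitions off the label changes (a real pipeline would then fit separate dynamics inside each segment):

```python
# Synthetic hopper trajectory: one ballistic-looking arc per second,
# grounded the rest of the time. Illustrative only.
def height(t):
    phase = t % 1.0
    if phase < 0.5:
        return 4.0 * phase * (0.5 - phase)  # airborne arc
    return 0.0                               # stance on the ground

dt = 0.01
samples = [height(i * dt) for i in range(300)]  # 3 seconds of data

# Mode labeling: positive height means flight, zero means stance.
modes = ["flight" if h > 0.0 else "stance" for h in samples]

# A discrete event (liftoff or touchdown) occurs wherever the label flips.
events = [i * dt for i in range(1, len(modes)) if modes[i] != modes[i - 1]]
```

Three hops yield six events (three liftoffs, three touchdowns); the segments between events are exactly the per-mode datasets a hybrid identification method fits continuous dynamics to.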

Publications

Last updated: January 19, 2026

2026

  1. Neural Network Surrogate Modeling for Stochastic Finite Element Method Using Three-Dimensional Graph Representations: A Comparative Study
    Jessica Ezemba, Christopher McComb, and Conrad Tucker. Journal of Mechanical Design.

2025

  1. Accessible Digital Reconstruction and Mechanical Prediction of 3D-Printed Prosthetics
    Junghun Lee, Chukwuemeka Nkama, Hadiza Yusuf, Joseph Maina, Jean Ikuzwe, Jean Byiringiro, Moise Busogi, and Conrad Tucker. Journal of Mechanical Design.
  2. OpenSeeSimE: A Large-Scale Benchmark to Assess Vision-Language Model Question Answering Capabilities in Engineering Simulations
    Jessica Ezemba, Jason Pohl, Conrad Tucker, and Christopher McComb. Pre-print.
  3. Data-Driven 3D-Printed Material Data Prediction From Benchmark Specimens
    Junghun Lee and Conrad Tucker. Journal of Computing and Information Science in Engineering.
  4. Neural Network Surrogate Modeling for Stochastic FEM Using 3D Graph Representations: A Comparative Study
    Jessica Ezemba, Christopher McComb, and Conrad Tucker. International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.

2024

  1. Increasing accessibility of 3D-printed customized prosthetics in resource-constrained communities
    Junghun Lee, Chukwuemeka Nkama, Hadiza Yusuf, Joseph Maina, Jean Ikuzwe, Jean Byiringiro, Moise Busogi, and Conrad Tucker. International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.
  2. Inverse-Prediction of Material Property in Fused Filament Fabrication/Fused Deposition Modeling
    Junghun Lee, Hadiza Yusuf, and Conrad Tucker. International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.
  3. Bond-DQN: Deep Q-Learning of Lumped-Element Systems Design via Bond Graphs
    Daniel Nguyen and Conrad Tucker. International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.

2023

  1. Reducing the barriers to designing 3D-printable prosthetics in resource-constrained environments
    Junghun Lee, Andrew Chesang, Michael Gichane, Moise Busogi, Jean Byiringiro, and Conrad Tucker. International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.
  2. Parameter Extraction From Images Using Multilabel Supervised Learning
    Jessica Ezemba, James D. Cunningham, and Conrad S. Tucker. International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.

Join Us

If you're interested in research at the intersection of artificial intelligence and engineering design—whether developing AI methods for simulation interpretation, creating accessible manufacturing systems, or building adaptive robotic systems—we welcome passionate researchers who want to tackle fundamental challenges with real-world impact.

For prospective Ph.D. students:

If you're interested in joining our lab, please contact Professor Conrad Tucker at conradt@andrew.cmu.edu. Include your CV/resume, a brief research statement (1-2 pages) describing your interests and how they align with our work, and any relevant materials (papers, projects, code).

For current CMU undergraduates and Master's students:

If you're interested in specific research projects, reach out directly to the Ph.D. students or postdoc leading that work. You can find their contact information on this website or the CMU directory profiles.

Get in Touch

Interested in collaboration, joining the lab, or learning more about our work?