Faraz Faruqi

PhD Candidate in Computer Science, MIT CSAIL
Student Researcher, Google XR


I am a PhD Candidate in Computer Science at MIT CSAIL, where I work in Prof. Stefanie Mueller's HCIE Lab on generative AI systems for 3D design, fabrication, and extended reality (XR). My research focuses on bridging the gap between visually compelling AI-generated models and physically functional objects that can be manufactured and used in the real world.

I am currently a Student Researcher at Google XR Labs, working with Ruofei Du and David Kim on 3D generative AI and XR systems. Previously, I was an intern at Autodesk Research, where I worked with Justin Matejka and Sean Liu on compositional modeling with 3D generative AI. I have also collaborated closely with industry research labs to translate academic research into deployable tools. My work has been published at top HCI and fabrication venues (CHI, UIST, SCF) and featured by MIT News.

Broadly, I am interested in Physical Generative AI: how we can design AI systems that understand not just appearance but also geometry, materials, forces, and use cases, and how these systems can augment human creativity rather than replace it.

I develop human-in-the-loop generative systems that integrate physical constraints such as structural integrity, mechanical behavior, and tactile properties into 3D generative models. On the system and HCI side, I explore how users can interactively guide, repair, and refine AI-generated geometry using natural language and spatial interaction, enabling non-experts to create functional designs for real-world use.

News

Jan 2026 Serving as Associate Chair for DIS 2026 - I am serving as an Associate Chair for the Artifacts and Systems subcommittee at ACM DIS 2026, contributing to the review and evaluation of research on interactive systems and tools.
Jan 2026 MechStyle covered in MIT News - Our work on MechStyle was covered in MIT News. Check out the article here.
Jan 2026 Joined Google XR Labs as a Student Researcher - I am excited to join the Google XR Labs team as a Student Researcher, where I will be working with Ruofei Du and David Kim on the future of 3D generative AI for XR.
Nov 2025 Presented MechStyle at SCF 2025 - I presented our work on MechStyle, which integrates mechanical simulation into generative 3D workflows to create objects that are both visually compelling and physically functional.
Nov 2025 Presented WireBend-Kit at SCF 2025 - I presented WireBend-Kit, a computational design and fabrication toolkit for wirebending custom 3D wireframe structures.
Oct 2025 Tactile generative art exhibition at the Henry Art Gallery - Presented a public exhibition with Prof. Martin Nisser of the University of Washington featuring 3D-printed tactile artworks that encode texture and depth, transforming visual images into tactile reliefs using generative AI.

Research Areas

Some recent research themes are highlighted below. For a complete and up-to-date list of papers, please see my Google Scholar page.

Physical Generative AI for Fabrication

I design generative AI systems that create physically viable 3D objects rather than purely visual geometry. My work integrates fabrication constraints—such as structural integrity, material behavior, thickness, and manufacturability—directly into generative pipelines. Through systems like Style2Fab, MechStyle, and TactStyle, I explore how AI can reason about function alongside form, enabling non-experts to generate objects that can be reliably fabricated and used in the real world.

Human-in-the-Loop Generative Systems

I study how humans can effectively steer, repair, and collaborate with generative AI systems. My research focuses on building interactive workflows that combine natural language, spatial input, and selective control to guide AI outputs toward user intent. Rather than one-shot prompting, I design mixed-initiative systems that allow users to iteratively refine generative results—particularly when correcting functional or fabrication-related errors.

Multimodal & In-Situ 3D Design in XR

I explore how multimodal interaction—combining natural language, spatial input, gestures, and visual context—can enable in-situ 3D design within extended reality (XR) environments. My research investigates how users can create, inspect, and refine generative 3D models directly in context, rather than through detached desktop workflows. By integrating generative AI with XR interfaces, I aim to support more intuitive, iterative, and embodied design processes that allow creators to reason about form, function, and physical constraints while designing in place.