GANesis

Collaboration with Mert Toka, GAN + Virtual Reality, 2019

Creating and manipulating 3D assets for immersive media presents both creative and technical challenges. The workflow from 3D modeling software to Virtual Reality (VR) worlds scales poorly and demands proficiency in specialized tools. Although detailed, the traditional vertex-based modeling pipeline remains slow and unintuitive when the target is a well-defined everyday object. We propose an alternative modeling schema that interfaces 3D Generative Adversarial Networks (3D-GANs) with head-mounted displays (HMDs). The tool lets users generate and manipulate voxelated 3D models from pre-trained machine learning (ML) models directly in the HMD. Starting from a specific category in the dataset, we generate a palette of objects as base 3D models and employ latent-space operations for easy selection of a subsection of the generated asset. We also provide basic transformation, sculpting, and painting tools for familiar placement and editing in VR.
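As an illustrative sketch only (not the project's actual implementation), the snippet below shows the kind of latent-space workflow described above in PyTorch: sample a palette of latent codes, decode them into voxel objects with a pre-trained 3D-GAN generator, and interpolate between two selections. The Generator3D class, latent size, and voxel resolution are all assumptions for illustration.

# Minimal sketch of the latent-space workflow: palette sampling plus
# interpolation between two selected objects. Generator3D is a stand-in
# architecture, not the network used in the project; all sizes are assumed.
import torch
import torch.nn as nn

LATENT_DIM = 200   # assumed latent vector size

class Generator3D(nn.Module):
    """Placeholder generator: latent vector -> 64^3 occupancy grid."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(LATENT_DIM, 256, 4, 1, 0), nn.ReLU(),
            nn.ConvTranspose3d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        # reshape the latent vector into a 1x1x1 volume before upsampling
        return self.net(z.view(-1, LATENT_DIM, 1, 1, 1))

def sample_palette(gen, n=8):
    """Draw n random latent codes and decode them into voxel objects."""
    z = torch.randn(n, LATENT_DIM)
    with torch.no_grad():
        voxels = gen(z) > 0.5            # threshold occupancy probabilities
    return z, voxels

def interpolate(gen, z_a, z_b, steps=5):
    """Walk the latent space between two selected objects."""
    t = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    with torch.no_grad():
        return gen((1 - t) * z_a + t * z_b) > 0.5

gen = Generator3D()                       # in practice: load pre-trained weights
codes, palette = sample_palette(gen)      # palette of base objects shown in VR
blend = interpolate(gen, codes[0], codes[1])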


Video Preview


Photo Documentation
