GANesis
Collaboration with Mert Toka, GAN + Virtual Reality, 2019
3D asset creation and manipulation for immersive media presents both creative and technical challenges. The workflow from 3D modeling software to Virtual Reality (VR) worlds raises issues of scalability and demands proficiency in specialized software. Although capable of fine detail, the traditional vertex-based modeling pipeline remains slow and unintuitive when the target is a well-defined everyday object. We propose an alternative modeling schema that interfaces 3D Generative Adversarial Networks (3D-GANs) with head-mounted displays (HMDs). This tool allows users to generate and manipulate voxelated 3D models from pre-trained machine learning (ML) models directly in the HMD. Starting from a chosen category within the dataset, we generate a palette of objects as base 3D models and employ latent-space operations for easy selection of a subsection of the generated asset. We also provide basic transformation, sculpting, and painting tools for familiar placement and editing in VR.
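The latent-space operations themselves are not spelled out above; the sketch below is a minimal Python illustration of one common choice, spherical interpolation (slerp) between two sampled latent codes, assuming the 200-dimensional latent space of the original 3D-GAN paper. The slerp helper and palette size are illustrative, not the project's actual code.

```python
import numpy as np

def slerp(z0, z1, t):
    # Spherical interpolation between two latent vectors: walks along the
    # great circle between them, which tends to stay on the high-density
    # shell of a Gaussian latent prior.
    z0_n, z1_n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors nearly parallel
    return (np.sin((1.0 - t) * omega) * z0 +
            np.sin(t * omega) * z1) / np.sin(omega)

# Build a palette of in-between objects from two anchor codes.
rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=200), rng.normal(size=200)  # 200-d latent (assumed)
palette = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
```

Each vector in the palette is fed to the generator to produce one candidate object, so picking a palette entry amounts to picking a point along this latent path.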
Features
TensorFlow implementation of 3D Generative Adversarial Modeling (3D-GAN) to train a model capable of producing 3D point-cloud objects at 64x64x64 resolution (see the generator sketch below).
Trained model loaded in Unity using TensorFlowSharp for real-time object generation (see the graph-export sketch below).
Generated voxel data processed in a compute shader and output as a RenderTexture feeding Unity 2019's VFX Graph for dynamic GPU-based rendering and animation (see the voxel-to-point sketch below).
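On the training side, a minimal sketch of the generator half of a 3D-GAN, following the layer sizes of Wu et al.'s 3D-GAN paper (200-d latent upsampled through transposed 3D convolutions to a 64x64x64 occupancy grid). It is written against today's tf.keras API rather than the project's 2019-era TensorFlow code, so treat it as an architectural outline, not the actual implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=200):
    # 200-d latent -> 4^3 feature grid, then doubled at each transposed
    # convolution: 4 -> 8 -> 16 -> 32 -> 64. The final sigmoid yields a
    # per-voxel occupancy probability.
    return tf.keras.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(4 * 4 * 4 * 512),
        layers.Reshape((4, 4, 4, 512)),
        layers.Conv3DTranspose(256, 4, strides=2, padding="same", activation="relu"),
        layers.Conv3DTranspose(128, 4, strides=2, padding="same", activation="relu"),
        layers.Conv3DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        layers.Conv3DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
    ])

gen = build_generator()
voxels = gen(tf.random.normal([1, 200]))  # shape (1, 64, 64, 64, 1)
```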
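TensorFlowSharp consumes a frozen GraphDef (.pb) with the trained weights baked in as constants, so the Python side has to export one. Below is a runnable sketch of that export step using a tiny stand-in graph; the op names z and generator_out, the file name, and the stand-in network are all illustrative, not the project's actual names.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

graph = tf.Graph()
with graph.as_default():
    # Tiny stand-in for the trained generator; the real graph would be
    # restored from a checkpoint instead of built from scratch.
    z = tf.placeholder(tf.float32, [None, 200], name="z")
    w = tf.Variable(tf.random_normal([200, 16 * 16 * 16]))
    out = tf.reshape(tf.sigmoid(tf.matmul(z, w)), [-1, 16, 16, 16],
                     name="generator_out")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Bake variables into constants so the graph is self-contained.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ["generator_out"])

with tf.io.gfile.GFile("3dgan_frozen.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

On the Unity side, TensorFlowSharp can import these bytes into a TFGraph and run a TFSession whenever a new object is requested, feeding the latent vector into the z input and fetching the generator output.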
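The compute-shader pass itself is HLSL inside Unity, so it cannot be shown in Python directly; the numpy sketch below only illustrates the transformation it performs, thresholding the generated occupancy grid into per-point positions of the kind the VFX Graph reads back from a RenderTexture. The 0.5 threshold and the unit-cube normalization are assumptions.

```python
import numpy as np

def voxels_to_points(grid, threshold=0.5):
    # Keep cells whose occupancy exceeds the threshold...
    ids = np.argwhere(grid > threshold)                # (N, 3) integer indices
    # ...center each cell and normalize into [-0.5, 0.5) model space...
    positions = (ids + 0.5) / grid.shape[0] - 0.5
    # ...and carry the occupancy value along, e.g. to drive particle size.
    densities = grid[tuple(ids.T)]
    return positions.astype(np.float32), densities.astype(np.float32)

grid = np.random.rand(64, 64, 64).astype(np.float32)   # stand-in for generator output
pts, dens = voxels_to_points(grid)
```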