The SpatialFlow project began with an initial contextual review of the history of computer and programming interfaces, including visual programming. This surfaced keywords that guided my research, such as spatial computing, mixed reality, speculative design, iterative design, and skeuomorphism. I also reviewed current industry-standard 2D, VR, and AR interfaces, noting critiques of each; notable examples include Rhino's Grasshopper plugin, Unreal Engine's material node editor, and the Oculus Rift's menu interface. Through discussions with students of various programming skill levels, I identified one area with clear room for improvement: visual programming for creating 3D structures or models, as found in architectural software such as Rhino.
I began prototyping and user testing on paper, then moved to 3D models in Blender, a 360-degree panoramic render, and finally Unity with an Oculus Rift integration that allows for interaction. At the moment, users are placed inside an office room in VR, where they can move connected node blocks in the air and press buttons to change the holographic display grid in the center of the room. Almost all of this interaction has been programmed in Unity. The current prototype does have limitations: the user cannot yet open new 3D menus, manipulate the building blocks to create a different output, or collaborate with another person.
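To give a sense of how the block-moving interaction can be wired up in Unity, the following is a minimal sketch of a grab script written in C#. It assumes the Oculus Integration package is imported (for OVRInput and the OVRCameraRig hand anchors); the class name, fields, and grab radius are illustrative assumptions rather than the project's actual code.

    using UnityEngine;

    // Illustrative sketch: lets a node block follow the user's hand while the
    // hand trigger is held, and leaves it floating in place when released.
    // Assumes the Oculus Integration package (OVRInput, OVRCameraRig) is present.
    public class NodeBlockGrab : MonoBehaviour
    {
        public Transform handAnchor;                 // e.g. the RightHandAnchor from an OVRCameraRig
        public OVRInput.Controller controller = OVRInput.Controller.RTouch;
        public float grabRadius = 0.15f;             // how close the hand must be, in metres (assumed value)

        private bool isHeld;

        void Update()
        {
            bool triggerHeld = OVRInput.Get(OVRInput.Button.PrimaryHandTrigger, controller);
            float distance = Vector3.Distance(handAnchor.position, transform.position);

            if (!isHeld && triggerHeld && distance <= grabRadius)
            {
                // Start following the hand by parenting the block to the hand anchor.
                isHeld = true;
                transform.SetParent(handAnchor, worldPositionStays: true);
            }
            else if (isHeld && !triggerHeld)
            {
                // Release the block where it currently floats in the air.
                isHeld = false;
                transform.SetParent(null, worldPositionStays: true);
            }
        }
    }

A script along these lines would be attached to each node block, with connections between blocks updated separately (for example, by redrawing line renderers between connected blocks each frame).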
Plans to Develop the Project:
I plan to create more visualizations so that users trying the VR prototype get a better sense of the speculative workspace. I will create renders in which creatives perform different tasks holographically, such as music composition, sculpting, animation, and level and environment design. The SpatialFlow visual programming interface could be one aspect of this room, shown being used to create game logic or 3D structures and shaders. For the VR prototype, I am currently looking into tools that allow multiple users to work in the same virtual room. I plan to dedicate a short amount of time to reaching a very basic level of functionality; if it turns out to be overly cumbersome, I will prioritize the depth of single-user interactions instead. Ideally, users will be able to actually 'program' an output using the 3D objects. If I cannot reach that level of functionality, the VR prototype already allows users to move blocks around and manipulate the display grid, so I could instead have users load many pre-created variations and scenarios of the user interface and explore them through these more basic interactions.
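As a rough sketch of that fallback, the snippet below cycles through a set of pre-built interface variations in Unity. The ScenarioLoader name, the prefab array, and the button used to cycle are hypothetical placeholders under the same Oculus Integration assumption as above, not the project's actual implementation.

    using UnityEngine;

    // Illustrative sketch: swaps between pre-created interface variations so that
    // testers can explore different layouts with basic interactions.
    // Class, field, and button choices are assumptions for illustration only.
    public class ScenarioLoader : MonoBehaviour
    {
        public GameObject[] scenarioPrefabs;   // pre-built variations of the SpatialFlow interface
        public Transform spawnPoint;           // e.g. the centre of the holographic display grid

        private GameObject currentScenario;
        private int index = -1;

        void Update()
        {
            // Cycle to the next variation when the user presses the (assumed) A button.
            if (OVRInput.GetDown(OVRInput.Button.One))
            {
                LoadNext();
            }
        }

        public void LoadNext()
        {
            if (scenarioPrefabs == null || scenarioPrefabs.Length == 0) return;

            // Remove the variation currently on display, if any.
            if (currentScenario != null)
            {
                Destroy(currentScenario);
            }

            // Instantiate the next pre-created variation at the display position.
            index = (index + 1) % scenarioPrefabs.Length;
            currentScenario = Instantiate(scenarioPrefabs[index], spawnPoint.position, spawnPoint.rotation);
        }
    }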