TECTON 3D

  • Abstract

    Current architectural visualization software based on virtual environments mainly supports 3D animation and automatic navigation. Addressing this limitation requires new techniques for multimodal interaction based on hand gestures, better suited to 3D modeling tasks than traditional devices, making mixed reality techniques more attractive. These techniques, combined with procedural modeling, address the lack of expressiveness and naturalness of conventional CAD operations. To this end, we want to create a new design framework combining stereoscopic viewing with modeling, simulation and reactive content. This will yield an environment for natural interaction, closer to the experience afforded by physical models, in which content can be edited and architectural designs inspected and validated. We will work on recognizing bi-manual gestures using 3D sensors, complemented by body-posture tracking with commodity sensors. Such interaction techniques, combined with procedural modeling primitives that extend shape grammars, will allow incremental changes that exploit the advantages of direct 3D manipulation in richer ways. Creating more natural forms of interaction for modeling tasks is an area our group has researched extensively, through participation in national and European projects with the automotive, mold and architecture industries to devise innovative interaction techniques, including multimodal interfaces and sketch-based modeling. Previous research has allowed us to propose new operators for different geometric representations in CAD and for implicit surfaces, which synergize with the expertise of researchers at the Faculty of Architecture in procedural techniques for architectural modeling. We have also worked extensively to make immersive and interactive virtual environments more natural to use and aware of the user's domain, skills and tasks.
Our activity has led to a series of international symposia on sketch-based interfaces and modeling, as well as articles in journals and conferences such as IEEE Virtual Reality and Eurographics. To validate the interaction and modeling techniques, we will work with architects to take advantage of procedural modeling to describe architectural scenes. Indeed, the research group at Faculdade de Arquitectura has considerable experience and an established publication track record in this area. One of the key features of this project lies in its interdisciplinary approach to these problems. Further, we will evaluate the results of this project with both seasoned and junior architects. From the task analysis with these users, we will identify the modeling activities that can take advantage of mixed reality, as well as the interaction metaphors best suited to the architectural field. A key contribution of the project is the use of virtual environments for architectural design exploration and validation. We will also explore techniques that combine visualization and animation with physical simulation to improve interaction beyond basic navigation and control. The interactive gestures for procedural modeling will be derived from a dedicated task analysis and integrated into a demonstrator architecture, which will be evaluated by real users. This will enable us to analyze the performance of mixed reality environments in modeling, the effectiveness of 3D gestures for interacting in such environments, and how procedural modeling operators can be edited incrementally and interactively to provide more powerful architectural modeling than traditional approaches based on constructive solid geometry and extrusion. Overcoming the current abstractions of shape grammars through interaction makes it possible to model complex 3D scenes and review buildings in an easier, more natural and incremental way.
Finally, this project aims to have a major impact on architects by encouraging the use of mixed reality for modeling, making it a medium of choice for analyzing and discussing virtual models more efficiently, beyond its current limited use of enhancing the perception of content through virtual exploration and simplified navigation.

  • Description

    Current architectural visualization software based on virtual environments (VEs) mainly supports 3D animation and automatic navigation. Despite the growing popularity of VEs, they still have a long way to go to replace, or even augment, desktop CAD systems in the modeling of 3D scenes. Addressing this problem requires new techniques for multimodal interaction based on hand gestures, better suited to 3D modeling tasks than traditional devices, making mixed reality techniques more attractive. These techniques, combined with procedural modeling, address the lack of expressiveness and naturalness of conventional CAD operations. To this end, we want to create a new design framework combining stereoscopic viewing with modeling, simulation and reactive content. We will work on recognizing bi-manual gestures using 3D sensors, complemented by body-posture tracking with commodity sensors. Such interaction techniques, combined with procedural modeling primitives that extend shape grammars, will allow incremental changes that exploit the advantages of direct 3D manipulation in richer ways.
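To make the shape-grammar foundation concrete, the following is a minimal split-grammar sketch in Python. All names and the facade rule are hypothetical illustrations, not project code: it only shows the kind of rule-based derivation (labeled shapes rewritten into sub-shapes until only terminals remain) that the project's interactive procedural operators would extend.

```python
# Minimal split-grammar sketch (illustrative; names are hypothetical).
# A rule rewrites a labeled region into labeled sub-regions, in the
# spirit of facade split grammars.
from dataclasses import dataclass

@dataclass
class Shape:
    label: str
    x: float
    y: float
    w: float
    h: float

def split_x(shape, parts):
    """Split a shape horizontally into labeled parts with relative widths."""
    total = sum(rel for _, rel in parts)
    out, x = [], shape.x
    for label, rel in parts:
        w = shape.w * rel / total
        out.append(Shape(label, x, shape.y, w, shape.h))
        x += w
    return out

# Grammar: a "facade" is split into three equal "bay" terminals.
rules = {
    "facade": lambda s: split_x(s, [("bay", 1), ("bay", 1), ("bay", 1)]),
}

def derive(shapes):
    """Apply rules repeatedly until only terminal shapes remain."""
    changed = True
    while changed:
        changed = False
        next_shapes = []
        for s in shapes:
            if s.label in rules:
                next_shapes.extend(rules[s.label](s))
                changed = True
            else:
                next_shapes.append(s)
        shapes = next_shapes
    return shapes

result = derive([Shape("facade", 0.0, 0.0, 9.0, 3.0)])
print([(s.label, s.x, s.w) for s in result])
```

Editing a rule (e.g. changing the number of bays) and re-running the derivation regenerates the whole model consistently; the incremental, gesture-driven editing of such rules is what distinguishes this approach from one-shot CSG or extrusion operations.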

  • Team Members