Sketch that Scene for Me:
Creating Virtual Worlds by Freehand Drawing
Ellen Yi-Luen Do
Design Machine Group, Department of Architecture, Box 355720
University of Washington, Seattle, WA 98195-5720
With the Web, people can now view virtual three-dimensional worlds and explore virtual space. Increasingly, novice users are interested in creating 3D Web sites. The Virtual Reality Modeling Language (VRML) gained ISO status in 1997, although it is being supplanted by the compatible Java3D API [1], and alternative 3D Web technologies compete [2]. Viewing VRML scenes is relatively straightforward on most hardware platforms and browsers, but currently there are only two ways to create 3D virtual scenes. One is to code the scene directly in VRML. The other is to use existing CAD and modeling software and save the world in VRML format, or convert to VRML from some other format. Both methods are time consuming and cumbersome, and both have steep learning curves.
Pen-based user interfaces, on the other hand, offer many people an easy and intuitive method for graphics input. Not only are people familiar with the look and feel of paper and pencil, but novice users also find it less intimidating to draw what they want, where they want it, than to work through a complicated tool palette and pull-down menus. Architects and designers use sketches as a primary tool to generate design ideas and to explore alternatives, and numerous computer-based interfaces have played on the concept of "sketch". However, we restrict the notion of sketch to freehand drawing, which we believe helps people to think, to envision, and to recognize properties of the objects with which they are working.
SKETCH [3] employs a pen interface to create three-dimensional models, but it uses a simple language of gestures to control a three-dimensional modeler; it does not attempt to interpret freehand drawings. In contrast, our support of 3D world creation by freehand drawing depends on users' traditional understanding of a floor plan representation. Igarashi et al. [4] used a pen interface to drive browsing in a 3D world, projecting the user's marks onto the ground plane in the virtual world. Our Sketch-3D project extends this approach, investigating an interface that allows direct interpretation of the drawing marks (what you draw is what you get) and serves as a rapid prototyping tool for creating 3D virtual scenes.
Instant 3D world from a napkin sketch
Sketch-3D is a pen-based interface that recognizes simple geometric shapes in a two-dimensional view. To create an architectural space, you draw lines and circles in a simple "cocktail napkin" sketch to indicate the placement of walls and columns. You select different colors for the elements you draw, and the 3D world is created accordingly. Figure 1 below shows a screen snapshot of this simple interaction.
Figure 1: A floor plan drawn with Sketch-3D. Lines, circles, and spirals are translated to create a 3D VRML world with walls, columns, and view points, shown in a Netscape browser.
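The recognition step behind this interaction can be illustrated with a rough sketch. The heuristics below (a straightness ratio to detect wall lines, total turning angle to separate circles from spirals) and all thresholds are our illustrative assumptions, not the actual Electronic Cocktail Napkin recognizer, which is written in Lisp:

```python
import math

def classify_stroke(points):
    """Classify a freehand pen stroke as a wall (line), column (circle),
    or view point (spiral). Heuristic sketch only; thresholds assumed.

    points: list of (x, y) samples along the stroke.
    """
    # Path length versus straight-line distance between endpoints:
    # a nearly straight stroke is read as a wall segment.
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    if chord > 0.8 * path:
        return "wall"
    # Otherwise sum the signed turning angle along the stroke:
    # about one full revolution suggests a circle, more suggests a spiral.
    turning = 0.0
    for i in range(1, len(points) - 1):
        a1 = math.atan2(points[i][1] - points[i - 1][1],
                        points[i][0] - points[i - 1][0])
        a2 = math.atan2(points[i + 1][1] - points[i][1],
                        points[i + 1][0] - points[i][0])
        d = a2 - a1
        while d > math.pi:
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        turning += d
    if abs(turning) < 1.5 * 2 * math.pi:
        return "column"      # roughly one loop: a circle
    return "viewpoint"       # multiple loops: a spiral
```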
You can also use Sketch-3D to create VRML representations of furniture layout designs. For example, a sketch of four rectangles surrounding a circle would generate a virtual scene of a dining table set in a browser. These 3D models are VRML translations of symbol library elements from the FormZ modeling software. Figure 2 shows how a quick layout sketch enables a virtual walkthrough of a furniture arrangement.
Figure 2: Sketch configurations of furniture layout (left) produced a VRML world (right).
After creating the world, you can use a VRML-enabled browser to navigate it and explore different paths and view points. For example, Figure 3 shows how a walk through the world is reflected as a series of circles on the original floor plan sketch, indicating your path through the world. This two-way communication lets you quickly see the 3D world you've created. It also provides a "you are here" map that shows where you are on the sketched floor plan as you move through the space. Because the sample rate is constant, the spacing of the circles also records and displays your speed and acceleration through the world.
Figure 3: Your path through the VRML world (left) is reflected on the original sketched floor plan (right).
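Because positions are sampled at a constant rate, speed and acceleration fall out of simple finite differences on the recorded path. A minimal sketch (the function name and units are our assumptions; the article does not give the actual computation):

```python
import math

def motion_profile(positions, dt):
    """Derive speed and acceleration from view positions sampled at a
    constant rate, as implied by the evenly timed path circles.

    positions: list of (x, z) ground-plane coordinates, one per sample.
    dt: constant sampling interval in seconds.
    """
    # Speed over each interval: distance traveled divided by dt.
    speeds = [math.dist(positions[i], positions[i + 1]) / dt
              for i in range(len(positions) - 1)]
    # Acceleration: change in speed between consecutive intervals.
    accels = [(speeds[i + 1] - speeds[i]) / dt
              for i in range(len(speeds) - 1)]
    return speeds, accels
```

Widely spaced circles on the sketch thus correspond to high speed, and changing spacing corresponds to acceleration.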
The floor plan sketching part of the Sketch-3D program is built in Lisp, using our Electronic Cocktail Napkin drawing platform [5]. The program recognizes simple floor plan symbols (lines for walls, circles for columns, spirals for view points), and a translator module outputs a standard VRML file that any VRML-enabled browser can read. The program assumes that the world is level and that all sketched floor plan elements share the same height; it simply extrudes the walls and columns to a default height. The program also takes recognized furniture configurations (lamp, TV, table and chairs) and produces 3D objects with standard dimensions.
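To make the extrusion step concrete, here is a rough sketch of the kind of VRML node the translator might emit for one recognized wall line. The function name, default height, and wall thickness are our assumptions (the actual translator is written in Lisp), but the output follows standard VRML 97 syntax:

```python
import math

WALL_HEIGHT = 2.4     # assumed default extrusion height, in meters
WALL_THICKNESS = 0.1  # assumed wall thickness

def wall_to_vrml(x1, z1, x2, z2):
    """Extrude a wall line sketched on the floor plan (ground is the
    x-z plane in VRML, y is up) into a VRML Box at the default height."""
    length = math.hypot(x2 - x1, z2 - z1)
    cx, cz = (x1 + x2) / 2, (z1 + z2) / 2          # wall midpoint
    angle = math.atan2(z2 - z1, x2 - x1)           # heading in the plan
    return (f"Transform {{\n"
            f"  translation {cx:.2f} {WALL_HEIGHT / 2:.2f} {cz:.2f}\n"
            f"  rotation 0 1 0 {-angle:.4f}\n"
            f"  children Shape {{\n"
            f"    appearance Appearance {{ material Material {{ }} }}\n"
            f"    geometry Box {{ size {length:.2f} {WALL_HEIGHT} {WALL_THICKNESS} }}\n"
            f"  }}\n"
            f"}}\n")
```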
The VRML file that Sketch-3D writes to describe the world's geometry is augmented with a touch sensor and a proximity sensor. These sensors are used on the client side to communicate with a Java applet running in the browser: the proximity sensor prompts the applet to continually report the browser's view location and orientation, and the touch sensor reports when the user clicks on an object in the world. The applet communicates with a Perl script running on a server, which in turn relays the view location and orientation back to the sketching front end. This TCP/IP-based architecture separates world creation and path monitoring from browsing, although they can also run in the same environment.
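The article does not specify the wire protocol between the applet, the Perl relay, and the sketching front end, but the front end's job on receipt is easy to picture: map each reported 3D view position back onto the 2D floor plan for the "you are here" marker. A hedged sketch, where the "x y z yaw" message format and the plan scale are purely illustrative assumptions:

```python
def parse_view_report(line, scale=10.0):
    """Parse one view report relayed from the browser-side applet and
    map it to a marker on the floor plan sketch. The message format
    ("x y z yaw", whitespace-separated) and the scale factor between
    world meters and plan units are assumptions for illustration.
    """
    x, y, z, yaw = (float(field) for field in line.split())
    # The VRML ground plane is x-z; drop the height y and scale
    # the remaining coordinates into floor plan units.
    return {"plan_x": x * scale, "plan_y": z * scale, "heading": yaw}
```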
Current and future work
We are working on several extensions to the floor plan sketcher that address obvious needs in our prototype. Most immediately, we are implementing a way for the user to indicate the height of the extruded walls and columns by sketching an isometric projection. We're also exploring letting a user sketch walls directly into the browser as they would appear in the VRML world, augmenting the floor plan sketching facility that we report here. In addition, we're interested in using freehand sketching to construct fully three-dimensional models. To that end, a related project in our group has implemented a 3D sketcher that uses the constraint propagation method of Huffman, Clowes, and Waltz from early machine vision to infer three-dimensional solid models from two-dimensional sketches [6]. We're also implementing a Sketch-3D module that displays the browsing user's field of view (Benedikt's Isovist [7]) on the floor plan sketch.
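At its core, the Huffman-Clowes-Waltz method is constraint propagation over line-junction labels: each junction's candidate labelings are pruned until only interpretations consistent with its neighbors survive. The generic arc-consistency sketch below illustrates the idea on an abstract network; the real labeler works over the full trihedral junction catalog, which we omit here:

```python
def waltz_filter(domains, constraints):
    """Prune label domains by constraint propagation, in the spirit of
    Waltz filtering. Generic arc consistency on a toy network.

    domains: {junction: set of candidate labels}, mutated in place.
    constraints: {(j1, j2): set of allowed (label1, label2) pairs}.
    """
    queue = list(constraints)
    while queue:
        j1, j2 = queue.pop()
        allowed = constraints[(j1, j2)]
        # A label at j1 survives only if some label at j2 supports it.
        pruned = {a for a in domains[j1]
                  if not any((a, b) in allowed for b in domains[j2])}
        if pruned:
            domains[j1] -= pruned
            # j1's domain shrank, so arcs into j1 must be rechecked.
            queue.extend(arc for arc in constraints if arc[1] == j1)
    return domains
```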
This research was supported in part by the National Science Foundation under Grant No. IIS-96-19856. The views contained in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. We thank Thomas Jung and Dongqiu Qian for help with the Java-Perl connection.
[1] http://www.vrml.org/WorkingGroups/vrml-java3d/, http://www.vrml.org/x3d.html
[2] http://www.alice.org, http://www.metacreations.com/
[3] Zeleznik, R., Herndon, K. P., et al. (1996). SKETCH: An Interface for Sketching 3D Scenes. SIGGRAPH '96 Conference Proceedings: 163-170.
[4] Igarashi, T., Kadobayashi, R., et al. (1998). Path Drawing for 3D Walkthrough. UIST '98, San Francisco, ACM: 173-174.
[5] Gross, M. D. and Do, E. Y.-L. (1996). Demonstrating the Electronic Cocktail Napkin. ACM Human Factors in Computing, CHI '96 Conference Companion, Vancouver, Addison Wesley: 5-6.
[6] Schweikardt, E. and Gross, M. D. (2000). Digital Clay: Deriving Digital Models from Freehand Sketches. Automation in Construction, V 9 (1): 107-115.
[7] Benedikt, M. L. (1979). To Take Hold of Space: Isovist and Isovist Fields. Environment and Planning B, V 6: 47-65.