
 

form+zweck 22:
Tangibility of the Digital - Die Fühlbarkeit des Digitalen

 

Alexander Kulik | Antje Huckauf | Bernd Fröhlich

You Do What Happens

 

Although we use our hands when we work at our laptops, the consequences of these manual activities receive only visual feedback - the rarely used option of adding some kind of sound to that visual feedback is one we mostly forgo. The monopoly of visual feedback has much to do with the fact that operating a computer is regarded primarily as a software problem, one to be solved conveniently on the monitor screen. There is even a key formula for this, which has served interface designers for years as the motto of their work: "What You See Is What You Get" - WYSIWYG. Bernd Fröhlich suggests supplementing the WYSIWYG formula with an action-oriented approach: "You Do What Happens".
Bernd Fröhlich is one of the most important researchers in the field of three-dimensional interaction in virtual reality environments - and not only in Germany. He has developed several interaction devices. His "Cubic Mouse" (a new type of input device that places a three-dimensional coordinate system in the user's hand) was and remains groundbreaking. We asked Bernd Fröhlich to present the ways of thinking and working that led to the Cubic Mouse and to report on the new projects that have since emerged under his leadership and within his team.
At the end of his article Fröhlich speaks of brain-computer interfaces, which would do without physical feedback altogether. Will the insights and experiences only just acquired become outmoded, along with tangibility and graspability, through neurophysiological interfaces? Evidently, we are already standing at a new threshold and thus facing a new discussion. Debates on technological feasibility and neurophysiological compatibility will have to address the question of whether human sensorimotor activity embodies a unique, irreplaceable value and whether the old notions of individual autonomy can still be maintained with regard to the possibilities that interaction with digital systems holds in readiness for us.

 

 

1. Multi-Sensor Input for Improved Control and Tangibility


There is no doubt that, for most users, pointer-based interaction with windows, icons, mouse and pointer is much easier to use than command line input. The introduction of this interface paradigm by Xerox in 1973 fostered the widespread use of computers in nearly all areas of work and leisure. Almost all applications rely on a single device for motion input: a 2D pointing device. It is used to push buttons, select objects and draw paths on plain surfaces. Many direct interactions like selecting, scaling, drawing or splitting may be performed that way. For functionalities that do not match these pointing and drawing gestures, widgets are employed: virtual tools controlled with the mouse pointer. It seems as if the mouse pointer has to serve for everything. Since the software interface can map everything to pointer input, this procedure somehow works. However, users have to learn the transformations of their input actions, and the more complex these transformations are, the harder they are to learn and to perform. This becomes obvious when one imagines how awkward life would be if our motor abilities were restricted to gesturing with a single index finger, as they are in current computer environments.

The current trend towards multi-touch interfaces at least acknowledges that humans tend to act with more than one finger at a time, but it still only scratches the surface of the immersive experience that virtual environments will offer in future computer applications. What about grasping, turning and pushing, or throwing and jumping, when interacting with computer applications? The success of the Wii, Nintendo's current game controller, shows that users long for a more engaging computer experience. Professional applications in particular are still far from providing sufficiently versatile user interfaces.

 

 

2. Task-Driven Design of Interaction Techniques

When designing tools for the real world, there are physical constraints and a wealth of established design rules that cannot be ignored. The resulting solutions are mostly easy to understand because they draw on users' everyday experience in many other fields. In designing computer interfaces, by contrast, everything is possible. Unfortunately, software designers often develop products according to their existing toolbox (widgets, icons, menus, ...), which rarely matches the users' capabilities or the task requirements. Moreover, such interfaces rely solely on visual feedback. Nobody wants to look at tools while using them. Instead, tools need to be operated without paying much attention to them. For control and efficiency, the focus has to be kept on the piece of work and the respective action. Widgets, however, demand the user's attention themselves, just to keep track of their current interaction state. This is not only cumbersome but also binds cognitive capacities that could otherwise be applied to the task. To ameliorate the situation, we need to design not only software interfaces but also sensor hardware that fits the specific requirements of spatial interaction. The visual paradigm "What you see is what you get" (WYSIWYG) should become an action-based approach following the idea: "What you do is what happens".

Designing human-computer interfaces in this way requires knowledge from various disciplines, including psychology, software engineering, product design and many others. The challenge is to find the best solution for a certain task instead of developing a workaround that enables the desired functionality within a given infrastructure.
As an overview, our research proceeds in five main steps:
• Observing cognitive, perceptual, and motor performance in humans interacting within the physical world,
• Modelling the cognitive, perceptual, and motor demands of a certain task in order to create interaction metaphors,
• Developing sensors, low-level interfaces and device drivers to record human actions as input for computer applications,
• Designing input devices (a combination of sensors assembled in an ergonomic way),
• Implementing the designed interaction systems in prototypical applications in order to involve users in the development process,
• Examining usability and adjusting the design practices.

Since these aspects are interrelated, the whole design process is iterative. In the following sections, we briefly address the aforementioned topics and exemplify them by two of our input device designs and their design rationales.

 

 

3. 3D Interaction

Flat surfaces like panels, boards and desktops offer many advantages, such as a clear overview and physical support for objects, tools and manipulators. The common layout for many working environments is therefore two-dimensional. Engineers, artists and craftspeople, however, often deal with spatial objects and make use of digital simulation and planning tools for more effective workflows. In clinical diagnostics and scientific visualisation, too, there is a lot of spatial content to be explored and manipulated. Current computer systems and desktop-based input devices do not support these tasks sufficiently to provide intuitive and efficient 3D interfaces. The bottom line is that translations in depth and three-dimensional rotations are not directly supported when working on planar surfaces. Hence, even basic interactions such as examining an object, which involves head motion and object rotation, become difficult with such systems.

An obvious solution for designing adequate input devices for 3D object manipulation is the use of props: tangible objects that resemble and represent a specific virtual object. Since props are typically tracked, every movement of the hand-held prop results in an equivalent motion of the represented virtual object.
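A minimal Python sketch of this one-to-one mapping may help to make it concrete. The tracker and scene-graph calls (get_prop_pose, set_position, set_orientation) are hypothetical placeholders, not an actual API:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple       # (x, y, z) in metres
    orientation: tuple    # quaternion (w, x, y, z)

def update_virtual_object(virtual_object, tracker):
    """Copy the tracked prop pose one-to-one onto the represented virtual object."""
    prop_pose = tracker.get_prop_pose()                   # assumed tracker call
    virtual_object.set_position(prop_pose.position)       # assumed scene-graph call
    virtual_object.set_orientation(prop_pose.orientation)
```

Called once per frame, such a loop is all that is needed for a prop to "become" its virtual counterpart in the user's hand.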

Of course, we do not wish to clutter workspaces with lots of different objects, each representing a different virtual counterpart. But more general, prop-like devices could be a good alternative. Keeping the prop's shape rather basic allows the user to employ it as a representation for more than just one virtual object. The standard mouse is a kind of prop for the 2D pointer and may be used to control several tools with comparable motion characteristics. Similarly, 3D manipulation props could represent a human head in clinical diagnostics, a car in design reviews, a volume of sediments in geological visualisation or anything else that needs to be examined.

Mapping more complex manipulations like scaling, colour adjustment or deformation to the physical prop would obviously be cost-intensive, calling the benefits of digital simulation into question. Many actions that can be performed in computer applications therefore need a certain abstraction in order to be controlled with ease and reasonable effort. But it is not only 3D motion that can be recorded by electronic sensor devices. Whether pressure, acceleration, wind or temperature: tiny sensors allow us to record basically everything that can be tangibly perceived by humans.

The Cubic Mouse (see Figure 1) is such a prop device for 3D object examination, with a quite generic cubic shape and additional input sensors for detailed manipulations. Its design is the result of task analyses of applications that involve slicing and cutting volumetric data sets or other spatial objects. The device represents the manipulated object and facilitates the control of cutting-plane movements along the three principal axes with tangible rods. In doing so, the device supports such tasks more effectively than more general wand- or glove-based interfaces.
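The following sketch illustrates the kind of mapping described above; it is not the actual Cubic Mouse driver, and the sensor names, value ranges and scene-graph calls are assumptions. The tracked pose of the cube positions and orients the data set, while each rod moves a cutting plane along its principal axis:

```python
def update_cutting_planes(device, volume, planes):
    # The cube's tracked pose positions and orients the represented data set.
    volume.set_pose(device.get_pose())                    # assumed tracking call

    # Each rod reports a normalised displacement in [-1, 1]; it is mapped to a
    # cutting-plane offset along the corresponding principal axis of the volume.
    for axis in ("x", "y", "z"):
        displacement = device.get_rod(axis)               # assumed rod sensor call
        planes[axis].set_offset(displacement * volume.extent(axis) / 2)
```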


4. Novel Input Device Design

Each type of input sensor provides specific feedback, depending on its construction and the input parameters it measures. For example, the counterforce of an elastic joystick perfectly matches the task of controlling velocity while navigating through a virtual environment, whereas the aforementioned props are typically free-moving devices ideally suited to positioning an object in space. Another major factor in the design of human-computer interaction systems is the match between the simultaneously available degrees of freedom (DOF) of the input controller and the integral attributes of the task it is designed to control. For example, if a task requires movement in all three dimensions, the input device should support these translations along multiple axes simultaneously. If instead only two dimensions are required, as for viewpoint orientation in space, the operational axes of the input device should be constrained to prevent unintentional actions. Having said this, it would be uneconomical to purposely design an individual controller for every different type of 3D task.
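The contrast between the two control styles mentioned above can be sketched in a few lines of Python. The gain value, the per-frame step functions and the viewpoint and scene-graph calls are invented for illustration: an elastic, self-centering sensor suits rate control, where deflection is integrated into velocity, while a free-moving tracked prop suits position control, where the pose is applied directly.

```python
GAIN = 0.5  # assumed scaling from joystick deflection to speed in m/s

def rate_control_step(viewpoint, joystick_deflection, dt):
    """Elastic joystick: deflection is mapped to velocity and integrated over time."""
    velocity = [GAIN * d for d in joystick_deflection]
    viewpoint.translate([v * dt for v in velocity])       # assumed viewpoint API

def position_control_step(virtual_object, prop_pose):
    """Free-moving prop: the tracked pose is mapped directly onto the object pose."""
    virtual_object.set_pose(prop_pose)                    # assumed scene-graph API
```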

For the design of new input devices, we examine various tasks within targeted applications and identify task-specific interactions and interaction sequences. These observations form the basis of our design decisions: a task's parameters determine which types of sensor are best suited for a corresponding control device. The relative frequency of specific tasks, and the transitions between them, inform which sensors should be incorporated in a device and how easily the user should be able to switch between them. Providing good combinations of simultaneously and separately available DOF through an ergonomic arrangement of various sensors remains a considerable challenge. In addition - as for any physical tool - there are also design qualities like weight, shape and appearance that qualify input devices for certain uses. Our efforts to address these design concerns are illustrated by the following two input devices.

4.1. The Globefish

Manipulating objects in 3D is a central task in most digital content creation systems. We observed users performing this task with an integrated six-degrees-of-freedom (DOF) input device, the commercially available SpaceMouse. We found that users alternated between rotating and translating and rarely used both operations simultaneously. We therefore decided to build an input device that uses separate sensors for these two interaction modes and allows rapid switching between them. This is the central idea behind our Globefish device, which consists of a custom 3-DOF trackball embedded in a spring-loaded frame. The Globefish trackball sensor measures the rotation of the ball, which is manipulated by the fingertips, and transforms the sensor reading into a corresponding rotation of a virtual object. Firming the grip on the trackball and pushing or pulling it in any direction controls the virtual object's translation along all spatial dimensions. In a user study we compared the Globefish to the SpaceMouse for object positioning tasks. For these types of tasks the Globefish clearly outperformed the SpaceMouse, and most users preferred it.
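A rough Python sketch of the Globefish mapping just described, with assumed sensor calls and gain values rather than the real driver code: the trackball's incremental rotation is applied directly to the object's orientation, while the elastic deflection of the spring-loaded frame is interpreted as a translation velocity.

```python
ROTATION_GAIN = 1.0     # assumed: ball rotation to object rotation
TRANSLATION_GAIN = 0.2  # assumed: frame deflection to translation speed in m/s

def globefish_step(obj, device, dt):
    # 3-DOF trackball: incremental rotation since the last frame, applied to
    # the virtual object's orientation (position control for rotation).
    rx, ry, rz = device.ball_rotation_delta()             # assumed call, radians
    obj.rotate_euler(rx * ROTATION_GAIN, ry * ROTATION_GAIN, rz * ROTATION_GAIN)

    # Spring-loaded frame: elastic deflection is mapped to a translation
    # velocity (rate control) and integrated over the frame time.
    dx, dy, dz = device.frame_deflection()                # assumed call, [-1, 1]
    obj.translate(TRANSLATION_GAIN * dx * dt,
                  TRANSLATION_GAIN * dy * dt,
                  TRANSLATION_GAIN * dz * dt)
```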

Motivated by these results, we are currently studying the usability of our device for viewpoint navigation. Since this is a more complex task, it cannot be evaluated as easily as manipulation performance. Navigation in large environments involves both motor control and cognitive tasks. The motor behaviour, referred to as travel, is the movement of the viewpoint from one location to another. The cognitive processes, known as way-finding, involve specifying a path through an environment. While travelling, way-finding is mainly supported by regularly rotating the view to scan the passing environment. For that purpose, the rotational degrees of freedom need to be controlled independently of other input channels. We believe that the Globefish's tangible separation of rotational from translational input facilitates this environment scanning and thus way-finding.

Travel along a given path may be supported by different interaction metaphors. Each places different requirements on sensor design, although the task always requires simultaneous control of translational and rotational degrees of freedom. Imagine a task like moving forward and steering - similar to flying an aeroplane or driving a car. Movement velocity within the environment would normally be controlled by applying forward pressure on the Globefish, but doing so while also trying to steer is very difficult. We have therefore developed a more appropriate control mapping for such scenarios, incorporating a form of cruise control. In this mapping, once the user has set the movement velocity with the Globefish, they can let go of the device and freely operate other controls without losing or changing their velocity within the virtual environment.
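The cruise-control idea can be summarised in a short sketch; all names, the deadzone threshold and the viewpoint calls are invented for illustration. Forward pressure sets a travel speed that persists after the user lets go, leaving the hands free for steering.

```python
DEADZONE = 0.05  # assumed: ignore tiny, unintentional deflections

class CruiseControl:
    """Travel speed persists after the user releases the device ("set and forget")."""

    def __init__(self):
        self.speed = 0.0

    def update(self, forward_deflection, steering_input, viewpoint, dt):
        # Only an intentional push or pull changes the stored travel speed.
        if abs(forward_deflection) > DEADZONE:
            self.speed = forward_deflection
        # Rotational input steers independently of the stored speed, so the
        # hands are free to scan the environment while travel continues.
        viewpoint.turn(steering_input)                    # assumed viewpoint API
        viewpoint.move_forward(self.speed * dt)
```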

4.2. The Groovepad
The size of the images and maps that users expect to pan, zoom or otherwise manipulate on screen continues to grow. In current user interfaces, mouse pointer input is employed to control these operations with software tools. A drawback of this approach is the frequent mode changes it requires, since the mouse pointer can only be used for one task at a time. The Groovepad is an input device that augments a common touchpad with an elastically suspended frame, which provides joystick-like functionality. The two input sensors are assembled such that they can be used separately while allowing frequent and fluent switching between their different input characteristics.
The elastically suspended ring of the Groovepad can be used as a tangible counterpart to the window frame of a graphical user interface. The idea behind this design is to map the input from the Groovepad ring to the panning of the active workspace window. The ring can also be used separately as a redundant input channel for cursor control.
The increasingly popular zoomable interfaces are potentially a promising application domain for the Groovepad, since they require pointing, panning and zooming as inherent operations. Pointing and panning are directly supported by the Groovepad, and smooth circular gestures along the Groovepad ring can be used to specify the zoom factor.
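The three Groovepad mappings discussed above could look roughly as follows in Python; the sensor readings, gain values and window/pointer calls are assumptions, not the device's real interface. The touchpad surface moves the pointer, the elastic ring pans the active window, and a circular stroke along the ring changes the zoom factor.

```python
PAN_GAIN = 300.0  # assumed: pixels per second at full ring deflection
ZOOM_GAIN = 0.1   # assumed: zoom change per radian of circular ring motion

def groovepad_step(window, pointer, device, dt):
    # Touchpad surface: relative finger motion moves the pointer, as usual.
    dx, dy = device.touch_delta()                         # assumed sensor call
    pointer.move_by(dx, dy)

    # Elastic ring: deflection pans the active window at a proportional rate.
    rx, ry = device.ring_deflection()                     # assumed call, [-1, 1]
    window.pan_by(rx * PAN_GAIN * dt, ry * PAN_GAIN * dt)

    # Circular gesture along the ring: angular progress adjusts the zoom factor.
    window.zoom_by(1.0 + device.ring_angle_delta() * ZOOM_GAIN)  # assumed call
```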

In our study, which compared the Groovepad to regular touchpads, users performed better with the Groovepad and preferred it to touchpad interfaces. This was particularly the case for tasks which required frequent switches between panning the window and controlling the mouse pointer.

 

 

5. Future Work

We have presented some of our ideas and rationales for designing novel input devices with multiple degrees of freedom. Our user studies indicate that these devices perform well for a certain set of tasks and that they can compete with commercially available solutions. However, the design space for desktop as well as handheld solutions is still largely unexplored. Further user studies based on carefully selected tasks and task combinations need to examine the advantages and disadvantages of various sensor combinations in order to further improve tangible 2D and 3D interfaces. Recently, gaze-controlled interfaces as well as brain-computer interfaces have gained more attention in user interface research. These approaches represent the most intangible interfaces one could probably imagine, due to their total lack of physical feedback. Nevertheless, many combinations with tangible interfaces could be explored. One could imagine users holding a prop in their hands and using gaze input to select a specific virtual object, which is then tied to the prop in hand. While the user manipulates the prop, the interpretation of brain signals is used to identify the applied gestures, which are then transferred onto the virtual object. In this way, advanced gestures become possible without the need to equip the prop with tracking technology and complex sensor arrays.