More Natural User Interfaces for Designers

Multi-touch surfaces, stereoscopic displays and gesture computing reveal a yearning for natural interaction with digital objects.

By Kenneth Wong, Senior Editor

About eight months ago, a video clip of a child playing with an iPad surfaced on YouTube. The baby, a 1-year-old girl, was poking, pinching and swiping images on the tablet. If the clip had ended there, it would have been virtually indistinguishable from thousands of online videos of other cute babies doing cute things. What’s remarkable about this video is its second act, where the toddler began playing with a printed Marie Claire magazine after she’d had her fill of the iPad. She poked, pinched and swiped the pages of the magazine just as she had the touchscreen. When the magazine didn’t respond with enlarged views and animated page turns, she was visibly perplexed.

The fadeout text at the clip’s end read, “For my 1-year-old daughter, a magazine is an iPad that doesn’t work. It will remain so for her whole life. Steve Jobs has coded a part of her OS.”

The clip eventually went viral, ending up on morning shows and primetime news as an example of technology’s influence on human behavior.

The mouse-and-keyboard combination that most CAD users employ today is a legacy of the text-display era, dating back to when you had to type a command line (often in DOS) to tell your computer what to do. Unfortunately, this dominant computing method has coded our brains’ OS for the last three or four decades—so even as computer graphics rapidly transitioned from plain text to photorealism, few challenged a paradigm that was obviously flawed for interacting with 3D geometry. Instead, developers and users alike plowed ahead with the familiar method by default.

But the iPad generation, like the 1-year-old in the YouTube video, may finally be forcing hardware makers to reinvent themselves and come up with better alternatives for visualizing digital objects and navigating 3D scenes. At the very least, touch responsiveness in display screens may become the norm, not the exception.

At the latest tech conferences, oversize panels with multi-touch surfaces, stereoscopic displays (both with and without glasses), Kinect-style input systems, and augmented reality goggles provide glimpses of the future. The long-delayed recoding of our biological OS has already begun.

Touchable by Default

At the Siemens PLM Connection conference (May 7-10, 2012 in Las Vegas) and PlanetPTC Live (June 3-6, 2012 in Orlando), the crowd lined up for a chance to, quite literally, get their hands on the 82-in. LCD panel mounted in HP’s booth. Mimicking what people have seen in sci-fi blockbusters like “Minority Report,” the oversized display allows people to pull down, anchor, rotate, zoom, pan, peel off and brush away images and drawings with fingertips and two-handed gestures, with a finesse that would be virtually impossible with a mouse and keyboard. (The same technology is available in HP TouchSmart tablet products in smaller sizes, more suitable for personal use and desktop deployment.)

HP and Perceptive Pixel partnered to produce the oversized touch-responsive LCD display system, demonstrated at HP’s booth at the Siemens PLM Connection 2012 and PlanetPTC Live 2012 conferences. Image courtesy of Perceptive Pixel.

“We worked with a partner called Perceptive Pixel,” explains Tom Salomone, HP’s workstation marketing manager. “It uses projected capacitive technology—the same technology used on the iPad and Windows tablets.”

One advantage of Perceptive Pixel’s device is its capacity to interpret and respond to multiple simultaneous touch inputs. In other words, several pairs of hands can interact with data displayed in different areas of the surface. It also has built-in algorithms to detect and reject noise (for instance, unintended input caused by the base of your palm coming in contact with the surface). In May 2012, the device also gained pen-style input, which allows you to create fine-line drawings and handwritten markups.
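
Perceptive Pixel hasn’t published the details of its rejection algorithms, but the core idea is easy to sketch. The Python fragment below is a minimal illustration, not the company’s code: the TouchPoint fields and the 120 mm² threshold are assumptions, and a real system would also weigh contact shape, pressure and motion history.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    x: float          # screen position, pixels
    y: float
    area_mm2: float   # contact-area estimate reported by the sensor

# Fingertips present small contact patches; palms and forearms are far larger.
MAX_FINGER_AREA_MM2 = 120.0  # illustrative threshold, not a vendor spec

def reject_palms(touches: list[TouchPoint]) -> list[TouchPoint]:
    """Keep only contacts small enough to plausibly be fingertips."""
    return [t for t in touches if t.area_mm2 <= MAX_FINGER_AREA_MM2]

# One deliberate fingertip plus an accidental palm resting on the glass:
frame = [TouchPoint(410, 220, 55.0), TouchPoint(395, 480, 900.0)]
print(reject_palms(frame))  # only the 55 mm^2 fingertip survives
```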

“We’re used to the mouse and the keyboard,” says Salomone, “but anybody growing up today, they’re more used to touch. They can’t understand why some monitors don’t have touch response.”

Dara Bahman, AMD’s senior manager for workstation graphics marketing, points to the proliferation of smartphones and mobile devices as a sign of the times.

“Using fingers, using touch, is a very natural way of interacting,” he says. “If you’re on the [manufacturing] shop floor, and you have a mobile pad, it’s so much easier to just pull up the drawing you need on that pad. You’re not going to use a keyboard and a mouse. These tablets are going to fundamentally change how people work.”

Efrain Rovira, executive director of Dell Precision workstations, notes that designers have had to learn indirect manipulation—mouse, keyboard, trackballs, pen/tablets—to do their jobs.

“In some ways, today’s model is unnatural,” he adds. “Touch could enable a more natural creation model. That said, there are limitations around precision,” he says, noting that fingers are thick, and hands in front of the screen obstruct the view.
“Our customers have told us that they view touch as a convenient nice-to-have, but not an essential feature,” he cautions.

Four-finger multi-touch display support for drawing, writing, editing and zooming onscreen is available in the Dell Precision M4600 and M6600 mobile workstations.

Space Immersion

At the SolidWorks World 2010 conference, director James Cameron, still basking in the success of “Avatar,” was given a chance to peruse the exhibit area before the public was allowed in. The special preview was arranged to accommodate his busy schedule, which left him about 20 minutes between his keynote speech and his press conference.

One of Cameron’s stops was the booth of Infinite Z, which develops the zSpace display system. Described as a “virtual holographic experience,” zSpace uses a combination of a high-resolution display panel, head-tracking cameras, polarized eyewear and a pen stylus to deliver holographic imagery with simulated depth of field.

zSpace from Infinite Z, described as a “virtual holographic experience,” uses a mix of head-tracking cameras and stereoscopic glasses to produce an interactive environment with simulated depth of field. Image courtesy of Infinite Z.

“We’ve been working diligently with application developers in many realms, quite a bit in the CAD space,” says Dave Chavez, Infinite Z’s vice president of research and development.

The system has been demonstrated to work with Autodesk Showcase, Autodesk Alias, Siemens PLM Software’s NX, Dassault Systèmes’ CATIA, and SolidWorks, among others.

On conventional displays (stereoscopic or otherwise), to view an assembly from a different angle, you’d have to rotate the 3D model using a mouse—or fingers, if operating on a multi-touch surface. With zSpace, the head-tracking cameras interpret your movements and recalibrate the projected 3D image accordingly, so inspecting a model from another angle in its holographic space is more consistent with how you would do the same task in the real world. Compared to stereoscopic displays on flat screens, zSpace’s holographic environment creates a more convincing illusion of depth of field.
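
The recalibration step can be illustrated in a few lines of code. The sketch below is an assumption-laden illustration, not Infinite Z’s implementation: it rebuilds a standard look-at view matrix each frame from the tracked head position, whereas a production head-coupled display would also use an off-axis projection frustum matched to the physical screen.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix for a camera at `eye`."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                  # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                  # right axis
    u = np.cross(s, f)                      # corrected up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ -eye       # translate world into eye space
    return view

SCREEN_CENTER = (0.0, 0.0, 0.0)             # display origin in tracker space

# Each frame, the tracking cameras report a fresh head position and the
# view matrix is rebuilt, so the scene shifts the way a real object would.
head_position = (0.12, 0.05, 0.60)          # metres in front of the panel
print(look_at(head_position, SCREEN_CENTER).round(3))
```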

Design Review in Immersive Stereoscope

The advances in display technologies are driven primarily by the desire to “duplicate reality,” according to David Watters, NVIDIA’s senior director, manufacturing & design.

“The aim is to make you forget that what you’re looking at is really not there,” he says. “In the case of automotive styling, for example, seeing the car with stereoscopic glasses as if it’s really in the room makes you forget, for a moment, that the car is not really there. Only then can you truly judge the aesthetics of the product.”

Dell’s Rovira notes, “We are seeing total immersive technology being used now in certain segments of the industry—for the virtual prototyping of cars and aircraft, for example.”

Though a reality-mimicking display style is preferred for design reviews and presentations, for most engineers creating the CAD model, a less realistic display style with “clown colors” may be quite sufficient, Watters adds. By clown colors, he means the type of high-contrast, bright colors that let you clearly see edges and geometric volumes—a visual style that emphasizes dimensions and measurements more than lights, shadows and reflections.

Antoine Reymond, AMD’s senior strategic alliances manager for professional graphics, also observes that we’ll see stereoscopic and holographic visuals more in the consumption of product information than in its creation, adding, “It’ll take some time to get the core designers to move away from 2D flat screens, because they need the precision.”

Waiting for the Kinected World

Though initially developed as a gameplay device, Microsoft Kinect offers tantalizing possibilities for designers and engineers, with its ability to detect and interpret physical gestures.

“A lot of people underestimated what would happen when Kinect was introduced,” notes Infinite Z’s Chavez. “In addition to consumers, the adoption of Kinect in research labs is growing. People found it to be an intuitive way to interact with virtual objects.” (For more on this topic, check out “Kinect for Windows Arrives, Engineering Applications Follow,” February 2012, at DE’s Engineering on the Edge blog, www.engineeringontheedge.com.)

“A lot of gesture-based computing is in the use of computer vision [the computer’s ability to recognize shapes and images],” says NVIDIA’s Watters. Using Kinect for entertainment can be accomplished with some basic image recognition—large body movements captured by a standard camera are sufficient. Using the same device for professional design, on the other hand, is expected to require image processing at a much higher resolution: for example, detecting subtle finger movements from a distance so they can be interpreted as commands for drawing specific objects.
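
A back-of-the-envelope calculation shows why resolution matters here. The sketch below uses the original Kinect’s widely reported depth-sensor figures (roughly a 57° horizontal field of view and a 640-pixel-wide depth map); the 2.5 m viewing distance is an assumed living-room range.

```python
import math

def metres_per_pixel(distance_m: float, hfov_deg: float, px_across: int) -> float:
    """Real-world width (metres) covered by one sensor pixel at a distance."""
    scene_width = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return scene_width / px_across

# First-generation Kinect-class depth sensor: ~57 deg horizontal field of
# view, 640-pixel-wide depth map, viewed from an assumed living-room range.
mm = metres_per_pixel(2.5, 57.0, 640) * 1000.0
print(f"{mm:.1f} mm of fingertip travel per sensor pixel")
# ~4 mm per pixel: ample for arm swings, marginal for fine finger strokes.
```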

Dell’s Rovira points out that the general challenge for Kinect-style “air” gestures is that people working on computers—either desktops or laptops—are close enough to reach the controls.

“For people creating, they need a certain level of precision. Air gestures today, and probably in the future, are much less precise than the alternative that is right in front of them,” he adds. “One of the use cases is presentations or public demos where you would show something to an audience, but the demos of today’s technology illustrate more limitations than results.”

The Burden on Processors

Presenting 3D data in a much more realistic mode is not without consequences. To render and display millions of polygons and pixels—twice as many in the case of stereoscopic output, because the system must render a pair of images to address the left- and right-eye views—the CPUs and GPUs must sweat a lot more.
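
The doubling is easy to see in code. In the minimal sketch below, render_scene is a hypothetical stand-in for an engine’s draw routine; the only point is that stereo output calls it twice per frame, once per eye.

```python
import numpy as np

EYE_SEPARATION_M = 0.064   # average human interpupillary distance, metres

def render_stereo_frame(scene, camera_pos, right_axis, render_scene):
    """Draw the scene twice, once per eye, from laterally offset cameras."""
    offset = (EYE_SEPARATION_M / 2.0) * np.asarray(right_axis, dtype=float)
    camera_pos = np.asarray(camera_pos, dtype=float)
    left = render_scene(scene, camera_pos - offset)    # left-eye pass
    right = render_scene(scene, camera_pos + offset)   # right-eye pass
    return left, right     # two full passes per frame: double the GPU work

# Dummy renderer so the sketch runs end to end:
frame = render_stereo_frame("model", (0.0, 0.0, 2.0), (1.0, 0.0, 0.0),
                            lambda scene, pos: f"{scene} drawn from {pos}")
print(frame)
```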

Dell’s Rovira posits that many of today’s engineering software applications don’t fully leverage the computing power currently available in professional-level workstations.

“Most applications, with the exception of some highly specialized simulation/analysis products, have yet to embrace the advantages of multi-threaded environments,” he says. “The next five years will see a significant change in the speed and flexibility of engineering applications, as parallel code and enhancements in UI become more commonplace.”
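
The kind of change Rovira describes can be as simple as farming independent jobs out to every core. The sketch below is a generic illustration, not any vendor’s code: it uses Python’s standard library to spread a stand-in per-part computation across a multi-core workstation.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze_part(part_id: int) -> float:
    """Stand-in for an expensive, independent per-part computation."""
    return float(sum(i * i for i in range(200_000)) % (part_id + 7))

if __name__ == "__main__":
    part_ids = list(range(32))
    # A serial loop would occupy one core; the pool spreads the same
    # independent jobs across every core the workstation offers.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze_part, part_ids))
    print(f"analyzed {len(results)} parts in parallel")
```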

To satisfy the increased computing needs, “you’ll need devices with more processing power, lower energy, smaller form factor,” observes AMD’s Bahman. He believes accelerated processing units (APUs), which combine the features of CPUs and GPUs, are better positioned to address the requirements: “With an APU, you don’t split memory—[you don’t need] some on the CPU, some on the GPU—so the memory bandwidth is wider. So processing power improves and power usage drops.”

“The increased computing power is what’s allowing things like Microsoft Kinect’s gesture and multi-touch computing,” notes NVIDIA’s Watters. With a system like Perceptive Pixel’s, the computing power is used “not just to interpret what you’re touching, but also to reject the touches you didn’t intend—for example, the palm of your hand,” he adds. “The GPU is becoming more important in these technologies, because a lot of that is image recognition. All the touches and gestures don’t mean a thing if the graphics processing cannot keep up with them.”

A Digital Future with the Naturalism of the Past

HP’s Salomone points to his granddaughter and her sketch pad as an example.

“She’s an artist. She wants to draw on the monitor. She wants the device to be natural. That’s how professional [engineers] want to work, too,” he says. “They want to draw on the screen, mark them up, use both hands, not worry about using a keyboard unless they need to type. If you think about it, before computers came around, that was the natural way to design.”

The move to multi-touch screens, stereoscopic displays and gesture-based computing is, in a sense, a return to the past: an attempt to remove the intermediary input devices that interfere with the natural approach.

Kenneth Wong is Desktop Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at deskeng.com/facebook.

More Info
AMD
Autodesk
Dassault Systèmes
Dell
HP
Infinite Z
Microsoft Kinect
NVIDIA
Perceptive Pixel
PlanetPTC Live
Siemens
