With the emergence of the COVID-19 pandemic, we are devoting a series of news posts to exploring the possibilities of touchless interfaces. Our first post in the series, Touchless Gesture-Based Exhibits, Part One: High-Fidelity Interaction, looked at prototype software we developed with Intel. This second post reviews a variety of motion- and gesture-based exhibits we have created over the past few years for zoos, science centers, and children’s museums.
The original Microsoft Kinect, first released ten years ago, was a revolutionary motion-sensing device. Although it was developed for gaming, it didn’t take long for museums and design firms to begin building exhibits around it. In fact, we still create experiences using the Kinect (now in its latest generation, the Azure Kinect), along with devices from Intel and Orbbec and the Stereolabs ZED camera.
While this technology has evolved and the range of devices has proliferated, the key design principles for incorporating them in experiences remain much the same. Visitors’ movements are tracked and trigger outcomes or events in the exhibit space; in some cases, we develop software (or use existing software) to perform skeletal tracking, which lets us detect specific gestures, poses, or movements. For these experiences to be successful, visitors need to be aware that they can interact with the environment and know in real time that their actions are being detected. In some cases, we need to provide specific instructions on where to stand or how to move. With most touchless interfaces, clear directions and active feedback are essential to ensure usability.
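To make the skeletal-tracking idea concrete, here is a minimal sketch of how joint data from a sensor SDK can be turned into a simple pose check. The joint names, data layout, and helper functions are hypothetical placeholders rather than any particular vendor’s API.

```python
# Minimal sketch, assuming a hypothetical SDK wrapper: joints arrive as a dict of
# (x, y, z) positions in meters, with y pointing up. Names like "hand_left" are
# placeholders, not a specific vendor's API.

def hand_raised(joints, margin=0.10):
    """True if either hand is at least `margin` meters above the head joint."""
    head_y = joints["head"][1]
    return (joints["hand_left"][1] - head_y > margin or
            joints["hand_right"][1] - head_y > margin)

def gesturing_visitors(skeletons):
    """skeletons: list of {"id": ..., "joints": {...}} records, one per tracked visitor."""
    return [s["id"] for s in skeletons if hand_raised(s["joints"])]
```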
Single-User Experiences
One of the first motion-based exhibits we created was Be a Bug, developed with the ABQ BioPark in 2015. In this experience, visitors select whether to “become” a honeybee, a beetle, or a damselfly. Once they assume their new identity, they fly from food source to food source in a game-like environment, moving their arms to flap their wings and leaning to one side or the other to turn. Other than the initial insect selection, the interaction is entirely touchless.
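The interaction mapping for an exhibit like this can be surprisingly compact. The sketch below is illustrative only (the joint names and scaling constants are assumptions, not our production code), but it shows the general idea of converting arm motion into flap strength and body lean into turning.

```python
# Illustrative sketch (not the production Be a Bug code). Joints are (x, y, z)
# positions in meters with y up; the scaling constants are assumptions chosen to
# keep the outputs in a 0..1 or -1..1 range.

def flap_strength(prev_joints, curr_joints, dt):
    """Rough flap effort (0..1) from the downward speed of both hands."""
    speed = 0.0
    for hand in ("hand_left", "hand_right"):
        dy = prev_joints[hand][1] - curr_joints[hand][1]   # positive on the downstroke
        speed += max(dy / dt, 0.0)
    return min(speed / 2.0, 1.0)

def turn_amount(joints):
    """Lean estimate (-1 hard left .. +1 hard right) from shoulder/hip offset."""
    shoulder_x = (joints["shoulder_left"][0] + joints["shoulder_right"][0]) / 2.0
    hip_x = (joints["hip_left"][0] + joints["hip_right"][0]) / 2.0
    return max(-1.0, min(1.0, (shoulder_x - hip_x) * 4.0))
```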
Be a Bug requires the visitor to stand in a specific location so the motion sensor can accurately track their movements. We placed vinyl graphics on the floor to let visitors know the optimal location. In addition, because we used touch for the initial selection of the insect, we placed additional graphics nearby encouraging visitors to go to the screen to make their choice. With the benefit of hindsight, that additional literal and metaphorical step was a design flaw. However, our evaluations have shown that this hasn’t affected the exhibit’s success or popularity: we have found that there is often a line of visitors waiting to use it. (In fact, the opportunity to watch others not only increases the fun of the experience but also provides guidance in how to use it.)
Along with floor graphics, a small animation inset on the screen shows the visitor’s skeletal outline in real time. Additionally, animated pointers on the display show visitors the direction they should move to find the next food source. As noted earlier, this kind of feedback and guidance—letting visitors know their movements are being received and giving hints about how to use the exhibit—is key to a successful experience.
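As a rough illustration of that real-time feedback, the sketch below shows how a skeleton inset might be drawn from tracked joints. The joint names and drawing helpers are stand-ins for whatever SDK and rendering framework an exhibit actually uses.

```python
# Sketch of a feedback inset, assuming generic draw_line/draw_circle callbacks and a
# to_screen() projection supplied by the exhibit's rendering framework; none of these
# names come from a real SDK.

BONES = [
    ("head", "shoulder_center"),
    ("shoulder_center", "shoulder_left"), ("shoulder_left", "hand_left"),
    ("shoulder_center", "shoulder_right"), ("shoulder_right", "hand_right"),
    ("shoulder_center", "hip_center"),
]

def draw_skeleton_inset(joints, to_screen, draw_line, draw_circle):
    """Project each tracked joint into the inset and connect the pairs in BONES."""
    points = {name: to_screen(pos) for name, pos in joints.items()}
    for a, b in BONES:
        if a in points and b in points:
            draw_line(points[a], points[b])
    for p in points.values():
        draw_circle(p, radius=4)
```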
A few years after creating Be a Bug, we had the opportunity to work with the BioPark again on the Penguin Chill exhibition, where one interactive exhibit extended and refined the same basic concept. At Chow Time, visitors are prompted to use a pose gesture and then raise their hands to select one of three penguin species presented in a 3D carousel. This approach lets guests interact with menu items and make selections that would otherwise require touch, keeping the experience completely gesture-based.
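A hand-raise menu like this can be implemented in a number of ways. The sketch below shows one plausible approach, with assumed joint names and thresholds, in which the horizontal position of a raised hand picks a carousel slot; in practice, a dwell timer would typically confirm the choice.

```python
# One plausible approach, sketched with assumed joint names and thresholds: the
# horizontal offset of a raised hand from the shoulder center picks a carousel slot.
# In practice, a dwell timer would usually confirm the selection.

def carousel_selection(joints, slot_width=0.15):
    """Return 0, 1, or 2 for the selected slot, or None if no hand is raised."""
    head_y = joints["head"][1]
    for hand in ("hand_left", "hand_right"):
        if joints[hand][1] > head_y:                       # raised hand = selecting
            offset = joints[hand][0] - joints["shoulder_center"][0]
            if offset < -slot_width:
                return 0            # left item
            if offset > slot_width:
                return 2            # right item
            return 1                # middle item
    return None
```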
Although the imagery and content are different, the structure of Chow Time is similar to Be a Bug, with visitors swimming to find food and leaning to control their direction. However, we also added predators and other dangers to the environment, so guests need to avoid those unpleasant surprises. A special pose gesture in which guests raise a hand and lean prompts the penguin to spin to evade danger.
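Combined triggers like the spin gesture are usually just two pose checks evaluated together, as in this brief sketch with assumed joint names and thresholds.

```python
# Brief sketch of a combined trigger: the spin fires only while a hand is raised
# *and* the torso is leaning. Joint names and thresholds are assumptions.

def is_spin_gesture(joints, lean_threshold=0.12):
    hand_up = (joints["hand_left"][1] > joints["head"][1] or
               joints["hand_right"][1] > joints["head"][1])
    lean = abs(joints["shoulder_center"][0] - joints["hip_center"][0])
    return hand_up and lean > lean_threshold
```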
Both of these exhibits teach basic facts about animals. In addition, by using their whole bodies to assume an animal’s identity, visitors not only have fun, but can also experience empathy and may come to care more about the creatures and the environment.
More practically, the fact that people line up to use exhibits like these speaks to the popularity of these types of motion-based interactions; it can be both entertaining and instructive to watch people leaning, waving, and gyrating. In fact, we took note of these exhibits’ success when we designed a new hardware model that could more easily support such experiences. Our Portrait touch and motion kiosk was directly influenced by what we learned in developing these exhibits.
However, it’s important to acknowledge that these are single-user experiences; exhibits that provide opportunities for social interaction can not only increase engagement but may also lead to deeper learning.
Multiuser Motion-Based Exhibits and Immersive Environments
Moving from single to multiple users in a gesture-based experience presents intriguing challenges. Depending on the scale, it may be necessary to use two or more motion-sensing devices, and the data they gather must be combined so that tracking is consistent across devices. A social exhibit space with numerous visitors roaming freely, and perhaps gesturing and pointing, can present formidable programming challenges as movements are detected by several devices simultaneously.
DinoStomp, an immersive experience we developed in 2016 with the Fort Worth Museum of Science and History, was the first exhibit in which we used multiple Kinects, with three devices tracking visitor interactions across a video wall 8 feet high and 20 feet wide. To make the setup reliable, we had to account for varying distances between visitors and the wall as well as the curve of the wall itself. We developed an algorithm to calibrate the devices so they could agree on visitors’ locations and trigger the appropriate response on the wall.
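The sketch below illustrates the general calibration idea, though not DinoStomp’s actual code: each sensor gets a transform, estimated during setup, that maps its local floor coordinates into a shared “wall space.” The rotation and translation values shown are placeholders.

```python
# Hedged sketch of per-sensor calibration: each device reports positions in its own
# coordinate frame, and a rotation + translation maps them into a shared space.
# The transform values below are placeholders, not DinoStomp's measured values.
import numpy as np

class SensorCalibration:
    def __init__(self, rotation_deg, translation):
        theta = np.radians(rotation_deg)
        # 2D rotation about the vertical axis, applied to (x, z) floor coordinates
        self.R = np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
        self.t = np.asarray(translation, dtype=float)

    def to_shared(self, xz):
        """Map a sensor-local (x, z) floor position into the shared wall space."""
        return self.R @ np.asarray(xz, dtype=float) + self.t

# Example: three sensors mounted along a curved wall, each with its own transform.
calibrations = {
    "kinect_left":   SensorCalibration(rotation_deg=-20, translation=[-3.0, 0.0]),
    "kinect_center": SensorCalibration(rotation_deg=0,   translation=[ 0.0, 0.0]),
    "kinect_right":  SensorCalibration(rotation_deg=20,  translation=[ 3.0, 0.0]),
}
```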
Our calibration algorithm joins the data from all three sensors and sends it to the central exhibit application that controls the dinosaurs’ movements. It’s important to note that the exhibit includes a variety of 3D dinosaur characters; some, like the raptors, are tied directly to visitor motion, while others, such as the Tyrannosaurus and Brachiosaurus, dominate the scene without being affected by visitor movements. This combination allows for a particularly dynamic story in which raptors track visitors as they move and are then chased away by larger dinosaurs.
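For readers curious about the data-joining step, here is one common approach sketched in broad strokes, not necessarily our exact implementation: transform every detection into the shared frame and merge detections that land close together, so a visitor seen by two sensors is counted once. The merge radius is an assumed value.

```python
# Sketch of one way to join multi-sensor tracking data (not necessarily the
# exhibit's exact algorithm): map each detection into the shared frame using the
# per-sensor calibration, then merge detections that fall within a small radius of
# one another. The 0.5 m merge radius is an assumption.
import numpy as np

def fuse_detections(detections, to_shared, merge_radius=0.5):
    """detections: list of (sensor_id, (x, z)); to_shared: sensor_id -> transform function."""
    merged = []   # one shared-frame position per visitor
    for sensor_id, xz in detections:
        pos = np.asarray(to_shared[sensor_id](xz), dtype=float)
        for i, existing in enumerate(merged):
            if np.linalg.norm(pos - existing) < merge_radius:
                merged[i] = (existing + pos) / 2.0   # average the two estimates
                break
        else:
            merged.append(pos)
    return merged   # handed to the application driving the raptor behavior
```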
With a few visitors entering the space, the tracking system works well, providing something just short of one-to-one interaction between visitors and raptors. However, as more visitors move into the area, the tracking loses some reliability, though this minor shortcoming is not apparent to most guests. The complex and dynamic nature of the interaction mimics the real world, in which animals might show interest in an interloper or become distracted by other creatures or events. While this kind of interaction is quite different from the precise, high-fidelity gestures we explored in the previous article in this series, it makes for a highly engaging and memorable experience.
More recently, we created another gesture-based exhibit that mimics the real world: In Tune with Nature, developed in 2019 with the Cayton Children’s Museum in Santa Monica. This immersive experience takes place in a three-sided room in which projectors create a stylized mountain meadow scene with hand-drawn 2D artwork, animations, and completely touchless interaction.
The exhibit environment moves from day to night and back again in a continuing cycle. During the day, flocks of butterflies appear, along with a watchful frog and curious dragonflies. By nightfall, the butterflies leave and fireflies begin to appear. The overarching concept is that some natural cycles, such as the movement of the sun and moon, can’t be controlled, while we can affect other natural events. All of the insects and animals in the experience are characters in a dynamic scene and are affected by motion and sound. If visitors are too active or loud, the butterflies and dragonflies fly away, the frog disappears into the water, and the fireflies stop flashing. Quiet and reflective visitors are rewarded with increasing numbers of creatures and activity. Not unlike DinoStomp, the feedback that motion- and audio-sensing provides is more than enough to make the exhibit responsive, interactive, and fun. This isn’t a high-fidelity system, but it works extremely well for this experience.
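The underlying logic is simple enough to sketch. The snippet below is an illustrative approximation, with assumed thresholds, of how a calm-versus-busy reading might drive the number of creatures on screen.

```python
# Illustrative approximation with assumed thresholds: a calm room lets more creatures
# appear, crossing the "busy" level sends them away, and the band between the two
# keeps the scene from flickering.

CALM_LEVEL = 0.3    # below this combined level the scene grows livelier
BUSY_LEVEL = 0.7    # above this level the creatures retreat

def update_creature_count(count, activity_level, sound_level, max_creatures=20):
    """Nudge the creature count up or down based on how calm the room is."""
    level = max(activity_level, sound_level)     # both normalized to 0..1
    if level > BUSY_LEVEL:
        return max(count - 1, 0)                 # butterflies and fireflies slip away
    if level < CALM_LEVEL:
        return min(count + 1, max_creatures)     # quiet visitors are rewarded
    return count                                 # in-between: hold steady
```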
For this exhibit, we mounted Orbbec Astra motion-sensing devices on the ceiling to capture visitor activity. Rather than skeletal tracking, these devices use “blob tracking” to follow visitors in the space; we look at gross movement and activity rather than more nuanced full-body gestures or poses. In addition, we developed a control panel to mask active areas and adjust the activation thresholds for movement and sound. (The Orbbec has a built-in microphone.) This allows the Ideum and Cayton teams to fine-tune the interactions based on observations of how visitors actually use the exhibit.
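The sketch below shows, in broad strokes, how an overhead blob-tracking setup can boil a depth stream down to a single activity level and expose the thresholds as a tunable configuration. The frame handling and values are assumptions, not the Orbbec SDK’s actual API.

```python
# Broad-strokes sketch: estimate an activity level from frame-to-frame depth change
# inside the unmasked region of an overhead depth image, and keep the thresholds in a
# tunable configuration. The frame handling and values are assumptions, not the
# Orbbec SDK's actual API.
import numpy as np

def activity_level(prev_depth, curr_depth, active_mask, change_mm=50, scale=0.05):
    """Return a rough 0..1 activity estimate from depth change in the active area."""
    changed = np.abs(curr_depth.astype(int) - prev_depth.astype(int)) > change_mm
    changed &= active_mask                                  # ignore masked-off regions
    fraction = changed.sum() / max(active_mask.sum(), 1)    # share of active pixels that moved
    return min(fraction / scale, 1.0)

# Control-panel-style settings the exhibit team can tune on site.
config = {
    "motion_threshold": 0.6,   # activity level above which creatures react
    "sound_threshold": 0.5,    # microphone level above which creatures react
}
```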
In the next and final post in this series, we will share more about the future of touchless technology. (Touchless Gesture-Based Exhibits, Part Three: Touchless.Design is now available - July 9, 2020)