
Tangible Engine version 3.1 is now available for purchase with Ideum touch tables. This release adds built-in support for ring (or “donut”) and triangle-shaped tangible objects, along with improvements to the Node.js bindings. It arrives just after Tangible Engine’s 10th anniversary, and while we recently shared some of the history behind the development of this ground-breaking software (see: Tangible Engine Turns 10), I wanted to share some additional thoughts.
Back in 2015, we were experimenting with object recognition on what were, at the time, our first products using projected capacitive touch technology. Before that, our company briefly used infrared (IR) touch frames, and earlier still, camera-based projection systems. With those earliest vision-based systems, we could recognize a virtually unlimited set of objects.
Projected capacitive touch relies on the electrical properties of the human body to register touch points. Object recognition works by mimicking that interaction with conductive objects, each carrying a distinct pattern of “touch points” that the software recognizes. Because this isn’t a vision-based system, the number of objects we can recognize is limited, but the system is simple. Capacitive touch technology for touch tables offers many advantages, including reliability, performance, a slim profile, and immunity to light interference. And unlike projection systems, it needs no calibration and essentially works on most systems right out of the box!
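To make the idea concrete, here is a minimal sketch of how an object can be identified from the touch points its conductive feet produce. This assumes each object is registered as a set of pairwise distances between its feet; the names, patterns, and tolerance below are purely illustrative and are not the actual Tangible Engine internals.

```javascript
const TOLERANCE = 3.0; // allowed jitter per pairwise distance, in mm (assumed)

// Hypothetical registry: object name -> sorted pairwise foot distances (mm)
const PATTERNS = {
  ring: [40.0, 40.0, 40.0],     // equilateral 3-point pattern
  triangle: [30.0, 40.0, 50.0], // scalene 3-point pattern
};

// Sorted pairwise distances: invariant to translation and rotation,
// which is why the pattern can be recognized anywhere on the screen.
function signature(points) {
  const dists = [];
  for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
      dists.push(
        Math.hypot(points[i][0] - points[j][0], points[i][1] - points[j][1])
      );
    }
  }
  return dists.sort((a, b) => a - b);
}

// Return the name of the registered object whose signature matches, or null.
function classify(points) {
  const sig = signature(points);
  for (const [name, pattern] of Object.entries(PATTERNS)) {
    if (
      sig.length === pattern.length &&
      sig.every((d, i) => Math.abs(d - pattern[i]) <= TOLERANCE)
    ) {
      return name;
    }
  }
  return null;
}

// Three touch points arranged as a 30-40-50 triangle:
console.log(classify([[0, 0], [30, 0], [30, 40]])); // -> "triangle"
```

Because the signature is rotation- and translation-invariant, the same pattern is found no matter where the object sits on the table; the trade-off, as noted above, is that only a limited set of sufficiently distinct patterns can coexist.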
Before all of this became clear, we ran a series of experiments on a touch table that used a 3M 46” HD projected capacitive touch screen. The biggest challenge at the time was figuring out how to create objects that had capacitive properties. Copper objects were easily recognized, but the material scratched the screen, and beyond confirming the stack could work, it was a dead end. Still, we were somewhat encouraged, and we developed a video showing the “office of the future,” which garnered tens of thousands of views and sparked considerable interest in object recognition.
Around the same time, we were experimenting with 3D printing and identified graphene-infused materials with conductive properties. At first, the material was too hard and scratched the displays, much as the copper had, but we eventually sourced a more rubbery conductive material. Combined with other techniques, such as conductive foam on the objects’ feet and added weight, this material made the objects far more reliable than our earliest experiments.
We built Tangible Engine for ourselves at first, developing some interesting projects with Starbucks and JCB wines, to mention a few. Given the strong interest in the technology, we decided to release it as a product in 2016. We rapidly expanded its features, adding rotation tracking and multiscreen support, many of them requested by other design firms. We developed a visualizer, improved the software (and the ability to “tune” it for reliability), and eventually added support for a wide range of tangible objects of different shapes and sizes. In addition, we built an SDK (software development kit) with simple bindings for Unity and Node.js, making authoring straightforward.
Since its release, many competitors have copied our approach to tangibles, or taken it further by extending it with other technologies. In some cases, these implementations can recognize smaller objects or a larger number of them. Those qualities can be useful depending on the project, but they also require additional processors (such as an Arduino or a Raspberry Pi) and make for a more complex system. We’ve tried to keep our object recognition platform simple, reliable, and built within the existing tech stack, so all of our touch tables can detect tangibles with software alone, even across multiple displays simultaneously.
Our interest in object recognition stems from a broader fascination with how digital and physical objects and experiences combine. Touch tables themselves are large physical platforms that can support social interaction. Adding objects (or consumables like wine, coffee, etc.) can create new kinds of experiences in which digital technology and physical objects work together to elevate social interaction. These interactions can be relatively simple or much more complex, such as games. We wrote a white paper a few years ago that talks broadly about different types of tangible user interfaces.
For museums, touch tables have become a standard form factor in new exhibitions. They are inherently social and can support multiple visitors at once. While much depends on the software, in many cases the types of interaction are similar to those found in purely physical tabletop exhibits, which have been around for half a century! We are proud to have played a role in popularizing touch tables, having sold thousands of them to museums and other public venues in North America and around the world over the last 15+ years. Tangible objects are also becoming more common as visitors seek new experiences and museums look to connect with visitors through collections and stories. Ideum continues to develop custom tangible user experiences, and we will share some new work with the Monterey Bay Aquarium later this summer.
At the moment, there isn’t a set list of new features or a planned next release for Tangible Engine, but we are always looking for ways to improve it and remain open to suggestions. A decade’s worth of improvements has gone into Tangible Engine, and we will continue to support and develop what is the oldest (and, we hope, the simplest) object-recognition system for projected capacitive touch tables. If you have any suggestions, comments, or questions, feel free to contact our Sales and Support teams.