This article explains how the Multi-Mode Interaction sample experience was built and how you can reuse it for your own needs. To access this sample, use the Marketplace tab of the Experiences panel in Composer. You can also download it from this Marketplace page.
By "multi-mode" we mean "offering multiple interactive options". This experience was inspired by the public's increasing sensitivity towards touchscreen use in light of the Coronavirus pandemic. One way to accommodate this sensitivity is to offer both touch and touch-free alternatives for achieving the same goal.
How it works
This experience was built to demonstrate a multi-mode kiosk in action - that is, a kiosk offering multiple options for interacting with on-screen content. In this example, the kiosk helps a customer find the proper drink to pair with a meal.
The three interaction alternatives are:
- Touch, using standard touch assets like Buttons
- Voice commands with the help of the Speech Recognition Interface Asset
- Remote control via the customer's mobile device using Web Triggers
We added user detection to determine when a potential customer is in the vicinity of the kiosk, using the Face Detection with OpenVINO Interface Asset. This part could also have been achieved through the use of a sensor, such as the distance sensor provided by Nexmosphere and accessible via the Nexmosphere Interface Asset.
In the video above, we also added an LED light strip from Nexmosphere to explicitly show the position of the suggested products on the shelf. This isn't part of the sample but can be easily built with the Nexmosphere Interface Asset if you have access to their light strips.
In order to detect whether a customer is close enough to engage, we use the Face Detection with OpenVINO Interface Asset to measure the size of each face in the camera image and define short, medium, and long-range sizes. (The closer the person, the larger the face in the camera image.) The experience behaves differently in each range.
The requisite two thresholds can be specified after running the experience by using the Settings scene, accessed by pressing the 's' key on the keyboard. Note that setting up these thresholds does not require touching the screen; you can walk backward and forward to pick the proper range and then use your voice to confirm and save it.
Defining these two thresholds yields three ranges, enabling Intuiface to determine the customer's distance in physical space.
- At long range: The kiosk will display a traditional Attract screen with large enough text for readability and a noticeable visual to catch the eye.
- At medium range: The kiosk will display a message to encourage the customer to come closer.
- At short range: The experience will begin, proposing the meal selection and wine pairing service.
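The threshold logic above can be sketched in a few lines of Python. This is an illustrative model only: the face-size units and threshold values are hypothetical, not taken from the sample, and in the actual experience this logic is configured visually in Composer rather than coded.

```python
# Illustrative sketch of the two-threshold, three-range logic.
# Units and values are hypothetical assumptions.
SHORT_THRESHOLD = 180   # face width in pixels at which "short range" begins
MEDIUM_THRESHOLD = 90   # face width in pixels at which "medium range" begins

def classify_range(face_width: int) -> str:
    """Map the detected face width to one of the three ranges.

    The closer the person, the larger the face in the camera image.
    """
    if face_width >= SHORT_THRESHOLD:
        return "short"    # start the experience
    if face_width >= MEDIUM_THRESHOLD:
        return "medium"   # encourage the customer to come closer
    return "long"         # show the Attract screen

print(classify_range(200))  # short
print(classify_range(120))  # medium
print(classify_range(50))   # long
```

Only the two thresholds need to be stored; the three ranges fall out of them, which is why the Settings scene asks for just two values.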
When in short range, the customer is informed of the three available means of interacting with the kiosk: Touch, Voice, or use of a Personal Mobile Device (initiated via QR code). We decided to keep this reminder in all scenes for demo purposes. At any moment, the user can switch from one interaction type to another.
Note that if the customer decides to start with touch, we insert an informational message about good health practices.
This sample uses an Excel file to store a list of meals and beverages. In order to propose a list of beverages according to a specific meal, we use an Excel Interface Asset and category filtering, as is done in the Restaurant Menu sample.
Note: This Excel file contains hidden sheets to make the filtering easier. Since the Product scene has five different "product boxes" (see image below; one "box" per drink), and a single collection isn't used to display all five boxes at once, it was easier to create five separate, filtered worksheets and display the first row of each filtered sheet in its associated box.
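The category filtering itself is straightforward. The following sketch shows the idea in Python with hypothetical data and field names; in the sample, the equivalent filtering is performed by the Excel Interface Asset, not by code.

```python
# Illustrative sketch of category filtering, as done by the Excel
# Interface Asset in the sample. Data and field names are hypothetical.
drinks = [
    {"name": "Chardonnay", "category": "fish"},
    {"name": "Merlot", "category": "red meat"},
    {"name": "Pinot Noir", "category": "red meat"},
    {"name": "Riesling", "category": "fish"},
]

def pairings(meal_category: str) -> list:
    """Return the names of all drinks matching the selected meal category."""
    return [d["name"] for d in drinks if d["category"] == meal_category]

print(pairings("red meat"))  # ['Merlot', 'Pinot Noir']
```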
To make sure the three interaction modes use the same sequence of triggers and actions, there is heavy use of "control buttons" and Intuiface's ability to "simulate a tap action". For a Composer user, it is much easier to make trigger/action modifications to one button than to modify three independent but identical trigger/action sets.
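The "control button" pattern described above can be modeled as every input mode funneling into one shared handler. The sketch below is a hypothetical illustration of that design, not Intuiface code: the point is that the trigger/action logic exists in exactly one place.

```python
# Illustrative sketch of the "control button" pattern: all three
# interaction modes simulate a tap on the same control, so the
# trigger/action set is defined and maintained only once.
def select_meal(meal: str) -> str:
    """The single trigger/action set, shared by all interaction modes."""
    return f"Showing pairings for {meal}"

# Each input mode delegates to the same control (simulated tap)
def on_touch(meal: str) -> str:
    return select_meal(meal)

def on_voice_command(meal: str) -> str:
    return select_meal(meal)

def on_web_trigger(meal: str) -> str:
    return select_meal(meal)
```

Changing the behavior of meal selection now means editing one function, not three parallel copies, which mirrors the maintenance benefit described for Composer users.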
No special approach was required to create the touch interaction; it relies on standard touch assets like Buttons.
We used the Speech Recognition Interface Asset, which you can also try in the Speech Recognition Sample.
The commands are stored in the Excel file for easier modifications.
- The Intuiface speech recognition capability is only available on Windows PCs.
- It is not currently possible to bind a Speech Recognition trigger parameter directly to an Excel cell. This is why Text Assets in an experience layer are bound to these Excel cells, and the triggers are bound to those assets.
This interaction approach uses the mechanism explained in our Real Estate Shop Window sample. The Multi-Mode Interaction sample improves the existing mechanism by adding a session ID in the QR Code. Each time a user leaves the kiosk, a new session ID is randomly generated, which generates a new QR Code.
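The session-ID idea can be sketched as follows. The URL structure below is hypothetical (it is not the sample's actual endpoint); the point is that a fresh random session ID is generated each time a user leaves the kiosk, which in turn produces a new QR code.

```python
import secrets

# Illustrative sketch: generate a new QR code payload per session.
# The URL structure is a hypothetical placeholder, not the sample's.
def new_qr_payload(device_id: str) -> str:
    """Build the URL encoded in the QR code for a fresh session."""
    session_id = secrets.token_hex(8)  # new random ID each time a user leaves
    return f"https://example.com/remote?device={device_id}&session={session_id}"

# Two consecutive sessions on the same kiosk get different QR codes
print(new_qr_payload("kiosk-1"))
print(new_qr_payload("kiosk-1"))
```

Because the session ID changes between customers, a stale QR code photographed by an earlier visitor can no longer drive the kiosk.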
The QR Code leads the user to this web page, created outside of Intuiface.
Make sure your Intuiface account's Web Triggers credential key has been entered in the "Settings" Excel worksheet. The device ID and session ID are automatically generated.
When the user clicks a button on the webpage loaded on a smartphone, a call is made using the Web Triggers API. On the experience side, the "Message is received" trigger listens for a web trigger and, when one is received, calls the appropriate action, such as a meal selection.
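The experience-side dispatch can be pictured as a small message router. This is a hypothetical sketch of the pattern only: the message names and payload structure are invented for illustration and are not the Web Triggers API's actual schema.

```python
# Illustrative sketch of "Message is received" dispatch.
# Message names and payload shape are hypothetical.
ACTIONS = {}

def on_action(name):
    """Register a handler for a given web-trigger message name."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@on_action("select_meal")
def select_meal(payload):
    return f"Meal selected: {payload}"

def message_received(message: dict):
    """Route an incoming web-trigger message to its bound action."""
    handler = ACTIONS.get(message["name"])
    return handler(message["payload"]) if handler else None

print(message_received({"name": "select_meal", "payload": "Steak"}))
```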
The source code for the webpage is available here:
This sample uses the Data Tracking Interface Asset to log user actions. You can create a dashboard such as the one below to identify:
- Which meal was the most popular
- Which interaction mode was the most popular
- The demographics of the audience
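Turning logged events into those dashboard statistics is simple aggregation. The sketch below uses a hypothetical event structure; the actual fields logged by the Data Tracking Interface Asset depend on how you configure it.

```python
from collections import Counter

# Illustrative sketch of aggregating logged events into dashboard stats.
# The event structure is hypothetical.
events = [
    {"meal": "Steak", "mode": "touch"},
    {"meal": "Salmon", "mode": "voice"},
    {"meal": "Steak", "mode": "web"},
    {"meal": "Caesar Salad", "mode": "touch"},
]

most_popular_meal = Counter(e["meal"] for e in events).most_common(1)[0][0]
most_popular_mode = Counter(e["mode"] for e in events).most_common(1)[0][0]
print(most_popular_meal, most_popular_mode)  # Steak touch
```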