Overview
For all non-Windows Player-supported platforms, custom Interface Assets must be built using TypeScript.
Before proceeding, review Interface Asset general concepts and our introduction to creating your own interface asset. Treat these articles as prerequisites to the content below.
Documentation
The Intuiface CDK can be found on GitHub: https://github.com/intuiface/intuiface-cdk
Prerequisites
You need to install the following components on your development PC:
- NodeJS (to use NPM to manage the project's dependencies)
- Visual Studio Code (to code, build, and test)
- Angular (to create visual components)
- Angular CLI (to use the Angular command line)
- Schematics CLI (to use Angular Schematics):
npm install -g @angular-devkit/schematics-cli
Build an Interface Asset
1. Create a folder on your drive, and open that folder in Visual Studio Code.
2. Open a terminal (Ctrl + `).
3. Install the schematics:
npm install @intuiface/interface-asset
4. Use the schematics to create the Interface Asset, and follow the prompt to give your IA a name:
schematics @intuiface/interface-asset:create
5. Write your code using the template files created (see the sketch below), then build your IA:
npm run build
6. Copy your first build output into a folder in C:\Users\{UserName}\Documents\Intuiface\Interface Assets and restart Composer.
Your Interface Asset is now available in your Composer.
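To give a sense of what the template files contain once filled in, here is a minimal sketch of an Interface Asset class. It follows the decorator pattern documented in the Intuiface CDK repository (@intuiface/core); the asset name, property, trigger, and action shown here are illustrative, and the exact decorator options should be verified against the CDK documentation.

```typescript
import { Action, Asset, IntuifaceElement, Parameter, Property, Trigger } from '@intuiface/core';

@Asset({
    name: 'Counter',          // Name shown in Composer (illustrative)
    category: 'Examples',
    behaviors: []
})
export class Counter extends IntuifaceElement {

    // A property exposed to Composer and bindable in the experience.
    @Property({
        displayName: 'Count',
        description: 'The current counter value.',
        defaultValue: 0,
        type: Number
    })
    public count: number = 0;

    // A trigger: calling this method raises the event in Composer.
    @Trigger({
        name: 'countChanged',
        displayName: 'Count changed',
        description: 'Raised whenever the counter value changes.'
    })
    public countChanged(): void {}

    // An action callable from Composer, with one parameter.
    @Action({
        displayName: 'Increment',
        description: 'Adds the given amount to the counter.'
    })
    public increment(
        @Parameter({
            name: 'amount',
            displayName: 'Amount',
            description: 'Value to add.',
            defaultValue: 1,
            type: Number
        }) amount: number = 1): void {
        this.count += amount;
        this.countChanged();
    }
}
```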
(optional) Use the Intuiface Coding Assistant
For Step 5 above, consider using the Intuiface Coding Assistant in the OpenAI GPT Store. This tech preview is free to use. Describe what you want the Interface Asset to do, and the Coding Assistant GPT will produce the code. Compile the code as shown in the section above and, if needed, return to the Coding Assistant to specify the changes that need to be made.
NOTE
- As with all GPTs in the OpenAI GPT Store, you will need a ChatGPT Plus account to use the Intuiface Coding Assistant.
- As a tech preview, the Intuiface Coding Assistant GPT should be considered beta software. Be patient, and please provide feedback!
- Use of the Coding Assistant still requires some coding knowledge, as you will have to use a compiler and, if necessary, interpret compilation errors using an understanding of the Intuiface TypeScript CDK. As our GPT matures, we will continue to improve code quality. Our goal is to produce a GPT requiring zero development knowledge.
Example prompt
Let's say we want to create an Interface Asset for the text-to-speech functionality accessible through the SpeechSynthesis interface of a web browser's Web Speech API.
Here is a simple prompt that will USUALLY produce all the code you need for the Interface Asset:
Create a TypeScript Interface Asset for Intuiface that provides Text to Speech functionality.
The asset should follow the Intuiface CDK guidelines and should use the Web Speech API for speech synthesis and ensure compatibility with browser environments: https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis
Implement a method to load and update the list of available voices. The asset should handle voice selection and speed adjustment dynamically based on user input.
Here is a more detailed prompt that also USUALLY produces all the Interface Asset code:
Create a TypeScript Interface Asset for Intuiface that provides Text to Speech functionality. The asset should follow the Intuiface CDK guidelines and include the following Properties, Triggers, and Actions:
Properties:
voice: A string property to select the voice for speech synthesis.
speed: A numeric property to adjust the speed of speech.
availableVoices: An array property to list available voices for speech synthesis.
Triggers:
SynthesisStarted: Triggered when speech synthesis starts.
SynthesisCanceled: Triggered when speech synthesis is canceled.
SynthesisCompleted: Triggered when speech synthesis ends.
AvailableVoicesUpdated: Triggered when the list of available voices is updated.
Actions:
startSynthesis: Accepts a string parameter text and starts speech synthesis with the given text.
cancelSynthesis: Cancels any ongoing speech synthesis.
refreshAvailableVoices: Refreshes and updates the list of available voices.
The Interface Asset should use the Web Speech API for speech synthesis and ensure compatibility with browser environments. Implement a method to load and update the list of available voices. The asset should handle voice selection and speed adjustment dynamically based on user input.
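For reference, the code such a prompt typically produces looks something like the following condensed sketch. It combines the CDK decorator pattern shown earlier with the standard Web Speech API (window.speechSynthesis, SpeechSynthesisUtterance). The member names simply mirror the prompt; the actual generated code may differ in structure and detail.

```typescript
import { Action, Asset, IntuifaceElement, Parameter, Property, Trigger } from '@intuiface/core';

@Asset({
    name: 'TextToSpeech',
    category: 'Audio',
    behaviors: []
})
export class TextToSpeech extends IntuifaceElement {

    @Property({ displayName: 'voice', type: String, defaultValue: '' })
    public voice: string = '';

    @Property({ displayName: 'speed', type: Number, defaultValue: 1 })
    public speed: number = 1;

    @Property({ displayName: 'availableVoices', type: Array, defaultValue: [] })
    public availableVoices: string[] = [];

    @Trigger({ name: 'SynthesisStarted' })
    public synthesisStarted(): void {}

    @Trigger({ name: 'SynthesisCanceled' })
    public synthesisCanceled(): void {}

    @Trigger({ name: 'SynthesisCompleted' })
    public synthesisCompleted(): void {}

    @Trigger({ name: 'AvailableVoicesUpdated' })
    public availableVoicesUpdated(): void {}

    @Action({ displayName: 'startSynthesis' })
    public startSynthesis(
        @Parameter({ name: 'text', type: String, defaultValue: '' }) text: string): void {
        const utterance = new SpeechSynthesisUtterance(text);
        utterance.rate = this.speed;
        // Match the selected voice name against the browser's voice list.
        const match = window.speechSynthesis.getVoices().find(v => v.name === this.voice);
        if (match) {
            utterance.voice = match;
        }
        utterance.onstart = () => this.synthesisStarted();
        utterance.onend = () => this.synthesisCompleted();
        window.speechSynthesis.speak(utterance);
    }

    @Action({ displayName: 'cancelSynthesis' })
    public cancelSynthesis(): void {
        window.speechSynthesis.cancel();
        this.synthesisCanceled();
    }

    @Action({ displayName: 'refreshAvailableVoices' })
    public refreshAvailableVoices(): void {
        this.availableVoices = window.speechSynthesis.getVoices().map(v => v.name);
        this.availableVoicesUpdated();
    }
}
```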
Why the emphasis on USUALLY? GPTs, not just the Coding Assistant, do not produce identical output from one run to the next and sometimes make mistakes. You may get variants that achieve the same thing in different ways, or output missing a critical component. Additional prompt detail helps constrain the variation and reduce mistakes, to some extent. Our goal is to maximize accuracy and consistency, so expect the Coding Assistant GPT to get better over time. Meanwhile, don't be surprised if you have to do some debugging.
Test and debug your Interface Asset
Platform Premier and Enterprise accounts can use Chrome DevTools in Composer's Play Mode. See that article for details, but the process can be summarized as follows:
- Create an experience in Composer that uses your Interface Asset.
- In the Project menu of Composer, select "Player on all other platforms (Web, Android, etc.)", then hit play to launch your experience.
- Once in Play Mode, open the dev tools with Ctrl + Shift + i
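A simple first debugging step, once DevTools is open, is temporary console logging inside your Interface Asset: anything written with console.log appears in the DevTools Console while the experience runs. For example, a hypothetical debug line added to the text-to-speech action sketched earlier:

```typescript
@Action({ displayName: 'startSynthesis' })
public startSynthesis(
    @Parameter({ name: 'text', type: String, defaultValue: '' }) text: string): void {
    // Temporary debug output, visible in the DevTools Console during Play Mode.
    console.log(`[TextToSpeech] startSynthesis("${text}") at speed ${this.speed}`);
    // ... synthesis logic as before ...
}
```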
Video Walkthrough
Watch as we create a TypeScript-based custom Interface Asset: