In sequence studies, players use a limited set of sound materials, each of which is linked to a single cue. They listen to an audio track in which an artificial voice reads a sequence of cues, followed by a pause. In the pause, the players replicate the sequence using their sound materials as best they can. Each sequence may be different in some way, or the same as a previous sequence.

Each sequence lists the number of sounds needed and the cue words used. Players should prepare the correct number of sounds in advance, but not label them in any way. Sounds may be made by any source, including pitched and non-pitched instruments, objects and materials, or electronic sounds and samples. Each sound should be relatively short, such that it is recognisable as a discrete event. If more than one player is responding to a sequence, each player may use a different set of sounds, or sounds that are the same in some way (e.g. the same source, and/or the same pitches).

The only requirement is that each player sticks to their own selections as consistently as possible. The score for each sequence is a Speech Synthesis Markup Language (SSML) file, which can be played back in any artificial voice application that supports SSML, such as Google Cloud Text-to-Speech. Realisations may use any available artificial voice as preferred. The speed of the voice can also be adjusted, provided that each pause still leaves enough time to respond to the cues.
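As an illustration only, a score file might take the following shape; the cue words, sequence length, and pause duration here are invented for the example and are not taken from any actual sequence study. The SSML `<break>` element supplies the pause in which the players respond:

```xml
<speak>
  <!-- the artificial voice reads the cue sequence -->
  <p>Three sounds. Stone. Bell. Breath.</p>
  <!-- pause for the players' response; duration is illustrative -->
  <break time="10s"/>
  <!-- a later sequence may repeat or vary earlier material -->
  <p>Three sounds. Bell. Stone. Bell.</p>
  <break time="10s"/>
</speak>
```

Lengthening the `time` attribute of each `<break>`, or slowing the voice, gives players more room to respond without altering the cue material itself.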

In performance, the audio playback of the artificial voice may be rendered live by the text-to-speech application, or prerecorded. It can be played back over any speaker setup, such that the audio playback and the sounds the players make are all audible and spatially distinguishable.