I think the game industry got the ability to play back entire recordings of musical pieces too early.
In the mid-'90s, music software that triggered individual notes from within the game was beginning to be phased out. Unfortunately, this happened just as things were getting really interesting, with music systems like iMUSE enabling the incredibly seamless transitions in games like Monkey Island 2 and Day of the Tentacle.
Today, while the audio quality of game music is amazing, it rarely adapts to gameplay beyond switching or crossfading between a set of layers. There are of course exceptions, such as FRACT OSC, Dyad, Panoramical and many recent Mario games.
I've long experimented with making software for interactive music, but it took until Uurnog for me to end up with something solid. Ondskan, as I call it, is integrated with the game (if you have the game, you can access it by pressing F12 in the main menu) and is fully algorithmic. This means I don't compose by actually placing notes. Instead, various algorithms are configured through an inspector-like interface, and any field in this inspector can be set up to take input from what happens in the game.
The language in which we express music can limit the way we think about it. I wanted to approach music as a possibility space, much like the way I think about game and level design. The language in which music is expressed is, however, generally based on the idea that everything is fixed: notes, chords and keys have absolute names. Since I knew that "when", "what" and "how many" were going to be dynamic and often random, I had to throw away most of the words that usually describe music and look at everything from another angle.
My substitute for a note name became an "index", which is a floating point value. When an algorithm has picked an index to play, that index is evaluated through a separate "scale" algorithm which converts it into a frequency. These "scale" objects often represent chords or entire musical scales, but sometimes go into experimental territory, like picking frequencies from the harmonic series.
Because scales are defined separately from the indexes, the music software can swap out the key or chord of the currently playing music in realtime. Things like transposing the music on the fly, or even adding a new chord progression, become trivial, since nothing other than the scale objects actually needs to be manipulated.
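As a rough sketch of the idea (all class and variable names here are my own invention, not Ondskan's), the index-plus-scale split might look like this: a fixed sequence of float indexes is evaluated through a swappable scale object, so replacing the scale retunes the music without touching the sequence.

```python
class Scale:
    """Maps a floating-point "index" onto a set of scale degrees
    (semitone offsets from a root), then converts it to a frequency."""
    def __init__(self, root_hz, degrees):
        self.root_hz = root_hz
        self.degrees = degrees

    def frequency(self, index):
        # The whole part of the index picks a degree, wrapping
        # into higher octaves past the end of the degree list.
        octave, degree = divmod(int(index), len(self.degrees))
        semitones = self.degrees[degree] + 12 * octave
        return self.root_hz * 2 ** (semitones / 12)

class Player:
    """Holds the currently active scale. The index sequence never
    changes; only the scale it is evaluated through does."""
    def __init__(self, scale):
        self.scale = scale

    def render(self, indexes):
        return [round(self.scale.frequency(i)) for i in indexes]

melody = [0, 1, 2, 1]                      # abstract indexes, stays fixed
player = Player(Scale(261.63, [0, 4, 7]))  # C major chord (C, E, G)
print(player.render(melody))               # [262, 330, 392, 330]
player.scale = Scale(220.0, [0, 3, 7])     # swap to A minor: instant retune
print(player.render(melody))               # [220, 262, 330, 262]
```

The point of the split is in the last two lines: the melody data is untouched, yet the audible result changes key and quality the moment the scale object is replaced.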
Timo Bingmann's video, 15 Sorting Algorithms in 6 Minutes, was the initial spark of inspiration for how my tool could be algorithmic instead of relying on composed sequences. One of the algorithms I use is a bubble sort where each comparison's value is used to select a chord. This is used in the game's trophy puzzle room.
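As a guess at how such a mapping could work (the chord palette and function names here are mine, not the game's), a bubble sort can emit one chord per comparison, with the value being compared selecting the chord:

```python
CHORDS = ["C", "Dm", "Em", "F", "G", "Am"]  # hypothetical chord palette

def bubble_sort_chords(values):
    """Sorts a copy of `values` while collecting one chord per
    comparison, so the sort's structure becomes a progression."""
    arr = list(values)
    progression = []
    for end in range(len(arr) - 1, 0, -1):
        for i in range(end):
            # The compared value indexes into the chord palette.
            progression.append(CHORDS[arr[i] % len(CHORDS)])
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr, progression

sorted_vals, chords = bubble_sort_chords([3, 1, 4, 2])
print(sorted_vals)  # [1, 2, 3, 4]
print(chords)       # one chord per comparison
```

Because the input order determines both the swaps and the chords, the same algorithm produces a different but structurally related progression for every shuffle of the input.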
Another algorithm is the "auto-composer" module, which contains arrays of indexes, lengths and offsets. Every time the player opens a door in the game, the algorithm mixes and matches these values randomly into a sequence. This sequence plays on loop until the player walks through another door and reshuffles the sequence again. This is used a lot in the village area of the game to add slight variation.
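A minimal sketch of how such a module might work, with made-up pools and names (the actual arrays and their contents are the composer's):

```python
import random

INDEXES = [0, 1, 2, 4]
LENGTHS = [0.25, 0.5, 1.0]  # note lengths in beats
OFFSETS = [0, 3, 7]         # shifts applied to the picked index

def on_door_opened(rng, steps=8):
    """Reshuffles the pools into a new looping sequence of
    (index, length) pairs; called whenever a door is opened."""
    return [(rng.choice(INDEXES) + rng.choice(OFFSETS),
             rng.choice(LENGTHS))
            for _ in range(steps)]

rng = random.Random(42)     # seeded here only for reproducibility
loop = on_door_opened(rng)  # plays on repeat until the next door
loop = on_door_opened(rng)  # new door: a fresh recombination
```

Since every sequence is drawn from the same small pools, each reshuffle sounds like a variation on the same material rather than a new piece, which fits the "slight variation" role described above.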
Warning: Rapidly moving patterns!
Uurnog only features a prototype version of the music software. Though very stable, it's currently too unwieldy to, for example, be released on the asset store. I learned so many things I could do better, and I also need to pick up some C++ so that I can start using Unity's native audio SDK and build my own synthesizers into the software.
Developing the tool was way too much fun for me to stop now, though. I may even take a break from making games in the near future to see if I can make a user-friendly version, or compose algorithmic music for somebody else's Unity game.