Sunday, December 29, 2013

Virtual choir: auto harmonize your voice in real time with supercollider



This blog entry describes an experiment I did with the real-time audio synthesis and algorithmic composition programming language supercollider. Before you read on, please realize that this stuff is highly addictive.

So you've always wanted to sing in a choir, but didn't have the opportunity or courage to join one near you? Here's an easy way to turn your singing into a choir piece in real time using supercollider.

The system is based on the following example from the supercollider documentation:

s.boot;
 
(
SynthDef("pitchFollow1",{
    var in, amp, freq, hasFreq, out;
    in = Mix.new(SoundIn.ar([0,1]));
    amp = Amplitude.kr(in, 0.05, 0.05);
    # freq, hasFreq = Pitch.kr(in, ampThreshold: 0.02, median: 7);
    //freq = Lag.kr(freq.cpsmidi.round(1).midicps, 0.05);
    out = Mix.new(VarSaw.ar(freq * [0.5,1,2], 0, LFNoise1.kr(0.3,0.1,0.1), amp));
    6.do({
        out = AllpassN.ar(out, 0.040, [0.040.rand,0.040.rand], 2)
    });
    Out.ar(0,out)
}).play(s);
)


This example does something very interesting: it reads sound from your microphone using the SoundIn UGen, tracks its frequency using the Pitch UGen, and then synthesizes a new sawtooth wave at different multiples of the detected frequency. The new sawtooth wave is sent to the speakers, so you hear it follow your voice's pitch.

Upon hearing this, I immediately thought of harmonizing a singing voice in real time, with the possibility of harmonizing every note differently, so as to arrive at a (hopefully somewhat interesting) choir piece.
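The arithmetic behind per-note harmonization is luckily very simple: shifting a voice by n semitones amounts to multiplying its frequency by 2^(n/12), which is exactly what supercollider's .midiratio method computes. A minimal Python illustration (the actual patch below stays in supercollider; the note names are just examples):

# semitone offset -> frequency ratio, the equivalent of supercollider's .midiratio
def midiratio(semitones):
    return 2.0 ** (semitones / 12.0)

# mapping the reference note c4 (midi 60) to g3 (midi 55), say, means the
# pitch shifter must scale the detected frequency by
print(midiratio(55 - 60))  # ~0.7492, a perfect fourth down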

So I set out to improve the example (combine it with pitch-shifting instead of synthesizing a new sawtooth wave, and add a bit of a GUI to allow reconfiguring the harmonization on the fly), and came up with the following.

Please note that I'm very new to the supercollider programming language, so I'm glad for any suggestions you may have with respect to code style! The following code can also be cloned/forked from the github repository.

s.boot;

(
var table;
var gap = 40;
var mapped, mapped2, diffbuf, diffbuf2;
var miditoname;
var nametomidi;
var sound;
var difference, difference2;
var tf, tf2;

// define a function to convert a midi note number to a midi note name
miditoname = ({ arg note = 60, style = \American ;
        var offset = 0 ;
        var midi, notes;
        case { style == \French } { offset = -1}
            { style == \German } { offset = -3} ;
        midi = (note + 0.5).asInteger;
        notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

        (notes[midi%12] ++ (midi.div(12)-1+offset))
});

// define a function to convert a midi note name to a midi note number
nametomidi = ({ arg name = "C4", style = \American ;
        var offset = 0 ; // French usage: +1 ; German usage: +3
        var twelves, ones, octaveIndex, midis;

        case { style == \French } { offset = 1}
            { style == \German } { offset = 3} ;

        midis = Dictionary[($c->0),($d->2),($e->4),($f->5),($g->7),($a->9),($b->11)];
        ones = midis.at(name[0].toLower);

        if( (name[1].isDecDigit), {
            octaveIndex = 1;
        },{
            octaveIndex = 2;
            if( (name[1] == $#) || (name[1].toLower == $s) || (name[1] == $+), {
                ones = ones + 1;
            },{
                if( (name[1] == $b) || (name[1].toLower == $f) || (name[1] == $-), {
                    ones = ones - 1;
                });
            });
        });
        twelves = (name.copyRange(octaveIndex, name.size).asInteger) * 12;

        (twelves + 12 + ones + (offset*12))
});

// define a table of reference notes [c c# d ... b]
table = Array.fill(12, {arg i; i + 60}); // [60,61,...,71]
// define a table of mapped notes (Default values)
mapped = [nametomidi.value("g3"),
       nametomidi.value("a3"),
       nametomidi.value("b3"),
       nametomidi.value("bb3"),
       nametomidi.value("c4"),
       nametomidi.value("c4"),
       nametomidi.value("a3"),
       nametomidi.value("d4"),
       nametomidi.value("e4"),
       nametomidi.value("f4"),
       nametomidi.value("f4"),
       nametomidi.value("g4"),
];
mapped2 = [nametomidi.value("e3"),
       nametomidi.value("g3"),
       nametomidi.value("g3"),
       nametomidi.value("g#3"),
       nametomidi.value("a3"),
       nametomidi.value("a4"),
       nametomidi.value("d4"),
       nametomidi.value("b4"),
       nametomidi.value("c4"),
       nametomidi.value("d4"),
       nametomidi.value("d4"),
       nametomidi.value("d4"),
];

// define a table to store the difference between reference and mapped note
difference = Array.fill(table.size, {0});
// define a buffer on the server for consultation from the SynthDef
diffbuf= Buffer.loadCollection(s,table,action:{|msg| msg.postln;});
difference2= Array.fill(table.size, {0});
diffbuf2=Buffer.loadCollection(s,table,action:{|msg| msg.postln;});
tf = List.new(table.size);
tf2 = List.new(table.size);

// define a window called "Setup mapping"
w = Window.new("Setup mapping", Rect(200,200,15*gap,190));
// add the reference notes as labels (fixed), and the mapped notes as text fields (editable)
// whenever a text field is updated, update the list of mapped notes (mapped)
table.do({arg item, i;
    var t, u;
    StaticText(w, Rect(10+gap*i, 10, gap, 30)).string_(miditoname.value(item));

    t = TextField(w, Rect(10+gap*i, 50, gap, 30)).string_(miditoname.value(mapped[i]));
    t.action = {
        mapped[i] = nametomidi.value(t.value);
        ("upper map note number " ++ i ++ " from " ++ table[i] ++ " to " ++ mapped[i]).postln;
        diffbuf.set(i, (table[i] - mapped[i]).midiratio.reciprocal);
    };
    tf.add(t);

    u = TextField(w, Rect(10+gap*i, 90, gap, 30)).string_(miditoname.value(mapped2[i]));
    u.action = {
        mapped2[i] = nametomidi.value(u.value);
        ("lower map note number " ++ i ++ " from " ++ table[i] ++ " to " ++ mapped2[i]).postln;
        diffbuf2.set(i, (table[i] - mapped2[i]).midiratio.reciprocal);
    };
    tf2.add(u); // register outside u.action, so the start button can trigger all actions
});
// add a button to play a reference note
c = Button(w, Rect(10,130,100,30)).states_([["Play C4",Color.black,Color.gray]]);
c.action_({arg butt;
    // play a short reference tone (the button has a single state)
    {
        var env = Env.perc;
        SinOsc.ar(nametomidi.value("c4").midicps) * EnvGen.kr(env, doneAction: 2) ! 2;
    }.play;
});

// also add a start/stop button
// when the button is set to start, instantiate a new Synth, otherwise free the Synth
b= Button(w, Rect(110,130,100,30)).states_([
    ["Start",Color.black, Color.red],
    ["Stop",Color.black, Color.green]]);
b.action_({arg butt;
    if (butt.value == 1,
        {
            tf.do({arg item; item.doAction});
            tf2.do({arg item; item.doAction});
            table.do({arg item, i;
                difference2[i] = (table[i] - mapped2[i]).midiratio.reciprocal;
                difference[i] = (table[i] - mapped[i]).midiratio.reciprocal;
            });
            diffbuf.setn(0,difference);
            diffbuf2.setn(0,difference2);
            sound = Synth.new("pitchFollow1");
        },
        {   sound.free;}
    )
});

// define the Synth itself:
// - first it determines the pitch of what it hears in the microphone
// - then it harmonizes the pitch with the notes as defined in the ui
SynthDef.new("pitchFollow1",{
    var in, amp, freq, hasFreq, out;
    var t, midinum;
    var harmony, harmony2, partials;
    in = Mix.new(SoundIn.ar([0,1]));
    amp = Amplitude.kr(in, 0.05, 1);
    # freq, hasFreq = Pitch.kr(in);
    midinum = freq.cpsmidi.round(1);
    //midinum.postln; // would only print the UGen once; use Poll.kr to monitor values
    freq = Lag.kr(midinum.midicps, 0.05);
    //freq = midinum.midicps;
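    // look up the pitch-shift ratios for the detected note: each buffer holds one
    // ratio per reference note, and WrapIndex wraps the midi note number into the
    // 12-entry buffer (i.e. indexes it mod 12, so 60, 72, ... all map to entry 0)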
    harmony2= WrapIndex.kr(diffbuf2.bufnum, midinum);
    harmony = WrapIndex.kr(diffbuf.bufnum, midinum);
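    // the voices to mix: the untransposed voice at unison and an octave below
    // (1 and 0.5), plus each of the two harmony voices, also doubled an octave down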
    partials = [
           0.5,
           1,
           0.5*harmony,
           1*harmony,
         0.5*harmony2,
           1*harmony2,
    ];
    out = Mix.new(PitchShift.ar(in, 0.2, partials, 0, 0.004));

    7.do({
        out = AllpassN.ar(out, 0.040, [0.040.rand,0.040.rand], 2)
    });

    Out.ar(0,out/partials.size)

}).add;

// make the ui visible
w.front;

)



Here's a sound example. It's not perfect, obviously, but to hear the computer create this in real time while you sing into the microphone is quite cool...


And a complete piece:

Sunday, October 20, 2013

Music to die for...

When starting this blog, I announced I would sometimes share things I've come across that really need to be heard to be believed.

Here is one such thing...

Árstíðir

I'd happily have missed my train for this...

Thursday, October 3, 2013

Streams and paths

What? 

Piano music, based on a poem by Tricia Adams.

The style uses some harmonic ideas from jazz, and features quickly changing time signatures. The difficulties in performing it come from the precise pedaling required, the ever-changing time signatures, and some awkward jumps in the left hand (in combination with the speed).

Where?

The usual places :)

Saturday, August 24, 2013

Algorithmic composition: generating tonal canons with Python and music21

1x spiced up chord progression (intermediate step in canon generation)

 

What?

According to wikipedia:
Algorithmic composition is the technique of using algorithms to create music.

Some algorithms that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have been used as source materials.
And:
In music, a canon is a contrapuntal compositional technique that employs a melody with one or more imitations of the melody played after a given duration (e.g., quarter rest, one measure, etc.).

How?

Last year I wrote a series of articles on an easy method for writing some types of canons. In at least one of those articles I claimed that the methods described there should be easy to use as a basis for automation. Here I automate the method for writing a simple canon, as explained in the first of those articles.

Code?

The code discussed below is available under the GPLv3 license on github: https://github.com/shimpe/canon-generator. As with the gcc compiler, the music you generate with this program is yours: the GPLv3 license only applies to the code itself.

It depends on free software only: python 2.7, music21 and MuseScore.

Explanation? 

This program generates canons from a given chord progression in a fixed key (no modulations for now, sorry!).

It does NOT generate atonal or experimental music; that is, provided you're willing to accept the limitations of the program. It can occasionally make "grave" errors against common practice rules (e.g. parallel fifths/octaves); see the further explanation below.

It closely follows the method explained in the Tutorial on my technique for writing a Canon article mentioned above. If you want to understand in detail how it all works, please read that article first, and then come back here. If you just want to experiment with the different program settings, continue :)


One thing is worth explaining in more detail: the article mentions, in one of the first steps of the recipe, that the composer can start from a chord progression and "spice it up" to get a chorale. So: how do we get a computer to spice up a chord progression without building in thousands of composition rules?

I introduced some note transformations that:
  • introduce small steps between notes so as to generate something that could be interpreted as a melody
  • do not fundamentally alter the harmonic function in the musical context
To accomplish this, I replace a note with a sequence of notes without altering the total duration of the fragment, e.g.
  • original note (half note) -> original note (quarter note), neighbouring note (8th note), original note (8th note)
Other transformations look at the current note and the next note, and interpolate a note in between (again without changing the total duration):
  • original note (half note), next note -> original note (quarter note), note between original note and next note (quarter note), next note
A property of generating melodies by spicing up lists of notes in this way is that, after spicing up a list of notes, you can spice the spiced-up list again to get an even spicier list (a more complex melody, both in pitch and in rhythm); a minimal sketch of these transformations follows below.
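To make the idea concrete, here is a minimal Python sketch of the two transformations, assuming notes are plain (midi pitch, duration) pairs rather than the music21 objects the real canon-gen.py manipulates (the names and interval choices here are illustrative only):

from __future__ import division  # exact durations under python 2.7
import random

def neighbour_spice(note):
    # half note -> original quarter + neighbouring 8th + original 8th;
    # the real program picks neighbours from the scale via music21
    pitch, dur = note
    return [(pitch, dur / 2), (pitch + random.choice([-2, -1, 1, 2]), dur / 4), (pitch, dur / 4)]

def passing_spice(note, next_note):
    # interpolate a note between the current note and the next one
    pitch, dur = note
    return [(pitch, dur / 2), ((pitch + next_note[0]) // 2, dur / 2)]

def spice(melody):
    # spice a melody once; every transformation preserves the total
    # duration, so a spiced-up melody can be spiced up again
    result = []
    for current, following in zip(melody, melody[1:] + [melody[-1]]):
        if random.random() < 0.5:
            result.extend(neighbour_spice(current))
        else:
            result.extend(passing_spice(current, following))
    return result

melody = [(60, 2), (65, 2), (67, 2), (60, 2)]  # C F G C roots, in half notes
print(spice(spice(melody)))  # spice the already spiced-up list once more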

Finally, a warning for sensitive ears: writing a chorale that obeys all the rules of the "common practice" takes a composer years of study and lots of practice. Given the extreme simplicity of the program, the computer doesn't have any of this knowledge, and it will happily generate errors against the common practice rules (e.g. parallel fifths and octaves). Not always, but sometimes, as dictated by randomness. Note that this is an area in which the program could be improved, by checking for errors while spicing up and skipping proposed spicings that introduce errors against the common practice rules.

Yet, despite the extreme simplicity of the method, the results can be surprisingly complex and in some cases sound interesting.

How can I use it?

In its current form, the program is not really easy to install and use, at least not if you have little computer experience:
  • You need to install the free python programming language from http://www.python.org/download/releases. I recommend using version 2.7. Version 3 and later of python won't work! 
  • You also need to install the free music21 toolkit for computer-aided musicology. Follow the instructions on their website. Music21 provides vast amounts of music knowledge that would take a long time to write ourselves; I'm using only a fraction of its possibilities in the canon generator.
  • Then you need something to visualize and audition the MusicXML that is generated by the program. For our purposes, the free MuseScore program works perfectly.
  • Finally you need to get the free canon-gen.py program from the github repository https://github.com/shimpe/canon-generator.
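Before running the generator, you can verify that python, music21 and MuseScore can talk to each other with a throwaway test like this (the file name is arbitrary):

import music21
s = music21.stream.Stream()
s.append(music21.note.Note("C4"))
s.write('musicxml', fp='test.xml')  # open the resulting file in MuseScore

Alternatively, s.show() opens the score directly, if you configured MuseScore as music21's MusicXML viewer during installation.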
The main function defined near the bottom of the canon-gen.py file contains some parameters you can edit to experiment with the generator:
  • chords = "C F Am Dm G C"
    #You can insert a new chord progression here. 
  • scale = music21.scale.MajorScale("C")
    #You can define a new scale in which the notes of the chords should be interpreted here 
  • voices = 5
    #Define the number of voices in your canon here 
  • quarterLength = 2
    #Define the length of the notes used to realize the chord progression
    #(don't choose them too short, since the automatic spicing-up will make them shorter) 
  • spice_depth = 1
    # Define how many times a stream (recursively) should be spiced up
    # e.g. setting 2 will first spice up the chords, then again spice up the already spiced chords.
    # scores very quickly become rhythmically very complex for settings > 2
  • stacking = 1
    # the code can generate multiple versions of the spiced up chord progression, and use those
    # versions to create extra voices
    # e.g. setting stacking = 2 will turn a 3-voice canon into a 3*2 = 6-voice canon 
  • voice_transpositions = { VOICE1 : +12, VOICE2 : 0, VOICE3 : -12, VOICE4: -24, VOICE5: 0 }
    # allow extra octave jumps between voices

What does it sound like?

This is a simple example generated with the program with settings
  • chords = "C F Am Dm G C"
  • scale = music21.scale.MajorScale("C")
  • voices = 5
  • quarterLength = 2 
  • spice_depth = 1
  • stacking = 1 
  • voice_transpositions = { VOICE1 : 0, VOICE2 : 0, VOICE3 : -12, VOICE4: -24, VOICE5: 0 }  

Ideas for future improvements

I see many possible improvements, most of which are low-hanging fruits. Feel free to jump in and improve the code :D
  • fix a known bug related to octaviations in keys other than C
  • support modulations, i.e. keep musical key per measure/beat instead of over the complete chord progression
  • extend the code to cover the things explained in the later articles: crab and table canons
  • see if the method/code can be extended to generate canons at the third, fifth, ...
  • smarter spicing up of chord progressions to avoid parallel fifths/octaves (e.g. rejecting a proposed spice if it introduces an error in the overall stream); or use Dmitry Tymoczko's voice leading spaces to ensure better voice leading by construction.
  • protect the end chord from getting spiced up
  • implement more note transformations, e.g. appoggiaturas
  • experiment with more rhythms
  • how can we better spice up the chord progressions without messing up too much of the original harmonies?
  • ... 

Wednesday, July 3, 2013

Do not stand at my grave and weep

Do Not Stand at My Grave and Weep is a rights-free poem written in 1932 by Mary Elizabeth Frye, who allegedly found herself composing the piece of verse on a brown paper shopping bag.

When I found the poem (as with all great findings, by accident :) ) it resonated with me, and I quickly found myself humming a melody to it. It's my first composition for a piano, clarinet, cello trio.

The piece is conceived as an elaborate harmonization of a single note ("e"). Throughout the first verse you can hum the note "e" to the melody. After a modulation the text is repeated and the pedal note becomes the note "f".

Enjoy!
Do not stand at my grave and weep,
I am not there; I do not sleep.
I am a thousand winds that blow,
I am the diamond glints on snow,
I am the sun on ripened grain,
I am the gentle autumn rain.
When you awaken in the morning’s hush
I am the swift uplifting rush
Of quiet birds in circling flight.
I am the soft starlight at night.
Do not stand at my grave and cry,
I am not there; I did not die.


Wednesday, June 19, 2013

Seaside canon, for Douglas Hofstadter

What?

Here's a musical crab canon inspired by a blog post from Julia Galef, who wrote quite an impressive crab canon poem. My musical interpretation of her poem is a crab canon too. It differs from my previous crab canon in the following aspects:

  1.  This canon was conceived after serendipitously finding a poem written by Julia Galef, who kindly granted me permission to use it.
  2. Compared to the previous crab canon, this one uses more and longer pedal notes. What can I say - I just love pedal notes! 
  3. In contrast to my previous crab canon, this one modulates to different keys. Have you discovered my free modulation book yet?
  4. As a nice variation on a typical crab canon (where you can reverse the score), here you can also reverse the raw audio samples. That's right: when you reverse the raw audio samples and listen to the result, you get the same piece again. Mind you, I'm not just talking about the notes, but also about the timbres! Nothing sounds more "backward" in the reversed version than in the forward version of the piece. You can try it out! Download the raw audio, import it into your favourite audio editor, listen to it, and then relisten to it in reverse.
  5. The spectrogram (slowly appearing in black on white) looks like the sea, and the score laid on top of it in colored rectangles looks like a frontal view of a crab. Very appropriate for a crab canon about the seaside :) Ok, that's a joke (or not? Try squinting your eyes... :D)
With respect to the images that pass by: in black on white you can see a spectrogram of the piece. In colored rectangles I've overlaid half of the notes that are being played in the canon (four voices). The other half of the notes you hear come from those same voices simultaneously playing backwards.

I've developed a systematic method for constructing musical canons, crab & palindrome canons, and table canons using Lilypond, fully documented in a series of articles that you can freely download and use. I advise reading them in the order listed below:

Tutorial on my technique for writing a Canon
How to write a crab Canon
6-part Invention
How to write a table Canon

I'm aware that the method I devised in its current form still has limitations and does not necessarily yield music that can compete with the Big Composers, but who knows... I still have some ideas for further experimentation. And besides, it's just too much fun :)

Wednesday, April 17, 2013

25 shades of random

A first exploration of algorithmic composition.

I've written a simple scheduler in the programming language python, created some midi event generators, and combined the two to drive the software synthesizer Yoshimi and the sampler engine LinuxSampler over the jack-midi protocol. I've made the complete code available on github.

The midi events are generated based on a concept of randomness within constraints. The things you hear are randomly generated, but not in an "anything goes" manner.

Constraints are imposed to limit the disorder of the randomness. Just as harmony and counterpoint limit randomness in traditional composition, the constraints here limit randomness in this experimental composition. One could say that different sets of constraints generate families of alternative music theories.
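To give a flavour of what "randomness within constraints" means in practice, here is a toy Python sketch (not my actual generator code, which sends midi events to Yoshimi and LinuxSampler): a melody that is random, but constrained to a scale and to small melodic steps:

import random

SCALE = [0, 2, 4, 5, 7, 9, 11]  # constraint 1: only pitches from a major scale

def constrained_walk(start=60, length=16, max_step=2):
    # random notes, constrained to the scale and to steps of at most
    # max_step scale degrees at a time (constraint 2)
    degrees = [p + 12 * octave for octave in range(3, 7) for p in SCALE]
    index = degrees.index(start)
    melody = []
    for _ in range(length):
        index += random.randint(-max_step, max_step)
        index = max(0, min(len(degrees) - 1, index))
        melody.append(degrees[index])
    return melody

print(constrained_walk())  # e.g. [62, 60, 60, 64, ...]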

"Randomness within constraints" is a deep idea whose time has come. Nowadays it is extremely popular both in science (artificial intelligence) and art (abstract art) and indeed one of the basic mechanisms underlying the origin and evolution of life itself. I suspect that "randomness within constraints" is the barren wasteland where creativity and innovation hides. In this piece, I try to explore a number of different constraints. In some cases the constraints slowly evolve, in other cases they abruptly change.

Ton De Leeuw, in a speech, once distinguished between two kinds of music: "music that becomes" versus "music that is". "Music that becomes" is music that starts from a point of departure and gradually evolves. "Music that is" is music that doesn't show any progression or evolution. Because the constraints I used are quite static (with some exceptions, they don't gradually evolve as time progresses), I think this composition is closer to "music that is". Nevertheless, I hope that by combining different timbres and different constraints, I have managed to keep the composition reasonably interesting to listen to.

Note: I do not consider "traditional music theory" (or rather: the different traditional music theory families) inferior or unnecessary (quite the contrary!), but in this piece I don't mind looking beyond its borders.

For this composition I programmed my own algorithmic composition system using Python, Rtmidi, Yoshimi and LinuxSampler with Sonatina Sound Fonts, but for a next algorithmic composition (if any ;) ) I would probably try to use the excellent supercollider environment instead.

The visualization was made with Sonic Visualiser.

Enjoy!

Sunday, March 31, 2013

Meditation for piano and cello

Meditation

A bootleg recording of my first attempt at writing something for a cello... enjoy! The score is available upon request. I've been told it's not that difficult :)

Wednesday, February 20, 2013

Digital sound design

Digital sound design

I took a nice little course on coursera: a (very basic) introduction to digital sound design. Of course, as it goes with such courses, one must create a small composition demonstrating the newly acquired skills (or the lack thereof). I was in the mood for something experimental, so be prepared! Art or Nonsense? Feel free to leave your opinion. Incidentally, the piece is dedicated to Patrick De Witte, a journalist and very witty humorist who died unexpectedly yesterday due to "heart failure" (he almost certainly would have seen some humor in that announcement: after all, doesn't everyone die of heart failure?). I didn't know the man personally, but I think he'd have gotten a good laugh out of it.