
Making Music with Computers: Two Unconventional Approaches

I love music. I really, really love music. Hopefully you’ll forgive the departure from the usual topics.

For a few years I wasn’t making as much music as I would have liked, but in the last year I’ve resumed. I haven’t owned a guitar for more than a decade. I have, in the last few years, acquired a harmonica, but I can’t play the harmonica. I do own several computers, though!

Unfortunately, most software written for producing music on computers is a huge, huge pain. Maybe my mind isn’t well suited to the learning curve or the workflow of most music producers, but I got frustrated with the tools pretty frequently. They are often fairly heavyweight, and either tedious (I am a programmer; I resent doing anything that I could direct a machine to do) or overly complex, and all of that horrible button-clicking and note-dragging grated on me when what I wanted was a textual interface. That’s before mentioning that most of these tools (Ableton, Reason, etc.) are written for Windows or OSX, neither of which I run.

As my friend mycroftiv (a talented musician, the creator of ANTS and the public grid, an Interactive Fiction writer, etc.) put it, music is just math: frequencies over time. There are numerous ways to do math, and there are rules in music that define what sounds good and what does not; that was really helpful to learn.

So I have finished up a couple of albums, and I want to talk about the two approaches I used. I’ll start with the newer one, Bytebeat, and then talk some about the previous one, 1D Life.

Please forgive the embeds; it seemed like the most expedient way to pair the music and the text.

Bytebeat (2020)

Bytebeat is fascinating. I can’t tell you how much I love it. There’s an excellent write-up by Kragen Javier Sitaker; it’s a very nice read, and it contains links to write-ups by Viznut, essentially the originator of the format. I first encountered it when Amavect was talking about it in gridchat. He had a few lines of C code that produced entire songs of infinite length.

I was obsessed almost immediately. It felt closer to playing the guitar than any of the software above. It felt right. You can improvise; nearly anything you do will make some kind of sound. There are very few limits on what you can create, either, because you are generating raw PCM.

This is how the format works: you set up a loop at your program’s entry point; the loop increments a variable t and calls nx(t). (The names are obviously arbitrary; t is for “time”.) nx() returns an integer, and you write that integer to standard output. That’s it. Here’s my main loop:

#include <unistd.h>	/* for write() */

short nx(long t);	/* nx() generates the samples; it comes later in the file */

int main(int argc, char **argv) {
	long t;
	short r;
	char buf[2];
	for(t=0;;t++) {
		r = nx(t);
		/* emit the sample on stdout as 16-bit little-endian PCM */
		buf[0]=r&0xff;
		buf[1]=(r>>8)&0xff;
		write(1, buf, 2);
	}
}

nx() is where you generate the music. It’s stateless, and usually single-expression. Here’s a really simple one:

short nx(long t) {
	return t * 349;
}

If you compile that, run it, and pipe it into aplay -f cd, you get (thanks to the magic of integer overflow) a sawtooth wave close enough to 261.63 Hz that you could reasonably call it C-4 (at A440 tuning). There’s a lot you can do with it, and some of the more interesting sounds come from bit-twiddling. For example, you can get a square wave by doing an and:

short nx(long t) {
	return 0x8000 & (t * 349);
}

You can get a much more complex sound by and-ing or or-ing in the value of t itself:

short nx(long t) {
	return t | (t * 349);
}

Because of the wrapping, you get really strange sounds by doing addition or subtraction, usually glitchy and sometimes really rough on the ears. (t*349)+(t&(t*87)) sounds glitchy but somewhat melodic (because C-2 is two octaves below C-4), but (t*349)+(t&(t*92)) sounds abrasive and has a lot of noise. Using xor instead of addition gives you cleaner results, but C-4 sounds terrible with C#2. It sounds fine with F-2, though! Try (t*349)^(t|(t*116)), and I think you’ll get the idea.

You can get a really simple kick with t-(t^(t>>1)), and you can iterate through a string of values and multiply t by them. You can use &s to turn notes on or off over time, and cubing t can get you some static, which means you can get a hi-hat or a snare! Try running this one for a hi-hat and a kick:

short nx(long t) {
	return
		((t&(t<<1)&(t<<2)&(t<<3)) & /* This sets when the hi-hat happens */
		 ((t*t*(t/3))&0x4000)) /* The static that makes the hi-hat noise */ ^
		/* The kick: */
		((t-(t^((t-2))>>1)) &
		 0x8000 &
		 ((t-0x2000)&((t-0x2000)<<2)&((t-0x2000)<<1)));
}

It’s easy to improvise like this, tacking on more binary operations. Obviously it’s trivial to add a melody, and you can make it sound interesting with a minimal time investment:

short nx(long t) {
	return
		(t*(349&((t>>17)^(t>>15))))^(t|(t*(116))) ^
		((t&(t<<1)&(t<<2)&(t<<3)) & ((t*t*(t/3))&0x4000)) ^
		(((t-(t^((t-2))>>1))) & 0x8000 & ((t-0x2000)&((t-0x2000)<<2)&((t-0x2000)<<1)));
}

That’s really just it. Try it out yourself! On Linux, you can put that nx() into a file with the main() from above, compile it, and pipe it into aplay -f cd. I used tinycc because it compiles much faster and this is all I/O-bound anyway, so in my case it was tcc -run bb.c | aplay -f cd and I was hearing music. Really, seriously, try this yourself! It’s fun.

You get fortuitous accidents when you make an off-by-one error or a mistake with the order of operations; there’s a really nice feel to that. You make a mistake, something interesting happens, you go look at this blob of code, you can follow the mistake down a rabbit hole. At least the way it feels to me, this is the closest a computer has ever felt to being a guitar. You produce a lot of “bad” code, but a lot of fun music. The environment is incredibly simple and lightweight: you have a C compiler and your sound driver. As noted in Kragen’s page linked above, there are several other ways to do it (JavaScript UIs, tablet apps, and whatnot), but it’s really nice using plain old C.

Algorithmic Composition (2019)

Before I got into bytebeat, I had been playing a little with gbsplay. I love small machines, and the Game Boy is dear to my heart. gbsplay emulates enough of a Game Boy (the CPU, memory, some mmap’d I/O registers, and obviously the sound hardware) to play GBS files. While most GBS files exist to preserve the music from old games, very few people seem to be trying to create new GBS files, so information was scant, though GBSOUND.txt was a very helpful resource. Eventually, I was able to cobble together a Pez program that generated a few hundred bytes of prelude (including the header) to initialize the hardware and set up a loop: read a chunk from a linked list, write bits of it to the appropriate registers, and delay until the next tick.

So I shoved the machine-code prelude into a Go program as a blob, and gave that a (bad) web interface with a kind of basic (weird, clunky) UI written in JavaScript. It was even less convenient for large compositions than using, say, lmms, but for small stuff it was quick and simple and I could play with it on my phone. I kept the UI really sparse and tried to make it a little like an NES game. You ever play an old game? They just drop you in, no help! You push a button, see what happens. This enabled me to be lazy, so I leaned into it.
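
To give a flavor of what that loop boils down to, here’s a rough sketch in C (not the actual Pez or machine code; the helper names are made up, and real code would poke the mmap’d registers instead of printing). The square channels play at 131072/(2048 - x) Hz, where x is an 11-bit value split across NR13 (0xFF13) and NR14 (0xFF14):

#include <stdio.h>

/* Stand-in for a store to the Game Boy's mmap'd I/O space; real code
 * would poke the address, here we just print what we'd write. */
static void wr(unsigned short addr, unsigned char val) {
	printf("write 0x%02x -> 0x%04x\n", val, addr);
}

/* Point square channel 1 at a frequency (in Hz) and trigger the note. */
static void play_square1(double hz) {
	unsigned x = 2048 - (unsigned)(131072.0 / hz);	/* 11-bit period value */
	wr(0xFF12, 0xF0);				/* NR12: full volume, no envelope */
	wr(0xFF13, x & 0xFF);				/* NR13: low 8 bits of x */
	wr(0xFF14, 0x80 | ((x >> 8) & 0x07));		/* NR14: trigger + high 3 bits */
}

int main(void) {
	play_square1(261.63);	/* C-4 */
	return 0;
}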

This essentially amounted to a dumb stunt-hack to show my friends and experiment a little with notes, usually via the API (it’s just a form tag and semantic-ish markup; the UI is the API, we’re just using HTML instead of JSON). People were using the site for some reason! I set up a little Twitter account, thought “Maybe I’ll develop this into something”, added a tiny leaderboard as an excuse to play with Redis, hacked in a “Share” button: little stuff here and there, but nothing major was percolating, and it idled for a few years. But people kept using the site, so the wheels turned a bit: I had accidentally gathered a lot of musical data. I played with Markov chains, and these worked pretty well with the data; apparently they generally work well with music (better than with speech).
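
If you’re curious what “playing with Markov chains” looks like at its most basic, here’s a throwaway first-order sketch in C (nothing from the actual site, and the training melody is made up): count how often each note follows each other note, then walk those counts to spit out a new sequence.

#include <stdio.h>
#include <stdlib.h>

#define NNOTES 12	/* twelve semitones, 0 = C */

/* Transition counts: counts[a][b] = how often note b followed note a. */
static int counts[NNOTES][NNOTES];

static void train(const int *melody, int len) {
	int i;
	for(i = 0; i+1 < len; i++)
		counts[melody[i]][melody[i+1]]++;
}

/* Pick the next note in proportion to how often it followed cur. */
static int next(int cur) {
	int total = 0, i, r;
	for(i = 0; i < NNOTES; i++)
		total += counts[cur][i];
	if(total == 0)
		return rand() % NNOTES;
	r = rand() % total;
	for(i = 0; i < NNOTES; i++) {
		r -= counts[cur][i];
		if(r < 0)
			return i;
	}
	return 0;
}

int main(void) {
	/* A made-up melody in C (0=C, 2=D, 4=E, 5=F, 7=G, 9=A, 11=B). */
	int melody[] = {0, 2, 4, 5, 7, 5, 4, 2, 0, 4, 7, 4, 0};
	int i, cur = 0;
	train(melody, sizeof(melody)/sizeof(melody[0]));
	for(i = 0; i < 16; i++) {
		printf("%d ", cur);
		cur = next(cur);
	}
	printf("\n");
	return 0;
}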

The thing that started getting interesting was when I started playing with one-dimensional cellular automata. I had seen something on Wolfram’s site about using them to generate music, but it was a somewhat old post, they didn’t explain how, and all of their widgets were broken, so I just jumped into using them to generate patterns, something they’re good at. I started with bare lists of notes, which sounded terrible, so I moved to structuring it some.

Here’s a whirlwind tour of a small subset of music theory! There are twelve fundamental tones, represented by the first seven letters of the alphabet with “sharps” for each letter except E and B. Pieces are usually written in a mode, which is essentially a list of offsets that wrap around. C-4 is one octave higher than C-3 and double the frequency, which is why the octave mark (the two dots) on a guitar neck is in the middle of the string. If you have a starting note (called a “tonic”) and a mode, you can derive the list of notes that will sound like they belong in the same piece of music by adding the mode’s offsets. So, for example, the offsets for the Dorian mode are 2, 1, 2, 2, 2, 1, 2, and starting at C-3 gets you the following scale: C-3, D-3, D#3, F-3, G-3, A-3, A#3, C-4. Each single step is called a semitone; two steps make a whole tone.
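
That derivation is all the code really has to do. Here’s a throwaway C sketch of it (not anything from the generator itself): keep a running sum over the mode’s offsets and bump the octave when you wrap past B.

#include <stdio.h>

static const char *names[] = {
	"C-", "C#", "D-", "D#", "E-", "F-", "F#", "G-", "G#", "A-", "A#", "B-"
};

int main(void) {
	int dorian[] = {2, 1, 2, 2, 2, 1, 2};	/* the offsets from the text */
	int tonic = 0;		/* C */
	int octave = 3;		/* start at C-3 */
	int note = tonic, i;

	printf("%s%d ", names[note], octave);
	for(i = 0; i < 7; i++) {
		note += dorian[i];
		if(note >= 12) {	/* wrapped past B: bump the octave */
			note -= 12;
			octave++;
		}
		printf("%s%d ", names[note], octave);
	}
	printf("\n");	/* prints C-3 D-3 D#3 F-3 G-3 A-3 A#3 C-4 */
	return 0;
}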

So that was essentially it: take the 1D variant of Life, which gets you infinite rows of on/off data, trivial to encode as an integer or to use as a pattern by itself. It gives you a repeating pattern in most cases and a chaotic pattern in others. Pick a mode, pick a starting note, pick a pattern for the note lengths, pick how broad to make the scale. There are two square-wave channels on the Game Boy, so unless you’re using the wave channel, your options are to do a bassline or chords, and that’s just a matter of picking an octave to offset. (You can do a lot of cool hacks; you can abuse the noise channel to produce tones. Small systems are really fun; you should try playing with the Game Boy some time.) People familiar with 1D Life might have already noticed that the album art above was generated using it.
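
If you want to play with the pattern-generation half of that, here’s a tiny elementary (one-dimensional) cellular automaton in C. The rule number, the width, and the way each row gets packed into an integer are arbitrary choices for the sketch, not whatever the site actually did:

#include <stdio.h>

#define W 16	/* cells per row */

int main(void) {
	unsigned char cells[W] = {0}, next[W];
	unsigned rule = 90;	/* classic elementary CA rule; pick your own */
	int row, i;

	cells[W/2] = 1;	/* a single live cell in the middle */
	for(row = 0; row < 8; row++) {
		unsigned pattern = 0;
		for(i = 0; i < W; i++) {
			pattern = (pattern << 1) | cells[i];
			putchar(cells[i] ? '#' : '.');
		}
		/* pattern is the row packed into an int: use its bits as a
		 * rhythm, or index into a scale with chunks of it. */
		printf("  0x%04x\n", pattern);
		/* each cell's next state depends on its left/center/right neighbors */
		for(i = 0; i < W; i++) {
			int l = cells[(i + W - 1) % W], c = cells[i], r = cells[(i + 1) % W];
			next[i] = (rule >> ((l << 2) | (c << 1) | r)) & 1;
		}
		for(i = 0; i < W; i++)
			cells[i] = next[i];
	}
	return 0;
}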

Twitter is awful

I tried to make a Twitter account for this, but they suspended the account within two hours of the first and only post and ignored support tickets, then started spamming me about the Adult Video News Awards. (The last bit was particularly irritating: I don’t care about those awards, Twitter had mistakenly decided my location was Las Vegas despite me having put “Los Angeles” in the location field, and I couldn’t unsubscribe from the “People in your area are talking about this!” emails because it wouldn’t let me access those settings without giving them a phone number.) You can purchase a blue checkmark (verified accounts that have been abandoned and sold), it’s all politics and celebrity news, and it’s not fun any more. That’s fine with me; I’m happy to stay on the fediverse, where the timeline is chronological and there’s no engagement-mining algorithm trying to work everyone into a froth about the election. Instead, it’s just people talking about stuff they’re doing or making, telling jokes. It’s like Twitter was back before it decided it was Important.

My old Twitter account is now “protected” and still exists mainly because friends will DM me occasionally. I’m hosting a handful of Pleroma instances, with the list growing, so I’ll do some shameless self-promotion when that goes public. It’s intended as a public service funded by donations, though at present I’m paying for it out of pocket.

Next up

I’m still playing with bytebeat. It’s been fun adhering to the form; I broke it for one of the songs (I added a switch statement), but I think I’ll stick to it for a while.

I have a couple of Inferno-related announcements to publish here shortly, along with a large update to the public-access Inferno system, and I’ve hacked up a Venti-to-HTTP bridge in Ruby (in the most barbaric way possible). I have been making a lot of very odd stuff. I have also been writing sporadically, about things like fact-checking journalists using public data or Bayes, on the Rekka Labs blog. (Rekka Labs is the same as Reverso Labs; we just renamed it because the old name confused people, and we changed the blue outline in the logo to orange. If you speak Japanese, you might have noticed the connection between “Rekka” and “Inferno”.)

Tags: c music pez

