the spitball development model

Not a lot to report this week on the development front. I’m still throwing ideas at the wall and seeing what sticks, and writing long essays to myself about why they might be sticking. Slowly, a picture emerges.

Testing proofs of concept in a hacking project has worked much better than trying to code the actual game project from the start. I feel more productive when I’m working on production code, but it’s a false economy, as I wind up rewriting most of the code and introducing plenty of bugs along the way. Better to do the rewriting in notes and hack modules that I can toss out.

I mentioned HTML5 audio a couple of posts back, and specifically how I didn’t think the callback model was going to be performant when combined with WebGL. In retrospect, it might have been a good idea to actually test that assertion first. I’m happy to report that I’ve now tested it for my own use cases, and that I was wrong: I notice no drop in framerate from generating raw audio on the fly, and no audio dropouts from running WebGL alongside it.
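
For the record, the test amounted to something like this sketch, in the webkit-prefixed API of the day (createJavaScriptNode was later renamed createScriptProcessor; the 440 Hz tone and buffer size are arbitrary placeholders):

```js
// Synthesize raw audio in a JS callback while a WebGL-style render loop
// runs, and watch both for framerate stutter and audio dropouts.
var ctx = new webkitAudioContext();
var node = ctx.createJavaScriptNode(2048, 1, 1);  // bufferSize, inputs, outputs
var phase = 0;

node.onaudioprocess = function (e) {
  var out = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < out.length; i++) {
    out[i] = 0.2 * Math.sin(phase);               // cheap 440 Hz tone
    phase += 2 * Math.PI * 440 / ctx.sampleRate;
  }
};
node.connect(ctx.destination);

function frame() {
  // drawScene();  // stand-in for the real WebGL draw call
  window.webkitRequestAnimationFrame(frame);
}
frame();
```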

I still like Webkit’s patch-and-plug model, of course, but I no longer believe it to be the only usable one.

official announcement

I can definitively state that the next game is taking place underground. Not in a cave, though. Caves are so 4 months ago. Anyway, I’m busy drawing up a map and working through some gameplay mechanics. I’m still sticking to my no-code-until-specification-complete rule, and by “sticking to” I mean “getting around via a hacking project”.

Also, some excellent JavaScript game development tips.

followup to the infinite

The last post was written late, and I forgot to mention the biggest issue I had with the soundtrack project. Because it has to generate a separate sound clip for every note, it’s constrained to a musical universe of three octaves, one instrument, and one duration. I’d love to mix it up with some whole notes to generate harmony.
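
In code, the constraint looks something like this sketch, where makeClip is a hypothetical stand-in for the generate-PCM-and-base64-it step:

```js
// One pre-rendered clip per pitch, every clip the same instrument and the
// same duration: the clip table is the entire musical universe.
var clips = [];
for (var n = 0; n < 36; n++) {                 // three octaves of semitones
  var freq = 220 * Math.pow(2, n / 12);        // equal temperament, A3 root
  clips.push(new Audio(makeClip(freq)));       // makeClip: hypothetical
}
// Harmony would mean a second table of longer clips layered under these,
// doubling the generation cost again.
```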

Hell, I’d love to avoid the base64 hack altogether. HTML5 audio is in a shocking state, as far as cross-platform solutions go. That base64 hack doesn’t even work on IE. The Webkit audio API rocks, but it’s not available on FF or Opera. Mozilla’s Audio Data API is fine for raw audio processing, but has been deprecated in favor of a future API to be developed by the Audio Working Group. (Someday.)
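
Sketched as feature detection, the landscape looks something like this (mozSetup is the Audio Data API’s entry point; the branches just restate the paragraph above):

```js
// One door per browser family, none of them open everywhere.
if (window.webkitAudioContext) {
  // Chrome/Safari: the Webkit audio API and its node graph.
} else if (window.Audio && typeof new Audio().mozSetup === 'function') {
  // Firefox: the (now deprecated) Audio Data API for raw samples.
} else {
  // Everyone else: <audio> plus a base64 data URI and crossed fingers.
  // IE lands here too, then refuses to play audio/wav data URIs at all.
}
```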

On that subject, I tend to prefer Webkit’s patch-and-plug architecture to Mozilla’s web worker model. Calling back into JS to process low-level audio works for demos, but I think it’s going to have issues when integrated with other realtime libraries like WebGL. Even with just-in-time compilation, JS is still basically glue code, not really suited to processing large blocks of audio on a tight CPU budget. The patch-and-plug model gets this: it provides the means to process low-level audio, but it doesn’t depend on it.
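
To make the contrast concrete, here’s a minimal patch-and-plug sketch in the same webkit-prefixed API (createGainNode and noteOn were later renamed createGain and start); buffer is assumed to hold already-decoded audio:

```js
// Native nodes wired together; no JS callback sits in the audio path,
// so the engine can process samples off the main thread.
var ctx = new webkitAudioContext();

var source = ctx.createBufferSource();
source.buffer = buffer;                 // assumed: decoded audio data

var filter = ctx.createBiquadFilter();  // native low-pass by default
filter.frequency.value = 1000;

var gain = ctx.createGainNode();
gain.gain.value = 0.5;

source.connect(filter);
filter.connect(gain);
gain.connect(ctx.destination);
source.noteOn(0);                       // play now
```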

an infinite number of earworms for you

Procedural generation fascinates me. I take a sneaky pride in the fact that all the textures and models for Gas Food Lodging are assembled client-side from pure code. In the next gaming project, I’ll be adding audio, and I plan to generate it on the fly.

My first crack at the problem? An algorithmically generated soundtrack. Click the button below to hear it.


Okay, it’s not the greatest. Source is here. A few points in no particular order:

I’m using the A minor pentatonic scale. I tried C major and C blues, but neither of them sounded right. I think it’s a problem of dissonance: both the major and blues scales contain dissonant intervals, and the pentatonic scale really doesn’t. Anything you play in a pentatonic scale is guaranteed to sound—well, not “good”, but “acceptable”. Dissonance requires a more intelligent algorithm to manage.
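
Here’s the scale as the generator sees it; the A3 root at 220 Hz is my assumption, but the offsets are what keep a random walk consonant:

```js
// A minor pentatonic: A, C, D, E, G, as semitone offsets from the root.
// No pair of degrees is a semitone or a tritone apart, so any random
// walk over the set avoids the harshest intervals.
var PENTATONIC = [0, 3, 5, 7, 10];
var ROOT = 220;                                // A3, an assumption

function noteFrequency(degree, octave) {
  var semitones = PENTATONIC[degree % 5] + 12 * octave;
  return ROOT * Math.pow(2, semitones / 12);
}

noteFrequency(0, 0);  // 220 Hz    (A)
noteFrequency(1, 0);  // ~261.6 Hz (C)
```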

My cross-browser solution to the audio problem was the old standby: generate a base64 representation of the PCM data and build a data URI from it. I found a helpful library called riffwave to do the heavy lifting. (Thanks to Pedro Ladaria.)
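
The hack itself boils down to something like the following hand-rolled sketch (riffwave does this properly; this version assumes 8-bit mono PCM in a plain array of 0–255 values):

```js
// Wrap raw PCM bytes in a minimal WAV header, base64 the lot, and hand
// the result to an <audio> element as a data URI.
function wavDataURI(samples, sampleRate) {
  function u16(n) { return String.fromCharCode(n & 255, (n >> 8) & 255); }
  function u32(n) {
    return String.fromCharCode(n & 255, (n >> 8) & 255,
                               (n >> 16) & 255, (n >> 24) & 255);
  }
  var data = '';
  for (var i = 0; i < samples.length; i++) {
    data += String.fromCharCode(samples[i] & 255);
  }
  var wav = 'RIFF' + u32(36 + data.length) + 'WAVE' +
            'fmt ' + u32(16) + u16(1) + u16(1) +     // PCM, mono
            u32(sampleRate) + u32(sampleRate) +      // byte rate = rate * 1 * 1
            u16(1) + u16(8) +                        // block align, bits/sample
            'data' + u32(data.length) + data;
  return 'data:audio/wav;base64,' + btoa(wav);
}

// new Audio(wavDataURI(samples, 8000)).play();
```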

I originally used a Markov chain to generate the “riffs”, and the results were even worse: no recognizable motifs emerging, just random noodling. Music needs repetition, though not too much—just enough to keep the pattern-hungry brain happy.
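
For comparison, the Markov version looked roughly like this; the transition table is made up for illustration, but the shape of the problem shows up in the output:

```js
// First-order Markov chain over the five scale degrees. Each entry lists
// the degrees reachable from the current one, with repeats as crude weights.
var transitions = {
  0: [1, 1, 2, 4],
  1: [0, 2, 2, 3],
  2: [1, 3, 3, 0],
  3: [2, 4, 4, 1],
  4: [3, 0, 0, 2]
};

function nextDegree(d) {
  var options = transitions[d];
  return options[Math.floor(Math.random() * options.length)];
}

var degree = 0, riff = [];
for (var i = 0; i < 16; i++) {
  riff.push(degree);
  degree = nextDegree(degree);
}
// Locally plausible, globally shapeless: nothing ever pulls the walk back
// to an earlier motif, hence the random noodling.
```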

So, I’m putting this aside. Fun project, but I doubt I’ll take it much further. Bits of it are going into a sound effects generator. But keep it running if you like. I’m sure there’s a symphony in there somewhere.