
Flocking at Five Years Old

Five years ago today, on March 5, 2011, I made the first commit to Flocking. At the time, I thought I was just making a little demo showing how to use the HTML5 <audio> element with the then-new Firefox Audio Data API. Over the years, amidst a lot of change in the Web platform, Flocking has grown into something significantly different from the other Web Audio libraries that have sprung up in its wake.

Flocking is a computer music framework that is designed to support harmonious authorship across different representations, meaning it aims to be usable with both code (or, more accurately, data) and graphical tooling. It's designed to be massively interoperable so that its programs aren't limited to a single environment or "walled garden" collaborative tool. Flocking is also really fast and is optimized for use on little devices like the Raspberry Pi or Chromebooks.
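
To make the "code as data" point concrete, here is a minimal example loosely following Flocking's documented quick-start (flock.init() and flock.synth() are the library's real entry points; the particular parameter values are just for illustration). The whole instrument is a JSON-style specification, which is what makes it equally amenable to textual and graphical editing:

```javascript
// Boot Flocking's shared environment.
var environment = flock.init();

// An instrument is declared as plain data: a tree of unit
// generator specifications, not a sequence of imperative calls.
var synth = flock.synth({
    synthDef: {
        ugen: "flock.ugen.sinOsc", // a sine oscillator
        freq: 440,                 // pitch, in Hz
        mul: 0.25                  // amplitude scaling
    }
});

// Start the environment's sample generation loop.
environment.start();
```

Because the synthDef is just data, it can be serialized, diffed, merged, or rendered as a boxes-and-wires diagram without parsing any code.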

My goal with Flocking is to support a kind of materiality, one in which a software artifact can serve as the medium for new works within a creative community, without requiring forking, cut-and-paste, or knowledge of specialized computer programming languages. A metaphor for this is Amy Twigger Holroyd's re-knitting, in which existing knitted garments (most strikingly, even those that were mass-produced by machines) can be modified or transformed in ways that were unanticipated by their original designers. As artifacts, knitted objects can be worked on by multiple creators and can support unanticipated uses and after-the-fact adaptation. It is this ability to be serendipitously added to, subtracted from, grafted onto, or unravelled in a form not already planned for and designed into the object that is missing from software today, and which I aim to provide in Flocking.

We're not there yet, and Flocking is still in its infancy. Achieving these goals will take time, patience, and a lot of creativity. As it grows, I always try to keep a mental cardboard mockup of how I dream Flocking programs will be in the future: highly authorable, supportive of an open ecosystem of shared, modifiable artifacts, live and dynamic, but without the usual accompaniment of code worship and the intellectual "eat your broccoli" of computational thinking.

Throughout the development of Flocking, I have used it extensively in my own creative practice to make videos, music, sound installations, and film soundtracks. Making Flocking has been a huge creative catalyst for me; it has pushed and pulled my work in strange directions, sometimes towards a complex new signaletics that combines Flocking with Aconite...; at other times, working on Flocking has driven me away from computational art entirely and towards quiet, meditative observation with my camera. Even when I'm not using Flocking to make art, I still feel its presence and influence on my way of seeing and hearing the world.

In particular, Flocking has recently been used to create:

  • Two of my recent digitally-processed videos, In Passing and Tofino

  • The soundtracks for Izabella Pruska-Oldenhof's recent film, Font Màgica, and her video installation, Relics of Lumen.

Development on Flocking has been quiet recently, largely on account of my finishing an MFA thesis while launching a major new personalized accessibility project. However, as my time starts to free up over the next two months, I will be redoubling my focus on Flocking. I have a number of major new features planned for the next year:

  • Integration of Bergson, a new scheduler I wrote specifically for Flocking. Bergson will provide a much more robust scheduling subsystem for Flocking, driven by the primary sample generation clock so that block-accurate actions can be scheduled in your compositions (a sketch follows this list).

  • A complete refactoring of the Flocking SynthDef format, introducing wire specifications, which will finally allow an easy, declarative way to define "diamond-shaped" signal graphs and other kinds of multiple input/output connections between unit generators in a Flocking synth (a hypothetical example appears after this list).

  • Greater support for the Web Audio API, including the ability to interleave Web Audio nodes with Flocking's JavaScript unit generators. This will provide a larger palette of signal processing algorithms, as well as better performance when using Flocking in graphics- or interaction-heavy applications (sketched below).

  • A vastly improved live development environment, which will support the creation of Flocking instruments with multiple, synchronized representations: both JSON5-based code and visual boxes-and-wires editing will be supported simultaneously, and valid changes to playing synths will be heard immediately. You can imagine where this might lead in terms of a different approach to live "coding," where changes are targeted at playing synths on the fly as JSON5-based "diffs," enabling performance environments that symmetrically blend GUI controls and live "data merging" (see the final sketch below).

  • Vastly improved documentation, so that getting started with Flocking doesn't require newcomers to puzzle through obscure examples and sift through comments in the source code.
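
Here is a sketch of the Bergson integration promised above, based on my reading of Bergson's standalone API. The berg.scheduler factory and the "repeat" score specification are assumptions about that API, and the Flocking-driven audio clock doesn't exist yet:

```javascript
// Assumed Bergson-style usage: a scheduler driven by a pluggable
// clock. The integrated version would substitute Flocking's sample
// generation loop for the requestAnimationFrame clock shown here,
// making scheduled actions block-accurate.
var scheduler = berg.scheduler({
    components: {
        clock: {
            type: "berg.clock.raf" // placeholder clock
        }
    }
});

// Schedule a repeating action once per second.
scheduler.schedule({
    type: "repeat",
    freq: 1,
    callback: function (time) {
        // e.g. update a synth's inputs in time with the music.
    }
});

scheduler.start();
```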
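The wire specification format was still being designed when this was written, so the example below is purely hypothetical: the "wire" key is invented here solely to illustrate what a "diamond-shaped" graph means, i.e. one unit generator feeding two different inputs of another, which a strictly tree-shaped synthDef can't express:

```javascript
// HYPOTHETICAL SYNTAX: "wire" is an invented key standing in for
// whatever the final wire specification format will look like.
var synth = flock.synth({
    synthDef: [
        // The shared modulator, declared exactly once...
        {
            id: "sharedLFO",
            ugen: "flock.ugen.lfNoise",
            freq: 2
        },
        // ...and consumed by two inputs of the same oscillator,
        // closing the "diamond": one source, two destinations.
        {
            ugen: "flock.ugen.sinOsc",
            freq: { wire: "sharedLFO", mul: 220, add: 440 },
            mul: { wire: "sharedLFO", mul: 0.2, add: 0.25 }
        }
    ]
});
```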
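The Web Audio interleaving can be sketched with standard Web Audio API calls alone. At the time of writing, Flocking generates its samples inside a ScriptProcessorNode; interleaving means letting that node's output flow through native nodes and back again. Only the comments about where Flocking's unit generators fit are assumptions about the planned integration:

```javascript
var context = new AudioContext();

// Flocking's unit generators run in JavaScript, filling the output
// buffers of a ScriptProcessorNode on each audio callback.
var flockingNode = context.createScriptProcessor(512, 0, 2);
flockingNode.onaudioprocess = function (e) {
    var left = e.outputBuffer.getChannelData(0),
        right = e.outputBuffer.getChannelData(1);
    // The planned integration would write Flocking's evaluated
    // sample blocks into these channel buffers.
};

// A native, browser-implemented effect from the Web Audio API...
var convolver = context.createConvolver();

// ...interleaved into the signal path after Flocking's output.
flockingNode.connect(convolver);
convolver.connect(context.destination);
```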
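Finally, the "diff"-based live coding idea builds on a capability Flocking already has: its declarative get/set API, which can retarget parts of a playing synth by path. A minimal sketch using synth.set() (the "carrier" id is a placeholder of my own):

```javascript
var synth = flock.synth({
    synthDef: {
        id: "carrier",
        ugen: "flock.ugen.sinOsc",
        freq: 440,
        mul: 0.25
    }
});

// Change a single value while the synth is playing...
synth.set("carrier.freq", 220);

// ...or merge in a whole unit generator, replacing the static
// frequency with a slow sawtooth sweep. A JSON5 "diff" from the
// planned environment would boil down to calls like this.
synth.set("carrier.freq", {
    ugen: "flock.ugen.lfSaw",
    freq: 0.5,
    mul: 110,
    add: 330
});
```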
