Daniel McKemie: D/A /// A/D
The music presented here is drawn primarily from a series of works
that utilize custom software designed specifically to control
hardware. These come in the form of programmed compositions,
performance environments, algorithmic processes, and custom
controllers, written in a mix of C/C++, CSound, JavaScript,
Max/MSP, and Pure Data. These software mechanisms were used to
control the sound palettes provided by musical hardware such as
modular synthesizers and custom circuits. The acoustic sound
sources are made up entirely of recorded samples of choice
instruments in the Lou Harrison Collection at Mills College,
Oakland, CA.
All music composed and recorded by Daniel McKemie in Brooklyn,
NY
Mastered by Ryan Ross Smith in Fremont Center, NY
Sample recordings performed and engineered by Daniel McKemie and
Joseph Rosenzweig in Oakland, CA
Cover art - ERIDAN (Eri King and Daniel Greer)
Further Reading:
This outing is a curated series of works that illustrates a few
years' worth of work with computer-controlled synthesizers. The
earliest pieces of mine that explored this hybrid approach could
largely be chalked up as fancy noise studies (at best); they were
mostly attempts to understand each side of the system on its own,
let alone how the two could possibly work together. I do not
usually aim to discuss technical details in liner or program
notes, but this is an area in which I have spent an incredible
amount of time researching and developing music, and I plan to
continue doing so for the foreseeable future. It is my hope that
these liner notes will motivate others to explore this topic, or
at least spark a conversation about it.
Without turning this into an entire history lesson, the idea came
from a simple interest in joining the power of computers with the
interface and dynamics of control voltage. By generating
programmed voltages in software, routing them in any number of
ways, or even sending voltage back to be read by the software,
which in turn makes decisions about what control voltage to
generate next, I see a rich environment for electronic music
making, both in live performance and in composed (or dare I
say... algorithmic!) settings. The original experiments were built
from patches written in Max/MSP and connected in a myriad of ways
to semi-modular Eurorack instruments. From there I moved on to
breadboard circuits and homemade hardware systems.
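
To make that loop concrete, here is a minimal sketch of the idea
in CSound. It is not drawn from any piece on this record; it
assumes a DC-coupled audio interface patched to and from the
modular, and the channel numbers and scaling are purely
illustrative.

<CsoundSynthesizer>
<CsOptions>
-odac -iadc
</CsOptions>
<CsInstruments>
sr     = 48000
ksmps  = 32
nchnls = 2
0dbfs  = 1

instr 1
  ; read a voltage arriving on input 1 (say, an LFO from the synth)
  ain   inch     1
  kin   downsamp ain
  ; let the incoming level set how quickly new random voltages appear
  krate =        1 + abs(kin) * 15
  ; generate a stepped random control signal between -0.5 and +0.5
  kcv   randomh  -0.5, 0.5, krate
  ; write it back out as a DC signal on output 1, toward the synth
  acv   upsamp   kcv
        outch    1, acv
endin
</CsInstruments>
<CsScore>
i 1 0 300
</CsScore>
</CsoundSynthesizer>

The same loop could just as easily be patched in Max/MSP or Pure
Data; the point is simply that a voltage coming out of the
synthesizer can shape the voltage the software sends back in.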
What I quickly realized in this venture was that none of my
programming, my circuit building, or my general knowledge of
modular synthesis was enough to produce anything of value. Yet I
moved forward. After I had spent considerable time with at least
two of those three areas, some musical ideas began to take shape.
What I sought to do was use a number of different languages,
approaches, and hardware systems to see how many variations could
be achieved.
Using C, C++, CSound, JavaScript, Max, and Pd, I sought to explore
every angle of expanding a modular synthesizer. I built
interfaces, mobile device controllers, automatic voltage
generation, and interactive performance environments, and it was
not always the case that one piece of software was paired with
one piece of hardware. These combinations were smashed together:
programs joined to different interfaces, the same procedures
realized in different languages to explore the differences, and a
huge number of hardware variations, all at play (a small sketch
of one such pairing follows this paragraph). I looked to the
pioneers of tape music, live electronic music, and computer music
for inspiration. I aimed to program some of their techniques and
bring them into my own work, not as theft but as tribute (though
you can be the judge of that), in order to construct a new way of
making music. The beauty of electronic music is that the
technology used to execute it is always at the forefront, yet
sometimes it is the classic tools that are the most engaging and
intriguing to use. Because of the rapid pace at which technology
evolves, we sometimes forget that certain tools even existed, or
that they can be used in ways no one thought of before they fell
out of fashion.
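
As one illustration of those pairings, below is a sketch of a
touch-screen controller reaching the synthesizer, again in
CSound. The OSC address, port, and value range belong to a
hypothetical phone app, and a DC-coupled interface is assumed on
the output side.

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr     = 48000
ksmps  = 32
nchnls = 2
0dbfs  = 1

; listen for OSC from a (hypothetical) touch-screen app on port 8000
gihandle OSCinit 8000

instr 1
  kfader init      0
  ; the app's fader sends /cv1 with a single float between 0 and 1
  kgot   OSClisten gihandle, "/cv1", "f", kfader
  ; smooth the incoming steps so the voltage glides rather than jumps
  kcv    portk     kfader, 0.05
  ; send it out as a DC control signal on output 1
  acv    upsamp    kcv
         outch     1, acv
endin
</CsInstruments>
<CsScore>
i 1 0 300
</CsScore>
</CsoundSynthesizer>

In practice a patch like this would sit alongside other
generators, a fader steering one parameter while programmed
processes handle the rest.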
I settled on the pieces presented here for two reasons: first,
they are the most musically interesting to me; second, they
exhibit an array of different approaches with varying degrees of
success. Some of these pieces are performed live, some are
recorded live as an automated musical process, and some are
constructed as fixed media from either of those two methods. What
was learned in the end, and what is almost always learned in the
end, is that it is not the technical specifications that make the
work; it is the person behind it who makes the aesthetic choices
about how to deal with the technology. That does not mean,
however, that discussing technique (be it technical or aesthetic)
has no value.
This does not mark an end to this approach to music making for
me, but rather the beginning of what I hope will be a series of
experiments. Additional work and research are underway, taking
these ideas into the realm of live coding, custom-built
instruments and circuitry, and lower-level software for embedded
systems. In addition, I am
continuing to codify these works with supplemental writing and
research papers that I hope to have published in the future. As
always, thank you for listening (and reading!).