Interview: Dan Tepfer, the Musician Coding the Future of Concerts

With a player piano and VR, the pianist entrances every audience he plays for

Billed as the future of concerts, Dan Tepfer's live show, a solo performance in which a self-playing Yamaha Disklavier is the only instrument, entrances every attendee. From the intimate-by-choice setting and the Google-branded VR headsets to Tepfer's winding style, the show captivates with each song and its visual accompaniment, all of which is improvised and never repeats a prior performance. There are rules in music and rules in coding; while Tepfer seems to break all of them, he adheres to centuries-old and 21st-century rules alike.

His Natural Machines show is an amalgamation of piano developments and coding advancements; on stage, Tepfer bridges the gap between technology and music. Combined, Tepfer and his self-made technology form a seamless stream of creative consciousness, broken only by moments when he instructs the audience to divert their attention from him to the VR headsets they've been given. We were fortunate to catch him after his New York show at National Sawdust, where he explained the performance's evolution from its earliest stages to now.

Courtesy of Nicolas Joubard

How and when did you get into coding?

I started getting into coding as a kid. I was born in 1982. My dad, a biologist, brought home a Macintosh Plus around 1988 or so. I started making things in HyperCard and slowly getting into the scripting environment there. It was always really fascinating to me. As a kid, it’s easy to feel relatively powerless. But with coding, you have an incredibly powerful machine that will do whatever you ask it to; it’s very empowering. As a teenager, I got into BASIC and C. It was all self-taught and, in those days, there was nothing like the online resources there are now, so it was a lot of poking around and experimenting.

by Evan Malachosky

And how did this idea come about? When did this project start?

The Natural Machines project started the day I walked into the Yamaha space in Manhattan, sat down at a Disklavier player piano, and realized that instead of having to play back pre-recorded music, it could take real-time input from my computer. I’d been experimenting with creating generative music from algorithms for a few years, but I was always frustrated with how inorganic it felt. Suddenly, two things were possible: I could use what I improvised at the keyboard—with its naturally organic quality—as real-time input for my algorithms, and the computer could express its response in real-time by playing the piano itself. These two things combined made me feel very excited because it really brought the computer into the world of organic, acoustic sound that I love.

It’s not about replacing humans with computers, it’s about exploring the fertile ground where intuition meets structure

How does a show like this align with our underlying societal skepticism of (some) technology? Does it open a new world of technological wonder for audiences?

Natural Machines, as it is now, is all about exploring the intersection of organic and mechanical processes—hence the name. It’s not about replacing humans with computers, it’s about exploring the fertile ground where intuition meets structure, similarly to how a composer like Bach sets down rules for himself while leaving sufficient degrees of freedom for self-expression. And the whole thing centers around free improvisation; it all depends on my playing the piano well and being inspired in the moment. That, combined with the fact that I’ve written every line of code myself so that the programming is just as home-grown as the playing, means that this project is pretty different from the things that typically scare people about technology, I think. We tend to mistrust technology when we feel like we don’t understand it entirely, but here there’s no data-harvesting, no phishing, there’s nothing nefarious lurking underneath the surface, and I think people can feel that.

People have told me that the visuals, rather than distracting them from the music, actually help them to understand its structure, which was my goal all along. And the music that’s come out of the project wouldn’t be possible without the tech involved. So I don’t know about “technological wonder,” but hopefully people come away inspired by the possibilities that arise when computers are used to naturally augment human creativity.

How did you teach yourself to code this show, with the added VR element?

I’m completely self-taught as a coder. It’s something I’ve dipped into kind of obsessively for days or weeks at a time at various times in my life. It’s kept drawing me back in, and every time I get deeper into it. In my teens I learned how to make 3D graphics by asking my math teacher for rotational equations. In my early 20s, I wrote programs to help with musical exercises I needed to do, mainly ear training and sight-reading. That developed into my getting into SuperCollider, a somewhat arcane programming environment specialized in music. It has a kind of steep learning curve, but is very powerful and reliable once you’ve gotten the hang of it. All the musical elements in Natural Machines are coded in SuperCollider.

I love how, with programming, once you’ve learned a few languages it becomes easier and easier to learn others

For the visuals, I use Processing, which I just love, because it makes it so efficient to test out ideas. Processing is something I started exploring in my mid-20s when I got interested in generating music from fractals. I also taught myself Objective-C when the first iPhone came out and have made a few apps, all music related. Over the last six months, I’ve learned JavaScript to code the VR element of the show, which I’ve decided to have run in the browser on people’s phones. I use the three.js and A-Frame libraries for this. I love how, with programming, once you’ve learned a few languages, it becomes easier and easier to learn others.

Courtesy of Nicolas Joubard

There are a lot of technical elements. Can you explain, for anyone who hasn’t seen the show or a YouTube clip of your performance, how it all works together?

It all starts with me playing something on the piano. When I press a key, the Disklavier (which is a fully acoustic instrument, with the added ability to play on its own and record what a pianist plays) sends data to my computer. There, programs I’ve written respond to that input by sending commands back out for the piano to play. Since I’m improvising, I respond to that, and a positive feedback loop is created, with me building on what the computer has done, and it building on what I’ve done, and so on.
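Tepfer's actual algorithms run in SuperCollider, but the call-and-response loop he describes can be sketched in a few lines of Python. The note-to-response mapping below (echo an octave up, softer, a half-beat later) is an invented illustration, not his code:

```python
# Hypothetical sketch of the call-and-response loop: the player's notes
# arrive as (pitch, velocity, onset) events, and the "machine" answers
# each one with a transformed event the player can then react to.
# Tepfer's real code is in SuperCollider; this mapping is invented.

def machine_response(note, delay=0.5, interval=12):
    """Answer a played note with the same note an octave up,
    slightly softer, after a fixed delay."""
    pitch, velocity, onset = note
    return (pitch + interval, int(velocity * 0.8), onset + delay)

def feedback_round(played_notes):
    """One round of the loop: map every human note to a machine reply."""
    return [machine_response(n) for n in played_notes]

# The pianist plays middle C, then E, a half-second apart...
human = [(60, 100, 0.0), (64, 90, 0.5)]
# ...and the piano answers an octave higher, offset in time.
print(feedback_round(human))  # [(72, 80, 0.5), (76, 72, 1.0)]
```

In the live setting this mapping would run continuously, so the pianist hears each reply while still playing and can fold it back into the improvisation.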

Every time the computer deals with a musical element, it also sends the data via OSC to the visual programs I’ve written in Processing, which create a kind of visual live score, in real-time, of what’s going on. I’ve worked hard to make each musical algorithm have its own visually distinct space. And now, since I’ve added a VR element to the show, I’m also sending data to Node.js, which sends it on to all the phones that are connected to my Wi-Fi network, each of which then internally creates a VR environment from the data in real-time.
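The relay step Tepfer describes (one event source, many phones) is a classic broadcast pattern. Here is a minimal sketch of it, with the transport simplified away; in the actual show the relay is Node.js pushing data to browsers, while this stand-in just registers callbacks:

```python
# Minimal sketch of the fan-out step: one event source, many listeners.
# Real clients would connect over the network (e.g. WebSockets); the
# broadcast pattern itself is the same either way.

class Broadcaster:
    def __init__(self):
        self.clients = []

    def connect(self, client):
        """Register a client callback (one per phone)."""
        self.clients.append(client)

    def send(self, event):
        """Push one musical event to every connected client."""
        for client in self.clients:
            client(event)

hub = Broadcaster()
received = []
hub.connect(lambda e: received.append(("phone-1", e)))
hub.connect(lambda e: received.append(("phone-2", e)))
hub.send({"pitch": 60, "velocity": 100})
print(received)  # both "phones" get the same event
```

Because every phone receives the same event stream, each one can build its VR scene locally and stay in sync with the piano without any per-device logic on the sender's side.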

What’s the most important element about the show, to you?

At the end of the day, all the technology has to serve the music. If I haven’t moved people through the music, if the music doesn’t stand on its own, I’ve failed. The last thing I want is for it to be a technological gimmick. What’s essential is to use the tech only insofar as it opens up new musical spaces and takes me to musical possibilities I couldn’t have reached without it.

Beyond that, I hope people come away from it with a renewed respect for the power of combining reason and intuition, something that I find is often lacking from our discourse these days. It’s amazing to me how little understanding there is in popular culture of how science works. Science, despite its imperfections, is the most honest discipline in relation to truth. And just as architects need to work with structural engineers, there are aspects of music that really benefit from reason. Natural Machines is all about that combination: I’m the architect when I play, and the computer is the structural engineer.

Courtesy of Josh Goleman (left) and Nicolas Joubard (right)

So you’ve thought about how your show combines something centuries-old (the piano) with something so 21st century (coding) then?

Yes, a lot. The core of the project, what really makes it work in my opinion, is that the computer plays exactly the same instrument I do—we both are playing this fully acoustic piano. That’s what makes it possible for the computer to play a truly equal part in the music, and also for the music to resonate in the way that I, as a longtime performing musician, want it to be resonating—with the complexity of acoustic sound resonating in a space. At a more abstract level, I love digging deep into the past—into the musical math of Pythagoras, into the contrapuntal rules of Palestrina, into Bach—and making contemporary art with it. So combining something old like the piano (which was considered very high-tech in its day) with something contemporary like the computer feels very natural.

How do the rules of music mingle with the modern rules of technology?

Computers are made to run algorithms, and they’re incredibly good at it—infinitely better than we are. Over the course of music history, people all over the world developed homegrown systems of rules to help them teach music to new generations. They figured out ways of codifying what sounded good to them, or at least the non-mysterious aspects of it. So in many ways, computers are perfectly suited to execute these rules. The rules were figured out by humans, but computers can take care of them in incredibly virtuosic ways. What computers still aren’t very good at, even with the rise of AI, is the emotional/intuitive/mysterious side of art, which is equally important.

One could argue that the piano is made for the intersection of music and tech. At a more general level, there’s something about music, especially instrumental music (which is, by definition, abstract) that lends itself incredibly well to developments in tech. The entire history of music shows this. Composers have had a symbiotic relationship with tech throughout music history—their music would call for further developments in instrument-making, and when those developments happened, they opened up new avenues for composition. It’s the same today. At the end of the day, we want good music, whatever that means—art being, always, in the eye of the beholder. And if a computer or a player piano can help create good music that we haven’t heard before, that’s something that I find very exciting.

Courtesy of Nicolas Joubard

Are there moments in specific songs and shows that surprise you still?

Yes, and it’s very important to me that that continues to be the case. When I go on stage, I commit to really improvising—to creating music that is uniquely of the moment, specific to this instrument, this space, this audience, this state of mind that I’m in. So hopefully, what I’m playing will already have moments of genuine surprise for me. But add what the computer does in response to this, with varying layers of complexity, and it becomes doubly surprising. And what’s best is that the surprise feeds on itself—it opens the door to further discoveries and surprises.

That’s perhaps my favorite aspect of this project. It turns the piano, this instrument that I’ve been playing most every day for the last 30 years, into a brand new instrument, an instrument that constantly yanks me away from my comfort zone and leads me to discover aspects of my musicality I didn’t know were there. I’m pretty sure that Bach used creative constraints in much the same way—to make himself a little uncomfortable, just to see how he would get out of the corner he’d painted himself into.

Tepfer is currently touring in support of Natural Machines.