Exploring the Future of Music with Spencer Salazar

Advancements in technology are transforming music into an increasingly interactive experience for the listener. It’s not only about listening, but about bringing a level of tangibility to sound and audio. The growing use of smartphones, tablets, and other mobile devices is providing a new way to learn and engage with the world, and artists, musicians, and creative technologists are now developing methods to involve users in creating their own unique experiences.

For instance, the way we read has changed drastically. American publishing house McSweeney’s incorporated icons for interactive artworks into their mobile applications as a way for the reader to engage with the material while providing exposure to new media artists. Within the music industry, we see the same innovation but with much more interactivity in mind. As touch screen technology becomes increasingly prevalent in how we obtain information, creatives must grapple with how the technology affects our individual and collective experiences. From the internet to mobile devices, music has been among the first commodities to reach users on each new platform. It is no longer an auditory escape but a multi-sensory experience, which forces artists and musicians to look at music production in radically different ways.

The upcoming GAFFTA course, Music and Mobile Computing for iOS, taught by instructors Spencer Salazar and Mark Cerqueira, will not only assist developers in learning new skills but will help shape nascent ideas into potentially sustainable projects. Despite Salazar’s busy schedule, he was able to answer questions regarding the field of mobile computing, how the course came to fruition, and where he believes the field is headed. He also shared some projects currently in the works at Smule.

Q & A with Music and Mobile Computing for iOS Instructor Spencer Salazar

Dorothy Santos (DS): Can you provide some background on processing and design in iOS as a creative tool? In layman’s terms, how would you describe music and mobile computing in iOS? How is it used and by whom, typically?

Spencer Salazar (SS): The explosive popularity of smartphones in the past 5 years has led to a proliferation of small computers in apparently everyone’s pocket or purse, each persistently connected to the internet, aware of its geographic location and spatial orientation, always-on, and capable of extensive audio/visual processing. There are many similarities between traditional desktop computing and the new mobile model; our course explores how the distinguishing qualities of mobile computing can be leveraged for new/interesting musical experiences, using iOS as the specific programming environment for this exploration.

Mark and I both come from Smule, where these technologies power apps like Ocarina, a combination of instrument, music education tool, and social-music experience. I’m a PhD student at CCRMA (Stanford’s Computer Music department), where groups like the Mobile Phone Orchestra (“MoPhO”) use iOS to realize musical compositions in a performance context. Beyond that, forward-thinking musicians such as Björk and Brian Eno have embraced mobile technologies, the former releasing her latest album in the form of an iPad application. So, in our experience there’s a combination of software developers and musicians who see a lot of value in these tools.

DS: What do you hope students will get out of the course?

SS: The curriculum is about 50/50 audio and physical interaction. The focus is partially on what kind of musical experiences make sense given the hardware interface and how to implement them in a way that will actually work reasonably well with limited computing resources. We hope students will produce some sort of interesting app/experience/instrument for their iPhone or iPad. After two weeks it’s more likely this will be in proof-of-concept form rather than something ready to ship to the App Store, but from there, we also hope that they will have the tools to further develop that app and create new ones.

DS: Where do you see this subject matter or field going?

SS: Hmm, it’s a tough question because we’ve really just scratched the surface of what is possible with the current way of thinking about it. But we think there will be a lot more software that really takes advantage of location-awareness, the physicality of the device itself, and the degree to which one’s phone is part of one’s identity.

From a software engineering perspective, there is a lot of room for growth. At the moment, to put together a solid network-enabled app you need to have a handle on at least three different programming frameworks. In an ideal world you would only need one, so that’s a pretty glaring deficiency in the toolset.

DS: What’s the most exciting thing you’ve seen done with these tools?

SS: I’m not sure what the *most* exciting thing is, but it’s pretty cool to see mainstream artists like Björk embracing this type of technology (e.g. Biophilia). It’s also cool to see things like this:

I don’t watch Glee or even have a TV, but here 3,000 complete strangers from around the world are singing together (in support of those hit by the Japanese tsunami in 2011). This is something that just wasn’t possible until very recently, thanks to mobile audio technology.

DS: What new projects are you working on?

SS: I’m working on a mobile application where users can leave audio “traces” throughout their world, and other people can tune in to the traces that have been left around them. I’m also trying to develop that “one framework” I mentioned above.

Originally posted to the GAFFTA blog.


Author: Dorothy R. Santos

Dorothy R. Santos is a writer, editor, curator, and educator whose research areas and interests include new media and digital art, activism, artificial intelligence, networked culture, and biotechnology. Born and raised in San Francisco, California, she holds Bachelor’s degrees in Philosophy and Psychology from the University of San Francisco, and received her Master’s degree in Visual and Critical Studies at the California College of the Arts. She is currently the managing editor for Hyphen magazine. Her work appears in art21, Art Practical, Daily Serving, Rhizome, Hyperallergic, and Public Art Dialogue. She has lectured and spoken at the de Young Museum, Yerba Buena Center for the Arts, Stanford University, School of Visual Arts, and more. Her essay “Materiality to Machines: Manufacturing the Organic and Hypotheses for Future Imaginings” was published in The Routledge Companion to Biology in Art and Architecture in 2016. She serves as executive staff for the Bay Area Society for Art & Activism, board member for the SOMArts Cultural Center, and teaches at the University of California, Santa Cruz in the Digital Art and New Media department.

2 thoughts on “Exploring the Future of Music with Spencer Salazar”

  1. Gee, you really know how to ask questions.

    What I mean to say is that, not only did you give an informative discussion between yourselves, you gave him food for thought pertinent to the goals in sight.

    These are not compliments, but rather how I see it.

    1. Thanks again for the comments! Sometimes, I wish I could shadow some of these folks to really get a peek into what they do and how they do it. I’m trying to survey the tail end of Spencer’s class to get some idea of what the students have been working on. We shall see! 🙂
