
Seattle DAML: January Meetup on Music and Deep Learning

January 24 @ 6:30 pm - 8:30 pm


We’ll be heading to Galvanize to start off 2017 with two talks on machine learning, focusing on music and deep learning. Our speakers: Dr. Sham Kakade of the University of Washington and Dr. Christopher Re of Stanford University.

Talk 1: MusicNet: A new dataset for tackling the next generation of questions in music and machine learning by Dr. Sham Kakade

Music research has benefited recently from the effectiveness of machine learning methods on a wide range of problems, from music recommendation to music generation. In the related fields of computer vision and speech processing, learned feature representations using deep end-to-end architectures have led to tremendous progress in tasks such as image classification and speech recognition. These supervised architectures depend on large labeled datasets, for example the ImageNet dataset. Inspired by the success of these methods, we have created MusicNet as the beginning of a project to explore these techniques in the realm of music.

MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note’s position in the metrical structure of the composition.
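As a rough illustration of the kind of annotation described above, each label can be thought of as a record pairing a time interval with an instrument, a note, and a metrical position. The field names and values below are hypothetical, chosen for readability, and are not MusicNet's actual schema:

```python
from dataclasses import dataclass

# Hypothetical label record mirroring the description above: the precise
# time of each note, the instrument that plays it, and the note's position
# in the metrical structure. Illustrative only, not MusicNet's real schema.
@dataclass(frozen=True)
class NoteLabel:
    start_sec: float   # when the note begins in the recording
    end_sec: float     # when the note ends
    instrument: str    # e.g. "piano", "violin"
    midi_pitch: int    # MIDI note number (60 = middle C)
    beat: float        # position within the measure

# A tiny made-up excerpt of labels for one recording.
labels = [
    NoteLabel(0.50, 1.00, "piano", 60, 1.0),
    NoteLabel(0.50, 1.50, "violin", 67, 1.0),
    NoteLabel(1.00, 1.50, "piano", 64, 1.5),
]

def notes_sounding_at(t, labels):
    """Return the labels whose time interval covers time t (in seconds)."""
    return [l for l in labels if l.start_sec <= t < l.end_sec]

# At t = 0.75 s, the piano's C and the violin's G are both sounding.
print([l.midi_pitch for l in notes_sounding_at(0.75, labels)])  # prints [60, 67]
```

Dense interval labels of this form are what make supervised tasks like note-level transcription possible: for any instant in the audio, the set of active notes is known.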

We will give an overview of the methods we used to construct this dataset, and we will present our findings from the first attempts to learn musical features from scratch. More broadly, we hope that MusicNet can be a resource for more creative tasks, and we will discuss some of the new problems this dataset might help us address, including automatic music transcription, improving recommendation systems based on audio features, and music synthesis.

More info is available at: http://homes.cs.washington.edu/~thickstn/musicnet.html.

This is joint work with John Thickstun and Zaid Harchaoui.

Talk 2: Ameliorating the Annotation Bottleneck by Dr. Christopher Re

The rise of automatic feature-generation techniques, including deep learning, has the potential to greatly enlarge the pool of machine-learning users. Such methods require large labeled training sets to obtain high-quality results. This raises two related questions:

First, how does one scale deep-learning systems? And second, how can one make it easier for users to build training sets? We describe some very recent work on both questions.

Our contribution for the first question is a recent result that characterizes asynchronous learning as equivalent to changing the momentum term.

Importantly, this result does not depend on convexity and so applies to deep learning. For the second question, we describe a new paradigm called Data Programming that enables users to cheaply and programmatically generate large but noisy training sets. Nonconvex analysis techniques then allow us to model and denoise these noisy training sets. We also report on how non-experts are able to obtain high-quality end-to-end performance using Snorkel, our prototype information-extraction framework that implements these ideas.
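To make the data-programming idea concrete, here is a minimal plain-Python sketch, not Snorkel's actual API: users write cheap heuristic "labeling functions" that vote (or abstain) on each example, and the noisy votes are then combined. For simplicity this sketch denoises by majority vote, whereas the real approach fits a generative model over the labeling functions:

```python
# Minimal sketch of data programming with majority-vote denoising.
# All function names and heuristics here are illustrative assumptions,
# not part of Snorkel's API.
SPAM, HAM, ABSTAIN = 1, 0, -1

# Labeling functions: cheap, noisy heuristics written as code.
def lf_contains_link(text):
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_shouting(text):
    return SPAM if text.isupper() else ABSTAIN

def lf_greeting(text):
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

LFS = [lf_contains_link, lf_shouting, lf_greeting]

def weak_label(text):
    """Combine the labeling functions' votes by majority, ignoring abstains."""
    votes = [lf(text) for lf in LFS if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

docs = [
    "BUY NOW AT http://example.com",  # flagged by the link heuristic
    "hello, see you at the meetup",   # flagged by the greeting heuristic
]
print([weak_label(d) for d in docs])  # prints [1, 0]
```

Each individual heuristic is noisy and incomplete, but a large programmatically generated training set of such weak labels can be modeled and denoised, which is the bottleneck-easing step the talk describes.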

All projects are available at https://github.com/HazyResearch.

About our Sponsor

Galvanize is the premier dynamic learning community for technology. With campuses located in booming technology sectors throughout the country, Galvanize provides a community for each of the following:

  • Education – part-time and full-time training in web development, data science, and data engineering
  • Workspace – whether you’re a freelancer, startup, or established business, we provide beautiful spaces with a community dedicated to supporting your company’s growth
  • Networking – tech-industry events happen constantly on our campuses, ranging from popular Meetups to multi-day international conferences

To learn more about Galvanize, visit galvanize.com.


Galvanize Seattle
111 South Jackson Street
Seattle, WA 98104 United States
