When Classical Musicians Go Digital

Among the exhibits on display at the Royal Academy of Music during its centenary tribute to the violinist Yehudi Menuhin is a single page of a Bach violin sonata. The printed page is darkened with Menuhin’s pencil markings fixing the contours of a phrase, the direction of bow strokes, fingerings, the speed and width of vibrato: the expression, in graphite, of a player’s interpretation and craft. Read more at the New York Times.

Modeling continuous human artifacts: music, prosody, and historical documents

Speaker:  Taylor Berg-Kirkpatrick (UC Berkeley/CMU)

Time:   Monday, June 6, 12 noon

Location:  CSE 305


Sign up here:  https://reserve.cs.washington.edu/visitor/week.php?year=2016&month=6&day=5&area=5&room=2741


Lunch will be served.


While acoustic signals are continuous in nature, the ways that humans generate pitch in speech and music involve important discrete decisions. As a result, models of pitch must resolve a tension between continuous and combinatorial structure. Similarly, interpreting images of printed documents requires reasoning about both continuous pixels and discrete characters. Focusing on several tasks that involve human artifacts, I’ll present probabilistic models designed to resolve this tension. First, I’ll describe an approach to historical document recognition that uses a statistical model of the historical printing press to reason about images, and, as a result, is able to decipher historical documents in an unsupervised fashion. Second, I’ll present an unsupervised system that transcribes acoustic piano music into a symbolic representation by jointly describing the discrete structure of sheet music and the continuous structure of piano sounds. Finally, I’ll present a supervised method for predicting prosodic intonation from text that treats discrete prosodic decisions as latent variables, but directly models pitch in a continuous fashion.
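The interplay between discrete latent decisions and continuous observations can be made concrete with a toy example (this is only an illustration of the general idea, not the speaker’s actual models): a hidden Markov model whose discrete states are hypothetical pitch levels and whose emissions are continuous, noisy pitch observations. Viterbi decoding recovers the discrete structure from the continuous signal.

```python
import numpy as np

def viterbi(obs, means, sigma, trans, init):
    """Most likely discrete state sequence given continuous observations.

    Each state k emits a Gaussian centered at means[k] with std sigma;
    trans[i, j] is the probability of moving from state i to state j.
    """
    K, T = len(means), len(obs)
    # Log emission scores (Gaussian, dropping constants shared by all states)
    log_em = -0.5 * ((obs[None, :] - means[:, None]) / sigma) ** 2
    log_trans = np.log(trans)
    dp = np.log(init) + log_em[:, 0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = dp[:, None] + log_trans       # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + log_em[:, t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Three hypothetical discrete pitch levels (Hz)
means = np.array([220.0, 247.0, 262.0])
trans = np.full((3, 3), 0.1) + np.eye(3) * 0.7  # "sticky" states
init = np.array([1 / 3, 1 / 3, 1 / 3])

# A noisy continuous pitch track that dwells near each level in turn
obs = np.array([221, 219, 222, 246, 248, 245, 261, 263, 260], dtype=float)
states = viterbi(obs, means, sigma=5.0, trans=trans, init=init)
print(states)  # [0, 0, 0, 1, 1, 1, 2, 2, 2]
```

The sticky transition matrix encodes the combinatorial prior (pitch levels persist), while the Gaussian emissions model the continuous signal; decoding resolves the two jointly rather than quantizing each frame independently.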




Taylor Berg-Kirkpatrick will be starting as an Assistant Professor of Language Technologies in the School of Computer Science at Carnegie Mellon University in the Fall of 2016. Currently, Taylor is a Research Scientist at Semantic Machines Inc. He recently completed his PhD in computer science at the University of California, Berkeley, working with Professor Dan Klein. Taylor’s research focuses on using machine learning to understand structured human data, including language but also sources like music, document images, and other complex artifacts.

First performance in 1,000 years: ‘lost’ songs from the Middle Ages are brought back to life

“An ancient song repertory will be heard for the first time in 1,000 years this week after being ‘reconstructed’ by a Cambridge researcher and a world-class performer of medieval music”

See more at: http://www.cam.ac.uk/research/news/first-performance-in-1000-years-lost-songs-from-the-middle-ages-are-brought-back-to-life-0#sthash.D7QTEiRd.dpuf

Could early music training help babies learn language?

“Music training early in life (before the age of seven) can have a wide range of benefits beyond musical ability.

“For instance, school-age children (six to eight years old) who participated in two years of musical classes four hours each week showed better brain responses to consonants compared with their peers who started one year later. This suggests that music experience helped children hear speech sounds.”

For the full story, see The Conversation.