====== DeepPerformanceDiff ======

| Authors | [[https://www.twitter.com/utstikkar|Amélie Anglade]], [[https://www.twitter.com/jganseman|Joachim Ganseman]], [[https://www.twitter.com/peachnote|Vladimir Viro]] |
| Affiliation | Various |
| Code | [[https://github.com/peachnote/sync-visualization|Github Link]] |

**DeepPerformanceDiff** allows you to compare renditions of the same piece (with the score provided by Peachnote through IMSLP) by different performers (as found on YouTube). Differences in tempo and loudness are visualised in a graph that makes it easy to spot sections that a performer plays markedly differently from the others.
  
===== Origins =====

This project originates from an idea that is part of the [[http://music-connection-machine.peachnote.com/|Music Connection Machine]] brainstorm taking place at [[http://www.peachnote.com|Peachnote]]. A minimal mockup version had already been coded. During the HAMR@ISMIR hackday, this mockup was extended into a fully functional prototype that includes more audio features (notably loudness), better search functionality, and a more flexible layout.
  
===== Used Technologies and Data =====

  * Most of the code has been written in JavaScript.
  * Loudness was calculated in Python using [[http://essentia.upf.edu/|Essentia]].
  * The web interface was wrapped in [[http://getbootstrap.com/|Bootstrap]].
  * [[http://imslp.org/|IMSLP]] scores are displayed using the [[http://www.peachnote.com/api.html|Peachnote API]].
  * Videos are gathered from [[http://www.youtube.com|YouTube]].
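As an illustration of the loudness feature, here is a minimal Python sketch of a frame-wise loudness curve. It is a rough stand-in for the Essentia extractor actually used in the project; the RMS measure, function name, and parameters are our own illustrative choices.

```python
import math

def rms_loudness(samples, frame_size=2048, hop_size=1024):
    """Frame-wise RMS loudness (in dB) of a mono signal.

    A rough stand-in for the Essentia loudness extraction used in the
    project: slice the signal into overlapping frames and report the
    RMS energy of each frame on a decibel scale.
    """
    curve = []
    for start in range(0, max(1, len(samples) - frame_size + 1), hop_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        curve.append(20 * math.log10(rms + 1e-12))  # avoid log(0)
    return curve

# A 440 Hz test tone that fades out: the loudness curve should decrease.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) * (1 - n / sr) for n in range(sr)]
curve = rms_loudness(tone)
```

A curve like this, computed per performance and aligned to the score, is what the comparison graph draws.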

===== Relation to MIR Research and Potential Impact =====

The study of performance practice, and especially of differences between performers, is a research topic that has garnered interest in the past (([[http://www.sciencedirect.com/science/article/pii/S0004370205000196|Stamatatos & Widmer 2005]])) (([[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3842509/|Gingras et al. 2013]])) (([[http://www.mirlab.org/conference_papers/International_Conference/ISMIR%202008/papers/ISMIR2008_240.pdf|Sapp 2008]])). Many of these papers rely on (weighted combinations of) tempo and loudness to distinguish between performers. DeepPerformanceDiff visualizes both of these features in a single graph.
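As an illustration of such a weighted combination, a per-position difference between two aligned performances could be computed along these lines. The weights, normalisation, and function names here are illustrative assumptions, not taken from the cited papers or from the project code:

```python
def performance_diff(tempo_a, tempo_b, loud_a, loud_b, w_tempo=0.5, w_loud=0.5):
    """Weighted per-position difference between two performances.

    The curves are assumed to be sampled at the same score positions
    (the alignment itself comes from the score-to-video sync data).
    Weights and normalisation are illustrative choices.
    """
    diffs = []
    for ta, tb, la, lb in zip(tempo_a, tempo_b, loud_a, loud_b):
        d_tempo = abs(ta - tb) / max(ta, tb)  # relative tempo deviation
        d_loud = abs(la - lb)                 # loudness deviation (dB)
        diffs.append(w_tempo * d_tempo + w_loud * d_loud)
    return diffs
```

Peaks in such a difference curve mark the score positions where two renditions diverge most, which is exactly what the graph makes visible.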
  
This intuitive visualization not only opens up possibilities for data exploration, but is also of practical interest. We are thinking specifically of performance practice education: more and more people upload recordings of themselves playing to YouTube, so everyone can now compare themselves to others, to professional performers, or to earlier versions of themselves, which also makes it possible to track one's own progress.
  
===== Try it out! =====

The code is available for download on [[https://github.com/peachnote/sync-visualization|Github]]. Fork or clone the repository and point a webserver at the project directory. Additionally, you need a few precomputed files - [[https://www.twitter.com/peachnote|contact Peachnote]] to obtain them:
  * Alignment definitions (JSON) between IMSLP scores and YouTube videos
  * Alignment quality definitions (JSON) between IMSLP scores and YouTube videos
  
The Bootstrap-wrapped version is in a separate branch in the repository called 'bootstrap'.
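The exact schema of these files is defined by Peachnote. Purely as a hypothetical illustration, an alignment could map score positions to video timestamps and be interpolated as follows; the field names and structure here are assumptions, not the real format:

```python
import json

# Hypothetical alignment format: the real files come from Peachnote and
# their schema may differ. Here each entry maps a score position (in
# measures) to a timestamp (in seconds) in one YouTube video.
alignment_json = """
[
  {"score_position": 0, "video_time": 1.5},
  {"score_position": 4, "video_time": 9.2},
  {"score_position": 8, "video_time": 17.0}
]
"""

def video_time_at(alignment, position):
    """Linearly interpolate the video timestamp for a score position."""
    for a, b in zip(alignment, alignment[1:]):
        if a["score_position"] <= position <= b["score_position"]:
            span = b["score_position"] - a["score_position"]
            frac = (position - a["score_position"]) / span
            return a["video_time"] + frac * (b["video_time"] - a["video_time"])
    raise ValueError("position outside the aligned range")

alignment = json.loads(alignment_json)
```

A mapping of this kind is what lets the interface jump every synced video to the same spot in the score.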
  
===== Future Work =====
  
For the study of performance, an obvious next step is clustering of the tempo/loudness profiles. On such a large body of data, this could reveal stylistic relationships between performers, perhaps allowing us to detect schools of performance, teacher-student relationships, or regional differences in interpretation.
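As a sketch of what such a clustering could look like, here is a minimal single-linkage grouping of tempo profiles by Euclidean distance. The distance measure, threshold, and names are illustrative choices, not project code:

```python
import math

def profile_distance(p, q):
    """Euclidean distance between two equal-length tempo profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def cluster_profiles(profiles, threshold):
    """Greedy single-linkage clustering via union-find: performers whose
    profiles lie within `threshold` of any cluster member end up in the
    same cluster. Returns one cluster label per profile."""
    labels = list(range(len(profiles)))

    def find(i):
        while labels[i] != i:
            i = labels[i]
        return i

    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            if profile_distance(profiles[i], profiles[j]) < threshold:
                labels[find(j)] = find(i)
    return [find(i) for i in range(len(profiles))]

# Two "romantic" renditions with big tempo swings vs. one steady rendition.
profiles = [[100, 140, 90], [102, 138, 92], [120, 121, 119]]
clusters = cluster_profiles(profiles, threshold=10.0)
```

With real data one would likely resample the profiles to a common score grid first and use a more robust distance, but the grouping idea is the same.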
  
Music education practice would also be well served by a tool that visualizes the progress of students. Professional performances already appear often on YouTube, and given that we live in the age of Instagram and Vine, we foresee no lack of uploads of amateur performances in the near future either. Comparisons between performances by people at different levels and with different abilities may provide valuable pedagogical insights for tutors.
  
deepperformancediff.1445783602.txt.gz · Last modified: 2015/10/25 10:33 by jganseman