Note: This web service is running an out-of-date version of mir_eval and should be treated only as a proof-of-concept.
Use the form below to evaluate annotations for a given MIR task.
The file formats should be as described in the mir_eval documentation.
Some example annotation files can be found within mir_eval's tests.
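For example, a beat annotation is plain text with one event time, in seconds, per line (the exact layout varies by task; this sketch assumes the simple event-list format used for beats and onsets):

```
5.0
5.5
6.0
6.5
```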
You can also query this web service as an API, e.g.:
curl -F "task=beat" -F "reference_file=@reference.txt" -F "estimated_file=@estimated.txt" http://labrosa.ee.columbia.edu/mir_eval/
The task field should be one of beat, chord, melody, onset, pattern, segment, tempo, key, or transcription.
If you're running a large-scale evaluation, it will probably be more efficient to run mir_eval locally.
Installation instructions for mir_eval can be found here.
You can even run mir_eval with minimal Python knowledge by using the evaluators.
If you use mir_eval in a research project, please cite the following paper:
Colin Raffel, Brian McFee, Eric J. Humphrey, Justin Salamon, Oriol Nieto, Dawen Liang, and Daniel P. W. Ellis.
"mir_eval: A Transparent Implementation of Common MIR Metrics"
In Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR), 2014.