License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/DFU.Vol3.11041.157
URN: urn:nbn:de:0030-drops-34711
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2012/3471/


Grosche, Peter ; Müller, Meinard ; Serrà, Joan

Audio Content-Based Music Retrieval



Abstract

The rapidly growing corpus of digital audio material requires novel retrieval strategies for exploring large music collections. Traditional retrieval strategies rely on metadata that describe the actual audio content in words. When such textual descriptions are not available, one requires content-based retrieval strategies that utilize only the raw audio material. In this contribution, we discuss content-based retrieval strategies that follow the query-by-example paradigm: given an audio query, the task is to retrieve from a music collection all documents that are somehow similar or related to the query. Such strategies can be loosely classified according to their "specificity", which refers to the degree of similarity between the query and the database documents. Here, high specificity refers to a strict notion of similarity, whereas low specificity refers to a rather vague one. Furthermore, we introduce a second classification principle based on "granularity", where one distinguishes between fragment-level and document-level retrieval. Using a classification scheme based on specificity and granularity, we identify various classes of retrieval scenarios, comprising "audio identification", "audio matching", and "version identification". For these three important classes, we give an overview of representative state-of-the-art approaches, which also illustrate the sometimes subtle but crucial differences between the retrieval scenarios. Finally, we give an outlook on a user-oriented retrieval system that combines the various retrieval strategies in a unified framework.

BibTeX - Entry

@InCollection{grosche_et_al:DFU:2012:3471,
  author =	{Peter Grosche and Meinard M{\"u}ller and Joan Serr{\`a}},
  title =	{{Audio Content-Based Music Retrieval}},
  booktitle =	{Multimodal Music Processing},
  pages =	{157--174},
  series =	{Dagstuhl Follow-Ups},
  ISBN =	{978-3-939897-37-8},
  ISSN =	{1868-8977},
  year =	{2012},
  volume =	{3},
  editor =	{Meinard M{\"u}ller and Masataka Goto and Markus Schedl},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2012/3471},
  URN =		{urn:nbn:de:0030-drops-34711},
  doi =		{10.4230/DFU.Vol3.11041.157},
  annote =	{Keywords: music retrieval, content-based, query-by-example, audio identification, audio matching, cover song identification}
}

Keywords: music retrieval, content-based, query-by-example, audio identification, audio matching, cover song identification
Collection: Multimodal Music Processing
Issue Date: 2012
Date of publication: 27.04.2012

