Music is present in everyday life and serves a wide range of purposes. Musical databases have grown considerably in both number and size over the past years; consequently, the development of accurate tools for music information retrieval (MIR) has become an important topic in computer science. Theoretical advances in machine learning algorithms, together with the abundance of recordings available in digital audio formats, the growing quality and accessibility of online symbolic music data, and the availability of tools and toolboxes for extracting musical properties, have motivated many studies on machine learning and MIR. A relevant problem in MIR is the classification of songs into genres, which enables the summarization of common features (or patterns) shared by different songs. Automatic music genre classification plays a fundamental role in music indexing and retrieval, allowing websites and device music engines to manage and label music content. Most studies have addressed this problem by extracting characteristics from the audio content, and some have provided overviews of audio features and classification algorithms for music genre classification. However, precise high-level musical information can be extracted from symbolic data (e.g., digital music scores), which is known to be closely related to the way humans perceive music. A number of approaches use such musical information to process, retrieve, and classify music content. This manuscript provides an overview of the most important approaches to music genre classification that consider the symbolic representation of music data. Current issues inherent to this music format, as well as the main algorithms adopted for modeling the music feature space, are presented.