Acoustic Attributes Overview
An acoustic attribute is an estimated subjective quality of a song. It is modeled through learning and given as a single floating-point number ranging from 0.0 to 1.0. Songs can be sorted along any of these axes, or the attributes can be used as filters when constructing custom playlists.
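To make the sorting and filtering use cases concrete, here is a minimal sketch in Python. The song records and attribute values are hypothetical; the only assumption carried over from the text is that each attribute is a float in [0.0, 1.0].

```python
# Hypothetical catalog entries; each acoustic attribute is a float in [0.0, 1.0].
songs = [
    {"title": "Track A", "danceability": 0.82, "energy": 0.91},
    {"title": "Track B", "danceability": 0.35, "energy": 0.12},
    {"title": "Track C", "danceability": 0.67, "energy": 0.58},
]

# Sort along one attribute axis, highest first.
by_danceability = sorted(songs, key=lambda s: s["danceability"], reverse=True)

# Use attributes as filters to construct a custom playlist
# (thresholds here are arbitrary examples).
workout_playlist = [
    s for s in songs
    if s["energy"] >= 0.7 and s["danceability"] >= 0.6
]
```

The same pattern extends to any number of attributes, since every attribute shares the same 0.0–1.0 scale.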
Some of our currently available acoustic attributes are described below.
For a detailed description of how to interpret the analyzer output see the Analyze Documentation.
Danceability
Describes how suitable a song is for dancing, based on a combination of musical elements; the more suitable for dancing, the closer the value is to 1.0. The musical elements that best characterize danceability include tempo, rhythm stability, beat strength, and overall regularity.
Energy
Represents a perceptual measure of intensity and activity throughout the song. Typical energetic songs feel fast, loud, and noisy: death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this measure include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
Speechiness
Detects the presence of spoken words in a track. The more exclusively speech-like the track (e.g. talk show, audio book, poetry), the closer the attribute value is to 1.0. Values above 0.66 describe tracks that are most probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including cases such as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
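The three speechiness ranges above can be sketched as a simple classifier. This is an illustration of the documented thresholds only; the function name and labels are hypothetical, not part of the API.

```python
def classify_speechiness(value: float) -> str:
    """Map a speechiness value in [0.0, 1.0] to the ranges described above."""
    if value > 0.66:
        return "spoken word"   # most probably entirely spoken words
    if value >= 0.33:
        return "mixed"         # may contain both music and speech, e.g. rap
    return "music"             # music and other non-speech-like tracks
```

For example, a talk-show episode with speechiness 0.9 would classify as "spoken word", while a typical instrumental at 0.05 would classify as "music".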