
Sync analyzed data with audio source

Posts: 16
Registered: Sep 29, 2011

Hello, I just discovered Echonest, and I'm really interested in the Analyze API. I'd like to use it to create some kind of visualizer or video game, which means I'll need the analyzed data to be precisely synchronized with the audio. If I upload my MP3 and let your API analyze it, that should be fine. But if, for some reason, I want to use another source of audio with existing analysis data, how can I be sure the data actually fits the MP3 I'm listening to? (For the same song there can be many different MP3s, with length variations of a few seconds.)

I've seen on your blog that the newest version of your Analyzer adds a 'synchstring', but I can't find it anywhere in your API documentation. I've also seen this project and will take a look at it: https://github.com/echonest/synchdata

Finally, I'd like to know if there's a way to use analysis data with streamed audio: for example, knowing the name and artist of the song currently playing, could I display a real-time beat visualization on the screen?

Thanks very much for your help!

Posts: 69
Registered: Sep 17, 2008

Hi R40ul, yeah, the synchstring gives you sample-accurate synchronization between the analysis data and the decoder of your choice: it doesn't need to be the same one we're using. Every track you upload for analysis today features a synchstring; analyses prior to version 3.08 do not.

The synchstring was primarily designed to cope with decoder variations. If you're using a different track altogether, chances are you won't be able to recover the alignment properly. That said, I've had success in many cases where the encoding differences and start offsets were minimal, within about 1 second.
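To give a rough idea of what recovering an alignment means (this is not the synchstring algorithm itself, which lives in the synchdata repo, just the general principle), here's a quick Python sketch that estimates the sample offset between two decodes of the same track by cross-correlating a short window:

```python
# Conceptual sketch only: this is NOT the synchstring algorithm (see
# https://github.com/echonest/synchdata for that). It just shows how the
# offset between two decodes of the same audio can be recovered from a
# few seconds of samples via cross-correlation.
import numpy as np

def estimate_offset(reference, candidate, sample_rate=44100, window_s=5.0):
    """Estimate candidate's lag (in samples) relative to reference."""
    n = int(window_s * sample_rate)
    ref = np.asarray(reference[:n], dtype=np.float64)
    cand = np.asarray(candidate[:n], dtype=np.float64)
    corr = np.correlate(cand, ref, mode="full")
    # The index of the correlation peak maps to the most likely lag;
    # positive means the candidate's content starts later.
    return int(np.argmax(corr)) - (len(ref) - 1)
```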

Re: streaming: assuming you run the stream yourself and have access to the analysis data for the playing track, you can certainly use the same approach. But again, for the alignment code to work, you must know what you're playing to within that 1-second accuracy range and have its analysis available. Only a few seconds of buffered audio are needed to work out the alignment.
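Once you have the offset, driving a beat visualization is just a matter of mapping your playback clock onto the analysis timeline. A minimal sketch, with illustrative names of my own (none of this is an Echo Nest API), assuming you've parsed the 'beats' list out of the analysis JSON:

```python
# Minimal sketch with made-up names (not an Echo Nest API): fire on beats
# by mapping the playback clock into the analysis timeline. 'beats' is the
# list from the analysis JSON; each entry has a 'start' time in seconds.
import bisect

class BeatClock(object):
    def __init__(self, beats, offset=0.0):
        # Beat start times shifted by the recovered alignment offset.
        self.starts = [b["start"] + offset for b in beats]

    def beat_index(self, playback_time):
        """Index of the latest beat at or before playback_time (-1 if none)."""
        return bisect.bisect_right(self.starts, playback_time) - 1

# In a render loop: redraw whenever beat_index() changes, e.g.
# clock = BeatClock(analysis["beats"], offset=recovered_offset_seconds)
# if clock.beat_index(player.position()) != last_beat: flash()
```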

Posts: 16
Registered: Sep 29, 2011

Thanks, that makes more sense. Where can I get more info about the synchstring? I couldn't find anything in the API docs, and I'd like to investigate it a little further.

Thanks again!

Posts: 69
Registered: Sep 17, 2008

I'd say the best place to start is on GitHub. Read the synchdata README, try the example, and then get your own synchstrings by uploading tracks and querying the analysis data: the audio_summary bucket contains an 'analysis_url' pointing to the JSON analysis file, and the synchstrings are in that file. An easy way to start accessing synchstrings is through pyechonest; check out the 'track' object. Sorry, we'll put more information about this on the developer site soon.
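For example, something along these lines (a rough sketch; double-check the names against the pyechonest docs):

```python
# Rough sketch (Python 2 / pyechonest era): fetch a song's analysis JSON
# through the audio_summary bucket and pull out the synchstring. Attribute
# names should be checked against the pyechonest docs; reading the
# synchstring from the "track" section of the JSON is my assumption.
import json
import urllib2

from pyechonest import config, song

config.ECHO_NEST_API_KEY = "YOUR_API_KEY"

results = song.search(artist="Some Artist", title="Some Song",
                      buckets=["audio_summary"], results=1)
analysis_url = results[0].audio_summary["analysis_url"]

analysis = json.load(urllib2.urlopen(analysis_url))
synchstring = analysis["track"]["synchstring"]  # assumed location in the JSON
print synchstring[:60]
```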
