I’ve been trying to understand some of the ffmpeg libraries well enough to use them for decoding audio in a personal application I’m working on. Documentation and learning resources are sparse, and what exists is often inconsistent or outdated. Here I try to maintain a list of resources and information I’ve learned about the library.

These notes will be updated as I learn more about ffmpeg.


A problem I’ve encountered with ffmpeg is that many articles about it, and much of the open source code found online, are already outdated. Modern applications like MPC-HC and Chromium do seem to use the latest versions of the APIs. I also quickly learned that most learning is done by reading the examples shipped with ffmpeg as well as the ffplay source.

One particular thing about decoding audio is that it is generally decoded to the format that was used during the encoding process. This could be, for example, planar float PCM, where each sample is a float and each channel is stored in a separate buffer. However, when you want to use the audio or play it through speakers, the required format can differ from the format the decoder produces. Fortunately ffmpeg has libswresample, which makes these conversions easy. There also exists an abstraction of this in libavfilter.
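To make the planar-versus-interleaved distinction concrete, here is a minimal sketch of what the conversion amounts to for float samples. The helper `interleave_f32` is hypothetical, not part of any ffmpeg API; in real code you would configure an `SwrContext` and let libswresample's `swr_convert()` do this work (along with any sample-format and sample-rate conversion):

```c
#include <stddef.h>

/* Hypothetical helper: convert planar float samples (one buffer per
 * channel, as in AV_SAMPLE_FMT_FLTP) into a single interleaved buffer
 * (AV_SAMPLE_FMT_FLT), the layout many audio output APIs expect.
 * planes[ch][s] is sample s of channel ch; the interleaved output is
 * L0 R0 L1 R1 ... for stereo. */
void interleave_f32(const float *const *planes, int nb_channels,
                    int nb_samples, float *out)
{
    for (int s = 0; s < nb_samples; s++)
        for (int ch = 0; ch < nb_channels; ch++)
            out[s * nb_channels + ch] = planes[ch][s];
}
```

This is only the memory-layout half of the problem; libswresample additionally handles resampling, channel remixing, and sample-format changes in one pass.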

March 17, 2013
57fed1c — March 15, 2024