Name: Ian Gowen
Contact: [email protected]
School: University of Oregon
Major: Computer Science
Project: Blender/ffmpeg
Mentor: Alexander Ewering
Mentoring Organization: Blender Foundation (http://www.blender.org/)
Until now, Blender's support for digital-video input and output on Linux has been extremely sparse: you were limited to either uncompressed AVI or a sequence of numbered images. My task was to integrate the FFMPEG (http://ffmpeg.sourceforge.net/) libavformat and libavcodec libraries into Blender's existing video I/O code. Doing so would add support for many other video formats--MPEG-1, MPEG-2, MPEG-4, QuickTime, DV, and AVI, to name a few.
The first step, obviously, is translating between Blender's and FFMPEG's internal data structures. FFMPEG stores each image as an array of uint32_t, one element per pixel, whereas Blender stores them as unsigned char (8 bits), four elements per pixel. It would seem that all you have to do is cast the uint32_t to unsigned char; however, FFMPEG stores each pixel as AGBR and Blender uses RGBA, so I also had to reorder each group of four channel bytes.
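As a rough illustration, the per-pixel translation might look like the following. The function name is hypothetical, and the exact byte layout is an assumption; real code would also have to account for the platform's endianness and the decoder's actual pixel format:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the input-side conversion described above: unpack each
 * AGBR-ordered uint32_t pixel into four RGBA bytes for Blender.
 * agbr_to_rgba is an illustrative name, not Blender's real code. */
static void agbr_to_rgba(const uint32_t *src, unsigned char *dst, size_t npixels)
{
    size_t i;
    for (i = 0; i < npixels; i++) {
        /* View the packed pixel as its four component bytes: A, G, B, R */
        const unsigned char *p = (const unsigned char *)&src[i];
        dst[4 * i + 0] = p[3]; /* R */
        dst[4 * i + 1] = p[1]; /* G */
        dst[4 * i + 2] = p[2]; /* B */
        dst[4 * i + 3] = p[0]; /* A */
    }
}
```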
FFMPEG has two major data structures: AVCodecContext (the handle to an audio/video stream's codec) and AVFormatContext (the handle to the file itself). Low-level file operations are done for you--you don't have to call any C- or UNIX-style I/O functions. Once the file is opened and the structs are set up correctly, it's trivial to read individual frames from the file into memory and convert them into a format Blender can use. The hard part is output. When you read a video file, FFMPEG sets up both structs for you; there is very little you have to do to get at the underlying data. When writing a file, however, you must configure everything yourself. Additionally, some formats require special setup--something I was not able to do without extra (potentially confusing) input from the user. I was forced to narrow the possibilities to a few choice formats: MPEG-1, -2, and -4, AVI, QuickTime, and DV.
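To make the read path concrete, here is a condensed sketch of the sequence of libavformat/libavcodec calls involved, written against the FFMPEG API of that era. Error handling is omitted and struct layouts varied between releases, so treat this as an outline rather than working code:

```
AVFormatContext *fmt_ctx;
AVCodecContext *codec_ctx;
AVPacket pkt;
AVFrame *frame = avcodec_alloc_frame();
int i, vstream = -1, got_picture;

av_register_all();

/* Opening the file fills in the AVFormatContext for you */
av_open_input_file(&fmt_ctx, "input.avi", NULL, 0, NULL);
av_find_stream_info(fmt_ctx);

/* Find the video stream and open its codec */
for (i = 0; i < fmt_ctx->nb_streams; i++)
    if (fmt_ctx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO)
        vstream = i;
codec_ctx = fmt_ctx->streams[vstream]->codec;
avcodec_open(codec_ctx, avcodec_find_decoder(codec_ctx->codec_id));

/* Read packets and decode frames */
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
    if (pkt.stream_index == vstream) {
        avcodec_decode_video(codec_ctx, frame, &got_picture,
                             pkt.data, pkt.size);
        if (got_picture) {
            /* frame->data now holds a decoded picture;
             * convert it into Blender's format here */
        }
    }
    av_free_packet(&pkt);
}

avcodec_close(codec_ctx);
av_close_input_file(fmt_ctx);
```

Writing is harder precisely because none of the structs above come pre-populated: the codec parameters, stream setup, and format-specific headers must all be filled in by hand before the first frame is written.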
On the Blender side, implementation was very easy. For input, Blender determines the file type and calls an initialization function. Then, for each frame, it calls a function that hands back image data in Blender's image-buffer format, the ImBuf struct. Finally, it calls a third function to close the input file. I simply had to add an extra case statement to handle the formats FFMPEG supports, then perform the translation described earlier.
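The open/per-frame/close pattern and the extra case statement can be sketched as a self-contained miniature. All names here (anim, the stub fetch functions, anim_fetch_frame) are illustrative stand-ins, not Blender's real identifiers, and the stub decoders just return empty buffers:

```c
#include <stdlib.h>

/* Hypothetical miniature of Blender's input path; ImBuf here is a
 * stand-in for the real struct in Blender's imbuf module. */
typedef struct ImBuf { int x, y; unsigned int *rect; } ImBuf;

enum anim_type { ANIM_AVI, ANIM_FFMPEG };  /* ANIM_FFMPEG: the new case */

typedef struct anim { enum anim_type type; } anim;

static ImBuf *make_ibuf(int x, int y)
{
    ImBuf *ibuf = calloc(1, sizeof(ImBuf));
    ibuf->x = x;
    ibuf->y = y;
    ibuf->rect = calloc((size_t)x * y, sizeof(unsigned int));
    return ibuf;
}

/* Stub decoders: a real build would call the native AVI reader or the
 * new FFMPEG wrapper here. */
static ImBuf *avi_fetch(anim *a, int frame)    { (void)a; (void)frame; return make_ibuf(320, 240); }
static ImBuf *ffmpeg_fetch(anim *a, int frame) { (void)a; (void)frame; return make_ibuf(720, 576); }

/* The per-frame entry point: one extra case routes FFMPEG-supported
 * formats through the new code path. */
static ImBuf *anim_fetch_frame(anim *a, int frame)
{
    switch (a->type) {
    case ANIM_AVI:    return avi_fetch(a, frame);
    case ANIM_FFMPEG: return ffmpeg_fetch(a, frame);
    }
    return NULL;
}
```

The point of the pattern is that the caller never sees which decoder produced the ImBuf, which is why slotting FFMPEG in required so little change on the Blender side.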
For output, all I had to do was invert the input process: namely, translate from an ImBuf to FFMPEG's uint32_t format, and write the frames to the output file.
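The output-side translation is the mirror image of the input conversion shown earlier. Again the function name is hypothetical and the byte layout an assumption; real code must also handle endianness:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the output-side conversion: pack Blender's four RGBA bytes
 * per pixel back into one AGBR-ordered uint32_t for FFMPEG.
 * rgba_to_agbr is an illustrative name, not Blender's real code. */
static void rgba_to_agbr(const unsigned char *src, uint32_t *dst, size_t npixels)
{
    size_t i;
    for (i = 0; i < npixels; i++) {
        /* Write the packed pixel's component bytes: A, G, B, R */
        unsigned char *p = (unsigned char *)&dst[i];
        p[0] = src[4 * i + 3]; /* A */
        p[1] = src[4 * i + 1]; /* G */
        p[2] = src[4 * i + 2]; /* B */
        p[3] = src[4 * i + 0]; /* R */
    }
}
```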