For a project that I have been working on recently, I came across the need to create thumbnail images for videos in an iOS application. The iOS SDK makes this really simple through AVAssetImageGenerator, which uses the default enabled video track(s) to generate images.

Generating a single image in isolation can require decoding a large number of video frames with complex interdependencies. A one-off thumbnail can be produced synchronously with copyCGImageAtTime:actualTime:error:, but if you require a series of images, you can achieve far greater efficiency using the asynchronous method, generateCGImagesAsynchronouslyForTimes:completionHandler:, which employs decoding efficiencies similar to those used during playback.
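For completeness, here is a minimal sketch of the synchronous path (videoPath is a placeholder for your own file path). Note that copy… methods follow the Create rule, so the caller must release the returned CGImageRef:

    NSURL *url = [NSURL fileURLWithPath:videoPath];
    AVAsset *asset = [AVAsset assetWithURL:url];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES;

    NSError *error = nil;
    CMTime actualTime;
    // Blocks the calling thread until the frame is decoded
    CGImageRef image = [generator copyCGImageAtTime:CMTimeMakeWithSeconds(1, 600)
                                         actualTime:&actualTime
                                              error:&error];
    if (image) {
        UIImage *thumb = [UIImage imageWithCGImage:image];
        CGImageRelease(image); // caller owns the copied image
    }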

We will use generateCGImagesAsynchronouslyForTimes:completionHandler: to generate the images from the video. It expects an NSArray of NSValue objects, each wrapping a CMTime that specifies an asset time at which we need an image.
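If you want a strip of thumbnails rather than a single frame, build the array with one NSValue per desired time. A sketch, assuming one thumbnail every 10 seconds over the first minute of the asset:

    NSMutableArray *times = [NSMutableArray array];
    for (NSTimeInterval seconds = 0; seconds < 60; seconds += 10) {
        // 600 is a common timescale that divides evenly by typical video frame rates
        CMTime time = CMTimeMakeWithSeconds(seconds, 600);
        [times addObject:[NSValue valueWithCMTime:time]];
    }
    // The completion handler will be invoked once per requested time.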

Here's a simple function that can be used to generate images asynchronously from the video:

- (void)generateImage
{
    NSURL *url = [NSURL fileURLWithPath:_videoPath];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    // Apply the video track's preferred transform so rotated video comes out upright
    generator.appliesPreferredTrackTransform = YES;
    // Constrain the output to at most 128x128, preserving aspect ratio
    generator.maximumSize = CGSizeMake(128, 128);

    // 30 seconds into the video (value 30 at a timescale of 30)
    CMTime thumbTime = CMTimeMakeWithSeconds(30, 30);

    AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
        if (result != AVAssetImageGeneratorSucceeded) {
            NSLog(@"couldn't generate thumbnail, error: %@", error);
            return;
        }
        // TODO: Do something with the image
    };

    [generator generateCGImagesAsynchronouslyForTimes:@[[NSValue valueWithCMTime:thumbTime]]
                                    completionHandler:handler];
}
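One caveat: the completion handler is not guaranteed to run on the main queue, so any UIKit work must be dispatched there. A sketch of what the TODO inside the handler might look like (thumbnailView is a hypothetical UIImageView property):

    // Inside the completion handler, after the success check
    UIImage *thumbnail = [UIImage imageWithCGImage:im];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.thumbnailView.image = thumbnail; // hypothetical outlet
    });

The CGImage is wrapped in a UIImage before dispatching because the generator owns the CGImageRef passed to the handler and may release it once the handler returns.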