GPUImage – Video Stream Processing with AVCaptureVideoDataOutputSampleBufferDelegate

If this is your first time reading this blog series, start here: http://blog.csdn.net/xoxo_x/article/details/52695032

If your app only needs to show a beautified camera preview of yourself, see http://blog.csdn.net/xoxo_x/article/details/52743107

If you want to learn more ways to use the filters, see http://blog.csdn.net/xoxo_x/article/details/52749033

What we will do next is render the CMSampleBufferRef data and then display a beautified (skin-smoothing) result.

1. Capturing the video stream via AVCaptureVideoDataOutputSampleBufferDelegate:

#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()<AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureVideoPreviewLayer  *preLayer;
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupCaptureSession];
}
//捕獲到視訊的回撥函式
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
}
//開啟攝像頭
- (void)setupCaptureSession
{
NSError *error = nil;
// 建立session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// 可以配置session以產生解析度較低的視訊幀,如果你的處理演算法能夠應付(這種低解析度)。
// 我們將選擇的裝置指定為中等質量。
session.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device;
for(AVCaptureDevice *dev in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
{
//這裡使用前置攝像頭
//這裡修改AVCaptureDevicePositionFront成AVCaptureDevicePositionBack可獲取後端攝像頭
if([dev position]==AVCaptureDevicePositionFront)
{
device=dev;
break;
}
}
//  我們初始化一個AVCaptureDeviceInput物件,以建立一個輸入資料來源,該資料來源為捕獲會話(session)提供視訊資料
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[session addInput:input];
// AVCaptureVideoDataOutput可用於處理從視訊中捕獲的未經壓縮的幀。一個AVCaptureVideoDataOutput例項能處理許多其他多媒體API能處理的視訊幀,你可以通過captureOutput:didOutputSampleBuffer:fromConnection:這個委託方法獲取幀,使用setSampleBufferDelegate:queue:設定抽樣快取委託和將應用回撥的佇列。
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// 配置output物件
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
//dispatch_release(queue);
// Specify the pixel format 設定輸出的引數
output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt: 640], (id)kCVPixelBufferWidthKey,
[NSNumber numberWithInt: 480], (id)kCVPixelBufferHeightKey,
nil];
//預覽的圖層
self.preLayer = [AVCaptureVideoPreviewLayer layerWithSession: session];
self.preLayer.frame = self.view.frame;
self.preLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:self.preLayer];
// 開始捕獲畫面
[session startRunning];
}
@end
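
One practical note not shown above: on recent iOS versions the app must be granted camera access before the session delivers any frames (and Info.plist needs an NSCameraUsageDescription entry). A minimal sketch of requesting access up front with the standard AVCaptureDevice API, calling setupCaptureSession only once permission is granted:

// Ask for camera permission before building the capture session
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                         completionHandler:^(BOOL granted) {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (granted) {
            [self setupCaptureSession];
        } else {
            NSLog(@"Camera access denied");
        }
    });
}];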

Result:

(screenshot: the full-screen camera preview)

2. So far no filter has been applied. Next we add one by processing the CMSampleBufferRef data delivered through AVCaptureVideoDataOutputSampleBufferDelegate.

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Process the CMSampleBufferRef here
}

First, let's process the data so that the video frames show up on screen, as follows:

// Delegate callback invoked for every captured video frame
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Convert the sample buffer into a UIImage
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // mData is an NSData object; 0.5 is the JPEG compression quality
    NSData *mData = UIImageJPEGRepresentation(image, 0.5);
    // UIKit work must run on the main thread, otherwise the image will not be displayed
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.view addSubview:self.imagev];
        [self.imagev setImage:[UIImage imageWithData:mData]];
    });
    NSLog(@"output, image: %@", image);
}
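
The JPEG round-trip above is only there to show where the compression-quality knob lives. If all you want is to display the frame, a simpler sketch is to assign the converted UIImage directly and skip the encode/decode entirely:

    dispatch_async(dispatch_get_main_queue(), ^{
        // Assign the converted frame directly; no JPEG encode/decode is needed just for display
        self.imagev.image = image;
    });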

self.imagev is a property declared on the ViewController and initialized in viewDidLoad:

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.imagev = [[UIImageView alloc] init];
    self.imagev.frame = CGRectMake(0, 300, 300, 200);
    self.imagev.backgroundColor = [UIColor orangeColor];
    [self setupCaptureSession];
}

Notice that the frame of self.preLayer and the frame of self.imagev do not overlap; to make the comparison easier, we leave the original self.preLayer in place rather than removing it.
The implementation of imageFromSampleBuffer: is as follows:

// Convert a sample buffer into a UIImage
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data (expects 32BGRA pixels)
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image,
    // rotating it to compensate for the camera's native orientation
    //UIImage *image = [UIImage imageWithCGImage:quartzImage];
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return image;
}
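
This conversion works because we asked the output for kCVPixelFormatType_32BGRA above, which is exactly what the CGBitmapContext here expects. As an aside, a shorter route (a sketch, not what this post uses) is to let Core Image wrap the pixel buffer; note that a CIImage-backed UIImage is rendered lazily, so for repeated on-screen display the CGImage path above is often the safer choice:

    // Requires #import <CoreImage/CoreImage.h>
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    UIImage *image = [UIImage imageWithCIImage:ciImage
                                         scale:1.0f
                                   orientation:UIImageOrientationRight];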

The result now looks like this:

(screenshot: the orange UIImageView showing the converted video frames on top of the preview layer)

We can clearly see that the video now also appears inside the UIImageView and is displayed successfully, so this step is done.

3. We have successfully obtained this image, so can we process it? Next comes the filter processing.
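
As a rough preview of what that can look like, here is a minimal sketch of running the per-frame UIImage through a GPUImage filter. GPUImageSepiaFilter stands in for whatever beautify filter you use, the import path depends on how GPUImage is integrated, and this still-image path is the simplest (though not the most efficient) way to apply GPUImage to these frames:

#import "GPUImage.h"   // or <GPUImage/GPUImage.h>, depending on your integration

// Inside the delegate callback, after obtaining `image`:
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:image];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];   // swap in your beautify filter here
[source addTarget:filter];
[filter useNextFrameForImageCapture];
[source processImage];
UIImage *filteredImage = [filter imageFromCurrentFramebuffer];

dispatch_async(dispatch_get_main_queue(), ^{
    self.imagev.image = filteredImage;
});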