
AVCapture Part 5: OpenGL


I have explored quite a few rendering approaches before, but unfortunately they were all based on system controls and never touched real OpenGL. There are also not many examples online of pure OpenGL rendering on macOS, so this time I will implement it based on Apple's GLEssentials sample code.

Pure OpenGL requires a fair amount of background knowledge; if you are not familiar with OpenGL, take a look at my earlier GLFW articles first.
First, we need to create the vertex data:

- (GLuint) buildVAO
{   
    
    // Set up vertex data (and buffer(s)) and attribute pointers.
    // Each vertex is a position (x, y, z) followed by a texture coordinate (s, t).
    GLfloat vertices[] = {
        -1.0f, -1.0f, 0.0f,  1.0f, 1.0f,
         1.0f,  1.0f, 0.0f,  0.0f, 0.0f,
        -1.0f,  1.0f, 0.0f,  1.0f, 0.0f,
        -1.0f, -1.0f, 0.0f,  1.0f, 1.0f,
         1.0f, -1.0f, 0.0f,  0.0f, 1.0f,
         1.0f,  1.0f, 0.0f,  0.0f, 0.0f
    };
    
    GLuint VBO, VAO;
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    // Bind the Vertex Array Object first, then bind and set vertex buffer(s) and attribute pointer(s).
    glBindVertexArray(VAO);
    
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    
    // Attribute 0: vertex position (3 floats per vertex)
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    
    // Attribute 1: texture coordinate (2 floats per vertex)
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);
    
    glBindBuffer(GL_ARRAY_BUFFER, 0); // Note that this is allowed, the call to glVertexAttribPointer registered VBO as the currently bound vertex buffer object so afterwards we can safely unbind
    
    glBindVertexArray(0); // Unbind VAO (it's always a good thing to unbind any buffer/array to prevent strange bugs)
    
    
    return VAO;
}

The vertex data describes a rectangle made of two triangles, along with the texture coordinate of each vertex. The rest of the code creates the VAO.

Next come the shaders. Our window is very simple: it just displays the texture.

#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoord;

out vec2 TexCoord;


void main()
{
    gl_Position = vec4(position.x, position.y, position.z, 1.0);
    TexCoord = texCoord;
}

TexCoord is the texture coordinate, which is passed on to the fragment shader.

#version 330 core
in vec2 TexCoord;

out vec4 color; 

uniform sampler2D ourTexture;

void main()
{
    color = texture(ourTexture, TexCoord);
}

The texture() function samples the color at the given coordinate from the texture.
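
The post builds on Apple's GLEssentials sample, which handles shader loading, so the program setup is not shown. As a rough sketch of what compiling and linking these two shaders into _characterPrgName could look like (the helper names and error handling below are my own assumptions, not the original code):

static GLuint compileShader(GLenum type, const GLchar *source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);
    
    GLint status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE) {
        GLchar log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        NSLog(@"Shader compile error: %s", log);
    }
    return shader;
}

- (GLuint) buildProgram:(const GLchar*)vertexSrc withFragment:(const GLchar*)fragmentSrc
{
    GLuint vertShader = compileShader(GL_VERTEX_SHADER, vertexSrc);
    GLuint fragShader = compileShader(GL_FRAGMENT_SHADER, fragmentSrc);
    
    GLuint program = glCreateProgram();
    glAttachShader(program, vertShader);
    glAttachShader(program, fragShader);
    glLinkProgram(program);
    
    GLint linked = 0;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (linked == GL_FALSE) {
        GLchar log[1024];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        NSLog(@"Program link error: %s", log);
    }
    
    // The shader objects are no longer needed once the program is linked
    glDeleteShader(vertShader);
    glDeleteShader(fragShader);
    return program;
}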

-(GLuint) buildTexture:(demoImage*) image
{
    GLuint texName;
    
    if (_characterTexName == 0) {
        // Create a texture object to apply to model
        glGenTextures(1, &texName);
    } else {
        texName = _characterTexName;
    }
    
    glBindTexture(GL_TEXTURE_2D, texName);
    // Set up filter and wrap modes for this texture object
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    
    // Indicate that pixel rows are tightly packed 
    //  (defaults to stride of 4 which is kind of only good for
    //  RGBA or FLOAT data types)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    
    // Allocate and load image data into texture
    glTexImage2D(GL_TEXTURE_2D, 0, image->format, image->width, image->height, 0,
                 image->format, image->type, image->data);

    // Create mipmaps for this texture for better image quality
    glGenerateMipmap(GL_TEXTURE_2D);
    
    GetGLError();
    
    return texName;
}

- (void)setImage:(CVImageBufferRef)pixelBuffer {
    
//    glDeleteBuffers(1, &_characterTexName);
//    _characterTexName = 0;
    
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    
    demoImage image = {0};
    image.width = width;
    image.height = height;
    image.rowByteSize = image.width * 3;
    image.format = GL_RGB;
    image.type = GL_UNSIGNED_BYTE;
    image.size = CVPixelBufferGetDataSize(pixelBuffer);
    image.data = CVPixelBufferGetBaseAddress(pixelBuffer);
    
    _characterTexName = [self buildTexture:&image];
    
    CVPixelBufferUnlockBaseAddress(pixelBuffer,0);

}

buildTexture is a helper function for creating textures. The AVCapture project only needs a single texture, so we first check whether _characterTexName already exists and only create a new texture object when it does not (deleting the texture does not effectively release the memory it occupies, and there is no need to anyway).
The texture is refreshed from a CVImageBufferRef, which is easy to obtain from a CMSampleBuffer. This time I chose the RGB pixel format because OpenGL supports it directly. Before using the pixelBuffer, its base address must be locked; otherwise the data you read may be stale.
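
The capture callback itself is not shown in the post, but getting the CVImageBufferRef out of a CMSampleBuffer only takes one call. Here is a minimal sketch of the AVCaptureVideoDataOutput delegate method; the _latestPixelBuffer ivar and the @synchronized hand-off to the GL side are my assumptions:

// Requires AVFoundation / CoreMedia / CoreVideo
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // CMSampleBufferGetImageBuffer returns the CVImageBufferRef backing this frame
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) {
        return;
    }
    
    // Keep only the newest frame; the GL side later passes it to setImage:
    CVPixelBufferRetain(pixelBuffer);
    @synchronized (self) {
        if (_latestPixelBuffer != NULL) {
            CVPixelBufferRelease(_latestPixelBuffer);
        }
        _latestPixelBuffer = pixelBuffer;
    }
}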

- (void) render
{
    // Calculate the projection matrix
    // Clear the colorbuffer
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    // Draw our first triangle
    glUseProgram(_characterPrgName);
    
    glBindTexture(GL_TEXTURE_2D, _characterTexName);
    
//    glActiveTexture(GL_TEXTURE0);
//    glBindTexture(GL_TEXTURE_2D, _characterTexName);
//    glUniform1i(glGetUniformLocation(_characterPrgName, "ourTexture"), 0);
    
    
    glBindVertexArray(_characterVAOName);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBindVertexArray(0);
}

The rendering itself is very simple: bind the texture, then draw the two triangles, and the image appears.
The render method is driven by a separate CVDisplayLink. The display link refreshes faster than the camera, so my approach is to save the incoming image first and let render pick it up on the next refresh. OpenGL is not thread-safe, so the texture must not be manipulated directly on the capture thread.
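
A minimal sketch of the CVDisplayLink setup, assuming a view class named GLView and a drawFrame helper that makes the NSOpenGLContext current before calling render (those names are mine; only the CVDisplayLink pattern itself is standard):

// Requires CoreVideo
static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *displayLinkContext)
{
    // Called on the display link's own thread, not the main or capture thread
    GLView *view = (__bridge GLView *)displayLinkContext;
    [view drawFrame]; // makes the GL context current, uploads the saved image, calls render
    return kCVReturnSuccess;
}

- (void)startDisplayLink
{
    CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
    CVDisplayLinkSetOutputCallback(_displayLink, &DisplayLinkCallback, (__bridge void *)self);
    CVDisplayLinkStart(_displayLink);
}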

In actual testing, CPU usage is on par with the CIContext approach, both around 7%, so the efficiency is quite good. The next article will cover support for the I420 format, since YUV is the mainstream format for video. With the current OpenGL setup in place, supporting I420 is not hard; most of the work is concentrated in the shaders. I will write it up when I have time.
