
Multiple Ways to Implement a Circular Camera Preview on Android (with Sample Code)

程序员文章站 2022-09-06 13:38:50

The result looks like this:

[Figure: circular camera preview]

I. Setting rounded corners on the preview view

Set a ViewOutlineProvider on the view:

public RoundTextureView(Context context, AttributeSet attrs) {
  super(context, attrs);
  setOutlineProvider(new ViewOutlineProvider() {
    @Override
    public void getOutline(View view, Outline outline) {
      Rect rect = new Rect(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());
      outline.setRoundRect(rect, radius);
    }
  });
  setClipToOutline(true);
}

Update the corner radius and redraw when needed:

public void setRadius(int radius) {
  this.radius = radius;
}

public void turnRound() {
  invalidateOutline();
}

This updates the corner radius shown by the view. When the view is square and the radius equals half the side length, the result is a circle.

II. Implementing a square preview

1. Devices that support a 1:1 preview size

First, a simple but rather limited approach: make both the camera preview size and the preview view 1:1 (square).
Most Android devices support multiple preview sizes. Taking a Samsung Tab S3 as an example:

With the Camera API, the supported preview sizes are:

2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1920x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1280x720
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1440x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1088x1088
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1056x864
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 960x720
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 720x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 640x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 352x288
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 320x240
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 176x144

The 1:1 preview size here is 1088x1088.

With the Camera2 API, the supported preview sizes (which actually include picture sizes as well) are:

2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x3096
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x2322
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x2448
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x1836
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3024x3024
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2976x2976
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2880x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2592x1944
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1920
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1440
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2160x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1536
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1152
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1936x1936
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1920x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1440x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x960
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 960x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 720x480
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 640x480
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 320x240
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 176x144

The 1:1 sizes here are 3024x3024, 2976x2976, 2160x2160, and 1936x1936.
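Given such a list of supported sizes, picking a 1:1 size can be sketched in plain Java. This is a minimal sketch under my own naming (SquareSizePicker and pickSquareSize are not from the demo project); it simply takes the largest square size, while a real app might also weigh resolution against processing cost:

```java
import java.util.Arrays;
import java.util.List;

public class SquareSizePicker {
    /** Picks the largest size with a 1:1 aspect ratio, or null if none exists. */
    static String pickSquareSize(List<int[]> sizes) {
        int best = -1;
        for (int[] s : sizes) {
            if (s[0] == s[1] && s[0] > best) { // square, and larger than the best so far
                best = s[0];
            }
        }
        return best > 0 ? best + "x" + best : null;
    }

    public static void main(String[] args) {
        // Sizes reported in the Camera API log above
        List<int[]> supported = Arrays.asList(
                new int[]{1920, 1080}, new int[]{1280, 720}, new int[]{1440, 1080},
                new int[]{1088, 1088}, new int[]{960, 720}, new int[]{640, 480});
        System.out.println(pickSquareSize(supported)); // 1088x1088
    }
}
```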
As long as we pick a 1:1 preview size and make the preview view square, we get a square preview; setting the view's corner radius to half its side length then turns it into a circular preview.

2. Devices that do not support a 1:1 preview size

Drawbacks of choosing a 1:1 preview size

Limited resolution choice
As mentioned above, we can pick a 1:1 preview size, but the choice is quite narrow; and if the camera supports no 1:1 size at all, this approach is simply not viable.

Resource consumption
Taking the Samsung Tab S3 as an example: with the Camera2 API, all of its square preview sizes are very large, which consumes considerable system resources during image processing and similar operations.

Handling devices without a 1:1 preview size

Add a ViewGroup with a 1:1 aspect ratio;
put the TextureView inside that ViewGroup;
set the TextureView's margins so that only its center square region is visible.

[Figure: schematic of the centered square crop]

Sample code:

// Match the preview view's aspect ratio to the preview size to avoid stretching
{
  FrameLayout.LayoutParams textureViewLayoutParams = (FrameLayout.LayoutParams) textureView.getLayoutParams();
  int newHeight = 0;
  int newWidth = textureViewLayoutParams.width;
  // Landscape
  if (displayOrientation % 180 == 0) {
    newHeight = textureViewLayoutParams.width * previewSize.height / previewSize.width;
  }
  // Portrait
  else {
    newHeight = textureViewLayoutParams.width * previewSize.width / previewSize.height;
  }
  // When the preview is not square, insert a ViewGroup to clip the visible area
  if (newHeight != textureViewLayoutParams.height) {
    insertFrameLayout = new RoundFrameLayout(CoverByParentCameraActivity.this);
    int sideLength = Math.min(newWidth, newHeight);
    FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(sideLength, sideLength);
    insertFrameLayout.setLayoutParams(layoutParams);
    FrameLayout parentView = (FrameLayout) textureView.getParent();
    parentView.removeView(textureView);
    parentView.addView(insertFrameLayout);

    insertFrameLayout.addView(textureView);
    FrameLayout.LayoutParams newTextureViewLayoutParams = new FrameLayout.LayoutParams(newWidth, newHeight);
    // Landscape
    if (displayOrientation % 180 == 0) {
      newTextureViewLayoutParams.leftMargin = ((newHeight - newWidth) / 2);
    }
    // Portrait
    else {
      newTextureViewLayoutParams.topMargin = -(newHeight - newWidth) / 2;
    }
    textureView.setLayoutParams(newTextureViewLayoutParams);
  }
}
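The margin arithmetic in the sample above can be checked in isolation, without any Android classes. The helper below is a hypothetical sketch (CropMarginDemo and layout are my names, and the sample dimensions are illustrative); it returns the aspect-corrected view size plus the margins that center it inside the square container:

```java
public class CropMarginDemo {
    /** Returns {newWidth, newHeight, leftMargin, topMargin} for centering the
     *  aspect-corrected TextureView inside a square container. */
    static int[] layout(int viewWidth, int previewW, int previewH, int displayOrientation) {
        int newWidth = viewWidth;
        int newHeight = (displayOrientation % 180 == 0)
                ? viewWidth * previewH / previewW   // landscape
                : viewWidth * previewW / previewH;  // portrait
        int left = 0, top = 0;
        if (displayOrientation % 180 == 0) {
            left = (newHeight - newWidth) / 2;      // negative when the view is wider than the square
        } else {
            top = -(newHeight - newWidth) / 2;      // negative when the view is taller than the square
        }
        return new int[]{newWidth, newHeight, left, top};
    }

    public static void main(String[] args) {
        // Portrait orientation, 1440x1080 preview, a view 1080 px wide (illustrative values)
        int[] r = layout(1080, 1440, 1080, 90);
        System.out.println(java.util.Arrays.toString(r)); // [1080, 1440, 0, -180]
    }
}
```

The 1440-tall view shifted up by 180 px leaves exactly its middle 1080x1080 region inside the square container.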

III. A more customizable preview with GLSurfaceView

The approaches above already give us square and circular previews, but they only apply when the native camera is the data source. What if the frames come from somewhere else? Next we render NV21 data with a GLSurfaceView, drawing the preview data entirely ourselves.

1. GLSurfaceView workflow

[Figure: OpenGL workflow for rendering YUV data]

The key part is writing the renderer; the Renderer interface is documented as follows:

/**
 * A generic renderer interface.
 * <p>
 * The renderer is responsible for making OpenGL calls to render a frame.
 * <p>
 * GLSurfaceView clients typically create their own classes that implement
 * this interface, and then call {@link GLSurfaceView#setRenderer} to
 * register the renderer with the GLSurfaceView.
 * <p>
 *
 * <div class="special reference">
 * <h3>Developer Guides</h3>
 * <p>For more information about how to use OpenGL, read the
 * <a href="{@docRoot}guide/topics/graphics/opengl.html">OpenGL</a> developer guide.</p>
 * </div>
 *
 * <h3>Threading</h3>
 * The renderer will be called on a separate thread, so that rendering
 * performance is decoupled from the UI thread. Clients typically need to
 * communicate with the renderer from the UI thread, because that's where
 * input events are received. Clients can communicate using any of the
 * standard Java techniques for cross-thread communication, or they can
 * use the {@link GLSurfaceView#queueEvent(Runnable)} convenience method.
 * <p>
 * <h3>EGL Context Lost</h3>
 * There are situations where the EGL rendering context will be lost. This
 * typically happens when device wakes up after going to sleep. When
 * the EGL context is lost, all OpenGL resources (such as textures) that are
 * associated with that context will be automatically deleted. In order to
 * keep rendering correctly, a renderer must recreate any lost resources
 * that it still needs. The {@link #onSurfaceCreated(GL10, EGLConfig)} method
 * is a convenient place to do this.
 *
 *
 * @see #setRenderer(Renderer)
 */
public interface Renderer {
  /**
   * Called when the surface is created or recreated.
   * <p>
   * Called when the rendering thread
   * starts and whenever the EGL context is lost. The EGL context will typically
   * be lost when the Android device awakes after going to sleep.
   * <p>
   * Since this method is called at the beginning of rendering, as well as
   * every time the EGL context is lost, this method is a convenient place to put
   * code to create resources that need to be created when the rendering
   * starts, and that need to be recreated when the EGL context is lost.
   * Textures are an example of a resource that you might want to create
   * here.
   * <p>
   * Note that when the EGL context is lost, all OpenGL resources associated
   * with that context will be automatically deleted. You do not need to call
   * the corresponding "glDelete" methods such as glDeleteTextures to
   * manually delete these lost resources.
   * <p>
   * @param gl the GL interface. Use <code>instanceof</code> to
   * test if the interface supports GL11 or higher interfaces.
   * @param config the EGLConfig of the created surface. Can be used
   * to create matching pbuffers.
   */
  void onSurfaceCreated(GL10 gl, EGLConfig config);

  /**
   * Called when the surface changed size.
   * <p>
   * Called after the surface is created and whenever
   * the OpenGL ES surface size changes.
   * <p>
   * Typically you will set your viewport here. If your camera
   * is fixed then you could also set your projection matrix here:
   * <pre class="prettyprint">
   * void onSurfaceChanged(GL10 gl, int width, int height) {
   *   gl.glViewport(0, 0, width, height);
   *   // for a fixed camera, set the projection too
   *   float ratio = (float) width / height;
   *   gl.glMatrixMode(GL10.GL_PROJECTION);
   *   gl.glLoadIdentity();
   *   gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
   * }
   * </pre>
   * @param gl the GL interface. Use <code>instanceof</code> to
   * test if the interface supports GL11 or higher interfaces.
   * @param width
   * @param height
   */
  void onSurfaceChanged(GL10 gl, int width, int height);

  /**
   * Called to draw the current frame.
   * <p>
   * This method is responsible for drawing the current frame.
   * <p>
   * The implementation of this method typically looks like this:
   * <pre class="prettyprint">
   * void onDrawFrame(GL10 gl) {
   *   gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
   *   //... other gl calls to render the scene ...
   * }
   * </pre>
   * @param gl the GL interface. Use <code>instanceof</code> to
   * test if the interface supports GL11 or higher interfaces.
   */
  void onDrawFrame(GL10 gl);
}



void onSurfaceCreated(GL10 gl, EGLConfig config)
Called when the surface is created or recreated.

void onSurfaceChanged(GL10 gl, int width, int height)
Called when the surface size changes.

void onDrawFrame(GL10 gl)
This is where the drawing happens. With renderMode set to RENDERMODE_CONTINUOUSLY, this method runs continuously;
with RENDERMODE_WHEN_DIRTY, it runs only after surface creation and after each requestRender call. We usually choose RENDERMODE_WHEN_DIRTY to avoid overdrawing.

Normally we implement our own Renderer and set it on the GLSurfaceView; writing the Renderer is the core step of the whole process. The following flowchart shows the initialization done in onSurfaceCreated(GL10 gl, EGLConfig config) and the drawing done in onDrawFrame(GL10 gl):

[Figure: flow of a Renderer that renders YUV data]

2. Implementation details

Coordinate systems

[Figure: Android View coordinate system]

[Figure: OpenGL world coordinate system]

As shown, unlike the Android View coordinate system, OpenGL uses a Cartesian coordinate system.
The Android View coordinate system has its origin at the top-left corner, with x increasing to the right and y increasing downward;
the OpenGL coordinate system has its origin at the center, with x increasing to the right and y increasing upward.
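The mapping between the two coordinate systems is a simple linear transform. The sketch below (my own helper, not part of the demo project) converts a view pixel coordinate into OpenGL normalized device coordinates, flipping the y axis along the way:

```java
public class CoordDemo {
    /** Maps a view pixel (x, y) to OpenGL normalized device coordinates. */
    static float[] viewToNdc(float x, float y, float viewWidth, float viewHeight) {
        float ndcX = x / viewWidth * 2f - 1f;   // 0..width  maps to  -1..1
        float ndcY = 1f - y / viewHeight * 2f;  // 0..height maps to   1..-1 (y axis flips)
        return new float[]{ndcX, ndcY};
    }

    public static void main(String[] args) {
        // The view's top-left corner maps to the top-left of NDC space
        System.out.println(java.util.Arrays.toString(viewToNdc(0, 0, 720, 720)));     // [-1.0, 1.0]
        // The view center maps to the NDC origin
        System.out.println(java.util.Arrays.toString(viewToNdc(360, 360, 720, 720))); // [0.0, 0.0]
    }
}
```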

Writing the shaders

/**
 * Vertex shader
 */
private static String VERTEX_SHADER =
    "  attribute vec4 attr_position;\n" +
        "  attribute vec2 attr_tc;\n" +
        "  varying vec2 tc;\n" +
        "  void main() {\n" +
        "    gl_Position = attr_position;\n" +
        "    tc = attr_tc;\n" +
        "  }";

/**
 * Fragment shader
 */
private static String FRAG_SHADER =
    "  varying vec2 tc;\n" +
        "  uniform sampler2D ySampler;\n" +
        "  uniform sampler2D uSampler;\n" +
        "  uniform sampler2D vSampler;\n" +
        "  const mat3 convertMat = mat3( 1.0, 1.0, 1.0, -0.001, -0.3441, 1.772, 1.402, -0.7141, -0.58060);\n" +
        "  void main()\n" +
        "  {\n" +
        "    vec3 yuv;\n" +
        "    yuv.x = texture2D(ySampler, tc).r;\n" +
        "    yuv.y = texture2D(uSampler, tc).r - 0.5;\n" +
        "    yuv.z = texture2D(vSampler, tc).r - 0.5;\n" +
        "    gl_FragColor = vec4(convertMat * yuv, 1.0);\n" +
        "  }";


Built-in variables

gl_Position

gl_Position in VERTEX_SHADER is the position being drawn. Since we draw in 2D, we pass the OpenGL 2D coordinates of the bottom-left (-1,-1), bottom-right (1,-1), top-left (-1,1), and top-right (1,1) corners directly, i.e. {-1,-1, 1,-1, -1,1, 1,1}.

gl_FragColor

gl_FragColor in FRAG_SHADER is the color of a single fragment.

Other variables

ySampler, uSampler, vSampler

The Y, U, and V texture samplers, respectively.

convertMat

From the following formulas:

R = Y + 1.402 (V - 128)
G = Y - 0.34414 (U - 128) - 0.71414 (V - 128)
B = Y + 1.772 (U - 128)

we can derive a YUV-to-RGB conversion matrix (the values below are listed in the order they are passed to GLSL's mat3 constructor, which fills the matrix column by column):

1.0,   1.0,   1.0,
0,    -0.344, 1.77,
1.403, -0.714, 0
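The coefficients can be sanity-checked in plain Java by applying the formulas above directly. The helper below is a hypothetical sketch (YuvToRgbDemo and yuvToRgb are my names), working on full-range 0-255 values:

```java
public class YuvToRgbDemo {
    /** Converts one YUV pixel (full-range, 0-255) to RGB using the same
     *  coefficients as the fragment shader's convertMat. */
    static double[] yuvToRgb(int y, int u, int v) {
        double r = y + 1.402 * (v - 128);
        double g = y - 0.34414 * (u - 128) - 0.71414 * (v - 128);
        double b = y + 1.772 * (u - 128);
        return new double[]{r, g, b};
    }

    public static void main(String[] args) {
        // A neutral gray: U and V at 128 leave all three channels equal to Y
        System.out.println(java.util.Arrays.toString(yuvToRgb(100, 128, 128))); // [100.0, 100.0, 100.0]
    }
}
```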

Some types and functions

vec3, vec4

Three- and four-component vectors, respectively.

vec4 texture2D(sampler2D sampler, vec2 coord)

Samples the sampler's texture at the given coordinate and returns the color value; e.g.
texture2D(ySampler, tc).r reads the Y value,
texture2D(uSampler, tc).r reads the U value,
texture2D(vSampler, tc).r reads the V value.

Initialization in Java code

Create the ByteBuffers holding the Y, U, and V texture data according to the frame dimensions,
and choose the texture-coordinate array according to mirroring and rotation angle:

public void init(boolean isMirror, int rotateDegree, int frameWidth, int frameHeight) {
  if (this.frameWidth == frameWidth
      && this.frameHeight == frameHeight
      && this.rotateDegree == rotateDegree
      && this.isMirror == isMirror) {
    return;
  }
  dataInput = false;
  this.frameWidth = frameWidth;
  this.frameHeight = frameHeight;
  this.rotateDegree = rotateDegree;
  this.isMirror = isMirror;
  yArray = new byte[this.frameWidth * this.frameHeight];
  uArray = new byte[this.frameWidth * this.frameHeight / 4];
  vArray = new byte[this.frameWidth * this.frameHeight / 4];

  int yFrameSize = this.frameHeight * this.frameWidth;
  int uvFrameSize = yFrameSize >> 2;
  yBuf = ByteBuffer.allocateDirect(yFrameSize);
  yBuf.order(ByteOrder.nativeOrder()).position(0);

  uBuf = ByteBuffer.allocateDirect(uvFrameSize);
  uBuf.order(ByteOrder.nativeOrder()).position(0);

  vBuf = ByteBuffer.allocateDirect(uvFrameSize);
  vBuf.order(ByteOrder.nativeOrder()).position(0);
  // Vertex coordinates
  squareVertices = ByteBuffer
      .allocateDirect(GLUtil.SQUARE_VERTICES.length * FLOAT_SIZE_BYTES)
      .order(ByteOrder.nativeOrder())
      .asFloatBuffer();
  squareVertices.put(GLUtil.SQUARE_VERTICES).position(0);
  // Texture coordinates
  if (isMirror) {
    switch (rotateDegree) {
      case 0:
        coordVertice = GLUtil.MIRROR_COORD_VERTICES;
        break;
      case 90:
        coordVertice = GLUtil.ROTATE_90_MIRROR_COORD_VERTICES;
        break;
      case 180:
        coordVertice = GLUtil.ROTATE_180_MIRROR_COORD_VERTICES;
        break;
      case 270:
        coordVertice = GLUtil.ROTATE_270_MIRROR_COORD_VERTICES;
        break;
      default:
        break;
    }
  } else {
    switch (rotateDegree) {
      case 0:
        coordVertice = GLUtil.COORD_VERTICES;
        break;
      case 90:
        coordVertice = GLUtil.ROTATE_90_COORD_VERTICES;
        break;
      case 180:
        coordVertice = GLUtil.ROTATE_180_COORD_VERTICES;
        break;
      case 270:
        coordVertice = GLUtil.ROTATE_270_COORD_VERTICES;
        break;
      default:
        break;
    }
  }
  coordVertices = ByteBuffer.allocateDirect(coordVertice.length * FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
  coordVertices.put(coordVertice).position(0);
}
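Instead of hard-coding one texture-coordinate array per rotation, the rotated arrays could also be derived. The sketch below is a standalone illustration, not the demo's code: the base coordinate order is an assumption and may differ from the project's GLUtil constants. A useful self-check is that four 90° rotations restore the original array:

```java
import java.util.Arrays;

public class TexCoordDemo {
    // Base texture coordinates, matching a vertex order of bottom-left,
    // bottom-right, top-left, top-right (an assumption for this sketch)
    static final float[] BASE = {0f, 1f, 1f, 1f, 0f, 0f, 1f, 0f};

    /** Rotates each (u, v) pair by 90 degrees around the texture center. */
    static float[] rotate90(float[] coords) {
        float[] out = new float[coords.length];
        for (int i = 0; i < coords.length; i += 2) {
            out[i] = coords[i + 1];      // u' = v
            out[i + 1] = 1f - coords[i]; // v' = 1 - u
        }
        return out;
    }

    public static void main(String[] args) {
        float[] c = BASE;
        for (int k = 0; k < 4; k++) c = rotate90(c);
        // Four 90-degree rotations give back the original coordinates
        System.out.println(Arrays.equals(c, BASE)); // true
    }
}
```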

Initialize the renderer once the surface has been created:

private void initRenderer() {
  rendererReady = false;
  createGLProgram();

  // Enable texturing
  GLES20.glEnable(GLES20.GL_TEXTURE_2D);
  // Create the textures
  createTexture(frameWidth, frameHeight, GLES20.GL_LUMINANCE, yTexture);
  createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, uTexture);
  createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, vTexture);

  rendererReady = true;
}

createGLProgram creates the OpenGL program and binds the variables declared in the shader code:

private void createGLProgram() {
  int programHandleMain = GLUtil.createShaderProgram();
  if (programHandleMain != -1) {
    // Use the shader program
    GLES20.glUseProgram(programHandleMain);
    // Get the vertex shader attributes
    int glPosition = GLES20.glGetAttribLocation(programHandleMain, "attr_position");
    int textureCoord = GLES20.glGetAttribLocation(programHandleMain, "attr_tc");

    // Get the fragment shader uniforms
    int ySampler = GLES20.glGetUniformLocation(programHandleMain, "ySampler");
    int uSampler = GLES20.glGetUniformLocation(programHandleMain, "uSampler");
    int vSampler = GLES20.glGetUniformLocation(programHandleMain, "vSampler");

    // Assign values
    /**
     * GLES20.GL_TEXTURE0 is bound to ySampler
     * GLES20.GL_TEXTURE1 is bound to uSampler
     * GLES20.GL_TEXTURE2 is bound to vSampler
     *
     * In other words, the second parameter of glUniform1i is the texture unit index
     */
    GLES20.glUniform1i(ySampler, 0);
    GLES20.glUniform1i(uSampler, 1);
    GLES20.glUniform1i(vSampler, 2);

    GLES20.glEnableVertexAttribArray(glPosition);
    GLES20.glEnableVertexAttribArray(textureCoord);

    /**
     * Feed the vertex shader its data
     */
    squareVertices.position(0);
    GLES20.glVertexAttribPointer(glPosition, GLUtil.COUNT_PER_SQUARE_VERTICE, GLES20.GL_FLOAT, false, 8, squareVertices);
    coordVertices.position(0);
    GLES20.glVertexAttribPointer(textureCoord, GLUtil.COUNT_PER_COORD_VERTICES, GLES20.GL_FLOAT, false, 8, coordVertices);
  }
}

createTexture creates a texture from a width, height, and format:

private void createTexture(int width, int height, int format, int[] textureId) {
  // Create the texture
  GLES20.glGenTextures(1, textureId, 0);
  // Bind the texture
  GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId[0]);
  /**
   * {@link GLES20#GL_TEXTURE_WRAP_S} is the horizontal wrap mode
   * {@link GLES20#GL_TEXTURE_WRAP_T} is the vertical wrap mode
   *
   * {@link GLES20#GL_REPEAT}: repeat
   * {@link GLES20#GL_MIRRORED_REPEAT}: mirrored repeat
   * {@link GLES20#GL_CLAMP_TO_EDGE}: clamp to the edge
   *
   * For example, with {@link GLES20#GL_REPEAT}:
   *
   *       squareVertices       coordVertices
   *       -1.0f, -1.0f,        1.0f, 1.0f,
   *       1.0f, -1.0f,         1.0f, 0.0f,     ->     same as the TextureView preview
   *       -1.0f, 1.0f,         0.0f, 1.0f,
   *       1.0f, 1.0f           0.0f, 0.0f
   *
   *       squareVertices       coordVertices
   *       -1.0f, -1.0f,        2.0f, 2.0f,
   *       1.0f, -1.0f,         2.0f, 0.0f,     ->     compared with the TextureView preview, split into 4 identical tiles (bottom-left, bottom-right, top-left, top-right)
   *       -1.0f, 1.0f,         0.0f, 2.0f,
   *       1.0f, 1.0f           0.0f, 0.0f
   */
  GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
  GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
  /**
   * {@link GLES20#GL_TEXTURE_MIN_FILTER} applies when the rendered texture is smaller than the loaded one
   * {@link GLES20#GL_TEXTURE_MAG_FILTER} applies when the rendered texture is larger than the loaded one
   *
   * {@link GLES20#GL_NEAREST}: use the color of the nearest texel as the pixel color
   * {@link GLES20#GL_LINEAR}: take the nearest texels and blend them with a weighted average
   */
  GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
  GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
  GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height, 0, format, GLES20.GL_UNSIGNED_BYTE, null);
}

Driving the drawing from Java code

When a frame arrives from the data source, crop it and pass it in:

@Override
public void onPreview(final byte[] nv21, Camera camera) {
  // Crop the specified image region
  ImageUtil.cropNV21(nv21, this.squareNV21, previewSize.width, previewSize.height, cropRect);
  // Refresh the GLSurfaceView
  roundCameraGLSurfaceView.refreshFrameNV21(this.squareNV21);
}

The NV21 cropping code:

/**
 * Crops NV21 data
 *
 * @param originNV21 the original NV21 data
 * @param cropNV21   the cropped NV21 result; memory must be allocated in advance
 * @param width      width of the original data
 * @param height     height of the original data
 * @param left       left edge of the region to crop from the original data
 * @param top        top edge of the region to crop from the original data
 * @param right      right edge of the region to crop from the original data
 * @param bottom     bottom edge of the region to crop from the original data
 */
public static void cropNV21(byte[] originNV21, byte[] cropNV21, int width, int height, int left, int top, int right, int bottom) {
  int halfWidth = width / 2;
  int cropImageWidth = right - left;
  int cropImageHeight = bottom - top;

  // Top-left of the Y plane in the source
  int originalYLineStart = top * width;
  int targetYIndex = 0;

  // Top-left of the UV plane in the source
  int originalUVLineStart = width * height + top * halfWidth;

  // Start of the UV data in the target
  int targetUVIndex = cropImageWidth * cropImageHeight;

  for (int i = top; i < bottom; i++) {
    System.arraycopy(originNV21, originalYLineStart + left, cropNV21, targetYIndex, cropImageWidth);
    originalYLineStart += width;
    targetYIndex += cropImageWidth;
    if ((i & 1) == 0) {
      System.arraycopy(originNV21, originalUVLineStart + left, cropNV21, targetUVIndex, cropImageWidth);
      originalUVLineStart += width;
      targetUVIndex += cropImageWidth;
    }
  }
}
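A tiny standalone check of the cropping logic (the routine is restated here so the demo runs by itself; crop bounds are assumed to be even-aligned, which NV21's 2x2 chroma subsampling requires):

```java
import java.util.Arrays;

public class CropNV21Demo {
    // Same routine as above, restated so this demo is self-contained
    static void cropNV21(byte[] originNV21, byte[] cropNV21, int width, int height,
                         int left, int top, int right, int bottom) {
        int halfWidth = width / 2;
        int cropImageWidth = right - left;
        int originalYLineStart = top * width;
        int targetYIndex = 0;
        int originalUVLineStart = width * height + top * halfWidth;
        int targetUVIndex = cropImageWidth * (bottom - top);
        for (int i = top; i < bottom; i++) {
            System.arraycopy(originNV21, originalYLineStart + left, cropNV21, targetYIndex, cropImageWidth);
            originalYLineStart += width;
            targetYIndex += cropImageWidth;
            if ((i & 1) == 0) {
                System.arraycopy(originNV21, originalUVLineStart + left, cropNV21, targetUVIndex, cropImageWidth);
                originalUVLineStart += width;
                targetUVIndex += cropImageWidth;
            }
        }
    }

    /** Crops the bottom-right 2x2 block out of a 4x4 test frame. */
    static byte[] demoCrop() {
        // A 4x4 NV21 frame: 16 Y bytes (0..15) followed by 8 interleaved V/U bytes (16..23)
        byte[] frame = new byte[24];
        for (int i = 0; i < frame.length; i++) frame[i] = (byte) i;
        byte[] crop = new byte[2 * 2 * 3 / 2];
        cropNV21(frame, crop, 4, 4, 2, 2, 4, 4);
        return crop;
    }

    public static void main(String[] args) {
        // Y values of pixels (2,2),(3,2),(2,3),(3,3), then the matching V/U pair bytes
        System.out.println(Arrays.toString(demoCrop())); // [10, 11, 14, 15, 22, 23]
    }
}
```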

Hand the data to the GLSurfaceView and refresh the frame:

/**
 * Refreshes the frame with NV21 data
 *
 * @param data NV21 data
 */
public void refreshFrameNV21(byte[] data) {
  if (rendererReady) {
    yBuf.clear();
    uBuf.clear();
    vBuf.clear();
    putNV21(data, frameWidth, frameHeight);
    dataInput = true;
    requestRender();
  }
}

putNV21 extracts the Y, U, and V components from the NV21 data:

/**
 * Extracts the Y, U, and V components from NV21 data
 *
 * @param src    the NV21 frame data
 * @param width  frame width
 * @param height frame height
 */
private void putNV21(byte[] src, int width, int height) {

  int ySize = width * height;
  int frameSize = ySize * 3 / 2;

  // Extract the Y component
  System.arraycopy(src, 0, yArray, 0, ySize);

  int k = 0;

  // Extract the interleaved V and U components
  int index = ySize;
  while (index < frameSize) {
    vArray[k] = src[index++];
    uArray[k++] = src[index++];
  }
  yBuf.put(yArray).position(0);
  uBuf.put(uArray).position(0);
  vBuf.put(vArray).position(0);
}

After requestRender is called, onDrawFrame is invoked; there we bind the data for each of the three textures and draw:

@Override
public void onDrawFrame(GL10 gl) {
  // Activate and bind each texture in turn, then upload its data
  if (dataInput) {
    // Y
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexture[0]);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
        0,
        0,
        0,
        frameWidth,
        frameHeight,
        GLES20.GL_LUMINANCE,
        GLES20.GL_UNSIGNED_BYTE,
        yBuf);

    // U
    GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uTexture[0]);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
        0,
        0,
        0,
        frameWidth >> 1,
        frameHeight >> 1,
        GLES20.GL_LUMINANCE,
        GLES20.GL_UNSIGNED_BYTE,
        uBuf);

    // V
    GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vTexture[0]);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
        0,
        0,
        0,
        frameWidth >> 1,
        frameHeight >> 1,
        GLES20.GL_LUMINANCE,
        GLES20.GL_UNSIGNED_BYTE,
        vBuf);
    // Draw once all the data is bound
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
  }
}

That completes the drawing.

IV. Adding a border

Sometimes the requirement is more than just a circular preview; we may also need to draw a border around it.

[Figure: border effect]

The idea is the same: modify the border values dynamically and redraw.
The relevant code in the custom border view is as follows:

@Override
protected void onDraw(Canvas canvas) {
  super.onDraw(canvas);
  if (paint == null) {
    paint = new Paint();
    paint.setStyle(Paint.Style.STROKE);
    paint.setAntiAlias(true);
    SweepGradient sweepGradient = new SweepGradient(((float) getWidth() / 2), ((float) getHeight() / 2),
        new int[]{Color.GREEN, Color.CYAN, Color.BLUE, Color.CYAN, Color.GREEN}, null);
    paint.setShader(sweepGradient);
  }
  drawBorder(canvas, 6);
}


private void drawBorder(Canvas canvas, int rectThickness) {
  if (canvas == null) {
    return;
  }
  paint.setStrokeWidth(rectThickness);
  Path drawPath = new Path();
  drawPath.addRoundRect(new RectF(0, 0, getWidth(), getHeight()), radius, radius, Path.Direction.CW);
  canvas.drawPath(drawPath, paint);
}

public void turnRound() {
  invalidate();
}

public void setRadius(int radius) {
  this.radius = radius;
}

V. Full demo code:

https://github.com/wangshengyang1996/glcamerademo

The demo covers three approaches:
using the Camera API or Camera2 API and picking the preview size closest to a square;
using the Camera API and dynamically adding a parent view to achieve a square preview;
using the Camera API to obtain the preview data and rendering it with OpenGL.

That's all for this article. I hope it helps with your work, and thank you for your support.