
Android Face Detection: How to Crop the Face Region from an Image

程序员文章站 2021-12-12 18:04:44

Cropping the face region out of an image is a common requirement in app development. For example, many apps let users take a quick selfie as their avatar, and selfies are rarely ideal in size or framing, while the app needs to display the avatar in all kinds of UI contexts. Cropping the face region out of the original image is therefore a practical need. This article shows how to use the face detection API that Android provides to implement a simple face crop.


For simplicity, the input is a Bitmap that may contain a face, and we assume the goal is to detect just one face. The output is a crop of the original image centered on that face. If you need a custom aspect ratio, or only want to locate the face and crop to one side, the code can be adapted with minor changes.


The face detection API that Android officially provides lives in android.media.FaceDetector; the core method is FaceDetector.findFaces().


The complete code:

package com.example.testface;
 
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.util.Log;
 
public final class FaceHelper {
 
    private static final String LOG_TAG = "FaceHelper";
    private static final boolean DEBUG_ENABLE = false;
 
    public static Bitmap genFaceBitmap(Bitmap sourceBitmap) {
 
        if (!checkBitmap(sourceBitmap, "genFaceBitmap()")) {
            return null;
        }
 
        // Android's default face detection algorithm can only handle
        // RGB_565 bitmaps, so make an RGB_565 copy here.
        Bitmap cacheBitmap = sourceBitmap.copy(Bitmap.Config.RGB_565, false);
 
        // Give up the strong reference here: this method may be
        // time-consuming and will usually run on a worker thread, so after
        // copying we drop the strong reference to the source bitmap so that
        // it can be recycled and garbage-collected in another thread.
        sourceBitmap = null;
 
        if (DEBUG_ENABLE) {
            Log.i(LOG_TAG,
                    "genFaceBitmap() : source bitmap width - "
                            + cacheBitmap.getWidth() + " , height - "
                            + cacheBitmap.getHeight());
        }
 
        int cacheWidth = cacheBitmap.getWidth();
        int cacheHeight = cacheBitmap.getHeight();
 
        // Android's default face detection algorithm can only handle bitmaps
        // whose width is even, so drop the rightmost column of pixels if the
        // width is odd.
        if (cacheWidth % 2 != 0) {
            if (0 == cacheWidth - 1) {
                if (DEBUG_ENABLE) {
                    Log.e(LOG_TAG,
                            "genFaceBitmap() : source bitmap width is only 1 , return null.");
                }
                return null;
            }
            final Bitmap localCacheBitmap = Bitmap.createBitmap(cacheBitmap, 0,
                    0, cacheWidth - 1, cacheHeight);
            cacheBitmap.recycle();
            cacheBitmap = localCacheBitmap;
            --cacheWidth;
 
            if (DEBUG_ENABLE) {
                Log.i(LOG_TAG,
                        "genFaceBitmap() : source bitmap width - "
                                + cacheBitmap.getWidth() + " , height - "
                                + cacheBitmap.getHeight());
            }
        }
 
        final FaceDetector.Face[] faces = new FaceDetector.Face[1];
        final int facefound = new FaceDetector(cacheWidth, cacheHeight, 1)
                .findFaces(cacheBitmap, faces);
        if (DEBUG_ENABLE) {
            Log.i(LOG_TAG, "genFaceBitmap() : facefound - " + facefound);
        }
        if (0 == facefound) {
            if (DEBUG_ENABLE) {
                Log.e(LOG_TAG, "genFaceBitmap() : no face found , return null.");
            }
            return null;
        }
 
        final PointF p = new PointF();
        faces[0].getMidPoint(p);
        if (DEBUG_ENABLE) {
            Log.i(LOG_TAG,
                    "genFaceBitmap() : confidence - " + faces[0].confidence());
            Log.i(LOG_TAG, "genFaceBitmap() : face center x - " + p.x
                    + " , y - " + p.y);
        }
        final int faceX = (int) p.x;
        final int faceY = (int) p.y;
        if (DEBUG_ENABLE) {
            Log.i(LOG_TAG, "genFaceBitmap() : int faceX - " + faceX
                    + " , int faceY - " + faceY);
        }
 
        // Compute a crop area with the face at its center.
        int startX, startY, width, height;
        if (faceX <= cacheWidth - faceX) {
            startX = 0;
            width = faceX * 2;
        } else {
            startX = faceX - (cacheWidth - faceX);
            width = (cacheWidth - faceX) * 2;
        }
        if (faceY <= cacheHeight - faceY) {
            startY = 0;
            height = faceY * 2;
        } else {
            startY = faceY - (cacheHeight - faceY);
            height = (cacheHeight - faceY) * 2;
        }
 
        final Bitmap result = Bitmap.createBitmap(cacheBitmap, startX, startY,
                width, height);
        cacheBitmap.recycle();
        return result;
    }
 
    private static boolean checkBitmap(final Bitmap bitmap,
            final String debugInfo) {
        if (null == bitmap || bitmap.isRecycled()) {
            if (DEBUG_ENABLE) {
                Log.e(LOG_TAG, debugInfo
                        + " : bitmap is null or recycled.");
            }
            return false;
        }
        if (0 == bitmap.getWidth() || 0 == bitmap.getHeight()) {
            if (DEBUG_ENABLE) {
                Log.e(LOG_TAG, debugInfo
                        + " : bitmap width or height is 0.");
            }
            return false;
        }
        return true;
    }
 
}
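The centering arithmetic in genFaceBitmap() can be checked in isolation, without any Android dependency. Below is a minimal sketch of that logic as a pure Java class; the class name CropMath, the helper computeCropRect(), and the int[] return format are mine for illustration, not part of the class above:

```java
import java.util.Arrays;

public class CropMath {

    // Returns {startX, startY, width, height}: the largest crop that keeps
    // (faceX, faceY) exactly centered, mirroring the logic in genFaceBitmap().
    public static int[] computeCropRect(int faceX, int faceY,
            int imageWidth, int imageHeight) {
        int startX, startY, width, height;
        if (faceX <= imageWidth - faceX) {
            // Face is in the left half: the left edge limits the crop.
            startX = 0;
            width = faceX * 2;
        } else {
            // Face is in the right half: the right edge limits the crop.
            startX = faceX - (imageWidth - faceX);
            width = (imageWidth - faceX) * 2;
        }
        if (faceY <= imageHeight - faceY) {
            startY = 0;
            height = faceY * 2;
        } else {
            startY = faceY - (imageHeight - faceY);
            height = (imageHeight - faceY) * 2;
        }
        return new int[] { startX, startY, width, height };
    }

    public static void main(String[] args) {
        // Face near the left edge of a 100x80 image.
        int[] r = computeCropRect(30, 40, 100, 80);
        System.out.println(Arrays.toString(r)); // [0, 0, 60, 80]
        // Face near the bottom-right corner.
        r = computeCropRect(80, 70, 100, 80);
        System.out.println(Arrays.toString(r)); // [60, 60, 40, 20]
    }
}
```

Note that the crop can be degenerate (width or height 0) when the detected center lies exactly on an edge; a production version would probably clamp to a minimum size.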


There are two main pitfalls:

(1) The detector can only handle RGB_565 bitmaps, so a format conversion is required.

(2) The detector can only handle bitmaps with an even width; the workaround here is to drop the rightmost column of pixels when the width is odd.

With this in place you can crop faces out one by one, obtaining a single-face image from which facial features can then be extracted.
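To handle more than one face, FaceDetector supports a larger maxFaces directly: pass it to the constructor and size the Face array to match. A sketch under the same RGB_565/even-width assumptions (the class name, the MAX_FACES value, and the findFaceCenters() helper are illustrative, not from the listing above):

```java
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

public final class MultiFaceSketch {

    // Assumed upper bound on faces per image; tune to your use case.
    private static final int MAX_FACES = 5;

    // Returns the center points of all detected faces. The bitmap must
    // already be RGB_565 with an even width, as discussed above.
    public static PointF[] findFaceCenters(Bitmap bitmap) {
        final FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        final int found = new FaceDetector(bitmap.getWidth(),
                bitmap.getHeight(), MAX_FACES).findFaces(bitmap, faces);
        final PointF[] centers = new PointF[found];
        for (int i = 0; i < found; i++) {
            centers[i] = new PointF();
            faces[i].getMidPoint(centers[i]);
            // Each center can be fed to the same centering crop as above.
        }
        return centers;
    }
}
```

Each returned center point can then be passed through the same centering-crop logic as genFaceBitmap() to produce one crop per face.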