
Choreographer Mechanism and Jank Optimization


Choreographer was added in Android 4.1 as part of Google's Project Butter. It works together with the system's VSYNC interrupt: it receives the VSYNC signal and centrally schedules when the app's input, animation and drawing work runs in each frame, i.e. CALLBACK_INPUT, CALLBACK_ANIMATION, CALLBACK_TRAVERSAL and CALLBACK_COMMIT. For app developers its main practical use is observing the app's frame rate. Let's take a closer look at this somewhat mysterious class.
Whenever we call invalidate() or requestLayout(), ViewRootImpl's scheduleTraversals() ends up being executed:

 

  void scheduleTraversals() {
        if (!mTraversalScheduled) {
            mTraversalScheduled = true;
            mTraversalBarrier = mHandler.getLooper().getQueue().postSyncBarrier();
            mChoreographer.postCallback(
                    Choreographer.CALLBACK_TRAVERSAL, mTraversalRunnable, null);
            if (!mUnbufferedInputDispatch) {
                scheduleConsumeBatchedInput();
            }
            notifyRendererOfFramePending();
            pokeDrawLockIfNeeded();
        }
    }

mTraversalRunnable is exactly the task we want executed, so when does it actually run?
First, mChoreographer.postCallback() indirectly calls postCallbackDelayedInternal().
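For reference, the call chain in between looks roughly like this (AOSP source, abbreviated; the argument checks in postCallbackDelayed() are omitted here):

    public void postCallback(int callbackType, Runnable action, Object token) {
        postCallbackDelayed(callbackType, action, token, 0);
    }

    public void postCallbackDelayed(int callbackType, Runnable action, Object token, long delayMillis) {
        // null / callbackType range checks omitted
        postCallbackDelayedInternal(callbackType, action, token, delayMillis);
    }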

 

private void postCallbackDelayedInternal(int callbackType,
            Object action, Object token, long delayMillis) {
        if (DEBUG_FRAMES) {
            Log.d(TAG, "PostCallback: type=" + callbackType
                    + ", action=" + action + ", token=" + token
                    + ", delayMillis=" + delayMillis);
        }

        synchronized (mLock) {
            final long now = SystemClock.uptimeMillis();
            final long dueTime = now + delayMillis;
            mCallbackQueues[callbackType].addCallbackLocked(dueTime, action, token);

            if (dueTime <= now) {
                scheduleFrameLocked(now);
            } else {
                Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_CALLBACK, action);
                msg.arg1 = callbackType;
                msg.setAsynchronous(true);
                mHandler.sendMessageAtTime(msg, dueTime);
            }
        }
    }

We can see that in the normal case (delayMillis is 0, so dueTime <= now) scheduleFrameLocked() is executed:

 

private void scheduleFrameLocked(long now) {
        if (!mFrameScheduled) {
            mFrameScheduled = true;
            if (USE_VSYNC) {
                if (DEBUG_FRAMES) {
                    Log.d(TAG, "Scheduling next frame on vsync.");
                }

                // If running on the Looper thread, then schedule the vsync immediately,
                // otherwise post a message to schedule the vsync from the UI thread
                // as soon as possible.
                if (isRunningOnLooperThreadLocked()) {
                    scheduleVsyncLocked();
                } else {
                    Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_VSYNC);
                    msg.setAsynchronous(true);
                    mHandler.sendMessageAtFrontOfQueue(msg);
                }
            } else {
                final long nextFrameTime = Math.max(
                        mLastFrameTimeNanos / TimeUtils.NANOS_PER_MS + sFrameDelay, now);
                if (DEBUG_FRAMES) {
                    Log.d(TAG, "Scheduling next frame in " + (nextFrameTime - now) + " ms.");
                }
                Message msg = mHandler.obtainMessage(MSG_DO_FRAME);
                msg.setAsynchronous(true);
                mHandler.sendMessageAtTime(msg, nextFrameTime);
            }
        }
    }

Since VSYNC is used from Android 4.1 onward (USE_VSYNC is true), scheduleVsyncLocked() is naturally called, which in turn calls the display event receiver's scheduleVsync().
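For completeness, in the versions discussed here scheduleVsyncLocked() essentially just forwards to the FrameDisplayEventReceiver (AOSP source, abbreviated):

    private void scheduleVsyncLocked() {
        mDisplayEventReceiver.scheduleVsync();
    }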

 

    /**
     * Schedules a single vertical sync pulse to be delivered when the next
     * display frame begins.
     */
    public void scheduleVsync() {
        if (mReceiverPtr == 0) {
            Log.w(TAG, "Attempted to schedule a vertical sync pulse but the display event "
                    + "receiver has already been disposed.");
        } else {
            nativeScheduleVsync(mReceiverPtr);
        }
    }

Note: the Javadoc here is quite clear. It schedules a single vertical sync pulse to be delivered when the next display frame begins. What does that mean in practice? Simply put, once nativeScheduleVsync() has been called, when the display is about to refresh (roughly every 16.6 ms on a 60 Hz screen) the native Looper delivers the vsync event up to the Java layer and invokes a Java method. Which method?

 

  // Called from native code.
    @SuppressWarnings("unused")
    private void dispatchVsync(long timestampNanos, int builtInDisplayId, int frame) {
        onVsync(timestampNanos, builtInDisplayId, frame);
    }

Obviously, this one.
For example: suppose the screen is currently showing frame 1 and you call invalidate() during it. The redraw does not happen immediately; instead a VSYNC is requested during that frame (assuming the CPU is free to do so), and when the next vsync arrives, i.e. at frame 2, dispatchVsync() is called. This is the simple case; the more complicated ones are discussed later.
Now let's look at the onVsync() it calls:

 

  public void onVsync(long timestampNanos, int builtInDisplayId, int frame) {
            // Ignore vsync from secondary display.
            // This can be problematic because the call to scheduleVsync() is a one-shot.
            // We need to ensure that we will still receive the vsync from the primary
            // display which is the one we really care about.  Ideally we should schedule
            // vsync for a particular display.
            // At this time Surface Flinger won't send us vsyncs for secondary displays
            // but that could change in the future so let's log a message to help us remember
            // that we need to fix this.
            if (builtInDisplayId != SurfaceControl.BUILT_IN_DISPLAY_ID_MAIN) {
                Log.d(TAG, "Received vsync from secondary display, but we don't support "
                        + "this case yet.  Choreographer needs a way to explicitly request "
                        + "vsync for a specific display to ensure it doesn't lose track "
                        + "of its scheduled vsync.");
                scheduleVsync();
                return;
            }

            // Post the vsync event to the Handler.
            // The idea is to prevent incoming vsync events from completely starving
            // the message queue.  If there are no messages in the queue with timestamps
            // earlier than the frame time, then the vsync event will be processed immediately.
            // Otherwise, messages that predate the vsync event will be handled first.
            long now = System.nanoTime();
            if (timestampNanos > now) {
                Log.w(TAG, "Frame time is " + ((timestampNanos - now) * 0.000001f)
                        + " ms in the future!  Check that graphics HAL is generating vsync "
                        + "timestamps using the correct timebase.");
                timestampNanos = now;
            }

            if (mHavePendingVsync) {
                Log.w(TAG, "Already have a pending vsync event.  There should only be "
                        + "one at a time.");
            } else {
                mHavePendingVsync = true;
            }

            mTimestampNanos = timestampNanos;
            mFrame = frame;
            Message msg = Message.obtain(mHandler, this);
            msg.setAsynchronous(true);
            mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
        }

Pay attention to the timestampNanos parameter: it is the timestamp of the vsync pulse itself, expressed in the System.nanoTime() timebase.
Next, an asynchronous message is sent. Simply put, an asynchronous message is not held back by the sync barrier that ViewRootImpl posted in scheduleTraversals(), so it can be processed while ordinary synchronous messages are blocked. It is handled by the run() method shown below, right after a short aside on asynchronous messages.
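As an illustrative aside (not part of Choreographer itself): app code can create asynchronous messages through the public Handler.createAsync() API (available since API 28); Choreographer achieves the same effect internally by calling msg.setAsynchronous(true) on its own messages.

    // Illustrative only: posting an asynchronous message from app code.
    // Messages posted through this handler are marked asynchronous, so they are
    // still delivered while a sync barrier is pending on the main MessageQueue.
    void postAsyncDemo() {
        Handler asyncHandler = Handler.createAsync(Looper.getMainLooper());
        asyncHandler.post(() ->
                Log.d("AsyncDemo", "delivered even while a sync barrier is pending"));
    }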

 

 @Override
        public void run() {
            mHavePendingVsync = false;
            doFrame(mTimestampNanos, mFrame);
        }

This calls doFrame():

 

void doFrame(long frameTimeNanos, int frame) {
        final long startNanos;
        synchronized (mLock) {
            if (!mFrameScheduled) {
                return; // no work to do
            }

            if (DEBUG_JANK && mDebugPrintNextFrameTimeDelta) {
                mDebugPrintNextFrameTimeDelta = false;
                Log.d(TAG, "Frame time delta: "
                        + ((frameTimeNanos - mLastFrameTimeNanos) * 0.000001f) + " ms");
            }

            long intendedFrameTimeNanos = frameTimeNanos;
            startNanos = System.nanoTime();
            final long jitterNanos = startNanos - frameTimeNanos;
            if (jitterNanos >= mFrameIntervalNanos) {
                final long skippedFrames = jitterNanos / mFrameIntervalNanos;
                if (skippedFrames >= SKIPPED_FRAME_WARNING_LIMIT) {
                    Log.i(TAG, "Skipped " + skippedFrames + " frames!  "
                            + "The application may be doing too much work on its main thread.");
                }
                final long lastFrameOffset = jitterNanos % mFrameIntervalNanos;
                if (DEBUG_JANK) {
                    Log.d(TAG, "Missed vsync by " + (jitterNanos * 0.000001f) + " ms "
                            + "which is more than the frame interval of "
                            + (mFrameIntervalNanos * 0.000001f) + " ms!  "
                            + "Skipping " + skippedFrames + " frames and setting frame "
                            + "time to " + (lastFrameOffset * 0.000001f) + " ms in the past.");
                }
                frameTimeNanos = startNanos - lastFrameOffset;
            }

            if (frameTimeNanos < mLastFrameTimeNanos) {
                if (DEBUG_JANK) {
                    Log.d(TAG, "Frame time appears to be going backwards.  May be due to a "
                            + "previously skipped frame.  Waiting for next vsync.");
                }
                scheduleVsyncLocked();
                return;
            }

            if (mFPSDivisor > 1) {
                long timeSinceVsync = frameTimeNanos - mLastFrameTimeNanos;
                if (timeSinceVsync < (mFrameIntervalNanos * mFPSDivisor) && timeSinceVsync > 0) {
                    scheduleVsyncLocked();
                    return;
                }
            }

            mFrameInfo.setVsync(intendedFrameTimeNanos, frameTimeNanos);
            mFrameScheduled = false;
            mLastFrameTimeNanos = frameTimeNanos;
        }

        try {
            Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Choreographer#doFrame");
            AnimationUtils.lockAnimationClock(frameTimeNanos / TimeUtils.NANOS_PER_MS);

            mFrameInfo.markInputHandlingStart();
            doCallbacks(Choreographer.CALLBACK_INPUT, frameTimeNanos);

            mFrameInfo.markAnimationsStart();
            doCallbacks(Choreographer.CALLBACK_ANIMATION, frameTimeNanos);

            mFrameInfo.markPerformTraversalsStart();
            doCallbacks(Choreographer.CALLBACK_TRAVERSAL, frameTimeNanos);

            doCallbacks(Choreographer.CALLBACK_COMMIT, frameTimeNanos);
        } finally {
            AnimationUtils.unlockAnimationClock();
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }

        if (DEBUG_FRAMES) {
            final long endNanos = System.nanoTime();
            Log.d(TAG, "Frame " + frame + ": Finished, took "
                    + (endNanos - startNanos) * 0.000001f + " ms, latency "
                    + (startNanos - frameTimeNanos) * 0.000001f + " ms.");
        }
    }

A few points worth highlighting. During development we occasionally see the log "Skipped N frames! The application may be doing too much work on its main thread." Many people assume it means the UI thread spent too long drawing, but a closer look shows that is not quite what it measures: at the point this check runs, measure, layout and draw for this frame have not been executed yet. The comparison is between frameTimeNanos, the vsync timestamp for this frame, and the time at which doFrame() actually starts. If the delta is at least one frame interval, the number of skipped frames is computed, and once it reaches SKIPPED_FRAME_WARNING_LIMIT (30 by default) the warning is printed. In other words, more than a frame interval passed between the native vsync dispatch and doFrame() running. The hop through onVsync() is an asynchronous Handler message whose own overhead is negligible, so the only plausible cause is that the message which runs dispatchVsync() was queued behind many other main-thread messages. That is exactly what the log text says: "The application may be doing too much work on its main thread."

So this log by itself does not prove the UI drawing is slow; it really tells you the main thread had too many messages to get through, and trimming unnecessary messages is the fundamental optimization.
As for frameTimeNanos = startNanos - lastFrameOffset: simply put, it realigns the frame time to the most recent vsync boundary before doFrame() started.
What does that mean in practice?
Say I trigger a redraw during frame 1. In theory the vsync should be acted on at frame 2, but because messages blocked the main thread for more than 16.6 ms, the vsync is effectively handled only at frame 3, and the frame time is shifted accordingly.
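A quick worked example of that adjustment (the numbers are illustrative, assuming a 60 Hz display):

    // Assume the vsync timestamp for this frame (frameTimeNanos) is at t = 0 ms,
    // but doFrame() only starts running at startNanos = 50 ms because other
    // messages occupied the main thread first.
    long frameIntervalNanos = 16_600_000L;                       // ~16.6 ms
    long jitterNanos        = 50_000_000L;                       // startNanos - frameTimeNanos
    long skippedFrames      = jitterNanos / frameIntervalNanos;  // = 3; warning logged once this reaches 30
    long lastFrameOffset    = jitterNanos % frameIntervalNanos;  // = 200_000 ns (~0.2 ms)
    // frameTimeNanos is then moved to startNanos - lastFrameOffset, i.e. aligned
    // to the most recent vsync boundary before doFrame() started.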
With that understood, the next step is actually running the drawing and input callbacks; the code then calls doCallbacks():

 

  void doCallbacks(int callbackType, long frameTimeNanos) {
        CallbackRecord callbacks;
        synchronized (mLock) {
            // We use "now" to determine when callbacks become due because it's possible
            // for earlier processing phases in a frame to post callbacks that should run
            // in a following phase, such as an input event that causes an animation to start.
            final long now = System.nanoTime();
            callbacks = mCallbackQueues[callbackType].extractDueCallbacksLocked(
                    now / TimeUtils.NANOS_PER_MS);
            if (callbacks == null) {
                return;
            }
            mCallbacksRunning = true;

            // Update the frame time if necessary when committing the frame.
            // We only update the frame time if we are more than 2 frames late reaching
            // the commit phase.  This ensures that the frame time which is observed by the
            // callbacks will always increase from one frame to the next and never repeat.
            // We never want the next frame's starting frame time to end up being less than
            // or equal to the previous frame's commit frame time.  Keep in mind that the
            // next frame has most likely already been scheduled by now so we play it
            // safe by ensuring the commit time is always at least one frame behind.
            if (callbackType == Choreographer.CALLBACK_COMMIT) {
                final long jitterNanos = now - frameTimeNanos;
                Trace.traceCounter(Trace.TRACE_TAG_VIEW, "jitterNanos", (int) jitterNanos);
                if (jitterNanos >= 2 * mFrameIntervalNanos) {
                    final long lastFrameOffset = jitterNanos % mFrameIntervalNanos
                            + mFrameIntervalNanos;
                    if (DEBUG_JANK) {
                        Log.d(TAG, "Commit callback delayed by " + (jitterNanos * 0.000001f)
                                + " ms which is more than twice the frame interval of "
                                + (mFrameIntervalNanos * 0.000001f) + " ms!  "
                                + "Setting frame time to " + (lastFrameOffset * 0.000001f)
                                + " ms in the past.");
                        mDebugPrintNextFrameTimeDelta = true;
                    }
                    frameTimeNanos = now - lastFrameOffset;
                    mLastFrameTimeNanos = frameTimeNanos;
                }
            }
        }
        try {
            Trace.traceBegin(Trace.TRACE_TAG_VIEW, CALLBACK_TRACE_TITLES[callbackType]);
            for (CallbackRecord c = callbacks; c != null; c = c.next) {
                if (DEBUG_FRAMES) {
                    Log.d(TAG, "RunCallback: type=" + callbackType
                            + ", action=" + c.action + ", token=" + c.token
                            + ", latencyMillis=" + (SystemClock.uptimeMillis() - c.dueTime));
                }
                c.run(frameTimeNanos);
            }
        } finally {
            synchronized (mLock) {
                mCallbacksRunning = false;
                do {
                    final CallbackRecord next = callbacks.next;
                    recycleCallbackLocked(callbacks);
                    callbacks = next;
                } while (callbacks != null);
            }
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
    }

which finally executes c.run(frameTimeNanos) to invoke each callback (see the small sketch of CallbackRecord.run() right below).
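For reference, CallbackRecord.run() just distinguishes between a plain Runnable (such as mTraversalRunnable posted by ViewRootImpl) and a FrameCallback posted via postFrameCallback(), which matters for the second detection approach later (from the AOSP Choreographer source, shown here for reference):

        public void run(long frameTimeNanos) {
            if (token == FRAME_CALLBACK_TOKEN) {
                ((FrameCallback) action).doFrame(frameTimeNanos);
            } else {
                ((Runnable) action).run();
            }
        }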
Here is a diagram to help visualize the overall timing:

 

[Figure: timeline of vsync registration and dispatchVsync() across several frames]

 

Briefly, what the diagram shows:
At the start a vsync is requested, so dispatchVsync() runs in the next frame; since no messages are blocking, that frame's signal is received and drawing happens. After drawing finishes a vsync is requested again; note that requesting the signal more than once within the same frame has no additional effect, although the callback still runs. In the following frame the message delay (the gap between the native dispatch and dispatchVsync() executing) stays under 16.6 ms, so that frame still gets its signal. That frame's work then takes longer than one frame interval, so no vsync is requested and, naturally, dispatchVsync() is not called for the next one. Finally, because message delays exceeded 16.6 ms, even though a vsync was requested at frame 3, dispatchVsync() does not actually execute until frame 5.

Jank Optimization

Having briefly analyzed the Choreographer mechanism, let's look at the principles behind the two common approaches to jank detection:
1. Matching the logs printed by the UI thread's Looper;

2. Using Choreographer.FrameCallback.

The first is how BlockCanary works: it uses the time Looper.loop() spends dispatching each message as the basis for deciding whether the main thread is blocked.

 

public static void loop() {
        final Looper me = myLooper();
        if (me == null) {
            throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
        }
        final MessageQueue queue = me.mQueue;

        // Make sure the identity of this thread is that of the local process,
        // and keep track of what that identity token actually is.
        Binder.clearCallingIdentity();
        final long ident = Binder.clearCallingIdentity();

        // Allow overriding a threshold with a system prop. e.g.
        // adb shell 'setprop log.looper.1000.main.slow 1 && stop && start'
        final int thresholdOverride =
                SystemProperties.getInt("log.looper."
                        + Process.myUid() + "."
                        + Thread.currentThread().getName()
                        + ".slow", 0);

        boolean slowDeliveryDetected = false;

        for (;;) {
            Message msg = queue.next(); // might block
            if (msg == null) {
                // No message indicates that the message queue is quitting.
                return;
            }

            // This must be in a local variable, in case a UI event sets the logger
            final Printer logging = me.mLogging;
            if (logging != null) {
                logging.println(">>>>> Dispatching to " + msg.target + " " +
                        msg.callback + ": " + msg.what);
            }

            final long traceTag = me.mTraceTag;
            long slowDispatchThresholdMs = me.mSlowDispatchThresholdMs;
            long slowDeliveryThresholdMs = me.mSlowDeliveryThresholdMs;
            if (thresholdOverride > 0) {
                slowDispatchThresholdMs = thresholdOverride;
                slowDeliveryThresholdMs = thresholdOverride;
            }
            final boolean logSlowDelivery = (slowDeliveryThresholdMs > 0) && (msg.when > 0);
            final boolean logSlowDispatch = (slowDispatchThresholdMs > 0);

            final boolean needStartTime = logSlowDelivery || logSlowDispatch;
            final boolean needEndTime = logSlowDispatch;

            if (traceTag != 0 && Trace.isTagEnabled(traceTag)) {
                Trace.traceBegin(traceTag, msg.target.getTraceName(msg));
            }

            final long dispatchStart = needStartTime ? SystemClock.uptimeMillis() : 0;
            final long dispatchEnd;
            try {
                msg.target.dispatchMessage(msg);
                dispatchEnd = needEndTime ? SystemClock.uptimeMillis() : 0;
            } finally {
                if (traceTag != 0) {
                    Trace.traceEnd(traceTag);
                }
            }
            if (logSlowDelivery) {
                if (slowDeliveryDetected) {
                    if ((dispatchStart - msg.when) <= 10) {
                        Slog.w(TAG, "Drained");
                        slowDeliveryDetected = false;
                    }
                } else {
                    if (showSlowLog(slowDeliveryThresholdMs, msg.when, dispatchStart, "delivery",
                            msg)) {
                        // Once we write a slow delivery log, suppress until the queue drains.
                        slowDeliveryDetected = true;
                    }
                }
            }
            if (logSlowDispatch) {
                showSlowLog(slowDispatchThresholdMs, dispatchStart, dispatchEnd, "dispatch", msg);
            }

            if (logging != null) {
                logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
            }
...

In other words, BlockCanary hooks into this logging Printer (a minimal sketch of the idea follows below); a dispatch that takes more than some threshold, commonly 1000 ms, is treated as a block. But wait: don't we usually say a frame is janky once it exceeds 16.6 ms? Why use 500 ms, or even 1000 ms, as the threshold here?
Keep in mind that everything on Android runs on the Looper mechanism, so the rule is simply: any single message whose execution exceeds the threshold is considered a block. For example, we might be accessing a database on the main thread, parsing JSON, writing files, or doing expensive work such as a Gaussian blur.
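Here is a minimal sketch of that Printer-based idea, using the public Looper.setMessageLogging() API. The class name and threshold are illustrative; this is not BlockCanary's actual implementation:

    import android.os.Looper;
    import android.os.SystemClock;
    import android.util.Log;
    import android.util.Printer;

    public class LooperBlockDetector {
        private static final long BLOCK_THRESHOLD_MS = 1000L; // tune as needed

        public static void install() {
            Looper.getMainLooper().setMessageLogging(new Printer() {
                private long startMs;

                @Override
                public void println(String x) {
                    // Looper.loop() prints ">>>>> Dispatching ..." before and
                    // "<<<<< Finished ..." after dispatching each message.
                    if (x.startsWith(">>>>> Dispatching")) {
                        startMs = SystemClock.uptimeMillis();
                    } else if (x.startsWith("<<<<< Finished")) {
                        long costMs = SystemClock.uptimeMillis() - startMs;
                        if (costMs > BLOCK_THRESHOLD_MS) {
                            // A real tool would dump main-thread stack traces here.
                            Log.w("LooperBlockDetector", "Main thread blocked for " + costMs + " ms");
                        }
                    }
                }
            });
        }
    }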
BlockCanary can catch those cases. But what about frame-level jank? We could of course lower the threshold to, say, 50 ms, but then far too much would be reported. That is where the second mechanism, Choreographer.FrameCallback, comes in. It is typically used like this:

 

public class BlockDetectByChoreographer {
    public static void start() {
        Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
            long lastFrameTimeNanos = 0;

            @Override
            public void doFrame(long frameTimeNanos) {
                if (lastFrameTimeNanos == 0) {
                    lastFrameTimeNanos = frameTimeNanos;
                }
                long diffMs = TimeUnit.MILLISECONDS.convert(
                        frameTimeNanos - lastFrameTimeNanos, TimeUnit.NANOSECONDS);
                if (diffMs > 16.6f) {
                    // number of frames dropped since the last callback
                    long droppedCount = (long) (diffMs / 16.6f);
                }
                lastFrameTimeNanos = frameTimeNanos;
                if (LogMonitor.getInstance().isMonitor()) {
                    LogMonitor.getInstance().removeMonitor();
                }
                LogMonitor.getInstance().startMonitor();
                Choreographer.getInstance().postFrameCallback(this);
            }
        });
    }
}

We can see that postFrameCallback() in turn calls postFrameCallbackDelayed():

 

  public void postFrameCallbackDelayed(FrameCallback callback, long delayMillis) {
        if (callback == null) {
            throw new IllegalArgumentException("callback must not be null");
        }

        postCallbackDelayedInternal(CALLBACK_ANIMATION,
                callback, FRAME_CALLBACK_TOKEN, delayMillis);
    }

Note that the callback is posted as CALLBACK_ANIMATION with FRAME_CALLBACK_TOKEN. As long as we keep re-posting it, doFrame() is invoked on every rendered frame, and by calling postFrameCallback(this) inside doFrame() we keep the loop going, which gives us a continuous way to monitor UI jank,
i.e. roughly the time spent in onMeasure, onLayout and onDraw plus whatever else ran on the main thread during that frame.
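Building on the same idea, here is a minimal frame-rate monitor sketch (the class name is illustrative; it simply counts doFrame() callbacks over one-second windows):

    import android.util.Log;
    import android.view.Choreographer;
    import java.util.concurrent.TimeUnit;

    public class FpsMonitor implements Choreographer.FrameCallback {
        private long windowStartNanos;
        private int frames;

        public void start() {
            Choreographer.getInstance().postFrameCallback(this);
        }

        @Override
        public void doFrame(long frameTimeNanos) {
            if (windowStartNanos == 0) {
                windowStartNanos = frameTimeNanos;
            }
            frames++;
            long elapsedNanos = frameTimeNanos - windowStartNanos;
            if (elapsedNanos >= TimeUnit.SECONDS.toNanos(1)) {
                float fps = frames * 1_000_000_000f / elapsedNanos;
                Log.d("FpsMonitor", "FPS: " + fps);
                windowStartNanos = frameTimeNanos;
                frames = 0;
            }
            Choreographer.getInstance().postFrameCallback(this);
        }
    }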

Original post: https://blog.csdn.net/qijingwang/article/details/110558172
