
Implementing the WeChat Moments Video Trimming Feature on iOS


Preface

WeChat is everywhere these days and its features keep getting more capable. You may have noticed the video-trimming feature when posting a video to WeChat Moments, or the similar video-editing feature in Apple's own camera app (my guess is that WeChat modeled its version on Apple's). It is genuinely handy, and since I have been digging into audio/video work lately, I went ahead and implemented it myself.

The feature looks simple enough, but the implementation had its share of pitfalls. This post is partly a record for myself and partly a second pass over the implementation, which should also make the code easier to follow.

Result

Let's first look at what I ended up with:

[Demo GIF: trimming a video, WeChat Moments style]

Implementation

Breaking down the implementation

The whole feature splits into three parts:

  • Video playback

For this part, a single self-contained video player class is all we need.

  • The scrubber view at the bottom

This is the fiddly part; it breaks down into four pieces: the gray masks, the left and right drag handles, the two lines running along the top and bottom between the handles, and the thumbnail strip view.

  • Controller-level view assembly and feature logic

Encapsulating the video player

Playback is built on three classes: AVPlayer, AVPlayerLayer, and AVPlayerItem. Since all of the player's events are driven by KVO, block callbacks are exposed so callers can observe them from outside.

#import "fofmovieplayer.h"
@interface fofmovieplayer()
{
  avplayerlooper *_playerlooper;
  avplayeritem *_playitem;
  bool _loop;
}
@property(nonatomic,strong)nsurl *url;
@property(nonatomic,strong)avplayer *player;
@property(nonatomic,strong)avplayerlayer *playerlayer;
@property(nonatomic,strong)avplayeritem *playitem;
@property (nonatomic,assign) cmtime duration;
@end
@implementation fofmovieplayer
-(instancetype)initwithframe:(cgrect)frame url:(nsurl *)url superlayer:(calayer *)superlayer{
  self = [super init];
  if (self) {
    [self initplayers:superlayer];
    _playerlayer.frame = frame;
    self.url = url;
  }
  return self;
}
-(instancetype)initwithframe:(cgrect)frame url:(nsurl *)url superlayer:(calayer *)superlayer loop:(bool)loop{
  self = [self initwithframe:frame url:url superlayer:superlayer];
  if (self) {
    _loop = loop;
  }
  return self;
}
- (void)initplayers:(calayer *)superlayer{
  self.player = [[avplayer alloc] init];
  self.playerlayer = [avplayerlayer playerlayerwithplayer:self.player];
  self.playerlayer.videogravity = avlayervideogravityresize;
  [superlayer addsublayer:self.playerlayer];
}
- (void)initloopplayers:(calayer *)superlayer{
  self.player = [[avqueueplayer alloc] init];
  self.playerlayer = [avplayerlayer playerlayerwithplayer:self.player];
  self.playerlayer.videogravity = avlayervideogravityresize;
  [superlayer addsublayer:self.playerlayer];
}
-(void)fof_play{
  [self.player play];
}
-(void)fof_pause{
  [self.player pause];
}
#pragma mark - observe
-(void)observevalueforkeypath:(nsstring *)keypath ofobject:(id)object change:(nsdictionary*)change context:(void *)context{
  if ([keypath isequaltostring:@"status"]) {
    avplayeritem *item = (avplayeritem *)object;
    avplayeritemstatus status = [[change objectforkey:@"new"] intvalue]; // 获取更改后的状态
    if (status == avplayeritemstatusreadytoplay) {
      _duration = item.duration;//只有在此状态下才能获取,不能在avplayeritem初始化后马上获取
      nslog(@"准备播放");
      if (self.blockstatusreadyplay) {
        self.blockstatusreadyplay(item);
      }
    } else if (status == avplayeritemstatusfailed) {
      if (self.blockstatusfailed) {
        self.blockstatusfailed();
      }
      avplayeritem *item = (avplayeritem *)object;
      nslog(@"%@",item.error);
      nslog(@"avplayerstatusfailed");
    } else {
      self.blockstatusunknown();
      nslog(@"%@",item.error);
      nslog(@"avplayerstatusunknown");
    }
  }else if ([keypath isequaltostring:@"tracking"]){
    nsinteger status = [change[@"new"] integervalue];
    if (self.blocktracking) {
      self.blocktracking(status);
    }
    if (status) {//正在拖动
      [self.player pause];
    }else{//停止拖动
    }
  }else if ([keypath isequaltostring:@"loadedtimeranges"]){
    nsarray *array = _playitem.loadedtimeranges;
    cmtimerange timerange = [array.firstobject cmtimerangevalue];//本次缓冲时间范围
    cgfloat startseconds = cmtimegetseconds(timerange.start);
    cgfloat durationseconds = cmtimegetseconds(timerange.duration);
    nstimeinterval totalbuffer = startseconds + durationseconds;//缓冲总长度
    double progress = totalbuffer/cmtimegetseconds(_duration);
    if (self.blockloadedtimeranges) {
      self.blockloadedtimeranges(progress);
    }
    nslog(@"当前缓冲时间:%f",totalbuffer);
  }else if ([keypath isequaltostring:@"playbackbufferempty"]){
    nslog(@"缓存不够,不能播放!");
  }else if ([keypath isequaltostring:@"playbacklikelytokeepup"]){
    if (self.blockplaybacklikelytokeepup) {
      self.blockplaybacklikelytokeepup([change[@"new"] boolvalue]);
    }
  }
}
-(void)seturl:(nsurl *)url{
  _url = url;
  [self.player replacecurrentitemwithplayeritem:self.playitem];
}
-(avplayeritem *)playitem{
  _playitem = [[avplayeritem alloc] initwithurl:_url];
  //监听播放器的状态,准备好播放、失败、未知错误
  [_playitem addobserver:self forkeypath:@"status" options:nskeyvalueobservingoptionnew context:nil];
  //  监听缓存的时间
  [_playitem addobserver:self forkeypath:@"loadedtimeranges" options:nskeyvalueobservingoptionnew context:nil];
  //  监听获取当缓存不够,视频加载不出来的情况:
  [_playitem addobserver:self forkeypath:@"playbackbufferempty" options:nskeyvalueobservingoptionnew context:nil];
  //  用于监听缓存足够播放的状态
  [_playitem addobserver:self forkeypath:@"playbacklikelytokeepup" options:nskeyvalueobservingoptionnew context:nil];
  [[nsnotificationcenter defaultcenter] addobserver:self selector:@selector(private_playermoviefinish) name:avplayeritemdidplaytoendtimenotification object:nil];
  return _playitem;
}
- (void)private_playermoviefinish{
  nslog(@"播放结束");
  if (self.blockplaytoendtime) {
    self.blockplaytoendtime();
  }
  if (_loop) {//默认提供一个循环播放的功能
    [self.player pause];
    cmtime time = cmtimemake(1, 1);
    __weak typeof(self)this = self;
    [self.player seektotime:time completionhandler:^(bool finished) {
      [this.player play];
    }];
  }
}
-(void)dealloc{
  nslog(@"-----销毁-----");
}
@end

I won't go deep into the player here; I'm planning a separate post dedicated to video players.
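
For context, here is a minimal usage sketch of the player inside a controller. The property names (moviePlayer, totalSeconds) match the controller snippets later in this post; the asset name and frame are placeholders of mine, not from the original source.

// Hedged usage sketch: create the player, start playback once the item is ready.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"demo" withExtension:@"mp4"]; // placeholder asset
self.moviePlayer = [[FOFMoviePlayer alloc] initWithFrame:CGRectMake(0, 64, CGRectGetWidth(self.view.bounds), 300)
                                                     url:url
                                              superLayer:self.view.layer
                                                    loop:YES];
__weak typeof(self) this = self;
self.moviePlayer.blockStatusReadyPlay = ^(AVPlayerItem *item) {
  this.totalSeconds = CMTimeGetSeconds(item.duration); // used later to map handle offsets to times
  [this.moviePlayer fof_play];
};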

The scrubber view at the bottom

The gray masks

The gray masks are the easy part; plain UIView instances do the job:

self.leftMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
self.leftMaskView.backgroundColor = [UIColor grayColor];
self.leftMaskView.alpha = 0.8;
[self addSubview:self.leftMaskView];
self.rightMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
self.rightMaskView.backgroundColor = [UIColor grayColor];
self.rightMaskView.alpha = 0.8;
[self addSubview:self.rightMaskView];

The two lines between the handles

I wrapped these two lines in a small dedicated view, Line. My first thought was that a plain UIView would be enough, but there was a problem: the lines couldn't keep pace with the dragging handles and always lagged behind. Redrawing the endpoints of a custom-drawn line keeps them in sync.

@implementation Line
-(void)setBeginPoint:(CGPoint)beginPoint{
  _beginPoint = beginPoint;
  [self setNeedsDisplay];
}
-(void)setEndPoint:(CGPoint)endPoint{
  _endPoint = endPoint;
  [self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect {
  CGContextRef context = UIGraphicsGetCurrentContext();
  CGContextSetLineWidth(context, 3);
  CGContextSetStrokeColorWithColor(context, [UIColor colorWithWhite:0.9 alpha:1].CGColor);
  CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
  CGContextAddLineToPoint(context, self.endPoint.x, self.endPoint.y);
  CGContextStrokePath(context);
}
@end

The thumbnail strip view

This is wrapped in a VideoPieces view, which assembles the handle, line, and mask logic and displays the thumbnails. Since there are only 10 images, the strip itself is just a for loop adding 10 UIImageViews (a sketch of that loop follows the class listing below).

@interface VideoPieces()
{
  CGPoint _beginPoint;
}
@property(nonatomic,strong) Haft *leftHaft;
@property(nonatomic,strong) Haft *rightHaft;
@property(nonatomic,strong) Line *topLine;
@property(nonatomic,strong) Line *bottomLine;
@property(nonatomic,strong) UIView *leftMaskView;
@property(nonatomic,strong) UIView *rightMaskView;
@end
@implementation VideoPieces
-(instancetype)initWithFrame:(CGRect)frame{
  self = [super initWithFrame:frame];
  if (self) {
    [self initSubviews:frame];
  }
  return self;
}
- (void)initSubviews:(CGRect)frame{
  CGFloat height = CGRectGetHeight(frame);
  CGFloat width = CGRectGetWidth(frame);
  CGFloat minGap = 30;
  CGFloat widthHaft = 10;
  CGFloat heightLine = 3;
  _leftHaft = [[Haft alloc] initWithFrame:CGRectMake(0, 0, widthHaft, height)];
  _leftHaft.alpha = 0.8;
  _leftHaft.backgroundColor = [UIColor colorWithWhite:0.9 alpha:1];
  _leftHaft.rightEdgeInset = 20;
  _leftHaft.leftEdgeInset = 5;
  __weak typeof(self) this = self;
  // NOTE: the original post garbled the span between the left- and right-handle
  // conditions; the left-handle body, its moveEnd block, and the right-handle
  // setup are reconstructed here by mirroring the surviving right-handle code.
  [_leftHaft setBlockMove:^(CGPoint point) {
    CGFloat maxX = this.rightHaft.frame.origin.x-minGap;
    if (point.x<=maxX) {
      this.topLine.beginPoint = CGPointMake(point.x, heightLine/2.0);
      this.bottomLine.beginPoint = CGPointMake(point.x, heightLine/2.0);
      this.leftHaft.frame = CGRectMake(point.x, 0, widthHaft, height);
      this.leftMaskView.frame = CGRectMake(0, 0, point.x, height);
      if (this.blockSeekOffLeft) {
        this.blockSeekOffLeft(point.x);
      }
    }
  }];
  [_leftHaft setBlockMoveEnd:^{
    if (this.blockMoveEnd) {
      this.blockMoveEnd();
    }
  }];
  _rightHaft = [[Haft alloc] initWithFrame:CGRectMake(width-widthHaft, 0, widthHaft, height)];
  _rightHaft.alpha = 0.8;
  _rightHaft.backgroundColor = [UIColor colorWithWhite:0.9 alpha:1];
  _rightHaft.leftEdgeInset = 20;
  _rightHaft.rightEdgeInset = 5;
  [_rightHaft setBlockMove:^(CGPoint point) {
    CGFloat minX = this.leftHaft.frame.origin.x+minGap;
    if (point.x>=minX) {
      this.topLine.endPoint = CGPointMake(point.x-widthHaft, heightLine/2.0);
      this.bottomLine.endPoint = CGPointMake(point.x-widthHaft, heightLine/2.0);
      this.rightHaft.frame = CGRectMake(point.x, 0, widthHaft, height);
      this.rightMaskView.frame = CGRectMake(point.x+widthHaft, 0, width-point.x-widthHaft, height);
      if (this.blockSeekOffRight) {
        this.blockSeekOffRight(point.x);
      }
    }
  }];
  [_rightHaft setBlockMoveEnd:^{
    if (this.blockMoveEnd) {
      this.blockMoveEnd();
    }
  }];
  _topLine = [[Line alloc] init];
  _topLine.alpha = 0.8;
  _topLine.frame = CGRectMake(widthHaft, 0, width-2*widthHaft, heightLine);
  _topLine.beginPoint = CGPointMake(0, heightLine/2.0);
  _topLine.endPoint = CGPointMake(CGRectGetWidth(_topLine.bounds), heightLine/2.0);
  _topLine.backgroundColor = [UIColor clearColor];
  [self addSubview:_topLine];
  _bottomLine = [[Line alloc] init];
  _bottomLine.alpha = 0.8;
  _bottomLine.frame = CGRectMake(widthHaft, height-heightLine, width-2*widthHaft, heightLine);
  _bottomLine.beginPoint = CGPointMake(0, heightLine/2.0);
  _bottomLine.endPoint = CGPointMake(CGRectGetWidth(_bottomLine.bounds), heightLine/2.0);
  _bottomLine.backgroundColor = [UIColor clearColor];
  [self addSubview:_bottomLine];
  [self addSubview:_leftHaft];
  [self addSubview:_rightHaft];
  self.leftMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
  self.leftMaskView.backgroundColor = [UIColor grayColor];
  self.leftMaskView.alpha = 0.8;
  [self addSubview:self.leftMaskView];
  self.rightMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
  self.rightMaskView.backgroundColor = [UIColor grayColor];
  self.rightMaskView.alpha = 0.8;
  [self addSubview:self.rightMaskView];
}
-(void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
  UITouch *touch = touches.anyObject;
  _beginPoint = [touch locationInView:self];
}
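
The thumbnail loop mentioned above does not appear in the listing. A minimal sketch of it, under my own assumptions (the method name setImages: and the layout math are mine, not from the original source), might look like:

// Hypothetical helper on VideoPieces: lay the thumbnails side by side between the handles.
- (void)setImages:(NSArray<UIImage *> *)images {
  CGFloat widthHaft = 10; // same handle width as in initSubviews:
  CGFloat w = (CGRectGetWidth(self.bounds) - 2*widthHaft)/images.count;
  CGFloat h = CGRectGetHeight(self.bounds);
  for (NSInteger i = 0; i < images.count; i++) {
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(widthHaft + i*w, 0, w, h)];
    imageView.image = images[i];
    imageView.clipsToBounds = YES;
    imageView.contentMode = UIViewContentModeScaleAspectFill;
    [self insertSubview:imageView atIndex:0]; // keep handles, lines, and masks on top
  }
}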

Implementing the handles

One refinement here concerns drag sensitivity. Initially the handles were not very responsive: your finger would often move while the handle stayed put.

The fix was simply to enlarge the touch-receiving area by overriding -(BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event.

@implementation Haft
-(instancetype)initWithFrame:(CGRect)frame{
  self = [super initWithFrame:frame];
  if (self) {
    self.userInteractionEnabled = YES;
  }
  return self;
}
-(BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event{
  // Expand the hit-test area beyond the view's bounds by the edge insets
  CGRect rect = CGRectMake(self.bounds.origin.x-self.leftEdgeInset, self.bounds.origin.y-self.topEdgeInset, CGRectGetWidth(self.bounds)+self.leftEdgeInset+self.rightEdgeInset, CGRectGetHeight(self.bounds)+self.bottomEdgeInset+self.topEdgeInset);
  if (CGRectContainsPoint(rect, point)) {
    return YES;
  }
  return NO;
}
-(void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
  NSLog(@"Touch began");
}
-(void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
  NSLog(@"Touch moved");
  UITouch *touch = touches.anyObject;
  CGPoint point = [touch locationInView:self.superview];
  CGFloat maxX = CGRectGetWidth(self.superview.bounds)-CGRectGetWidth(self.bounds);
  if (point.x>maxX) {
    point.x = maxX;
  }
  if (point.x>=0&&point.x<=maxX&&self.blockMove) {
    self.blockMove(point);
  }
}
-(void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
  if (self.blockMoveEnd) {
    self.blockMoveEnd();
  }
}
- (void)drawRect:(CGRect)rect {
  // Draw the two short vertical grip lines on the handle
  CGFloat width = CGRectGetWidth(self.bounds);
  CGFloat height = CGRectGetHeight(self.bounds);
  CGFloat lineWidth = 1.5;
  CGFloat lineHeight = 12;
  CGFloat gap = (width-lineWidth*2)/3.0;
  CGFloat lineY = (height-lineHeight)/2.0;
  CGContextRef context = UIGraphicsGetCurrentContext();
  CGContextSetLineWidth(context, lineWidth);
  CGContextSetStrokeColorWithColor(context, [[UIColor grayColor] colorWithAlphaComponent:0.8].CGColor);
  CGContextMoveToPoint(context, gap+lineWidth/2, lineY);
  CGContextAddLineToPoint(context, gap+lineWidth/2, lineY+lineHeight);
  CGContextStrokePath(context);
  CGContextMoveToPoint(context, gap*2+lineWidth+lineWidth/2, lineY);
  CGContextAddLineToPoint(context, gap*2+lineWidth+lineWidth/2, lineY+lineHeight);
  CGContextStrokePath(context);
}
@end

Controller-level view assembly and feature logic

This part of the logic is the most important and the most complex.

Grabbing the 10 thumbnails

- (NSArray *)getVideoThumbnail:(NSString *)path count:(NSInteger)count splitCompleteBlock:(void(^)(BOOL success, NSMutableArray *splitImgs))splitCompleteBlock {
  AVAsset *asset = [AVAsset assetWithURL:[NSURL fileURLWithPath:path]];
  NSMutableArray *arrayImages = [NSMutableArray array];
  [asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
//    generator.maximumSize = CGSizeMake(480,136); // with CGSizeMake(480,136) the generated image comes out at {240, 136}, scaled proportionally to the actual size
    generator.appliesPreferredTrackTransform = YES; // keeps the image orientation correct, e.g. for videos shot with the phone rotated
    // Without the two tolerance settings below the requested times are noticeably off; with them the error is tiny
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    Float64 seconds = CMTimeGetSeconds(asset.duration);
    NSMutableArray *array = [NSMutableArray array];
    for (int i = 0; i < count; i++) {
      CMTime time = CMTimeMakeWithSeconds(i*(seconds/10.0),1); // the timestamp to capture
      [array addObject:[NSValue valueWithCMTime:time]];
    }
    __block int i = 0;
    [generator generateCGImagesAsynchronouslyForTimes:array completionHandler:^(CMTime requestedTime, CGImageRef _Nullable imageRef, CMTime actualTime, AVAssetImageGeneratorResult result, NSError * _Nullable error) {
      i++;
      if (result==AVAssetImageGeneratorSucceeded) {
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        [arrayImages addObject:image];
      }else{
        NSLog(@"Failed to grab a thumbnail!");
      }
      if (i==count) {
        dispatch_async(dispatch_get_main_queue(), ^{
          splitCompleteBlock(YES,arrayImages);
        });
      }
    }];
  }];
  return arrayImages; // NOTE: filled asynchronously; callers should rely on the completion block
}

The 10 images are easy to get, but note one thing: dispatch the callback asynchronously onto the main queue! Otherwise the images show up with a noticeable delay.
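
Wiring the result into the thumbnail strip might look like this (a sketch: setImages: is the hypothetical helper from earlier, and videoPieces is the controller's strip view):

__weak typeof(self) this = self;
[self getVideoThumbnail:path count:10 splitCompleteBlock:^(BOOL success, NSMutableArray *splitImgs) {
  if (success) {
    [this.videoPieces setImages:splitImgs]; // already on the main queue, per the note above
  }
}];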

Listening to the left and right handle events

[_videoPieces setBlockSeekOffLeft:^(CGFloat offX) {
  this.seeking = YES;
  [this.moviePlayer fof_pause];
  this.lastStartSeconds = this.totalSeconds*offX/CGRectGetWidth(this.videoPieces.bounds);
  [this.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(this.lastStartSeconds, 1) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}];
[_videoPieces setBlockSeekOffRight:^(CGFloat offX) {
  this.seeking = YES;
  [this.moviePlayer fof_pause];
  this.lastEndSeconds = this.totalSeconds*offX/CGRectGetWidth(this.videoPieces.bounds);
  [this.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(this.lastEndSeconds, 1) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}];

Here the handle events are observed and the offsets are converted into times, which set the playback start and end. The mapping is linear: for example, with a 60 s video and a 300 pt strip, a handle at x = 100 maps to 60 × 100/300 = 20 s.

Loop playback

self.timeObserverToken = [self.moviePlayer.player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(0.5, NSEC_PER_SEC) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
  if (!this.seeking) {
    if (fabs(CMTimeGetSeconds(time)-this.lastEndSeconds)<=0.02) {
      [this.moviePlayer fof_pause];
      [this private_replayAtBeginTime:this.lastStartSeconds];
    }
  }
}];
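
-private_replayAtBeginTime: itself is not shown in this post; presumably it just seeks back to the left handle's time and resumes playback, along these lines (a sketch, assuming the same player wrapper):

// Presumed implementation: seek back to the clip start and resume.
- (void)private_replayAtBeginTime:(CGFloat)beginSeconds {
  __weak typeof(self) this = self;
  [self.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(beginSeconds, 1) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero completionHandler:^(BOOL finished) {
    [this.moviePlayer fof_play];
  }];
}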

Two things to watch out for here:

1. The observer added with addPeriodicTimeObserverForInterval must be removed, otherwise it leaks memory:

-(void)dealloc{
  [self.moviePlayer.player removeTimeObserver:self.timeObserverToken];
}

2. We observe the playback time and check whether it has reached the time the right handle was dragged to; if it has, playback restarts from the left handle's time. I puzzled over how to "trim while playing" for quite a while and nearly went down the wrong path of actually cutting the video on every drag. In fact no cutting is needed during preview: controlling the start and end playback times is enough, and the video only needs to be cut once, at the very end.
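
That final one-time cut is not shown in this post. Given the lastStartSeconds/lastEndSeconds pair above, a hedged sketch using AVAssetExportSession (the preset, output path, and file type are my choices, not necessarily the original author's) could be:

- (void)exportClipWithAsset:(AVAsset *)asset {
  AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
  session.outputFileType = AVFileTypeMPEG4;
  session.outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"clip.mp4"]];
  // Export only the range selected by the two handles.
  CMTime start = CMTimeMakeWithSeconds(self.lastStartSeconds, 600);
  CMTime duration = CMTimeMakeWithSeconds(self.lastEndSeconds - self.lastStartSeconds, 600);
  session.timeRange = CMTimeRangeMake(start, duration);
  [session exportAsynchronouslyWithCompletionHandler:^{
    if (session.status == AVAssetExportSessionStatusCompleted) {
      NSLog(@"Export finished: %@", session.outputURL);
    } else {
      NSLog(@"Export failed: %@", session.error);
    }
  }];
}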

Summary

Implementing this WeChat-style mini-video editor surfaced quite a few small problems along the way, but after digging into them carefully it all came together in the end. Quite a relief. Haha.

Source code

GitHub source code

