Matching Images by Feature Points with OpenCvSharp
程序员文章站
2022-08-08 18:04:19
Most mobile games today boil down to repetitive actions: one action takes a long time to finish, and when it does you move on to the next. That gets tedious, so I decided to write a game helper myself.
The helper itself is not hard: it keeps taking screenshots, then searches each screenshot for small pre-captured images that represent the buttons or trigger conditions of the corresponding actions.
Once a match is found, take the top-left coordinate of that sub-region and drive the mouse or keyboard through the Windows API.
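The mouse-driving step can be sketched with a user32 P/Invoke. This is a minimal sketch, not part of the original post; it assumes the click coordinates come from the image search, and uses the classic `SetCursorPos`/`mouse_event` entry points (`SendInput` is the modern alternative):

```csharp
using System;
using System.Runtime.InteropServices;

static class MouseClicker
{
    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int x, int y);

    [DllImport("user32.dll")]
    static extern void mouse_event(uint dwFlags, uint dx, uint dy, uint dwData, UIntPtr dwExtraInfo);

    const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
    const uint MOUSEEVENTF_LEFTUP = 0x0004;

    // Move the cursor to (x, y) in screen coordinates, then press and release the left button.
    public static void ClickAt(int x, int y)
    {
        SetCursorPos(x, y);
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
    }
}
```

In practice you would click at the found top-left coordinate plus half the template's width and height, so the click lands in the middle of the button rather than on its corner.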
The hardest part is the image search itself: it has to be accurate, and ideally it should work across different screen resolutions. That is why, on top of plain template matching, this post adds SIFT- and SURF-based feature point matching.
While writing it I found plenty of reference material in C++ and Python but very little native C#, so I translated the code over (with minor changes).
SIFT algorithm
public static Bitmap MatchPicBySift(Bitmap imgSrc, Bitmap imgSub)
{
    using (Mat matSrc = imgSrc.ToMat())
    using (Mat matTo = imgSub.ToMat())
    using (Mat matSrcRet = new Mat())
    using (Mat matToRet = new Mat())
    {
        KeyPoint[] keyPointsSrc, keyPointsTo;
        using (var sift = OpenCvSharp.XFeatures2D.SIFT.Create())
        {
            sift.DetectAndCompute(matSrc, null, out keyPointsSrc, matSrcRet);
            sift.DetectAndCompute(matTo, null, out keyPointsTo, matToRet);
        }
        using (var bfMatcher = new OpenCvSharp.BFMatcher())
        {
            var matches = bfMatcher.KnnMatch(matSrcRet, matToRet, k: 2);
            var pointsSrc = new List<Point2f>();
            var pointsDst = new List<Point2f>();
            var goodMatches = new List<DMatch>();
            // Ratio test: keep a match only when the best candidate is clearly better than the second best
            foreach (DMatch[] items in matches.Where(x => x.Length > 1))
            {
                if (items[0].Distance < 0.5 * items[1].Distance)
                {
                    pointsSrc.Add(keyPointsSrc[items[0].QueryIdx].Pt);
                    pointsDst.Add(keyPointsTo[items[0].TrainIdx].Pt);
                    goodMatches.Add(items[0]);
                    Console.WriteLine($"{keyPointsSrc[items[0].QueryIdx].Pt.X}, {keyPointsSrc[items[0].QueryIdx].Pt.Y}");
                }
            }

            var outMat = new Mat();

            // Filter the matches with RANSAC
            var pSrc = pointsSrc.ConvertAll(Point2fToPoint2d);
            var pDst = pointsDst.ConvertAll(Point2fToPoint2d);
            var outMask = new Mat();
            // Skip the filtering step when there are no raw matches
            if (pSrc.Count > 0 && pDst.Count > 0)
                Cv2.FindHomography(pSrc, pDst, HomographyMethods.Ransac, mask: outMask);
            // Apply the RANSAC mask only when more than 10 matches survive; otherwise fall back
            // to the raw matches (with very few matches, RANSAC can leave 0 of them).
            if (outMask.Rows > 10)
            {
                byte[] maskBytes = new byte[outMask.Rows * outMask.Cols];
                outMask.GetArray(0, 0, maskBytes);
                Cv2.DrawMatches(matSrc, keyPointsSrc, matTo, keyPointsTo, goodMatches, outMat, matchesMask: maskBytes, flags: DrawMatchesFlags.NotDrawSinglePoints);
            }
            else
                Cv2.DrawMatches(matSrc, keyPointsSrc, matTo, keyPointsTo, goodMatches, outMat, flags: DrawMatchesFlags.NotDrawSinglePoints);

            return OpenCvSharp.Extensions.BitmapConverter.ToBitmap(outMat);
        }
    }
}
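Both feature-matching methods call a `Point2fToPoint2d` converter that the post never shows. It is needed because `KeyPoint.Pt` is a `Point2f` while `Cv2.FindHomography` takes `Point2d` coordinates; a minimal sketch of the missing helper might look like:

```csharp
using OpenCvSharp;

// Hypothetical helper (not shown in the original post): widen a single-precision
// keypoint coordinate to the double-precision type FindHomography expects.
static Point2d Point2fToPoint2d(Point2f input)
{
    return new Point2d(input.X, input.Y);
}
```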
SURF algorithm
public static Bitmap MatchPicBySurf(Bitmap imgSrc, Bitmap imgSub, double threshold = 400)
{
    using (Mat matSrc = imgSrc.ToMat())
    using (Mat matTo = imgSub.ToMat())
    using (Mat matSrcRet = new Mat())
    using (Mat matToRet = new Mat())
    {
        KeyPoint[] keyPointsSrc, keyPointsTo;
        using (var surf = OpenCvSharp.XFeatures2D.SURF.Create(threshold, 4, 3, true, true))
        {
            surf.DetectAndCompute(matSrc, null, out keyPointsSrc, matSrcRet);
            surf.DetectAndCompute(matTo, null, out keyPointsTo, matToRet);
        }
        using (var flnMatcher = new OpenCvSharp.FlannBasedMatcher())
        {
            var matches = flnMatcher.Match(matSrcRet, matToRet);
            // Find the minimum and maximum match distances
            double minDistance = 1000; // approached from above
            double maxDistance = 0;
            for (int i = 0; i < matSrcRet.Rows; i++)
            {
                double distance = matches[i].Distance;
                if (distance > maxDistance)
                    maxDistance = distance;
                if (distance < minDistance)
                    minDistance = distance;
            }
            Console.WriteLine($"max distance : {maxDistance}");
            Console.WriteLine($"min distance : {minDistance}");

            var pointsSrc = new List<Point2f>();
            var pointsDst = new List<Point2f>();
            // Keep only the better matches
            var goodMatches = new List<DMatch>();
            for (int i = 0; i < matSrcRet.Rows; i++)
            {
                double distance = matches[i].Distance;
                if (distance < Math.Max(minDistance * 2, 0.02))
                {
                    pointsSrc.Add(keyPointsSrc[matches[i].QueryIdx].Pt);
                    pointsDst.Add(keyPointsTo[matches[i].TrainIdx].Pt);
                    // Matches within the distance threshold go into the good-match list
                    goodMatches.Add(matches[i]);
                }
            }

            var outMat = new Mat();

            // Filter the matches with RANSAC
            var pSrc = pointsSrc.ConvertAll(Point2fToPoint2d);
            var pDst = pointsDst.ConvertAll(Point2fToPoint2d);
            var outMask = new Mat();
            // Skip the filtering step when there are no raw matches
            if (pSrc.Count > 0 && pDst.Count > 0)
                Cv2.FindHomography(pSrc, pDst, HomographyMethods.Ransac, mask: outMask);
            // Apply the RANSAC mask only when more than 10 matches survive; otherwise fall back
            // to the raw matches (with very few matches, RANSAC can leave 0 of them).
            if (outMask.Rows > 10)
            {
                byte[] maskBytes = new byte[outMask.Rows * outMask.Cols];
                outMask.GetArray(0, 0, maskBytes);
                Cv2.DrawMatches(matSrc, keyPointsSrc, matTo, keyPointsTo, goodMatches, outMat, matchesMask: maskBytes, flags: DrawMatchesFlags.NotDrawSinglePoints);
            }
            else
                Cv2.DrawMatches(matSrc, keyPointsSrc, matTo, keyPointsTo, goodMatches, outMat, flags: DrawMatchesFlags.NotDrawSinglePoints);

            return OpenCvSharp.Extensions.BitmapConverter.ToBitmap(outMat);
        }
    }
}
Template matching
public static System.Drawing.Point FindPicFromImage(Bitmap imgSrc, Bitmap imgSub, double threshold = 0.9)
{
    OpenCvSharp.Mat srcMat = null;
    OpenCvSharp.Mat dstMat = null;
    OpenCvSharp.OutputArray outArray = null;
    try
    {
        srcMat = imgSrc.ToMat();
        dstMat = imgSub.ToMat();
        outArray = OpenCvSharp.OutputArray.Create(srcMat);
        OpenCvSharp.Cv2.MatchTemplate(srcMat, dstMat, outArray, Common.TemplateMatchModes);
        double minValue, maxValue;
        OpenCvSharp.Point location, point;
        OpenCvSharp.Cv2.MinMaxLoc(OpenCvSharp.InputArray.Create(outArray.GetMat()), out minValue, out maxValue, out location, out point);
        Console.WriteLine(maxValue);
        // Accept the match only when the best score reaches the threshold
        if (maxValue >= threshold)
            return new System.Drawing.Point(point.X, point.Y);
        return System.Drawing.Point.Empty;
    }
    catch (Exception)
    {
        return System.Drawing.Point.Empty;
    }
    finally
    {
        if (srcMat != null) srcMat.Dispose();
        if (dstMat != null) dstMat.Dispose();
        if (outArray != null) outArray.Dispose();
    }
}
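Note that `Common.TemplateMatchModes` in the method above is project configuration the post does not show; given the default 0.9 threshold, a normalized mode such as `TemplateMatchModes.CCoeffNormed` (scores in [-1, 1]) is the likely intent. A hypothetical usage sketch, with made-up file names, ties the pieces together:

```csharp
using System;
using System.Drawing;

// Hypothetical usage (file names are placeholders, not from the original post):
// search a fresh screenshot for a pre-captured button template.
Bitmap screenshot = new Bitmap("screen.png"); // full-screen capture
Bitmap button = new Bitmap("button.png");     // small pre-captured template

System.Drawing.Point hit = FindPicFromImage(screenshot, button, threshold: 0.9);
if (hit != System.Drawing.Point.Empty)
    // hit is the top-left corner of the matched region; click its center
    // by offsetting half the template size.
    Console.WriteLine($"found at ({hit.X + button.Width / 2}, {hit.Y + button.Height / 2})");
else
    Console.WriteLine("not found");
```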