1. YOLO v1 Paper Reading Notes (with Code)
This is the start of a series of articles on object detection.
Preface
The YOLO algorithm frames object detection as a single regression problem, predicting object locations directly.
Its main strengths are: (1) it is fast; (2) its accuracy is high; (3) it generalizes well.
- Faster R-CNN is a two-stage detector: it first uses an RPN to extract region proposals (deciding foreground vs. background), then detects object coordinates and classes. The YOLO family is end-to-end, predicting bounding boxes and class labels directly in a single pass.
- Earlier detectors located objects by sliding a window over the image. YOLO instead divides the input image into a grid of non-overlapping cells, obtains a feature map through convolutions, and performs classification and regression on that feature map to localize objects. Because each prediction sees the whole image, background patches are rarely mistaken for objects: YOLO v1 makes less than half as many background errors as Fast R-CNN.
- It is less likely to break down when applied to new domains or unexpected inputs, i.e. it generalizes well.
Innovations
(1) Addresses slow detection speed: a single network pass per image.
(2) Handles multiple objects. ---- NMS (non-maximum suppression).
(3) Handles multiple object classes. ---- class outputs per prediction.
(4) Addresses small-object detection. ---- each cell outputs two boxes, one for large and one for small objects.
Main Network Structure
As shown in the figure, the YOLO v1 pipeline consists of three steps:
- (1) Resize the input image to 448×448.
- (2) Feed it into a convolutional network (a GoogLeNet-style backbone).
- (3) Threshold on the model's confidence and apply NMS (non-maximum suppression) to get the final detections; a minimal NMS sketch follows this list.
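NMS itself is not shown in the post; below is a minimal sketch of the standard greedy algorithm (the function name and IoU threshold are illustrative, not from the original code):

```python
import torch

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: boxes is (N, 4) as [x1, y1, x2, y2], scores is (N,)
    keep = []
    order = scores.argsort(descending=True)        # highest confidence first
    while order.numel() > 0:
        i = order[0].item()
        keep.append(i)                             # keep the best remaining box
        if order.numel() == 1:
            break
        rest = order[1:]
        # intersection of the kept box with all remaining boxes
        x1 = torch.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = torch.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = torch.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = torch.minimum(boxes[i, 3], boxes[rest, 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_rest - inter)
        order = rest[iou <= iou_threshold]         # drop boxes overlapping too much
    return keep
```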
Next, the paper's feature extraction network:
The network has 24 convolutional layers followed by 2 fully connected layers. The output layer uses a linear activation function, while every other layer uses Leaky ReLU.
Instead of GoogLeNet's inception modules, the paper simply uses 1×1 reduction layers followed by 3×3 convolutional layers.
The model's final output is a 7×7×30 tensor:
- 7×7: the image is divided into 7×7 = 49 grid cells.
- 30 = 2×5 + 20: each 5-tuple is (c, x, y, w, h); the 2 corresponds to the two boxes predicted per cell (read here as one for large and one for small objects); 20 is the number of classes. [Formula and layout shown in the figure below; see also the snippet after this list.]
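As a quick illustration of this layout (`pred` is a dummy stand-in for one image's network output):

```python
import torch

pred = torch.randn(7, 7, 30)            # dummy 7×7×30 output for one image
box1 = pred[..., 0:5]                   # first box per cell:  (c, x, y, w, h)
box2 = pred[..., 5:10]                  # second box per cell: (c, x, y, w, h)
class_scores = pred[..., 10:30]         # 20 class scores per cell
print(box1.shape, box2.shape, class_scores.shape)
# torch.Size([7, 7, 5]) torch.Size([7, 7, 5]) torch.Size([7, 7, 20])
```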
The loss function, row by row:
- Rows 1-2 compute the foreground geo_loss (localization loss).
- Row 3 computes the foreground confidence_loss.
- Row 4 computes the background confidence_loss.
- Row 5 computes the classification loss class_loss (the full formula follows this list).
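For reference, these five rows are the loss from the paper (with $S = 7$, $B = 2$, $\lambda_{coord} = 5$, $\lambda_{noobj} = 0.5$; $\mathbb{1}_{ij}^{obj}$ is 1 when box $j$ of cell $i$ is responsible for an object):

$$
\begin{aligned}
&\lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
+\ &\lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
+\ &\sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left( C_i - \hat{C}_i \right)^2 \\
+\ &\lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} \left( C_i - \hat{C}_i \right)^2 \\
+\ &\sum_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum_{c \in \text{classes}} \left( p_i(c) - \hat{p}_i(c) \right)^2
\end{aligned}
$$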
Experimental Results
Real-time detection results (originally shown as a figure): Fast YOLO runs more than three times as fast as YOLO (155 vs. 45 FPS), while YOLO is about 10 mAP more accurate than Fast YOLO.
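The corresponding numbers from the paper (Pascal VOC 2007):

| Model | mAP | FPS |
| --- | --- | --- |
| Fast YOLO | 52.7 | 155 |
| YOLO | 63.4 | 45 |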
Code Walkthrough
The feature extraction layers:
```python
import torch
import torch.nn as nn

class VGG(nn.Module):
    def __init__(self):
        super(VGG, self).__init__()
        # VGG-16 layer configuration: numbers are output channels, 'M' is max pooling
        cfg = [64,64,'M',128,128,'M',256,256,256,'M',512,512,512,'M',512,512,512,'M']
        layers = []
        batch_norm = False
        in_channels = 3
        for v in cfg:
            if v == 'M':
                layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
            else:
                conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
                if batch_norm:
                    layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
                else:
                    layers += [conv2d, nn.ReLU(inplace=True)]
                in_channels = v
        # use the vgg layers to get the feature map
        self.features = nn.Sequential(*layers)
        # global pooling to a fixed 7x7 spatial size
        self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
        # decision head: classification layers
        self.classifier = nn.Sequential(
            nn.Linear(512*7*7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 1000),
        )
        for m in self.modules():
            if isinstance(m, nn.Conv2d):           # convolutional layers
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.features(x)        # convolutional feature map
        x_fea = x
        x = self.avgpool(x)         # pooled 7x7 features
        x_avg = x
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x, x_fea, x_avg

    def extractor(self, x):
        # feature extraction only, used by the YOLO head below
        x = self.features(x)
        return x
```
Defining the detection head:
```python
self.detector = nn.Sequential(
    nn.Linear(512*7*7, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, 1470),   # 1470 = 7*7*30, the flattened YOLO v1 output
)
```
The overall model:
```python
class YOLOV1(nn.Module):
    def __init__(self):
        super(YOLOV1, self).__init__()
        vgg = VGG()
        self.extractor = vgg.extractor
        self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
        # decision head: detection layers
        self.detector = nn.Sequential(
            nn.Linear(512*7*7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            #nn.Linear(4096, 1470),  # full YOLO v1 head: 7*7*30
            nn.Linear(4096, 245),    # matches the final (7,7,5) output in forward
        )
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.extractor(x)          # VGG feature map
        x = self.avgpool(x)            # pool to (b, 512, 7, 7)
        x = x.view(x.size(0), -1)      # flatten to (b, 512*7*7)
        x = self.detector(x)
        b, _ = x.shape
        #x = x.view(b, 7, 7, 30)       # full YOLO v1 output
        x = x.view(b, 7, 7, 5)         # single-box, single-class variant
        return x
```
The main function:
```python
if __name__ == '__main__':
    vgg = VGG()
    x = torch.randn(1, 3, 512, 512)
    feature, x_fea, x_avg = vgg(x)
    print(feature.shape)    # torch.Size([1, 1000])
    print(x_fea.shape)      # torch.Size([1, 512, 16, 16])
    print(x_avg.shape)      # torch.Size([1, 512, 7, 7])
    yolov1 = YOLOV1()
    feature = yolov1(x)
    # feature size: b*7*7*5
    print(feature.shape)    # torch.Size([1, 7, 7, 5])
```
The train() function:
```python
def train():
    for epoch in range(epochs):
        ts = time.time()
        for iter, batch in enumerate(train_loader):
            optimizer.zero_grad()
            # fetch the images
            inputs = input_process(batch)
            # fetch the annotations
            labels = target_process(batch)
            # forward pass
            outputs = yolov1_model(inputs)
            loss, lm, glm, clm = lossfunc_details(outputs, labels)
            loss.backward()
            optimizer.step()
            if iter % 10 == 0:
                print("epoch{}, iter{}, loss: {}, lr: {}".format(
                    epoch, iter, loss.data.item(),
                    optimizer.state_dict()['param_groups'][0]['lr']))
        #print("Finish epoch {}, time elapsed {}".format(epoch, time.time() - ts))
        scheduler.step()
```
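train() relies on several globals the post never shows (epochs, train_loader, yolov1_model, optimizer, scheduler, plus the time import). A minimal sketch of that setup, with illustrative hyperparameters:

```python
# Hypothetical setup for the globals train() uses; values are illustrative,
# not from the original post.
import time
import torch

epochs = 50
yolov1_model = YOLOV1()                  # the model defined above
optimizer = torch.optim.SGD(yolov1_model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
# train_loader should yield (images, targets) batches, e.g. from a
# torchvision VOCDetection dataset wrapped in a DataLoader.
```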
Next, the two data-processing functions for the training set:
input_process:
```python
import cv2
from torch.autograd import Variable   # Variable is a no-op wrapper in modern PyTorch

def input_process(batch):
    # batch[0] holds the images; resize each one to 448x448
    batch_size = len(batch[0])
    input_batch = torch.zeros(batch_size, 3, 448, 448)
    for i in range(batch_size):
        inputs_tmp = Variable(batch[0][i])
        # CHW -> HWC numpy array for cv2.resize, then back to a CHW tensor
        inputs_tmp1 = cv2.resize(inputs_tmp.permute([1, 2, 0]).numpy(), (448, 448))
        inputs_tmp2 = torch.tensor(inputs_tmp1).permute([2, 0, 1])
        input_batch[i:i+1, :, :, :] = torch.unsqueeze(inputs_tmp2, 0)
    return input_batch
```
batch[0] holds the images and batch[1] the labels; batch_size is the number of images in one batch.
batch[0][i] is the i-th image of the batch; inputs_tmp2 is that image resized to 3×448×448, and unsqueeze adds a batch dimension, giving size [1,3,448,448], which is stored in input_batch.
Finally, the function returns input data of size [batch_size,3,448,448].
target_process:
```python
def target_process(batch, grid_number=7):
    # batch[1] holds the labels, batch[0] holds the images
    batch_size = len(batch[0])
    # 5 target channels per cell (confidence + box), matching the model output
    target_batch = torch.zeros(batch_size, grid_number, grid_number, 5)
    for i in range(batch_size):
        labels = batch[1]
        batch_labels = labels[i]
        number_box = len(batch_labels['boxes'])
        for wi in range(grid_number):
            for hi in range(grid_number):
                # iterate over every annotated box
                for bi in range(number_box):
                    bbox = batch_labels['boxes'][bi]   # [xmin, ymin, xmax, ymax] in pixels
                    _, himg, wimg = batch[0][i].numpy().shape
                    bbox = bbox / torch.tensor([wimg, himg, wimg, himg])  # normalize to [0,1]
                    center_x = (bbox[0] + bbox[2]) * 0.5
                    center_y = (bbox[1] + bbox[3]) * 0.5
                    if (wi/grid_number <= center_x <= (wi+1)/grid_number
                            and hi/grid_number <= center_y <= (hi+1)/grid_number):
                        # the box center falls inside this grid cell:
                        # store confidence = 1 followed by the normalized box
                        cbbox = torch.cat([torch.ones(1), bbox])
                        target_batch[i:i+1, wi:wi+1, hi:hi+1, :] = torch.unsqueeze(cbbox, 0)
    return target_batch
```
To build labels from the batch, first decide what size the label (the bounding-box target) should be. For the full model the output is 7×7×30, so the label size would be [batch_size,7,7,30]. This program implements a 7×7×5 output instead, where the 5 is the confidence plus four box coordinates, so the label size is [batch_size,7,7,5].
batch_labels is the label of the i-th image in this batch, and number_box is the number of ground-truth boxes in that image.
The triple loop then visits every grid cell and, within it, every annotated box; bbox is the box currently being examined.
bbox = bbox / torch.tensor([wimg,himg,wimg,himg]) normalizes the box coordinates to [0, 1].
The if statement then produces the confidence ground truth: a cell that contains the box center gets confidence 1, stored in target_batch, which is returned. A worked example of the cell test follows.
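A quick worked example of that cell-assignment test (numbers made up): a normalized box [0.3, 0.5, 0.5, 0.7] has center (0.4, 0.6); with grid_number=7 it lands in cell wi=2, hi=4, since 2/7 ≈ 0.286 ≤ 0.4 ≤ 3/7 ≈ 0.429 and 4/7 ≈ 0.571 ≤ 0.6 ≤ 5/7 ≈ 0.714. The if test inside the triple loop is equivalent to this direct computation:

```python
# Direct grid-cell lookup, equivalent to the if test inside target_process
center_x, center_y, grid_number = 0.4, 0.6, 7
wi = int(center_x * grid_number)   # -> 2
hi = int(center_y * grid_number)   # -> 4
print(wi, hi)
```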
The Loss Function
```python
def lossfunc_details(outputs, labels):
    # predictions and targets must have the same shape
    assert (outputs.shape == labels.shape), \
        "outputs shape[%s] not equal labels shape[%s]" % (outputs.shape, labels.shape)
    b, w, h, c = outputs.shape
    loss = 0
    conf_loss_matrix = torch.zeros(b, w, h)
    geo_loss_matrix = torch.zeros(b, w, h)
    loss_matrix = torch.zeros(b, w, h)
    for bi in range(b):
        for wi in range(w):
            for hi in range(h):
                # detect_vector = [confidence, x, y, w, h]
                detect_vector = outputs[bi, wi, hi]
                gt_dv = labels[bi, wi, hi]
                conf_pred, conf_gt = detect_vector[0], gt_dv[0]
                x_pred, x_gt = detect_vector[1], gt_dv[1]
                y_pred, y_gt = detect_vector[2], gt_dv[2]
                w_pred, w_gt = detect_vector[3], gt_dv[3]
                h_pred, h_gt = detect_vector[4], gt_dv[4]
                loss_confidence = (conf_pred - conf_gt)**2
                # the paper uses sqrt(w) and sqrt(h); plain squared error is used here
                loss_geo = (x_pred - x_gt)**2 + (y_pred - y_gt)**2 \
                         + (w_pred - w_gt)**2 + (h_pred - h_gt)**2
                # only cells containing an object (conf_gt == 1) contribute geo loss
                loss_geo = conf_gt * loss_geo
                loss_tmp = loss_confidence + 0.3 * loss_geo
                loss += loss_tmp
                conf_loss_matrix[bi, wi, hi] = loss_confidence
                geo_loss_matrix[bi, wi, hi] = loss_geo
                loss_matrix[bi, wi, hi] = loss_tmp
    # print each image's localization loss and its thresholded confidence map
    print(geo_loss_matrix)
    print(outputs[0, :, :, 0] > 0.5)
    return loss, loss_matrix, geo_loss_matrix, conf_loss_matrix
```
Note again that both label and output have size [batch_size,7,7,5].
outputs[bi,wi,hi] is a 5-dimensional vector $({c}^{pred}, {x}^{pred}, {y}^{pred}, {w}^{pred}, {h}^{pred})$.
The code computes loss_confidence and loss_geo separately; because this implementation detects only a single class, there is no class_loss. The implemented loss is written out below.
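Written out, the loss the code actually implements is

$$
\mathcal{L} = \sum_{b,w,h} \left[ \left(c^{pred} - c^{gt}\right)^2 + 0.3\, c^{gt} \left( (x^{pred} - x^{gt})^2 + (y^{pred} - y^{gt})^2 + (w^{pred} - w^{gt})^2 + (h^{pred} - h^{gt})^2 \right) \right]
$$

where the factor $c^{gt}$ zeroes the geometry term for cells that contain no object, and 0.3 is the author's weighting between the two terms (the paper instead upweights geometry with $\lambda_{coord} = 5$).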