Table of Contents

- 0 Introduction
- 1 Background
- 2 Results
- 3 Convolutional Neural Network
- 4 YOLOv5
- 5 Model Training
- 6 Results
- Finally
## 0 Introduction

Today, as your senior, I would like to share a graduation project with everyone:
**Graduation Project: A Security-Checkpoint Controlled-Item Recognition System Based on Deep Learning**

Project running demo: Graduation Project — deep-learning controlled-knife recognition system 🧿. Project share: see the end of the article!

## 1 Background

Military informatization has long been a research focus for many countries, but our weapons suffer from problems such as a wide variety of types and scattered information. This makes it hard to extract useful information for national defense work and greatly hinders the pace of our army's informatization. At the same time, our weapons are usually presented in traditional ways such as text, 2-D pictures, and physical exhibits, which offer poor interactivity and cannot satisfy military enthusiasts' desire to learn about weapon performance or view and handle weapons up close. This article applies an improved YOLOv5 algorithm to weapon recognition: weapons in images are quickly detected, the relevant information is extracted, and the results are fed into a 3-D weapon display system, with the aim of helping people understand various weapons and promoting military informatization.

## 2 Results

Detection demo: (detection result images)

## 3 Convolutional Neural Network

A convolutional neural network (CNN) is an algorithm that takes an image as input and assigns weights and biases to all aspects of the image in order to distinguish images from one another. The network can be trained with batches of images, each of which carries a label identifying the image's true nature (here, cat or dog). A batch may contain from tens to hundreds of images. For each image, the network's prediction is compared with the corresponding label, and the distance between the predictions and the ground truth is evaluated over the whole batch. The network parameters are then modified to minimize this distance, thereby increasing the network's predictive ability. Training proceeds in the same way for every batch.

Relevant code. The CNN's convolutional, pooling, and fully connected layers are written as follows:

```python
import tensorflow as tf

# `x` is the input-image placeholder and `keep_prob` the dropout keep
# probability; both are defined elsewhere in the project.
conv1_1 = tf.layers.conv2d(x, 16, (3, 3), padding='same', activation=tf.nn.relu, name='conv1_1')
conv1_2 = tf.layers.conv2d(conv1_1, 16, (3, 3), padding='same', activation=tf.nn.relu, name='conv1_2')
pool1 = tf.layers.max_pooling2d(conv1_2, (2, 2), (2, 2), name='pool1')

conv2_1 = tf.layers.conv2d(pool1, 32, (3, 3), padding='same', activation=tf.nn.relu, name='conv2_1')
conv2_2 = tf.layers.conv2d(conv2_1, 32, (3, 3), padding='same', activation=tf.nn.relu, name='conv2_2')
pool2 = tf.layers.max_pooling2d(conv2_2, (2, 2), (2, 2), name='pool2')

conv3_1 = tf.layers.conv2d(pool2, 64, (3, 3), padding='same', activation=tf.nn.relu, name='conv3_1')
conv3_2 = tf.layers.conv2d(conv3_1, 64, (3, 3), padding='same', activation=tf.nn.relu, name='conv3_2')
pool3 = tf.layers.max_pooling2d(conv3_2, (2, 2), (2, 2), name='pool3')

conv4_1 = tf.layers.conv2d(pool3, 128, (3, 3), padding='same', activation=tf.nn.relu, name='conv4_1')
conv4_2 = tf.layers.conv2d(conv4_1, 128, (3, 3), padding='same', activation=tf.nn.relu, name='conv4_2')
pool4 = tf.layers.max_pooling2d(conv4_2, (2, 2), (2, 2), name='pool4')

flatten = tf.layers.flatten(pool4)
fc1 = tf.layers.dense(flatten, 512, tf.nn.relu)
fc1_dropout = tf.nn.dropout(fc1, keep_prob=keep_prob)
fc2 = tf.layers.dense(fc1_dropout, 256, tf.nn.relu)  # feed the dropout output, not fc1
fc2_dropout = tf.nn.dropout(fc2, keep_prob=keep_prob)
fc3 = tf.layers.dense(fc2_dropout, 2, None)  # 2-class logits
```
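The snippet above stops at the class logits `fc3`, while the prose describes training as comparing a batch's predictions with its labels and shrinking that distance. As a minimal sketch only, here is what such a training step could look like in the same TensorFlow 1.x style; the label placeholder `y`, the learning rate, and the `batches` iterable are assumptions for illustration, not part of the original project:

```python
import tensorflow as tf

# Hypothetical training step for the network above (TF 1.x style).
# `fc3` holds the 2-class logits; `batch_y` are the integer labels of the
# current batch (e.g. 0 = cat, 1 = dog, as in the text's example).
y = tf.placeholder(tf.int64, [None])
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=fc3)  # prediction-vs-label distance
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)               # update parameters to shrink it
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(fc3, 1), y), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for batch_x, batch_y in batches:  # tens to hundreds of images per batch
        sess.run(train_op, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
```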
## 4 YOLOv5

We choose YOLOv5, the newest convolutional network in the YOLO family, for the detection task in this project. On June 9, Ultralytics open-sourced YOLOv5, less than 50 days after the release of YOLOv4. This time YOLOv5 is implemented entirely in PyTorch; while we were still marveling at YOLOv4's sophisticated tricks and rich experimental comparisons, YOLOv5 brought even stronger real-time object detection. According to the official figures, the current version of YOLOv5 runs inference in as little as 0.007 s per image, i.e. 140 frames per second (FPS), while its weight file is only about 1/9 the size of YOLOv4's.

Object detection architectures come in two kinds, two-stage and one-stage. The difference is that a two-stage detector has a region-proposal step, a kind of pre-selection round in which the network generates positions and classes from candidate regions, whereas a one-stage detector produces positions and classes directly from the image. YOLO, the method discussed here, is a one-stage method. YOLO is short for "You Only Look Once": the network only needs to look at a picture once to output the result. YOLO has had five releases; YOLOv1 laid the foundation for the whole series, and the later versions improve on it to raise performance.

YOLOv5 comes in four variants; their performance is compared in the figure. (performance comparison figure and network architecture diagram)

YOLOv5 is a one-stage object detection algorithm that adds several new ideas on top of YOLOv4, greatly improving both speed and accuracy. The main improvements are as follows:

- **Input**: several training-stage improvements, mainly Mosaic data augmentation, adaptive anchor-box computation, and adaptive image scaling. Mosaic augmentation, whose author is also a member of the YOLOv5 team, stitches images together by random scaling, random cropping, and random arrangement, and works very well for small-object detection.
- **Backbone**: adopts new ideas from other detection algorithms, mainly the Focus structure and the CSP structure (a minimal sketch of the Focus slicing is given right after this list).
- **Neck**: in object detection, extra layers are usually inserted between the backbone and the output layers to better extract and fuse features; this part is called the neck. YOLOv5 adds an FPN + PAN structure here, the neck of the detection network, which is crucial. In this combination, the FPN passes strong semantic (high-level) features top-down, while the PAN feature pyramid passes strong localization (low-level) features bottom-up; working in tandem, they aggregate features from different backbone levels into the different detection layers. FPN + PAN borrows from PANet (CVPR 2018), which was originally applied to image segmentation; Alexey adapted it into YOLOv4, further improving feature-extraction ability.
- **Head (output layer)**: the anchor mechanism of the output layer is the same as in YOLOv4; the main improvements are the training loss, GIOU_Loss, and DIOU_nms for filtering predicted boxes. In the head, the feature maps at the three purple arrows of the architecture diagram are 40×40, 20×20, and 10×10, and the three feature maps used for the final prediction are 40×40×255, 20×20×255, and 10×10×255.
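As promised in the Backbone item above, here is a minimal sketch of the Focus slicing idea: every second pixel is sampled into four complementary sub-images, which are concatenated along the channel dimension and fused by a convolution. This is an illustrative reconstruction only, not the exact module from the Ultralytics repository (which wraps the convolution with batch normalization and an activation):

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Sketch of YOLOv5's Focus structure: slice a (b, c, h, w) image into four
    (b, c, h/2, w/2) sub-images, stack them to (b, 4c, h/2, w/2), then convolve.
    Spatial information is moved into channels with no pixel loss."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in * 4, c_out, k, stride=1, padding=k // 2)

    def forward(self, x):
        return self.conv(torch.cat(
            [x[..., ::2, ::2],     # top-left pixels
             x[..., 1::2, ::2],    # bottom-left
             x[..., ::2, 1::2],    # top-right
             x[..., 1::2, 1::2]],  # bottom-right
            dim=1))

# e.g. a 640x640 RGB image becomes a 320x320, 32-channel feature map
f = Focus(3, 32)
print(f(torch.zeros(1, 3, 640, 640)).shape)  # torch.Size([1, 32, 320, 320])
```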
Relevant code — the `Detect` head of YOLOv5:

```python
import torch
import torch.nn as nn

from utils.general import check_version  # YOLOv5 utility


class Detect(nn.Module):
    stride = None  # strides computed during build
    onnx_dynamic = False  # ONNX export parameter

    def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer
        super().__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        self.anchor_grid = [torch.zeros(1)] * self.nl  # init anchor grid
        self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2))  # shape(nl,na,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
        self.inplace = inplace  # use in-place ops (e.g. slice assignment)

    def forward(self, x):
        z = []  # inference output
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            if not self.training:  # inference
                if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                    self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)

                y = x[i].sigmoid()
                if self.inplace:
                    y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy
                    y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                else:  # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
                    xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy
                    wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                    y = torch.cat((xy, wh, y[..., 4:]), -1)
                z.append(y.view(bs, -1, self.no))

        return x if self.training else (torch.cat(z, 1), x)

    def _make_grid(self, nx=20, ny=20, i=0):
        d = self.anchors[i].device
        if check_version(torch.__version__, '1.10.0'):  # torch>=1.10.0 meshgrid workaround for torch>=0.7 compatibility
            yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)], indexing='ij')
        else:
            yv, xv = torch.meshgrid([torch.arange(ny).to(d), torch.arange(nx).to(d)])
        grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float()
        anchor_grid = (self.anchors[i].clone() * self.stride[i]) \
            .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float()
        return grid, anchor_grid
```
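To make the two decoding lines in `forward` concrete, here is a tiny self-contained illustration; all shapes and the anchor value are made up for the example:

```python
import torch

# Toy illustration of the Detect box decoding above (shapes/values assumed).
ny, nx, stride = 20, 20, 32                    # one 20x20 head at stride 32
yv, xv = torch.meshgrid(torch.arange(ny), torch.arange(nx), indexing='ij')
grid = torch.stack((xv, yv), 2).float()        # (ny, nx, 2) cell offsets
t = torch.rand(ny, nx, 4)                      # stand-in for sigmoid(raw[..., :4])
xy = (t[..., 0:2] * 2 - 0.5 + grid) * stride   # box centers in pixels
anchor = torch.tensor([116.0, 90.0])           # one anchor (w, h) in pixels, assumed
wh = (t[..., 2:4] * 2) ** 2 * anchor           # box sizes in 0..4x the anchor
print(xy.shape, wh.shape)                      # torch.Size([20, 20, 2]) twice
```

The `* 2 - 0.5` term keeps each predicted center within half a cell of its grid cell, and squaring `(· * 2)` bounds each side to at most four times its anchor, replacing the unbounded `exp()` used by earlier YOLO versions.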
## 5 Model Training

The training results are as follows: (training result figures)

Relevant code:

```python
# Partial code
import glob
import os
from pathlib import Path

import torch.optim as optim
import yaml

# Model, init_seeds and check_img_size are defined in the project's own modules.


def train(hyp, opt, device, tb_writer=None):
    print(f'Hyperparameters {hyp}')
    log_dir = tb_writer.log_dir if tb_writer else 'runs/evolve'  # run directory
    wdir = str(Path(log_dir) / 'weights') + os.sep  # weights directory

    os.makedirs(wdir, exist_ok=True)
    last = wdir + 'last.pt'
    best = wdir + 'best.pt'
    results_file = log_dir + os.sep + 'results.txt'
    epochs, batch_size, total_batch_size, weights, rank = \
        opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.local_rank

    # TODO: Use DDP logging. Only the first process is allowed to log.
    # Save run settings
    with open(Path(log_dir) / 'hyp.yaml', 'w') as f:
        yaml.dump(hyp, f, sort_keys=False)
    with open(Path(log_dir) / 'opt.yaml', 'w') as f:
        yaml.dump(vars(opt), f, sort_keys=False)

    # Configure
    cuda = device.type != 'cpu'
    init_seeds(2 + rank)
    with open(opt.data) as f:
        data_dict = yaml.load(f, Loader=yaml.FullLoader)  # model dict
    train_path = data_dict['train']
    test_path = data_dict['val']
    nc, names = (1, ['item']) if opt.single_cls else (int(data_dict['nc']), data_dict['names'])  # number classes, names
    assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data)  # check

    # Remove previous results
    if rank in [-1, 0]:
        for f in glob.glob('*_batch*.jpg') + glob.glob(results_file):
            os.remove(f)

    # Create model
    model = Model(opt.cfg, nc=nc).to(device)

    # Image sizes
    gs = int(max(model.stride))  # grid size (max stride)
    imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size]  # verify imgsz are gs-multiples

    # Optimizer
    nbs = 64  # nominal batch size
    # default DDP implementation is slow for accumulation according to: https://pytorch.org/docs/stable/notes/ddp.html
    # all-reduce operation is carried out during loss.backward().
    # Thus, there would be redundant all-reduce communications in an accumulation procedure,
    # which means, the result is still right but the training speed gets slower.
    # TODO: If acceleration is needed, there is an implementation of allreduce_post_accumulation
    # in https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/run_pretraining.py
    accumulate = max(round(nbs / total_batch_size), 1)  # accumulate loss before optimizing
    hyp['weight_decay'] *= total_batch_size * accumulate / nbs  # scale weight_decay

    pg0, pg1, pg2 = [], [], []  # optimizer parameter groups
    for k, v in model.named_parameters():
        if v.requires_grad:
            if '.bias' in k:
                pg2.append(v)  # biases
            elif '.weight' in k and '.bn' not in k:
                pg1.append(v)  # apply weight decay
            else:
                pg0.append(v)  # all else

    if opt.adam:
        optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    else:
        optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)

    optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']})  # add pg1 with weight_decay
    optimizer.add_param_group({'params': pg2})  # add pg2 (biases)
    print('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
    del pg0, pg1, pg2
```

## 6 Results

Project running demo: Graduation Project — deep-learning controlled-knife recognition system 🧿.

## Finally

Project share: see the end of the article!