
Image Tracking, Face Recognition and More on the Raspberry Pi with OpenCV


Author: woshigaowei5146 @CSDN

Editor: 3D视觉开发者社区 (3D Vision Developer Community)

Contents

Preparation

Setup

Testing

Programs

Color recognition and tracking

Face recognition

Gesture recognition

Shape recognition

Barcode recognition

AprilTag recognition

Troubleshooting

module 'cv2' has no attribute 'dnn'

ImportError: numpy.core.multiarray failed to import

1121: error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open

Preparation

Raspberry Pi 4B

Driver-free USB camera

Setup

Install python-opencv; see /weixin_45911959/article/details/122709090

Install numpy: pip3 install -U numpy

Install opencv-python and opencv-contrib-python; see /weixin_57605235/article/details/121512923

Testing

Display an image:

import cv2

a = cv2.imread("/home/pi/-06-15-162551_1920x1080_scrot.png")
cv2.imshow("test", a)
cv2.waitKey()
cv2.destroyAllWindows()

Display the video stream:

import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    # This call is required; without it the frame is never displayed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the capture when everything is done
cap.release()
cv2.destroyAllWindows()

Programs

Color recognition and tracking

import sys
import cv2
import math
import time
import threading
import numpy as np
import HiwonderSDK.yaml_handle as yaml_handle

if sys.version_info.major == 2:
    print('Please run this program with python3!')
    sys.exit(0)

range_rgb = {
    'red': (0, 0, 255),
    'blue': (255, 0, 0),
    'green': (0, 255, 0),
    'black': (0, 0, 0),
    'white': (255, 255, 255)
}

__target_color = ('red', 'green', 'blue')
lab_data = yaml_handle.get_yaml_data(yaml_handle.lab_file_path)

# Find the contour with the largest area
# The argument is the list of contours to compare
def getAreaMaxContour(contours):
    contour_area_temp = 0
    contour_area_max = 0
    area_max_contour = None
    for c in contours:  # iterate over all contours
        contour_area_temp = math.fabs(cv2.contourArea(c))  # compute the contour area
        if contour_area_temp > contour_area_max:
            contour_area_max = contour_area_temp
            if contour_area_temp > 300:  # the largest contour only counts if its area exceeds 300, to filter out noise
                area_max_contour = c
    return area_max_contour, contour_area_max  # return the largest contour

detect_color = None
color_list = []
start_pick_up = False
size = (640, 480)

def run(img):
    global rect
    global detect_color
    global start_pick_up
    global color_list

    img_copy = img.copy()
    frame_resize = cv2.resize(img_copy, size, interpolation=cv2.INTER_NEAREST)
    frame_gb = cv2.GaussianBlur(frame_resize, (3, 3), 3)
    frame_lab = cv2.cvtColor(frame_gb, cv2.COLOR_BGR2LAB)  # convert the image to LAB space

    color_area_max = None
    max_area = 0
    areaMaxContour_max = 0
    if not start_pick_up:
        for i in lab_data:
            if i in __target_color:
                frame_mask = cv2.inRange(frame_lab,
                                         (lab_data[i]['min'][0], lab_data[i]['min'][1], lab_data[i]['min'][2]),
                                         (lab_data[i]['max'][0], lab_data[i]['max'][1], lab_data[i]['max'][2]))  # mask the image with the color range
                opened = cv2.morphologyEx(frame_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))   # opening
                closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))      # closing
                contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # find contours
                areaMaxContour, area_max = getAreaMaxContour(contours)  # find the largest contour
                if areaMaxContour is not None:
                    if area_max > max_area:  # keep the largest area
                        max_area = area_max
                        color_area_max = i
                        areaMaxContour_max = areaMaxContour
        if max_area > 500:  # a large enough area was found
            rect = cv2.minAreaRect(areaMaxContour_max)
            box = np.int0(cv2.boxPoints(rect))
            y = int((box[1][0] - box[0][0]) / 2 + box[0][0])
            x = int((box[2][1] - box[0][1]) / 2 + box[0][1])
            print('X:', x, 'Y:', y)  # print the coordinates
            cv2.drawContours(img, [box], -1, range_rgb[color_area_max], 2)

            if not start_pick_up:
                if color_area_max == 'red':      # red is largest
                    color = 1
                elif color_area_max == 'green':  # green is largest
                    color = 2
                elif color_area_max == 'blue':   # blue is largest
                    color = 3
                else:
                    color = 0
                color_list.append(color)
                if len(color_list) == 3:  # judge over several frames
                    # take the average
                    color = int(round(np.mean(np.array(color_list))))
                    color_list = []
                    if color == 1:
                        detect_color = 'red'
                    elif color == 2:
                        detect_color = 'green'
                    elif color == 3:
                        detect_color = 'blue'
                    else:
                        detect_color = 'None'
##    cv2.putText(img, "Color: " + detect_color, (10, img.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.65, detect_color, 2)
    return img

if __name__ == '__main__':
    cap = cv2.VideoCapture(-1)  # open the camera
    __target_color = ('red',)
    while True:
        ret, img = cap.read()
        if ret:
            frame = img.copy()
            Frame = run(frame)
            cv2.imshow('Frame', Frame)
            key = cv2.waitKey(1)
            if key == 27:
                break
        else:
            time.sleep(0.01)
    cv2.destroyAllWindows()

Result: (demo image omitted)

Face recognition

This uses a face detection model trained with Caffe.

import sys
import numpy as np
import cv2
import math
import time
import threading

# Face detection
if sys.version_info.major == 2:
    print('Please run this program with python3!')
    sys.exit(0)

# confidence threshold
conf_threshold = 0.6

# model locations
modelFile = "/home/pi/mu_code/models/res10_300x300_ssd_iter_140000_fp16.caffemodel"
configFile = "/home/pi/mu_code/models/deploy.prototxt"
net = cv2.dnn.readNetFromCaffe(configFile, modelFile)

frame_pass = True
x1 = x2 = y1 = y2 = 0
old_time = 0

def run(img):
    global old_time
    global frame_pass
    global x1, x2, y1, y2

    if not frame_pass:
        frame_pass = True
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2, 8)
        x1 = x2 = y1 = y2 = 0
        return img
    else:
        frame_pass = False

    img_copy = img.copy()
    img_h, img_w = img.shape[:2]
    blob = cv2.dnn.blobFromImage(img_copy, 1, (100, 100), [104, 117, 123], False, False)
    net.setInput(blob)
    detections = net.forward()  # run the detection
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            # map the detected face coordinates back to the unscaled image
            x1 = int(detections[0, 0, i, 3] * img_w)
            y1 = int(detections[0, 0, i, 4] * img_h)
            x2 = int(detections[0, 0, i, 5] * img_w)
            y2 = int(detections[0, 0, i, 6] * img_h)
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2, 8)  # draw a box around the detected face
            X = (x1 + x2) / 2
            Y = (y1 + y2) / 2
            print('X:', X, 'Y:', Y)
    return img

if __name__ == '__main__':
    cap = cv2.VideoCapture(-1)  # open the camera
    while True:
        ret, img = cap.read()
        if ret:
            frame = img.copy()
            Frame = run(frame)
            cv2.imshow('Frame', Frame)
            key = cv2.waitKey(1)
            if key == 27:
                break
        else:
            time.sleep(0.01)
    cv2.destroyAllWindows()

Gesture recognition

import os
import sys
import cv2
import math
import time
import numpy as np
import HiwonderSDK.Misc as Misc

if sys.version_info.major == 2:
    print('Please run this program with python3!')
    sys.exit(0)

__finger = 0
__t1 = 0
__step = 0
__count = 0
__get_finger = False

# initial position
def initMove():
    pass

def reset():
    global __finger, __t1, __step, __count, __get_finger
    __finger = 0
    __t1 = 0
    __step = 0
    __count = 0
    __get_finger = False

def init():
    reset()
    initMove()

class Point(object):  # a coordinate point
    x = 0
    y = 0
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

class Line(object):  # a line
    def __init__(self, p1, p2):
        self.p1 = p1
        self.p2 = p2

def GetCrossAngle(l1, l2):
    '''
    Compute the angle between two lines
    :param l1:
    :param l2:
    :return:
    '''
    arr_0 = np.array([(l1.p2.x - l1.p1.x), (l1.p2.y - l1.p1.y)])
    arr_1 = np.array([(l2.p2.x - l2.p1.x), (l2.p2.y - l2.p1.y)])
    cos_value = (float(arr_0.dot(arr_1)) /
                 (np.sqrt(arr_0.dot(arr_0)) * np.sqrt(arr_1.dot(arr_1))))  # make sure to compute in floating point
    return np.arccos(cos_value) * (180 / np.pi)

def distance(start, end):
    """
    Compute the distance between two points
    :param start: start point
    :param end: end point
    :return: distance between the two points
    """
    s_x, s_y = start
    e_x, e_y = end
    x = s_x - e_x
    y = s_y - e_y
    return math.sqrt((x ** 2) + (y ** 2))

def image_process(image, rw, rh):
    '''
    Skin segmentation.
    Adjust the Cb/Cr range if the lighting is different;
    the Cr component of typical skin tones is roughly 140~160.
    :param image: input image
    :return: binary image after segmentation
    '''
    frame_resize = cv2.resize(image, (rw, rh), interpolation=cv2.INTER_CUBIC)
    YUV = cv2.cvtColor(frame_resize, cv2.COLOR_BGR2YCR_CB)  # convert the image to YCrCb
    _, Cr, _ = cv2.split(YUV)  # split YCrCb
    Cr = cv2.GaussianBlur(Cr, (5, 5), 0)
    _, Cr = cv2.threshold(Cr, 135, 160, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # OTSU binarization
    # opening, to remove noise
    open_element = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    opend = cv2.morphologyEx(Cr, cv2.MORPH_OPEN, open_element)
    # erosion
    kernel = np.ones((3, 3), np.uint8)
    erosion = cv2.erode(opend, kernel, iterations=3)
    return erosion

def get_defects_far(defects, contours, img):
    '''Get the farthest points of the convexity defects'''
    if defects is None and contours is None:
        return None
    far_list = []
    for i in range(defects.shape[0]):
        s, e, f, d = defects[i, 0]
        start = tuple(contours[s][0])
        end = tuple(contours[e][0])
        far = tuple(contours[f][0])
        # distances between the points
        a = distance(start, end)
        b = distance(start, far)
        c = distance(end, far)
        # angle between the fingers
        angle = math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)) * 180 / math.pi
        # the angle between fingers is usually well below 100 degrees
        if angle <= 75:
            # cv2.circle(img, far, 10, [0, 0, 255], 1)
            far_list.append(far)
    return far_list

def get_max_coutour(cou, max_area):
    '''
    Find the largest contour by area; once found,
    discard it if the area is below the minimum.
    :param cou: contours
    :return: the largest contour
    '''
    max_coutours = 0
    r_c = None
    if len(cou) < 1:
        return None
    else:
        for c in cou:
            # compute the area
            temp_coutours = math.fabs(cv2.contourArea(c))
            if temp_coutours > max_coutours:
                max_coutours = temp_coutours
                cc = c
        # check the largest area among all contours
        if max_coutours > max_area:
            r_c = cc
    return r_c

def find_contours(binary, max_area):
    '''
    Contour retrieval modes:
    CV_RETR_EXTERNAL - retrieve only the outermost contours
    CV_RETR_LIST - retrieve all contours and put them in a list
    CV_RETR_CCOMP - retrieve all contours and organize them into a two-level hierarchy:
                    the top level holds the outer boundaries of connected components,
                    the second level holds the boundaries of holes
    CV_RETR_TREE - retrieve all contours and reconstruct the full hierarchy of nested contours
    Approximation methods:
    CV_CHAIN_CODE - output contours as Freeman chain codes (other methods output point sequences)
    CV_CHAIN_APPROX_NONE - convert all chain-code points into a point sequence
    CV_CHAIN_APPROX_SIMPLE - compress horizontal, vertical and diagonal segments, keeping only their end points
    CV_CHAIN_APPROX_TC89_L1, CV_CHAIN_APPROX_TC89_KCOS - apply the Teh-Chin chain approximation algorithm
    CV_LINK_RUNS - a completely different algorithm that links horizontal runs of 1s
    :param binary: input binary image
    :return: the largest contour
    '''
    # find all contours
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
    # return the largest contour
    return get_max_coutour(contours, max_area)

def get_hand_number(binary_image, contours, rw, rh, rgb_image):
    '''
    :param binary_image:
    :param rgb_image:
    :return:
    '''
    # 2. Find the fingertip positions
    #    Find the contours and take the largest one
    x = 0
    y = 0
    coord_list = []
    new_hand_list = []  # final fingertip coordinates
    if contours is not None:
        # perimeter factor, 0.035 by default; tune by detection quality, the better the detection the smaller the value
        epsilon = 0.020 * cv2.arcLength(contours, True)
        # approximate the contour
        approx = cv2.approxPolyDP(contours, epsilon, True)
        # The 2nd argument of cv2.approxPolyDP() (epsilon) is a distance value describing how closely the
        # polygon should follow the actual contour; the smaller the value, the more accurate.
        # The 3rd argument indicates whether the curve is closed.
        # cv2.polylines(rgb_image, [approx], True, (0, 255, 0), 1)  # draw the polygon
        if approx.shape[0] >= 3:  # at least three points: the smallest polygon is a triangle
            approx_list = []
            for j in range(approx.shape[0]):  # store all polygon points in a list
                # cv2.circle(rgb_image, (approx[j][0][0], approx[j][0][1]), 5, [255, 0, 0], -1)
                approx_list.append(approx[j][0])
            approx_list.append(approx[0][0])  # append the first point at the end
            approx_list.append(approx[1][0])  # append the second point at the end

            for i in range(1, len(approx_list) - 1):
                p1 = Point(approx_list[i - 1][0], approx_list[i - 1][1])  # declare a point
                p2 = Point(approx_list[i][0], approx_list[i][1])
                p3 = Point(approx_list[i + 1][0], approx_list[i + 1][1])
                line1 = Line(p1, p2)  # declare a line
                line2 = Line(p2, p3)
                angle = GetCrossAngle(line1, line2)  # angle between the two lines
                angle = 180 - angle
                ## print(angle)
                if angle < 42:  # keep only vertices where the two lines meet at a small angle
                    # cv2.circle(rgb_image, tuple(approx_list[i]), 5, [255, 0, 0], -1)
                    coord_list.append(tuple(approx_list[i]))

            ###########################################################################
            # Remove the points between the fingers
            # 1. Get the convexity-defect points (the farthest points)
            # cv2.drawContours(rgb_image, contours, -1, (255, 0, 0), 1)
            try:
                hull = cv2.convexHull(contours, returnPoints=False)
                # find convexity defects; returned data: [start point, end point, farthest point, approximate distance to the farthest point]
                defects = cv2.convexityDefects(contours, hull)
                # points between the fingers
                hand_coord = get_defects_far(defects, contours, rgb_image)
            except:
                return rgb_image, 0
            # 2. Remove the farthest points from coord_list
            alike_flag = False
            if len(coord_list) > 0:
                for l in range(len(coord_list)):
                    for k in range(len(hand_coord)):
                        if (-10 <= coord_list[l][0] - hand_coord[k][0] <= 10 and
                                -10 <= coord_list[l][1] - hand_coord[k][1] <= 10):  # compare X and Y; drop points that are close together
                            alike_flag = True
                            break
                    if alike_flag is False:
                        new_hand_list.append(coord_list[l])
                    alike_flag = False
            # get the fingertip coordinates and display them
            for i in new_hand_list:
                j = list(tuple(i))
                j[0] = int(Misc.map(j[0], 0, rw, 0, 640))
                j[1] = int(Misc.map(j[1], 0, rh, 0, 480))
                cv2.circle(rgb_image, (j[0], j[1]), 20, [0, 255, 255], -1)
    fingers = len(new_hand_list)
    return rgb_image, fingers

def run(img, debug=False):
    global __act_map, __get_finger
    global __step, __count, __finger

    binary = image_process(img, 320, 240)
    contours = find_contours(binary, 3000)
    img, finger = get_hand_number(binary, contours, 320, 240, img)
    if not __get_finger:
        if finger == __finger:
            __count += 1
        else:
            __count = 0
        __finger = finger
    cv2.putText(img, "Finger(s):%d" % __finger, (50, 480 - 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 255), 2)  # write the detected finger count on the image
    return img

if __name__ == '__main__':
    init()
    cap = cv2.VideoCapture(-1)  # open the camera
    while True:
        ret, img = cap.read()
        if ret:
            frame = img.copy()
            Frame = run(frame)
            frame_resize = cv2.resize(Frame, (320, 240))
            cv2.imshow('frame', frame_resize)
            key = cv2.waitKey(1)
            if key == 27:
                break
        else:
            time.sleep(0.01)
    cv2.destroyAllWindows()

Shape recognition

import sys
import cv2
import math
import time
import threading
import numpy as np
import HiwonderSDK.tm1640 as tm
import RPi.GPIO as GPIO

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

color_range = {
    'red': [(0, 101, 177), (255, 255, 255)],
    'green': [(47, 0, 135), (255, 119, 255)],
    'blue': [(0, 0, 0), (255, 255, 115)],
    'black': [(0, 0, 0), (41, 255, 136)],
    'white': [(193, 0, 0), (255, 250, 255)],
}

if sys.version_info.major == 2:
    print('Please run this program with python3!')
    sys.exit(0)

range_rgb = {
    'red': (0, 0, 255),
    'blue': (255, 0, 0),
    'green': (0, 255, 0),
    'black': (0, 0, 0),
    'white': (255, 255, 255),
}

# Find the contour with the largest area
# The argument is the list of contours to compare
def getAreaMaxContour(contours):
    contour_area_temp = 0
    contour_area_max = 0
    area_max_contour = None
    for c in contours:  # iterate over all contours
        contour_area_temp = math.fabs(cv2.contourArea(c))  # compute the contour area
        if contour_area_temp > contour_area_max:
            contour_area_max = contour_area_temp
            if contour_area_temp > 50:  # the largest contour only counts if its area exceeds 50, to filter out noise
                area_max_contour = c
    return area_max_contour, contour_area_max  # return the largest contour

shape_length = 0

def move():
    global shape_length
    while True:
        if shape_length == 3:
            print('triangle')
            # show a triangle on the LED matrix
            tm.display_buf = (0x80, 0xc0, 0xa0, 0x90, 0x88, 0x84, 0x82, 0x81,
                              0x81, 0x82, 0x84, 0x88, 0x90, 0xa0, 0xc0, 0x80)
            tm.update_display()
        elif shape_length == 4:
            print('rectangle')
            # show a rectangle on the LED matrix
            tm.display_buf = (0x00, 0x00, 0x00, 0x00, 0xff, 0x81, 0x81, 0x81,
                              0x81, 0x81, 0x81, 0xff, 0x00, 0x00, 0x00, 0x00)
            tm.update_display()
        elif shape_length >= 6:
            print('circle')
            # show a circle on the LED matrix
            tm.display_buf = (0x00, 0x00, 0x00, 0x00, 0x1c, 0x22, 0x41, 0x41,
                              0x41, 0x22, 0x1c, 0x00, 0x00, 0x00, 0x00, 0x00)
            tm.update_display()
        time.sleep(0.01)

# start the worker thread
th = threading.Thread(target=move)
th.setDaemon(True)
th.start()

shape_list = []
action_finish = True

if __name__ == '__main__':
    cap = cv2.VideoCapture(-1)
    while True:
        ret, img = cap.read()
        if ret:
            img_copy = img.copy()
            img_h, img_w = img.shape[:2]
            frame_gb = cv2.GaussianBlur(img_copy, (3, 3), 3)
            frame_lab = cv2.cvtColor(frame_gb, cv2.COLOR_BGR2LAB)  # convert the image to LAB space

            max_area = 0
            color_area_max = None
            areaMaxContour_max = 0
            if action_finish:
                for i in color_range:
                    if i != 'white':
                        frame_mask = cv2.inRange(frame_lab, color_range[i][0], color_range[i][1])  # mask the image with the color range
                        opened = cv2.morphologyEx(frame_mask, cv2.MORPH_OPEN, np.ones((6, 6), np.uint8))   # opening
                        closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, np.ones((6, 6), np.uint8))      # closing
                        contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # find contours
                        areaMaxContour, area_max = getAreaMaxContour(contours)  # find the largest contour
                        if areaMaxContour is not None:
                            if area_max > max_area:  # keep the largest area
                                max_area = area_max
                                color_area_max = i
                                areaMaxContour_max = areaMaxContour
                if max_area > 200:
                    cv2.drawContours(img, areaMaxContour_max, -1, (0, 0, 255), 2)
                    # shape recognition
                    # perimeter factor 0.035; tune by detection quality, the better the detection the smaller the value
                    epsilon = 0.035 * cv2.arcLength(areaMaxContour_max, True)
                    # approximate the contour
                    approx = cv2.approxPolyDP(areaMaxContour_max, epsilon, True)
                    shape_list.append(len(approx))
                    if len(shape_list) == 30:
                        shape_length = int(round(np.mean(shape_list)))
                        shape_list = []
                        print(shape_length)
            frame_resize = cv2.resize(img, (320, 240))
            cv2.imshow('frame', frame_resize)
            key = cv2.waitKey(1)
            if key == 27:
                break
        else:
            time.sleep(0.01)
    cap.release()
    cv2.destroyAllWindows()

The approxPolyDP() function approximates a continuous smooth curve with a polyline.

Taking the call approx = cv2.approxPolyDP(areaMaxContour_max, epsilon, True) as an example, the parameters mean the following:

The first parameter, areaMaxContour_max, is the input contour;

the second parameter, epsilon, is a distance tolerance describing how closely the polygon must follow the actual contour; the smaller the value, the more accurate the approximation;

the third parameter, True, indicates that the contour is a closed curve.

cv2.approxPolyDP() returns the vertex coordinates of the approximating polygon, and the shape is classified by counting those vertices.
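As a quick, self-contained illustration (a minimal sketch, separate from the robot program above), the snippet below draws a synthetic square, approximates its contour, and prints the vertex count:

import cv2
import numpy as np

# Draw a filled square on a blank canvas and extract its contour.
canvas = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(canvas, (40, 40), (160, 160), 255, -1)

contours = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
cnt = contours[0]

epsilon = 0.035 * cv2.arcLength(cnt, True)     # distance tolerance, as in the program above
approx = cv2.approxPolyDP(cnt, epsilon, True)  # True = closed curve

print(len(approx))  # expected: 4 vertices, i.e. a rectangle

With epsilon set to a few percent of the perimeter, a triangle yields 3 vertices, a rectangle 4, and a near-circular contour 6 or more, which is exactly the rule the shape-recognition program applies.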

Barcode recognition

First install pyzbar: pip3 install pyzbar

import cv2
import sys
import time
from pyzbar import pyzbar

if sys.version_info.major == 2:
    print('Please run this program with python3!')
    sys.exit(0)

def run(image):
    # find the barcodes in the image and decode each of them
    barcodes = pyzbar.decode(image)
    # loop over the detected barcodes
    for barcode in barcodes:
        # extract the bounding box of the barcode
        (x, y, w, h) = barcode.rect
        # draw the bounding box on the image
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
        barcodeData = barcode.data.decode("utf-8")
        barcodeType = barcode.type
        # draw the barcode data and type on the image
        text = "{} ({})".format(barcodeData, barcodeType)
        cv2.putText(image, text, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    return image

if __name__ == '__main__':
    cap = cv2.VideoCapture(-1)  # open the camera
    while True:
        ret, img = cap.read()
        if ret:
            frame = img.copy()
            Frame = run(frame)
            cv2.imshow('Frame', Frame)
            key = cv2.waitKey(1)
            if key == 27:
                break
        else:
            time.sleep(0.01)
    cv2.destroyAllWindows()

AprilTag recognition

Installing apriltag with pip failed, so I fell back on the usual approach: download the wheel locally and install it.

From /simple/apriltag/ I downloaded apriltag-0.0.16-cp37-cp37m-linux_armv7l.whl.

Transfer it to the Raspberry Pi with FileZilla, change to the directory containing the whl file, and install it; the installation completes successfully.

cd /home/pi/Downloads
sudo pip3 install apriltag-0.0.16-cp37-cp37m-linux_armv7l.whl
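Before running the full detection program, a quick sanity check that the wheel installed correctly might look like this (a minimal sketch that reuses the same constructor call as the program below; the module layout may differ between apriltag releases):

import apriltag

# If the native library failed to install, the import or the constructor call will raise.
detector = apriltag.Detector(searchpath=apriltag._get_demo_searchpath())
print(type(detector))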

import sys
import cv2
import math
import time
import threading
import numpy as np
import apriltag

# AprilTag detection
if sys.version_info.major == 2:
    print('Please run this program with python3!')
    sys.exit(0)

object_center_x = 0.0
object_center_y = 0.0

# AprilTag detector
detector = apriltag.Detector(searchpath=apriltag._get_demo_searchpath())

def apriltagDetect(img):
    global object_center_x, object_center_y

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(gray, return_image=False)
    if len(detections) != 0:
        for detection in detections:
            corners = np.rint(detection.corners)  # get the four corner points
            cv2.drawContours(img, [np.array(corners, np.int32)], -1, (0, 255, 255), 2)
            tag_family = str(detection.tag_family, encoding='utf-8')  # get tag_family
            tag_id = int(detection.tag_id)  # get tag_id
            object_center_x, object_center_y = int(detection.center[0]), int(detection.center[1])  # center point
            object_angle = int(math.degrees(math.atan2(corners[0][1] - corners[1][1],
                                                       corners[0][0] - corners[1][0])))  # rotation angle
            return tag_family, tag_id
    return None, None

def run(img):
    global state
    global tag_id
    global action_finish
    global object_center_x, object_center_y

    img_h, img_w = img.shape[:2]
    tag_family, tag_id = apriltagDetect(img)  # AprilTag detection
    if tag_id is not None:
        print('X:', object_center_x, 'Y:', object_center_y)
        cv2.putText(img, "tag_id: " + str(tag_id), (10, img.shape[0] - 30), cv2.FONT_HERSHEY_SIMPLEX, 0.65, [0, 255, 255], 2)
        cv2.putText(img, "tag_family: " + tag_family, (10, img.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.65, [0, 255, 255], 2)
    else:
        cv2.putText(img, "tag_id: None", (10, img.shape[0] - 30), cv2.FONT_HERSHEY_SIMPLEX, 0.65, [0, 255, 255], 2)
        cv2.putText(img, "tag_family: None", (10, img.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.65, [0, 255, 255], 2)
    return img

if __name__ == '__main__':
    cap = cv2.VideoCapture(-1)  # open the camera
    while True:
        ret, img = cap.read()
        if ret:
            frame = img.copy()
            Frame = run(frame)
            cv2.imshow('Frame', Frame)
            key = cv2.waitKey(1)
            if key == 27:
                break
        else:
            time.sleep(0.01)
    cv2.destroyAllWindows()

Troubleshooting

module 'cv2' has no attribute 'dnn'

All of the following commands failed: they either kept reporting errors or could not find python-opencv, and switching mirrors did not help:

sudo apt install python-opencv
sudo apt install python3-opencv
sudo apt-get install opencv-python
sudo apt-get install opencv-contrib-python
pip install opencv-contrib-python
pip install opencv-python

In the end, installing from a locally downloaded wheel worked.

First, as usual, update the Raspberry Pi system and packages:

sudo apt-get update
sudo apt-get upgrade

If downloads are too slow, consider switching to a local mirror:

1) Edit sources.list with "sudo nano /etc/apt/sources.list", comment out everything in the original file, and append:
deb /raspbian/raspbian/ buster main contrib non-free rpi
deb-src /raspbian/raspbian/ buster main contrib non-free rpi
Save with Ctrl+O and exit with Ctrl+X.
2) Edit raspi.list with "sudo nano /etc/apt/sources.list.d/raspi.list", comment out everything in the original file, and append:
deb http://mirrors.tuna./raspbian/raspbian/ buster main
deb-src http://mirrors.tuna./raspbian/raspbian/ buster main
Save with Ctrl+O and exit with Ctrl+X.
3) Run "sudo apt-get update".
4) To speed up pip installs, switch the Python package index as well:
pip config set global.index-url https://pypi.tuna./simple
pip install pip -U
5) Finally, run "sudo reboot" to restart the Raspberry Pi.

Download the whl file and transfer it to the Raspberry Pi: on a PC, open /simple/opencv-python/

and download the whl that matches your Python version; I downloaded opencv_python-3.4.10.37-cp37-cp37m-linux_armv7l.whl.

Here cp37 is the Python version and armv7 is the processor architecture; choose armv7 for the Raspberry Pi 4B.
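If you are unsure which tags match your system, a quick check (a minimal sketch) is:

import sys
import platform

# cp37 corresponds to Python 3.7; 'armv7l' is the 32-bit Raspberry Pi OS architecture.
print(sys.version_info[:2])   # e.g. (3, 7) -> choose cp37 wheels
print(platform.machine())     # e.g. 'armv7l' -> choose armv7l wheels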

Transfer the wheel to the Raspberry Pi with FileZilla, change to the directory containing it, and install it; opencv-python installs successfully:

cd /home/pi/Downloads
sudo pip3 install opencv_python-3.4.10.37-cp37-cp37m-linux_armv7l.whl

Reference: /weixin_57605235/article/details/121512923
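Once the wheel is installed, a quick way to confirm that the dnn module is now present (a minimal check; the version string should match the wheel you installed):

import cv2

print(cv2.__version__)       # expected to match the wheel, e.g. 3.4.10
print(hasattr(cv2, 'dnn'))   # True once opencv-python is installed correctly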

ImportError: numpy.core.multiarray failed to import

First uninstall the old numpy, then install a newer one, i.e.:

1. pip uninstall numpy
2. pip install -U numpy

From /qq_25603827/article/details/107824977.

Did not work.

pip install numpy --upgrade --force

From /article/38668.html.

Did not work.

Check the locally installed numpy version:

pip show numpy

The opencv-python wheel installed above was built against an older numpy, so downgrading numpy should match it:

pip install -U numpy==1.14.5 -i https://pypi.mirrors./simple/

From /p/280702247.

Did not work.

In the end, pip3 install -U numpy fixed it. So if you run the scripts with python3, stick with pip3.
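To confirm which versions ended up installed, and that cv2 now imports cleanly against them, something like this can be run (a minimal sketch):

import numpy
import cv2  # raises ImportError again if the numpy/OpenCV pairing is still wrong

print('numpy :', numpy.__version__)
print('opencv:', cv2.__version__)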

There are many suggested fixes online, some upgrading and some downgrading numpy, with all sorts of odd symptoms and conflicting advice; see:

/Robin_Pi/article/details/120544691 /p/29026597

1121: error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open

It took a long time to find the cause: an extra dot at the start of the file path.
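In other words, OpenCV simply could not open the model file. A small guard before loading the network makes this kind of path typo obvious; the sketch below reuses the modelFile/configFile paths from the face-recognition program above:

import os
import cv2

modelFile = "/home/pi/mu_code/models/res10_300x300_ssd_iter_140000_fp16.caffemodel"
configFile = "/home/pi/mu_code/models/deploy.prototxt"

# Fail early with a readable message instead of OpenCV's "FAILED: fs.is_open()".
for path in (modelFile, configFile):
    if not os.path.isfile(path):
        raise FileNotFoundError("Model file not found: %r" % path)

net = cv2.dnn.readNetFromCaffe(configFile, modelFile)
print("Model loaded OK")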
