How to Implement Face Fatigue Detection? Based on the MYIR Rockchip RK3576 Development Board

Tech · 2024-12-27 08:01 · Guangdong


Author: lulugl. Source: the Mianbaoban Community (面包板社区) review activity for the "MYIR MYC-LR3576 SoM and Development Board".

Face fatigue detection is a technique that determines whether a person is fatigued by analyzing facial features. It is based mainly on computer vision and machine learning. When a person is tired, the face shows characteristic changes such as a greater degree of eye closure, a slower blink rate, yawning, and changes in head pose.

For example, judging fatigue from the state of the eyes is a key part of the approach. Under normal conditions a person's blink rate is fairly stable; when fatigued, the blink rate drops and the eyes may stay closed longer during each blink. The head may also droop or sway involuntarily. All of these features can serve as evidence for fatigue detection.

The MYIR MYC-LR3576 combines an 8-core CPU with a 6 TOPS NPU accelerator and a 3D GPU, so it can handle this task with ease. The implementation is shared step by step below.

【Hardware】
1. MYIR MYC-LR3576 development board
2. USB camera

【Software】
v4l2
OpenCV
dlib: a modern C++ toolkit that provides algorithms and tools for machine learning, image processing, numerical computation, and many other tasks. It is designed for high performance and ease of use, and is widely adopted in the open-source community.

【Implementation Steps】
1. Install python-opencv
2. Install the dlib library
3. Install the v4l2 tools (on most systems the v4l2-ctl command used below is provided by the v4l-utils package)

【Code Implementation】

1. Import cv2, dlib, threading, and the other required modules:

import cv2
import dlib
import numpy as np
import time
from concurrent.futures import ThreadPoolExecutor
import threading


2. Initialize dlib's face detector and facial landmark predictor

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
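Note that the pretrained 68-point model file shape_predictor_68_face_landmarks.dat is not shipped with dlib itself; it is usually downloaded from dlib's model file page (http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2), decompressed, and placed in the working directory of the script.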


3. Define a function to compute the eye aspect ratio (EAR)

def eye_aspect_ratio(eye):
    A = np.linalg.norm(np.array(eye[1]) - np.array(eye[5]))
    B = np.linalg.norm(np.array(eye[2]) - np.array(eye[4]))
    C = np.linalg.norm(np.array(eye[0]) - np.array(eye[3]))
    ear = (A + B) / (2.0 * C)
    return ear
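In other words, for the six eye landmarks p1…p6 the function computes EAR = (||p2 - p6|| + ||p3 - p5||) / (2 · ||p1 - p4||): the two vertical eye openings divided by twice the horizontal eye width. The ratio stays roughly constant while the eye is open and drops toward zero as the eye closes, which is what the threshold defined in step 5 relies on.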


4. Define a function to estimate the head pose

def get_head_pose(shape, frame):
    # 3D reference coordinates of six facial landmarks
    object_points = np.array([
        (0.0, 0.0, 0.0),             # nose tip
        (0.0, -330.0, -65.0),        # chin
        (-225.0, 170.0, -135.0),     # left corner of the left eye
        (225.0, 170.0, -135.0),      # right corner of the right eye
        (-150.0, -150.0, -125.0),    # left mouth corner
        (150.0, -150.0, -125.0)      # right mouth corner
    ], dtype=np.float32)

    # Corresponding 2D image points; frame is passed in explicitly and is
    # only used to build an approximate camera matrix from the image size
    image_pts = np.float32([shape[i] for i in [30, 8, 36, 45, 48, 54]])
    size = frame.shape
    focal_length = size[1]
    center = (size[1] // 2, size[0] // 2)
    camera_matrix = np.array(
        [[focal_length, 0, center[0]],
         [0, focal_length, center[1]],
         [0, 0, 1]], dtype="double"
    )

    # Assume no lens distortion and solve the PnP problem
    dist_coeffs = np.zeros((4, 1))
    (success, rotation_vector, translation_vector) = cv2.solvePnP(
        object_points, image_pts, camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE
    )

    # Convert the rotation vector to Euler angles;
    # cv2.RQDecomp3x3 returns (pitch, yaw, roll) in degrees
    rmat, _ = cv2.Rodrigues(rotation_vector)
    angles, _, _, _, _, _ = cv2.RQDecomp3x3(rmat)
    return angles


5. Define the EAR threshold and the consecutive-frame threshold

EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 48
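As rough arithmetic, assuming the pipeline processes on the order of 16-24 frames per second on this board, 48 consecutive frames below the threshold corresponds to roughly 2-3 seconds of continuous eye closure before the counter triggers; both thresholds can be tuned to make detection stricter or more tolerant depending on the actual frame rate.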


6. Open the camera

First use v4l2-ctl --list-devices to list the video devices attached to the board:

USB Camera: USB Camera (usb-xhci-hcd.0.auto-1.2):
    /dev/video60
    /dev/video61
    /dev/media7

In the code, use 60 as the camera index:

cap = cv2.VideoCapture(60)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 480)   # lower the resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 320)
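The numeric index 60 simply mirrors /dev/video60 from the v4l2-ctl output above. If the device number changes between boots, a sketch like the following may be more robust, assuming your OpenCV build includes the V4L2 backend (adjust the path to whatever v4l2-ctl reports):

import cv2

# Open the camera by device node instead of by numeric index
cap = cv2.VideoCapture("/dev/video60", cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 480)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 320)

if not cap.isOpened():
    raise RuntimeError("Failed to open /dev/video60")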


7. Create a multi-threaded processing function so that capture and analysis are decoupled:

# Frame-processing function run in worker threads.
# COUNTER, TOTAL, lock and font_scale are initialized globally
# (see the complete listing at the end).
def process_frame(frame):
    global COUNTER, TOTAL
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)  # second argument 0 means no upsampling

    for face in faces:
        landmarks = predictor(gray, face)
        shape = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
        left_eye = shape[36:42]
        right_eye = shape[42:48]

        left_ear = eye_aspect_ratio(left_eye)
        right_ear = eye_aspect_ratio(right_eye)
        ear = (left_ear + right_ear) / 2.0

        if ear < EYE_AR_THRESH:
            with lock:
                COUNTER += 1
        else:
            with lock:
                if COUNTER >= EYE_AR_CONSEC_FRAMES:
                    TOTAL += 1
                COUNTER = 0

        # Draw the 68 landmarks
        for n in range(0, 68):
            x, y = shape[n]
            cv2.circle(frame, (x, y), 2, (0, 255, 0), -1)

        cv2.putText(frame, f"Eye AR: {ear:.2f}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)
        cv2.putText(frame, f"Blink Count: {TOTAL}", (10, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)

        # Estimate the head pose
        angles = get_head_pose(shape, frame)
        pitch, yaw, roll = angles
        cv2.putText(frame, f"Pitch: {pitch:.2f}", (10, 120),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)
        cv2.putText(frame, f"Yaw: {yaw:.2f}", (10, 150),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)
        cv2.putText(frame, f"Roll: {roll:.2f}", (10, 180),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)

        # Judge the fatigue state
        if COUNTER >= EYE_AR_CONSEC_FRAMES or abs(pitch) > 30 or abs(yaw) > 30 or abs(roll) > 30:
            cv2.putText(frame, "Fatigue Detected!", (10, 210),
                        cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)

    return frame



8. Create the capture and display loop on top of the processing thread pool:

# FPS bookkeeping (initialized here so the snippet is self-contained)
fps = 0.0
fps_counter = 0
start_time = time.time()

with ThreadPoolExecutor(max_workers=2) as executor:
    future_to_frame = {}
    while True:
        ret, frame = cap.read()
        if not ret:
            break

        # Submit the current frame to the thread pool
        future = executor.submit(process_frame, frame.copy())
        future_to_frame[future] = frame

        # Fetch the result of a completed task and display it
        for future in list(future_to_frame.keys()):
            if future.done():
                processed_frame = future.result()
                cv2.putText(processed_frame, f"FPS: {fps:.2f}", (10, 90),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.imshow("Frame", processed_frame)
                del future_to_frame[future]
                break

        # Update the FPS estimate once per second
        fps_counter += 1
        elapsed_time = time.time() - start_time
        if elapsed_time > 1.0:
            fps = fps_counter / elapsed_time
            fps_counter = 0
            start_time = time.time()

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break


The result:

Based on the detection results, we can go on to implement features such as fatigue reminders.
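As a minimal sketch of such a reminder (this hook is not part of the code above; alert_fatigue is a hypothetical placeholder), the fatigue branch in process_frame could call a debounced alert function, with the print replaced by a buzzer, GPIO output, or network notification on your own hardware:

import time

ALERT_INTERVAL = 5.0   # seconds between reminders while fatigue persists
_last_alert = 0.0

def alert_fatigue():
    # Hypothetical alert hook: fires at most once every ALERT_INTERVAL seconds
    global _last_alert
    now = time.time()
    if now - _last_alert >= ALERT_INTERVAL:
        _last_alert = now
        # Replace with a buzzer, GPIO pin, sound file or network push on your board
        print("Fatigue detected - please take a break!")

Calling alert_fatigue() in the same branch that draws the "Fatigue Detected!" text would turn the on-screen warning into an active reminder.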

The complete code is as follows:

import cv2
import dlib
import numpy as np
import time
from concurrent.futures import ThreadPoolExecutor
import threading

# Initialize dlib's face detector and facial landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# Font size for the overlays (reduced from 0.7 to 0.5)
font_scale = 0.5

# Function to compute the eye aspect ratio (EAR)
def eye_aspect_ratio(eye):
    A = np.linalg.norm(np.array(eye[1]) - np.array(eye[5]))
    B = np.linalg.norm(np.array(eye[2]) - np.array(eye[4]))
    C = np.linalg.norm(np.array(eye[0]) - np.array(eye[3]))
    ear = (A + B) / (2.0 * C)
    return ear

# Function to estimate the head pose
def get_head_pose(shape, frame):
    # 3D reference coordinates of six facial landmarks
    object_points = np.array([
        (0.0, 0.0, 0.0),             # nose tip
        (0.0, -330.0, -65.0),        # chin
        (-225.0, 170.0, -135.0),     # left corner of the left eye
        (225.0, 170.0, -135.0),      # right corner of the right eye
        (-150.0, -150.0, -125.0),    # left mouth corner
        (150.0, -150.0, -125.0)      # right mouth corner
    ], dtype=np.float32)

    # frame is only used to build an approximate camera matrix from the image size
    image_pts = np.float32([shape[i] for i in [30, 8, 36, 45, 48, 54]])
    size = frame.shape
    focal_length = size[1]
    center = (size[1] // 2, size[0] // 2)
    camera_matrix = np.array(
        [[focal_length, 0, center[0]],
         [0, focal_length, center[1]],
         [0, 0, 1]], dtype="double"
    )

    dist_coeffs = np.zeros((4, 1))
    (success, rotation_vector, translation_vector) = cv2.solvePnP(
        object_points, image_pts, camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE
    )

    # cv2.RQDecomp3x3 returns (pitch, yaw, roll) in degrees
    rmat, _ = cv2.Rodrigues(rotation_vector)
    angles, _, _, _, _, _ = cv2.RQDecomp3x3(rmat)
    return angles

# EAR threshold and consecutive-frame threshold
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 48

# Counters
COUNTER = 0
TOTAL = 0

# Lock protecting the counters across worker threads
lock = threading.Lock()

# Open the camera
cap = cv2.VideoCapture(60)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 480)   # lower the resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 320)

# FPS bookkeeping
fps = 0.0
fps_counter = 0
start_time = time.time()

# Frame-processing function run in worker threads
def process_frame(frame):
    global COUNTER, TOTAL
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)  # second argument 0 means no upsampling

    for face in faces:
        landmarks = predictor(gray, face)
        shape = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
        left_eye = shape[36:42]
        right_eye = shape[42:48]

        left_ear = eye_aspect_ratio(left_eye)
        right_ear = eye_aspect_ratio(right_eye)
        ear = (left_ear + right_ear) / 2.0

        if ear < EYE_AR_THRESH:
            with lock:
                COUNTER += 1
        else:
            with lock:
                if COUNTER >= EYE_AR_CONSEC_FRAMES:
                    TOTAL += 1
                COUNTER = 0

        # Draw the 68 landmarks
        for n in range(0, 68):
            x, y = shape[n]
            cv2.circle(frame, (x, y), 2, (0, 255, 0), -1)

        cv2.putText(frame, f"Eye AR: {ear:.2f}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)
        cv2.putText(frame, f"Blink Count: {TOTAL}", (10, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)

        # Estimate the head pose
        angles = get_head_pose(shape, frame)
        pitch, yaw, roll = angles
        cv2.putText(frame, f"Pitch: {pitch:.2f}", (10, 120),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)
        cv2.putText(frame, f"Yaw: {yaw:.2f}", (10, 150),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)
        cv2.putText(frame, f"Roll: {roll:.2f}", (10, 180),
                    cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)

        # Judge the fatigue state
        if COUNTER >= EYE_AR_CONSEC_FRAMES or abs(pitch) > 30 or abs(yaw) > 30 or abs(roll) > 30:
            cv2.putText(frame, "Fatigue Detected!", (10, 210),
                        cv2.FONT_HERSHEY_SIMPLEX, font_scale, (0, 0, 255), 2)

    return frame

with ThreadPoolExecutor(max_workers=2) as executor:
    future_to_frame = {}
    while True:
        ret, frame = cap.read()
        if not ret:
            break

        # Submit the current frame to the thread pool
        future = executor.submit(process_frame, frame.copy())
        future_to_frame[future] = frame

        # Fetch the result of a completed task and display it
        for future in list(future_to_frame.keys()):
            if future.done():
                processed_frame = future.result()
                cv2.putText(processed_frame, f"FPS: {fps:.2f}", (10, 90),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.imshow("Frame", processed_frame)
                del future_to_frame[future]
                break

        # Update the FPS estimate once per second
        fps_counter += 1
        elapsed_time = time.time() - start_time
        if elapsed_time > 1.0:
            fps = fps_counter / elapsed_time
            fps_counter = 0
            start_time = time.time()

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

# Release the camera and close all windows
cap.release()
cv2.destroyAllWindows()


 
【Summary】

The MYIR MYC-LR3576 SoM and development board is powerful enough to handle face fatigue detection with ease, and the detection results can feed into a wide range of practical industrial and AI applications.

If you have a development board at hand, you are also welcome to join our DIY activity and design an interesting smart gadget of your own!

The Mianbaoban Community DIY activity is under way:

https://mbb.eet-china.com/forum/topic/147007_1_1.html




