What you will need:
- Raspberry Pi 4 (2GB or more recommended)
- Coral USB Accelerator
- A compatible USB-C power supply for the Raspberry Pi
- microSD card (16GB or larger)
- Raspberry Pi OS (Raspbian release 11 (Bullseye), 64-bit)
- Raspberry Pi Camera Module v3
Download and install Raspberry Pi Imager on your computer. Insert the microSD card into a card reader and open Raspberry Pi Imager. I chose Raspberry Pi 4 as the device and Raspberry Pi OS (Bullseye) as the base image.
Select the storage and go to the next step.
Click Edit Settings and enter the username and password you want for the operating system.
Also enable SSH with password authentication under Services.
Install the OpenCV system dependencies and then OpenCV itself. Note that the package names below are the Bullseye ones; `libpng12-dev`, `libjasper-dev` and `qt4-dev-tools` from older guides are no longer in the Bullseye repositories:
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y install libjpeg-dev libtiff5-dev libpng-dev
sudo apt-get -y install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get -y install libxvidcore-dev libx264-dev
sudo apt-get -y install qtbase5-dev
sudo apt-get -y install libatlas-base-dev
pip install opencv-python
Next, install the TensorFlow Lite runtime:
pip install tflite-runtime
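Before moving on, it is worth checking that both packages import cleanly. A quick stdlib-only check (run it on the Pi after the two pip installs complete):

```python
import importlib.util

# Report whether each of the packages installed above can be found.
for name in ("cv2", "tflite_runtime"):
    status = "installed" if importlib.util.find_spec(name) else "missing"
    print(f"{name}: {status}")
```

If either line says "missing", repeat the corresponding pip install before continuing.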
Now we will set up the Coral accelerator. Remember to unplug the accelerator from the Raspberry Pi before running the installation.
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
Install the libedgetpu library by issuing:
sudo apt-get install libedgetpu1-std
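As a quick sanity check, the snippet below sketches how a TFLite model is typically opened with the Edge TPU delegate (the model path is the one used later in this article; on a machine without the Coral stack the helper reports the problem instead of crashing):

```python
def make_interpreter(model_path):
    """Open a TFLite model on the Edge TPU, or return None if unavailable."""
    try:
        from tflite_runtime.interpreter import Interpreter, load_delegate

        # libedgetpu.so.1 is the library installed by the apt step above.
        return Interpreter(
            model_path=model_path,
            experimental_delegates=[load_delegate("libedgetpu.so.1")],
        )
    except (ImportError, OSError, ValueError) as exc:
        print(f"Edge TPU interpreter unavailable: {exc}")
        return None

interpreter = make_interpreter(
    "PoseNet/model/posenet_resnet_50_416_288_16_quant_edgetpu_decoder.tflite"
)
```

With the accelerator plugged in and the model file in place, `make_interpreter` returns a ready interpreter; otherwise it prints what is missing.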
To check that the camera works, grab a single frame with Picamera2 and save it:
import cv2
from picamera2 import Picamera2

camera = Picamera2()
camera.configure(camera.create_preview_configuration(main={"format": "XRGB8888", "size": (640, 480)}))
camera.start()
image = camera.capture_array()
cv2.imwrite('test.jpg', image)
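A note on the XRGB8888 format: `capture_array()` returns a 4-channel frame, three colour bytes plus a padding byte per pixel, which is why the main script later converts frames with `cv2.COLOR_BGRA2BGR` before writing video. The idea, shown on a single made-up pixel:

```python
# One hypothetical 4-channel pixel as stored in the captured frame:
pixel_4ch = (30, 20, 10, 255)  # three colour bytes plus a padding byte

# Dropping the padding channel leaves an ordinary 3-channel pixel,
# which is what cv2.COLOR_BGRA2BGR does for the whole frame:
pixel_3ch = pixel_4ch[:3]
print(pixel_3ch)  # (30, 20, 10)
```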
With everything installed, the main pose-detection script looks like this:
import cv2
import PoseNet.engine.utils as utils
from PoseNet.engine.pose_engine import PoseEngine
from datetime import datetime

# Model path
_MODEL_PATH = "PoseNet/model/posenet_resnet_50_416_288_16_quant_edgetpu_decoder.tflite"
# Frame shape
_FRAME_WIDTH, _FRAME_HEIGHT = 1024, 768
# Minimum keypoint score to draw
_THRESHOLD = 0.50

def detect_pose(callback_function, quit_on_key=True):
    # Initiate the Edge TPU interpreter
    engine = PoseEngine(_MODEL_PATH)
    # Initiate the camera instance
    camera = utils.init_camera(_FRAME_WIDTH, _FRAME_HEIGHT)
    # Initialize the frame-rate calculation
    frame_rate_calc = 1
    freq = cv2.getTickFrequency()
    # Define the codec and create a VideoWriter object
    fourcc = cv2.VideoWriter_fourcc(*'FMP4')
    video_name = f"./PoseNet/captured_video/{datetime.today().strftime('%Y%m%d%H%M%S')}.avi"
    # FPS for the recorded video; set to 6 because the real rate is 6 to 7
    fps = 6.0
    # Video recorder instance
    out = cv2.VideoWriter(video_name, fourcc, fps, (_FRAME_WIDTH, _FRAME_HEIGHT))
    # Input shape the model expects (constant, so query it once)
    _, src_height, src_width, _ = engine.get_input_tensor_shape()
    while True:
        # Grab a frame from the video stream
        image = camera.capture_array()
        # Start the timer (for calculating the frame rate)
        t1 = cv2.getTickCount()
        # The main magic happens here
        poses, _ = engine.DetectPosesInImage(image)
        # Draw the skeleton lines between keypoints
        output_image = utils.draw_keypoints_from_keypoints(poses, image, _THRESHOLD, src_width, src_height)
        # Convert the frame from 4 channels (XRGB8888) to 3 (BGR)
        output_image = cv2.cvtColor(output_image, cv2.COLOR_BGRA2BGR)
        out.write(output_image)
        # Mirror the image for display
        output_image = cv2.flip(output_image, 1)
        # Stop the timer and calculate the frame rate
        t2 = cv2.getTickCount()
        frame_rate_calc = 1 / ((t2 - t1) / freq)
        # Draw the FPS in the corner of the frame
        cv2.putText(
            output_image,
            'FPS: {0:.2f}'.format(frame_rate_calc),
            (30, 50),
            cv2.FONT_HERSHEY_SIMPLEX,
            1,
            (255, 255, 0),
            2,
            cv2.LINE_AA
        )
        # Hand the annotated frame to the caller
        callback_function(output_image)
        # Quit on the 'q' key
        if quit_on_key and cv2.waitKey(1) == ord('q'):
            break
    # Clean up
    out.release()
    cv2.destroyAllWindows()
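The FPS figure drawn on each frame comes from timing a single loop iteration with OpenCV's tick counter. The same arithmetic with the standard library, using a sleep as a stand-in for capture plus inference (a sketch, not part of the script above):

```python
import time

t1 = time.perf_counter()  # start of the "iteration"
time.sleep(0.05)          # stand-in for capture + inference + drawing
t2 = time.perf_counter()  # end of the "iteration"

# Frames per second is simply 1 / seconds-per-frame.
fps = 1.0 / (t2 - t1)
print(f"FPS: {fps:.2f}")
```

On the Pi the per-iteration work takes roughly 150 ms, which is where the 6-7 FPS figure used for the video recorder comes from.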