Face detection and landmark annotation from a camera with Python 3 and Dlib 19.7

0. Introduction

Built in Python, this project uses the Dlib library to capture faces from a camera and annotate their facial landmarks in real time.

Figure 1: demo of the result (GIF)

Figure 2: demo of the result (static image)

(The implementation is simple and the code short; it is a good fit for beginners or casual learning.)

1. Development environment

  Python:  3.6.3

  Dlib:    19.7

  OpenCV, numpy


import dlib         # face recognition library (dlib)
import numpy as np  # numerical processing (numpy)
import cv2          # image processing (OpenCV)

2. Source code walkthrough

  The implementation is quite simple and splits into two parts: camera capture and facial landmark annotation.

2.1 Camera capture

  First, how to access the camera with OpenCV:

  Create a capture object with cap = cv2.VideoCapture(0);

  (see the official documentation for details)


# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie
"""
cv2.VideoCapture(): create a cv2 capture object / open the default camera
Python: cv2.VideoCapture() → <VideoCapture object>
Python: cv2.VideoCapture(filename) → <VideoCapture object>
filename – name of the opened video file (e.g. video.avi) or image sequence (e.g. img_%02d.jpg, which will read samples like img_00.jpg, img_01.jpg, img_02.jpg, ...)
Python: cv2.VideoCapture(device) → <VideoCapture object>
device – id of the opened video capturing device (i.e. a camera index). If there is a single camera connected, just pass 0.
"""
cap = cv2.VideoCapture(0)
"""
cv2.VideoCapture.set(propId, value): set a capture property
propId:
CV_CAP_PROP_POS_MSEC Current position of the video file in milliseconds.
CV_CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
CV_CAP_PROP_POS_AVI_RATIO Relative position of the video file: 0 - start of the film, 1 - end of the film.
CV_CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
CV_CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
CV_CAP_PROP_FPS Frame rate.
CV_CAP_PROP_FOURCC 4-character code of codec.
CV_CAP_PROP_FRAME_COUNT Number of frames in the video file.
CV_CAP_PROP_FORMAT Format of the Mat objects returned by retrieve().
CV_CAP_PROP_MODE Backend-specific value indicating the current capture mode.
CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras).
CV_CAP_PROP_SATURATION Saturation of the image (only for cameras).
CV_CAP_PROP_HUE Hue of the image (only for cameras).
CV_CAP_PROP_GAIN Gain of the image (only for cameras).
CV_CAP_PROP_EXPOSURE Exposure (only for cameras).
CV_CAP_PROP_CONVERT_RGB Boolean flag indicating whether images should be converted to RGB.
CV_CAP_PROP_WHITE_BALANCE_U The U value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently)
CV_CAP_PROP_WHITE_BALANCE_V The V value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently)
CV_CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
CV_CAP_PROP_ISO_SPEED The ISO speed of the camera (note: only supported by DC1394 v 2.x backend currently)
CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)
value: the value to set for the property
"""
cap.set(3, 480)
"""
cv2.VideoCapture.isOpened(): check whether camera initialization succeeded
returns True or False
"""
cap.isOpened()
"""
cv2.VideoCapture.read([image]) -> retval, image: grab, decode and return the next video frame
returns two values:
a boolean True/False, indicating whether the read succeeded / whether the end of the video was reached
the image object, a 3-D matrix
"""
flag, im_rd = cap.read()

2.2 Facial landmark annotation

  The predictor "shape_predictor_68_face_landmarks.dat" performs the 68-point annotation; it is a model pretrained by dlib that can be called directly to locate 68 facial landmarks.

  For details, see my other post (68-point facial landmark annotation with Python 3 and Dlib 19.7);
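As context for interpreting the 68 annotated points: the predictor's output indices follow a fixed layout (jaw, eyebrows, nose, eyes, mouth), which is the standard convention of the dlib 68-point model. A small standalone sketch of that index convention (`region_of` is a hypothetical helper for illustration, not part of the project code):

```python
# Index layout of the dlib 68-point model (standard convention)
LANDMARK_REGIONS = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def region_of(idx):
    """Return the facial region a 0-based landmark index belongs to."""
    for name, indices in LANDMARK_REGIONS.items():
        if idx in indices:
            return name
    raise ValueError("landmark index must be in 0..67")

print(region_of(30))   # nose
print(region_of(36))   # right_eye
```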

2.3 Full source

  The approach is straightforward:

  Create the camera object with cv2.VideoCapture(), then read the camera stream with flag, im_rd = cap.read(); im_rd is one frame of the video at a time;

  Each frame im_rd is then handled like single-image face detection: dlib locates the facial landmarks, which are drawn onto the frame;

  Press the s key to save a screenshot of the current frame, or the q key to quit;
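The key handling works because cv2.waitKey() returns an integer key code (-1 when no key was pressed within the delay), which the loop compares against ord('s') and ord('q'). A minimal standalone sketch of that dispatch logic (`handle_key` is a hypothetical helper, not part of the project code):

```python
# cv2.waitKey(delay) returns an int key code, or -1 if no key was
# pressed during the delay; comparing against ord() selects the action.
def handle_key(k):
    """Map a waitKey-style return value to an action name."""
    if k == ord('s'):
        return "screenshot"
    if k == ord('q'):
        return "quit"
    return "none"

print(handle_key(ord('s')))  # screenshot
print(handle_key(-1))        # none
```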

# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie
# github: https://github.com/coneypo/Dlib_face_detection_from_camera

import dlib         # face recognition library (dlib)
import numpy as np  # numerical processing (numpy)
import cv2          # image processing (OpenCV)

# dlib face detector and landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# create the cv2 camera object
cap = cv2.VideoCapture(0)

# cap.set(propId, value)
# set a capture property: propId is the property, value is the value to set
cap.set(3, 480)

# screenshot counter
cnt = 0

# cap.isOpened() returns True/False: check whether initialization succeeded
while cap.isOpened():
    # cap.read() returns two values:
    #   a boolean True/False, indicating whether the read succeeded / end of video
    #   the image object, a 3-D matrix
    flag, im_rd = cap.read()

    # wait 1 ms per frame; with a delay of 0 the display would block on a static frame
    k = cv2.waitKey(1)

    # convert to grayscale (OpenCV frames are BGR)
    img_gray = cv2.cvtColor(im_rd, cv2.COLOR_BGR2GRAY)

    # detected faces
    rects = detector(img_gray, 0)
    # print(len(rects))

    # font for the on-screen text
    font = cv2.FONT_HERSHEY_SIMPLEX

    # annotate the 68 landmarks
    if len(rects) != 0:
        # faces detected
        for i in range(len(rects)):
            landmarks = np.matrix([[p.x, p.y] for p in predictor(im_rd, rects[i]).parts()])
            for idx, point in enumerate(landmarks):
                # coordinates of the 68 landmarks
                pos = (point[0, 0], point[0, 1])
                # draw a circle on each landmark with cv2.circle, 68 in total
                cv2.circle(im_rd, pos, 2, color=(0, 255, 0))
                # label the landmarks 1-68 with cv2.putText
                cv2.putText(im_rd, str(idx + 1), pos, font, 0.2, (0, 0, 255), 1, cv2.LINE_AA)
        cv2.putText(im_rd, "faces: " + str(len(rects)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
    else:
        # no face detected
        cv2.putText(im_rd, "no face", (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

    # on-screen instructions
    im_rd = cv2.putText(im_rd, "s: screenshot", (20, 400), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
    im_rd = cv2.putText(im_rd, "q: quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)

    # press 's' to save a screenshot
    if k == ord('s'):
        cnt += 1
        cv2.imwrite("screenshoot" + str(cnt) + ".jpg", im_rd)

    # press 'q' to quit
    if k == ord('q'):
        break

    # show the frame in a window
    cv2.imshow("camera", im_rd)

# release the camera
cap.release()

# destroy the created windows
cv2.destroyAllWindows()
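The landmarks variable in the loop above is an np.matrix with one row per landmark; iterating it yields 1x2 row matrices, which is why the code indexes each point with point[0, 0] and point[0, 1]. A standalone sketch of that structure, with hypothetical coordinates standing in for the predictor output:

```python
import numpy as np

# hypothetical landmark coordinates standing in for predictor(...).parts()
fake_parts = [(10, 20), (30, 40), (50, 60)]

# same structure the main loop builds: one row per landmark, columns x and y
landmarks = np.matrix([[x, y] for (x, y) in fake_parts])

positions = []
for idx, point in enumerate(landmarks):
    # each row iterates as a 1x2 matrix, hence the [0, 0] / [0, 1] indexing
    pos = (int(point[0, 0]), int(point[0, 1]))
    positions.append(pos)

print(positions)  # [(10, 20), (30, 40), (50, 60)]
```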

If this helped you, you are welcome to star the project on GitHub.