**Features**

- Face detection and recognition (images and video)
- Facial contour annotation
- Avatar compositing (e.g. putting a hat on a face)
- Digital makeup (lipstick, eyebrows, eyes, etc.)
- Gender recognition
- Emotion recognition (seven emotions: angry, disgusted, fearful, happy, sad, surprised, and neutral)
- Video object extraction
- Eye tracking (in progress)
- Face swapping (planned)
Linux Environment Setup

- Ubuntu 16.04
- Python 3.6.5
- pip 10.0.1
- NumPy 1.14.3
- OpenCV 3.4.0
- Keras
- Dlib 19.8.1
- face_recognition 1.2.2
- TensorFlow 1.8.0
- Tesseract OCR 4.0.0-beta.1
One advantage of Ubuntu is that Python comes pre-installed, so there is no need to fight with a Python environment the way you do on Windows. Note, however, that the default package sources for Ubuntu's apt-get and for pip are hosted overseas and are slow; be sure to switch both to domestic mirrors. See: "Changing apt-get and pip Sources on Ubuntu".
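As an illustration (the mirror URL below is an assumption, not taken from the linked article; any domestic mirror works), pip's index can be switched with a one-line config command:

```shell
# Point pip at the Tsinghua PyPI mirror (example mirror only)
pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
```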
Installation

With the environment configured as described above, you can move on to the actual installation. Installing on Ubuntu is much simpler than on Windows: just use pip to install each module.
Installing NumPy

Install with:

```shell
pip3 install numpy
```

Run `python3` to enter the interactive interpreter, then check whether NumPy installed correctly and which version you have:

```python
import numpy
numpy.__version__
```

If the version number prints normally, the installation succeeded.
Installing OpenCV

On Ubuntu, installing OpenCV is similar to NumPy:

```shell
pip3 install opencv-python
```

Run `python3` to enter the interactive interpreter and check the OpenCV version:

```python
import cv2
cv2.__version__
```

If the version number prints normally, the installation succeeded.
Feature Preview

1. Facial contour drawing
2. 68-point facial landmark annotation
3. Avatar effect compositing
4. Gender recognition
5. Emotion recognition
6. Digital makeup
Here is the emotion recognition code; have fun with it.
```python
# coding=utf-8
# Emotion recognition
import datetime

import cv2
import numpy as np
from keras.models import load_model

import chineseText

startTime = datetime.datetime.now()
emotion_classifier = load_model(
    'classifier/emotion_models/simple_CNN.530-0.65.hdf5')
endTime = datetime.datetime.now()
print(endTime - startTime)  # how long the model took to load

# Labels are kept in Chinese so chineseText can render them on the image
emotion_labels = {
    0: '生氣',  # angry
    1: '厭惡',  # disgusted
    2: '恐懼',  # fearful
    3: '開心',  # happy
    4: '難過',  # sad
    5: '驚喜',  # surprised
    6: '平靜'   # neutral
}

img = cv2.imread("img/emotion/emotion.png")
# Use the Haar cascade bundled with opencv-python instead of a
# hard-coded Windows path
face_classifier = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_classifier.detectMultiScale(
    gray, scaleFactor=1.2, minNeighbors=3, minSize=(40, 40))

color = (255, 0, 0)
for (x, y, w, h) in faces:
    # Crop the face, scale to the 48x48 grayscale input the model expects
    gray_face = gray[y:y + h, x:x + w]
    gray_face = cv2.resize(gray_face, (48, 48))
    gray_face = gray_face / 255.0
    gray_face = np.expand_dims(gray_face, 0)
    gray_face = np.expand_dims(gray_face, -1)
    emotion_label_arg = np.argmax(emotion_classifier.predict(gray_face))
    emotion = emotion_labels[emotion_label_arg]
    cv2.rectangle(img, (x + 10, y + 10), (x + w - 10, y + h - 10),
                  (255, 255, 255), 2)
    img = chineseText.cv2ImgAddText(img, emotion, x + h * 0.3, y, color, 20)

cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
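The scripts here import a `chineseText` helper that the article never shows. A minimal sketch of what its `cv2ImgAddText` function might look like, using Pillow to render CJK text onto an OpenCV-style BGR image (the font filename is an assumption; point it at any CJK-capable TrueType font on your system):

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont


def cv2ImgAddText(img, text, left, top, text_color=(0, 255, 0), text_size=20):
    """Draw (possibly Chinese) text on a BGR numpy image, return a new image."""
    # OpenCV stores pixels as BGR; Pillow expects RGB, so flip the channels
    pil_img = Image.fromarray(np.ascontiguousarray(img[..., ::-1]))
    draw = ImageDraw.Draw(pil_img)
    try:
        # Rendering Chinese needs a CJK-capable font; this filename is an
        # assumption -- replace it with a real TTF/TTC path on your machine
        font = ImageFont.truetype("NotoSansCJK-Regular.ttc", text_size)
    except OSError:
        font = ImageFont.load_default()  # ASCII-only fallback
    draw.text((int(left), int(top)), text, fill=text_color, font=font)
    # Flip back to BGR so the caller can keep using cv2 functions
    return np.ascontiguousarray(np.asarray(pil_img)[..., ::-1])
```

This is only a sketch under those assumptions, not the author's actual module, but it matches the call signature used in the scripts.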
Digital makeup code:
```python
# coding=utf-8
# Digital makeup
import face_recognition
from PIL import Image, ImageDraw

# Load the image into a numpy array
image = face_recognition.load_image_file("img/ag.png")

# Locate the facial landmarks for every face in the image
face_landmarks_list = face_recognition.face_landmarks(image)

for face_landmarks in face_landmarks_list:
    pil_image = Image.fromarray(image)
    d = ImageDraw.Draw(pil_image, 'RGBA')

    # Draw the eyebrows
    d.polygon(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 128))
    d.polygon(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 128))
    d.line(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 150), width=5)
    d.line(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 150), width=5)

    # Draw the lips
    d.polygon(face_landmarks['top_lip'], fill=(150, 0, 0, 128))
    d.polygon(face_landmarks['bottom_lip'], fill=(150, 0, 0, 128))
    d.line(face_landmarks['top_lip'], fill=(150, 0, 0, 64), width=8)
    d.line(face_landmarks['bottom_lip'], fill=(150, 0, 0, 64), width=8)

    # Draw the eyes
    d.polygon(face_landmarks['left_eye'], fill=(255, 255, 255, 30))
    d.polygon(face_landmarks['right_eye'], fill=(255, 255, 255, 30))

    # Draw the eyeliner by closing each eye's point list into a loop
    d.line(
        face_landmarks['left_eye'] + [face_landmarks['left_eye'][0]],
        fill=(0, 0, 0, 110),
        width=6)
    d.line(
        face_landmarks['right_eye'] + [face_landmarks['right_eye'][0]],
        fill=(0, 0, 0, 110),
        width=6)

    pil_image.show()
```
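The `ImageDraw.Draw(pil_image, 'RGBA')` call is what makes the makeup look translucent: fills carry an alpha value that is blended into the underlying RGB photo instead of overwriting it. A standalone sketch of the effect on a blank canvas:

```python
from PIL import Image, ImageDraw

# White RGB canvas standing in for a face photo
base = Image.new("RGB", (50, 50), (255, 255, 255))

# 'RGBA' mode means fills with an alpha channel are alpha-blended
# into the RGB base image rather than painted opaquely
d = ImageDraw.Draw(base, "RGBA")
d.polygon([(10, 10), (40, 10), (40, 40), (10, 40)], fill=(150, 0, 0, 128))

# The covered pixel is roughly halfway between white and dark red
r, g, b = base.getpixel((25, 25))
print(r, g, b)
```

With a plain `ImageDraw.Draw(base)` the same fill would paint solid dark red, hiding the skin underneath.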
Gender recognition code:

```python
# coding=utf-8
# Gender recognition
import cv2
import numpy as np
from keras.models import load_model

import chineseText

img = cv2.imread("img/gather.png")
# Use the Haar cascade bundled with opencv-python instead of a
# hard-coded Windows path
face_classifier = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_classifier.detectMultiScale(
    gray, scaleFactor=1.2, minNeighbors=3, minSize=(140, 140))

gender_classifier = load_model(
    "classifier/gender_models/simple_CNN.81-0.96.hdf5")
# Labels are kept in Chinese so chineseText can render them on the image
gender_labels = {0: '女', 1: '男'}  # 0: female, 1: male
color = (255, 255, 255)

for (x, y, w, h) in faces:
    # Expand the detected box for context; clamp so the crop stays in bounds
    face = img[max(y - 60, 0):y + h + 60, max(x - 30, 0):x + w + 30]
    face = cv2.resize(face, (48, 48))
    face = np.expand_dims(face, 0)
    face = face / 255.0
    gender_label_arg = np.argmax(gender_classifier.predict(face))
    gender = gender_labels[gender_label_arg]
    cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
    img = chineseText.cv2ImgAddText(img, gender, x + h, y, color, 30)

cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```