I am trying to run a face recognition script inside a Flask application served by Apache 2 on a Raspberry Pi 3 Model B+. Every time the application is accessed from a browser, I get this error:
******************************************************************
* FATAL ERROR: *
* This OpenCV build doesn't support current CPU/HW configuration *
* *
* Use OPENCV_DUMP_CONFIG=1 environment variable for details *
******************************************************************
Required baseline features:
NEON - NOT AVAILABLE
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(3.4.4) /home/pi/packaging/opencv-python/opencv/modules/core/src/system.cpp:538: error: (-215:Assertion failed) Missing support for required CPU baseline features.
Check OpenCV build configuration and required CPU/HW setup. in function 'initialize'
[Tue Mar 03 14:20:36.611427 2020] [core:notice] [pid 776:tid 1995485424] AH00052: child pid 6966 exit signal Aborted (6)
However, when the same code is run as a standalone script it works fine. What is causing the error above?
The /videostream route is registered with the Flask project in index.py.
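As the banner suggests, setting OPENCV_DUMP_CONFIG=1 should make OpenCV print its build configuration. Since Apache's SetEnv values do not show up in os.environ under mod_wsgi, I assume the variable has to be set from Python before cv2 is imported, roughly like this (just a sketch; which file mod_wsgi actually loads first is an assumption on my part):

```python
# Hypothetical snippet for the very top of the WSGI entry point
# (index.py or the .wsgi wrapper -- the exact file name is an assumption).
import os

# Ask OpenCV to dump its build/CPU configuration; the output should end up in the
# Apache error log for the mod_wsgi worker process.
os.environ['OPENCV_DUMP_CONFIG'] = '1'

# Only after the variable is set, import the blueprint that pulls in cv2.
from videostream import videoStreamBp
```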
UPDATE: It appears that import cv2 is what triggers the error.
videostream.py
import cv2
from flask import Blueprint, render_template, Response
videoStreamBp = Blueprint('videoStream', __name__)
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
from imutils.video import FPS
import time
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (480, 320)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(480, 320))
time.sleep(0.5)
fps = FPS().start()
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
def gen(camera):
    # Video streaming generator function: yields an MJPEG multipart stream.
    while True:
        # capture frames from the camera
        for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
            # grab the raw NumPy array representing the image
            image = frame.array
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.1, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)
            # show the frame
            #cv2.imshow("Frame", image)
            # encode the annotated frame as JPEG before sending it as a multipart chunk
            _, jpeg = cv2.imencode('.jpg', image)
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + jpeg.tobytes() + b'\r\n')
            # only meaningful when the cv2.imshow window above is enabled
            key = cv2.waitKey(1) & 0xFF
            # clear the stream in preparation for the next frame
            rawCapture.truncate(0)
            # if the `q` key was pressed, break from the loop
            if key == ord("q"):
                break
            # update the FPS counter
            fps.update()
        # stop the timer and display FPS information
        fps.stop()
        print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

@videoStreamBp.route('/videoStream')
def getVideo():
    # stream frames from the module-level PiCamera instance
    return Response(gen(camera),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
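To confirm that the import alone is the problem, and not the camera or cascade setup, something like the stripped-down view below should reproduce the abort when served through Apache while still working when run directly, matching what I see with the full script (only a sketch; the file and route names are made up):

```python
# cv2_import_test.py -- hypothetical minimal reproduction, not part of the project
from flask import Flask

app = Flask(__name__)

@app.route('/cv2test')
def cv2_test():
    # Import cv2 lazily inside the request so the worker stays alive until this
    # route is hit; if the CPU baseline check aborts, it shows up against this
    # exact request in the Apache error log.
    import cv2
    return 'cv2 {} imported OK'.format(cv2.__version__)

if __name__ == '__main__':
    # Running the file directly (python3 cv2_import_test.py) mirrors the
    # standalone case that works fine.
    app.run(host='0.0.0.0', port=5000)
```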