react-native-vision-camera-face-detector
is a React Native library that integrates with the Vision Camera module to provide face detection functionality. It allows you to easily detect faces in real time using the device's front and back cameras.
If you like this package, please give it a ⭐ on GitHub.
- Real-time face detection using front and back camera
- Adjustable face detection settings
- Optional automatic scaling of face bounds, contours and landmarks on the native side
- Can be combined with Skia Frame Processor
```sh
yarn add react-native-vision-camera-face-detector
```
Then you need to add the react-native-worklets-core plugin to babel.config.js, as in the sketch below. More details here.
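A minimal babel.config.js sketch — the preset line is an assumption that depends on your setup (bare React Native vs. Expo); the plugins entry is the part the worklets dependency needs:

```js
// babel.config.js — minimal sketch; your existing presets/plugins may differ
module.exports = {
  presets: ['module:@react-native/babel-preset'], // or 'babel-preset-expo' on Expo
  plugins: [
    // required so frame processors / worklets can be compiled
    'react-native-worklets-core/plugin'
  ]
}
```

If you also use react-native-reanimated, keep its plugin last in the plugins list.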
Recommended way:
```tsx
import {
  StyleSheet,
  Text,
  View
} from 'react-native'
import {
  useEffect,
  useState,
  useRef
} from 'react'
import {
  Frame,
  useCameraDevice
} from 'react-native-vision-camera'
import {
  Face,
  Camera,
  FaceDetectionOptions
} from 'react-native-vision-camera-face-detector'

export default function App() {
  const faceDetectionOptions = useRef<FaceDetectionOptions>( {
    // detection options
  } ).current

  const device = useCameraDevice('front')

  useEffect(() => {
    (async () => {
      const status = await Camera.requestCameraPermission()
      console.log({ status })
    })()
  }, [device])

  function handleFacesDetection(
    faces: Face[],
    frame: Frame
  ) {
    console.log(
      'faces', faces.length,
      'frame', frame.toString()
    )
  }

  return (
    <View style={{ flex: 1 }}>
      {!!device ? <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        faceDetectionCallback={ handleFacesDetection }
        faceDetectionOptions={ faceDetectionOptions }
      /> : <Text>
        No Device
      </Text>}
    </View>
  )
}
```
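Inside the callback you receive an array of Face objects. Below is a hedged sketch of reading a couple of the fields the underlying ML Kit detector exposes — bounds and smilingProbability are assumptions based on ML Kit's output (smilingProbability also requires classification to be enabled), so verify them against the Face type exported by your installed version:

```tsx
function handleFacesDetection(
  faces: Face[],
  frame: Frame
) {
  if (faces.length === 0) return
  const face = faces[0]
  // Field names below mirror ML Kit's face detection results and are an
  // assumption here — check the library's exported Face type.
  console.log('bounds', face.bounds)
  console.log('smiling probability', face.smilingProbability)
}
```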
Or use the useFaceDetector hook with your own frame processor, following the vision-camera docs:
```tsx
import {
  StyleSheet,
  Text,
  View
} from 'react-native'
import {
  useEffect,
  useState,
  useRef
} from 'react'
import {
  Camera,
  useCameraDevice,
  useFrameProcessor
} from 'react-native-vision-camera'
import {
  Face,
  runAsync,
  useFaceDetector,
  FaceDetectionOptions
} from 'react-native-vision-camera-face-detector'
import { Worklets } from 'react-native-worklets-core'

export default function App() {
  const faceDetectionOptions = useRef<FaceDetectionOptions>( {
    // detection options
  } ).current

  const device = useCameraDevice('front')
  const { detectFaces } = useFaceDetector( faceDetectionOptions )

  useEffect(() => {
    (async () => {
      const status = await Camera.requestCameraPermission()
      console.log({ status })
    })()
  }, [device])

  const handleDetectedFaces = Worklets.createRunOnJS( (
    faces: Face[]
  ) => {
    console.log( 'faces detected', faces )
  })

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    runAsync(frame, () => {
      'worklet'
      const faces = detectFaces(frame)
      // ... chain some asynchronous frame processor
      // ... do something asynchronously with frame
      handleDetectedFaces(faces)
    })
    // ... chain frame processors
    // ... do something with frame
  }, [handleDetectedFaces])

  return (
    <View style={{ flex: 1 }}>
      {!!device ? <Camera
        style={StyleSheet.absoluteFill}
        device={device}
        isActive={true}
        frameProcessor={frameProcessor}
      /> : <Text>
        No Device
      </Text>}
    </View>
  )
}
```
As face detection is a heavy process, you should run it in an asynchronous thread so it can finish without blocking your camera preview. You should read the vision-camera docs about this feature.
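If you don't need a result for every single frame, another option is to throttle detection with vision-camera's runAtTargetFps. The sketch below reuses the detectFaces and handleDetectedFaces definitions from the example above; the 5 FPS target is an arbitrary value to tune for your use case:

```tsx
import { runAtTargetFps, useFrameProcessor } from 'react-native-vision-camera'

// Drop-in replacement for the frameProcessor in the example above.
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  runAtTargetFps(5, () => {
    'worklet'
    // runs at most ~5 times per second, keeping the preview smooth
    const faces = detectFaces(frame)
    handleDetectedFaces(faces)
  })
}, [handleDetectedFaces])
```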
Option | Description | Default |
---|---|---|
`performanceMode` | Favor speed or accuracy when detecting faces. | `fast` |
`landmarkMode` | Whether to attempt to identify facial landmarks: eyes, ears, nose, cheeks, mouth, and so on. | `none` |
`contourMode` | Whether to detect the contours of facial features. Contours are detected for only the most prominent face in an image. | `none` |
`classificationMode` | Whether or not to classify faces into categories such as 'smiling' and 'eyes open'. | `none` |
`minFaceSize` | Sets the smallest desired face size, expressed as the ratio of the width of the head to the width of the image. | `0.15` |
`trackingEnabled` | Whether or not to assign faces an ID, which can be used to track faces across images. Note that when contour detection is enabled, only one face is detected, so face tracking doesn't produce useful results. For this reason, and to improve detection speed, don't enable both contour detection and face tracking. | `false` |
`autoScale` | Whether to auto-scale face bounds, contours and landmarks on the native side. If this option is disabled, all detection results are relative to frame coordinates, not to the screen/preview. You shouldn't use this option if you want to draw on screen using a Skia Frame Processor. See this and this for more details. | `false` |
`windowWidth` | *Required if you want to use `autoScale`. You must handle your own logic to get screen sizes, with or without status bar size, etc. | `1.0` |
`windowHeight` | *Required if you want to use `autoScale`. You must handle your own logic to get screen sizes, with or without status bar size, etc. | `1.0` |
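As an illustration, a possible options object that favors accuracy, enables classification, and auto-scales results to the preview could look like the sketch below. The string values mirror ML Kit's option names ('fast'/'accurate', 'none'/'all') — verify them against the exported FaceDetectionOptions type — and Dimensions is just one way to obtain the window size:

```tsx
import { Dimensions } from 'react-native'
import { FaceDetectionOptions } from 'react-native-vision-camera-face-detector'

// Sketch of a possible configuration — tune every value for your own app.
const faceDetectionOptions: FaceDetectionOptions = {
  performanceMode: 'accurate',  // favor accuracy over speed
  classificationMode: 'all',    // enable smiling / eyes-open probabilities
  minFaceSize: 0.15,            // ignore faces smaller than 15% of the image width
  autoScale: true,              // scale results to screen/preview coordinates
  windowWidth: Dimensions.get('window').width,
  windowHeight: Dimensions.get('window').height
}
```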
Here are some common issues when trying to use this package and how you can try to fix them:

- `Regular javascript function cannot be shared. Try decorating the function with the 'worklet' keyword...`:
  - If you're using react-native-reanimated maybe you're missing this step.
  - If you're using
- `Execution failed for task ':react-native-vision-camera-face-detector:compileDebugKotlin'...`:
If you find other errors while using this package you're welcome to open a new issue or create a PR with the fix.
This package was tested using the following:
- `react-native`: 0.74.3 (new arch disabled)
- `react-native-vision-camera`: 4.5.0
- `react-native-worklets-core`: 1.3.3
- `react-native-reanimated`: 3.12.1
- `expo`: 51.0.17

Min OS version:

- Android: SDK 26 (Android 8)
- iOS: 14
Make sure you're following the tested versions and that your device meets the minimum OS version before opening issues.
Made with ❤️ by luicfrr