
Geometry for FaceLandmark vs Camera #1

Open
JpEncausse opened this issue Dec 25, 2021 · 1 comment
@JpEncausse

Hello,
I love the parallax effect, but I'm a little lost in your code.

I implemented the codepen described in this topic, but I don't think it's a good idea to move the cube. It's better to move the camera according to the face landmarks, as you did.

But... I'm lost... how do I move the camera according to FaceMesh?

@vivien000
Owner

Hi @JpEncausse.

Thanks for your feedback. Aside from the parallax effect, the code is mainly adapted from the first chapter of https://discoverthreejs.com/book/

The parallax effect is implemented in only two files:

In geometry.js, things are a bit messy because various options are available (cf. the README file). You can focus only on the parts corresponding to BLAZE = false and distanceMethod = 0, which are the default options.

The essential steps are the following:

  • you get the predictions of the model
        predictions = await model.estimateFaces({
            input: ctx.getImageData(0, 0, width, height),
            predictIrises: (distanceMethod == 1),
            flipHorizontal: false
        });
  • you keep only the facial landmark between the two eyes
            keypoints = predictions[0].scaledMesh;
            centerX = keypoints[168][0];
            centerY = keypoints[168][1];
  • you compute the horizontal angle, the vertical angle and the distance (here, width, height, HFOV, VFOV are derived from the video stream and some constant values)
        return [Math.atan(2 * (centerX - width / 2) / width * Math.tan(HFOV / 2)),
            Math.atan(2 * (centerY - height / 2) / height * Math.tan(VFOV / 2)),
            d
        ]
  • you derive from this the (x, y, z) coordinates of the observer (that you'll use for the camera)
        const result = await getLocation(context, width, height);
...
        const [angle1, angle2, distance] = result;
...
        let d = this.distance;
        const tan1 = -Math.tan(this.angle1);
        const tan2 = -Math.tan(this.angle2);
        const z = Math.sqrt(d * d / (1 + tan1 * tan1 + tan2 * tan2));
        const cameraPosition = [z * tan1, z * tan2, z];
        const fov = 180 / Math.PI * 2 * Math.atan(HALF_DIAGONAL / d);
        return [cameraPosition, fov];
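Putting the steps above together, here is a minimal self-contained sketch of the geometry. The function name and the sample constants (stream size, fields of view, distance, half-diagonal) are made up for illustration; in the real code these values are derived from the video stream and the configuration:

```javascript
// Sketch of the geometry above, assuming the landmark between the eyes
// (centerX, centerY, in pixels) and the estimated distance d are already known.
// HFOV/VFOV are the webcam's horizontal/vertical fields of view in radians;
// halfDiagonal is half the screen diagonal, in the same unit as d.
function getCameraParameters(centerX, centerY, width, height, HFOV, VFOV, d, halfDiagonal) {
  // Horizontal and vertical angles of the observer relative to the webcam axis
  const angle1 = Math.atan(2 * (centerX - width / 2) / width * Math.tan(HFOV / 2));
  const angle2 = Math.atan(2 * (centerY - height / 2) / height * Math.tan(VFOV / 2));

  // Convert (angle1, angle2, d) into Cartesian coordinates for the camera
  const tan1 = -Math.tan(angle1);
  const tan2 = -Math.tan(angle2);
  const z = Math.sqrt(d * d / (1 + tan1 * tan1 + tan2 * tan2));
  const cameraPosition = [z * tan1, z * tan2, z];

  // Field of view (in degrees) such that the screen fills the view at distance d
  const fov = 180 / Math.PI * 2 * Math.atan(halfDiagonal / d);
  return [cameraPosition, fov];
}

// Example: a face exactly centered in a 640x480 stream, 2 units away
const [pos, fov] = getCameraParameters(320, 240, 640, 480, 1.0, 0.75, 2, 0.5);
```

For a centered face both angles are zero, so the camera sits on the z-axis at distance d; for any other landmark position, the resulting camera position still lies at distance d from the origin.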

In Loop.js, we just periodically update the parameters for the camera:

      const result = await this.faceTracker.getCameraParameters();
      if (result !== null) {
        const [cameraPosition, fov] = result;
        this.cameraPosition = cameraPosition;
        this.fov = fov;
        this.lastUpdate = Date.now();
      }
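To answer the original question ("how to move the camera"): the remaining step is simply to push these parameters into the scene's camera on every frame. A minimal sketch, with a hypothetical updateCamera helper and a plain-object stand-in for the camera (with a real three.js PerspectiveCamera you would also call camera.updateProjectionMatrix() after changing fov, and typically camera.lookAt(0, 0, 0) so the camera keeps facing the scene):

```javascript
// Hypothetical helper: applies the tracked parameters to a camera-like object.
// Returns false when tracking failed, so the last known pose is kept.
function updateCamera(camera, result) {
  if (result === null) return false;   // no face detected this frame
  const [[x, y, z], fov] = result;
  camera.position = { x, y, z };       // observer position computed in geometry.js
  camera.fov = fov;                    // vertical field of view in degrees
  return true;
}

// Plain-object stand-in for a three.js camera, for illustration only
const camera = { position: { x: 0, y: 0, z: 5 }, fov: 50 };
updateCamera(camera, [[0.1, -0.2, 1.9], 28]);
updateCamera(camera, null);            // tracking lost: pose is left unchanged
```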

I hope this helps!
