fern-ar-js

Web Augmented Reality. A-Frame Component. Currently supports gesture recognition


Features

📌 Gesture Recognition A-Frame Component: Capable of recognizing 18 gestures.

📌 Custom Gesture Training: Train personalized models and pass them to the FernAR engine for customized gesture recognition.

Gesture Recognition:

Add the library to a scene to start an AR experience using the device camera, allowing you to control A-Frame elements with hand gestures. If the library's predefined gestures are insufficient, see the Train Your Own Model section to train a custom model.

Simple Example:

Live demo: Enter AR

<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
    <script src="https://unpkg.com/[email protected]/dist/aframe-event-set-component.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/fernar-gesture.prod.js"></script>
  </head>
  <body>
    <a-scene fernar-gesture="drawLandmarker:true">
      <a-camera active="true" position="0 0 0"></a-camera>
      <a-box
        id="coolbox"
        fernar-gesture-target="gesture: 0,1,2"
        event-set__red="_event: fernar-gesture-event-0; material.color: red"
        event-set__blue="_event: fernar-gesture-event-1; material.color: blue"
        event-set__green="_event: fernar-gesture-event-2; material.color: green"
        color="yellow"
        position="0 0 -10"
      ></a-box>
    </a-scene>
  </body>
</html>
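
In this scene, the aframe-event-set-component wires each FernAR gesture event to a color change: the box starts yellow and turns red, blue, or green when gesture 0, 1, or 2 is recognized.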

Step 1: Add fern-ar library

CDN

<script src="https://cdn.jsdelivr.net/npm/fern-ar@latest/dist/fernar-gesture.prod.js"></script>

NPM

$ npm install fern-ar@latest
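
When installing from npm, the script still has to be loaded by your bundler. A minimal sketch, assuming the package ships the same dist/fernar-gesture.prod.js bundle that the CDN serves:

// Assumption: the npm package exposes the prebuilt bundle at this path
import "fern-ar/dist/fernar-gesture.prod.js";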

Step 2: Add fernar-gesture to <a-scene>

<a-scene fernar-gesture>

Note 1: To use a self-trained ML model, call the FERNAR.updateModel function with the model JSON (string), the weights (byte array), and the weights bin file name.

async function initializeScene() {
    const jsonModelPath = "../test-model/model.json";
    const binModelPath = "../test-model/model.weights.bin";

    // Fetching JSON model file
    const jsonResponse = await fetch(jsonModelPath);
    const jsonModelFile = await jsonResponse.text();

    // Fetching binary model file
    const binResponse = await fetch(binModelPath);
    const binModelFile = new Uint8Array(await binResponse.arrayBuffer());

    await FERNAR.updateModel(jsonModelFile, binModelFile, binModelPath);
}
initializeScene();
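
The jsonModelPath and binModelPath above are illustrative; point them at the model.json and model.weights.bin files produced by the training page described in the Train Your Own Model section below.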

Note 2: Use FERNAR.setDebugMode to enable debug messages

FERNAR.setDebugMode(true);

Note 3: Properties for fernar-gesture

Property        Type     Default
--------------  -------  --------
drawLandmarker  boolean  true
threshold       int      10
confidence      number   0.7
planePosition   array    -6 3 -7
planeWidth      int      5
planeHeight     int      5
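
For example, several properties can be set at once using the standard A-Frame component syntax (the values here are illustrative):

<a-scene fernar-gesture="drawLandmarker: false; threshold: 15; confidence: 0.8; planePosition: -6 3 -7">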

Step 3: Add a camera to <a-scene>

<a-scene fernar-gesture="drawLandmarker:true">
  <a-camera active="true" position="0 0 0"></a-camera>
</a-scene>

Step 4: Register entities for gesture events

Register an entity for gesture events by adding fernar-gesture-target to it and specifying the array of gesture IDs it should listen for. The event name is fernar-gesture-event-${gesture-id}.

For example, by using <a-box fernar-gesture-target="gesture: 1,2,5"></a-box>, the FernAR engine will send fernar-gesture-event-1, fernar-gesture-event-2, and fernar-gesture-event-5 events to the <a-box> every time the engine detects the corresponding gesture.

Step 5: Listen to events from the FernAR engine and perform the desired actions

entity.addEventListener("fernar-gesture-event-0", function (event) {
  ...
});
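
For instance, a handler for the box from the Simple Example above might update an attribute directly (a minimal sketch; #coolbox is the id used in that example):

const box = document.querySelector("#coolbox");
// Scale the box up whenever gesture 0 is recognized
box.addEventListener("fernar-gesture-event-0", function (event) {
  box.setAttribute("scale", "2 2 2");
});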

Supported Gestures:

[Image: gesture table listing the 18 supported gestures]

Train Your Own Model:

Refer to the Git page Train Model to train a customized model. Provide a CSV file (you can use the one in data/label.csv), enter the new gesture ID you want to recognize, and perform the gesture in front of the camera. The web page records the gesture landmarks each time you click Store Gesture.

After labeling, you can either click the Download CSV File button to download the newly labeled CSV file, or specify the total number of gestures you want to train (e.g., gestures 0, 1, 2, and 3 make 4 gestures in total) and click Start Training to train the model on the CSV file you just labeled.

Once training is complete, the web page automatically saves the model to your device's Download folder. You should find two files: model.json and model.weights.bin.
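
These are the two files expected by FERNAR.updateModel, so you can load your custom model as shown in the note under Step 2 above.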

Contributing

Support and questions are welcome! If you encounter any bugs or have suggestions for improvements, feel free to open an issue and let's discuss it together.

References:

⭐ The gesture recognition heavily relies on the repo hand-gesture-recognition-using-mediapipe

⭐ A-Frame: https://github.com/aframevr/aframe

⭐ MediaPipe: https://developers.google.com/mediapipe/solutions/vision/hand_landmarker

⭐ Mind AR: https://github.com/hiukim/mind-ar-js
