For the A-Frame version, it's recommended to use version 3.4.5, rather than master:

<!-- A-Frame itself -->
<script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
<!-- Pure three.js code that the A-Frame components use for location-based AR -->
<script type='text/javascript' src='https://raw.githack.com/AR-js-org/AR.js/3.4.5/three.js/build/ar-threex-location-only.js'></script>
<!-- AR.js A-Frame components -->
<script type='text/javascript' src='https://raw.githack.com/AR-js-org/AR.js/3.4.5/aframe/build/aframe-ar.js'></script>
For the three.js version, it's recommended to import AR.js as a module and build with a bundler such as Webpack. There is an example given in the location-based section.
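For instance, assuming AR.js 3.4.5 has been installed via npm under the package name @ar-js-org/ar.js, a minimal sketch of the module import looks like this:

// index.js - entry point bundled by e.g. Webpack
// Imports the location-only three.js build of AR.js as a module.
import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js';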
Requirements and Known Issues
You must ensure that you have matching versions of AR.js and A-Frame. AR.js 3.4.5 (the latest version) requires A-Frame 1.3.0, while AR.js 3.4.4 and below require A-Frame 1.0.4.
Location-based AR will not work correctly on Firefox, due to the inability to obtain absolute device orientation (compass bearing).
On Android/Chrome, you may encounter issues with location-based AR due to inaccuracies in compass calibration (incorrect north). This is likely to be a hardware limitation of the device.
On some phones you may encounter problems with locating north due to inherent miscalibration of the device sensors. This is a known problem recognised by the three.js developers: see here.
Please ensure you enable high accuracy location for your selected browser on Android. Sometimes high accuracy location is turned off by default, and this will lead to an inaccurate GPS location.
There is currently a bug in location-based AR where the camera feed is stretched away from the centre of the screen, meaning that there is reduced accuracy in placement of objects further away from the centre. Work is ongoing to investigate this.
On devices with multiple cameras, Chrome may have problems detecting the right one. Please use Firefox if you find that AR.js opens with the wrong camera. There is an open issue for this.
Important! You might want to check out the new AR.js LocAR project if you are interested in location-based AR. This aims to provide a cleaner API, with just a single version, and more frequent updates.
In the future, updates on the location-based side will be focused on LocAR.
Intro to location-based
This article gives you a first glance at location-based AR with AR.js.
It can be used for indoor (though with low precision) and outdoor geopositioning of AR content.
You can load places statically, from HTML or from JavaScript, or you can load your data from local/remote JSON, or even through API calls. The choice is yours. The article linked above explains all of these options, with tutorials.
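As a rough sketch of loading places dynamically from JavaScript (the endpoint URL and the lat/lon field names are purely hypothetical, and gps-new-entity-place is the entity-place component of the new-location-based variant described under A-Frame below):

// Hypothetical sketch: fetch places from a JSON endpoint and add one entity per place.
// The URL and the field names (lat, lon) are assumptions, not a real API.
window.addEventListener('load', () => {
  fetch('https://example.com/api/places.json')   // hypothetical endpoint
    .then(response => response.json())
    .then(places => {
      const scene = document.querySelector('a-scene');
      places.forEach(place => {
        const box = document.createElement('a-box');
        box.setAttribute('material', 'color: red');
        box.setAttribute('scale', '10 10 10');
        box.setAttribute('gps-new-entity-place', `latitude: ${place.lat}; longitude: ${place.lon}`);
        scene.appendChild(box);
      });
    });
});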
Location-based AR with AR.js is subject to certain limitations.
Your device must have a GPS chip, accelerometer and magnetometer (a rough way to check for the relevant browser APIs is sketched after this list).

On some devices, the sensors may be miscalibrated, resulting in an incorrect north. See, for example, this three.js issue. This is unfortunately a limitation of the device. This will be investigated further in LocAR, for example, as to whether certain devices are consistently "out" by a certain bearing.

The camera feed may appear "stretched". Again, the focus on fixing this will be in LocAR.
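As a hedged capability check (browsers do not directly report whether a GPS chip or magnetometer is physically present, so this can only test that the relevant APIs exist):

// Sketch: check for the browser APIs used by location-based AR.
// Note: this confirms API availability only, not the actual sensor hardware.
if (!('geolocation' in navigator)) {
  alert('This browser does not support the Geolocation API.');
}
if (!window.DeviceOrientationEvent) {
  alert('This browser does not support device orientation events.');
}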
A-Frame
AR.js offers A-Frame components to implement location-based AR. There are three variants of the components, detailed below:

The new-location-based components. These are recommended for most uses (a minimal usage sketch is given after this list). They have been available since AR.js 3.4.0, incorporate various bug fixes, use simpler code, and provide a thin wrapper round the three.js API shown below. They do not support all the events of the older components, due to a different internal implementation. Nonetheless they are the components likely to see further development; the older variants are unlikely to see further work besides bug fixes.

The projected components. These have been available since AR.js 3.3.1, use largely the same internal implementation as the classic components, and were the first to offer projection of latitude/longitude into Spherical Mercator, discussed below. They are generally not recommended unless you have problems with new-location-based.

The classic components, available before AR.js 3.3.1. These are similar to the projected components but do not offer the facility to convert between latitude/longitude and the projected coordinates used for augmented reality, which can cause problems for more specialist uses such as showing roads and paths in augmented reality. For most use cases new-location-based is preferred, but some uses, such as embedded AR scenes, only work with the classic components.
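Here is the minimal usage sketch referred to above, using the new-location-based variant; the latitude and longitude values are placeholders that you would replace with a point close to your own position:

<!-- Sketch: a red box placed at a given latitude/longitude (placeholder values) -->
<a-scene vr-mode-ui='enabled: false' arjs='sourceType: webcam; videoTexture: true; debugUIEnabled: false'>
  <a-camera gps-new-camera='gpsMinDistance: 5'></a-camera>
  <a-box material='color: red'
         gps-new-entity-place='latitude: 51.05; longitude: -0.72'
         scale='10 10 10'></a-box>
</a-scene>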
The components
Calculating world coordinates of arbitrary augmented content
gps-new-camera implements projection via the underlying AR.js three.js LocationBased object (see three.js documentation, below) which is responsible for the actual projection.
gps-projected-camera provides similar functionality but via a different method and with some implementation differences. In gps-projected-camera, unlike gps-new-camera, the original GPS position is set as the world origin.
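The full documentation describes a latLonToWorld(lat, lon) method on these camera components, returning a 2-member array of [x, z] world coordinates. As a sketch of how it might be called (the query selector and the object being positioned are assumptions for illustration):

// Sketch: convert a latitude/longitude to world coordinates via gps-new-camera.
// Assumes the scene contains an <a-camera gps-new-camera> entity.
const cameraEl = document.querySelector('[gps-new-camera]');
const gpsCam = cameraEl.components['gps-new-camera'];
// latLonToWorld(lat, lon) returns [x, z]; the y coordinate (altitude) is yours to choose.
const [x, z] = gpsCam.latLonToWorld(51.05, -0.72); // placeholder coordinates
myThreeObject.position.set(x, 0, z); // myThreeObject: a hypothetical THREE.Object3D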
three.js
For pure three.js (no A-Frame) it is recommended to use LocAR. The notes below, however, refer to the three.js version in the main AR.js repository.
The three.js API keeps track of your current GPS location (or allows you to set a fake location) and allows you to add three.js objects at a given latitude and longitude. It includes these classes:
THREEx.LocationBased - general manager class for the three.js location-based API.
THREEx.WebcamRenderer - renders the feed from the webcam as a WebGL texture.
THREEx.DeviceOrientationControls - for detecting changes in the orientation of the device.
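Putting these classes together, here is a minimal sketch using a faked GPS position; the coordinates are placeholders, and the page is assumed to contain a <video id='video1'> element for the webcam feed:

// Minimal location-based three.js sketch with a faked GPS position.
// Assumes 'three' is installed alongside AR.js.
import * as THREE from 'three';
import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const locBased = new THREEx.LocationBased(scene, camera);
const webcam = new THREEx.WebcamRenderer(renderer, '#video1'); // streams the camera feed
const orientationControls = new THREEx.DeviceOrientationControls(camera);

// Add a red box slightly north of the faked position (placeholder coordinates).
const box = new THREE.Mesh(
  new THREE.BoxGeometry(20, 20, 20),
  new THREE.MeshBasicMaterial({ color: 0xff0000 })
);
locBased.add(box, -0.72, 51.051); // add(object, lon, lat)

locBased.fakeGps(-0.72, 51.05); // fakeGps(lon, lat) - or use startGps() on a real device

renderer.setAnimationLoop(() => {
  orientationControls.update(); // update camera rotation from device sensors
  webcam.update();              // refresh the webcam texture
  renderer.render(scene, camera);
});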
diff --git a/search/search_index.json b/search/search_index.json
index 4e2ea0e..b1387a3 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"AR.js - Augmented Reality on the Web AR.js is a lightweight library for Augmented Reality on the Web, which includes features like Image Tracking, Location based AR and Marker tracking. Location Based documentation updated and enhanced for AR.js 3.4 What Web AR means (Augmented Reality on the Web) Augmented Reality is the technology that makes possible to overlay content on the real world. It can be provided for several type of devices: handheld (like mobile phones), headsets, desktop displays, and so on. For handheld devices (more generally, for video-see-through devices) the 'reality' is captured from one or more cameras and then shown on the device display, adding some kind of content on top of it. For developers, to develop Augmented Reality ('AR' from now on) on the Web, means to avoid all the Mobile app development efforts and costs related to App stores (validation, time to publish). It also means to re-use well known technologies like Javascript, HTML and CSS, familiar to a lot of developers and possibly designers. It basically means that it is possible to release every new version instantly, fix bugs or release new features in near real-time, opening a lot of practical possibilities. For users, it means to reach an AR experience just visiting a website. As QR Codes are now widespread, it's also possible to scan a QR Code and reach the URL without typing. Additionally, users do not have to reserve storage space on their download the AR app, and do not have to keep it updated. Why AR.js We believe in the Web, as a collaborative and accessible environment. We also believe in Augmented Reality technology, as a new communication medium, that can help people see reality in new, exciting ways. We see Augmented Reality (AR) used everyday for a lot of useful applications, from art, to education, also for fun. We strongly believe that such a powerful technology, that can help people and leverage their creativity, should be free in some way. Also collaborative, if possible. And so, we continue the work started by Jerome Etienne, in bringing AR on the Web, as a free and Open Source technology. Thank you for being interested in this, if you'd like to collaborate in any way, contact us ( https://twitter.com/nicolocarp ). The project is now under a Github organization, that you can find at https://github.com/ar-js-org and you can ask to be part of it, for free. AR types AR.js features the following types of Augmented Reality, on the Web: Image Tracking , when a 2D images is found by the camera, it's possible to show some kind of content on top of it, or near it. The content can be a 2D image, a GIF, a 3D model (also animated) and a 2D video too. Cases of use: Augmented Art, learning (Augmented books), Augmented flyers, advertising, etc. Location Based AR , this kind of AR uses real-world places in order to show Augmented Reality content, on the user device. The experiences that can be built with this library are those that use a user's position in the real world. The user can move (ideally outdoor) and through their smartphones they can see AR content where places are in the real world. Moving around and rotating the phone will make the AR content change according to users position and rotation (so places are 'anchored' in their real position, and appear bigger/smaller according to their distance from the user). 
With this solution it\u2019s possible to build experiences like interactive support for tourist guides, assistance when exploring a new city, find places of interest like buildings, museums, restaurants, hotels and so on. It\u2019s also possible to build learning experiences like treasure hunts, and biology or history learning games, or use this technology for situated art (visual art experiences bound to specific real world coordinates). Marker Tracking , When a marker is found by the camera, it's possible to show some content (same as Image Tracking). Markers are very stable but limited in shape, color and size. It is suggested for those experiences where are required a lot of different markers with different content. Examples of use: (Augmented books), Augmented flyers, advertising. Key points Very Fast : It runs efficiently even on phones Web-based : It is a pure web solution, so no installation required. Fully javascript based, using three.js + A-Frame + jsartoolkit5 Open Source : It is completely open source and free of charge! Standards : It works on any phone with webgl and webrtc AR.js has reached version 3. This is the official repository: https://github.com/AR-js-org/AR.js . If you want to visit the old AR.js repository, here it is: https://github.com/jeromeetienne/AR.js . Import the library AR.js from version 3 has a new structure. AR.js comes in two, different builds. They are both maintained. They are exclusive. The file you want to import depends on what features you want, and also which render library you want to use (A-Frame or three.js). AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . You can import AR.js in one version of your choice, using the For the three.js version, it's recommended to import AR.js as a module and build with a bundler such as Webpack. There is an example given in the location-based section. Requirements and Known Issues Some requirements and known issues are listed below: It works on every phone with webgl and webrtc . Marker based tracking is very lightweight, while Image Tracking is more CPU consuming You must ensure that you have matching versions of AR.js and A-Frame. AR.js 3.4.5 (the latest version) requires A-Frame 1.3.0 while AR.js 3.4.4 and below requires 1.0.4. Location-based AR will not work correctly on Firefox, due to the inability to obtain absolute device orientation (compass bearing) On Android/Chrome, you may encounter issues with location-based AR due to inaccuracies in compass calibration (incorrect north). This is likely to be a hardware limitation of the device. Please ensure you enable high accuracy location for your selected browser on Android. Sometimes high accuracy location is turned off by default, and this will lead to an inaccurate GPS location. There is currently a bug in location-based AR where the camera feed is stretched away from the centre of the screen, meaning that there is reduced accuracy in placement of objects further away from the centre. Work is ongoing to investigate this. On device with multi-cameras, Chrome may have problems on detecting the right one. Please use Firefox if you find that AR.js opens on the wrong camera. There is an open issue for this. To work with Location Based feature, your device needs to have GPS, accelerometer and magnetometer sensors. It will not work if any of these sensors are absent. 
Please, read carefully any suggestions that AR.js pops-up -as alerts- for Location Based on iOS, as iOS requires user actions to activate geoposition Access to the phone camera or to camera GPS sensors, due to major browsers restrictions, can be done only under https websites. All the examples you will see, and all AR.js web apps in general, have to be run on a server. You can use local server or deploy the static web app on the web. Always deploy under https So don't forget to always run your examples on secure connections servers or localhost. Github Pages is a great way to have free and live websites under https. Getting started Here we present three, basic examples, one for each AR feature. For specific documentation, on the top menu you can find every section, or you can click on the following links: Image Tracking Documentation Location Based Documentation Marker Based Documentation Image Tracking Example There is a Codepen for you to try. Below you can find also a live example. Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera.
Loading, please wait...
Location Based Example This example retrieves your position and places a red box near you. Please follow these simple steps: Create a new project with the following snippet, and change add-your-latitude and add-your-longitude with a point very close to your latitude and longitude (about 0.001 degrees distant for both latitude and longitude), without the <> . Run it on a server Activate GPS on your phone and navigate to the example URL Look around. You should see the box close to you, appearing in the requested position, even if you look around and move the phone. AR.js A-Frame Location-based; longitude: \" scale=\"10 10 10\"> This is just a basic example and most location-based applications will involve JavaScript coding. So, if you want to enhance and customize your Location Based experience, take a look at the Location Based docs. Marker Based Example Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera. Advanced stuff AR.js offers two ways, with A-Frame, to interact with the web page: to interact directly with AR content and Overlayed DOM interaction. Also, there are several Custom Events triggered during the life cycle of every AR.js web app. You can learn more about these aspects on the UI and Events section . AR.js architecture AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . three.js folder contains source code for AR.js core, Marker based and Image Tracking examples for AR.js three.js based build for three.js AR.js based vendor stuff (jsartoolkit5) workers (used for Image Tracking). When you find files that ends with -nft suffix, they're bundled only with the Image Tracking version. A-Frame version of AR.js uses three.js parts as its core. A-Frame code, on AR.js, is simply a wrapper to write AR with Custom Components in HTML. aframe folder contains source code for AR.js A-Frame (aka wrappers for Marker Based, Image Tracking components) source code for Location Based build for A-Frame AR.js based examples for A-Frame AR.js. Tutorials There are various tutorials available for developing with AR.js. These include: Location Based Build your Location-Based Augmented Reality Web App : covers location-based AR.js with A-Frame. Develop a Simple Points Of Interest App (A-Frame version) ( Provided with these docs ): a further location-based A-Frame tutorial, written with AR.js 3.4 in mind. Develop a Simple Points of Interest App (three.js version) ( Provided with these docs ): a pure three.js version of the above, also written for AR.js 3.4. Troubleshooting, feature requests, community You can find a lot of help on the old AR.js repositories issues . Please search on open/closed issues, you may find useful information. Contributing From opening a bug report to creating a pull request: every contribution is appreciated and welcome. If you're planning to implement a new feature or change the API please create an issue first. This way we can ensure that your precious work is not in vain. Issues If you are having configuration or setup problems, please post a question to StackOverflow . You can also address question to us in our Gitter chatroom If you have discovered a bug or have a feature suggestion, feel free to create an issue on Github. Submitting Changes After getting some feedback, push to your fork and submit a pull request. 
We may suggest some changes or improvements or alternatives, but for small changes your pull request should be accepted quickly. Some things that will increase the chance that your pull request is accepted: Follow the existing coding style Write a good commit message","title":"Home"},{"location":"#arjs-augmented-reality-on-the-web","text":"AR.js is a lightweight library for Augmented Reality on the Web, which includes features like Image Tracking, Location based AR and Marker tracking. Location Based documentation updated and enhanced for AR.js 3.4","title":"AR.js - Augmented Reality on the Web"},{"location":"#what-web-ar-means-augmented-reality-on-the-web","text":"Augmented Reality is the technology that makes possible to overlay content on the real world. It can be provided for several type of devices: handheld (like mobile phones), headsets, desktop displays, and so on. For handheld devices (more generally, for video-see-through devices) the 'reality' is captured from one or more cameras and then shown on the device display, adding some kind of content on top of it. For developers, to develop Augmented Reality ('AR' from now on) on the Web, means to avoid all the Mobile app development efforts and costs related to App stores (validation, time to publish). It also means to re-use well known technologies like Javascript, HTML and CSS, familiar to a lot of developers and possibly designers. It basically means that it is possible to release every new version instantly, fix bugs or release new features in near real-time, opening a lot of practical possibilities. For users, it means to reach an AR experience just visiting a website. As QR Codes are now widespread, it's also possible to scan a QR Code and reach the URL without typing. Additionally, users do not have to reserve storage space on their download the AR app, and do not have to keep it updated.","title":"What Web AR means (Augmented Reality on the Web)"},{"location":"#why-arjs","text":"We believe in the Web, as a collaborative and accessible environment. We also believe in Augmented Reality technology, as a new communication medium, that can help people see reality in new, exciting ways. We see Augmented Reality (AR) used everyday for a lot of useful applications, from art, to education, also for fun. We strongly believe that such a powerful technology, that can help people and leverage their creativity, should be free in some way. Also collaborative, if possible. And so, we continue the work started by Jerome Etienne, in bringing AR on the Web, as a free and Open Source technology. Thank you for being interested in this, if you'd like to collaborate in any way, contact us ( https://twitter.com/nicolocarp ). The project is now under a Github organization, that you can find at https://github.com/ar-js-org and you can ask to be part of it, for free.","title":"Why AR.js"},{"location":"#ar-types","text":"AR.js features the following types of Augmented Reality, on the Web: Image Tracking , when a 2D images is found by the camera, it's possible to show some kind of content on top of it, or near it. The content can be a 2D image, a GIF, a 3D model (also animated) and a 2D video too. Cases of use: Augmented Art, learning (Augmented books), Augmented flyers, advertising, etc. Location Based AR , this kind of AR uses real-world places in order to show Augmented Reality content, on the user device. The experiences that can be built with this library are those that use a user's position in the real world. 
The user can move (ideally outdoor) and through their smartphones they can see AR content where places are in the real world. Moving around and rotating the phone will make the AR content change according to users position and rotation (so places are 'anchored' in their real position, and appear bigger/smaller according to their distance from the user). With this solution it\u2019s possible to build experiences like interactive support for tourist guides, assistance when exploring a new city, find places of interest like buildings, museums, restaurants, hotels and so on. It\u2019s also possible to build learning experiences like treasure hunts, and biology or history learning games, or use this technology for situated art (visual art experiences bound to specific real world coordinates). Marker Tracking , When a marker is found by the camera, it's possible to show some content (same as Image Tracking). Markers are very stable but limited in shape, color and size. It is suggested for those experiences where are required a lot of different markers with different content. Examples of use: (Augmented books), Augmented flyers, advertising.","title":"AR types"},{"location":"#key-points","text":"Very Fast : It runs efficiently even on phones Web-based : It is a pure web solution, so no installation required. Fully javascript based, using three.js + A-Frame + jsartoolkit5 Open Source : It is completely open source and free of charge! Standards : It works on any phone with webgl and webrtc AR.js has reached version 3. This is the official repository: https://github.com/AR-js-org/AR.js . If you want to visit the old AR.js repository, here it is: https://github.com/jeromeetienne/AR.js .","title":"Key points"},{"location":"#import-the-library","text":"AR.js from version 3 has a new structure. AR.js comes in two, different builds. They are both maintained. They are exclusive. The file you want to import depends on what features you want, and also which render library you want to use (A-Frame or three.js). AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . You can import AR.js in one version of your choice, using the For the three.js version, it's recommended to import AR.js as a module and build with a bundler such as Webpack. There is an example given in the location-based section.","title":"Import the library"},{"location":"#requirements-and-known-issues","text":"Some requirements and known issues are listed below: It works on every phone with webgl and webrtc . Marker based tracking is very lightweight, while Image Tracking is more CPU consuming You must ensure that you have matching versions of AR.js and A-Frame. AR.js 3.4.5 (the latest version) requires A-Frame 1.3.0 while AR.js 3.4.4 and below requires 1.0.4. Location-based AR will not work correctly on Firefox, due to the inability to obtain absolute device orientation (compass bearing) On Android/Chrome, you may encounter issues with location-based AR due to inaccuracies in compass calibration (incorrect north). This is likely to be a hardware limitation of the device. Please ensure you enable high accuracy location for your selected browser on Android. Sometimes high accuracy location is turned off by default, and this will lead to an inaccurate GPS location. There is currently a bug in location-based AR where the camera feed is stretched away from the centre of the screen, meaning that there is reduced accuracy in placement of objects further away from the centre. 
Work is ongoing to investigate this. On device with multi-cameras, Chrome may have problems on detecting the right one. Please use Firefox if you find that AR.js opens on the wrong camera. There is an open issue for this. To work with Location Based feature, your device needs to have GPS, accelerometer and magnetometer sensors. It will not work if any of these sensors are absent. Please, read carefully any suggestions that AR.js pops-up -as alerts- for Location Based on iOS, as iOS requires user actions to activate geoposition Access to the phone camera or to camera GPS sensors, due to major browsers restrictions, can be done only under https websites. All the examples you will see, and all AR.js web apps in general, have to be run on a server. You can use local server or deploy the static web app on the web.","title":"Requirements and Known Issues"},{"location":"#always-deploy-under-https","text":"So don't forget to always run your examples on secure connections servers or localhost. Github Pages is a great way to have free and live websites under https.","title":"Always deploy under https"},{"location":"#getting-started","text":"Here we present three, basic examples, one for each AR feature. For specific documentation, on the top menu you can find every section, or you can click on the following links: Image Tracking Documentation Location Based Documentation Marker Based Documentation","title":"Getting started"},{"location":"#image-tracking-example","text":"There is a Codepen for you to try. Below you can find also a live example. Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera.
Loading, please wait...
","title":"Image Tracking Example"},{"location":"#location-based-example","text":"This example retrieves your position and places a red box near you. Please follow these simple steps: Create a new project with the following snippet, and change add-your-latitude and add-your-longitude with a point very close to your latitude and longitude (about 0.001 degrees distant for both latitude and longitude), without the <> . Run it on a server Activate GPS on your phone and navigate to the example URL Look around. You should see the box close to you, appearing in the requested position, even if you look around and move the phone. AR.js A-Frame Location-based; longitude: \" scale=\"10 10 10\"> This is just a basic example and most location-based applications will involve JavaScript coding. So, if you want to enhance and customize your Location Based experience, take a look at the Location Based docs.","title":"Location Based Example"},{"location":"#marker-based-example","text":"Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera. ","title":"Marker Based Example"},{"location":"#advanced-stuff","text":"AR.js offers two ways, with A-Frame, to interact with the web page: to interact directly with AR content and Overlayed DOM interaction. Also, there are several Custom Events triggered during the life cycle of every AR.js web app. You can learn more about these aspects on the UI and Events section .","title":"Advanced stuff"},{"location":"#arjs-architecture","text":"AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . three.js folder contains source code for AR.js core, Marker based and Image Tracking examples for AR.js three.js based build for three.js AR.js based vendor stuff (jsartoolkit5) workers (used for Image Tracking). When you find files that ends with -nft suffix, they're bundled only with the Image Tracking version. A-Frame version of AR.js uses three.js parts as its core. A-Frame code, on AR.js, is simply a wrapper to write AR with Custom Components in HTML. aframe folder contains source code for AR.js A-Frame (aka wrappers for Marker Based, Image Tracking components) source code for Location Based build for A-Frame AR.js based examples for A-Frame AR.js.","title":"AR.js architecture"},{"location":"#tutorials","text":"There are various tutorials available for developing with AR.js. These include:","title":"Tutorials"},{"location":"#location-based","text":"Build your Location-Based Augmented Reality Web App : covers location-based AR.js with A-Frame. Develop a Simple Points Of Interest App (A-Frame version) ( Provided with these docs ): a further location-based A-Frame tutorial, written with AR.js 3.4 in mind. Develop a Simple Points of Interest App (three.js version) ( Provided with these docs ): a pure three.js version of the above, also written for AR.js 3.4.","title":"Location Based"},{"location":"#troubleshooting-feature-requests-community","text":"You can find a lot of help on the old AR.js repositories issues . Please search on open/closed issues, you may find useful information.","title":"Troubleshooting, feature requests, community"},{"location":"#contributing","text":"From opening a bug report to creating a pull request: every contribution is appreciated and welcome. 
If you're planning to implement a new feature or change the API please create an issue first. This way we can ensure that your precious work is not in vain.","title":"Contributing"},{"location":"#issues","text":"If you are having configuration or setup problems, please post a question to StackOverflow . You can also address question to us in our Gitter chatroom If you have discovered a bug or have a feature suggestion, feel free to create an issue on Github.","title":"Issues"},{"location":"#submitting-changes","text":"After getting some feedback, push to your fork and submit a pull request. We may suggest some changes or improvements or alternatives, but for small changes your pull request should be accepted quickly. Some things that will increase the chance that your pull request is accepted: Follow the existing coding style Write a good commit message","title":"Submitting Changes"},{"location":"about/","text":"Aknowledgments This project has been created by @jeromeetienne and it is now maintained by @nicolocarpignoli and the AR.js Org Community. Notes about AR.js 3 release: After months of work, we have changed AR.js for good. The aim was to make it a true, free alternative to paid Web AR solutions. We don't know if we're already there, but now the path is clear, at least. We have worked hard, spent many days and nights\u200a-\u200aobviously, we are coders, what did you expect?\u200a-\u200aand we are now so thrilled to share this achievement with the community. We know that it can be better, we know its limitations, but we would love to share this journey's result. AR.js is now under a Github organisation, that means, more collaborative than ever. It has a new structure, and a lot of new code. And most of all, we've added Image Tracking, what we felt was the missing piece for a true alternative to Web AR. A huge, huge thanks to the wonderful guys who made this possible: Walter Perdan Thorsten Bux Daniel Fernandes misdake hatsumatsu and many more. It was great to built this with all of you.","title":"About"},{"location":"about/#aknowledgments","text":"This project has been created by @jeromeetienne and it is now maintained by @nicolocarpignoli and the AR.js Org Community. Notes about AR.js 3 release: After months of work, we have changed AR.js for good. The aim was to make it a true, free alternative to paid Web AR solutions. We don't know if we're already there, but now the path is clear, at least. We have worked hard, spent many days and nights\u200a-\u200aobviously, we are coders, what did you expect?\u200a-\u200aand we are now so thrilled to share this achievement with the community. We know that it can be better, we know its limitations, but we would love to share this journey's result. AR.js is now under a Github organisation, that means, more collaborative than ever. It has a new structure, and a lot of new code. And most of all, we've added Image Tracking, what we felt was the missing piece for a true alternative to Web AR. A huge, huge thanks to the wonderful guys who made this possible: Walter Perdan Thorsten Bux Daniel Fernandes misdake hatsumatsu and many more. It was great to built this with all of you.","title":"Aknowledgments"},{"location":"image-tracking/","text":"Image Tracking Image Tracking makes possible to scan a picture, a drawing, any image, and show content over it. All the following examples are with A-Frame, for simplicity. You can use three.js if you want. See on the official repository the nft three.js example . 
All A-Frame examples for Image Tracking can be found here . Getting started with Image Tracking Natural Feature Tracking or NFT is a technology that enables the use of images instead of markers like QR Codes or the Hiro marker. The software tracks interesting points in the image and using them, it estimates the position of the camera. These interesting points (aka \"Image Descriptors\") are created using the NFT Marker Creator , a tool available for creating NFT markers. It comes in two versions: the Web version (recommended), and the node.js version . There is also a fork of this project on the AR.js Github organisation, but as for now, Daniel Fernandes version works perfectly. Thanks to Daniel Fernandes for contribution on this docs section. Choose good images If you want to understand the creation of markers in more depth, check out the NFT Marker Creator wiki . It explains also why certain images work way better than others. An important factor is the DPI of the image: a good dpi (300 or more) will give a very good stabilization, while low DPI (like 72) will require the user to stay very still and close to the image, otherwise tracking will lag. Create Image Descriptors Once you have chosen your image, you can either use the NFT Marker Creator in its Web version or the node version. If you're using the node version, this is the basic command to run: node app.js -i After that, you will find the Image Descriptors files on the output folder. In the web version, the generator will automatically download the files from your browser. In either cases, you will end up with three files as Image Descriptors, with .fset , .fset3 , .iset . Each of them will have the same prefix before the file extension. That one will be the Image Descriptor name that you will use on the AR.js web app. For example: with files trex.fset , trex.fset3 and trex.iset , your Image Descriptors name will be trex . Render the content Now it's time to create the actual AR web app.
Loading, please wait...
\" smooth=\"true\" smoothCount=\"10\" smoothTolerance=\".01\" smoothThreshold=\"5\" > \" scale=\"5 5 5\" position=\"50 150 0\" > See on the comments above, inline on the code, for explanations. You can refer to A-Frame docs to know everything about content and customization. You can add geometries, 3D models, videos, images. And you can customize their position, scale, rotation and so on. The only custom component here is the a-nft , the Image Tracking HTML anchor. Here are the attributes for this entity Attribute Description Component Mapping type type of marker - ['nft' only valid value] artoolkitmarker.type url url of the Image Descriptors, without extension artoolkitmarker.descriptorsUrl emitevents emits 'markerFound' and 'markerLost' events - ['true', 'false'] - smooth turn on/off camera smoothing - ['true', 'false'] - default: false - smoothCount number of matrices to smooth tracking over, more = smoother but slower follow - default: 5 - smoothTolerance distance tolerance for smoothing, if smoothThreshold # of matrices are under tolerance, tracking will stay still - default: 0.01 - smoothThreshold threshold for smoothing, will keep still unless enough matrices are over tolerance - default: 2 - size size of the marker in meter artoolkitmarker.size \u26a1\ufe0f It is suggested to use smooth , smoothCount and smoothTolerance because of weak stabilization of content in Image Tracking. Thanks to smoothing, content is way more stable, from 3D models to 2D videos. Event listeners The arjs-nft-loaded event is fired when all NFT Markers have finished loading. This is when you will be able to start tracking your NFT Marker with the camera. You can use this to build a UI to inform the user that things are still loading. Usage window.addEventListener(\"arjs-nft-loaded\", (event) => { // Hide loading overlay });","title":"Image Tracking"},{"location":"image-tracking/#image-tracking","text":"Image Tracking makes possible to scan a picture, a drawing, any image, and show content over it. All the following examples are with A-Frame, for simplicity. You can use three.js if you want. See on the official repository the nft three.js example . All A-Frame examples for Image Tracking can be found here .","title":"Image Tracking"},{"location":"image-tracking/#getting-started-with-image-tracking","text":"Natural Feature Tracking or NFT is a technology that enables the use of images instead of markers like QR Codes or the Hiro marker. The software tracks interesting points in the image and using them, it estimates the position of the camera. These interesting points (aka \"Image Descriptors\") are created using the NFT Marker Creator , a tool available for creating NFT markers. It comes in two versions: the Web version (recommended), and the node.js version . There is also a fork of this project on the AR.js Github organisation, but as for now, Daniel Fernandes version works perfectly. Thanks to Daniel Fernandes for contribution on this docs section.","title":"Getting started with Image Tracking"},{"location":"image-tracking/#choose-good-images","text":"If you want to understand the creation of markers in more depth, check out the NFT Marker Creator wiki . It explains also why certain images work way better than others. 
An important factor is the DPI of the image: a good dpi (300 or more) will give a very good stabilization, while low DPI (like 72) will require the user to stay very still and close to the image, otherwise tracking will lag.","title":"Choose good images"},{"location":"image-tracking/#create-image-descriptors","text":"Once you have chosen your image, you can either use the NFT Marker Creator in its Web version or the node version. If you're using the node version, this is the basic command to run: node app.js -i After that, you will find the Image Descriptors files on the output folder. In the web version, the generator will automatically download the files from your browser. In either cases, you will end up with three files as Image Descriptors, with .fset , .fset3 , .iset . Each of them will have the same prefix before the file extension. That one will be the Image Descriptor name that you will use on the AR.js web app. For example: with files trex.fset , trex.fset3 and trex.iset , your Image Descriptors name will be trex .","title":"Create Image Descriptors"},{"location":"image-tracking/#render-the-content","text":"Now it's time to create the actual AR web app.
Loading, please wait...
\" smooth=\"true\" smoothCount=\"10\" smoothTolerance=\".01\" smoothThreshold=\"5\" > \" scale=\"5 5 5\" position=\"50 150 0\" > See on the comments above, inline on the code, for explanations. You can refer to A-Frame docs to know everything about content and customization. You can add geometries, 3D models, videos, images. And you can customize their position, scale, rotation and so on. The only custom component here is the a-nft , the Image Tracking HTML anchor.","title":"Render the content"},{"location":"image-tracking/#a-nft","text":"Here are the attributes for this entity Attribute Description Component Mapping type type of marker - ['nft' only valid value] artoolkitmarker.type url url of the Image Descriptors, without extension artoolkitmarker.descriptorsUrl emitevents emits 'markerFound' and 'markerLost' events - ['true', 'false'] - smooth turn on/off camera smoothing - ['true', 'false'] - default: false - smoothCount number of matrices to smooth tracking over, more = smoother but slower follow - default: 5 - smoothTolerance distance tolerance for smoothing, if smoothThreshold # of matrices are under tolerance, tracking will stay still - default: 0.01 - smoothThreshold threshold for smoothing, will keep still unless enough matrices are over tolerance - default: 2 - size size of the marker in meter artoolkitmarker.size \u26a1\ufe0f It is suggested to use smooth , smoothCount and smoothTolerance because of weak stabilization of content in Image Tracking. Thanks to smoothing, content is way more stable, from 3D models to 2D videos.","title":"<a-nft\\>"},{"location":"image-tracking/#event-listeners","text":"The arjs-nft-loaded event is fired when all NFT Markers have finished loading. This is when you will be able to start tracking your NFT Marker with the camera. You can use this to build a UI to inform the user that things are still loading.","title":"Event listeners"},{"location":"image-tracking/#usage","text":"window.addEventListener(\"arjs-nft-loaded\", (event) => { // Hide loading overlay });","title":"Usage"},{"location":"location-based/","text":"Location Based This article gives you a first glance to Location Based on AR.js. It can be used for indoor (but with low precision) and outdoor geopositioning of AR content. You can load places statically, from HTML or from Javascript, or you can load your data from local/remote json, or even through API calls. Choice is yours. On the article above there are all the options explained, as tutorials. Location Based has been implemented for both three.js and A-Frame. Each of these is documented below. This document is intended as reference documentation. There are also two tutorials available, with full example code: A-Frame location based three.js location based A-Frame AR.js offers A-Frame components to implement location-based AR. There are three variants of the components, detailed as below: The new-location-based components. These have been available since AR.js 3.4.0, incorporate various bug fixes, use simpler code, and provide a thin wrapper round the three.js API shown below. These are recommended for most uses, though do not support all the events of the older components due to a different internal implementation. Nonetheless they the components likely to see further development - the older variants are unlikely to see further work besides bug fixes. The projected components. 
These have been available since AR.js 3.3.1, use largely the same internal implementation as the classic components, and were the first to offer projection of latitude/longitude into Spherical Mercator, discussed below. The classic components, available before AR.js 3.3.1. These are similar to the projected components but do not offer the facility to convert between latitude/longitude and the projected coordinates used for augmented reality, which can cause problems for more specialist uses such as showing roads and paths in augmented reality. The components Each variant above includes two components, a camera component which enables the location-based AR, and an entity-place component which enables setting components' latitude and longitude. The exact component names for each variant are shown below. Component variant Camera component Entity-place component new-location-based gps-new-camera gps-new-entity-place projected gps-projected-camera gps-projected-entity-place classic gps-camera gps-entity-place Camera component ( gps-new-camera , gps-projected-camera or gps-camera ) Required : yes Max allowed per scene : 1 This component enables the Location AR. It has to be added to the camera entity. It makes possible to handle both position and rotation of the camera and it's used to determine where the user is pointing their device. For example: Properties Property Description Default Value Availability positionMinAccuracy Minimum accuracy allowed for position signal 100 all gpsMinDistance Setting this allows you to control how far the camera must move, in meters, to generate a GPS update event. Useful to prevent 'jumping' of augmented content due to frequent small changes in position. 5 all simulateLatitude Setting this allows you to simulate the latitude of the camera, to aid in testing. 0 (disabled) all (but only triggers GPS update event in new-location-based) simulateLongitude Setting this allows you to simulate the longitude of the camera, to aid in testing. 0 (disabled) all (but only triggers GPS update event in new-location-based) simulateAltitude Setting this allows you to simulate the altitude of the camera in meters above sea level, to aid in testing. 0 (disabled) all alert Whether to show a message when GPS signal is under the positionMinAccuracy false projected, classic minDistance If set, places with a distance from the user lower than this value, are not shown. Only a positive value is allowed. Value is in meters. In the new-location-based components, please set the near clipping plane of the perspective camera. 0 (disabled) projected, classic maxDistance If set, places with a distance from the user higher than this value, are not shown. Only a positive value is allowed. Value is in meters. In the new-location-based components, please set the far clipping plane of the perspective camera. 0 (disabled) projected, classic gpsTimeInterval Setting this allows you to control how frequently to obtain a new GPS position. If a previous GPS location is cached, the cached position will be used rather than a new position if its 'age' is less than this value, in milliseconds. This parameter is passed directly to the Geolocation API's watchPosition() method. 0 (always use new position, not cached) all Entity-place component ( gps-new-entity-place , gps-projected-entity-place or gps-entity-place ) Required : yes Max allowed per scene : no limit This component makes each entity GPS-trackable. 
This assigns a specific world position to an entity, so that the user can see it when their device is pointing to its position in the real world. If the user is far from the entity, it will seem smaller. If it's too far away, it won't be seen at all. It requires latitude and longitude as a single string parameter (example with a-box aframe primitive): ; longitude: \"/> \u26a1\ufe0f In addition, you can use the a-frame \"position\" parameter to assign a y-value to change the height of the content. This value should be entered as meters above or below (if negative) the current camera height. For example, this would assign a height of 30 meters, and will be displayed relative to the gps-new-camera's current height: ; longitude: \" position=\"0 30 0\"/> Properties distance : current distance from the camera, in metres. Available in gps-new-entity-place only: for the classic components, please use events to obtain the current distance. Events Take a look at the UI and Events page for Location Based Custom Events. \u26a1\ufe0f Usually, in Location Based, it's nice to have the augmented content that will always face the user, so when you rotate the camera, 3D models or most of all, text, are well visible. Look at this example in order to create gps-new-entity-place entities that will always face the user (camera). Viewing every distant object If your location-based AR content is distant from the user (around 1km or more), it is recommended to use the new arjs-webcam-texture component (introduced in AR.js 3.2.0), which uses a three.js texture to stream the camera feed and allows distant content to be viewed. This component is automatically injected if the videoTexture parameter of the arjs system is set to true and the sourceType is webcam . For example (code snippet only): Reducing shaking effects In location-based mode, 'shaking' effects can occur due to frequent small changes in the device's orientation, due to the high sensitivity of the device sensors such as the accelerometer. If using AR.js 3.3.1 or greater (3.4.3 or greater for the new-location-based components), this can optionally be reduced using an exponential smoothing technique. Note that, if you are NOT using the new-location-based components, there are currently some occasional display artefacts with this if moving the device quickly or suddenly so please test before you enable it in a finished application; work to resolve these is on-going. Alternatively, please use the new-location-based components. This is enabled by adding a custom look-controls component to your a-camera with a smoothingFactor property. This replaces A-Frames default look-controls component, which must be disabled. The name of the custom look-controls component varies, depending on which version of the location-based components you are using: for new-location-based , use arjs-device-orientation-controls ; for the classic and projected components, use arjs-look-controls . For example, in the new-location-based components: or, otherwise: Exponential smoothing works by applying a smoothing factor to each newly-read device rotation angle (obtained from sensor readings) such that the previous smoothed value counts more than the current value, thus reducing 'noise' and 'jitter'. If k is the smoothing factor: smoothedAngle = k * newValue + (1 - k) * previousSmoothedAngle It can be seen from this that the smaller the value of k (the smoothingFactor property), the greater the smoothing effect. In tests, 0.1 appears to give the best result. 
You can also reduce 'jumping' of augmented content when near a place - a bad-looking effect due to GPS sensor's low precision. To do so you can use the gpsMinDistance property, as shown in the examples above. This will only update the position if the user has moved at least that number of metres. Projection Details The new-location-based and projected location-based components for AR.js uses Spherical Mercator (aka EPSG:3857) to store both the camera position and the position of added points of interest and other geographical data. Spherical Mercator is the same projection used by Google Maps and projects the earth onto a flat surface. It works reasonably at most latitudes but is highly distorted near the poles. Latitude and longitude is projected into Spherical Mercator eastings and northings , which are approximately (but not exactly) equivalent to metres. The rationale for this is to allow easy addition of more complex geographic data such as roads and paths. Such data can be projected and added to an AR.js scene, and then, because Spherical Mercator units approximate to metres (away from the poles), the coordinates can be used directly as WebGL/A-Frame world coordinates. Calculating world coordinates of arbitrary augmented content The new-location-based and projected components have some useful properties and methods which can be used to easily work with more specialist augmented content (for example, you might want to overlay AR polylines or polygons representing roads and paths, downloaded from geodata APIs such as OpenStreetMap ). Such data can be downloaded from the API as lat/lon based coordinates, projected using AR.js API methods into Spherical Mercator (approximating to, but not exactly metres, but in tests good enough to use as world coordinates), and then added to the scene as a three.js object. This is implemented differently in the new-location-based and projected components, but the external API is (as of 3.4.3) the same. The key method is the latLonToWorld(lat, lon) method of the gps-new-camera and gps-projected-camera components. This converts latitude and longitude directly to world coordinates, performing the projection as the first step and then calculating the world coordinates from the projected coordinates. It will return a 2-member array containing the x and z world coordinates, allowing the developer to calculate or specify the y coordinate (altitude) independently. Note that the sign of the Spherical Mercator northing is reversed to align with the OpenGL coordinate system (eastings are equivalent to x coordinates and northings to z coordinates). gps-new-camera implements projection via the underlying AR.js three.js LocationBased object (see three.js documentation, below) which is responsible for the actual projection. gps-projected-camera provides similar functionality but via a different method and with some implementation differences. In gps-projected-camera , unlike gps-new-camera , the original GPS position is set as the world origin. three.js The three.js API keeps track of your current GPS location (or allows you to set a fake location) and allows you to add three.js objects at a given latitude and longitude. It includes these classes: THREEx.LocationBased - general manager class for the three.js location-based API. THREEx.WebcamRenderer - renders the feed from the webcam as a WebGL texture. THREEx.DeviceOrientationControls - for detecting changes in the orientation of the device. 
These classes include the following methods: LocationBased constructor(scene, camera, options={}) : Initialises a new LocationBased object. Takes a THREE.Scene and a THREE.Camera object as parameters, as well as an object of GPS options (see setGpsOptions() , below) setProjection(proj) : allows the projection to be defined. By default Spherical Mercator is used. The projection object must provide a project() method which takes longitude and latitude as parameters and returns a 2-member array of projected coordinates (easting, northing). setGpsOptions(options={}) : sets the GPS options. These include gpsMinDistance and gpsMinAccuracy , described in the A-Frame documentation above. startGps() : starts the GPS. Takes an optional maximumAge , as used by the native Geolocation API. stopGps() : stops the GPS. fakeGps(lon, lat, elev=null, acc=0) : fakes a GPS position being received. Elevation and accuracy can optionally be provided. lonLatToWorldCoords(lon, lat) : projects a given longitude and latitude into world coordinates using the current projection. The sign of the northing is reversed to align it with the OpenGL coordinate system. add(object, lon, lat, elev) : adds a given three.js object to the world at the given longitude and latitude and at the given elevation. setWorldPosition(object, lon, lat, elev) : changes the world position of a given object to the given longitude and latitude, without adding it to the scene. setElevation(elev) : sets the current elevation in metres. This will set the camera's y coordinate to that elevation. on(eventname, eventhandler) : allows event handlers to be specified. Currently gpsupdate and gpserror handlers are supported, for receiving a new GPS position and GPS errors (as in the Geolocation API) respectively. WebcamRenderer Renders the webcam feed. constructor(renderer, videoElementSelector) : creates a WebcamRenderer . Takes a THREE.WebGLRenderer plus a selector for an HTML video element to stream the feed to. update() : updates the camera feed. Should be done each time the scene is rendered. DeviceOrientationControls Represents the device orientation controls, i.e. accelerometer and magnetic field sensors, for determining the orientation of the device. Based on the sample included in the three.js distribution. constructor(cameraObject) : creates a DeviceOrientationControls object. Takes a three.js camera. update() : updates the device orientation controls. Should be done each time the scene is rendered. Using three.js location-based in an application You are recommended to use npm to install AR.js, import it into your application, and use a bundler such as Webpack to build. Here is a sample package.json : { \"dependencies\": { \"@ar-js-org/ar.js\": \"3.4.5\", }, \"devDependencies\": { \"webpack\": \"^5.75.0\", \"webpack-cli\": \"^5.0.0\" }, \"scripts\": { \"build\": \"npx webpack\" } } and a sample webpack.config.js : const path = require('path'); module.exports = { mode: 'development', entry: './index.js', output: { path: path.resolve(__dirname, 'dist'), filename: 'bundle.js' }, optimization: { minimize: false } }; This will build a bundle named bundle.js in the dist subdirectory from a source file index.js . Here is an example of importing the components into an application: import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'","title":"Location Based"},{"location":"location-based/#location-based","text":"This article gives you a first glance to Location Based on AR.js. 
Marker Based

Markers can be of three different types: Hiro, Barcode, Pattern. To learn more about markers, please read these articles: AR.js basic Marker Based tutorial and Markers explanation, and Deliver AR.js experiences using only QRCodes (Markers inside QRCodes).

TL;DR

Hiro Marker is the default one, not very useful actually.
Barcode markers are auto-generated markers, from matrix computations. Learn more in the above articles on how to use them. If you need the full list of barcode markers, here it is.
Pattern markers are custom ones, created starting from an image (very simple, high contrast), loaded by the user.

⚡️ You can create your Pattern Markers with this tool. It will generate an image to scan and a .patt file, to be loaded on the AR.js web app, in order for it to recognise the marker when running.

How to choose good images for Pattern Markers

Markers have a black border and high contrast shapes. Lately, we have also added white border markers with black background, although the classic ones, with black border, behave better. Here's an article explaining all good practices on how to choose good images to be used to generate custom markers: 10 tips to enhance your AR.js app.
API Reference for Marker Based

A-Frame

<a-marker/>

Here are the attributes for this entity:

type : type of marker - ['pattern', 'barcode', 'unknown'] (component mapping: artoolkitmarker.type)
size : size of the marker in meters (artoolkitmarker.size)
url : url of the pattern - only if type='pattern' (artoolkitmarker.patternUrl)
value : value of the barcode - only if type='barcode' (artoolkitmarker.barcodeValue)
preset : parameters preset - ['hiro', 'kanji'] (artoolkitmarker.preset)
emitevents : emits 'markerFound' and 'markerLost' events - ['true', 'false']
smooth : turn on/off camera smoothing - ['true', 'false'] - default: false
smooth-count : number of matrices to smooth tracking over, more = smoother but slower follow - default: 5
smooth-tolerance : distance tolerance for smoothing; if smoothThreshold # of matrices are under tolerance, tracking will stay still - default: 0.01
smooth-threshold : threshold for smoothing, will keep still unless enough matrices are over tolerance - default: 2

three.js

threex-artoolkit

threex.artoolkit is the three.js extension to easily handle artoolkit.

Architecture

threex.artoolkit is composed of 3 classes:

THREEx.ArToolkitSource : the image which is analyzed to do the position tracking. It can be the webcam, a video or even an image.
THREEx.ArToolkitContext : the main engine. It will actually find the marker position in the image source.
THREEx.ArMarkerControls : controls the position of the marker. It uses the classical three.js controls API. It will make sure to position your content right on top of the marker.

THREEx.ArMarkerControls

var parameters = {
    // size of the marker in meter
    size: 1,
    // type of marker - ['pattern', 'barcode', 'unknown']
    type: "unknown",
    // url of the pattern - only if type='pattern'
    patternUrl: null,
    // value of the barcode - only if type='barcode'
    barcodeValue: null,
    // change matrix mode - [modelViewMatrix, cameraTransformMatrix]
    changeMatrixMode: "modelViewMatrix",
    // turn on/off camera smoothing
    smooth: true,
    // number of matrices to smooth tracking over, more = smoother but slower follow
    smoothCount: 5,
    // distance tolerance for smoothing, if smoothThreshold # of matrices are under tolerance, tracking will stay still
    smoothTolerance: 0.01,
    // threshold for smoothing, will keep still unless enough matrices are over tolerance
    smoothThreshold: 2
};

THREEx.ArToolkitContext

var parameters = {
    // true if one should display the artoolkit debug canvas, false otherwise
    debug: false,
    // the mode of detection - ['color', 'color_and_matrix', 'mono', 'mono_and_matrix']
    detectionMode: 'color_and_matrix',
    // type of matrix code - valid only if detectionMode ends with 'matrix' - [3x3, 3x3_HAMMING63, 3x3_PARITY65, 4x4, 4x4_BCH_13_9_3, 4x4_BCH_13_5_5]
    matrixCodeType: '3x3',
    // pattern ratio for custom markers
    patternRatio: 0.5,
    // labeling mode for markers - ['black_region', 'white_region']
    // black_region: black-bordered markers on a white background; white_region: white-bordered markers on a black background
    labelingMode: 'black_region',
    // url of the camera parameters
    cameraParametersUrl: THREEx.ArToolkitContext.baseURL + '../data/data/camera_para.dat',
    // tune the maximum rate of pose detection in the source image
    maxDetectionRate: 60,
    // resolution at which we detect pose in the source image
    canvasWidth: 640,
    canvasHeight: 480,
    // enable image smoothing or not for canvas copy - default to true
    // https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/imageSmoothingEnabled
    imageSmoothingEnabled: true
};
THREEx.ArToolkitSource

var parameters = {
    // type of source - ['webcam', 'image', 'video']
    sourceType: "webcam",
    // url of the source - valid if sourceType = image|video
    sourceUrl: null,
    // resolution at which we initialize the source image
    sourceWidth: 640,
    sourceHeight: 480,
    // resolution displayed for the source
    displayWidth: 640,
    displayHeight: 480
};
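To see how these three classes fit together, here is a hedged sketch of a minimal marker-based three.js setup, following the common threex example pattern; the camera parameter file path and the pattern URL are assumptions and will vary per project:

const scene = new THREE.Scene();
const camera = new THREE.Camera(); // artoolkit supplies the projection matrix
scene.add(camera);
const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(640, 480);
document.body.appendChild(renderer.domElement);

// the image source analysed for tracking
const arToolkitSource = new THREEx.ArToolkitSource({ sourceType: 'webcam' });
arToolkitSource.init(() => { /* webcam ready */ });

// the engine which finds markers in the source
const arToolkitContext = new THREEx.ArToolkitContext({
    cameraParametersUrl: 'data/camera_para.dat', // assumption: adjust to your setup
    detectionMode: 'mono'
});
arToolkitContext.init(() => {
    // copy the projection matrix computed by artoolkit into our camera
    camera.projectionMatrix.copy(arToolkitContext.getProjectionMatrix());
});

// a group whose pose is driven by the marker
const markerRoot = new THREE.Group();
scene.add(markerRoot);
new THREEx.ArMarkerControls(arToolkitContext, markerRoot, {
    type: 'pattern',
    patternUrl: 'data/patt.hiro' // assumption: path to your .patt file
});
markerRoot.add(new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshNormalMaterial()
));

requestAnimationFrame(function animate() {
    requestAnimationFrame(animate);
    if (arToolkitSource.ready) {
        arToolkitContext.update(arToolkitSource.domElement);
    }
    renderer.render(scene, camera);
});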
UI and Custom Events

To make an AR.js based Web App look better and add UI capabilities, it's possible to treat it as a common website. Here you will learn how to use the Raycaster, Custom Events and interaction with overlayed DOM elements.

Handle clicks on AR content

It's now possible to use AR.js (marker based or image tracking) with the latest A-Frame versions (1.0.0 and above) in order to have touch gestures to zoom and rotate your content! Disclaimer: this will work for your entire a-scene, so it's not a real option if you have to handle different interactions for multiple markers. It will work like a charm if you have one marker/image per scene. Check Fabio Cortès' great walkthrough in order to add this feature on your AR.js web app. A minimal click-handling setup is sketched below.
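The sketch uses A-Frame's standard cursor and raycaster components rather than anything AR.js-specific; the component name clickhandler and the scene contents are illustrative:

<script>
  // Illustrative component; the name "clickhandler" is just a reference
  AFRAME.registerComponent('clickhandler', {
    init: function() {
      this.el.addEventListener('click', () => {
        alert('AR content clicked!');
      });
    }
  });
</script>
<a-scene embedded arjs cursor="rayOrigin: mouse" raycaster="objects: [clickhandler]">
  <a-marker preset="hiro">
    <a-box clickhandler material="color: red"></a-box>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>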
You can use this exact approach for Image Tracking a-nft and Marker Based a-entity elements. The clickhandler name can be customized; you can choose the one you like most, it's just a reference. Keep in mind that this click/touch interaction is not handled by AR.js at all, it is all A-Frame based. Always look at the A-Frame documentation for more details. Check out the tutorial.

Interaction with Overlayed DOM content

You can add interactions by adding DOM HTML elements on the body. For example, starting from this example, we can add the following on the body, outside the a-scene:
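A sketch of such an overlay - the id and label are illustrative, not part of the original example:

<div class="buttons">
  <button id="green-button">Make the box green</button>
</div>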
Then, we need to add some CSS to absolutely position the DIV and BUTTON, and also some scripting to listen for click events. You can customize your a-scene or content, like 3D models, play video, and so on. See the A-Frame Docs on how to change entity properties and work with events: https://aframe.io/docs/1.0.0/introduction/javascript-events-dom-apis.html. We will end up with something like the following code:
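This is a hedged reconstruction of what the styling and scripting can look like; the selectors and the colour change are illustrative:

<style>
  .buttons {
    position: absolute;
    top: 10px;
    left: 10px;
    z-index: 10; /* keep the overlay above the AR canvas */
  }
</style>
<script>
  window.addEventListener('load', () => {
    document.querySelector('#green-button').addEventListener('click', () => {
      // change any entity property on click, e.g. the box colour
      document.querySelector('a-box').setAttribute('material', { color: 'green' });
    });
  });
</script>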
Custom Events

AR.js dispatches several Custom Events. Some of them are general, others are specific to one AR feature. Here's the full list.

arjs-video-loaded : fired when the camera video stream has been appended to the DOM. Payload: { detail: { component: }}. Source file: threex-artoolkitsource.js. Feature: all.
camera-error : fired when the camera video stream could not be retrieved. Payload: { error: }. Source file: threex-artoolkitsource.js. Feature: all.
camera-init : fired when the camera video stream has been retrieved correctly. Payload: { stream: }. Source file: threex-artoolkitsource.js. Feature: all.
markerFound : fired when a marker in Marker Based, or a picture in Image Tracking, has been found. Source file: component-anchor.js. Feature: Image Tracking and Marker Based only.
markerLost : fired when a marker in Marker Based, or a picture in Image Tracking, has been lost. Source file: component-anchor.js. Feature: Image Tracking and Marker Based only.
arjs-nft-loaded : fired when an NFT marker is fully loaded. Source file: threex-armarkercontrols-nft-start.js. Feature: Image Tracking only.
gps-camera-update-position : fired when gps-camera has updated its position. Payload: { detail: { position: , origin: }}. Source file: gps-camera.js. Feature: Location Based only.
gps-entity-place-update-positon : fired when gps-entity-place has updated its position. Payload: { detail: { distance: }}. Source file: gps-entity-place.js. Feature: classic and projected Location Based only.
gps-entity-place-added : fired when the gps-entity-place has been added. Payload: { detail: { component: }}. Source file: gps-entity-place.js. Feature: classic and projected Location Based only.
gps-camera-origin-coord-set : fired when the origin coordinates are set. Source file: gps-camera.js. Feature: classic and projected Location Based only.
gps-entity-place-loaded : fired when the gps-entity-place has been loaded - see the 'loaded' event of A-Frame entities. Payload: { detail: { component: }}. Source file: gps-entity-place.js. Feature: classic and projected Location Based only.

Internal Loading Events

⚡️ Both Image Tracking and Location Based automatically handle an internal event when the origin location has been set, or when Image Tracking (Image Descriptors) are fully loaded, and automatically remove from the DOM any elements that match the .arjs-loader selector. You can add any custom loader that will be removed in the above situations; just use the .arjs-loader class on it.

Trigger actions when image has been found

You can trigger any action you want when the marker/image has been found. You can avoid linking content to a marker/image and only trigger an action (like a redirect to an external website) when the anchor has been found by the camera. For example, the loader element could look like this:
<div class="arjs-loader">
  <div>Loading, please wait...</div>
</div>

Trigger action when marker has been found

You can listen for the markerFound event and run whatever code you like when it fires, as sketched below.
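A minimal sketch - it assumes a marker with emitevents="true" (e.g. <a-marker preset="hiro" emitevents="true">) in the scene, and the target URL is illustrative:

<script>
  window.addEventListener('load', () => {
    const marker = document.querySelector('a-marker');
    marker.addEventListener('markerFound', () => {
      // run any action you like, e.g. a redirect to an external site
      window.location.href = 'https://example.com/';
    });
  });
</script>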
Get distance from marker

// import this on your HTML
window.addEventListener('load', () => {
    const camera = document.querySelector('[camera]');
    const marker = document.querySelector('a-marker');
    let check;

    marker.addEventListener('markerFound', () => {
        let cameraPosition = camera.object3D.position;
        let markerPosition = marker.object3D.position;
        let distance = cameraPosition.distanceTo(markerPosition);

        check = setInterval(() => {
            cameraPosition = camera.object3D.position;
            markerPosition = marker.object3D.position;
            distance = cameraPosition.distanceTo(markerPosition);
            // do what you want with the distance:
            console.log(distance);
        }, 100);
    });

    marker.addEventListener('markerLost', () => {
        clearInterval(check);
    });
});
AR.js A-Frame Location-Based Tutorial - Develop a Simple Points of Interest App

Introduction

This tutorial (updated for AR.js 3.4) aims to take you from a basic location-based AR.js example all the way to a working, simple points of interest app. We will start with an HTML-only example and gradually add JavaScript to make our app more sophisticated. It is expected that you have some basic A-Frame experience. Do note that this code will not work on Firefox on a mobile device due to limitations of the device orientation API; absolute orientation cannot be obtained. Chrome on Android is recommended.

Basic example

We will start with a basic example, using pure HTML, to display a box close to your location. This example is identical to the location-based example on the index page; a minimal version of its markup is:

<!DOCTYPE html>
<html>
  <head>
    <title>AR.js A-Frame Location-based</title>
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script type='text/javascript' src='https://raw.githack.com/AR-js-org/AR.js/3.4.5/three.js/build/ar-threex-location-only.js'></script>
    <script type='text/javascript' src='https://raw.githack.com/AR-js-org/AR.js/3.4.5/aframe/build/aframe-ar.js'></script>
  </head>
  <body>
    <a-scene arjs='sourceType: webcam; videoTexture: true'>
      <a-camera gps-new-camera></a-camera>
      <a-box gps-new-entity-place="latitude: your-lat; longitude: your-lon" material="color: red" scale="10 10 10"></a-box>
    </a-scene>
  </body>
</html>

Upload this to a server with HTTPS, or run locally on localhost. Make sure you replace your-lat and your-lon with values close to your actual position (to see the box clearly, I would recommend an offset of around 0.001 degrees in any direction for both the latitude and longitude).

How does this work?

The arjs component of our a-scene initialises AR.js. Note the properties we are setting: we set the sourceType to webcam for obvious reasons but also set videoTexture to true. This is vital in an outdoor location-based AR app as it allows distant augmented content - such as the peaks we are going to eventually visualise - to be seen. (It does this by using a three.js texture for the camera feed which can be easily combined with our augmented content.)

Note the gps-new-camera component on our a-camera. This is the AR.js component which automatically converts latitudes and longitudes into 3D world coordinates, allowing us to use latitude and longitude, rather than world coordinates, when adding places. Note that we are using gps-new-camera, not gps-camera. The gps-new-camera component includes some bugfixes and makes it easy for us to work with arbitrary geographical data provided by a server, as internally it uses the Spherical Mercator projection to represent the augmented content's world coordinates. Spherical Mercator units are commonly used to represent mapping data and are almost (but not quite) equivalent to metres. Away from the polar regions, though, it's good enough to use for AR.

We then create an a-box primitive. This is the augmented content that we want to display.
Ordinarily, in A-Frame, you would give this a position in world coordinates. However, AR.js, and specifically the gps-new-entity-place component, allows us to position it using latitude and longitude. We can position any A-Frame entity at a given latitude and longitude using gps-new-entity-place.

Things to try

Change the a-box to some other kind of A-Frame primitive, such as an a-sphere or a-cylinder. Does it still work?
Try adding multiple objects with different colours at different locations.
Try adding a text primitive at a nearby latitude and longitude. You will need to use the A-Frame look-at component to ensure the text always faces the camera.
Try giving your objects an elevation. This can be done by setting the y coordinate of the position property of each object to a given height (in metres) and setting the x and z coordinates to 0. Having done that, try giving the camera an elevation by similarly setting its position property, and look at the effect this has on where the objects appear.

Introducing JavaScript with AR.js

Much of the power of A-Frame, and AR.js, comes from adding scripting to your basic applications. It is assumed that you already know the basics of how to create components in A-Frame. We will start with a very basic example, which simply retrieves your current GPS location and adds a red box immediately to the north. Create this JavaScript, basic.js, and link it to the HTML example shown above. (Remove the hard-coded red box from the HTML first.)

window.onload = () => {
    let testEntityAdded = false;
    const el = document.querySelector("[gps-new-camera]");
    el.addEventListener("gps-camera-update-position", e => {
        if(!testEntityAdded) {
            alert(`Got first GPS position: lon ${e.detail.position.longitude} lat ${e.detail.position.latitude}`);
            // Add a box to the north of the initial GPS position
            const entity = document.createElement("a-box");
            entity.setAttribute("scale", { x: 20, y: 20, z: 20 });
            entity.setAttribute('material', { color: 'red' });
            entity.setAttribute('gps-new-entity-place', {
                latitude: e.detail.position.latitude + 0.001,
                longitude: e.detail.position.longitude
            });
            document.querySelector("a-scene").appendChild(entity);
        }
        testEntityAdded = true;
    });
};

How is this working?

We set up an onload function to run when the page loads. With A-Frame, we can only use entities once they have been loaded into the DOM, so we must delay the execution of the code until the page loads.
Using the normal DOM API, we use document.querySelector() to obtain the entity with the gps-new-camera component attached to it (which will be your a-camera).
We then handle the gps-camera-update-position event. This event is emitted by the camera entity when we receive a new GPS location. This allows us to write code which runs every time we get a new GPS position, such as downloading new POI data from a server. We can retrieve the new location via the e.detail.position object, which has longitude and latitude properties.
In this example, we check that we have not already added our entity (via the testEntityAdded boolean), display the location to the user, and then create a new entity dynamically, and specify its scale and colour using standard DOM/A-Frame techniques.
We then dynamically add a gps-new-entity-place component to the entity, with the latitude set to the GPS latitude plus 0.001 degrees (so it will appear a short distance to the north) and the longitude set to the current GPS longitude.
Finally we add the entity to the scene using the standard DOM appendChild() method.
Things to try

Add three more entities to the scene, close to the original GPS position:

a yellow sphere 0.001 degrees to the east;
an orange cylinder 0.001 degrees to the south;
a magenta cone 0.001 degrees to the west.

Connecting to a web server

We will now enhance the example to download data from a web server. The server used will be the Hikar server, used by the Hikar project:

https://hikar.org/webapp/map?bbox=west,south,east,north&layers=poi&outProj=4326

This provides OpenStreetMap data for Europe and Turkey (apologies, other parts of the world are not covered due to server constraints). Note how we specify the bounding box with the bbox parameter.

window.onload = () => {
    let downloaded = false;
    const el = document.querySelector("[gps-new-camera]");
    el.addEventListener("gps-camera-update-position", async(e) => {
        if(!downloaded) {
            const west = e.detail.position.longitude - 0.01,
                east = e.detail.position.longitude + 0.01,
                south = e.detail.position.latitude - 0.01,
                north = e.detail.position.latitude + 0.01;
            const response = await fetch(`https://hikar.org/webapp/map?bbox=${west},${south},${east},${north}&layers=poi&outProj=4326`);
            const pois = await response.json();
            pois.features.forEach ( feature => {
                const entity = document.createElement("a-box");
                entity.setAttribute("scale", { x: 20, y: 20, z: 20 });
                entity.setAttribute('material', { color: 'red' });
                entity.setAttribute('gps-new-entity-place', {
                    latitude: feature.geometry.coordinates[1],
                    longitude: feature.geometry.coordinates[0]
                });
                document.querySelector("a-scene").appendChild(entity);
            });
        }
        downloaded = true;
    });
};

Much of the logic is similar to the previous example, but note that we now send a request to the web server via the fetch API, sending a bounding box surrounding the current position. The server sends back GeoJSON. GeoJSON contains a features array containing each point of interest, and each feature includes a geometry object containing the latitude and longitude within a two-member coordinates array. So we loop through each feature, dynamically create an entity (as in the previous example) from the current feature, use the latitude and longitude from the GeoJSON to create the gps-new-entity-place component, and add it to the scene.

Things to try

Try requesting the Hikar URL directly in your browser, supplying a bounding box representing an area you are familiar with, and explore the format used for points of interest of different types. Each GeoJSON feature object has a properties object containing properties describing the point of interest. The amenity property is commonly used: this describes the type of amenity (such as restaurant, cafe, pub, etc).
Try colouring the boxes differently depending on point of interest type (e.g. restaurants, cafes, pubs, etc).

Adding text labels

The next example shows how you can add text labels to your POIs.
window.onload = () => {
    let downloaded = false;
    const el = document.querySelector("[gps-new-camera]");
    el.addEventListener("gps-camera-update-position", async(e) => {
        if(!downloaded) {
            const west = e.detail.position.longitude - 0.05,
                east = e.detail.position.longitude + 0.05,
                south = e.detail.position.latitude - 0.05,
                north = e.detail.position.latitude + 0.05;
            console.log(`${west} ${south} ${east} ${north}`);
            const response = await fetch(`https://hikar.org/webapp/map?bbox=${west},${south},${east},${north}&layers=poi&outProj=4326`);
            const pois = await response.json();
            pois.features.forEach ( feature => {
                const compoundEntity = document.createElement("a-entity");
                compoundEntity.setAttribute('gps-new-entity-place', {
                    latitude: feature.geometry.coordinates[1],
                    longitude: feature.geometry.coordinates[0]
                });
                const box = document.createElement("a-box");
                box.setAttribute("scale", { x: 20, y: 20, z: 20 });
                box.setAttribute('material', { color: 'red' });
                box.setAttribute("position", { x: 0, y: 20, z: 0 });
                const text = document.createElement("a-text");
                const textScale = 100;
                text.setAttribute("look-at", "[gps-new-camera]");
                text.setAttribute("scale", { x: textScale, y: textScale, z: textScale });
                text.setAttribute("value", feature.properties.name);
                text.setAttribute("align", "center");
                compoundEntity.appendChild(box);
                compoundEntity.appendChild(text);
                document.querySelector("a-scene").appendChild(compoundEntity);
            });
        }
        downloaded = true;
    });
};

How is this working?

We now create a compound entity. In A-Frame, a compound entity is an entity which has other entities as children. Here, we will create a compound entity, position it at the POI's latitude and longitude, and add the box, and a new text entity containing the POI name, to it.
We create the box as before, and set its y coordinate to 20. This is relative to its parent, i.e. the compound entity. The compound entity is already positioned at the correct latitude and longitude, so we will position the box 20 metres above that position.
We then create a text entity, scale it appropriately, and set its value attribute to the name of the feature from the GeoJSON.
Note the use of the look-at component. This makes a given A-Frame entity look at another. Here we want the text to look at the camera (i.e. the entity with a gps-new-camera property), so we always see it. The look-at component is a third-party component and must be added to your HTML with a script tag, e.g. via its unpkg build (version shown is an assumption):

<script src="https://unpkg.com/aframe-look-at-component@0.8.0/dist/aframe-look-at-component.min.js"></script>

We then append the box and text to the compound entity, and append our compound entity to the scene.

Things to try

Try filtering out POIs with no name, so that only those with a name are displayed. Those without a name should not be displayed, not even as a box.
Try implementing logic to re-download POIs if the user moves 0.05 degrees of either latitude or longitude from the previous download position.
Each POI in the GeoJSON includes an osm_id property which is a unique OpenStreetMap ID for that POI. Using the osm_id, implement logic so that a POI is not re-added to the scene if it is already present.
(This may happen if you move 0.05 degrees but return to an area you have already visited.)
{"location":"location-based-aframe/#adding-text-labels","text":"The next example shows how you can add text labels to your POIs. window.onload = () => { let downloaded = false; const el = document.querySelector(\"[gps-new-camera]\"); el.addEventListener(\"gps-camera-update-position\", async(e) => { if(!downloaded) { const west = e.detail.position.longitude - 0.05, east = e.detail.position.longitude + 0.05, south = e.detail.position.latitude - 0.05, north = e.detail.position.latitude + 0.05; console.log(`${west} ${south} ${east} ${north}`); const response = await fetch(`https://hikar.org/webapp/map?bbox=${west},${south},${east},${north}&layers=poi&outProj=4326`); const pois = await response.json(); pois.features.forEach ( feature => { const compoundEntity = document.createElement(\"a-entity\"); compoundEntity.setAttribute('gps-new-entity-place', { latitude: feature.geometry.coordinates[1], longitude: feature.geometry.coordinates[0] }); const box = document.createElement(\"a-box\"); box.setAttribute(\"scale\", { x: 20, y: 20, z: 20 }); box.setAttribute('material', { color: 'red' } ); box.setAttribute(\"position\", { x : 0, y : 20, z: 0 } ); const text = document.createElement(\"a-text\"); const textScale = 100; text.setAttribute(\"look-at\", \"[gps-new-camera]\"); text.setAttribute(\"scale\", { x: textScale, y: textScale, z: textScale }); text.setAttribute(\"value\", feature.properties.name); text.setAttribute(\"align\", \"center\"); compoundEntity.appendChild(box); compoundEntity.appendChild(text); document.querySelector(\"a-scene\").appendChild(compoundEntity); }); } downloaded = true; }); }; How is this working? We now create a compound entity . In A-Frame, a compound entity is an entity which has other entities as children. Here, we will create a compound entity, position it at the POI's latitude and longitude, and add the box, and a new text entity containing the POI name, to it. We create the box as before, and set its y coordinate to 20. This is relative to its parent, i.e. the compound entity. The compound entity is already positioned at the correct latitude and longitude, so we will position the box 20 metres above that position. We then create a text entity, scale it appropriately, and set its value attribute to the name of the feature from the GeoJSON. Note the use of the look-at component. This makes a given A-Frame entity look at another. Here we want the text to look at the camera (i.e. the entity with a gps-new-camera property), so we always see it. The look-at component is a third-party component and must be added to your HTML with its own script tag. We then append the box and text to the compound entity, and append our compound entity to the scene.","title":"Adding text labels"},{"location":"location-based-aframe/#things-to-try_3","text":"Try filtering out POIs with no name, so that only those with a name are displayed. Those without a name should not be displayed, not even as a box. Try implementing logic to re-download POIs if the user moves 0.05 degrees of either latitude or longitude from the previous download position. Each POI in the GeoJSON includes an osm_id property which is a unique OpenStreetMap ID for that POI. Using the osm_id , implement logic so that a POI is not re-added to the scene if it is already present. (This may happen if you move 0.05 degrees but return to an area you have already visited). One possible approach to these exercises is sketched below.","title":"Things to try"},
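One possible approach to these exercises, building on the example above. The helper names and thresholds are assumptions, not part of the AR.js API; el is the camera entity obtained earlier:

```js
const addedIds = new Set(); // osm_ids of POIs already in the scene
let lastDownloadPos = null; // GPS position at the time of the last download

el.addEventListener("gps-camera-update-position", async(e) => {
    const { latitude, longitude } = e.detail.position;
    // Only (re-)download if we have moved at least 0.05 degrees
    if (lastDownloadPos &&
        Math.abs(latitude - lastDownloadPos.latitude) < 0.05 &&
        Math.abs(longitude - lastDownloadPos.longitude) < 0.05) {
        return;
    }
    lastDownloadPos = { latitude, longitude };
    const response = await fetch(`https://hikar.org/webapp/map?bbox=${longitude - 0.05},${latitude - 0.05},${longitude + 0.05},${latitude + 0.05}&layers=poi&outProj=4326`);
    const pois = await response.json();
    pois.features
        .filter(f => f.properties.name)                  // skip unnamed POIs
        .filter(f => !addedIds.has(f.properties.osm_id)) // skip POIs already added
        .forEach(f => {
            addedIds.add(f.properties.osm_id);
            // ...create and append the compound entity exactly as above...
        });
});
```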
{"location":"location-based-three/","text":"Location-based AR.js with three.js - Develop a simple Points of Interest app AR.js 3.4 features a pure three.js API for location-based AR. Here is a series of tutorials taking you through how to use it, from the basics to a more advanced example: a simple but working Points of Interest app using a live web API. It is expected that you have some basic three.js experience. Do note that this code will not work on Firefox on a mobile device due to limitations of the device orientation API; absolute orientation cannot be obtained. Chrome on Android is recommended. Installing It is recommended to install via npm, import it into your application, and build with a bundler such as Webpack. Here is a sample package.json : { \"dependencies\": { \"@ar-js-org/ar.js\": \"3.4.5\" }, \"devDependencies\": { \"webpack\": \"^5.75.0\", \"webpack-cli\": \"^5.0.0\" }, \"scripts\": { \"build\": \"npx webpack\" } } and a sample webpack.config.js : const path = require('path'); module.exports = { mode: 'development', entry: './index.js', output: { path: path.resolve(__dirname, 'dist'), filename: 'bundle.js' }, optimization: { minimize: false } }; This will build a bundle named bundle.js in the dist subdirectory from a source file index.js . This will be assumed in the examples. Part 1: Hello World Part 2: Using the GPS and Device Orientation Part 3: Connecting to a web API","title":"Location-based AR.js with three.js - Develop a simple Points of Interest app"},{"location":"location-based-three/#location-based-arjs-with-threejs-develop-a-simple-points-of-interest-app","text":"AR.js 3.4 features a pure three.js API for location-based AR. Here is a series of tutorials taking you through how to use it, from the basics to a more advanced example: a simple but working Points of Interest app using a live web API. It is expected that you have some basic three.js experience. Do note that this code will not work on Firefox on a mobile device due to limitations of the device orientation API; absolute orientation cannot be obtained. Chrome on Android is recommended.","title":"Location-based AR.js with three.js - Develop a simple Points of Interest app"},{"location":"location-based-three/#installing","text":"It is recommended to install via npm, import it into your application, and build with a bundler such as Webpack. Here is a sample package.json : { \"dependencies\": { \"@ar-js-org/ar.js\": \"3.4.5\" }, \"devDependencies\": { \"webpack\": \"^5.75.0\", \"webpack-cli\": \"^5.0.0\" }, \"scripts\": { \"build\": \"npx webpack\" } } and a sample webpack.config.js : const path = require('path'); module.exports = { mode: 'development', entry: './index.js', output: { path: path.resolve(__dirname, 'dist'), filename: 'bundle.js' }, optimization: { minimize: false } }; This will build a bundle named bundle.js in the dist subdirectory from a source file index.js . This will be assumed in the examples. Part 1: Hello World Part 2: Using the GPS and Device Orientation Part 3: Connecting to a web API","title":"Installing"},{"location":"location-based-three/part1/","text":"Location-based AR.js with three.js Part 1 - Hello World! The first part of this tutorial will show you how to create a \"hello world\" application using the pure three.js API for location-based ar.js. It is assumed you are aware of basic three.js concepts, such as the scene, renderer and camera as well as geometries, materials and meshes. This example will set your location to a \"fake\" GPS location and add a box a short distance away.
Let's start with the HTML. This example assumes that you have installed AR.js via npm and used Webpack to build the application, as described on the index page for the tutorial . We link in the built bundle of our own code plus the three.js and AR.js dependencies. Here is our own code: save this as index.js . import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); arjs.add(box, -0.72, 51.051); arjs.fakeGps(-0.72, 51.05); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); Much of this should be familiar to you from basic three.js examples, and is written in the same style as the manual . As normal, we create a THREE.Scene , a THREE.PerspectiveCamera and a THREE.WebGLRenderer using our canvas. What comes next though is new, and specific to AR.js: const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); We use two new objects, both part of the AR.js API. Firstly THREEx.LocationBased is the overall AR.js \"manager\" object and secondly THREEx.WebcamRenderer is responsible for rendering the camera feed. We need to supply our scene and camera as arguments to THREEx.LocationBased and our renderer as an argument to THREEx.WebcamRenderer . The THREEx.WebcamRenderer will create a video element to capture the webcam. Alternatively, if you have a video element already set up in your HTML, you can pass its CSS selector into the WebcamRenderer as an optional argument. For example: const cam = new THREEx.WebcamRenderer(renderer, '#video1'); Next, using standard three.js code, we set up a mesh using a box geometry and red material (i.e. a red box). However, this is where it then gets interesting: arjs.add(box, -0.72, 51.051); Rather than setting the box's position as we would normally do in standard three.js, we add it to a specific real-world location defined by longitude and latitude. The add() method of THREEx.LocationBased allows us to do that. Having positioned our box in a specific real-world location, we now need to place ourselves (i.e. the camera) at a given real-world location. We can do this with THREEx.LocationBased 's fakeGps() method, which takes longitude and latitude as parameters: arjs.fakeGps(-0.72, 51.05); This places us just to the south of the red box. By default, we face north, so the red box will appear in front of us. The remaining code is the standard three.js code for rendering each frame, and dealing with potential screen resizes. However note this code within the rendering function: cam.update(); This API call will render the latest camera frame. Try it! 
Try it on either a desktop machine or an Android device running Chrome. On a mobile device or desktop you should see the feed from the webcam, and a red box just in front of you. Note that the mobile device will not yet respond to changes in orientation: we will add that next time. For this reason you must ensure the box is to your north as the default view is to face north. Faking rotation on a desktop machine If you do not have a suitable mobile device, you can simulate rotation with the mouse. The code below will do this (add to your main block of code, just before the rendering function): const rotationStep = THREE.Math.degToRad(2); let mousedown = false, lastX =0; window.addEventListener(\"mousedown\", e=> { mousedown = true; }); window.addEventListener(\"mouseup\", e=> { mousedown = false; }); window.addEventListener(\"mousemove\", e=> { if(!mousedown) return; if(e.clientX < lastX) { camera.rotation.y -= rotationStep; if(camera.rotation.y < 0) { camera.rotation.y += 2 * Math.PI; } } else if (e.clientX > lastX) { camera.rotation.y += rotationStep; if(camera.rotation.y > 2 * Math.PI) { camera.rotation.y -= 2 * Math.PI; } } lastX = e.clientX; }); What does this do? Using mouse events, it detects the direction of movement of the mouse when it's pressed down, and in doing so, determines whether to rotate the camera clockwise or anticlockwise. It does this using the clientX property of the event object, which contains the mouse X position. This is compared to the previous value of e.clientX and from this, we can determine whether we moved the mouse to the left or to the right, and rotate accordingly. We move the camera by the amount specified in rotationStep and ensure that the camera rotation is always within the range 0 to 2PI radians (i.e. 360 degrees).","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part1/#location-based-arjs-with-threejs","text":"","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part1/#part-1-hello-world","text":"The first part of this tutorial will show you how to create a \"hello world\" application using the pure three.js API for location-based ar.js. It is assumed you are aware of basic three.js concepts, such as the scene, renderer and camera as well as geometries, materials and meshes. This example will set your location to a \"fake\" GPS location and add a box a short distance away. Let's start with the HTML. This example assumes that you have installed AR.js via npm and used Webpack to build the application, as described on the index page for the tutorial . We link in the built bundle of our own code plus the three.js and AR.js dependencies. Here is our own code: save this as index.js . 
import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); arjs.add(box, -0.72, 51.051); arjs.fakeGps(-0.72, 51.05); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); Much of this should be familiar to you from basic three.js examples, and is written in the same style as the manual . As normal, we create a THREE.Scene , a THREE.PerspectiveCamera and a THREE.WebGLRenderer using our canvas. What comes next though is new, and specific to AR.js: const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); We use two new objects, both part of the AR.js API. Firstly THREEx.LocationBased is the overall AR.js \"manager\" object and secondly THREEx.WebcamRenderer is responsible for rendering the camera feed. We need to supply our scene and camera as arguments to THREEx.LocationBased and our renderer as an argument to THREEx.WebcamRenderer . The THREEx.WebcamRenderer will create a video element to capture the webcam. Alternatively, if you have a video element already set up in your HTML, you can pass its CSS selector into the WebcamRenderer as an optional argument. For example: const cam = new THREEx.WebcamRenderer(renderer, '#video1'); Next, using standard three.js code, we set up a mesh using a box geometry and red material (i.e. a red box). However, this is where it then gets interesting: arjs.add(box, -0.72, 51.051); Rather than setting the box's position as we would normally do in standard three.js, we add it to a specific real-world location defined by longitude and latitude. The add() method of THREEx.LocationBased allows us to do that. Having positioned our box in a specific real-world location, we now need to place ourselves (i.e. the camera) at a given real-world location. We can do this with THREEx.LocationBased 's fakeGps() method, which takes longitude and latitude as parameters: arjs.fakeGps(-0.72, 51.05); This places us just to the south of the red box. By default, we face north, so the red box will appear in front of us. The remaining code is the standard three.js code for rendering each frame, and dealing with potential screen resizes. However note this code within the rendering function: cam.update(); This API call will render the latest camera frame.","title":"Part 1 - Hello World!"},{"location":"location-based-three/part1/#try-it","text":"Try it on either a desktop machine or an Android device running Chrome. On a mobile device or desktop you should see the feed from the webcam, and a red box just in front of you. Note that the mobile device will not yet respond to changes in orientation: we will add that next time. 
For this reason you must ensure the box is to your north as the default view is to face north.","title":"Try it!"},{"location":"location-based-three/part1/#faking-rotation-on-a-desktop-machine","text":"If you do not have a suitable mobile device, you can simulate rotation with the mouse. The code below will do this (add to your main block of code, just before the rendering function): const rotationStep = THREE.Math.degToRad(2); let mousedown = false, lastX =0; window.addEventListener(\"mousedown\", e=> { mousedown = true; }); window.addEventListener(\"mouseup\", e=> { mousedown = false; }); window.addEventListener(\"mousemove\", e=> { if(!mousedown) return; if(e.clientX < lastX) { camera.rotation.y -= rotationStep; if(camera.rotation.y < 0) { camera.rotation.y += 2 * Math.PI; } } else if (e.clientX > lastX) { camera.rotation.y += rotationStep; if(camera.rotation.y > 2 * Math.PI) { camera.rotation.y -= 2 * Math.PI; } } lastX = e.clientX; }); What does this do? Using mouse events, it detects the direction of movement of the mouse when it's pressed down, and in doing so, determines whether to rotate the camera clockwise or anticlockwise. It does this using the clientX property of the event object, which contains the mouse X position. This is compared to the previous value of e.clientX and from this, we can determine whether we moved the mouse to the left or to the right, and rotate accordingly. We move the camera by the amount specified in rotationStep and ensure that the camera rotation is always within the range 0 to 2PI radians (i.e. 360 degrees).","title":"Faking rotation on a desktop machine"},{"location":"location-based-three/part2/","text":"Location-based AR.js with three.js Part 2 - Using the GPS and Device Orientation Having looked at the basics of the three.js location-based API in the first tutorial, we will now look at how to use the real GPS location. Last time, if you remember, we used a \"fake\" location with the THREEx.LocationBased 's fakeGps() call. We will also look at how we can use the device's orientation controls, so that the orientation sensors are tracked and objects will appear in their real-world position when the device is rotated. For example, an object directly north of the user will only appear when the device is facing north. 
GPS tracking Here is a revised version of the previous example which obtains your real GPS location: import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); // Change this to a location 0.001 degrees of latitude north of you, so that you will face it arjs.add(box, -0.72, 51.051); // Start the GPS arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); Note that we only needed to make one change: we replace the fakeGps() call with: arjs.startGps(); Using the Geolocation API this will make the application start listening for GPS updates. The nice thing is we do not need to do anything else. The LocationBased object automatically updates the camera x and z coordinates to reflect our current GPS location. Specifically, the GPS latitude and longitude are converted to Spherical Mercator, the sign of z reversed (to match the OpenGL coordinate system), and the resulting coordinates used for the camera coordinates. Using the device orientation controls Having looked at obtaining our real GPS position, we will now look at how we can use the orientation controls to ensure our AR scene matches the real world as we rotate the device around. This is, in principle, quite easy: we just need to create a THREEx.DeviceOrientationControls object and update it in our rendering function. This object is based on the original DeviceOrientationControls from three.js. However, there is a slight problem. Unfortunately this will only work in Chrome on Android (it may also work in Chrome on iOS; this needs testing). This is due to the difficulty in obtaining absolute orientation (i.e. our orientation relative to north) using the device orientation API. This can be done on Chrome/Android using the deviceorientationabsolute event (and in fact, the THREEx.DeviceOrientationControls has been modified from the original to handle this event); it can also be done on Safari with webkitCompassHeading (but, due to the lack of an iDevice for testing, has not been implemented yet); sadly it appears that support on Firefox is completely missing for now. See this table of compatibility for absolute device orientation . So it's recommended you use Chrome on Android for the moment. 
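As an aside: besides the absolute-orientation problem just discussed, recent iOS versions also require an explicit user gesture before any orientation events are delivered at all. This uses a standard web API rather than anything AR.js-specific; a minimal sketch, assuming a start button with id startBtn in your HTML:

```js
// On iOS 13+, DeviceOrientationEvent.requestPermission() must be called
// from inside a user gesture (e.g. a click) before orientation events fire.
document.getElementById('startBtn').addEventListener('click', async () => {
    if (typeof DeviceOrientationEvent !== 'undefined' &&
        typeof DeviceOrientationEvent.requestPermission === 'function') {
        const state = await DeviceOrientationEvent.requestPermission();
        if (state !== 'granted') return; // the user refused sensor access
    }
    main(); // start the AR scene only once permission has been dealt with
});
```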
The example below shows the use of orientation tracking: const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); // Create the device orientation tracker const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); // Change this to a location close to you (e.g. 0.001 degrees of latitude north of you) arjs.add(box, -0.72, 51.051); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } Note how we create a device orientation tracker with: const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); The device orientation tracker updates the camera, so we need to pass it in as an argument. Also note how we update the device orientation tracker in our rendering function, so that new readings from the sensors are accounted for: deviceOrientationControls.update(); Try it! Try it out. As real GPS location and device orientation are used, you will need a mobile device. You should find that the red box appears in its real world position (ensure it's not too far from you, e.g. 0.001 degrees of latitude to the north) and, due to the use of orientation tracking, only appears in the field of view when you are facing its location.","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part2/#location-based-arjs-with-threejs","text":"","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part2/#part-2-using-the-gps-and-device-orientation","text":"Having looked at the basics of the three.js location-based API in the first tutorial, we will now look at how to use the real GPS location. Last time, if you remember, we used a \"fake\" location with the THREEx.LocationBased 's fakeGps() call. We will also look at how we can use the device's orientation controls, so that the orientation sensors are tracked and objects will appear in their real-world position when the device is rotated. 
For example, an object directly north of the user will only appear when the device is facing north.","title":"Part 2 - Using the GPS and Device Orientation"},{"location":"location-based-three/part2/#gps-tracking","text":"Here is a revised version of the previous example which obtains your real GPS location: import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); // Change this to a location 0.001 degrees of latitude north of you, so that you will face it arjs.add(box, -0.72, 51.051); // Start the GPS arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); Note that we only needed to make one change: we replace the fakeGps() call with: arjs.startGps(); Using the Geolocation API this will make the application start listening for GPS updates. The nice thing is we do not need to do anything else. The LocationBased object automatically updates the camera x and z coordinates to reflect our current GPS location. Specifically, the GPS latitude and longitude are converted to Spherical Mercator, the sign of z reversed (to match the OpenGL coordinate system), and the resulting coordinates used for the camera coordinates.","title":"GPS tracking"},{"location":"location-based-three/part2/#using-the-device-orientation-controls","text":"Having looked at obtaining our real GPS position, we will now look at how we can use the orientation controls to ensure our AR scene matches the real world as we rotate the device around. This is, in principle, quite easy: we just need to create a THREEx.DeviceOrientationControls object and update it in our rendering function. This object is based on the original DeviceOrientationControls from three.js. However, there is a slight problem. Unfortunately this will only work in Chrome on Android (it may also work in Chrome on iOS; this needs testing). This is due to the difficulty in obtaining absolute orientation (i.e. our orientation relative to north) using the device orientation API. This can be done on Chrome/Android using the deviceorientationabsolute event (and in fact, the THREEx.DeviceOrientationControls has been modified from the original to handle this event); it can also be done on Safari with webkitCompassHeading (but, due to the lack of an iDevice for testing, has not been implemented yet); sadly it appears that support on Firefox is completely missing for now. See this table of compatibility for absolute device orientation . So it's recommended you use Chrome on Android for the moment. 
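For reference, the longitude/latitude to Spherical Mercator conversion described under GPS tracking above can be sketched as follows. These are the standard projection formulas, not AR.js's actual internals, which may differ in detail:

```js
const R = 6378137; // earth radius used by Spherical Mercator, in metres
const DEG2RAD = Math.PI / 180;

function lonLatToWorld(lon, lat) {
    const x = R * lon * DEG2RAD;
    const y = R * Math.log(Math.tan(Math.PI / 4 + (lat * DEG2RAD) / 2));
    // The sign of z is reversed to match the OpenGL coordinate system,
    // in which z decreases as you head north.
    return { x: x, z: -y };
}
```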
The example below shows the use of orientation tracking: const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); // Create the device orientation tracker const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); // Change this to a location close to you (e.g. 0.001 degrees of latitude north of you) arjs.add(box, -0.72, 51.051); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } Note how we create a device orientation tracker with: const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); The device orientation tracker updates the camera, so we need to pass it in as an argument. Also note how we update the device orientation tracker in our rendering function, so that new readings from the sensors are accounted for: deviceOrientationControls.update();","title":"Using the device orientation controls"},{"location":"location-based-three/part2/#try-it","text":"Try it out. As real GPS location and device orientation are used, you will need a mobile device. You should find that the red box appears in its real world position (ensure it's not too far from you, e.g. 0.001 degrees of latitude to the north) and, due to the use of orientation tracking, only appears in the field of view when you are facing its location.","title":"Try it!"},{"location":"location-based-three/part3/","text":"Location-based AR.js with three.js Part 3 - Connecting to a web API Having looked at how to use the three.js location-based API, we will now consider an example which connects to a web API providing points of interest. This example does not actually introduce any new AR.js concepts, but shows you how you can work with a web API. import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); let fetched = false; // Handle the \"gpsupdate\" event on the LocationBased object // This triggers when a GPS update (from the Geolocation API) occurs // 'pos' is the position object from the Geolocation API. 
arjs.on(\"gpsupdate\", async(pos) => { if(!fetched) { const response = await fetch(`https://hikar.org/webapp/map?bbox=${pos.coords.longitude-0.01},${pos.coords.latitude-0.01},${pos.coords.longitude+0.01},${pos.coords.latitude+0.01}&layers=poi&outProj=4326`); const geojson = await response.json(); geojson.features.forEach ( feature => { const box = new THREE.Mesh(geom, mtl); arjs.add(box, feature.geometry.coordinates[0], feature.geometry.coordinates[1]); }); fetched = true; } }); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); How is this working? The key thing is we handle the gpsupdate event emitted by the LocationBased object when a GPS update occurs. This is specifically emitted when the inbuilt Geolocation API receives a GPS update, and allows us to trigger certain code. Here, we trigger a download from a web API when we get the update. Note that the gpsupdate event handler receives the standard position object of the Geolocation API, so that, for example, its coords property contains the longitude and latitude. We then download data in a 0.02 x 0.02 degree box centred on our current location from the API at https://hikar.org. This provides OpenStreetMap POI data, but only for Europe and Turkey due to server capacity constraints. The data is provided as GeoJSON . So having received the data, we simply loop through it and create one THREE.Mesh for each POI, adding it at the appropriate location (accessible via the coordinates of the geometry of each GeoJSON object). Note the boolean variable fetched which is set to true as soon as we have fetched the data. This prevents data being continuously downloaded from the server every time we get a position update, as it's set to false as soon as data has been downloaded. In a real application you could implement code to download data by tile, so that new data is downloaded whenever you move into a new tile.","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part3/#location-based-arjs-with-threejs","text":"","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part3/#part-3-connecting-to-a-web-api","text":"Having looked at how to use the three.js location-based API, we will now consider an example which connects to a web API providing points of interest. This example does not actually introduce any new AR.js concepts, but shows you how you can work with a web API. 
{"location":"location-based-three/part3/#location-based-arjs-with-threejs","text":"","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part3/#part-3-connecting-to-a-web-api","text":"Having looked at how to use the three.js location-based API, we will now consider an example which connects to a web API providing points of interest. This example does not actually introduce any new AR.js concepts, but shows you how you can work with a web API. import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); let fetched = false; // Handle the \"gpsupdate\" event on the LocationBased object // This triggers when a GPS update (from the Geolocation API) occurs // 'pos' is the position object from the Geolocation API. arjs.on(\"gpsupdate\", async(pos) => { if(!fetched) { const response = await fetch(`https://hikar.org/webapp/map?bbox=${pos.coords.longitude-0.01},${pos.coords.latitude-0.01},${pos.coords.longitude+0.01},${pos.coords.latitude+0.01}&layers=poi&outProj=4326`); const geojson = await response.json(); geojson.features.forEach ( feature => { const box = new THREE.Mesh(geom, mtl); arjs.add(box, feature.geometry.coordinates[0], feature.geometry.coordinates[1]); }); fetched = true; } }); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); How is this working? The key thing is we handle the gpsupdate event emitted by the LocationBased object when a GPS update occurs. This is specifically emitted when the inbuilt Geolocation API receives a GPS update, and allows us to trigger certain code. Here, we trigger a download from a web API when we get the update. Note that the gpsupdate event handler receives the standard position object of the Geolocation API, so that, for example, its coords property contains the longitude and latitude. We then download data in a 0.02 x 0.02 degree box centred on our current location from the API at https://hikar.org. This provides OpenStreetMap POI data, but only for Europe and Turkey due to server capacity constraints. The data is provided as GeoJSON . So having received the data, we simply loop through it and create one THREE.Mesh for each POI, adding it at the appropriate location (accessible via the coordinates of the geometry of each GeoJSON object). Note the boolean variable fetched which is set to true as soon as we have fetched the data. This prevents data being continuously downloaded from the server every time we get a position update: fetched starts as false, and no further downloads take place once it has been set to true. In a real application you could implement code to download data by tile, so that new data is downloaded whenever you move into a new tile.","title":"Part 3 - Connecting to a web API"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"AR.js - Augmented Reality on the Web AR.js is a lightweight library for Augmented Reality on the Web, which includes features like Image Tracking, Location based AR and Marker tracking. Location Based documentation updated and enhanced for AR.js 3.4 What Web AR means (Augmented Reality on the Web) Augmented Reality is the technology that makes possible to overlay content on the real world. It can be provided for several type of devices: handheld (like mobile phones), headsets, desktop displays, and so on. For handheld devices (more generally, for video-see-through devices) the 'reality' is captured from one or more cameras and then shown on the device display, adding some kind of content on top of it. For developers, to develop Augmented Reality ('AR' from now on) on the Web, means to avoid all the Mobile app development efforts and costs related to App stores (validation, time to publish). It also means to re-use well known technologies like Javascript, HTML and CSS, familiar to a lot of developers and possibly designers. It basically means that it is possible to release every new version instantly, fix bugs or release new features in near real-time, opening a lot of practical possibilities. For users, it means to reach an AR experience just visiting a website. As QR Codes are now widespread, it's also possible to scan a QR Code and reach the URL without typing. Additionally, users do not have to reserve storage space on their download the AR app, and do not have to keep it updated. Why AR.js We believe in the Web, as a collaborative and accessible environment. We also believe in Augmented Reality technology, as a new communication medium, that can help people see reality in new, exciting ways. We see Augmented Reality (AR) used everyday for a lot of useful applications, from art, to education, also for fun. We strongly believe that such a powerful technology, that can help people and leverage their creativity, should be free in some way. Also collaborative, if possible. And so, we continue the work started by Jerome Etienne, in bringing AR on the Web, as a free and Open Source technology. Thank you for being interested in this, if you'd like to collaborate in any way, contact us ( https://twitter.com/nicolocarp ). The project is now under a Github organization, that you can find at https://github.com/ar-js-org and you can ask to be part of it, for free. AR types AR.js features the following types of Augmented Reality, on the Web: Image Tracking , when a 2D images is found by the camera, it's possible to show some kind of content on top of it, or near it. The content can be a 2D image, a GIF, a 3D model (also animated) and a 2D video too. Cases of use: Augmented Art, learning (Augmented books), Augmented flyers, advertising, etc. Location Based AR , this kind of AR uses real-world places in order to show Augmented Reality content, on the user device. The experiences that can be built with this library are those that use a user's position in the real world. The user can move (ideally outdoor) and through their smartphones they can see AR content where places are in the real world. Moving around and rotating the phone will make the AR content change according to users position and rotation (so places are 'anchored' in their real position, and appear bigger/smaller according to their distance from the user). 
With this solution it\u2019s possible to build experiences like interactive support for tourist guides, assistance when exploring a new city, finding places of interest like buildings, museums, restaurants, hotels and so on. It\u2019s also possible to build learning experiences like treasure hunts, and biology or history learning games, or use this technology for situated art (visual art experiences bound to specific real world coordinates). Marker Tracking , when a marker is found by the camera, it's possible to show some content (same as Image Tracking). Markers are very stable but limited in shape, color and size. It is suggested for those experiences where a lot of different markers with different content are required. Examples of use: (Augmented books), Augmented flyers, advertising. Key points Very Fast : It runs efficiently even on phones Web-based : It is a pure web solution, so no installation required. Fully javascript based, using three.js + A-Frame + jsartoolkit5 Open Source : It is completely open source and free of charge! Standards : It works on any phone with webgl and webrtc AR.js has reached version 3. This is the official repository: https://github.com/AR-js-org/AR.js . If you want to visit the old AR.js repository, here it is: https://github.com/jeromeetienne/AR.js . Import the library AR.js from version 3 has a new structure. AR.js comes in two different builds. They are both maintained. They are exclusive. The file you want to import depends on what features you want, and also which render library you want to use (A-Frame or three.js). AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . You can import AR.js in the version of your choice, using the appropriate script tag. For the three.js version, it's recommended to import AR.js as a module and build with a bundler such as Webpack. There is an example given in the location-based section. Requirements and Known Issues Some requirements and known issues are listed below: It works on every phone with webgl and webrtc . Marker based tracking is very lightweight, while Image Tracking is more CPU consuming You must ensure that you have matching versions of AR.js and A-Frame. AR.js 3.4.5 (the latest version) requires A-Frame 1.3.0 while AR.js 3.4.4 and below requires 1.0.4. Location-based AR will not work correctly on Firefox, due to the inability to obtain absolute device orientation (compass bearing) On Android/Chrome, you may encounter issues with location-based AR due to inaccuracies in compass calibration (incorrect north). This is likely to be a hardware limitation of the device. On some phones you may encounter problems with locating North due to inherent miscalibrations of the device sensors. This is a known problem recognised by the three.js developers: see here Please ensure you enable high accuracy location for your selected browser on Android. Sometimes high accuracy location is turned off by default, and this will lead to an inaccurate GPS location. There is currently a bug in location-based AR where the camera feed is stretched away from the centre of the screen, meaning that there is reduced accuracy in placement of objects further away from the centre. Work is ongoing to investigate this. On devices with multiple cameras, Chrome may have problems detecting the right one. Please use Firefox if you find that AR.js opens on the wrong camera. There is an open issue for this. To work with the Location Based feature, your device needs to have GPS, accelerometer and magnetometer sensors. 
It will not work if any of these sensors are absent. Please read carefully any suggestions that AR.js pops up (as alerts) for Location Based on iOS, as iOS requires user actions to activate geoposition. Access to the phone camera or to GPS sensors, due to major browser restrictions, can be done only on https websites. All the examples you will see, and all AR.js web apps in general, have to be run on a server. You can use a local server or deploy the static web app on the web. Always deploy under https So don't forget to always run your examples on secure (https) servers or on localhost. Github Pages is a great way to have free and live websites under https. Getting started Here we present three basic examples, one for each AR feature. For specific documentation, on the top menu you can find every section, or you can click on the following links: Image Tracking Documentation Location Based Documentation Marker Based Documentation Image Tracking Example There is a Codepen for you to try. Below you can also find a live example. Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera.
Location Based Example This example retrieves your position and places a red box near you. Please follow these simple steps: Create a new project with the following snippet, and replace add-your-latitude and add-your-longitude with a point very close to your latitude and longitude (about 0.001 degrees distant for both latitude and longitude), without the <> . Run it on a server Activate GPS on your phone and navigate to the example URL Look around. You should see the box close to you, appearing in the requested position, even if you look around and move the phone. This is just a basic example and most location-based applications will involve JavaScript coding. So, if you want to enhance and customize your Location Based experience, take a look at the Location Based docs. Marker Based Example Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera. Advanced stuff AR.js offers two ways, with A-Frame, to interact with the web page: to interact directly with AR content and Overlayed DOM interaction. Also, there are several Custom Events triggered during the life cycle of every AR.js web app. You can learn more about these aspects on the UI and Events section . AR.js architecture AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . three.js folder contains source code for AR.js core, Marker based and Image Tracking examples for AR.js three.js based build for three.js AR.js based vendor stuff (jsartoolkit5) workers (used for Image Tracking). When you find files that end with the -nft suffix, they're bundled only with the Image Tracking version. A-Frame version of AR.js uses three.js parts as its core. A-Frame code, on AR.js, is simply a wrapper to write AR with Custom Components in HTML. aframe folder contains source code for AR.js A-Frame (aka wrappers for Marker Based, Image Tracking components) source code for Location Based build for A-Frame AR.js based examples for A-Frame AR.js. Tutorials There are various tutorials available for developing with AR.js. These include: Location Based Build your Location-Based Augmented Reality Web App : covers location-based AR.js with A-Frame. Develop a Simple Points Of Interest App (A-Frame version) ( Provided with these docs ): a further location-based A-Frame tutorial, written with AR.js 3.4 in mind. Develop a Simple Points of Interest App (three.js version) ( Provided with these docs ): a pure three.js version of the above, also written for AR.js 3.4. Troubleshooting, feature requests, community You can find a lot of help on the old AR.js repositories issues . Please search open/closed issues; you may find useful information. Contributing From opening a bug report to creating a pull request: every contribution is appreciated and welcome. If you're planning to implement a new feature or change the API please create an issue first. This way we can ensure that your precious work is not in vain. Issues If you are having configuration or setup problems, please post a question to StackOverflow . You can also address questions to us in our Gitter chatroom . If you have discovered a bug or have a feature suggestion, feel free to create an issue on Github. Submitting Changes After getting some feedback, push to your fork and submit a pull request. 
We may suggest some changes or improvements or alternatives, but for small changes your pull request should be accepted quickly. Some things that will increase the chance that your pull request is accepted: Follow the existing coding style Write a good commit message","title":"Home"},{"location":"#arjs-augmented-reality-on-the-web","text":"AR.js is a lightweight library for Augmented Reality on the Web, which includes features like Image Tracking, Location based AR and Marker tracking. Location Based documentation updated and enhanced for AR.js 3.4","title":"AR.js - Augmented Reality on the Web"},{"location":"#what-web-ar-means-augmented-reality-on-the-web","text":"Augmented Reality is the technology that makes it possible to overlay content on the real world. It can be provided for several types of devices: handheld (like mobile phones), headsets, desktop displays, and so on. For handheld devices (more generally, for video-see-through devices) the 'reality' is captured from one or more cameras and then shown on the device display, adding some kind of content on top of it. For developers, developing Augmented Reality ('AR' from now on) on the Web means avoiding all the mobile app development efforts and costs related to App stores (validation, time to publish). It also means re-using well-known technologies like Javascript, HTML and CSS, familiar to a lot of developers and possibly designers. It basically means that it is possible to release every new version instantly, fix bugs or release new features in near real-time, opening a lot of practical possibilities. For users, it means reaching an AR experience just by visiting a website. As QR Codes are now widespread, it's also possible to scan a QR Code and reach the URL without typing. Additionally, users do not have to reserve storage space on their device to download the AR app, and do not have to keep it updated.","title":"What Web AR means (Augmented Reality on the Web)"},{"location":"#why-arjs","text":"We believe in the Web, as a collaborative and accessible environment. We also believe in Augmented Reality technology, as a new communication medium, that can help people see reality in new, exciting ways. We see Augmented Reality (AR) used every day for a lot of useful applications, from art, to education, also for fun. We strongly believe that such a powerful technology, that can help people and leverage their creativity, should be free in some way. Also collaborative, if possible. And so, we continue the work started by Jerome Etienne, in bringing AR on the Web, as a free and Open Source technology. Thank you for being interested in this, if you'd like to collaborate in any way, contact us ( https://twitter.com/nicolocarp ). The project is now under a Github organization, that you can find at https://github.com/ar-js-org and you can ask to be part of it, for free.","title":"Why AR.js"},{"location":"#ar-types","text":"AR.js features the following types of Augmented Reality, on the Web: Image Tracking , when a 2D image is found by the camera, it's possible to show some kind of content on top of it, or near it. The content can be a 2D image, a GIF, a 3D model (also animated) and a 2D video too. Cases of use: Augmented Art, learning (Augmented books), Augmented flyers, advertising, etc. Location Based AR , this kind of AR uses real-world places in order to show Augmented Reality content, on the user device. The experiences that can be built with this library are those that use a user's position in the real world. 
The user can move (ideally outdoor) and through their smartphones they can see AR content where places are in the real world. Moving around and rotating the phone will make the AR content change according to the user's position and rotation (so places are 'anchored' in their real position, and appear bigger/smaller according to their distance from the user). With this solution it\u2019s possible to build experiences like interactive support for tourist guides, assistance when exploring a new city, finding places of interest like buildings, museums, restaurants, hotels and so on. It\u2019s also possible to build learning experiences like treasure hunts, and biology or history learning games, or use this technology for situated art (visual art experiences bound to specific real world coordinates). Marker Tracking , when a marker is found by the camera, it's possible to show some content (same as Image Tracking). Markers are very stable but limited in shape, color and size. It is suggested for those experiences where a lot of different markers with different content are required. Examples of use: (Augmented books), Augmented flyers, advertising.","title":"AR types"},{"location":"#key-points","text":"Very Fast : It runs efficiently even on phones Web-based : It is a pure web solution, so no installation required. Fully javascript based, using three.js + A-Frame + jsartoolkit5 Open Source : It is completely open source and free of charge! Standards : It works on any phone with webgl and webrtc AR.js has reached version 3. This is the official repository: https://github.com/AR-js-org/AR.js . If you want to visit the old AR.js repository, here it is: https://github.com/jeromeetienne/AR.js .","title":"Key points"},{"location":"#import-the-library","text":"AR.js from version 3 has a new structure. AR.js comes in two different builds. They are both maintained. They are exclusive. The file you want to import depends on what features you want, and also which render library you want to use (A-Frame or three.js). AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . You can import AR.js in the version of your choice, using the appropriate script tag. For the three.js version, it's recommended to import AR.js as a module and build with a bundler such as Webpack. There is an example given in the location-based section.","title":"Import the library"},{"location":"#requirements-and-known-issues","text":"Some requirements and known issues are listed below: It works on every phone with webgl and webrtc . Marker based tracking is very lightweight, while Image Tracking is more CPU consuming You must ensure that you have matching versions of AR.js and A-Frame. AR.js 3.4.5 (the latest version) requires A-Frame 1.3.0 while AR.js 3.4.4 and below requires 1.0.4. Location-based AR will not work correctly on Firefox, due to the inability to obtain absolute device orientation (compass bearing) On Android/Chrome, you may encounter issues with location-based AR due to inaccuracies in compass calibration (incorrect north). This is likely to be a hardware limitation of the device. On some phones you may encounter problems with locating North due to inherent miscalibrations of the device sensors. This is a known problem recognised by the three.js developers: see here Please ensure you enable high accuracy location for your selected browser on Android. Sometimes high accuracy location is turned off by default, and this will lead to an inaccurate GPS location. 
There is currently a bug in location-based AR where the camera feed is stretched away from the centre of the screen, meaning that there is reduced accuracy in placement of objects further away from the centre. Work is ongoing to investigate this. On devices with multiple cameras, Chrome may have problems detecting the right one. Please use Firefox if you find that AR.js opens on the wrong camera. There is an open issue for this. To work with the Location Based feature, your device needs to have GPS, accelerometer and magnetometer sensors. It will not work if any of these sensors are absent. Please read carefully any suggestions that AR.js pops up (as alerts) for Location Based on iOS, as iOS requires user actions to activate geoposition. Access to the phone camera or to GPS sensors, due to major browser restrictions, can be done only on https websites. All the examples you will see, and all AR.js web apps in general, have to be run on a server. You can use a local server or deploy the static web app on the web.","title":"Requirements and Known Issues"},{"location":"#always-deploy-under-https","text":"So don't forget to always run your examples on secure (https) servers or on localhost. Github Pages is a great way to have free and live websites under https.","title":"Always deploy under https"},{"location":"#getting-started","text":"Here we present three basic examples, one for each AR feature. For specific documentation, on the top menu you can find every section, or you can click on the following links: Image Tracking Documentation Location Based Documentation Marker Based Documentation","title":"Getting started"},{"location":"#image-tracking-example","text":"There is a Codepen for you to try. Below you can also find a live example. Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera.
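The full example code is best taken from the live example; as a minimal sketch, such a page looks like the following (the Image Descriptor and model URLs are placeholders, and a fuller version with a loading overlay appears in the Image Tracking section below):

<script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
<!-- Image Tracking (NFT) build of AR.js for A-Frame -->
<script src="https://raw.githack.com/AR-js-org/AR.js/3.4.5/aframe/build/aframe-ar-nft.js"></script>
<a-scene vr-mode-ui="enabled: false" embedded arjs="trackingMethod: best; sourceType: webcam; debugUIEnabled: false;">
  <!-- url is the prefix of your .fset/.fset3/.iset files, without extension -->
  <a-nft type="nft" url="<url-of-the-image-descriptors>">
    <a-entity gltf-model="<url-of-the-model>" scale="5 5 5" position="50 150 0"></a-entity>
  </a-nft>
  <a-entity camera></a-entity>
</a-scene>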
","title":"Image Tracking Example"},{"location":"#location-based-example","text":"This example retrieves your position and places a red box near you. Please follow these simple steps: Create a new project with the following snippet, and change add-your-latitude and add-your-longitude with a point very close to your latitude and longitude (about 0.001 degrees distant for both latitude and longitude), without the <> . Run it on a server Activate GPS on your phone and navigate to the example URL Look around. You should see the box close to you, appearing in the requested position, even if you look around and move the phone. AR.js A-Frame Location-based; longitude: \" scale=\"10 10 10\"> This is just a basic example and most location-based applications will involve JavaScript coding. So, if you want to enhance and customize your Location Based experience, take a look at the Location Based docs.","title":"Location Based Example"},{"location":"#marker-based-example","text":"Please follow these simple steps: Create a new project with the code below (or open this live example and go directly to the last step) Run it on a server Open the website on your phone Scan this picture to see content through the camera. ","title":"Marker Based Example"},{"location":"#advanced-stuff","text":"AR.js offers two ways, with A-Frame, to interact with the web page: to interact directly with AR content and Overlayed DOM interaction. Also, there are several Custom Events triggered during the life cycle of every AR.js web app. You can learn more about these aspects on the UI and Events section .","title":"Advanced stuff"},{"location":"#arjs-architecture","text":"AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame . three.js folder contains source code for AR.js core, Marker based and Image Tracking examples for AR.js three.js based build for three.js AR.js based vendor stuff (jsartoolkit5) workers (used for Image Tracking). When you find files that ends with -nft suffix, they're bundled only with the Image Tracking version. A-Frame version of AR.js uses three.js parts as its core. A-Frame code, on AR.js, is simply a wrapper to write AR with Custom Components in HTML. aframe folder contains source code for AR.js A-Frame (aka wrappers for Marker Based, Image Tracking components) source code for Location Based build for A-Frame AR.js based examples for A-Frame AR.js.","title":"AR.js architecture"},{"location":"#tutorials","text":"There are various tutorials available for developing with AR.js. These include:","title":"Tutorials"},{"location":"#location-based","text":"Build your Location-Based Augmented Reality Web App : covers location-based AR.js with A-Frame. Develop a Simple Points Of Interest App (A-Frame version) ( Provided with these docs ): a further location-based A-Frame tutorial, written with AR.js 3.4 in mind. Develop a Simple Points of Interest App (three.js version) ( Provided with these docs ): a pure three.js version of the above, also written for AR.js 3.4.","title":"Location Based"},{"location":"#troubleshooting-feature-requests-community","text":"You can find a lot of help on the old AR.js repositories issues . Please search on open/closed issues, you may find useful information.","title":"Troubleshooting, feature requests, community"},{"location":"#contributing","text":"From opening a bug report to creating a pull request: every contribution is appreciated and welcome. 
Advanced stuff

AR.js offers two ways, with A-Frame, to interact with the web page: interacting directly with the AR content, and overlayed DOM interaction. Also, there are several Custom Events triggered during the life cycle of every AR.js web app. You can learn more about these aspects in the UI and Events section.

AR.js architecture

AR.js uses jsartoolkit5 for tracking, but can display augmented content with either three.js or A-Frame.

The three.js folder contains:
source code for the AR.js core, Marker Based and Image Tracking
examples for the three.js-based AR.js
the build for the three.js-based AR.js
vendor stuff (jsartoolkit5)
workers (used for Image Tracking)

When you find files that end with the -nft suffix, they're bundled only with the Image Tracking version.

The A-Frame version of AR.js uses the three.js parts as its core. A-Frame code, in AR.js, is simply a wrapper to write AR with Custom Components in HTML.

The aframe folder contains:
source code for A-Frame AR.js (i.e. wrappers for the Marker Based and Image Tracking components)
source code for Location Based
the build for A-Frame AR.js
examples for A-Frame AR.js

Tutorials

There are various tutorials available for developing with AR.js. These include:

Location Based

Build your Location-Based Augmented Reality Web App: covers location-based AR.js with A-Frame.
Develop a Simple Points Of Interest App (A-Frame version) (provided with these docs): a further location-based A-Frame tutorial, written with AR.js 3.4 in mind.
Develop a Simple Points of Interest App (three.js version) (provided with these docs): a pure three.js version of the above, also written for AR.js 3.4.

Troubleshooting, feature requests, community

You can find a lot of help on the old AR.js repository's issues. Please search among open and closed issues; you may find useful information.

Contributing

From opening a bug report to creating a pull request: every contribution is appreciated and welcome. If you're planning to implement a new feature or change the API, please create an issue first. This way we can ensure that your precious work is not in vain.

Issues

If you are having configuration or setup problems, please post a question to StackOverflow. You can also address questions to us in our Gitter chatroom. If you have discovered a bug or have a feature suggestion, feel free to create an issue on GitHub.

Submitting Changes

After getting some feedback, push to your fork and submit a pull request. We may suggest some changes, improvements or alternatives, but for small changes your pull request should be accepted quickly. Some things that will increase the chance that your pull request is accepted:
Follow the existing coding style
Write a good commit message

Acknowledgments

This project has been created by @jeromeetienne and it is now maintained by @nicolocarpignoli and the AR.js Org Community.

Notes about the AR.js 3 release: after months of work, we have changed AR.js for good. The aim was to make it a true, free alternative to paid Web AR solutions. We don't know if we're already there, but now the path is clear, at least. We have worked hard, spent many days and nights (obviously, we are coders, what did you expect?) and we are now so thrilled to share this achievement with the community. We know that it can be better, we know its limitations, but we would love to share this journey's result. AR.js is now under a GitHub organisation, which means it is more collaborative than ever. It has a new structure, and a lot of new code. And most of all, we've added Image Tracking, which we felt was the missing piece for a true alternative to Web AR. A huge, huge thanks to the wonderful people who made this possible: Walter Perdan, Thorsten Bux, Daniel Fernandes, misdake, hatsumatsu, and many more. It was great to build this with all of you.

Image Tracking

Image Tracking makes it possible to scan a picture, a drawing, or any image, and show content over it. All the following examples use A-Frame, for simplicity. You can use three.js if you want: see the nft three.js example on the official repository.
All A-Frame examples for Image Tracking can be found here.

Getting started with Image Tracking

Natural Feature Tracking, or NFT, is a technology that enables the use of images instead of markers like QR Codes or the Hiro marker. The software tracks interesting points in the image and uses them to estimate the position of the camera. These interesting points (aka "Image Descriptors") are created using the NFT Marker Creator, a tool available for creating NFT markers. It comes in two versions: the Web version (recommended) and the node.js version. There is also a fork of this project on the AR.js GitHub organisation but, as of now, Daniel Fernandes' version works perfectly. Thanks to Daniel Fernandes for the contribution to this docs section.

Choose good images

If you want to understand the creation of markers in more depth, check out the NFT Marker Creator wiki. It also explains why certain images work much better than others. An important factor is the DPI of the image: a good DPI (300 or more) will give very good stabilization, while a low DPI (like 72) will require the user to stay very still and close to the image; otherwise, tracking will lag.

Create Image Descriptors

Once you have chosen your image, you can use the NFT Marker Creator in either its Web version or the node version. If you're using the node version, this is the basic command to run:

node app.js -i <path-to-your-image>

After that, you will find the Image Descriptor files in the output folder. In the web version, the generator will automatically download the files from your browser. In either case, you will end up with three Image Descriptor files, with the .fset, .fset3 and .iset extensions. Each of them will have the same prefix before the file extension. That prefix will be the Image Descriptor name that you will use in the AR.js web app. For example: with the files trex.fset, trex.fset3 and trex.iset, your Image Descriptors name will be trex.

Render the content

Now it's time to create the actual AR web app.
<script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
<!-- Image Tracking (NFT) build of AR.js for A-Frame -->
<script src="https://raw.githack.com/AR-js-org/AR.js/3.4.5/aframe/build/aframe-ar-nft.js"></script>
<body style="margin: 0px; overflow: hidden;">
  <!-- a simple loading overlay, shown until the Image Descriptors have loaded -->
  <div class="arjs-loader">
    <div>Loading, please wait...</div>
  </div>
  <a-scene vr-mode-ui="enabled: false" renderer="logarithmicDepthBuffer: true;" embedded arjs="trackingMethod: best; sourceType: webcam; debugUIEnabled: false;">
    <!-- a-nft is the Image Tracking entity; url is the Image Descriptors prefix, without extension -->
    <a-nft type="nft" url="<url-of-the-image-descriptors>" smooth="true" smoothCount="10" smoothTolerance=".01" smoothThreshold="5">
      <!-- the content to show over the image: here a 3D model, scaled and positioned to fit -->
      <a-entity gltf-model="<url-of-the-model>" scale="5 5 5" position="50 150 0"></a-entity>
    </a-nft>
    <a-entity camera></a-entity>
  </a-scene>
</body>

See the comments inline in the code above for explanations. You can refer to the A-Frame docs to learn everything about content and customization. You can add geometries, 3D models, videos and images, and you can customize their position, scale, rotation and so on. The only custom component here is a-nft, the Image Tracking HTML anchor.

<a-nft>

Here are the attributes for this entity:

type: the type of marker - ['nft' is the only valid value]. Component mapping: artoolkitmarker.type.
url: the URL of the Image Descriptors, without extension. Component mapping: artoolkitmarker.descriptorsUrl.
emitevents: emits 'markerFound' and 'markerLost' events - ['true', 'false'].
smooth: turn on/off camera smoothing - ['true', 'false'] - default: false.
smoothCount: number of matrices to smooth tracking over; more = smoother but slower follow - default: 5.
smoothTolerance: distance tolerance for smoothing; if smoothThreshold matrices are under tolerance, tracking will stay still - default: 0.01.
smoothThreshold: threshold for smoothing; will keep still unless enough matrices are over tolerance - default: 2.
size: size of the marker in meters. Component mapping: artoolkitmarker.size.

⚡️ It is suggested to use smooth, smoothCount and smoothTolerance because of the weak stabilization of content in Image Tracking. Thanks to smoothing, content is much more stable, from 3D models to 2D videos.

Event listeners

The arjs-nft-loaded event is fired when all NFT Markers have finished loading. This is when you will be able to start tracking your NFT Marker with the camera. You can use this to build a UI to inform the user that things are still loading.

Usage

window.addEventListener("arjs-nft-loaded", (event) => {
  // Hide loading overlay
});
\" smooth=\"true\" smoothCount=\"10\" smoothTolerance=\".01\" smoothThreshold=\"5\" > \" scale=\"5 5 5\" position=\"50 150 0\" > See on the comments above, inline on the code, for explanations. You can refer to A-Frame docs to know everything about content and customization. You can add geometries, 3D models, videos, images. And you can customize their position, scale, rotation and so on. The only custom component here is the a-nft , the Image Tracking HTML anchor.","title":"Render the content"},{"location":"image-tracking/#a-nft","text":"Here are the attributes for this entity Attribute Description Component Mapping type type of marker - ['nft' only valid value] artoolkitmarker.type url url of the Image Descriptors, without extension artoolkitmarker.descriptorsUrl emitevents emits 'markerFound' and 'markerLost' events - ['true', 'false'] - smooth turn on/off camera smoothing - ['true', 'false'] - default: false - smoothCount number of matrices to smooth tracking over, more = smoother but slower follow - default: 5 - smoothTolerance distance tolerance for smoothing, if smoothThreshold # of matrices are under tolerance, tracking will stay still - default: 0.01 - smoothThreshold threshold for smoothing, will keep still unless enough matrices are over tolerance - default: 2 - size size of the marker in meter artoolkitmarker.size \u26a1\ufe0f It is suggested to use smooth , smoothCount and smoothTolerance because of weak stabilization of content in Image Tracking. Thanks to smoothing, content is way more stable, from 3D models to 2D videos.","title":"<a-nft\\>"},{"location":"image-tracking/#event-listeners","text":"The arjs-nft-loaded event is fired when all NFT Markers have finished loading. This is when you will be able to start tracking your NFT Marker with the camera. You can use this to build a UI to inform the user that things are still loading.","title":"Event listeners"},{"location":"image-tracking/#usage","text":"window.addEventListener(\"arjs-nft-loaded\", (event) => { // Hide loading overlay });","title":"Usage"},{"location":"location-based/","text":"Location Based Important! You might want to check out the new AR.js LocAR project if you are interested in location-based AR. This aims to provide a cleaner API, with just a single version, and more frequent updates. In the future, updates on the location-based side will be focused on LocAR. Intro to location-based This article gives you a first glance to Location Based on AR.js. It can be used for indoor (but with low precision) and outdoor geopositioning of AR content. You can load places statically, from HTML or from Javascript, or you can load your data from local/remote json, or even through API calls. Choice is yours. On the article above there are all the options explained, as tutorials. Location Based has been implemented for both three.js and A-Frame. Each of these is documented below. This document is intended as reference documentation. There are also two tutorials available, with full example code: A-Frame location based three.js location based Limitations Location-based AR with AR.js is subject to certain limitations. Your device must have a GPS chip, accelerometer and magnetometer. On some devices, the sensors may be miscalibrated, resulting in an incorrect North. See, for example, this three.js issue . This is unfortunately a limitation of the device. This will be investigated further in LocAR , for example, as to whether certain devices are consistently \"out\" by a certain bearing. 
A-Frame

AR.js offers A-Frame components to implement location-based AR. There are three variants of the components, detailed below:

The new-location-based components. In most cases, these are recommended. They have been available since AR.js 3.4.0, incorporate various bug fixes, use simpler code, and provide a thin wrapper around the three.js API shown below. They are recommended for most uses, though they do not support all the events of the older components, due to a different internal implementation. Nonetheless, they are the components most likely to see further development; the older variants are unlikely to see further work besides bug fixes.

The projected components. These have been available since AR.js 3.3.1, use largely the same internal implementation as the classic components, and were the first to offer projection of latitude/longitude into Spherical Mercator, discussed below. They are generally not recommended unless you have problems with new-location-based.

The classic components, available before AR.js 3.3.1. These are similar to the projected components but do not offer the facility to convert between latitude/longitude and the projected coordinates used for augmented reality, which can cause problems for more specialist uses such as showing roads and paths in augmented reality. For most use cases it is preferable to use new-location-based, but some uses, such as embedded AR scenes, only work with the classic components.

The components

Each variant above includes two components: a camera component, which enables the location-based AR, and an entity-place component, which enables setting an entity's latitude and longitude. The exact component names for each variant are shown below:

new-location-based: camera component gps-new-camera, entity-place component gps-new-entity-place
projected: camera component gps-projected-camera, entity-place component gps-projected-entity-place
classic: camera component gps-camera, entity-place component gps-entity-place

Camera component (gps-new-camera, gps-projected-camera or gps-camera)

Required: yes
Max allowed per scene: 1

This component enables the Location AR. It has to be added to the camera entity. It makes it possible to handle both the position and rotation of the camera, and it's used to determine where the user is pointing their device. For example:

<a-camera gps-new-camera></a-camera>

Properties

positionMinAccuracy: minimum accuracy allowed for the position signal. Default: 100. Availability: all.
gpsMinDistance: controls how far the camera must move, in meters, to generate a GPS update event. Useful to prevent 'jumping' of augmented content due to frequent small changes in position. Default: 5. Availability: all.
simulateLatitude: simulates the latitude of the camera, to aid in testing. Default: 0 (disabled). Availability: all (but only triggers a GPS update event in new-location-based).
simulateLongitude: simulates the longitude of the camera, to aid in testing. Default: 0 (disabled). Availability: all (but only triggers a GPS update event in new-location-based).
simulateAltitude: simulates the altitude of the camera in meters above sea level, to aid in testing. Default: 0 (disabled). Availability: all.
alert: whether to show a message when the GPS signal is under the positionMinAccuracy. Default: false. Availability: projected, classic.
minDistance: if set, places with a distance from the user lower than this value are not shown. Only a positive value is allowed. Value is in meters.
In the new-location-based components, set the near clipping plane of the perspective camera instead. Default: 0 (disabled). Availability: projected, classic.
maxDistance: if set, places with a distance from the user higher than this value are not shown. Only a positive value is allowed. Value is in meters. In the new-location-based components, set the far clipping plane of the perspective camera instead. Default: 0 (disabled). Availability: projected, classic.
gpsTimeInterval: controls how frequently to obtain a new GPS position. If a previous GPS location is cached, the cached position will be used rather than a new position if its 'age' is less than this value, in milliseconds. This parameter is passed directly to the Geolocation API's watchPosition() method. Default: 0 (always use a new position, not cached). Availability: all.

Entity-place component (gps-new-entity-place, gps-projected-entity-place or gps-entity-place)

Required: yes
Max allowed per scene: no limit

This component makes each entity GPS-trackable. It assigns a specific world position to an entity, so that the user can see it when their device is pointing at its position in the real world. If the user is far from the entity, it will seem smaller. If it's too far away, it won't be seen at all. It requires latitude and longitude as a single string parameter (example with the a-box A-Frame primitive):

<a-box material="color: yellow" gps-new-entity-place="latitude: <your-latitude>; longitude: <your-longitude>"/>

⚡️ In addition, you can use the A-Frame "position" parameter to assign a y-value to change the height of the content. This value should be entered as meters above or below (if negative) the current camera height. For example, this would assign a height of 30 meters, displayed relative to the gps-new-camera's current height:

<a-box material="color: yellow" gps-new-entity-place="latitude: <your-latitude>; longitude: <your-longitude>" position="0 30 0"/>

Properties

distance: the current distance from the camera, in metres. Available in gps-new-entity-place only; for the classic components, please use events to obtain the current distance.

Events

Take a look at the UI and Events page for Location Based Custom Events.

⚡️ Usually, in Location Based, it's nice to have augmented content that always faces the user, so that when you rotate the camera, 3D models and, most of all, text remain clearly visible. Look at this example in order to create gps-new-entity-place entities that will always face the user (camera).

Viewing every distant object

If your location-based AR content is distant from the user (around 1km or more), it is recommended to use the new arjs-webcam-texture component (introduced in AR.js 3.2.0), which uses a three.js texture to stream the camera feed and allows distant content to be viewed. This component is automatically injected if the videoTexture parameter of the arjs system is set to true and the sourceType is webcam. For example (code snippet only):

<a-scene vr-mode-ui="enabled: false" embedded arjs="sourceType: webcam; videoTexture: true; debugUIEnabled: false">

Reducing shaking effects

In location-based mode, 'shaking' effects can occur due to frequent small changes in the device's orientation, caused by the high sensitivity of device sensors such as the accelerometer. If you are using AR.js 3.3.1 or greater (3.4.3 or greater for the new-location-based components), this can optionally be reduced using an exponential smoothing technique. Note that if you are NOT using the new-location-based components, there are currently some occasional display artefacts with this if the device is moved quickly or suddenly, so please test before you enable it in a finished application; work to resolve these is ongoing. Alternatively, please use the new-location-based components.
This is enabled by adding a custom look-controls component to your a-camera, with a smoothingFactor property. This replaces A-Frame's default look-controls component, which must be disabled. The name of the custom look-controls component varies depending on which version of the location-based components you are using: for new-location-based, use arjs-device-orientation-controls; for the classic and projected components, use arjs-look-controls. For example, in the new-location-based components:

<a-camera gps-new-camera look-controls="enabled: false" arjs-device-orientation-controls="smoothingFactor: 0.1"></a-camera>

or, otherwise:

<a-camera gps-camera look-controls="enabled: false" arjs-look-controls="smoothingFactor: 0.1"></a-camera>

Exponential smoothing works by applying a smoothing factor to each newly-read device rotation angle (obtained from sensor readings), such that the previous smoothed value counts for more than the current value, thus reducing 'noise' and 'jitter'. If k is the smoothing factor:

smoothedAngle = k * newValue + (1 - k) * previousSmoothedAngle

It can be seen from this that the smaller the value of k (the smoothingFactor property), the greater the smoothing effect. In tests, 0.1 appears to give the best result.

You can also reduce 'jumping' of augmented content when near a place - a bad-looking effect due to the GPS sensor's low precision. To do so you can use the gpsMinDistance property, as shown in the examples above. This will only update the position if the user has moved at least that number of metres.
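As a minimal sketch of the formula in plain JavaScript (the sensor readings here are hypothetical):

// k = smoothingFactor; a smaller k gives a stronger smoothing effect
function smoothAngle(previousSmoothedAngle, newValue, k) {
  return k * newValue + (1 - k) * previousSmoothedAngle;
}

let smoothed = 10;                        // initial angle, degrees
for (const reading of [12, 9, 11, 10]) {  // hypothetical sensor readings
  smoothed = smoothAngle(smoothed, reading, 0.1);
}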
Projection Details

The new-location-based and projected location-based components for AR.js use Spherical Mercator (aka EPSG:3857) to store both the camera position and the positions of added points of interest and other geographical data. Spherical Mercator is the same projection used by Google Maps: it projects the earth onto a flat surface. It works reasonably well at most latitudes, but is highly distorted near the poles. Latitude and longitude are projected into Spherical Mercator eastings and northings, which are approximately (but not exactly) equivalent to metres. The rationale for this is to allow easy addition of more complex geographic data, such as roads and paths. Such data can be projected and added to an AR.js scene and then, because Spherical Mercator units approximate to metres (away from the poles), the coordinates can be used directly as WebGL/A-Frame world coordinates.

Calculating world coordinates of arbitrary augmented content

The new-location-based and projected components have some useful properties and methods for working with more specialist augmented content (for example, you might want to overlay AR polylines or polygons representing roads and paths, downloaded from geodata APIs such as OpenStreetMap). Such data can be downloaded from the API as lat/lon-based coordinates, projected using AR.js API methods into Spherical Mercator (approximating to, but not exactly, metres; in tests good enough to use as world coordinates), and then added to the scene as a three.js object. This is implemented differently in the new-location-based and projected components, but the external API is (as of 3.4.3) the same.

The key method is the latLonToWorld(lat, lon) method of the gps-new-camera and gps-projected-camera components. This converts latitude and longitude directly to world coordinates, performing the projection as the first step and then calculating the world coordinates from the projected coordinates. It returns a 2-member array containing the x and z world coordinates, allowing the developer to calculate or specify the y coordinate (altitude) independently. Note that the sign of the Spherical Mercator northing is reversed to align with the OpenGL coordinate system (eastings are equivalent to x coordinates, and northings to z coordinates).

gps-new-camera implements projection via the underlying AR.js three.js LocationBased object (see the three.js documentation, below), which is responsible for the actual projection. gps-projected-camera provides similar functionality, but via a different method and with some implementation differences. In gps-projected-camera, unlike gps-new-camera, the original GPS position is set as the world origin.
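As a sketch of how this might be used from JavaScript (assuming a scene whose camera has the gps-new-camera component; the coordinates are illustrative, and the component instance is reached through A-Frame's standard components map):

// Project a lat/lon into world coordinates and place a box there.
const cameraEl = document.querySelector("[gps-new-camera]");
const gpsCam = cameraEl.components["gps-new-camera"];
const [x, z] = gpsCam.latLonToWorld(51.049, -0.723); // returns [x, z]
const box = document.createElement("a-box");
box.setAttribute("position", { x: x, y: 0, z: z }); // choose y (altitude) yourself
document.querySelector("a-scene").appendChild(box);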
three.js

For pure three.js (no A-Frame) it is recommended to use LocAR. The notes below, however, refer to the three.js version in the main AR.js repository. The three.js API keeps track of your current GPS location (or allows you to set a fake location) and allows you to add three.js objects at a given latitude and longitude. It includes these classes:

THREEx.LocationBased - general manager class for the three.js location-based API.
THREEx.WebcamRenderer - renders the feed from the webcam as a WebGL texture.
THREEx.DeviceOrientationControls - for detecting changes in the orientation of the device.

These classes include the following methods:

LocationBased

constructor(scene, camera, options={}): initialises a new LocationBased object. Takes a THREE.Scene and a THREE.Camera object as parameters, as well as an object of GPS options (see setGpsOptions(), below).
setProjection(proj): allows the projection to be defined. By default, Spherical Mercator is used. The projection object must provide a project() method which takes longitude and latitude as parameters and returns a 2-member array of projected coordinates (easting, northing).
setGpsOptions(options={}): sets the GPS options. These include gpsMinDistance and gpsMinAccuracy, described in the A-Frame documentation above.
startGps(): starts the GPS. Takes an optional maximumAge, as used by the native Geolocation API.
stopGps(): stops the GPS.
fakeGps(lon, lat, elev=null, acc=0): fakes a GPS position being received. Elevation and accuracy can optionally be provided.
lonLatToWorldCoords(lon, lat): projects a given longitude and latitude into world coordinates using the current projection. The sign of the northing is reversed to align it with the OpenGL coordinate system.
add(object, lon, lat, elev): adds a given three.js object to the world at the given longitude and latitude, and at the given elevation.
setWorldPosition(object, lon, lat, elev): changes the world position of a given object to the given longitude and latitude, without adding it to the scene.
setElevation(elev): sets the current elevation in metres. This will set the camera's y coordinate to that elevation.
on(eventname, eventhandler): allows event handlers to be specified. Currently gpsupdate and gpserror handlers are supported, for receiving a new GPS position and GPS errors (as in the Geolocation API) respectively.

WebcamRenderer

Renders the webcam feed.

constructor(renderer, videoElementSelector): creates a WebcamRenderer. Takes a THREE.WebGLRenderer plus a selector for an HTML video element to stream the feed to.
update(): updates the camera feed. Should be done each time the scene is rendered.

DeviceOrientationControls

Represents the device orientation controls, i.e. accelerometer and magnetic field sensors, for determining the orientation of the device. Based on the sample included in the three.js distribution.

constructor(cameraObject): creates a DeviceOrientationControls object. Takes a three.js camera.
update(): updates the device orientation controls. Should be done each time the scene is rendered.

Using three.js location-based in an application

You are recommended to use npm to install AR.js, import it into your application, and use a bundler such as Webpack to build. Here is a sample package.json:

{
  "dependencies": {
    "@ar-js-org/ar.js": "3.4.5"
  },
  "devDependencies": {
    "webpack": "^5.75.0",
    "webpack-cli": "^5.0.0"
  },
  "scripts": {
    "build": "npx webpack"
  }
}

and a sample webpack.config.js:

const path = require('path');

module.exports = {
  mode: 'development',
  entry: './index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  optimization: {
    minimize: false
  }
};

This will build a bundle named bundle.js in the dist subdirectory from a source file index.js. Here is an example of importing the components into an application:

import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'
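Tying these classes together, a minimal index.js might look like the following sketch. It uses only the methods documented above; the canvas and video element ids are assumptions about your HTML page, and the GPS position is faked with fakeGps() so it can be tested on a desktop:

import * as THREE from 'three';
import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js';

const canvas = document.getElementById('canvas1');   // assumed <canvas id='canvas1'>
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 10000);
const renderer = new THREE.WebGLRenderer({ canvas });
renderer.setSize(window.innerWidth, window.innerHeight);

const arjs = new THREEx.LocationBased(scene, camera, { gpsMinDistance: 5 });
const webcam = new THREEx.WebcamRenderer(renderer, '#video1'); // assumed <video id='video1'>
const controls = new THREEx.DeviceOrientationControls(camera);

// Add a red box a short distance north of a faked GPS position
const box = new THREE.Mesh(
  new THREE.BoxGeometry(20, 20, 20),
  new THREE.MeshBasicMaterial({ color: 0xff0000 })
);
arjs.add(box, -0.72, 51.051);   // lon, lat
arjs.fakeGps(-0.72, 51.05);     // lon, lat; use startGps() on a real device

renderer.setAnimationLoop(() => {
  controls.update();  // update device orientation
  webcam.update();    // update the camera feed
  renderer.render(scene, camera);
});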
Marker Based

Markers can be of three different types:
Hiro
Barcode
Pattern

To learn more about markers, please read these articles: AR.js basic Marker Based tutorial and Markers explanation, and Deliver AR.js experiences using only QRCodes (Markers inside QRCodes).

TL;DR
The Hiro marker is the default one, and not actually very useful.
Barcode markers are auto-generated markers, derived from matrix computations. Learn more in the articles above on how to use them. If you need the full list of barcode markers, here it is.
Pattern markers are custom ones, created starting from an image (very simple, high contrast) loaded by the user.

⚡️ You can create your Pattern Markers with this tool. It will generate an image to scan and a .patt file, to be loaded in the AR.js web app so that it can recognise the marker when running.

How to choose good images for Pattern Markers

Markers have a black border and high-contrast shapes. Lately, we have also added white-border markers with a black background, although the classic ones, with a black border, behave better. Here's an article explaining all the good practices for choosing images to generate custom markers: 10 tips to enhance your AR.js app.
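Once generated, the image and .patt file might be used like this (a sketch; the .patt URL is a placeholder, and the attributes are documented in the API reference below):

<a-marker type="pattern" url="https://example.org/markers/my-marker.patt" emitevents="true">
  <a-box position="0 0.5 0" material="color: red"></a-box>
</a-marker>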
API Reference for Marker Based

A-Frame

<a-marker/>

Here are the attributes for this entity:

type: type of marker - ['pattern', 'barcode', 'unknown']. Component mapping: artoolkitmarker.type.
size: size of the marker in meters. Component mapping: artoolkitmarker.size.
url: URL of the pattern - IIF type='pattern'. Component mapping: artoolkitmarker.patternUrl.
value: value of the barcode - IIF type='barcode'. Component mapping: artoolkitmarker.barcodeValue.
preset: parameters preset - ['hiro', 'kanji']. Component mapping: artoolkitmarker.preset.
emitevents: emits 'markerFound' and 'markerLost' events - ['true', 'false'].
smooth: turn on/off camera smoothing - ['true', 'false'] - default: false.
smooth-count: number of matrices to smooth tracking over; more = smoother but slower follow - default: 5.
smooth-tolerance: distance tolerance for smoothing; if smooth-threshold matrices are under tolerance, tracking will stay still - default: 0.01.
smooth-threshold: threshold for smoothing; will keep still unless enough matrices are over tolerance - default: 2.

three.js

threex-artoolkit

threex.artoolkit is the three.js extension to easily handle artoolkit.

Architecture

threex.artoolkit is composed of 3 classes:

THREEx.ArToolkitSource: the image which is analyzed to do the position tracking. It can be the webcam, a video, or even an image.
THREEx.ArToolkitContext: the main engine. It will actually find the marker position in the image source.
THREEx.ArMarkerControls: controls the position of the marker. It uses the classical three.js controls API. It will make sure to position your content right on top of the marker.

THREEx.ArMarkerControls

var parameters = {
	// size of the marker in meter
	size: 1,
	// type of marker - ['pattern', 'barcode', 'unknown']
	type: "unknown",
	// url of the pattern - IIF type='pattern'
	patternUrl: null,
	// value of the barcode - IIF type='barcode'
	barcodeValue: null,
	// change matrix mode - [modelViewMatrix, cameraTransformMatrix]
	changeMatrixMode: "modelViewMatrix",
	// turn on/off camera smoothing
	smooth: true,
	// number of matrices to smooth tracking over, more = smoother but slower follow
	smoothCount: 5,
	// distance tolerance for smoothing, if smoothThreshold # of matrices are under tolerance, tracking will stay still
	smoothTolerance: 0.01,
	// threshold for smoothing, will keep still unless enough matrices are over tolerance
	smoothThreshold: 2
};

THREEx.ArToolkitContext

var parameters = {
	// debug - true if one should display artoolkit debug canvas, false otherwise
	debug: false,
	// the mode of detection - ['color', 'color_and_matrix', 'mono', 'mono_and_matrix']
	detectionMode: 'color_and_matrix',
	// type of matrix code - valid iif detectionMode ends with 'matrix' - [3x3, 3x3_HAMMING63, 3x3_PARITY65, 4x4, 4x4_BCH_13_9_3, 4x4_BCH_13_5_5]
	matrixCodeType: '3x3',
	// pattern ratio for custom markers
	patternRatio: 0.5,
	// labeling mode for markers - ['black_region', 'white_region']
	// black_region: black-bordered markers on a white background; white_region: white-bordered markers on a black background
	labelingMode: 'black_region',
	// url of the camera parameters
	cameraParametersUrl: THREEx.ArToolkitContext.baseURL + '../data/data/camera_para.dat',
	// tune the maximum rate of pose detection in the source image
	maxDetectionRate: 60,
	// resolution at which we detect pose in the source image
	canvasWidth: 640,
	canvasHeight: 480,
	// enable image smoothing or not for canvas copy - default to true
	// https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/imageSmoothingEnabled
	imageSmoothingEnabled: true
};
THREEx.ArToolkitSource

var parameters = {
	// type of source - ['webcam', 'image', 'video']
	sourceType: "webcam",
	// url of the source - valid if sourceType = image|video
	sourceUrl: null,
	// resolution at which we initialize the source image
	sourceWidth: 640,
	sourceHeight: 480,
	// resolution displayed for the source
	displayWidth: 640,
	displayHeight: 480
};
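As a sketch of how the three classes fit together in a render loop: note that the init() and getProjectionMatrix() calls follow the usual AR.js three.js examples rather than the parameter lists above, and the camera parameters and pattern URLs are placeholders:

const scene = new THREE.Scene();
const camera = new THREE.Camera();
scene.add(camera);
const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(640, 480);
document.body.appendChild(renderer.domElement);

// the image analysed for tracking: here, the webcam
const arToolkitSource = new THREEx.ArToolkitSource({ sourceType: 'webcam' });
arToolkitSource.init();

// the main engine: finds the marker position in the image source
const arToolkitContext = new THREEx.ArToolkitContext({
  cameraParametersUrl: 'data/camera_para.dat',  // placeholder
  detectionMode: 'mono'
});
arToolkitContext.init(() => {
  camera.projectionMatrix.copy(arToolkitContext.getProjectionMatrix());
});

// markerRoot is kept positioned on top of the marker
const markerRoot = new THREE.Group();
scene.add(markerRoot);
new THREEx.ArMarkerControls(arToolkitContext, markerRoot, {
  type: 'pattern',
  patternUrl: 'data/patt.hiro'  // placeholder
});

renderer.setAnimationLoop(() => {
  if (arToolkitSource.ready) {
    arToolkitContext.update(arToolkitSource.domElement); // update marker detection
  }
  renderer.render(scene, camera);
});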
UI and Custom Events

To make an AR.js based web app look better and add UI capabilities, it's possible to treat it as a common website. Here you will learn how to use the Raycaster, Custom Events and interaction with overlayed DOM elements.

Handle clicks on AR content

It's now possible to use AR.js (marker based or image tracking) with the latest versions of A-Frame (1.0.0 and above) in order to have touch gestures to zoom and rotate your content! Disclaimer: this will work for your entire a-scene, so it's not a real option if you have to handle different interactions for multiple markers. It will work like a charm if you have one marker/image per scene. Check Fabio Cortès' great walkthrough in order to add this feature to your AR.js web app.
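Separately from the gesture walkthrough, plain tap/click handling can be wired up with A-Frame's own cursor and raycaster components. Below is a minimal sketch; the entity id, class name and handler body are illustrative, not part of AR.js:

```html
<a-scene embedded arjs>
  <a-marker preset="hiro">
    <!-- the .clickable class is matched by the raycaster below -->
    <a-box id="clickable-box" class="clickable" material="color: blue"></a-box>
  </a-marker>
  <!-- rayOrigin: mouse makes screen taps/clicks raycast into the scene -->
  <a-entity camera cursor="rayOrigin: mouse" raycaster="objects: .clickable"></a-entity>
</a-scene>
<script>
  document.querySelector('#clickable-box').addEventListener('click', () => {
    console.log('box tapped'); // do whatever you need here
  });
</script>
```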
You can use the walkthrough's exact approach for Image Tracking a-nft and Marker Based a-entity elements. The clickhandler name can be customized; you can choose the one you like most, it's just a reference. Keep in mind that this click/touch interaction is not handled by AR.js at all; it is all A-Frame based. Always look at the A-Frame documentation for more details. Check out the tutorial.

Interaction with Overlayed DOM content

You can add interactions by adding DOM HTML elements on the body. For example, starting from a basic marker scene, we can add the following on the body, outside the a-scene:
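A sketch of such an overlay (the id, button and text are illustrative):

```html
<div id="overlay">
  <button id="action-button">Click me!</button>
</div>
```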
Then, we need to add some CSS to absolutely position the DIV and BUTTON, and also some scripting to listen to click events. You can customize your a-scene content: load 3D models, play videos, and so on. See the A-Frame docs on how to change entity properties and work with events: https://aframe.io/docs/1.0.0/introduction/javascript-events-dom-apis.html. We will end up with the following code:
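A sketch of what this could look like (the selectors and the property being changed are illustrative):

```html
<style>
  /* Absolutely position the overlay above the AR canvas */
  #overlay {
    position: absolute;
    top: 10px;
    left: 10px;
    z-index: 10;
  }
</style>
<script>
  window.addEventListener('load', () => {
    const button = document.querySelector('#action-button');
    button.addEventListener('click', () => {
      // change any property of an entity in your a-scene, e.g. a box's colour
      const box = document.querySelector('#ar-box'); // assumed id of an entity in the scene
      box.setAttribute('material', 'color', 'green');
    });
  });
</script>
```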
Custom Events

AR.js dispatches several Custom Events. Some of them are general, others are specific to an AR feature. Here's the full list.

| Custom Event name | Description | Payload | Source File | Feature |
|---|---|---|---|---|
| arjs-video-loaded | Fired when camera video stream has been appended to the DOM | { detail: { component: }} | threex-artoolkitsource.js | all |
| camera-error | Fired when camera video stream could not be retrieved | { error: } | threex-artoolkitsource.js | all |
| camera-init | Fired when camera video stream has been retrieved correctly | { stream: } | threex-artoolkitsource.js | all |
| markerFound | Fired when a marker in Marker Based, or a picture in Image Tracking, has been found | - | component-anchor.js | only Image Tracking and Marker Based |
| markerLost | Fired when a marker in Marker Based, or a picture in Image Tracking, has been lost | - | component-anchor.js | only Image Tracking and Marker Based |
| arjs-nft-loaded | Fired when an NFT marker is fully loaded | - | threex-armarkercontrols-nft-start.js | only Image Tracking |
| gps-camera-update-position | Fired when gps-camera has updated its position | { detail: { position: , origin: }} | gps-camera.js | only Location Based |
| gps-entity-place-update-position | Fired when gps-entity-place has updated its position | { detail: { distance: }} | gps-entity-place.js | only classic and projected Location Based |
| gps-entity-place-added | Fired when the gps-entity-place has been added | { detail: { component: }} | gps-entity-place.js | only classic and projected Location Based |
| gps-camera-origin-coord-set | Fired when the origin coordinates are set | - | gps-camera.js | only classic and projected Location Based |
| gps-entity-place-loaded | Fired when the gps-entity-place has been loaded - see the 'loaded' event of A-Frame entities | { detail: { component: }} | gps-entity-place.js | only classic and projected Location Based |

Internal Loading Events

⚡️ Both Image Tracking and Location Based automatically handle an internal event when:

- the origin location has been set (Location Based)
- the Image Descriptors are fully loaded (Image Tracking)

and automatically remove from the DOM any elements that match the .arjs-loader selector. You can add any custom loader that will be removed in the above situations; just use the .arjs-loader class on it.

Trigger actions when image has been found

You can trigger any action you want when a marker/image has been found. You can avoid linking content to a marker/image and only trigger an action (like a redirect to an external website) when the anchor has been found by the camera, as in the sketch below.
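A minimal sketch of this pattern (the descriptor path and target URL are illustrative; the markerFound event and the .arjs-loader behaviour follow the tables above):

```html
<!-- This loader is removed automatically by AR.js once the image descriptors have loaded -->
<div class="arjs-loader">
  <div>Loading, please wait...</div>
</div>
<a-scene vr-mode-ui="enabled: false" embedded arjs="trackingMethod: best; sourceType: webcam; debugUIEnabled: false">
  <!-- no content inside the anchor: we only want the markerFound event -->
  <a-nft id="anchor" type="nft" url="./assets/my-image-descriptors"></a-nft>
  <a-entity camera></a-entity>
</a-scene>
<script>
  // Redirect to an external site the first time the image is found
  document.querySelector('#anchor').addEventListener('markerFound', () => {
    window.location.href = 'https://example.com/'; // illustrative URL
  });
</script>
```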
Trigger action when marker has been found

The same approach works for Marker Based scenes: set emitevents="true" on your a-marker and listen for the markerFound event on it, exactly as in the sketch above.

Get distance from marker

```js
// import this on your HTML
window.addEventListener('load', () => {
    const camera = document.querySelector('[camera]');
    const marker = document.querySelector('a-marker');
    let check;

    marker.addEventListener('markerFound', () => {
        let cameraPosition = camera.object3D.position;
        let markerPosition = marker.object3D.position;
        let distance = cameraPosition.distanceTo(markerPosition);

        check = setInterval(() => {
            cameraPosition = camera.object3D.position;
            markerPosition = marker.object3D.position;
            distance = cameraPosition.distanceTo(markerPosition);
            // do what you want with the distance:
            console.log(distance);
        }, 100);
    });

    marker.addEventListener('markerLost', () => {
        clearInterval(check);
    });
});
```
","title":"Trigger actions when image has been found"},{"location":"ui-events/#trigger-action-when-marker-has-been-found","text":" ","title":"Trigger action when marker has been found"},{"location":"ui-events/#get-distance-from-marker","text":"// import this on your HTML window.addEventListener('load', () => { const camera = document.querySelector('[camera]'); const marker = document.querySelector('a-marker'); let check; marker.addEventListener('markerFound', () => { let cameraPosition = camera.object3D.position; let markerPosition = marker.object3D.position; let distance = cameraPosition.distanceTo(markerPosition) check = setInterval(() => { cameraPosition = camera.object3D.position; markerPosition = marker.object3D.position; distance = cameraPosition.distanceTo(markerPosition) // do what you want with the distance: console.log(distance); }, 100); }); marker.addEventListener('markerLost', () => { clearInterval(check); }) })","title":"Get distance from marker"},{"location":"location-based-aframe/","text":"AR.js A-Frame Location-Based Tutorial - Develop a Simple Points of Interest App Introduction This tutorial ( updated for AR.js 3.4 ) aims to take you from a basic location-based AR.js example all the way to a working, simple points of interest app. We will start with an HTML-only example and gradually add JavaScript to make our app more sophisticated. It is expected that you have some basic A-Frame experience. Do note that this code will not work on Firefox on a mobile device due to limitations of the device orientation API; absolute orientation cannot be obtained. Chrome on Android is recommended. Basic example We will start with a basic example, using pure HTML, to display a box close to your location. This example is identical to the location-based example on the index page . AR.js A-Frame Location-based; longitude: \" depth=\"10\" height=\"10\" width=\"10\"> Upload this to a server with HTTPS, or run locally on localhost . Make sure you replace your-lat and your-lon with values close to your actual position (to see the box clearly, I would recommend an offset of around 0.001 degrees in any direction for both the latitude and longitude). How does this work? The arjs component of our a-scene initialises AR.js. Note the properties we are setting: we set the sourceType to webcam for obvious reasons but also set videoTexture to true. This is vital in an outdoor location-based AR app as it allows distant augmented content - such at the peaks we are going to eventually visualise - to be seen. (It does this by using a three.js texture for the camera feed which can be easily combined with our augmented content). Note the gps-new-camera component on our a-camera . This is the AR.js component which automatically converts latitudes and longitudes into 3D world coordinates, allowing us to use latitude and longitude, rather than world coordinates, when adding places. Note that we are using gps-new-camera , not gps-camera . The gps-new-camera component includes some bugfixes and makes it easy for us to work with arbitrary geographical data provided by a server, as internally it uses the Spherical Mercator projection to represent the augmented content's world coordinates. Spherical Mercator units are commonly used to represent mapping data and are almost (but not quite) equivalent to metres. Away from the polar regions, though, it's good enough to use for AR. We then create an a-box primitive. This is the augmented content that we want to display. 
Ordinarily, in A-Frame, you would give this a position in world coordinates. However, AR.js, and specifically the gps-new-entity-place component, allows us to position it using latitude and longitude instead: we can position any A-Frame entity at a given latitude and longitude this way.

Things to try

- Change the a-box to some other kind of A-Frame primitive, such as an a-sphere or a-cylinder. Does it still work?
- Try adding multiple objects with different colours at different locations.
- Try adding a text primitive at a nearby latitude and longitude. You will need to use the A-Frame look-at component to ensure the text always faces the camera.
- Try giving your objects an elevation. This can be done by setting the y coordinate of the position property of each object to a given height (in metres) and setting the x and z coordinates to 0. Having done that, try giving the camera an elevation by similarly setting its position property, and look at the effect this has on where the objects appear.

Introducing JavaScript with AR.js

Much of the power of A-Frame, and AR.js, comes from adding scripting to your basic applications. It is assumed that you already know the basics of how to create components in A-Frame. We will start with a very basic example, which simply retrieves your current GPS location and adds a red box immediately to the north. Create this JavaScript, basic.js, and link it to the HTML example shown above. (Remove the hard-coded red box from the HTML first.)

```js
window.onload = () => {
    let testEntityAdded = false;

    const el = document.querySelector("[gps-new-camera]");

    el.addEventListener("gps-camera-update-position", e => {
        if (!testEntityAdded) {
            alert(`Got first GPS position: lon ${e.detail.position.longitude} lat ${e.detail.position.latitude}`);

            // Add a box to the north of the initial GPS position
            const entity = document.createElement("a-box");
            entity.setAttribute("scale", { x: 20, y: 20, z: 20 });
            entity.setAttribute('material', { color: 'red' });
            entity.setAttribute('gps-new-entity-place', {
                latitude: e.detail.position.latitude + 0.001,
                longitude: e.detail.position.longitude
            });
            document.querySelector("a-scene").appendChild(entity);
        }
        testEntityAdded = true;
    });
};
```

How is this working?

- We set up an onload function to run when the page loads. With A-Frame, we can only use entities once they have been loaded into the DOM, so we must delay the execution of the code until the page loads.
- Using the normal DOM API, we use document.querySelector() to obtain the entity with the gps-new-camera component attached to it (which will be your a-camera).
- We then handle the gps-camera-update-position event. This event is emitted by the camera entity when we receive a new GPS location. This allows us to write code which runs every time we get a new GPS position, such as downloading new POI data from a server. We can retrieve the new location via the e.detail.position object, which has longitude and latitude properties.
- In this example, we check that we have not already added our entity (via the testEntityAdded boolean), display the location to the user, and then create a new entity dynamically, specifying its scale and colour using standard DOM/A-Frame techniques. We then dynamically add a gps-new-entity-place component to the entity, with the latitude set to the GPS latitude plus 0.001 degrees (so it will appear a short distance to the north) and the longitude set to the current GPS longitude.
- Finally we add the entity to the scene using the standard DOM appendChild() method.
Things to try

Add three more entities to the scene, close to the original GPS position:

- a yellow sphere 0.001 degrees to the east;
- an orange cylinder 0.001 degrees to the south;
- a magenta cone 0.001 degrees to the west.

Connecting to a web server

We will now enhance the example to download data from a web server. The server used will be the Hikar server, used by the Hikar project:

https://hikar.org/webapp/map?bbox=west,south,east,north&layers=poi&outProj=4326

This provides OpenStreetMap data for Europe and Turkey (apologies, other parts of the world are not covered due to server constraints). Note how we specify the bounding box with the bbox parameter.

```js
window.onload = () => {
    let downloaded = false;

    const el = document.querySelector("[gps-new-camera]");

    el.addEventListener("gps-camera-update-position", async(e) => {
        if (!downloaded) {
            const west = e.detail.position.longitude - 0.01,
                east = e.detail.position.longitude + 0.01,
                south = e.detail.position.latitude - 0.01,
                north = e.detail.position.latitude + 0.01;

            const response = await fetch(`https://hikar.org/webapp/map?bbox=${west},${south},${east},${north}&layers=poi&outProj=4326`);
            const pois = await response.json();

            pois.features.forEach ( feature => {
                const entity = document.createElement("a-box");
                entity.setAttribute("scale", { x: 20, y: 20, z: 20 });
                entity.setAttribute('material', { color: 'red' });
                entity.setAttribute('gps-new-entity-place', {
                    latitude: feature.geometry.coordinates[1],
                    longitude: feature.geometry.coordinates[0]
                });
                document.querySelector("a-scene").appendChild(entity);
            });
        }
        downloaded = true;
    });
};
```

Much of the logic is similar to the previous example, but note that we now send a request to the web server via the fetch API, sending a bounding box surrounding the current position. The server sends back GeoJSON. GeoJSON contains a features array containing each point of interest, and each feature includes a geometry object containing the latitude and longitude within a two-member coordinates array. So we loop through each feature, dynamically create an entity (as in the previous example) from the current feature, use the latitude and longitude from the GeoJSON to create the gps-new-entity-place component, and add it to the scene.

Things to try

- Try requesting the Hikar URL directly in your browser, supplying a bounding box representing an area you are familiar with, and explore the format used for points of interest of different types. Each GeoJSON feature object has a properties object containing properties describing the point of interest. The amenity property is commonly used: this describes the type of amenity (such as restaurant, cafe, pub, etc).
- Try colouring the boxes differently depending on point of interest type (e.g. restaurants, cafes, pubs, etc).

Adding text labels

The next example shows how you can add text labels to your POIs.
```js
window.onload = () => {
    let downloaded = false;

    const el = document.querySelector("[gps-new-camera]");

    el.addEventListener("gps-camera-update-position", async(e) => {
        if (!downloaded) {
            const west = e.detail.position.longitude - 0.05,
                east = e.detail.position.longitude + 0.05,
                south = e.detail.position.latitude - 0.05,
                north = e.detail.position.latitude + 0.05;
            console.log(`${west} ${south} ${east} ${north}`);

            const response = await fetch(`https://hikar.org/webapp/map?bbox=${west},${south},${east},${north}&layers=poi&outProj=4326`);
            const pois = await response.json();

            pois.features.forEach ( feature => {
                const compoundEntity = document.createElement("a-entity");
                compoundEntity.setAttribute('gps-new-entity-place', {
                    latitude: feature.geometry.coordinates[1],
                    longitude: feature.geometry.coordinates[0]
                });

                const box = document.createElement("a-box");
                box.setAttribute("scale", { x: 20, y: 20, z: 20 });
                box.setAttribute('material', { color: 'red' });
                box.setAttribute("position", { x: 0, y: 20, z: 0 });

                const text = document.createElement("a-text");
                const textScale = 100;
                text.setAttribute("look-at", "[gps-new-camera]");
                text.setAttribute("scale", { x: textScale, y: textScale, z: textScale });
                text.setAttribute("value", feature.properties.name);
                text.setAttribute("align", "center");

                compoundEntity.appendChild(box);
                compoundEntity.appendChild(text);
                document.querySelector("a-scene").appendChild(compoundEntity);
            });
        }
        downloaded = true;
    });
};
```

How is this working?

- We now create a compound entity. In A-Frame, a compound entity is an entity which has other entities as children. Here, we create a compound entity, position it at the POI's latitude and longitude, and add the box, plus a new text entity containing the POI name, to it.
- We create the box as before, and set its y coordinate to 20. This is relative to its parent, i.e. the compound entity. The compound entity is already positioned at the correct latitude and longitude, so we will position the box 20 metres above that position.
- We then create a text entity, scale it appropriately, and set its value attribute to the name of the feature from the GeoJSON.
- Note the use of the look-at component. This makes a given A-Frame entity look at another. Here we want the text to look at the camera (i.e. the entity with a gps-new-camera property), so we always see it. The look-at component is a third-party component and must be added to your HTML with a script tag, for example `<script src="https://unpkg.com/aframe-look-at-component@0.8.0/dist/aframe-look-at-component.min.js"></script>` (the exact version is an assumption).
- We then append the box and text to the compound entity, and append our compound entity to the scene.

Things to try

- Try filtering out POIs with no name, so that only those with a name are displayed. Those without a name should not be displayed, not even as a box.
- Try implementing logic to re-download POIs if the user moves 0.05 degrees of either latitude or longitude from the previous download position.
- Each GeoJSON POI includes an osm_id property, which is a unique OpenStreetMap ID for that POI. Using the osm_id, implement logic so that a POI is not re-added to the scene if it is already present.
  (This may happen if you move 0.05 degrees but return to an area you have already visited.) A possible approach to the last two exercises is sketched below.
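One possible approach (a sketch only; there are other ways to structure this, and the helper name is illustrative), keyed on the osm_id property described above:

```js
// Track which POIs are already in the scene, keyed on their OpenStreetMap ID.
const addedPois = new Set();

function addPois(features) {
    const scene = document.querySelector("a-scene");
    features.forEach(feature => {
        const id = feature.properties.osm_id;
        if (addedPois.has(id)) return; // already in the scene - skip it
        addedPois.add(id);

        const entity = document.createElement("a-box");
        entity.setAttribute("scale", { x: 20, y: 20, z: 20 });
        entity.setAttribute("material", { color: "red" });
        entity.setAttribute("gps-new-entity-place", {
            latitude: feature.geometry.coordinates[1],
            longitude: feature.geometry.coordinates[0]
        });
        scene.appendChild(entity);
    });
}
```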
Location-based AR.js with three.js - Develop a simple Points of Interest app

AR.js 3.4 features a pure three.js API for location-based AR.
Here is a series of tutorials taking you through how to use it, from the basics to a more advanced example: a simple but working Points of Interest app using a live web API. It is expected that you have some basic three.js experience. Do note that this code will not work on Firefox on a mobile device due to limitations of the device orientation API; absolute orientation cannot be obtained. Chrome on Android is recommended.

Installing

It is recommended to install AR.js via npm, build with a bundler such as Webpack, and import it into your application. Here is a sample package.json:

```json
{
    "dependencies": {
        "@ar-js-org/ar.js": "3.4.5"
    },
    "devDependencies": {
        "webpack": "^5.75.0",
        "webpack-cli": "^5.0.0"
    },
    "scripts": {
        "build": "npx webpack"
    }
}
```

and a sample webpack.config.js:

```js
const path = require('path');

module.exports = {
    mode: 'development',
    entry: './index.js',
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js'
    },
    optimization: {
        minimize: false
    }
};
```

This will build a bundle named bundle.js in the dist subdirectory from a source file index.js. This will be assumed in the examples.

- Part 1: Hello World
- Part 2: Using the GPS and Device Orientation
- Part 3: Connecting to a web API

Location-based AR.js with three.js

Part 1 - Hello World!

The first part of this tutorial will show you how to create a "hello world" application using the pure three.js API for location-based AR.js. It is assumed you are aware of basic three.js concepts, such as the scene, renderer and camera, as well as geometries, materials and meshes. This example will set your location to a "fake" GPS location and add a box a short distance away.
Let's start with the HTML:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Location-based AR.js with three.js</title>
  </head>
  <body>
    <canvas id="canvas1" style="background-color: black; width: 100%; height: 100%"></canvas>
    <!-- the bundle built by Webpack: our code plus the three.js and AR.js dependencies -->
    <script type="module" src="dist/bundle.js"></script>
  </body>
</html>
```

This example assumes that you have installed AR.js via npm and used Webpack to build the application, as described on the index page for the tutorial. We link in the built bundle of our own code plus the three.js and AR.js dependencies. Here is our own code: save this as index.js.

```js
import * as THREE from 'three';
import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js';

function main() {
    const canvas = document.getElementById('canvas1');

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000);
    const renderer = new THREE.WebGLRenderer({ canvas: canvas });

    const arjs = new THREEx.LocationBased(scene, camera);
    const cam = new THREEx.WebcamRenderer(renderer);

    const geom = new THREE.BoxGeometry(20, 20, 20);
    const mtl = new THREE.MeshBasicMaterial({ color: 0xff0000 });
    const box = new THREE.Mesh(geom, mtl);

    arjs.add(box, -0.72, 51.051);
    arjs.fakeGps(-0.72, 51.05);

    requestAnimationFrame(render);

    function render() {
        if (canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) {
            renderer.setSize(canvas.clientWidth, canvas.clientHeight, false);
            const aspect = canvas.clientWidth / canvas.clientHeight;
            camera.aspect = aspect;
            camera.updateProjectionMatrix();
        }
        cam.update();
        renderer.render(scene, camera);
        requestAnimationFrame(render);
    }
}

main();
```

Much of this should be familiar to you from basic three.js examples, and is written in the same style as the manual. As normal, we create a THREE.Scene, a THREE.PerspectiveCamera and a THREE.WebGLRenderer using our canvas. What comes next, though, is new and specific to AR.js:

```js
const arjs = new THREEx.LocationBased(scene, camera);
const cam = new THREEx.WebcamRenderer(renderer);
```

We use two new objects, both part of the AR.js API. Firstly, THREEx.LocationBased is the overall AR.js "manager" object, and secondly, THREEx.WebcamRenderer is responsible for rendering the camera feed. We need to supply our scene and camera as arguments to THREEx.LocationBased, and our renderer as an argument to THREEx.WebcamRenderer. The THREEx.WebcamRenderer will create a video element to capture the webcam. Alternatively, if you have a video element already set up in your HTML, you can pass its CSS selector into the WebcamRenderer as an optional argument. For example:

```js
const cam = new THREEx.WebcamRenderer(renderer, '#video1');
```

Next, using standard three.js code, we set up a mesh using a box geometry and red material (i.e. a red box). However, this is where it then gets interesting:

```js
arjs.add(box, -0.72, 51.051);
```

Rather than setting the box's position as we would normally do in standard three.js, we add it to a specific real-world location defined by longitude and latitude. The add() method of THREEx.LocationBased allows us to do that. Having positioned our box in a specific real-world location, we now need to place ourselves (i.e. the camera) at a given real-world location. We can do this with THREEx.LocationBased's fakeGps() method, which takes longitude and latitude as parameters:

```js
arjs.fakeGps(-0.72, 51.05);
```

This places us just to the south of the red box. By default, we face north, so the red box will appear in front of us. The remaining code is the standard three.js code for rendering each frame and dealing with potential screen resizes. However, note this code within the rendering function:

```js
cam.update();
```

This API call will render the latest camera frame.

Try it!
Try it on either a desktop machine or an Android device running Chrome. On a mobile device or desktop you should see the feed from the webcam, and a red box just in front of you. Note that the mobile device will not yet respond to changes in orientation: we will add that next time. For this reason you must ensure the box is to your north, as the default view is to face north.

Faking rotation on a desktop machine

If you do not have a suitable mobile device, you can simulate rotation with the mouse. The code below will do this (add it to your main block of code, just before the rendering function):

```js
const rotationStep = THREE.Math.degToRad(2);

let mousedown = false, lastX = 0;

window.addEventListener("mousedown", e => {
    mousedown = true;
});

window.addEventListener("mouseup", e => {
    mousedown = false;
});

window.addEventListener("mousemove", e => {
    if (!mousedown) return;
    if (e.clientX < lastX) {
        camera.rotation.y -= rotationStep;
        if (camera.rotation.y < 0) {
            camera.rotation.y += 2 * Math.PI;
        }
    } else if (e.clientX > lastX) {
        camera.rotation.y += rotationStep;
        if (camera.rotation.y > 2 * Math.PI) {
            camera.rotation.y -= 2 * Math.PI;
        }
    }
    lastX = e.clientX;
});
```

What does this do? Using mouse events, it detects the direction of movement of the mouse while the button is pressed down, and in doing so determines whether to rotate the camera clockwise or anticlockwise. It does this using the clientX property of the event object, which contains the mouse X position. This is compared to the previous value of e.clientX, and from this we can determine whether we moved the mouse to the left or to the right, and rotate accordingly. We move the camera by the amount specified in rotationStep and ensure that the camera rotation is always within the range 0 to 2*PI radians (i.e. 360 degrees).
Location-based AR.js with three.js

Part 2 - Using the GPS and Device Orientation

Having looked at the basics of the three.js location-based API in the first tutorial, we will now look at how to use the real GPS location. Last time, if you remember, we used a "fake" location with the THREEx.LocationBased's fakeGps() call. We will also look at how we can use the device's orientation controls, so that the orientation sensors are tracked and objects will appear in their real-world position when the device is rotated. For example, an object directly north of the user will only appear when the device is facing north.
GPS tracking

Here is a revised version of the previous example which obtains your real GPS location:

```js
import * as THREE from 'three';
import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js';

function main() {
    const canvas = document.getElementById('canvas1');

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000);
    const renderer = new THREE.WebGLRenderer({ canvas: canvas });

    const arjs = new THREEx.LocationBased(scene, camera);
    const cam = new THREEx.WebcamRenderer(renderer);

    const geom = new THREE.BoxGeometry(20, 20, 20);
    const mtl = new THREE.MeshBasicMaterial({ color: 0xff0000 });
    const box = new THREE.Mesh(geom, mtl);

    // Change this to a location 0.001 degrees of latitude north of you, so that you will face it
    arjs.add(box, -0.72, 51.051);

    // Start the GPS
    arjs.startGps();

    requestAnimationFrame(render);

    function render() {
        if (canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) {
            renderer.setSize(canvas.clientWidth, canvas.clientHeight, false);
            const aspect = canvas.clientWidth / canvas.clientHeight;
            camera.aspect = aspect;
            camera.updateProjectionMatrix();
        }
        cam.update();
        renderer.render(scene, camera);
        requestAnimationFrame(render);
    }
}

main();
```

Note that we only needed to make one change: we replace the fakeGps() call with:

```js
arjs.startGps();
```

Using the Geolocation API, this will make the application start listening for GPS updates. The nice thing is that we do not need to do anything else. The LocationBased object automatically updates the camera x and z coordinates to reflect our current GPS location. Specifically, the GPS latitude and longitude are converted to Spherical Mercator, the sign of z is reversed (to match the OpenGL coordinate system), and the resulting coordinates are used for the camera coordinates.

Using the device orientation controls

Having looked at obtaining our real GPS position, we will now look at how we can use the orientation controls to ensure our AR scene matches the real world as we rotate the device around. This is, in principle, quite easy: we just need to create a THREEx.DeviceOrientationControls object and update it in our rendering function. This object is based on the original DeviceOrientationControls from three.js. However, there is a slight problem. Unfortunately this will only work in Chrome on Android (it may also work in Chrome on iOS, but this needs testing). This is due to the difficulty of obtaining absolute orientation (i.e. our orientation relative to north) using the device orientation API. This can be done on Chrome/Android using the deviceorientationabsolute event (and in fact, THREEx.DeviceOrientationControls has been modified from the original to handle this event); it can also be done on Safari with webkitCompassHeading (but, due to the lack of an iDevice for testing, this has not been implemented yet); sadly, it appears that support on Firefox is completely missing for now. See this table of compatibility for absolute device orientation. So it's recommended you use Chrome on Android for the moment.
The example below shows the use of orientation tracking: const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); // Create the device orientation tracker const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); // Change this to a location close to you (e.g. 0.001 degrees of latitude north of you) arjs.add(box, -0.72, 51.051); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } Note how we create a device orientation tracker with: const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); The device orientation tracker updates the camera, so we need to pass it in as an argument. Also note how we update the device orientation tracker in our rendering function, so that new readings from the sensors are accounted for: deviceOrientationControls.update(); Try it! Try it out. As real GPS location and device orientation are used, you will need a mobile device. You should find that the red box appears in its real-world position (ensure it's not too far from you, e.g. 0.001 degrees of latitude to the north) and, due to the use of orientation tracking, only appears in the field of view when you are facing its location.","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part2/#location-based-arjs-with-threejs","text":"","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part2/#part-2-using-the-gps-and-device-orientation","text":"Having looked at the basics of the three.js location-based API in the first tutorial, we will now look at how to use the real GPS location. Last time, if you remember, we used a \"fake\" location with THREEx.LocationBased's fakeGps() call. We will also look at how we can use the device's orientation controls, so that the orientation sensors are tracked and objects will appear in their real-world position when the device is rotated. 
For example, an object directly north of the user will only appear when the device is facing north.","title":"Part 2 - Using the GPS and Device Orientation"},{"location":"location-based-three/part2/#gps-tracking","text":"Here is a revised version of the previous example which obtains your real GPS location: import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); // Change this to a location 0.001 degrees of latitude north of you, so that you will face it arjs.add(box, -0.72, 51.051); // Start the GPS arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); Note that we only needed to make one change: we replace the fakeGps() call with: arjs.startGps(); Using the Geolocation API, this will make the application start listening for GPS updates. The nice thing is that we do not need to do anything else. The LocationBased object automatically updates the camera x and z coordinates to reflect our current GPS location. Specifically, the GPS latitude and longitude are converted to Spherical Mercator, the sign of z is reversed (to match the OpenGL coordinate system), and the resulting coordinates are used as the camera position.","title":"GPS tracking"},{"location":"location-based-three/part2/#using-the-device-orientation-controls","text":"Having looked at obtaining our real GPS position, we will now look at how we can use the orientation controls to ensure our AR scene matches the real world as we rotate the device around. This is, in principle, quite easy: we just need to create a THREEx.DeviceOrientationControls object and update it in our rendering function. This object is based on the original DeviceOrientationControls from three.js. However, there is a slight problem. Unfortunately this will only work in Chrome on Android (it may also work in Chrome on iOS; this needs testing). This is due to the difficulty of obtaining absolute orientation (i.e. our orientation relative to north) using the device orientation API. This can be done on Chrome/Android using the deviceorientationabsolute event (and in fact, THREEx.DeviceOrientationControls has been modified from the original to handle this event); it can also be done on Safari with webkitCompassHeading (but, due to the lack of an iDevice for testing, this has not been implemented yet). Sadly, it appears that support on Firefox is completely missing for now. See this table of compatibility for absolute device orientation. So it's recommended you use Chrome on Android for the moment. 
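If Safari support is added in the future, note that iOS 13+ additionally gates access to the orientation sensors behind a permission prompt, which must be requested from a user gesture. A minimal sketch; the 'start' button id is hypothetical:
// DeviceOrientationEvent.requestPermission() exists only on iOS Safari 13+
document.getElementById('start').addEventListener('click', async () => {
    if(typeof DeviceOrientationEvent !== 'undefined' &&
       typeof DeviceOrientationEvent.requestPermission === 'function') {
        const state = await DeviceOrientationEvent.requestPermission();
        if(state !== 'granted') return; // the user declined sensor access
    }
    // ...start the GPS and rendering loop here...
});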
The example below shows the use of orientation tracking: const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const box = new THREE.Mesh(geom, mtl); // Create the device orientation tracker const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); // Change this to a location close to you (e.g. 0.001 degrees of latitude north of you) arjs.add(box, -0.72, 51.051); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } Note how we create a device orientation tracker with: const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); The device orientation tracker updates the camera, so we need to pass it in as an argument. Also note how we update the device orientation tracker in our rendering function, so that new readings from the sensors are accounted for: deviceOrientationControls.update();","title":"Using the device orientation controls"},{"location":"location-based-three/part2/#try-it","text":"Try it out. As real GPS location and device orientation are used, you will need a mobile device. You should find that the red box appears in its real-world position (ensure it's not too far from you, e.g. 0.001 degrees of latitude to the north) and, due to the use of orientation tracking, only appears in the field of view when you are facing its location.","title":"Try it!"},{"location":"location-based-three/part3/","text":"Location-based AR.js with three.js Part 3 - Connecting to a web API Having looked at how to use the three.js location-based API, we will now consider an example which connects to a web API providing points of interest. This example does not actually introduce any new AR.js concepts, but shows you how you can work with a web API. import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); let fetched = false; // Handle the \"gpsupdate\" event on the LocationBased object // This triggers when a GPS update (from the Geolocation API) occurs // 'pos' is the position object from the Geolocation API. 
arjs.on(\"gpsupdate\", async(pos) => { if(!fetched) { const response = await fetch(`https://hikar.org/webapp/map?bbox=${pos.coords.longitude-0.01},${pos.coords.latitude-0.01},${pos.coords.longitude+0.01},${pos.coords.latitude+0.01}&layers=poi&outProj=4326`); const geojson = await response.json(); geojson.features.forEach ( feature => { const box = new THREE.Mesh(geom, mtl); arjs.add(box, feature.geometry.coordinates[0], feature.geometry.coordinates[1]); }); fetched = true; } }); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); How is this working? The key thing is we handle the gpsupdate event emitted by the LocationBased object when a GPS update occurs. This is specifically emitted when the inbuilt Geolocation API receives a GPS update, and allows us to trigger certain code. Here, we trigger a download from a web API when we get the update. Note that the gpsupdate event handler receives the standard position object of the Geolocation API, so that, for example, its coords property contains the longitude and latitude. We then download data in a 0.02 x 0.02 degree box centred on our current location from the API at https://hikar.org. This provides OpenStreetMap POI data, but only for Europe and Turkey due to server capacity constraints. The data is provided as GeoJSON . So having received the data, we simply loop through it and create one THREE.Mesh for each POI, adding it at the appropriate location (accessible via the coordinates of the geometry of each GeoJSON object). Note the boolean variable fetched which is set to true as soon as we have fetched the data. This prevents data being continuously downloaded from the server every time we get a position update, as it's set to false as soon as data has been downloaded. In a real application you could implement code to download data by tile, so that new data is downloaded whenever you move into a new tile.","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part3/#location-based-arjs-with-threejs","text":"","title":"Location-based AR.js with three.js"},{"location":"location-based-three/part3/#part-3-connecting-to-a-web-api","text":"Having looked at how to use the three.js location-based API, we will now consider an example which connects to a web API providing points of interest. This example does not actually introduce any new AR.js concepts, but shows you how you can work with a web API. 
import * as THREE from 'three'; import * as THREEx from './node_modules/@ar-js-org/ar.js/three.js/build/ar-threex-location-only.js'; function main() { const canvas = document.getElementById('canvas1'); const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(60, 1.33, 0.1, 10000); const renderer = new THREE.WebGLRenderer({canvas: canvas}); const arjs = new THREEx.LocationBased(scene, camera); const cam = new THREEx.WebcamRenderer(renderer); const geom = new THREE.BoxGeometry(20, 20, 20); const mtl = new THREE.MeshBasicMaterial({color: 0xff0000}); const deviceOrientationControls = new THREEx.DeviceOrientationControls(camera); let fetched = false; // Handle the \"gpsupdate\" event on the LocationBased object // This triggers when a GPS update (from the Geolocation API) occurs // 'pos' is the position object from the Geolocation API. arjs.on(\"gpsupdate\", async (pos) => { if(!fetched) { const response = await fetch(`https://hikar.org/webapp/map?bbox=${pos.coords.longitude-0.01},${pos.coords.latitude-0.01},${pos.coords.longitude+0.01},${pos.coords.latitude+0.01}&layers=poi&outProj=4326`); const geojson = await response.json(); geojson.features.forEach(feature => { const box = new THREE.Mesh(geom, mtl); arjs.add(box, feature.geometry.coordinates[0], feature.geometry.coordinates[1]); }); fetched = true; } }); arjs.startGps(); requestAnimationFrame(render); function render() { if(canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight) { renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); const aspect = canvas.clientWidth/canvas.clientHeight; camera.aspect = aspect; camera.updateProjectionMatrix(); } // Update the scene using the latest sensor readings deviceOrientationControls.update(); cam.update(); renderer.render(scene, camera); requestAnimationFrame(render); } } main(); How does this work? The key thing is that we handle the gpsupdate event emitted by the LocationBased object when a GPS update occurs. This is emitted specifically when the inbuilt Geolocation API receives a GPS update, and allows us to trigger certain code. Here, we trigger a download from a web API when we get the update. Note that the gpsupdate event handler receives the standard position object of the Geolocation API, so that, for example, its coords property contains the longitude and latitude. We then download data in a 0.02 x 0.02 degree box centred on our current location from the API at https://hikar.org. This provides OpenStreetMap POI data, but only for Europe and Turkey due to server capacity constraints. The data is provided as GeoJSON. So having received the data, we simply loop through it and create one THREE.Mesh for each POI, adding it at the appropriate location (accessible via the coordinates of the geometry of each GeoJSON object). Note the boolean variable fetched, which is set to true as soon as we have fetched the data. This prevents data being continuously downloaded from the server on every subsequent position update. In a real application you could implement code to download data by tile, so that new data is downloaded whenever you move into a new tile, as sketched below.","title":"Part 3 - Connecting to a web API"}]}
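To make the tile-by-tile idea above concrete, here is one possible sketch. It reuses the arjs, geom and mtl objects from the listing above; the TILE_SIZE constant and fetchedTiles set are hypothetical helpers, not part of AR.js:
const TILE_SIZE = 0.02; // degrees, matching the bbox size used above
const fetchedTiles = new Set();
arjs.on("gpsupdate", async (pos) => {
    // Derive an integer tile index from the GPS position
    const tileX = Math.floor(pos.coords.longitude / TILE_SIZE);
    const tileY = Math.floor(pos.coords.latitude / TILE_SIZE);
    const key = `${tileX},${tileY}`;
    if(fetchedTiles.has(key)) return; // we already have this tile
    fetchedTiles.add(key); // mark before awaiting so overlapping updates don't refetch
    const bbox = `${tileX * TILE_SIZE},${tileY * TILE_SIZE},${(tileX + 1) * TILE_SIZE},${(tileY + 1) * TILE_SIZE}`;
    const response = await fetch(`https://hikar.org/webapp/map?bbox=${bbox}&layers=poi&outProj=4326`);
    const geojson = await response.json();
    geojson.features.forEach(feature => {
        const box = new THREE.Mesh(geom, mtl);
        arjs.add(box, feature.geometry.coordinates[0], feature.geometry.coordinates[1]);
    });
});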
\ No newline at end of file