
SDZWA Sensor Dashboard - Frontend

The SDZWA Sensor Dashboard frontend was built with React-Bootstrap components alongside custom components, and is served by a Next.js server. The frontend communicates with the backend through REST API calls such as POST and GET.

[Image: overall system architecture diagram]

The overall system architecture resembles the diagram above. The front-end portion of this project consists of the React web app, which connects to the back end through API calls for retrieving and updating user and sensor data.

Design Choices

Primary design choices for the React app are as follows:

  • Modularity

  • Customizability

  • Flexibility

We wanted our application to follow the design principle of modularity to aid implementation and support future extensibility. All of the pages in this repo can be thought of as separate system components that can easily be extended or modified without impacting the remaining pages. The only dependencies between pages are logging in to obtain a valid authentication token, and the link between a sensor tile on the dashboard and its corresponding individual sensor page. The primary component of our React app, the dashboard tile page, makes it easy to add new components or remove existing ones (via the individual sensor page).
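As a rough illustration of this page-level dependency, a dashboard tile only needs the sensor's ID to send the user to that sensor's own page. The `pages/sensor/[id].tsx` route and the tile props below are hypothetical names for illustration, not necessarily the ones used in this repo.

```tsx
import { useRouter } from "next/router";
import Card from "react-bootstrap/Card";

// A self-contained dashboard tile: its only coupling to other pages is the
// navigation to a hypothetical dynamic route such as pages/sensor/[id].tsx.
export function SensorTile({ id, name }: { id: string; name: string }) {
  const router = useRouter();
  return (
    <Card style={{ cursor: "pointer" }} onClick={() => router.push(`/sensor/${id}`)}>
      <Card.Body>{name}</Card.Body>
    </Card>
  );
}
```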

As for customizability, our application has been designed in a generalized manner that allows current and future developers to adapt it. We knew that new sensors or connectors might be added in the future, and did not want the overall system design to get in the way of those future operations. All API calls are generalized around a sensor data type, which can be modified with the assistance of the back-end team if need be. Based on the sensor data type, the React app can then render a different component to suit the data.
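As a minimal sketch of that idea (the data-type strings and tile components below are placeholders, not the exact names used in this project), the app can switch on the sensor's data type and render the matching component:

```tsx
// Hypothetical sensor shape; the real data-type values come from the backend.
interface Sensor {
  id: string;
  name: string;
  dataType: "timeseries" | "audio" | "video";
  dataUrl: string;
}

// Placeholder tiles standing in for the real chart/player components.
const ChartTile = ({ url }: { url: string }) => <p>chart for {url}</p>;
const AudioTile = ({ url }: { url: string }) => <audio src={url} controls />;
const VideoTile = ({ url }: { url: string }) => <video src={url} controls />;

// Render a different component based on the sensor's data type.
export function SensorView({ sensor }: { sensor: Sensor }) {
  switch (sensor.dataType) {
    case "timeseries":
      return <ChartTile url={sensor.dataUrl} />;
    case "audio":
      return <AudioTile url={sensor.dataUrl} />;
    case "video":
      return <VideoTile url={sensor.dataUrl} />;
    default:
      return <p>Unsupported data type for {sensor.name}</p>;
  }
}
```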

Flexibility is the last of our primary design principles. Our web application uses CSS styling in a manner that supports an easy and responsive user experience regardless of the platform. We considered the viewports of all types of devices, from handheld smartphones to full-screen desktops, so the user experience remains seamless on any of them.
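As a small example of the responsive layout (a sketch assuming the React-Bootstrap grid; the exact CSS in the repo may differ), sensor tiles can occupy a full row on phones and share rows on larger screens:

```tsx
import Container from "react-bootstrap/Container";
import Row from "react-bootstrap/Row";
import Col from "react-bootstrap/Col";
import Card from "react-bootstrap/Card";

// Each tile takes the full width on small screens (xs={12}),
// half the width on tablets (md={6}), and a third on desktops (lg={4}).
export function TileGrid({ names }: { names: string[] }) {
  return (
    <Container fluid>
      <Row>
        {names.map((name) => (
          <Col key={name} xs={12} md={6} lg={4}>
            <Card className="mb-3">
              <Card.Body>{name}</Card.Body>
            </Card>
          </Col>
        ))}
      </Row>
    </Container>
  );
}
```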

Pages

Overall System Diagram:

[Image: overall system diagram]

This diagram shows all the different pages within our React app, as found in the Pages directory. All pages are loosely connected, and each requires a token to complete any API call. This token is granted by the backend at login or registration time and identifies the session. The diagram also gives a very high-level view of our API functionality, covering all basic CRUD operations and the user authentication requirements. All pages are currently very simple and can be extended easily.
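A hedged sketch of how the token can be attached to every call; the backend address, the localStorage key, and the Bearer header scheme are all assumptions for illustration:

```ts
// Hypothetical helper: reads the session token saved at login/registration
// and attaches it to every request made to the backend.
const API_BASE = "http://localhost:5000"; // assumed backend address

export async function apiFetch(path: string, init: RequestInit = {}) {
  const token = localStorage.getItem("token"); // assumed storage key
  const res = await fetch(`${API_BASE}${path}`, {
    ...init,
    headers: {
      "Content-Type": "application/json",
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}

// Usage: apiFetch("/sensors") for a GET, or
// apiFetch("/sensors", { method: "POST", body: JSON.stringify(newSensor) }) for a POST.
```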

Registration:

[Image: registration page screenshot]

The registration page requires basic user information:

  • First name
  • Last name
  • Email address
  • Password

All of this information is then passed to the backend via an API call and stored securely.
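For illustration only, the registration submit could look like the sketch below; the `/register` endpoint and field names are assumptions about the backend contract, not its documented API.

```ts
// Hypothetical registration call; the backend stores the password securely.
interface RegistrationForm {
  firstName: string;
  lastName: string;
  email: string;
  password: string;
}

export async function register(form: RegistrationForm) {
  const res = await fetch("http://localhost:5000/register", { // assumed endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(form),
  });
  if (!res.ok) throw new Error("Registration failed");
  return res.json(); // assumed to include the session token
}
```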

Login:

[Image: login page screenshot]

The login page resembles the registration page, minus the name fields. A simple API call verifies the user's identity and completes the login process if the credentials are valid.

Dashboard:

[Image: dashboard page screenshot]

The dashboard page is designed to show each sensor separately in a tiling pattern. This makes for an easy user experience and allows data from different sensors to be compared side by side if needed. From the dashboard page, a user may click on a sensor tile to visit that sensor's own page, or on the add sensor tile, which brings them to the add sensor page. Every time a user navigates to the dashboard page, a getAllSensors API call checks the database for sensors, so newly added sensors are rendered and deleted sensors are removed accordingly.
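A rough sketch of that behavior, assuming a `/sensors` endpoint and a token stored at login (both placeholders): the dashboard fetches the sensor list whenever it mounts and re-renders its tiles from the response.

```tsx
import { useEffect, useState } from "react";

interface Sensor {
  id: string;
  name: string; // other metadata fields omitted for brevity
}

export function Dashboard() {
  const [sensors, setSensors] = useState<Sensor[]>([]);

  // Runs on every visit to the dashboard page, so newly added sensors
  // appear and deleted sensors disappear without any manual refresh logic.
  useEffect(() => {
    fetch("http://localhost:5000/sensors", { // assumed getAllSensors endpoint
      headers: { Authorization: `Bearer ${localStorage.getItem("token")}` },
    })
      .then((res) => res.json())
      .then(setSensors)
      .catch(console.error);
  }, []);

  return (
    <div>
      {sensors.map((s) => (
        <div key={s.id}>{s.name}</div>
      ))}
      {/* the "add sensor" tile would link to the add sensor page */}
    </div>
  );
}
```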

Individual Sensor Page:

[Image: individual sensor page screenshot]

The individual sensor page resembles its tile on the dashboard, with more information displayed below it. There is room to show additional information below the sensor, as the current display does not include all of the information/metadata that our database currently stores. A potential extension of this page is a sensor health visualization at the bottom, indicating whether the sensor is healthy based on communication metrics. The individual sensor page can make editSensor and deleteSensor API calls, depending on the button the user presses.
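The button handlers might look roughly like the following; the HTTP verbs and the `/sensors/:id` route are assumptions about how the backend exposes editSensor and deleteSensor.

```ts
const API_BASE = "http://localhost:5000"; // assumed backend address

// deleteSensor: remove this sensor, then return to the dashboard.
export async function deleteSensor(id: string, token: string) {
  await fetch(`${API_BASE}/sensors/${id}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${token}` },
  });
}

// editSensor: send the updated fields for this sensor.
export async function editSensor(id: string, updates: object, token: string) {
  await fetch(`${API_BASE}/sensors/${id}`, {
    method: "PUT",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(updates),
  });
}
```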

Add/Edit Sensor:

[Image: add/edit sensor page screenshot]

The add and edit sensor pages are identical. Each sensor has a unique ID in the database: adding a sensor generates a new ID, while editing a sensor reuses the existing one. This page presents a form with several fields for the user to fill in. This information serves the administrative purpose of describing the sensor, as well as pointing the app to where the data should be retrieved from. addSensor or editSensor API calls then propagate these changes into the database accordingly. The following information is required on this page (a sketch of the corresponding sensor object appears after the list):

  • Sensor Name
  • Sensor Description
  • Data Type
  • Data Source
  • Data URL (a URI or local path also works)
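Taken together, the form roughly corresponds to a sensor object like the one sketched below; the field names are illustrative, not the backend's exact schema.

```ts
// Hypothetical shape of the object sent by addSensor / editSensor.
interface SensorFormData {
  name: string;        // Sensor Name
  description: string; // Sensor Description
  dataType: string;    // e.g. "timeseries", "audio", "video"
  dataSource: string;  // where the data comes from
  dataUrl: string;     // URL, URI, or local path to fetch the data from
  id?: string;         // present when editing; generated by the backend when adding
}
```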

Future Work

Due to the time constraints of the quarter, this is not a completely finished project, and there remains work to be done.

Currently, our dashboard uses sample data, along with audio-player and video-player capabilities built on different React player libraries. The zoo's audio and video are live RTSP streams, so there is quite a bit of work left to implement RTSP capabilities in the dashboard. We also did not gain access to any of the zoo's sensors, so integrating a real sensor into the frontend would be a big step forward in development.
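For reference, the sample media tiles can be driven by a player component along these lines; this sketch assumes the react-player package (one of several React player libraries) and a plain HTTP sample URL, since live RTSP streams are not yet handled.

```tsx
import ReactPlayer from "react-player";

// Plays a sample clip over HTTP. The zoo's live RTSP streams would need extra
// work first, e.g. transcoding or proxying them into a browser-friendly format.
export function MediaTile({ url }: { url: string }) {
  return <ReactPlayer url={url} controls width="100%" />;
}
```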

From the front-end perspective, there remains work to be done on adding more types of visualizations. We are currently using the visx library for our sample data, but there is room to add any other kind of data visualization to this project. Visualizations are simply imported as components and placed in the dashboard, so adding new ones is as simple as adding a new component.
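As an example of how a visualization slots in, here is a minimal line chart built with visx; the data shape and dimensions are placeholders, and any other charting library could be wrapped into a component the same way.

```tsx
import { LinePath } from "@visx/shape";
import { scaleLinear } from "@visx/scale";

interface Point {
  t: number; // time step (placeholder field)
  v: number; // sensor reading (placeholder field)
}

// A minimal visx line chart that can be dropped into a dashboard tile
// like any other component.
export function SampleLineChart({ data, width = 300, height = 150 }: {
  data: Point[];
  width?: number;
  height?: number;
}) {
  const xScale = scaleLinear({
    domain: [Math.min(...data.map((d) => d.t)), Math.max(...data.map((d) => d.t))],
    range: [0, width],
  });
  const yScale = scaleLinear({
    domain: [Math.min(...data.map((d) => d.v)), Math.max(...data.map((d) => d.v))],
    range: [height, 0], // SVG y grows downward, so invert the range
  });

  return (
    <svg width={width} height={height}>
      <LinePath<Point>
        data={data}
        x={(d) => xScale(d.t) ?? 0}
        y={(d) => yScale(d.v) ?? 0}
        stroke="#1f77b4"
        strokeWidth={2}
      />
    </svg>
  );
}
```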

There also remains planned work on a sensor health graph to be placed on the individual sensor page. This graph could be implemented as a bar chart with an hourly/daily/weekly report on whether the sensor is communicating, partially communicating, or not communicating. To implement this chart, you would pull a bar chart from visx or another library and feed data from the back end into it. There is currently no backend support for this data, but it could be added very easily.
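A hedged sketch of how that health chart could be assembled once the backend exposes per-period communication data; the bucket shape, status categories, and colors below are assumptions, not an existing API.

```tsx
import { Bar } from "@visx/shape";
import { scaleBand, scaleLinear } from "@visx/scale";

// Hypothetical per-hour/day/week bucket from a future backend endpoint.
interface HealthBucket {
  label: string;                                   // e.g. "Mon" or "09:00"
  status: "communicating" | "partial" | "offline"; // assumed categories
  uptime: number;                                  // fraction of the period, 0..1
}

const statusColor = { communicating: "#2ca02c", partial: "#ff7f0e", offline: "#d62728" };

export function SensorHealthChart({ buckets, width = 300, height = 120 }: {
  buckets: HealthBucket[];
  width?: number;
  height?: number;
}) {
  const xScale = scaleBand({ domain: buckets.map((b) => b.label), range: [0, width], padding: 0.2 });
  const yScale = scaleLinear({ domain: [0, 1], range: [height, 0] });

  return (
    <svg width={width} height={height}>
      {buckets.map((b) => (
        <Bar
          key={b.label}
          x={xScale(b.label) ?? 0}
          y={yScale(b.uptime)}
          width={xScale.bandwidth()}
          height={height - yScale(b.uptime)}
          fill={statusColor[b.status]}
        />
      ))}
    </svg>
  );
}
```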

There will always be work to be done from a design and user experience perspective, but there are currently no plans for changes in those areas.