Make the Dashboard tab configurable, and implement new components #1142
Conversation
Up to now, the dashboard has only been shown for MULTILABEL configurations; on ENKETO configurations it was completely hidden. We are making the dashboard more configurable - e-mission/e-mission-docs#1055. If a new field, metrics.phone_dashboard_ui, is defined, the dashboard will be shown. If phone_dashboard_ui is not defined, we fall back to the current behavior (which is to only show the dashboard for MULTILABEL configurations).
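A minimal sketch of the gating behavior described above (the `AppConfig` shape and function name here are assumptions for illustration, not the actual repo code):

```typescript
// Hypothetical, simplified config shape for illustration only
type AppConfig = {
  metrics?: { phone_dashboard_ui?: { sections?: string[] } };
  survey_info?: { 'trip-labels'?: 'MULTILABEL' | 'ENKETO' };
};

function shouldShowDashboard(config: AppConfig): boolean {
  // New behavior: any defined phone_dashboard_ui enables the dashboard
  if (config.metrics?.phone_dashboard_ui) return true;
  // Fallback: old behavior, only MULTILABEL configs show the dashboard
  return config.survey_info?.['trip-labels'] == 'MULTILABEL';
}
```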
From e-mission/e-mission-docs#1055 (comment):

sections: A list of sections to show in the dashboard UI. They will appear in the specified order from top to bottom. Options are footprint, active_travel, summary, engagement, surveys. Whichever sections are omitted will not be shown in the UI. Default: ["footprint", "active_travel", "summary"]
If there is no active_travel_options.modes_list, set the default active modes
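The defaulting described above can be sketched as follows (the helper name is an assumption; the default section list comes from the config docs quoted above):

```typescript
// Default from the config documentation quoted above
const DEFAULT_SECTIONS = ['footprint', 'active_travel', 'summary'];

// Hypothetical helper: omitted sections are simply not shown;
// an absent `sections` list falls back to the documented default
function getDashboardSections(phoneDashboardUi?: { sections?: string[] }): string[] {
  return phoneDashboardUi?.sections ?? DEFAULT_SECTIONS;
}
```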
The Leaflet map encounters an error when prerendered, so we only render the TimelineScrollList component when the active tab is 'label'. The 'shouldUpdateTimeline' state determines whether to render the TimelineScrollList or not; the purpose is to prevent excessive API calls.
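The render guard described above can be sketched like this (a simplification with assumed names, not the actual component code):

```typescript
// Hypothetical guard: only render the Leaflet-backed timeline once the
// 'label' tab has been activated, avoiding the prerender error and
// redundant API calls from an invisible component
function shouldRenderTimeline(activeTab: string, alreadyRendered: boolean): boolean {
  return alreadyRendered || activeTab == 'label';
}
```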
@JGreenlee I have completed the frontend code for the survey dashboard. Currently, I am still working on the server code, and I will provide an explanation in the server pull request. I have implemented a new API to fetch survey data every 24 hours because the survey data does not render based on the timeline. It should display the accumulated results. Therefore, I designed it to call the survey API when a user clicks on the dashboard tab and the last updated data was more than 24 hours ago. Additionally, after retrieving survey data, I preprocess it for charts. Regarding the leaderboard, I have configured it to display only the top 5 members, myself, and the bottom 3 members. Please review the code and comment if anything doesn't make sense to you. I will bring my personal laptop to Hawaii so I can communicate via GitHub. Thank you!
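The 24-hour refresh check described in the comment above might look like this (a sketch with assumed names, not the actual repo code):

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Hypothetical helper: fetch survey data on first visit to the dashboard tab,
// or when the cached data is at least 24 hours old
function shouldRefetchSurveyData(lastFetchedMs: number | null, nowMs: number): boolean {
  return lastFetchedMs == null || nowMs - lastFetchedMs >= DAY_MS;
}
```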
When I first designed the survey dashboard, it showed the accumulated survey data regardless of the selected date range. However, the updated logic is that the survey metric depends on the date range selected, while the leaderboard metric remains accumulated.
Survey Comparison Card: Data depends on date range. Survey Leaderboard Card: Data is accumulated. To prevent confusion for users, we separate the two cards and inform them that the leaderboard data is accumulated.
Instead of querying by timestamp, we can now query by "YYYY-MM-DD" strings. The yyyy_mm_dd endpoint also has a different format for metric_list: instead of just an array of metrics, it is a mapping of "metric names" to "grouping fields" (e-mission/e-mission-server#966 (comment)). We will specify this in the config as "metrics_list". The yyyy_mm_dd endpoint will also ask for app_config, so we will pass that through to `fetchMetricsFromServer` and include it in the `query` (note that `survey_info` is the only field that needs to be included). We will be supporting another metric called "response_count"; added a card for it to the 'summary' section (if that section is configured to show). Updated types in appConfigTypes.ts.
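To illustrate the format change described above (the specific metric names and grouping-field values below are illustrative assumptions; the exact names come from the server PR):

```typescript
// Old format: metric_list was just an array of metric names
const oldMetricList = ['distance', 'duration', 'count'];

// New yyyy_mm_dd format: a mapping of metric names to grouping fields
const metricsList: Record<string, string[]> = {
  distance: ['mode_confirm'],       // illustrative grouping field
  duration: ['mode_confirm'],
  count: ['mode_confirm'],
  response_count: ['survey'],       // new metric, grouped by survey
};
```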
Making this more generic because we will now support fields other than the labeled mode (such as labeled purpose, BLE sensed mode, and survey that was prompted)
Instead of specifically listing a MetricsCard for each metric, we can read metricsList and map each key to a MetricsCard. But the structure for response_count is an object of {responded: number, not_responded: number}, while the other metrics are just numbers. So we need to adjust the way days get summed up to account for both.
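A sketch of summing both value shapes, assuming the structures described above (plain numbers for most metrics, {responded, not_responded} objects for response_count); this is illustrative, not the repo's actual summing code:

```typescript
type MetricValue = number | { responded: number; not_responded: number };

// Hypothetical helper: add two per-day values, handling both shapes
function sumValues(a: MetricValue, b: MetricValue): MetricValue {
  if (typeof a == 'number' && typeof b == 'number') {
    return a + b; // ordinary metrics are plain numbers
  }
  if (typeof a == 'object' && typeof b == 'object') {
    // response_count values are summed field-by-field
    return {
      responded: a.responded + b.responded,
      not_responded: a.not_responded + b.not_responded,
    };
  }
  throw new Error('mismatched metric value shapes');
}
```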
The 'showsIf' conditions in dfc-fermata currently reference 'confirmedMode'. We want to rename this to 'primary_ble_sensed_mode'. Also see JGreenlee/e-mission-common@0396339
This makes it easier to test aggregate metrics locally
Not bad actually. Looks like it's only
`new Date(isoString)` assumes that `isoString` is in UTC. We wanted to parse it in the local timezone, which is what luxon does here.
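A quick demonstration of the pitfall: per the ECMAScript spec, date-only ISO strings are parsed as UTC by the `Date` constructor, so the local calendar day can shift in zones behind UTC. Luxon's `DateTime.fromISO('2021-01-01')` (not shown running here) instead gives midnight in the local zone.

```typescript
const asUTC = new Date('2021-01-01'); // spec: date-only strings parse as UTC midnight
const utcDay = asUTC.getUTCDate();    // always 1
const localDay = asUTC.getDate();     // 1 in UTC+ zones, but 31 (Dec) in UTC- zones
```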
There is no discrepancy between DayOfClientMetricData and DayOfServerMetricData anymore, so this test file gets stripped down a lot
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #1142 +/- ##
==========================================
+ Coverage 28.92% 30.59% +1.66%
==========================================
Files 115 118 +3
Lines 5064 5216 +152
Branches 1094 1154 +60
==========================================
+ Hits 1465 1596 +131
- Misses 3597 3618 +21
Partials 2 2
Flags with carried forward coverage won't be shown.
We are lacking in code coverage, but the tests passed.
@JGreenlee I added a 'no trip data' message for the surveyComparison chart. Also set the maxBarThickness to make it prettier! @shankari
I am halfway through reviewing this change (15/31 files) but they were the easy 15. Will finish review of the changes to the
@jiji14 you need to re-run prettier; the test is failing |
@JGreenlee @shankari
@jiji14 the error appears to be the same flake, since it was successful on re-run. However, the patch check is still failing, since only 16.23% of the newly added code is covered. The project code coverage has gone up from 29% to 31%, but the patch (aka the newly added code) is still not tested enough.
I am fine with the implementation, but have a lot of comments on the tests. I am merging this now so that we can get it onto staging, but @jiji14, please address the test comments, and improve the code coverage of this patch in a follow-up PR
return new Promise((rs, rj) => {
  // when app config does not have "server", localhost is used and no user authentication is required
  serverConnConfig ||= {
    connectUrl: 'http://localhost:8080' as any,
nit: this is different for Android and iOS. `localhost` only works for iOS simulators; for Android emulators, the laptop the emulator is running on is reachable at 10.0.2.2.
// Whether to show the uncertainty on the carbon footprint charts, default: true
const showUnlabeledMetrics =
  appConfig?.metrics?.phone_dashboard_ui?.footprint_options?.unlabeled_uncertainty ?? true;
why is this a config option? I assume we would always have uncertainty unless every trip is labeled.
We don't want to give people a false sense of certainty.
I found the related commit jiji14@5130653 but it doesn't have any details 😄
I am reviewing this, although I assume it will be modified when we enable the leaderboard
};
summary_options?: {};
engagement_options?: {
  leaderboard_metric: [string, string];
this doesn't appear to be used anywhere now, I assume this will also be used when we enable the leaderboard
export const trimGroupingPrefix = (label: string) => {
  for (let field of groupingFields) {
    if (label.startsWith(field)) {
      return label.substring(field.length + 1);
    }
  }
when we implement e-mission/e-mission-server#966 (comment) this can be much simplified.
const days2 = [
  { mode_a: 1, mode_b: 2 },
  { mode_c: 1, mode_d: 3 },
] as any as DayOfClientMetricData[];
it("should return unique labels for days with 'mode_*'", () => {
  expect(getUniqueLabelsForDays(days2)).toEqual(['a', 'b', 'c', 'd']);
});
@jiji14 please replace this with a test that actually has duplicate labels (see my comment in #1138 (comment))
const days2 = [
  { fmt_time: '2021-01-01T00:00:00Z' },
  { fmt_time: '2021-01-02T00:00:00Z' },
  { fmt_time: '2021-01-04T00:00:00Z' },
  { fmt_time: '2021-01-08T00:00:00Z' },
  { fmt_time: '2021-01-09T00:00:00Z' },
  { fmt_time: '2021-01-10T00:00:00Z' },
] as any as DayOfServerMetricData[];
it("should segment days with 'fmt_time' into weeks", () => {
  expect(segmentDaysByWeeks(days2, '2021-01-10')).toEqual([
    // most recent week
    [
      { fmt_time: '2021-01-04T00:00:00Z' },
      { fmt_time: '2021-01-08T00:00:00Z' },
      { fmt_time: '2021-01-09T00:00:00Z' },
      { fmt_time: '2021-01-10T00:00:00Z' },
    ],
    // prior week
    [{ fmt_time: '2021-01-01T00:00:00Z' }, { fmt_time: '2021-01-02T00:00:00Z' }],
  ]);
});
Again, please fix this test rather than removing it to get the tests to pass.
expect(result).toBe(true);
});

it('returns true for all sensed labels', () => {
for sensed labels, we expect isCustomLabels to return false. Right now, this is identical to the previous test.
it('checks for count metric', () => {
  const result = getUnitUtilsForMetric('count', imperialConfig);
  expect(result).toEqual(['trips', expect.any(Function), expect.any(Function)]);
  const mockTrip = { responded: 4, not_responded: 3 };
  expect(result[1](mockTrip)).toBe(mockTrip);
  expect(result[2](mockTrip)).toBe(mockTrip + ' trips');
});
This passes, but I don't think it is testing the right functionality. count is a metric to just count the number of trips. The metric that expects a structure with responded and not_responded is response_count.
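To make the distinction concrete (the formatters below are stand-ins for illustration; the real logic lives in getUnitUtilsForMetric in the repo):

```typescript
// 'count' values are plain trip counts
const formatCount = (v: number) => `${v} trips`;

// 'response_count' values are { responded, not_responded } objects
const formatResponseCount = (v: { responded: number; not_responded: number }) =>
  `${v.responded}/${v.responded + v.not_responded} responded`;
```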
@@ -53,3 +61,15 @@ describe('convertSpeed', () => {
    expect(convertSpeed(6.7056, true)).toBeCloseTo(15); // Approximately 15 mph
  });
});

describe('useImperialConfig', () => {
  it('returns ImperialConfig with imperial units', () => {
Imperial units are the old ones (popularized by the British Empire), so miles, mph, etc. These are non-imperial, or SI, units. Shouldn't we also add a similar check where we expect to see miles, mph, etc.?
This is now on staging as version 1.7.9. The staging server has also been updated to incorporate
A couple of issues that I encountered while testing before pushing to staging were
The 500 error has been resolved. @JGreenlee I still see the following issues
Note also that the active minutes table shows overlapping ranges (two weeks, then two one-week ranges, and then per day). Not sure if that is intended; it was a bit confusing when I first looked at it.
@shankari |
I am looking into these issues this morning |
When refactoring into TimelineContext, there was a regression in the date range that is initially loaded. e-mission#1142 (comment) It used to load from 7 days before the pipelineEnd up to today. After the change, it loaded from 7 days before today up to today. If the user has travel every day, these end up being equivalent; but if there isn't recent travel, this becomes a problem. So in TimelineContext, the initial dateRange needs to depend on pipelineRange. Instead of giving an initial value upfront, we'll let dateRange be null until pipelineRange is set, at which point we'll also set dateRange. This means we have to consider some extra cases where dateRange can be null.
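The fix described above can be sketched as follows (the function name and date handling are assumptions for illustration, not the actual TimelineContext code):

```typescript
// Hypothetical helper: the start of the initial load window depends on the
// pipeline end, not on today's date. Until pipelineEnd is known, return null,
// mirroring how dateRange stays null until pipelineRange is set.
function initialLoadStart(pipelineEnd: string | null): string | null {
  if (!pipelineEnd) return null; // pipeline range not known yet
  const d = new Date(pipelineEnd + 'T00:00:00Z');
  d.setUTCDate(d.getUTCDate() - 7); // 7 days before pipelineEnd
  return d.toISOString().slice(0, 10);
}
```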
e-mission/e-mission-docs#1055