- Python 3.7+
- PostgreSQL
Create an aggregate representation of the subscription usage metrics that can be used to generate metrics and small reports more efficiently.
There are two types of usage: data usage and voice usage. The raw usage records for these exist in the DataUsageRecord and VoiceUsageRecord tables. Create an aggregate representation (using models or otherwise) that draws on the data from these two tables and stores aggregated metrics segmented by date.
NOTE: You are not required to write the query that populates your new models with data from the raw usage record tables. Those raw usage record tables are there for reference.
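One possible shape for the aggregate, sketched in plain Python rather than Django models so it stays self-contained. The field names (subscription_id, usage_type, total_usage, total_price) and the bucket key are assumptions for illustration, not part of the assignment:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class RawUsageRecord:
    """Stand-in for a row from DataUsageRecord or VoiceUsageRecord."""
    subscription_id: int
    usage_type: str   # "data" or "voice"
    used_at: date
    usage: Decimal    # e.g. MB for data, seconds for voice
    price: Decimal

def aggregate(records):
    """Roll raw records up into one bucket per (subscription, usage type, date)."""
    buckets = defaultdict(lambda: {"total_usage": Decimal(0), "total_price": Decimal(0)})
    for r in records:
        key = (r.subscription_id, r.usage_type, r.used_at)
        buckets[key]["total_usage"] += r.usage
        buckets[key]["total_price"] += r.price
    return dict(buckets)
```

In the repo the same shape would be a model with a unique constraint on (subscription, usage_type, date); this sketch only shows the grouping logic.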
Create one or both of the APIs below:
Create an API that accepts a price limit as a request parameter. Find any subscriptions that have reached the price limit on data and/or voice (check both usage types). Return a list containing each subscription id, the usage type(s) that exceeded the price limit, and by how much the limit was exceeded.
Create an API that fetches data usage metrics and voice usage metrics by subscription id. The endpoint should accept from-date, to-date, and usage-type request parameters. For every subscription that had usage within the given date range, return the subscription id, the total price of usage, and the total usage.
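The price-limit endpoint reduces to a scan over per-subscription totals. A minimal sketch of that logic in plain Python; the input shape (a mapping of (subscription_id, usage_type) to total price) is an assumption, and in the real service this would be an ORM query behind a DRF view:

```python
from decimal import Decimal

def over_limit(totals, price_limit):
    """totals: {(subscription_id, usage_type): total_price}.
    Returns one entry per subscription that reached the limit on any usage
    type, listing the offending types and the amount over the limit."""
    hits = {}
    for (sub_id, usage_type), total in totals.items():
        if total >= price_limit:
            entry = hits.setdefault(
                sub_id,
                {"subscription_id": sub_id, "usage_types": [], "exceeded_by": {}},
            )
            entry["usage_types"].append(usage_type)
            entry["exceeded_by"][usage_type] = total - price_limit
    return list(hits.values())
```

Subscriptions over the limit on both types come back as a single entry with two usage types, matching the "type(s) of usage" wording above.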
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- Write a query to efficiently populate your aggregated usage models from the raw usage record tables.
HINT: Optimize for high volumes of raw usage records, but not long retention periods.
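The retention hint suggests populating the aggregate one day at a time, so each pass only touches the raw rows for a single date. A rough illustration of that batching loop in plain Python; the fetch_raw_for_day callable is hypothetical, and in the repo each iteration would be a single INSERT ... SELECT ... GROUP BY over one day's partition of raw records:

```python
from collections import defaultdict
from datetime import date, timedelta

def populate_by_day(fetch_raw_for_day, start, end):
    """Aggregate raw rows date by date. Each yielded batch covers exactly
    one day, so high daily volume never forces a scan of the whole
    retention window at once."""
    day = start
    while day <= end:
        buckets = defaultdict(float)
        # fetch_raw_for_day(day) -> iterable of (sub_id, usage_type, price)
        for sub_id, usage_type, price in fetch_raw_for_day(day):
            buckets[(sub_id, usage_type)] += price
        yield day, dict(buckets)
        day += timedelta(days=1)
```

Because retention is short, the number of daily passes stays small even when each day holds a large number of raw records.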
- Improve and optimize the existing code where you see fit.
- Write tests!
There are three ways to do it:
- First, using the Django ORM: create a new aggregated table and attach signals to the old raw models' CRUD operations.
- Second, using PostgreSQL triggers: the same idea, but all the logic is pushed down to the database layer.
- Third, using a PostgreSQL view: different from the previous solutions; create a virtual table in the database that can be queried.
- Django and DRF
- pytest
- PostgreSQL triggers, procedures, functions and views, managed through the Django ORM and migrations
- Solution in master branch
- Create ORM signals on pre_save and pre_delete for the Data and Voice usage record models. This makes it possible to maintain a new table that stores aggregated results from both tables.
- Solution in feature/sql_triggers branch
- Create triggers, procedures and functions that create a UsageRecord instance on the raw models' CRUD operations, and wire the SQL scripts into the Django migration system.
- Solution in feature/sql_views branch
- Create an aggregated representation of the two tables as a database view and connect a Django model to it using managed=False.
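The signal-based solution in master boils down to incremental bucket maintenance: add a record's usage when it is saved, subtract it when it is deleted. Sketched here in plain Python with an in-memory store standing in for the aggregated table; the real handlers would be pre_save/pre_delete receivers issuing UPDATEs:

```python
from collections import defaultdict

class AggregateStore:
    """In-memory stand-in for the aggregated table kept current by signals."""
    def __init__(self):
        self.buckets = defaultdict(lambda: {"total_usage": 0.0, "total_price": 0.0})

    def on_save(self, sub_id, usage_type, day, usage, price):
        # Mirrors a save signal on a raw usage record.
        b = self.buckets[(sub_id, usage_type, day)]
        b["total_usage"] += usage
        b["total_price"] += price

    def on_delete(self, sub_id, usage_type, day, usage, price):
        # Mirrors a delete signal: back the record's contribution out.
        b = self.buckets[(sub_id, usage_type, day)]
        b["total_usage"] -= usage
        b["total_price"] -= price
```

The trigger-based branch implements the same add/subtract logic, just in PL/pgSQL on the database side instead of in Python.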
PostgreSQL triggers, functions and views will obviously run faster than Django signals. Writing them properly in SQL takes more time, but it's worth it.
- SQL triggers: refactor into smaller functions and check performance.
- Another implementation idea in Django that avoids signals: perform the aggregation in the views instead. It will be clearer to understand. Add more tests.
- Handle bulk CRUD operations in both the Django ORM and SQL implementations.
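Bulk operations (bulk_create, bulk_update, COPY) bypass per-row Django signals, so one workable pattern is to recompute only the buckets touched by the bulk write afterwards. A hedged sketch of that recompute step in plain Python; the row tuple shape and the affected-keys set are assumptions for illustration:

```python
from collections import defaultdict

def recompute_buckets(raw_rows, affected_keys):
    """Rebuild just the aggregate buckets touched by a bulk write.
    raw_rows: iterable of (sub_id, usage_type, day, usage, price).
    affected_keys: set of (sub_id, usage_type, day) changed by the bulk op."""
    fresh = defaultdict(lambda: {"total_usage": 0.0, "total_price": 0.0})
    for sub_id, usage_type, day, usage, price in raw_rows:
        key = (sub_id, usage_type, day)
        if key in affected_keys:
            fresh[key]["total_usage"] += usage
            fresh[key]["total_price"] += price
    # Affected keys with no remaining raw rows reset to zero totals.
    for key in affected_keys:
        fresh.setdefault(key, {"total_usage": 0.0, "total_price": 0.0})
    return dict(fresh)
```

Recomputing only the affected buckets keeps a bulk delete or bulk insert from forcing a full rebuild of the aggregated table.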