diff --git a/INSTALLATION.md b/INSTALLATION.md
new file mode 100644
index 00000000..7f0d7559
--- /dev/null
+++ b/INSTALLATION.md
@@ -0,0 +1,105 @@
+
+# Installation
+PK-DB is deployed via `docker` and `docker-compose`.
+
+## Requirements
+To set up the development server
+the following minimal requirements must be fulfilled:
+- `docker`
+- `docker-compose`
+- `Python3.6`
+
+For elasticsearch the following system setting is required:
+```
+sudo sysctl -w vm.max_map_count=262144
+```
+To set `vm.max_map_count` persistently, set the value in `/etc/sysctl.conf`:
+```
+vm.max_map_count=262144
+```
+## Start development server
+To start the local development server:
+```bash
+# clone or pull the latest code
+git clone https://github.com/matthiaskoenig/pkdb.git
+cd pkdb
+git pull
+
+# set environment variables
+set -a && source .env.local
+
+# create/rebuild all docker containers
+./docker-purge.sh
+```
+This sets up a clean database and clean volumes and starts the containers for `pkdb_backend`, `pkdb_frontend`, `elasticsearch` and `postgres`.
+You can check that all containers are running via
+```bash
+docker container ls
+```
+which lists the current containers
+```
+CONTAINER ID  IMAGE                COMMAND                 CREATED       STATUS       PORTS                             NAMES
+bc7f9204468f  pkdb_backend         "bash -c '/usr/local…"  27 hours ago  Up 18 hours  0.0.0.0:8000->8000/tcp            pkdb_backend_1
+17b8d243e956  pkdb_frontend        "/bin/sh -c 'npm run…"  27 hours ago  Up 18 hours  0.0.0.0:8080->8080/tcp            pkdb_frontend_1
+7730c6fe2210  elasticsearch:6.8.1  "/usr/local/bin/dock…"  27 hours ago  Up 18 hours  9300/tcp, 0.0.0.0:9123->9200/tcp  pkdb_elasticsearch_1
+e880fbb0f349  postgres:11.4        "docker-entrypoint.s…"  27 hours ago  Up 18 hours  0.0.0.0:5433->5432/tcp            pkdb_postgres_1
+```
+The locally running development version of PK-DB can now be accessed via the web browser at
+- frontend: http://localhost:8080
+- backend: http://localhost:8000
+
+### Fill database
+Due to copyright, licensing and privacy issues this repository does not contain any data.
+All data is managed via a separate private repository at https://github.com/matthiaskoenig/pkdb_data.
+This also includes the curation scripts and curation workflows.
+
+If you are interested in curating or contributing data please contact us at https://livermetabolism.com.
+
+# Docker
+In the following, typical examples of interacting with the PK-DB docker containers are provided.
+
+## Check running containers
+To check the running containers use
+```bash
+watch docker container ls
+```
+
+## Interactive container mode
+```bash
+./docker-interactive.sh
+```
+
+## Container logs
+To access the logs of an individual container use `docker container logs <container>`. For instance, to check the
+django backend logs use
+```bash
+docker container logs pkdb_backend_1
+```
+
+## Run command in container
+To run commands inside the docker container use
+```bash
+docker-compose run --rm backend [command]
+```
+e.g., to create database migrations
+```bash
+docker-compose run --rm backend python manage.py makemigrations
+```
+
+## Authentication data
+The following examples show how to dump and restore the authentication data.
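+
+These commands reference the environment variable `PKDB_DOCKER_COMPOSE_YAML`. As a minimal sketch
+(the file name `docker-compose.yml` is an assumption and may differ in your setup), point it at the
+compose file before running them:
+```bash
+export PKDB_DOCKER_COMPOSE_YAML=docker-compose.yml
+```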
+ +Dump authentication data +```bash +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata auth --indent 2 > ./backend/pkdb_app/fixtures/auth.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata users --indent 2 > ./backend/pkdb_app/fixtures/users.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata rest_email_auth --indent 2 > ./backend/pkdb_app/fixtures/rest_email_auth.json +``` + +Restore authentication data +```bash +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata auth pkdb_app/fixtures/auth.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata users pkdb_app/fixtures/users.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata rest_email_auth pkdb_app/fixtures/rest_email_auth.json +``` diff --git a/README.md b/README.md index aa93c46c..5af31e10 100644 --- a/README.md +++ b/README.md @@ -11,9 +11,6 @@ and * [How to cite](https://github.com/matthiaskoenig/pkdb#how-to-cite) * [License](https://github.com/matthiaskoenig/pkdb#license) * [Funding](https://github.com/matthiaskoenig/pkdb#funding) -* [Installation](https://github.com/matthiaskoenig/pkdb#installation) -* [REST API](https://github.com/matthiaskoenig/pkdb#rest-api) -* [Docker interaction](https://github.com/matthiaskoenig/pkdb#docker-interaction) ## Overview [[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) @@ -57,132 +54,4 @@ If you use PK-DB code cite in addition [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1406979.svg)](https://doi.org/10.5281/zenodo.1406979) -## Installation -[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) -PK-DB is deployed via `docker` and `docker-compose`. - -### Requirements -To setup the development server -the following minimal requirements must be fulfilled -- `docker` -- `docker-compose` -- `Python3.6` - -For elasticsearch the following system settings are required -``` -sudo sysctl -w vm.max_map_count=262144 -``` -To set `vm.max_map_count` persistently change the value in -``` -/etc/sysctl.conf -``` -### Start development server -To start the local development server -```bash -# clone or pull the latest code -git clone https://github.com/matthiaskoenig/pkdb.git -cd pkdb -git pull - -# set environment variables -set -a && source .env.local - -# create/rebuild all docker containers -./docker-purge.sh -``` -This setups a clean database and clean volumes and starts the containers for `pkdb_backend`, `pkdb_frontend`, `elasticsearch` and `postgres`. 
-You can check that all the containers are running via -```bash -docker container ls -``` -which lists the current containers -``` -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -bc7f9204468f pkdb_backend "bash -c '/usr/local…" 27 hours ago Up 18 hours 0.0.0.0:8000->8000/tcp pkdb_backend_1 -17b8d243e956 pkdb_frontend "/bin/sh -c 'npm run…" 27 hours ago Up 18 hours 0.0.0.0:8080->8080/tcp pkdb_frontend_1 -7730c6fe2210 elasticsearch:6.8.1 "/usr/local/bin/dock…" 27 hours ago Up 18 hours 9300/tcp, 0.0.0.0:9123->9200/tcp pkdb_elasticsearch_1 -e880fbb0f349 postgres:11.4 "docker-entrypoint.s…" 27 hours ago Up 18 hours 0.0.0.0:5433->5432/tcp pkdb_postgres_1 -``` -The locally running develop version of PK-DB can now be accessed via the web browser from -- frontend: http://localhost:8080 -- backend: http://localhost:8000 - -### Fill database -Due to copyright, licensing and privacy issues this repository does not contain any data. -All data is managed via a separate private repository at https://github.com/matthiaskoenig/pkdb_data. -This also includes the curation scripts and curation workflows. - -If you are interested in curating data or contributing data please contact us at https://livermetabolism.com. - - -## REST API -[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) -PKDB provides a REST API which allows simple interaction with the database and easy access of data. -An overview over the REST endpoints is provided at [`http://localhost:8000/api/v1/`](http://localhost:8000/api/v1/). - -### Query examples -The REST API supports elastisearch queries, with syntax examples -available [here](https://django-elasticsearch-dsl-drf.readthedocs.io/en/latest/basic_usage_examples.html) -* http://localhost:8000/api/v1/comments_elastic/?user_lastname=K%C3%B6nig -* http://localhost:8000/api/v1/characteristica_elastic/?group_pk=5&final=true -* http://localhost:8000/api/v1/characteristica_elastic/?search=group_name:female&final=true -* http://localhost:8000/api/v1/substances_elastic/?search:name=cod -* http://localhost:8000/api/v1/substances_elastic/?search=cod -* http://localhost:8000/api/v1/substances_elastic/?ids=1__2__3 -* http://localhost:8000/api/v1/substances_elastic/?ids=1__2__3&ordering=-name -* http://localhost:8000/api/v1/substances_elastic/?name=caffeine&name=acetaminophen - -### Suggestion example -In addition suggestion queries are possible -* http://localhost:8000/api/v1/substances_elastic/suggest/?search:name=cod - -## Docker interaction -[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) -In the following typical examples to interact with the PK-DB docker containers are provided. - -### Check running containers -To check the running containers use -```bash -watch docker container ls -``` - -### Interactive container mode -```bash -./docker-interactive.sh -``` - -### Container logs -To get access to individual container logs use `docker container logs `. For instance to check the -django backend logs use -```bash -docker container logs pkdb_backend_1 -``` - -### Run command in container -To run commands inside the docker container use -```bash -docker-compose run --rm backend [command] -``` -or to run migrations -```bash -docker-compose run --rm backend python manage.py makemigrations -``` - -### Authentication data -The following examples show how to dump and restore the authentication data. 
- -Dump authentication data -```bash -docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata auth --indent 2 > ./backend/pkdb_app/fixtures/auth.json -docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata users --indent 2 > ./backend/pkdb_app/fixtures/users.json -docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata rest_email_auth --indent 2 > ./backend/pkdb_app/fixtures/rest_email_auth.json -``` - -Restore authentication data -```bash -docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata auth pkdb_app/fixtures/auth.json -docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata users pkdb_app/fixtures/users.json -docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata rest_email_auth pkdb_app/fixtures/rest_email_auth.json -``` - © 2017-2020 Jan Grzegorzewski & Matthias König; https://livermetabolism.com. diff --git a/TODO.md b/TODO.md deleted file mode 100644 index 8e4ecdbc..00000000 --- a/TODO.md +++ /dev/null @@ -1,14 +0,0 @@ -# TODO -- [ ] show info nodes characteristica details -- [ ] show detail views for study, group, individual, reference (fix 404 on detail buttons), remove reference button -- [ ] Fix API url -- [ ] Fix Layout of main component (refactor in smaller components) -- [ ] Better REST documentation -- [ ] better name for zip file, e.g. pkdb_data_2020-08-23.zip -- [ ] fix data issues (missing titles), descriptions of info nodes - ---- -- [ ] cache info nodes in store -- [ ] implement additional validation rules -- [ ] filter by access or license -- [ ] make scatters available diff --git a/backend/download_extra/README.md b/backend/download_extra/README.md new file mode 100644 index 00000000..aa93c46c --- /dev/null +++ b/backend/download_extra/README.md @@ -0,0 +1,188 @@ +[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1406979.svg)](https://doi.org/10.5281/zenodo.1406979) +[![License (LGPL version 3)](https://img.shields.io/badge/license-LGPLv3.0-blue.svg?style=flat-square)](http://opensource.org/licenses/LGPL-3.0) + + Jan Grzegorzewski +and + Matthias König + +# PK-DB - a pharmacokinetics database + +* [Overview](https://github.com/matthiaskoenig/pkdb#overview) +* [How to cite](https://github.com/matthiaskoenig/pkdb#how-to-cite) +* [License](https://github.com/matthiaskoenig/pkdb#license) +* [Funding](https://github.com/matthiaskoenig/pkdb#funding) +* [Installation](https://github.com/matthiaskoenig/pkdb#installation) +* [REST API](https://github.com/matthiaskoenig/pkdb#rest-api) +* [Docker interaction](https://github.com/matthiaskoenig/pkdb#docker-interaction) + +## Overview +[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) +[PK-DB](https://pk-db.com) is a database and web interface for pharmacokinetics data and information from clinical trials +as well as pre-clinical research. PK-DB allows to curate pharmacokinetics data integrated with the +corresponding meta-information +- characteristics of studied patient collectives and individuals (age, bodyweight, smoking status, ...) +- applied interventions (e.g., dosing, substance, route of application) +- measured pharmacokinetics time courses and pharmacokinetics parameters (e.g., clearance, half-life, ...). 
+ +Important features are +- the representation of experimental errors and variation +- the representation and normalisation of units +- annotation of information to biological ontologies +- calculation of pharmacokinetics information from time courses (apparent clearance, half-life, ...) +- a workflow for collaborative data curation +- strong validation rules on data, and simple access via a REST API + +PK-DB is available at https://pk-db.com + +## License +[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) +PK-DB code and documentation is licensed as +* Source Code: [LGPLv3](http://opensource.org/licenses/LGPL-3.0) +* Documentation: [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/) + +## Funding +[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) +Jan Grzegorzewski and Matthias König are supported by the Federal Ministry of Education and Research (BMBF, Germany) +within the research network Systems Medicine of the Liver ([LiSyM](http://www.lisym.org/), grant number 031L0054). + +## How to cite +[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) +If you use PK-DB data or the web interface cite + +> *PK-DB: PharmacoKinetics DataBase for Individualized and Stratified Computational Modeling* +> Jan Grzegorzewski, Janosch Brandhorst, Dimitra Eleftheriadou, Kathleen Green, Matthias König +> bioRxiv 760884; doi: https://doi.org/10.1101/760884 + +If you use PK-DB code cite in addition + +[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1406979.svg)](https://doi.org/10.5281/zenodo.1406979) + +## Installation +[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) +PK-DB is deployed via `docker` and `docker-compose`. + +### Requirements +To setup the development server +the following minimal requirements must be fulfilled +- `docker` +- `docker-compose` +- `Python3.6` + +For elasticsearch the following system settings are required +``` +sudo sysctl -w vm.max_map_count=262144 +``` +To set `vm.max_map_count` persistently change the value in +``` +/etc/sysctl.conf +``` +### Start development server +To start the local development server +```bash +# clone or pull the latest code +git clone https://github.com/matthiaskoenig/pkdb.git +cd pkdb +git pull + +# set environment variables +set -a && source .env.local + +# create/rebuild all docker containers +./docker-purge.sh +``` +This setups a clean database and clean volumes and starts the containers for `pkdb_backend`, `pkdb_frontend`, `elasticsearch` and `postgres`. +You can check that all the containers are running via +```bash +docker container ls +``` +which lists the current containers +``` +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +bc7f9204468f pkdb_backend "bash -c '/usr/local…" 27 hours ago Up 18 hours 0.0.0.0:8000->8000/tcp pkdb_backend_1 +17b8d243e956 pkdb_frontend "/bin/sh -c 'npm run…" 27 hours ago Up 18 hours 0.0.0.0:8080->8080/tcp pkdb_frontend_1 +7730c6fe2210 elasticsearch:6.8.1 "/usr/local/bin/dock…" 27 hours ago Up 18 hours 9300/tcp, 0.0.0.0:9123->9200/tcp pkdb_elasticsearch_1 +e880fbb0f349 postgres:11.4 "docker-entrypoint.s…" 27 hours ago Up 18 hours 0.0.0.0:5433->5432/tcp pkdb_postgres_1 +``` +The locally running develop version of PK-DB can now be accessed via the web browser from +- frontend: http://localhost:8080 +- backend: http://localhost:8000 + +### Fill database +Due to copyright, licensing and privacy issues this repository does not contain any data. 
+All data is managed via a separate private repository at https://github.com/matthiaskoenig/pkdb_data. +This also includes the curation scripts and curation workflows. + +If you are interested in curating data or contributing data please contact us at https://livermetabolism.com. + + +## REST API +[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) +PKDB provides a REST API which allows simple interaction with the database and easy access of data. +An overview over the REST endpoints is provided at [`http://localhost:8000/api/v1/`](http://localhost:8000/api/v1/). + +### Query examples +The REST API supports elastisearch queries, with syntax examples +available [here](https://django-elasticsearch-dsl-drf.readthedocs.io/en/latest/basic_usage_examples.html) +* http://localhost:8000/api/v1/comments_elastic/?user_lastname=K%C3%B6nig +* http://localhost:8000/api/v1/characteristica_elastic/?group_pk=5&final=true +* http://localhost:8000/api/v1/characteristica_elastic/?search=group_name:female&final=true +* http://localhost:8000/api/v1/substances_elastic/?search:name=cod +* http://localhost:8000/api/v1/substances_elastic/?search=cod +* http://localhost:8000/api/v1/substances_elastic/?ids=1__2__3 +* http://localhost:8000/api/v1/substances_elastic/?ids=1__2__3&ordering=-name +* http://localhost:8000/api/v1/substances_elastic/?name=caffeine&name=acetaminophen + +### Suggestion example +In addition suggestion queries are possible +* http://localhost:8000/api/v1/substances_elastic/suggest/?search:name=cod + +## Docker interaction +[[^]](https://github.com/matthiaskoenig/pkdb#pk-db---a-pharmacokinetics-database) +In the following typical examples to interact with the PK-DB docker containers are provided. + +### Check running containers +To check the running containers use +```bash +watch docker container ls +``` + +### Interactive container mode +```bash +./docker-interactive.sh +``` + +### Container logs +To get access to individual container logs use `docker container logs `. For instance to check the +django backend logs use +```bash +docker container logs pkdb_backend_1 +``` + +### Run command in container +To run commands inside the docker container use +```bash +docker-compose run --rm backend [command] +``` +or to run migrations +```bash +docker-compose run --rm backend python manage.py makemigrations +``` + +### Authentication data +The following examples show how to dump and restore the authentication data. + +Dump authentication data +```bash +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata auth --indent 2 > ./backend/pkdb_app/fixtures/auth.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata users --indent 2 > ./backend/pkdb_app/fixtures/users.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py dumpdata rest_email_auth --indent 2 > ./backend/pkdb_app/fixtures/rest_email_auth.json +``` + +Restore authentication data +```bash +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata auth pkdb_app/fixtures/auth.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata users pkdb_app/fixtures/users.json +docker-compose -f $PKDB_DOCKER_COMPOSE_YAML run --rm backend ./manage.py loaddata rest_email_auth pkdb_app/fixtures/rest_email_auth.json +``` + +© 2017-2020 Jan Grzegorzewski & Matthias König; https://livermetabolism.com. 
diff --git a/backend/download_extra/TERMS_OF_USE.md b/backend/download_extra/TERMS_OF_USE.md
new file mode 100644
index 00000000..42c131c3
--- /dev/null
+++ b/backend/download_extra/TERMS_OF_USE.md
@@ -0,0 +1,48 @@
+# PK-DB Terms of Use
+
+## General
+1. PK-DB promotes open science through its mission to provide freely available online services, databases and software relating to data contributed from life science experiments to the largest possible community. Where we present scientific data generated by others we impose no additional restrictions on the use of the contributed data beyond those provided by the data owner.
+
+2. PK-DB expects attribution (e.g. in publications, services or products) for any of its online services, databases or software in accordance with good scientific practice. The expected attribution will be indicated on the appropriate web page.
+
+3. Any feedback provided to PK-DB on its online services will be treated as non-confidential unless the individual or organisation providing the feedback states otherwise.
+
+4. PK-DB is not liable to you, or to third parties claiming through you, for any loss or damage.
+
+5. All scientific data will be made available by a time and release mechanism consistent with the data type (e.g. human data where access needs to be reviewed by a Data Access Committee, pre-publication embargoed for a specific time period).
+
+6. Personal data held by PK-DB will only be released in exceptional circumstances when required by law or judicial or regulatory order. PK-DB may make information about the total volume of usage of particular software or data available to the public and third party organisations who supply the software or databases without details of any individual’s use.
+
+7. While we will retain our commitment to open science, we reserve the right to update these Terms of Use at any time. When alterations are inevitable, we will attempt to give reasonable notice of any changes by placing a notice on our website, but you may wish to check each time you use the website. The date of the most recent revision will appear on this, the ‘PK-DB Terms of Use’ page. If you do not agree to these changes, please do not continue to use our online services. We will also make available an archived copy of the previous Terms of Use for comparison.
+
+8. Any questions or comments concerning these Terms of Use can be addressed to: Matthias König, PK-DB
+
+
+## Online services
+1. Users of PK-DB online services agree not to attempt to use any PK-DB computers, files or networks apart from through the service interfaces provided.
+
+2. The PK-DB websites may use cookies to record information about your online preferences that allow us to personalise your experience of the website. You can control your use of cookies from your web browser, but if you choose not to accept cookies from PK-DB’s websites, you will not be able to take full advantage of all of the website’s features.
+
+3. PK-DB will make all reasonable effort to maintain continuity of these online services and provide adequate warning of any changes or discontinuities. However, PK-DB accepts no responsibility for the consequences of any temporary or permanent discontinuity in service.
+
+4. Any attempt to use PK-DB online services to a level that prevents, or looks likely to prevent, PK-DB providing services to others, will result in the use being blocked. PK-DB will attempt to contact the user to discuss their needs and how (and if) these can be met from other sources.
+
+5. If you post or send offensive, inappropriate or objectionable content anywhere on or to our websites or otherwise engage in any disruptive behaviour on any of our services, we may use your personal information from our security logs to stop such behaviour. Where we reasonably believe that you are or may be in breach of any applicable laws, we may use your personal information to inform relevant third parties about the content and your behaviour.
+
+6. PK-DB has implemented appropriate technical and organisational measures to ensure a level of security which we deem appropriate, taking into account the categories of data we collect and the way we process it.
+
+7. PK-DB does not accept responsibility for the consequences of any breach of the confidentiality of the PK-DB site by third parties.
+
+## Data services
+1. The online data services and databases of PK-DB are generated in part from data contributed by the community, who remain the data owners.
+
+2. When you contribute scientific data to a database through our website or other submission tools, this information will be released at a time and in a manner consistent with the scientific data and we may store it permanently.
+
+3. PK-DB itself places no additional restrictions on the use or redistribution of the data available via its online services other than those provided by the original data owners.
+
+4. PK-DB does not guarantee the accuracy of any provided data, generated database, software or online service, nor the suitability of databases, software and online services for any purpose.
+
+5. The original data may be subject to rights claimed by third parties, including, but not limited to, patent, copyright, other intellectual property rights, biodiversity-related access and benefit-sharing rights. It is the responsibility of users of PK-DB services to ensure that their exploitation of the data does not infringe any of the rights of such third parties.
+
+
+© 2017-2020 Jan Grzegorzewski & Matthias König; https://livermetabolism.com.
\ No newline at end of file
diff --git a/backend/pkdb_app/_version.py b/backend/pkdb_app/_version.py
index c62b6c93..ec4dbea7 100644
--- a/backend/pkdb_app/_version.py
+++ b/backend/pkdb_app/_version.py
@@ -1,4 +1,4 @@
 """
 Definition of version string.
 """
-__version__ = "0.9.1"
+__version__ = "0.9.3"
diff --git a/backend/pkdb_app/behaviours.py b/backend/pkdb_app/behaviours.py
index 013d1372..70b3c92b 100644
--- a/backend/pkdb_app/behaviours.py
+++ b/backend/pkdb_app/behaviours.py
@@ -54,8 +54,8 @@ def study_sid(self):
 def map_field(fields):
     return [f"{field}_map" for field in fields]
 
-
-VALUE_FIELDS_NO_UNIT = ["value", "mean", "median", "min", "max", "sd", "se", "cv"]
+VALUE_FIELDS_SAME_SCALE = ["value", "mean", "median", "min", "max"]
+VALUE_FIELDS_NO_UNIT = VALUE_FIELDS_SAME_SCALE + ["sd", "se", "cv"]
 VALUE_FIELDS = VALUE_FIELDS_NO_UNIT + ["unit"]
 VALUE_MAP_FIELDS = map_field(VALUE_FIELDS)
 
diff --git a/backend/pkdb_app/data/models.py b/backend/pkdb_app/data/models.py
index ec8c9947..837003e6 100644
--- a/backend/pkdb_app/data/models.py
+++ b/backend/pkdb_app/data/models.py
@@ -21,9 +21,8 @@ def subsets(self):
 
 class Data(models.Model):
     """
-    A Data These are mostly scatterplots or timecourses.
+    These are mostly scatters or timecourses.
     """
-
     class DataTypes(models.TextChoices):
         """ Data Types.
""" Scatter = 'scatter', _('scatter') @@ -36,16 +35,11 @@ class DataTypes(models.TextChoices): dataset = models.ForeignKey(DataSet, related_name="data", on_delete=models.CASCADE, null=True) - class SubSet(Accessible): - """ - - """ name = models.CharField(max_length=CHAR_MAX_LENGTH) data = models.ForeignKey(Data, related_name="subsets", on_delete=models.CASCADE) study = models.ForeignKey('studies.Study', on_delete=models.CASCADE, related_name="subsets") - def get_single_dosing(self) -> Intervention: """Returns a single intervention of type dosing if existing. If multiple dosing interventions exist, no dosing is returned!. @@ -99,6 +93,42 @@ def timecourse_extra_no_intervention(self): 'time_unit': 'outputs__time_unit', 'unit': 'outputs__unit', } + def keys_timecourse_representation(self): + return { + "study_sid":"outputs__study__sid", + "study_name": "outputs__study__name", + "outputs_pk": "outputs__pk", + "subset_pk": "subset_id", + "subset_name": "subset__name", + "interventions": "outputs__interventions__pk", + "group_pk": "outputs__group_id", + "individual_pk": "outputs__individual_id", + "normed": 'outputs__normed', + "calculated": 'outputs__calculated', + "tissue": 'outputs__tissue__info_node__sid', + "tissue_label": 'outputs__tissue__info_node__label', + "method": 'outputs__method__info_node__sid', + "method_label": 'outputs__method__info_node__label', + "label": 'outputs__label', + "output_type": 'outputs__output_type', + "time": 'outputs__time', + 'time_unit': 'outputs__time_unit', + "measurement_type": "outputs__measurement_type__info_node__sid", + "measurement__label": "outputs__measurement_type__info_node__label", + "choice": "outputs__choice__info_node__sid", + "choice_label": "outputs__choice__info_node__label", + "substance": "outputs__substance__info_node__sid", + "substance_label": "outputs__substance__info_node__label", + "value": 'outputs__value', + "mean": 'outputs__mean', + "median": 'outputs__median', + "min": 'outputs__min', + "max": 'outputs__max', + 'sd': 'outputs__sd', + 'se': 'outputs__se', + 'cv': 'outputs__cv', + 'unit': 'outputs__unit', + } def _timecourse_extra(self): return { @@ -112,29 +142,43 @@ def _timecourse_extra(self): } - def merge_values(self, values): + @staticmethod + def none_tuple(values): + if all(pd.isna(v) for v in values): + return (None,) + else: + return tuple(values) - def none_tuple(values): - if all(pd.isna(v) for v in values): - return (None,) - else: - return tuple(values) + @staticmethod + def to_list(tdf): + return tdf.apply(SubSet.none_tuple).apply(SubSet.tuple_or_value) - def to_list(tdf): - return tdf.apply(none_tuple).apply(tuple_or_value) + @staticmethod + def tuple_or_value(values): + if len(set(values)) == 1: + return list(values)[0] + return values - def tuple_or_value(values): - if len(set(values)) == 1: - return list(values)[0] + @staticmethod + def _tuple_or_value(values): + if len(set(values)) == 1: + return list(values)[0] + return tuple(values) - return values + @staticmethod + def merge_values(values=None ,df=None, groupby=("outputs__pk",), sort_values=["outputs__interventions__pk","outputs__time"]): - merged_dict = pd.DataFrame(values).groupby(["outputs__pk"], as_index=False).apply(to_list).to_dict("list") + if values: + df =pd.DataFrame(values) + if sort_values: + df = df.sort_values(sort_values) + merged_dict = df.groupby(list(groupby), as_index=False).apply(SubSet.to_list).to_dict("list") for key, values in merged_dict.items(): if key not in ['outputs__time', 'outputs__value', 'outputs__mean', 'outputs__median', 
                           'outputs__cv', 'outputs__sd', 'outputs__se']:
-                merged_dict[key] = tuple_or_value(values)
+
+                merged_dict[key] = SubSet.tuple_or_value(values)
 
                 if all(v is None for v in values):
                     merged_dict[key] = None
@@ -166,23 +210,53 @@ def validate_timecourse(self, timecourse):
                     name = self.get_name(timecourse[key], value)
                 else:
                     name = list(timecourse[key])
-                raise Exception(f"Subset used for timecourse is not unique on '{key}'. Values are {name}. "
-                                f"Check uniqueness of labels for timecourses.")
+                raise ValueError(f"Subset used for timecourse is not unique on '{key}'. Values are '{name}'. "
+                                 f"Check uniqueness of labels for timecourses.")
 
     def timecourse(self):
-        timecourse = self.merge_values(
-            self.data_points.prefetch_related('outputs').values(*self._timecourse_extra().values()))
-        self.reformat_timecourse(timecourse, self._timecourse_extra())
-        self.validate_timecourse(timecourse)
-        return timecourse
+        """Merge the data points of this subset into a single timecourse, sorted by intervention and time."""
+        tc = self.merge_values(
+            self.data_points.prefetch_related('outputs').values(*self._timecourse_extra().values()),
+            sort_values=["outputs__interventions__pk", "outputs__time"]
+        )
+        self.reformat_timecourse(tc, self._timecourse_extra())
+        self.validate_timecourse(tc)
+        return tc
 
     def reformat_timecourse(self, timecourse, mapping):
+        """Rename the merged timecourse keys according to the given mapping."""
         for new_key, old_key in mapping.items():
             timecourse[new_key] = timecourse.pop(old_key)
             if new_key == "interventions":
                 if isinstance(timecourse[new_key], int):
                     timecourse[new_key] = (timecourse[new_key],)
 
+    def timecourse_representation(self):
+        """Flat timecourse representation of this subset (no validation is performed)."""
+        timecourse = self.merge_values(
+            self.data_points.values(*self.keys_timecourse_representation().values()))
+        self.reformat_timecourse(timecourse, self.keys_timecourse_representation())
+        return timecourse
+
+    def keys_scatter_representation(self):
+        """Field mapping for the scatter representation (timecourse keys plus dimension and data point)."""
+        return {**self.keys_timecourse_representation(),
+                "dimension": "dimensions__dimension",
+                "data_point": "pk"
+                }
+
+    def scatter_representation(self):
+        scatter_x = self.merge_values(self.data_points.filter(dimensions__dimension=0).values(*self.keys_scatter_representation().values()), sort_values=None)
+        self.reformat_timecourse(scatter_x, self.keys_scatter_representation())
+
+        scatter_y = self.merge_values(self.data_points.filter(dimensions__dimension=1).prefetch_related('outputs').values(*self.keys_scatter_representation().values()), sort_values=None)
+        self.reformat_timecourse(scatter_y, self.keys_scatter_representation())
+
+        identical_keys = ["study_sid", "study_name", "subset_pk", "subset_name"]
+
+        return {**{k: v for k, v in scatter_x.items() if k in identical_keys},
+                **{f"x_{k}": v for k, v in scatter_x.items() if k not in identical_keys},
+                **{f"y_{k}": v for k, v in scatter_y.items() if k not in identical_keys}}
 
 
 class DataPoint(models.Model):
     """
diff --git a/backend/pkdb_app/data/serializers.py b/backend/pkdb_app/data/serializers.py
index 9c85134a..8f27e5df 100644
--- a/backend/pkdb_app/data/serializers.py
+++ b/backend/pkdb_app/data/serializers.py
@@ -1,4 +1,3 @@
-import collections
 import traceback
 
 from pkdb_app.comments.serializers import DescriptionSerializer, CommentSerializer, CommentElasticSerializer, \
 from pkdb_app.data.models import DataSet, Data, SubSet, Dimension, DataPoint
 from pkdb_app.outputs.models import Output
 from pkdb_app.outputs.pk_calculation import pkoutputs_from_timecourse
-from pkdb_app.outputs.serializers import OUTPUT_FOREIGN_KEYS, SmallOutputSerializer
+from pkdb_app.outputs.serializers 
import OUTPUT_FOREIGN_KEYS from pkdb_app.serializers import WrongKeyValidationSerializer, ExSerializer, StudySmallElasticSerializer from pkdb_app.subjects.models import DataFile from pkdb_app.utils import _create, create_multiple_bulk, create_multiple_bulk_normalized, list_of_pk from rest_framework import serializers import pandas as pd import numpy as np -from django.apps import apps class DimensionSerializer(WrongKeyValidationSerializer): @@ -48,7 +46,6 @@ def to_internal_value(self, data): self.validate_wrong_keys(data) return data - def create(self, validated_data): validated_data["study"] = self.context["study"] @@ -66,15 +63,11 @@ def create(self, validated_data): # subset_instance.save() return subset_instance - def _validate_time(self, time): if any(np.isnan(np.array(time))): raise serializers.ValidationError({"time": "no time points are allowed to be nan", "detail": time}) - - def calculate_pks_from_timecourses(self, subset): - # calculate pharmacokinetics outputs try: outputs = pkoutputs_from_timecourse(subset) @@ -95,7 +88,7 @@ def calculate_pks_from_timecourses(self, subset): ) interventions = [o.pop("interventions") for o in outputs] - outputs_dj = create_multiple_bulk(subset, "timecourse", outputs, Output) + outputs_dj = create_multiple_bulk(subset, "subset", outputs, Output) for intervention, output in zip(interventions,outputs_dj): output.interventions.add(*intervention) @@ -154,58 +147,69 @@ def create_scatter(self, dimensions, shared, subset_instance): raise serializers.ValidationError( f"Outputs have no values on shared field") + subset_outputs = [] for shared_values, shared_data in data_set.groupby(shared_reformated): x_data = shared_data[shared_data["dimension"] == 0] y_data = shared_data[shared_data["dimension"] == 1] - if len(x_data) != 1 or len(y_data) != 1: raise serializers.ValidationError( - f"Dimensions <{dimensions}> do not match in respect to the shared fields." - f"The shared field <{shared}> with values <{shared_values}>" - f" do not uniquely assign 1 x output to 1 y output. " - f"<{dimensions[0]}> has <{len(x_data)}> outputs. <{dimensions[1]}> has <{len(y_data)}> outputs." + f"Dimensions <{dimensions}> do not match in respect to the shared fields." + f"The shared field <{shared}> with values <{shared_values}>" + f" do not uniquely assign 1 x output to 1 y output. " + f"<{dimensions[0]}> has <{len(x_data)}> outputs. <{dimensions[1]}> has <{len(y_data)}> outputs." ) data_point_instance = DataPoint.objects.create(subset=subset_instance) + x_output = study_outputs.get(pk=x_data["id"]) + y_output = study_outputs.get(pk=y_data["id"]) + subset_outputs.append(x_output) + subset_outputs.append(y_output) Dimension.objects.create(dimension=0, study=study, - output=study_outputs.get(pk=x_data["id"]), + output=x_output, data_point=data_point_instance) Dimension.objects.create(dimension=1, study=study, - output=study_outputs.get(pk=y_data["id"]), + output=y_output, data_point=data_point_instance) - + subset_instance.pks.add(*subset_outputs) def create_timecourse(self, subset_instance, dimensions): study = self.context["study"] if len(dimensions) != 1: raise serializers.ValidationError( - f"Timcourses have to be one dimensional. 
Dimensions: <{dimensions}> has a len of <{len(dimensions)}>.") - subset_outputs = study.outputs.filter(normed=True,label=dimensions[0]) - if len(subset_outputs) < 2: + f"Timecourses have to be one-dimensional, but '{len(dimensions)}' dimensions found <{dimensions}>.") + subset_outputs = study.outputs.filter(normed=True, label=dimensions[0]) + if len(subset_outputs) == 0: + raise serializers.ValidationError( + f"Timecourses cannot be empty. No outputs found <{dimensions[0]}>.") + if len(subset_outputs) == 1: raise serializers.ValidationError( - f"Timcourses have to consist at least of two outputs. Consider saving the the outputs with the label <{dimensions[0]}> as output_type=output.") + f"Timecourses require at least two outputs, but only a single output exists in timecourse. " + f"Encode the label <{dimensions[0]}> as 'output_type=output' instead of 'output_type=timecourse'.") subset_instance.pks.add(*subset_outputs) if not subset_outputs.exists(): raise serializers.ValidationError( - {"dataset": {"data": [ - {"subsets": {"name": f" Outputs with label <{dimensions}> do not exist."}}]}}) + {"dataset": { + "data": [ + {"subsets": {"name": f"Outputs with label <{dimensions}> do not exist."}} + ] + }} + ) dimensions = [] for output in subset_outputs.iterator(): data_point_instance = DataPoint.objects.create(subset=subset_instance) - dimension = Dimension(dimension=0,study=study, output=output,data_point=data_point_instance) + dimension = Dimension(dimension=0, study=study, output=output,data_point=data_point_instance) dimensions.append(dimension) Dimension.objects.bulk_create(dimensions) self.calculate_pks_from_timecourses(subset_instance) - class DataSerializer(ExSerializer): comments = CommentSerializer( @@ -250,7 +254,6 @@ def create(self, validated_data): return data_instance - class DataSetSerializer(ExSerializer): data = DataSerializer(many=True, read_only=False, required=False, allow_null=True) @@ -366,14 +369,16 @@ class SubSetElasticSerializer(serializers.ModelSerializer): class Meta: model = SubSet - fields = ["pk","study", + fields = ["pk", "study", "name", "data_type", "array"] def get_array(self,object): #return [[SmallOutputSerializer(point.point,many=True, read_only=True).data] for point in object["array"]] - return [[p.to_dict() for p in point.point] for point in object["array"]] + return [point["point"] for point in object.to_dict()["array"]] + + class DataSetElasticSmallSerializer(serializers.ModelSerializer): descriptions = DescriptionElasticSerializer(many=True, read_only=True) comments = CommentElasticSerializer(many=True, read_only=True) diff --git a/backend/pkdb_app/data/views.py b/backend/pkdb_app/data/views.py index e77be49c..838eaa76 100644 --- a/backend/pkdb_app/data/views.py +++ b/backend/pkdb_app/data/views.py @@ -1,5 +1,9 @@ from django_elasticsearch_dsl_drf.constants import LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE -from django_elasticsearch_dsl_drf.filter_backends import FilteringFilterBackend, IdsFilterBackend, MultiMatchSearchFilterBackend +from django_elasticsearch_dsl_drf.filter_backends import ( + FilteringFilterBackend, + IdsFilterBackend, + MultiMatchSearchFilterBackend +) from pkdb_app.documents import AccessView from pkdb_app.data.documents import DataAnalysisDocument, SubSetDocument from pkdb_app.data.serializers import DataAnalysisSerializer, SubSetElasticSerializer @@ -7,13 +11,8 @@ from pkdb_app.pagination import CustomPagination -############################################################################################### -# Elastic Views 
-###############################################################################################
-
-
-
 class DataAnalysisViewSet(AccessView):
+    swagger_schema = None
     document = DataAnalysisDocument
     serializer_class = DataAnalysisSerializer
     pagination_class = CustomPagination
@@ -23,56 +22,65 @@ class DataAnalysisViewSet(AccessView):
         'study_sid',
         'study_name',
         'output_pk',
-        "data_pk"
-
+        "data_pk",
     )
     multi_match_search_fields = {field: {"boost": 1} for field in search_fields}
     multi_match_options = {
        'operator': 'and'
    }
    filter_fields = {
-        'study_sid': {'field': 'study_sid.raw',
-                      'lookups': [
-                          LOOKUP_QUERY_IN,
-                          LOOKUP_QUERY_EXCLUDE,
-
-                      ],
-                      },
-        'study_name': {'field': 'study_name.raw',
-                       'lookups': [
-                           LOOKUP_QUERY_IN,
-                           LOOKUP_QUERY_EXCLUDE,
-
-                       ],
-                       },
-        'output_pk': {'field': 'output_pk',
-                      'lookups': [
-                          LOOKUP_QUERY_IN,
-                          LOOKUP_QUERY_EXCLUDE,
-                      ],
-                      },
-
+        'study_sid': {
+            'field': 'study_sid.raw',
+            'lookups': [
+                LOOKUP_QUERY_IN,
+                LOOKUP_QUERY_EXCLUDE,
+            ],
+        },
+        'study_name': {
+            'field': 'study_name.raw',
+            'lookups': [
+                LOOKUP_QUERY_IN,
+                LOOKUP_QUERY_EXCLUDE,
+            ],
+        },
+        'output_pk': {
+            'field': 'output_pk',
+            'lookups': [
+                LOOKUP_QUERY_IN,
+                LOOKUP_QUERY_EXCLUDE,
+            ],
+        },
    }


 class SubSetViewSet(AccessView):
+    """Endpoint to query subsets (timecourses and scatters).
+
+    The subsets endpoint gives access to the subset data. A subset is a collection of outputs which can be either a
+    timecourse or a scatter. A timecourse subset consists of outputs measured at different time points. A scatter subset
+    contains correlated data which are commonly displayed as scatter plots.
+    """
     document = SubSetDocument
     serializer_class = SubSetElasticSerializer
     pagination_class = CustomPagination
     lookup_field = "id"
     filter_backends = [FilteringFilterBackend, IdsFilterBackend, MultiMatchSearchFilterBackend]
-    search_fields = ("name",
-                     "data_type",
-                     "study.sid",
-                     "study.name",
-                     "array.data_points.point.outputs.group.name",
-                     "array.data_points.point.outputs.individual.name",
-                     "array.data_points.point.outputs.interventions.name",
-                     "array.data_points.point.outputs.measurement_type.label",
-                     "array.data_points.point.outputs.choice.label",
-                     "array.data_points.point.outputs.substance.label",
-                     "array.data_points.point.outputs.tissue.label",
-                     )
+    search_fields = (
+        "name",
+        "data_type",
+        "study.sid",
+        "study.name",
+        "array.data_points.point.outputs.group.name",
+        "array.data_points.point.outputs.individual.name",
+        "array.data_points.point.outputs.interventions.name",
+        "array.data_points.point.outputs.measurement_type.label",
+        "array.data_points.point.outputs.choice.label",
+        "array.data_points.point.outputs.substance.label",
+        "array.data_points.point.outputs.tissue.label",
+    )
     multi_match_search_fields = {field: {"boost": 1} for field in search_fields}
     multi_match_options = {'operator': 'and'}
-    filter_fields = { "name": "name.raw", "data_type":"data_type.raw"}
+    filter_fields = {
+        "name": "name.raw",
+        "data_type": "data_type.raw"
+    }
diff --git a/backend/pkdb_app/documents.py b/backend/pkdb_app/documents.py
index 519f10ee..31717974 100644
--- a/backend/pkdb_app/documents.py
+++ b/backend/pkdb_app/documents.py
@@ -1,11 +1,14 @@
 import operator
 from functools import reduce
 
+from django.utils.decorators import method_decorator
+from drf_yasg import openapi
+from drf_yasg.utils import swagger_auto_schema
 from rest_framework.generics import get_object_or_404
 
 from django_elasticsearch_dsl import fields, DEDField, Object, collections
-from django_elasticsearch_dsl_drf.viewsets import 
DocumentViewSet +from django_elasticsearch_dsl_drf.viewsets import BaseDocumentViewSet from elasticsearch_dsl import analyzer, token_filter, Q -from pkdb_app.studies.models import Query +from pkdb_app.studies.models import IdCollection from pkdb_app.users.models import PUBLIC from pkdb_app.users.permissions import user_group @@ -13,8 +16,8 @@ elastic_settings = { 'number_of_shards': 1, 'number_of_replicas': 1, - 'max_ngram_diff': 15 - + 'max_ngram_diff': 15, + 'max_terms_count':65536*4, } edge_ngram_filter = token_filter( @@ -56,6 +59,7 @@ def string_field(attr, **kwargs): **kwargs ) + def basic_object(attr, **kwargs): return ObjectField( attr=attr, @@ -66,16 +70,19 @@ def basic_object(attr, **kwargs): **kwargs ) + def info_node(attr, **kwargs): - return fields.ObjectField( + return ObjectField( attr=attr, properties={ 'sid': string_field('sid'), + 'name': string_field('name'), 'label': string_field('label'), }, **kwargs ) + study_field = fields.ObjectField( attr="study", properties={ @@ -84,6 +91,7 @@ def info_node(attr, **kwargs): } ) + def text_field(attr): return fields.TextField( attr=attr, @@ -96,10 +104,8 @@ def text_field(attr): class ObjectField(DEDField, Object): """ - FIXME: DOCUMENT ME - What is this for? I assume to solve some issue with nested ObjectFields. - This looks copy-pasted from some solution. Please provide short description - and link to solution. + + This document Object fields returns a null for any empty field and not an empty dictionary. """ def _get_inner_field_data(self, obj, field_value_to_ignore=None): @@ -146,27 +152,41 @@ def get_value_from_instance(self, instance, field_value_to_ignore=None): return self._get_inner_field_data(objs, field_value_to_ignore) +UUID_PARAM = openapi.Parameter( + 'uuid', + openapi.IN_QUERY, + description="The '/filter/' endpoint returns a UUID. 
Via the UUID the resulting query can be accessed.",
+    type=openapi.TYPE_STRING,
+)
+
+
+@method_decorator(name='list', decorator=swagger_auto_schema(manual_parameters=[UUID_PARAM]))
+class AccessView(BaseDocumentViewSet):
+    """Permissions on views."""
-class AccessView(DocumentViewSet):
 
+    def _get_resource(self):
+        resource = self.request.query_params.get("data_type", self.document.Index.name)
+        if resource == "timecourse":
+            return "timecourses"
+        if resource == "scatter":
+            return "scatters"
+        return resource
 
     def get_queryset(self):
         group = user_group(self.request.user)
 
         if hasattr(self, "initial_data"):
-
             id_queries = [Q('term', pk=pk) for pk in self.initial_data]
             if len(id_queries) > 0:
-                self.search=self.search.query(reduce(operator.ior,id_queries))
+                self.search = self.search.query(reduce(operator.ior, id_queries))
             else:
-                #create search that returns empty query
+                # empty query
                 return self.search.query('match', access__raw="NOTHING")
 
+        _uuid = self.request.query_params.get("uuid", [])
-        _hash = self.request.query_params.get("hash",[])
-        if _hash:
-
-            ids = list(get_object_or_404(Query,hash=_hash).ids)
-
-            #ids = list(IdMulti.objects.filter(query=_hash).values_list("value", flat=True))
+        if _uuid:
+            ids = list(get_object_or_404(IdCollection, uuid=_uuid, resource=self._get_resource()).ids)
 
             _qs_kwargs = {'values': ids}
 
             self.search = self.search.query(
@@ -176,13 +196,9 @@ def get_queryset(self):
         if group == "basic":
             return self.search.query(Q('term', access__raw=PUBLIC) |
                                      Q('term', allowed_users__raw=self.request.user.username))
-
         elif group == "anonymous":
             return self.search.query(Q('term', access__raw=PUBLIC))
-
         elif group in ["admin", "reviewer"]:
             return self.search.query()
-
         else:
             raise AssertionError("wrong group name")
-
diff --git a/backend/pkdb_app/info_nodes/models.py b/backend/pkdb_app/info_nodes/models.py
index 5b645a5e..6f825bc9 100644
--- a/backend/pkdb_app/info_nodes/models.py
+++ b/backend/pkdb_app/info_nodes/models.py
@@ -1,7 +1,7 @@
 """
 Model for the InfoNodes.
-
 """
+import re
 from numbers import Number
 
 import pint
@@ -11,8 +11,9 @@
 from pkdb_app.behaviours import Sidable
 from pkdb_app.info_nodes.units import ureg
-from pkdb_app.users.models import User
-from pkdb_app.utils import CHAR_MAX_LENGTH, CHAR_MAX_LENGTH_LONG, _validate_requried_key
+from pkdb_app.utils import CHAR_MAX_LENGTH, CHAR_MAX_LENGTH_LONG, \
+    _validate_required_key_and_value
+from rest_framework import serializers
 
 
 class Annotation(models.Model):
@@ -24,7 +25,7 @@ class Annotation(models.Model):
     label = models.CharField(max_length=CHAR_MAX_LENGTH, null=True)
     url = models.URLField(max_length=CHAR_MAX_LENGTH_LONG, null=False)
 
-# TODO: add cross reference
+
 class CrossReference(models.Model):
     """ CrossReference. """
     name = models.CharField(max_length=CHAR_MAX_LENGTH, null=False)
@@ -117,7 +118,6 @@ class Method(AbstractInfoNode):
 
 class Route(AbstractInfoNode):
     """ Route Model """
-
     info_node = models.OneToOneField(
         InfoNode, related_name="route", on_delete=models.CASCADE, null=True
     )
@@ -155,7 +155,10 @@ class MeasurementType(AbstractInfoNode):
     NO_UNIT = 'NO_UNIT'  # todo: remove NO_UNIT and add extra keyword or add an extra measurement_type with optional no units.
     TIME_REQUIRED_MEASUREMENT_TYPES = ["cumulative amount", "cumulative metabolic ratio", "recovery", "auc_end"]  # todo: remove and add extra keyword.
-    CAN_NEGATIVE = []  # todo remove
+    CAN_NEGATIVE = [
+        "tmax"  # tmax can be negative due to time offsets, i.e. pre-simulation with subsequent fall after intervention;
+                # this often happens in placebo simulations
+    ]
     ADDITIVE = []  # todo remove
 
     units = models.ManyToManyField(Unit, related_name="measurement_types")
@@ -200,21 +203,45 @@ def dimension_to_n_unit(self):
     def p_unit(unit):
         try:
             p_unit = ureg(unit)
-            p_unit.u
+            p_unit.u  # check if pint unit can be accessed
             return p_unit
         except (UndefinedUnitError, AttributeError):
             if unit == "%":
-                raise ValueError(f"unit: [{unit}] has to written as 'percent'")
+                raise ValueError(f"unit: [{unit}] has to be encoded as 'percent'")
             raise ValueError(f"unit [{unit}] is not defined in unit registry or not allowed.")
 
-    def is_valid_unit(self, unit):
+    def is_valid_unit(self, data):
+        unit = data.get("unit", None)
+        is_valid = self._is_valid_unit(unit)
+        if is_valid:
+            is_valid = self._validate_special(data)
+        return is_valid
+
+    def _validate_special(self, data):
+        unit = data.get("unit", None)
+        if self.info_node.sid == "recovery":
+            factor = self.p_unit(unit).to("dimensionless")
+            for key in ["value", "mean", "median"]:
+                if data.get(key):
+                    if factor.m * data[key] > 2:
+                        msg = f"<{key}> with value <{data[key]}> and unit <{unit}> cannot be greater than " \
+                              f"<{2/factor.m}>. Note that the unit 'dimensionless' = 'none' = 'percent'/100."
+                        raise serializers.ValidationError({"unit": msg})
+
+        return True
+
+    def _is_valid_unit(self, unit):
+        if not re.match("^[\/^*.() µα-ωΑ-Ωa-zA-Z0-9]*$", str(unit)):
+            msg = f"Unit value <{unit}> contains disallowed characters. " \
+                  f"Allowed characters are '[\/^*.() µα-ωΑ-Ωa-zA-Z0-9]'."
+            raise serializers.ValidationError({"unit": msg})
         try:
             p_unit = self.p_unit(unit)
         except pint.DefinitionSyntaxError:
             msg = f"The unit [{unit}] has a wrong syntax."
-            raise ValueError(
+            raise serializers.ValidationError(
                 {"unit": msg})
 
         if len(self.n_units) != 0:
@@ -232,8 +259,9 @@ def is_valid_unit(self, unit):
         else:
             return True
 
-    def validate_unit(self, unit):
-        if not self.is_valid_unit(unit):
+    def validate_unit(self, data):
+        unit = data.get("unit", None)
+        if not self.is_valid_unit(data):
             msg = f"For measurement type `{self.info_node.name}` the unit [{unit}] with dimension {self.unit_dimension(unit)} " \
                   f"is not allowed."
             raise ValueError(
@@ -304,49 +332,50 @@ def validate_choice(self, choice):
     def numeric_fields(self):
         return ["value", "mean", "median", "min", "max", "sd", "se", "cv"]
 
-    @property
-    def can_be_negative(self):
-        return self.info_node.name in self.CAN_NEGATIVE
-
     def validate_numeric(self, data):
+        """Validate the numeric fields of the data.
+
+        This ensures that measurements are non-negative (unless the measurement
+        type is listed in CAN_NEGATIVE); raises ValueError otherwise.
+        :param data:
+        :return:
+        """
         if self.info_node.dtype in [self.info_node.DTypes.NumericCategorical, self.info_node.DTypes.Numeric]:
             for field in self.numeric_fields:
                 value = data.get(field)
 
-                if not self.can_be_negative:
+                # validate that not negative
+                if self.info_node.name not in self.CAN_NEGATIVE:
+                    valid = True
                     if isinstance(value, Number):
-                        rule = value < 0
-                        # for timecourses
-                        # todo: remove?
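+                        # scalar measurement: a negative value is invalid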
+ valid = not (value < 0) elif isinstance(value, list): - rule = any(v < 0 for v in value) - - else: - rule = False + valid = not any(v < 0 for v in value) - if rule: + if not valid: raise ValueError( {field: f"Numeric values need to be positive (>=0) " f"for all measurement types except " f"<{self.CAN_NEGATIVE}>.", "detail": data}) def validate_complete(self, data): + """Complete validation.""" + # check unit - self.validate_unit(data.get("unit", None)) + self.validate_unit(data) self.validate_numeric(data) choice = data.get("choice", None) d_choice = self.validate_choice(choice) - time_unit = data.get("time_unit", None) if time_unit: self.validate_time_unit(time_unit) if self.time_required: details = f"for measurement type `{self.info_node.name}`" - _validate_requried_key(data, "time", details=details) - _validate_requried_key(data, "time_unit", details=details) + _validate_required_key_and_value(data, "time", details=details) + _validate_required_key_and_value(data, "time_unit", details=details) return {"choice":d_choice} diff --git a/backend/pkdb_app/info_nodes/serializers.py b/backend/pkdb_app/info_nodes/serializers.py index e1300313..3c288e84 100644 --- a/backend/pkdb_app/info_nodes/serializers.py +++ b/backend/pkdb_app/info_nodes/serializers.py @@ -4,7 +4,7 @@ from pkdb_app.info_nodes.documents import InfoNodeDocument from pkdb_app.info_nodes.models import InfoNode, Synonym, Annotation, Unit, MeasurementType, Substance, Choice, Route, \ Form, Tissue, Application, Method, CrossReference -from pkdb_app.serializers import WrongKeyValidationSerializer, ExSerializer, SidLabelSerializer +from pkdb_app.serializers import WrongKeyValidationSerializer, ExSerializer, SidNameLabelSerializer from pkdb_app.utils import update_or_create_multiple from rest_framework.fields import empty @@ -87,14 +87,9 @@ class Meta: model = Substance fields = ["mass", "charge", "formula", "derived"] -class LabelSerializer(serializers.Serializer): - label = serializers.CharField() - sid = serializers.CharField() - class Meta: - fields = ["sid", "label"] class MeasurementTypeExtraSerializer(serializers.ModelSerializer): - choices = LabelSerializer(many=True, read_only=True) + choices = SidNameLabelSerializer(many=True, read_only=True) units = UnitSerializer(many=True, allow_null=True, required=False) class Meta: @@ -240,7 +235,7 @@ def to_representation(self, instance): class InfoNodeElasticSerializer(serializers.ModelSerializer): - parents = SidLabelSerializer(many=True, allow_null=True) + parents = SidNameLabelSerializer(many=True, allow_null=True) annotations = AnnotationSerializer(many=True, allow_null=True) synonyms = serializers.SerializerMethodField() substance = SubstanceExtraSerializer(required=False, allow_null=True) @@ -251,8 +246,5 @@ class Meta: model = InfoNode fields = ["sid", "name", "label", "deprecated", "ntype", "dtype", "description", "synonyms", "parents", "annotations", "xrefs","measurement_type", "substance", ] - def get_parents(self, obj): - return [parent["sid"] for parent in obj.parents] - def get_synonyms(self, obj): return [synonym["name"] for synonym in obj.synonyms] \ No newline at end of file diff --git a/backend/pkdb_app/info_nodes/views.py b/backend/pkdb_app/info_nodes/views.py index cc54f16c..b9d71bd2 100644 --- a/backend/pkdb_app/info_nodes/views.py +++ b/backend/pkdb_app/info_nodes/views.py @@ -2,8 +2,8 @@ from django_elasticsearch_dsl_drf.constants import LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE from django_elasticsearch_dsl_drf.filter_backends import FilteringFilterBackend, 
IdsFilterBackend, \ - OrderingFilterBackend, MultiMatchSearchFilterBackend, SearchFilterBackend -from django_elasticsearch_dsl_drf.viewsets import DocumentViewSet + OrderingFilterBackend, MultiMatchSearchFilterBackend, CompoundSearchFilterBackend +from django_elasticsearch_dsl_drf.viewsets import BaseDocumentViewSet from rest_framework import viewsets from pkdb_app.info_nodes.documents import InfoNodeDocument @@ -23,6 +23,7 @@ class InfoNodeViewSet(viewsets.ModelViewSet): + swagger_schema = None permission_classes = (IsAdminUser,) lookup_field = "url_slug" serializer_class = InfoNodeSerializer @@ -36,13 +37,13 @@ def get_serializer(self, *args, **kwargs): return super().get_serializer(*args, **kwargs) -class InfoNodeElasticViewSet(DocumentViewSet): +class InfoNodeElasticViewSet(BaseDocumentViewSet): pagination_class = CustomPagination document = InfoNodeDocument serializer_class = InfoNodeElasticSerializer document_uid_field = "sid__raw" lookup_field = 'sid' - filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, SearchFilterBackend, MultiMatchSearchFilterBackend] + filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, CompoundSearchFilterBackend, MultiMatchSearchFilterBackend] search_fields = ( "sid", "name", diff --git a/backend/pkdb_app/interventions/documents.py b/backend/pkdb_app/interventions/documents.py index c5c7c466..047ea2fe 100644 --- a/backend/pkdb_app/interventions/documents.py +++ b/backend/pkdb_app/interventions/documents.py @@ -72,5 +72,4 @@ class Index: def get_queryset(self): """Not mandatory but to improve performance we can select related in one sql request""" - return super(InterventionDocument, self).get_queryset().select_related( - 'study') + return super(InterventionDocument, self).get_queryset().select_related('study') diff --git a/backend/pkdb_app/interventions/serializers.py b/backend/pkdb_app/interventions/serializers.py index 5de30517..0d8fb6b5 100644 --- a/backend/pkdb_app/interventions/serializers.py +++ b/backend/pkdb_app/interventions/serializers.py @@ -20,13 +20,13 @@ InterventionEx) from ..serializers import ( ExSerializer, - NA_VALUES, StudySmallElasticSerializer, SidLabelSerializer, MappingSerializer) + NA_VALUES, StudySmallElasticSerializer, SidNameLabelSerializer, MappingSerializer) from ..subjects.models import DataFile # ---------------------------------- # Serializer FIELDS # ---------------------------------- -from ..utils import list_of_pk, list_duplicates, _validate_requried_key, _create, create_multiple_bulk, \ - create_multiple_bulk_normalized +from ..utils import list_of_pk, list_duplicates, _validate_required_key, _create, create_multiple_bulk, \ + create_multiple_bulk_normalized, _validate_required_key_and_value MEDICATION = "medication" DOSING = "dosing" @@ -81,20 +81,20 @@ def to_internal_value(self, data): data = self.retransform_ex_fields(data) self.validate_wrong_keys(data, additional_fields=InterventionExSerializer.Meta.fields) - _validate_requried_key(data, "measurement_type") + _validate_required_key(data, "measurement_type") measurement_type = data.get("measurement_type") if any([measurement_type == MEDICATION, measurement_type == DOSING]): - _validate_requried_key(data, "substance") - _validate_requried_key(data, "route") - _validate_requried_key(data, "value") - _validate_requried_key(data, "unit") + _validate_required_key_and_value(data, "substance") + _validate_required_key_and_value(data, "route") + _validate_required_key_and_value(data, "value") + 
_validate_required_key_and_value(data, "unit") if measurement_type == DOSING: - _validate_requried_key(data, "form") - _validate_requried_key(data, "application") - _validate_requried_key(data, "time") - _validate_requried_key(data, "time_unit") + _validate_required_key_and_value(data, "form") + _validate_required_key_and_value(data, "application") + _validate_required_key_and_value(data, "time") + _validate_required_key_and_value(data, "time_unit") application = data["application"] allowed_applications = ["constant infusion", "single dose"] if not application in allowed_applications: @@ -119,7 +119,6 @@ def validate(self, attrs): class InterventionExSerializer(MappingSerializer): - ###### source = serializers.PrimaryKeyRelatedField( queryset=DataFile.objects.all(), required=False, allow_null=True ) @@ -145,6 +144,7 @@ class Meta: EXTERN_FILE_FIELDS + ["interventions", "comments", "descriptions"] ) + def validate_image(self, value): self._validate_image(value) return value @@ -249,11 +249,9 @@ def create(self, validated_data): return interventionset -############################################################################################### +# ############################################################################################## # Elastic Serializer -############################################################################################### - - +# ############################################################################################## class InterventionSetElasticSmallSerializer(serializers.ModelSerializer): descriptions = DescriptionElasticSerializer(many=True, read_only=True) comments = CommentElasticSerializer(many=True, read_only=True) @@ -278,12 +276,12 @@ class InterventionElasticSerializer(serializers.ModelSerializer): pk = serializers.IntegerField() study = StudySmallElasticSerializer(read_only=True) - measurement_type = SidLabelSerializer(read_only=True) - route = SidLabelSerializer(allow_null=True, read_only=True) - application = SidLabelSerializer(allow_null=True, read_only=True) - form = SidLabelSerializer(allow_null=True, read_only=True) - substance = SidLabelSerializer(allow_null=True, read_only=True) - choice = SidLabelSerializer(allow_null=True, read_only=True) + measurement_type = SidNameLabelSerializer(read_only=True) + route = SidNameLabelSerializer(allow_null=True, read_only=True) + application = SidNameLabelSerializer(allow_null=True, read_only=True) + form = SidNameLabelSerializer(allow_null=True, read_only=True) + substance = SidNameLabelSerializer(allow_null=True, read_only=True) + choice = SidNameLabelSerializer(allow_null=True, read_only=True) value = serializers.FloatField(allow_null=True) mean = serializers.FloatField(allow_null=True) @@ -321,6 +319,9 @@ class Meta: fields = ["study_sid", "study_name", "intervention_pk", "raw_pk", "normed"] + INTERVENTION_FIELDS + MEASUREMENTTYPE_FIELDS + + + def to_representation(self, instance): rep = super().to_representation(instance) for field in VALUE_FIELDS_NO_UNIT + ["time"]: diff --git a/backend/pkdb_app/interventions/views.py b/backend/pkdb_app/interventions/views.py index 14338404..6ca4ad6f 100644 --- a/backend/pkdb_app/interventions/views.py +++ b/backend/pkdb_app/interventions/views.py @@ -8,11 +8,12 @@ from ..pagination import CustomPagination -############################################################################################### -# Elastic Views -############################################################################################### - class 
ElasticInterventionViewSet(AccessView):
+    """Endpoint to query interventions.
+
+    Interventions encode what was performed on the subjects, e.g. which dose of a
+    substance was applied.
+    """
     document = InterventionDocument
     serializer_class = InterventionElasticSerializer
     pagination_class = CustomPagination
@@ -28,21 +29,20 @@ class ElasticInterventionViewSet(AccessView):
         "tissue.label",
         "application.label",
         'route.label',
-        'time_unit')
+        'time_unit'
+    )
     multi_match_search_fields = {field: {"boost": 1} for field in search_fields}
     filter_fields = {
-
-        'pk': {'field': 'pk',
-               'lookups': [
-                   LOOKUP_QUERY_IN,
-                   LOOKUP_QUERY_EXCLUDE,
-
-               ],
-               },
+        'pk': {
+            'field': 'pk',
+            'lookups': [
+                LOOKUP_QUERY_IN,
+                LOOKUP_QUERY_EXCLUDE,
+            ],
+        },
         'normed': 'normed',
         'name': 'name.raw',
         'choice': 'choice.raw',
-
         'time_unit': 'time_unit.raw',
         'time': 'time',
         'value': 'value',
@@ -59,29 +59,45 @@ class ElasticInterventionViewSet(AccessView):
         'route': 'route.name.raw',
         'application': 'application.name.raw',
         'measurement_type': 'measurement_type.name.raw',
-
-        'substance_sid': {'field': 'substance.sid.raw',
-                          'lookups': [LOOKUP_QUERY_IN, ], },
-        'form_sid': {'field': 'form.sid.raw',
-                     'lookups': [LOOKUP_QUERY_IN, ], },
-        'route_sid': {'field': 'route.sid.raw',
-                      'lookups': [LOOKUP_QUERY_IN, ], },
-        'application_sid': {'field': 'application.sid.raw',
-                            'lookups': [LOOKUP_QUERY_IN, ], },
-
-        'measurement_type_sid': {'field': 'measurement_type.sid.raw',
-                                 'lookups': [LOOKUP_QUERY_IN, ], },
+        'substance_sid': {
+            'field': 'substance.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'form_sid': {
+            'field': 'form.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'route_sid': {
+            'field': 'route.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'application_sid': {
+            'field': 'application.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'measurement_type_sid': {
+            'field': 'measurement_type.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN]
+        },
+    }
+    ordering_fields = {
+        'name': 'name.raw',
+        'measurement_type': 'measurement_type.raw',
+        'choice': 'choice.raw',
+        'normed': 'normed',
+        'application': 'application.raw',
+        'substance': 'substance.raw',
+        'value': 'value'
     }
-    ordering_fields = {'name': 'name.raw',
-                       'measurement_type': 'measurement_type.raw',
-                       'choice': 'choice.raw',
-                       'normed': 'normed',
-                       'application': 'application.raw',
-                       'substance': 'substance.raw',
-                       'value': 'value'}


 class ElasticInterventionAnalysisViewSet(AccessView):
+    """Endpoint to query intervention analysis data.
+
+    The intervention endpoint gives access to the intervention data. This is mostly
+    a dosing of a substance to the body of the subject, but can also be a less
+    specific intervention such as a meal or exercise.
+    """
+    swagger_schema = None
     document = InterventionDocument
     serializer_class = InterventionElasticSerializerAnalysis
     pagination_class = CustomPagination
diff --git a/backend/pkdb_app/outputs/documents.py b/backend/pkdb_app/outputs/documents.py
index a67eff94..529f98c9 100644
--- a/backend/pkdb_app/outputs/documents.py
+++ b/backend/pkdb_app/outputs/documents.py
@@ -68,8 +68,7 @@ class Django:

     def get_queryset(self):
         """Not mandatory but to improve performance we can select related in one sql request"""
-        return super(OutputDocument, self).get_queryset().select_related(
-            'study', 'individual', 'group',).prefetch_related('interventions')
+        return super(OutputDocument, self).get_queryset()#.prefetch_related("interventions").select_related('study', 'individual__name', 'group').
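The `LOOKUP_QUERY_IN` and `LOOKUP_QUERY_EXCLUDE` lookups declared above are exposed as `__in`/`__exclude` query parameters with `__`-separated values, following django_elasticsearch_dsl_drf conventions. A minimal client-side sketch; the host and route prefix are assumptions, not taken from this changeset:

```python
# Sketch: filtering interventions via the lookups defined above.
import requests

response = requests.get(
    "http://localhost:8000/api/v1/interventions/",  # assumed host/route
    params={
        "substance_sid__in": "caffeine__codeine",  # LOOKUP_QUERY_IN on substance.sid.raw
        "normed": "true",                          # plain field filter
    },
)
print(response.json())
```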
class Index: name = 'outputs' @@ -138,5 +137,4 @@ class Index: def get_queryset(self): """Not mandatory but to improve performance we can select related in one sql request""" - return super(OutputInterventionDocument, self).get_queryset().select_related( - 'intervention', 'output') \ No newline at end of file + return super(OutputInterventionDocument, self).get_queryset().select_related('intervention', 'output') diff --git a/backend/pkdb_app/outputs/models.py b/backend/pkdb_app/outputs/models.py index 831b0f31..0f409ade 100644 --- a/backend/pkdb_app/outputs/models.py +++ b/backend/pkdb_app/outputs/models.py @@ -110,7 +110,7 @@ class OutputTypes(models.TextChoices): group = models.ForeignKey(Group, null=True, blank=True, on_delete=models.CASCADE) individual = models.ForeignKey(Individual, null=True, blank=True, on_delete=models.CASCADE) interventions = models.ManyToManyField(Intervention, through="OutputIntervention", related_name="outputs") - timecourse = models.ForeignKey('data.Subset', on_delete=models.CASCADE, null=True, blank=True, related_name="pks") + subset = models.ForeignKey('data.Subset', on_delete=models.CASCADE, null=True, blank=True, related_name="pks") tissue = models.ForeignKey(Tissue, related_name="outputs", null=True, blank=True, on_delete=models.CASCADE) method = models.ForeignKey(Method, related_name="outputs", null=True, blank=True, on_delete=models.CASCADE) diff --git a/backend/pkdb_app/outputs/pk_calculation.py b/backend/pkdb_app/outputs/pk_calculation.py index 0b0b31ea..aea19396 100644 --- a/backend/pkdb_app/outputs/pk_calculation.py +++ b/backend/pkdb_app/outputs/pk_calculation.py @@ -2,13 +2,16 @@ Calculate pharmacokinetics """ from typing import List, Dict +from rest_framework import serializers +import logging import warnings import numpy as np -import pandas as pd from django.apps import apps from pkdb_app.info_nodes.units import ureg from pkdb_analysis.pk import pharmacokinetics +logger = logging.getLogger(__name__) + MeasurementType = apps.get_model('info_nodes.MeasurementType') Substance = apps.get_model('info_nodes.Substance') Method = apps.get_model('info_nodes.Method') @@ -28,28 +31,24 @@ def pkoutputs_from_timecourse(subset:Subset) -> List[Dict]: """ outputs = [] dosing = subset.get_single_dosing() + timecourse = subset.timecourse() + # dosing information must exist if not dosing: - # dosing information must exist return outputs - # pharmacokinetics are only calculated on normalized concentrations - timecourse = subset.timecourse() - - - if timecourse["measurement_type_name"] == "concentration": + # pharmacokinetics are only calculated on normalized concentrations + if timecourse["measurement_type_name"] == "concentration": variables = _timecourse_to_pkdict(timecourse, dosing) ctype = variables.pop("ctype", None) + if dosing.application.info_node.name == "single dose" and timecourse["substance"] == dosing.substance.pk: pkinf = pharmacokinetics.TimecoursePK(**variables) - else: _ = variables.pop("dosing", None) _ = variables.pop("intervention_time", None) pkinf = pharmacokinetics.TimecoursePKNoDosing(**variables) - - pk = pkinf.pk key_mapping = { @@ -132,7 +131,7 @@ def _timecourse_to_pkdict(tc: dict, dosing) -> Dict: # pharmacokinetics is only calculated for single dose experiments # where the applied substance is the measured substance! 
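The dose handling just below wraps values and units into pint quantities (assuming `Q_` is `ureg.Quantity` for the shared registry imported from `pkdb_app.info_nodes.units`). A self-contained sketch of the pattern with a fresh registry and made-up values:

```python
from pint import UnitRegistry

ureg = UnitRegistry()
Q_ = ureg.Quantity  # same convention as Q_(dosing.value, dosing.unit) below

dose = Q_(100.0, "mg")           # applied dose
concentration = Q_(2.5, "mg/l")  # measured plasma concentration
# unit-safe arithmetic, e.g. an apparent volume of distribution
volume = (dose / concentration).to("liter")
print(volume)  # 40.0 liter
```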
- if MeasurementType.objects.get(info_node__name="restricted dosing").is_valid_unit(dosing.unit): + if MeasurementType.objects.get(info_node__name="restricted dosing")._is_valid_unit(dosing.unit): if dosing.value is not None: pk_dict["dose"] = Q_(dosing.value, dosing.unit) else: diff --git a/backend/pkdb_app/outputs/serializers.py b/backend/pkdb_app/outputs/serializers.py index d7203f62..613efb80 100644 --- a/backend/pkdb_app/outputs/serializers.py +++ b/backend/pkdb_app/outputs/serializers.py @@ -4,7 +4,6 @@ import warnings -from django.db.models import Count from rest_framework import serializers from pkdb_app import utils @@ -21,14 +20,14 @@ CommentElasticSerializer from ..interventions.models import Intervention from ..serializers import ( - ExSerializer, StudySmallElasticSerializer, SidLabelSerializer, SidNameSerializer) + ExSerializer, StudySmallElasticSerializer, SidNameLabelSerializer, SidNameSerializer) from ..subjects.models import Group, DataFile, Individual from ..subjects.serializers import ( EXTERN_FILE_FIELDS, GroupSmallElasticSerializer, IndividualSmallElasticSerializer) # ---------------------------------- # Serializer FIELDS # ---------------------------------- -from ..utils import list_of_pk, _validate_requried_key, create_multiple, _create, create_multiple_bulk_normalized, \ +from ..utils import list_of_pk, _validate_required_key, create_multiple, _create, create_multiple_bulk_normalized, \ create_multiple_bulk EXTRA_FIELDS = ["tissue", "method", "label","output_type"] @@ -104,12 +103,12 @@ def validate(self, attrs): self._validate_group_output(attrs) self.validate_group_individual_output(attrs) - _validate_requried_key(attrs, "measurement_type") + _validate_required_key(attrs, "measurement_type") - _validate_requried_key(attrs, "substance") - _validate_requried_key(attrs, "tissue") - _validate_requried_key(attrs, "interventions") - _validate_requried_key(attrs, "output_type") + _validate_required_key(attrs, "substance") + _validate_required_key(attrs, "tissue") + _validate_required_key(attrs, "interventions") + _validate_required_key(attrs, "output_type") self._validate_timecourse(attrs) @@ -132,15 +131,12 @@ def validate(self, attrs): def _validate_timecourse(self, attrs): if attrs["output_type"] == Output.OutputTypes.Timecourse: - _validate_requried_key(attrs,"label") + _validate_required_key(attrs, "label") if not attrs.get("label",None): msg = "Label is required on on output_type=timecourse" raise serializers.ValidationError(msg) - - - class OutputExSerializer(ExSerializer): source = serializers.PrimaryKeyRelatedField( @@ -326,17 +322,17 @@ class Meta: "calculated"] + OUTPUT_FIELDS + MEASUREMENTTYPE_FIELDS read_only_fields = fields -class SmallOutputSerializer(serializers.ModelSerializer): +class SmallOutputSerializer(serializers.ModelSerializer): group = GroupSmallElasticSerializer() individual = IndividualSmallElasticSerializer() interventions = InterventionSmallElasticSerializer(many=True) - substance = SidLabelSerializer(allow_null=True) - measurement_type = SidLabelSerializer(allow_null=True) - tissue = SidLabelSerializer(allow_null=True) - method = SidLabelSerializer(allow_null=True) - choice = SidLabelSerializer(allow_null=True) + substance = SidNameLabelSerializer(allow_null=True) + measurement_type = SidNameLabelSerializer(allow_null=True) + tissue = SidNameLabelSerializer(allow_null=True) + method = SidNameLabelSerializer(allow_null=True) + choice = SidNameLabelSerializer(allow_null=True) value = serializers.FloatField(allow_null=True) mean = 
serializers.FloatField(allow_null=True) @@ -360,19 +356,19 @@ class Meta: read_only_fields = fields - class OutputElasticSerializer(serializers.ModelSerializer): + """Main serializer for outputs.""" study = StudySmallElasticSerializer() group = GroupSmallElasticSerializer() individual = IndividualSmallElasticSerializer() interventions = InterventionSmallElasticSerializer(many=True) - substance = SidLabelSerializer(allow_null=True) - measurement_type = SidLabelSerializer(allow_null=True) - tissue = SidLabelSerializer(allow_null=True) - method = SidLabelSerializer(allow_null=True) - choice = SidLabelSerializer(allow_null=True) + substance = SidNameLabelSerializer(allow_null=True) + measurement_type = SidNameLabelSerializer(allow_null=True) + tissue = SidNameLabelSerializer(allow_null=True) + method = SidNameLabelSerializer(allow_null=True) + choice = SidNameLabelSerializer(allow_null=True) value = serializers.FloatField(allow_null=True) mean = serializers.FloatField(allow_null=True) diff --git a/backend/pkdb_app/outputs/views.py b/backend/pkdb_app/outputs/views.py index 2e1c45e7..8208d1b8 100644 --- a/backend/pkdb_app/outputs/views.py +++ b/backend/pkdb_app/outputs/views.py @@ -8,62 +8,53 @@ from ..pagination import CustomPagination -############################################################################################### -# Elastic Views -############################################################################################### - - class OutputInterventionViewSet(AccessView): + """Elastic view for OutputIntervention.""" + swagger_schema = None document = OutputInterventionDocument serializer_class = OutputInterventionSerializer pagination_class = CustomPagination lookup_field = "id" filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, MultiMatchSearchFilterBackend] - search_fields = ('study', 'measurement_type', 'substance', 'group_name', 'individual_name', "tissue", 'time_unit', - 'intervention') + search_fields = ( + 'study', + 'measurement_type', + 'substance', + 'group_name', + 'individual_name', + 'tissue', + 'time_unit', + 'intervention' + ) multi_match_search_fields = {field: {"boost": 1} for field in search_fields} multi_match_options = { 'operator': 'and' } filter_fields = { - 'study_sid': {'field': 'study_sid.raw', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - - ], - }, - 'study_name': {'field': 'study_name.raw', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - - ], - }, - 'output_pk': {'field': 'output_pk', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - ], - }, - 'intervention_pk': {'field': 'intervention_pk', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - ], - }, - 'group_pk': {'field': 'group_pk', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - ], - }, - - 'individual_pk': {'field': 'individual_pk', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - ]}, + 'study_sid': { + 'field': 'study_sid.raw', + 'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE], + }, + 'study_name': { + 'field': 'study_name.raw', + 'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE], + }, + 'output_pk': { + 'field': 'output_pk', + 'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE], + }, + 'intervention_pk': { + 'field': 'intervention_pk', + 'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE], + }, + 'group_pk': { + 'field': 'group_pk', + 'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE], + }, + 'individual_pk': { + 'field': 'individual_pk', + 'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE], + 
},
         'normed': 'normed',
         'calculated': 'calculated',
         'tissue': "tissue.raw",
@@ -74,46 +65,57 @@ class OutputInterventionViewSet(AccessView):
         'choice': 'choice.raw',
         'unit': 'unit.raw',
     }
+    ordering_fields = {
+        'measurement_type': 'measurement_type.raw',
+        'tissue': 'tissue.raw',
+        'substance': 'substance.raw',
+        'group_name': 'group_name.raw',
+        'individual_name': 'individual_name.raw',
+        'value': 'value',
+    }

-    ordering_fields = {'measurement_type': 'measurement_type.raw',
-                       'tissue': 'tissue.raw',
-                       'substance': 'substance.raw',
-                       'group_name': 'group_name.raw',
-                       'individual_name': 'individual_name.raw',
-                       'value': 'value',
-                       }
-
-# Elastic
-common_search_fields = (
-    'study.sid',
-    'study.name',
-    'measurement_type.label',
-    'substance.label',
-    "tissue.label",
-    "choice.label",
-    'time_unit',
-    'group.name',
-    'individual.name',
-    'interventions.name')

+class ElasticOutputViewSet(AccessView):
+    """Endpoint to query outputs.

-common_filter_fields = {
+    The outputs endpoint gives access to the output data. Outputs generally describe what has been measured.
+    This includes more complex results which cannot be directly measured but are calculated from the measured data.
+    Each output references its related subjects and interventions.
+    """
+    document = OutputDocument
+    serializer_class = OutputElasticSerializer
+    pagination_class = CustomPagination
+    lookup_field = "id"
+    filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, MultiMatchSearchFilterBackend]
+    search_fields = (
+        'study.sid',
+        'study.name',
+        'measurement_type.label',
+        'substance.label',
+        "tissue.label",
+        "choice.label",
+        'time_unit',
+        'group.name',
+        'individual.name',
+        'interventions.name',
+    )
+    multi_match_search_fields = {field: {"boost": 1} for field in search_fields}
+    multi_match_options = {'operator': 'and'}
+    filter_fields = {
         'study_name': 'study.name.raw',
         'study_sid': 'study.sid.raw',
-        'group_pk': {'field': 'group.pk',
-                     'lookups': [
-                         LOOKUP_QUERY_IN,
-                     ],
-                     },
-        'individual_pk': {'field': 'individual.pk',
-                          'lookups': [
-                              LOOKUP_QUERY_IN,
-                          ]},
-        'interventions_pk': {'field': 'interventions.pk',
-                             'lookups': [
-                                 LOOKUP_QUERY_IN,
-                             ],
-                             },
+        'group_pk': {
+            'field': 'group.pk',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'individual_pk': {
+            'field': 'individual.pk',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'interventions_pk': {
+            'field': 'interventions.pk',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
         'tissue': "tissue.name.raw",
         'time': 'time.raw',
         'choice': 'choice.name.raw',
@@ -121,37 +123,32 @@ class OutputInterventionViewSet(AccessView):
         'calculated': 'calculated',
         'unit': 'unit.raw',
         'substance': 'substance.name.raw',
-        'output_type': {'field': 'output_type.raw',
-                        'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE], },
-        'substance_sid': {'field': 'substance.sid.raw',
-                          'lookups': [LOOKUP_QUERY_IN, ], },
+        'output_type': {
+            'field': 'output_type.raw',
+            'lookups': [LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE],
+        },
+        'substance_sid': {
+            'field': 'substance.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
         'measurement_type': 'measurement_type.sid.raw',
-        'measurement_type_sid': {'field': 'measurement_type.sid.raw',
-                                 'lookups': [LOOKUP_QUERY_IN, ], },
-        'method_sid': {'field': 'method.sid.raw',
-                       'lookups': [LOOKUP_QUERY_IN, ], },
-        'tissue_sid': {'field': 'tissue.sid.raw',
-                       'lookups': [LOOKUP_QUERY_IN, ], },
+        'measurement_type_sid': {
+            'field': 'measurement_type.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'method_sid': {
+            'field': 'method.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'tissue_sid': {
+            'field': 'tissue.sid.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
     }
-
-common_ordering_fields = {
+    ordering_fields = {
         'measurement_type': 'measurement_type.name.raw',
         'tissue': 'tissue.name.raw',
         'group': 'group.name',
         'individual': 'individual.name',
         'substance': 'substance.name',
     }
-
-
-class ElasticOutputViewSet(AccessView):
-    document = OutputDocument
-    serializer_class = OutputElasticSerializer
-    pagination_class = CustomPagination
-    lookup_field = "id"
-    filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, MultiMatchSearchFilterBackend]
-    search_fields = common_search_fields
-    multi_match_search_fields = {field: {"boost": 1} for field in search_fields}
-    multi_match_options = {'operator': 'and'}
-    filter_fields = common_filter_fields
-    ordering_fields = common_ordering_fields
-
diff --git a/backend/pkdb_app/pagination.py b/backend/pkdb_app/pagination.py
index 5544eed3..1a14c42c 100644
--- a/backend/pkdb_app/pagination.py
+++ b/backend/pkdb_app/pagination.py
@@ -14,4 +14,4 @@ def get_paginated_response(self, data):
                 "prev_page_url": self.get_previous_link(),
                 "data": {"count": self.page.paginator.count, "data": data},
             }
-        )
+        )
\ No newline at end of file
diff --git a/backend/pkdb_app/response_pagination.py b/backend/pkdb_app/response_pagination.py
new file mode 100644
index 00000000..d4d557d9
--- /dev/null
+++ b/backend/pkdb_app/response_pagination.py
@@ -0,0 +1,33 @@
+from collections import OrderedDict
+
+from drf_yasg import openapi
+from drf_yasg.inspectors import DjangoRestResponsePagination
+from rest_framework.pagination import LimitOffsetPagination, PageNumberPagination, CursorPagination
+class ResponsePagination(DjangoRestResponsePagination):
+    """Provides response schema pagination wrapping for django-rest-framework's LimitOffsetPagination,
+    PageNumberPagination and CursorPagination.
+    """
+
+    def get_paginated_response(self, paginator, response_schema):
+        assert response_schema.type == openapi.TYPE_ARRAY, "array return expected for paged response"
+        paged_schema = None
+        if isinstance(paginator, (LimitOffsetPagination, PageNumberPagination, CursorPagination)):
+            has_count = not isinstance(paginator, CursorPagination)
+            paged_schema = openapi.Schema(
+                type=openapi.TYPE_OBJECT,
+                properties=OrderedDict((
+                    ('current_page', openapi.Schema(type=openapi.TYPE_INTEGER)),
+                    ('last_page', openapi.Schema(type=openapi.TYPE_INTEGER)),
+                    ('next_page_url', openapi.Schema(type=openapi.TYPE_STRING, format=openapi.FORMAT_URI, x_nullable=True)),
+                    ('prev_page_url', openapi.Schema(type=openapi.TYPE_STRING, format=openapi.FORMAT_URI, x_nullable=True)),
+                    ('data', openapi.Schema(
+                        type=openapi.TYPE_OBJECT,
+                        properties=OrderedDict((
+                            ('count', openapi.Schema(type=openapi.TYPE_INTEGER) if has_count else None),
+                            ('data', response_schema))),
+                        required=['data']))
+                )),
+                required=['data']
+            )
+
+        return paged_schema
\ No newline at end of file
diff --git a/backend/pkdb_app/serializers.py b/backend/pkdb_app/serializers.py
index ec3d2247..358daee8 100644
--- a/backend/pkdb_app/serializers.py
+++ b/backend/pkdb_app/serializers.py
@@ -549,7 +549,8 @@ def _validate_disabled_data(self, data_dict, disabled):
         )
 
     def _validate_individual_characteristica(self, data_dict):
-        disabled = ["sd", "se", "min", "max", "cv", "mean", "median"]
+        disabled = ["sd", "se", "min", "cv", "mean", "median"]
+        # max is allowed and represents the detection limit.
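The rule above (summary statistics are disabled for individual subjects, while `max` stays allowed as the detection limit) relies on the helpers `map_field` and `_validate_disabled_data`. A hedged sketch of their assumed semantics; the names mirror `pkdb_app.utils`, but the bodies here are illustrative, not the actual implementation:

```python
from typing import Iterable, List


def map_field(fields: Iterable[str]) -> List[str]:
    """Assumed behavior: expand field names to their '*_map' counterparts."""
    return [f"{field}_map" for field in fields]


def validate_disabled(data: dict, disabled: Iterable[str]) -> None:
    """Reject any disabled key that carries a value."""
    wrong = sorted(key for key in disabled if data.get(key) is not None)
    if wrong:
        raise ValueError(f"Fields not allowed on individual data: {wrong}")


disabled = ["sd", "se", "min", "cv", "mean", "median"]
disabled += map_field(disabled)
validate_disabled({"value": 1.0, "max": 0.5}, disabled)  # detection limit passes
try:
    validate_disabled({"mean": 1.0}, disabled)  # statistics are rejected
except ValueError as err:
    print(err)
```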
disabled += map_field(disabled)
 
         self._validate_disabled_data(data_dict, disabled)
@@ -749,11 +750,11 @@ class SidNameSerializer(serializers.Serializer):
     sid = serializers.CharField(allow_null=True)
     name = serializers.CharField(allow_null=True)
 
-class SidLabelSerializer(serializers.Serializer):
+class SidNameLabelSerializer(serializers.Serializer):
     sid = serializers.CharField(allow_null=True)
+    name = serializers.CharField(allow_null=True)
     label = serializers.CharField(allow_null=True)
 
-
 def validate_dict(dic):
     if not isinstance(dic, dict):
         raise serializers.ValidationError(
diff --git a/backend/pkdb_app/settings.py b/backend/pkdb_app/settings.py
index 20a73984..5e462a44 100755
--- a/backend/pkdb_app/settings.py
+++ b/backend/pkdb_app/settings.py
@@ -22,6 +22,7 @@
     # Third party apps
     "rest_framework",  # utilities for rest apis
+    'drf_yasg',  # swagger api
     "django_filters",  # for filtering rest endpoints
     "corsheaders",
@@ -172,6 +173,24 @@
         "django.db.backends": {"handlers": ["console"], "level": "INFO"},
     },
 }
+SWAGGER_SETTINGS = {
+    'USE_SESSION_AUTH': False,
+
+    'DEFAULT_FIELD_INSPECTORS': [
+        'drf_yasg.inspectors.CamelCaseJSONFilter',
+        'drf_yasg.inspectors.InlineSerializerInspector',
+        'drf_yasg.inspectors.RelatedFieldInspector',
+        'drf_yasg.inspectors.ChoiceFieldInspector',
+        'drf_yasg.inspectors.FileFieldInspector',
+        'drf_yasg.inspectors.DictFieldInspector',
+        'drf_yasg.inspectors.SimpleFieldInspector',
+        'drf_yasg.inspectors.StringDefaultFieldInspector',
+    ],
+    'DEFAULT_PAGINATOR_INSPECTORS': [
+        'pkdb_app.response_pagination.ResponsePagination',
+        'drf_yasg.inspectors.CoreAPICompatInspector',
+    ],
+}
 
 # Django Rest Framework
 REST_FRAMEWORK = {
@@ -191,12 +210,17 @@
     "DEFAULT_AUTHENTICATION_CLASSES": (
         "rest_framework.authentication.SessionAuthentication",
         "rest_framework.authentication.TokenAuthentication",
+        "rest_framework.authentication.BasicAuthentication",
+
     ),
     "DEFAULT_FILTER_BACKENDS": (
         "django_filters.rest_framework.DjangoFilterBackend",
     ),
 }
+
+
+
 DATABASES = {
     "default": {
         "ENGINE": "django.db.backends.postgresql",
diff --git a/backend/pkdb_app/statistics.py b/backend/pkdb_app/statistics.py
index 3c9ffdc9..3aeeff5b 100644
--- a/backend/pkdb_app/statistics.py
+++ b/backend/pkdb_app/statistics.py
@@ -1,7 +1,9 @@
 """
 Basic information and statistics about data base content.
 """
+from django.db.models import Count, F, Q
 from pkdb_app.data.models import SubSet, Data
+from pkdb_app.info_nodes.models import Substance
 from rest_framework import serializers
 from rest_framework import viewsets
 from rest_framework.response import Response
@@ -9,13 +11,62 @@
 from pkdb_app._version import __version__
 from pkdb_app.interventions.models import Intervention
 from pkdb_app.outputs.models import Output
-from pkdb_app.studies.documents import StudyDocument
 from pkdb_app.studies.models import Study, Reference
-from pkdb_app.studies.serializers import StudyElasticStatisticsSerializer
-from pkdb_app.studies.views import ElasticStudyViewSet
 from pkdb_app.subjects.models import Group, Individual
 
+'''
+Substance statistics: /statistics/substances/
+{
+    version: 0.9.2,
+    substances: {
+        caffeine: {
+            studies: {
+                count: 20,
+            },
+            interventions: {
+                count: 20,
+            },
+            outputs: {
+                count: 20
+            },
+            timecourses: {
+                count: 40
+            },
+            scatters: {
+                count: 30
+            }
+        },
+        ...
+ } + +} + + +''' +class SubstanceStatisticsViewSet(viewsets.ViewSet): + def list(self,request): + substances_interventions = Substance.objects.annotate(label=F('info_node__label'),intervention_count=Count("intervention", filter=Q(intervention__normed=True))).order_by('info_node__label') + substances_outputs = Substance.objects.annotate(label=F('info_node__label'),output_count=Count("output", filter=Q(output__normed=True))).order_by('info_node__label') + + data = zip(substances_outputs.values("info_node__label"),substances_outputs.values("output_count"), substances_interventions.values("intervention_count")) + + result = [] + for x in data: + res = {} + for v in x: + res = {**res, **v} + result.append(res) + + return Response(result) + + +class SubstanceStatisticsSerializer(serializers.Serializer): + label = serializers.CharField() + intervention_count = serializers.IntegerField(allow_null=True) + output_count = serializers.IntegerField(allow_null=True) + + class Statistics(object): """ Basic database statistics. """ @@ -29,15 +80,14 @@ def __init__(self): self.output_count = Output.objects.filter(normed=True).count() self.output_calculated_count = Output.objects.filter(normed=True, calculated=True).count() self.timecourse_count = SubSet.objects.filter(data__data_type=Data.DataTypes.Timecourse).count() - - self.studies = StudyElasticStatisticsSerializer(StudyDocument().get_queryset()).data + self.scatter_count = SubSet.objects.filter(data__data_type=Data.DataTypes.Scatter).count() class StatisticsViewSet(viewsets.ViewSet): - """ - Get database statistics including version. - """ + """ Endpoint to query PK-DB statistics + Get database statistics consisting of count and version information. + """ def list(self, request): instance = Statistics() serializer = StatisticsSerializer(instance) @@ -60,6 +110,6 @@ def to_representation(self, instance): "output_count", "output_calculated_count", 'timecourse_count', - "studies", + 'scatter_count', ] } diff --git a/backend/pkdb_app/studies/models.py b/backend/pkdb_app/studies/models.py index 24167315..7c8fa792 100644 --- a/backend/pkdb_app/studies/models.py +++ b/backend/pkdb_app/studies/models.py @@ -2,7 +2,7 @@ Django model for Study. """ import datetime -import uuid +from django.utils.timezone import make_aware from django.contrib.postgres.fields import ArrayField from django.db import models @@ -53,7 +53,7 @@ class Reference(models.Model): This is the main class describing the publication or reference which describes the study. In most cases this is a published paper, but could be a thesis or unpublished. """ - sid = models.CharField(max_length=CHAR_MAX_LENGTH, unique=True, validators=[alphanumeric]) + sid = models.CharField(max_length=CHAR_MAX_LENGTH, unique=True, validators=[alphanumeric], ) pmid = models.CharField(max_length=CHAR_MAX_LENGTH, null=True, validators=[alphanumeric]) # optional name = models.CharField(max_length=CHAR_MAX_LENGTH) doi = models.CharField(max_length=150, null=True) # optional @@ -102,7 +102,7 @@ class Study(Sidable, models.Model): Mainly reported as a single publication. 
""" - sid = models.CharField(max_length=CHAR_MAX_LENGTH, unique=True, validators=[alphanumeric]) + sid = models.CharField(max_length=CHAR_MAX_LENGTH, unique=True, validators=[alphanumeric], help_text="Study Identifer") date = models.DateField(default=datetime.date.today) name = models.CharField(max_length=CHAR_MAX_LENGTH, unique=True) access = models.CharField(max_length=CHAR_MAX_LENGTH, choices=STUDY_ACCESS_CHOICES) @@ -112,7 +112,7 @@ class Study(Sidable, models.Model): ) licence = models.CharField(max_length=CHAR_MAX_LENGTH, null=True, choices=STUDY_LICENCE_CHOICES) creator = models.ForeignKey( - User, related_name="creator_of_studies", on_delete=models.CASCADE, null=True + User, related_name="creator_of_studies", on_delete=models.CASCADE ) curators = models.ManyToManyField( User, related_name="curator_of_studies", through=Rating @@ -295,9 +295,17 @@ def delete(self, *args, **kwargs): super().delete(*args, **kwargs) +def expire(): + expire_datetime = datetime.datetime.now() + datetime.timedelta(days=1) + return make_aware(expire_datetime) -class Query(models.Model): +# FIXME: rename to something what it is (FilterQuery, IdCollection ?) +class IdCollection(models.Model): + """ + DOCUMENT ME + """ + class Recourses(models.TextChoices): """ Recourse Types""" Studies = 'studies', _('studies') @@ -305,12 +313,15 @@ class Recourses(models.TextChoices): Individuals = 'individuals', _('individuals') Interventions = 'interventions', _('interventions') Outputs = 'outputs', _('outputs') + Scatter = 'scatter', _('scatter') + Timecourses = 'timecourses', _('timecourses') - resource = models.CharField(choices=Recourses.choices, max_length=CHAR_MAX_LENGTH) - hash = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) + resource = models.CharField(choices=Recourses.choices, max_length=CHAR_MAX_LENGTH) + uuid = models.UUIDField(null=False, blank=False, editable=False) ids = ArrayField(models.IntegerField(), null=True, blank=True) + expire = models.DateTimeField(default=expire(), blank=True, editable=False) + + class Meta: + unique_together = ['uuid', 'resource'] -#class IdMulti(models.Model): -# query = models.ForeignKey(Query, related_name="ids",on_delete=models.CASCADE, null=False) -# value = models.IntegerField(primary_key=False) \ No newline at end of file diff --git a/backend/pkdb_app/studies/serializers.py b/backend/pkdb_app/studies/serializers.py index 4cb7974f..88c176de 100644 --- a/backend/pkdb_app/studies/serializers.py +++ b/backend/pkdb_app/studies/serializers.py @@ -3,6 +3,7 @@ """ from collections import OrderedDict +from drf_yasg.utils import swagger_auto_schema from pkdb_app.data.models import DataSet from pkdb_app.data.serializers import DataSetSerializer, DataSetElasticSmallSerializer from rest_framework import serializers @@ -18,13 +19,13 @@ DescriptionElasticSerializer from ..interventions.models import DataFile, InterventionSet from ..interventions.serializers import InterventionSetSerializer, InterventionSetElasticSmallSerializer -from ..serializers import WrongKeyValidationSerializer, SidSerializer, StudySmallElasticSerializer, SidLabelSerializer +from ..serializers import WrongKeyValidationSerializer, SidSerializer, StudySmallElasticSerializer, SidNameLabelSerializer from ..subjects.models import GroupSet, IndividualSet from ..subjects.serializers import GroupSetSerializer, IndividualSetSerializer, DataFileElasticSerializer, \ GroupSetElasticSmallSerializer, IndividualSetElasticSmallSerializer from ..users.models import User from ..users.serializers import 
-from ..utils import update_or_create_multiple, create_multiple, list_duplicates, _validate_requried_key, \
+from ..utils import update_or_create_multiple, create_multiple, list_duplicates, _validate_required_key, \
     _validate_not_allowed_key
 
@@ -43,11 +44,16 @@ def to_internal_value(self, data):
         self.validate_wrong_keys(data)
         return super().to_internal_value(data)
 
+    def validate(self, attrs):
+        """Validate author."""
+        if attrs.get("first_name") == "Max" and attrs.get("last_name").startswith("Musterman"):
+            raise serializers.ValidationError("Replace 'Max Mustermann' with the correct authors in reference.json.")
+        return super().validate(attrs)
+
 
 class ReferenceSerializer(WrongKeyValidationSerializer):
     authors = AuthorSerializer(many=True, read_only=False)
 
-
     class Meta:
         model = Reference
         fields = (
@@ -61,6 +67,12 @@ class Meta:
             "date",
             "authors",
         )
+        extra_kwargs = {
+            "name": {"error_messages": {"required": "add name to reference.json"}},
+            "pmid": {"error_messages": {"required": "add pmid to reference.json"}},
+            "sid": {"error_messages": {"required": "add sid to reference.json"}},
+
+        }
 
     def create(self, validated_data):
         authors_data = validated_data.pop("authors", [])
@@ -81,6 +93,16 @@ def to_internal_value(self, data):
         self.validate_wrong_keys(data)
         return super().to_internal_value(data)
 
+    def validate(self, attrs):
+        """Validate reference information."""
+        if attrs.get("journal").startswith("Add your title"):
+            raise serializers.ValidationError("Add a journal to reference.json.")
+        if attrs.get("title").startswith("Add your title"):
+            raise serializers.ValidationError("Add a title to reference.json.")
+        if attrs.get("date") == "1000-10-10":
+            raise serializers.ValidationError("Replace '1000-10-10' with the correct date in reference.json.")
+        return super().validate(attrs)
+
 
 class CuratorRatingSerializer(serializers.ModelSerializer):
     rating = serializers.FloatField(min_value=0, max_value=5)
@@ -107,8 +129,7 @@ class StudySerializer(SidSerializer):
         queryset=Reference.objects.all(), required=True, allow_null=False
     )
     groupset = GroupSetSerializer(read_only=False, required=False, allow_null=True)
-    curators = CuratorRatingSerializer(many=True, required=False,
-                                       allow_null=True)
+    curators = CuratorRatingSerializer(many=True)
     collaborators = utils.SlugRelatedField(
         queryset=User.objects.all(),
         slug_field="username",
@@ -119,8 +140,6 @@ class StudySerializer(SidSerializer):
     creator = utils.SlugRelatedField(
         queryset=User.objects.all(),
         slug_field="username",
-        required=False,
-        allow_null=True,
     )
     descriptions = DescriptionSerializer(
         many=True, read_only=False, required=False, allow_null=True
@@ -174,7 +193,12 @@ def to_internal_value(self, data):
             data["creator"] = self.get_or_val_error(User, username=creator)
 
         # curators to internal
-        if "curators" in data:
+        if len(data.get("curators", [])) == 0:
+            raise serializers.ValidationError(
+                {"curators": "At least one curator is required"}
+            )
+        else:
             ratings = []
             for curator_and_rating in data.get("curators", []):
                 rating_dict = {}
@@ -348,8 +372,9 @@ def create_relations(self, study, related):
 
         if "curators" in related:
-            study.ratings.all().delete()
+            if related["curators"]:
+                study.ratings.all().delete()
             for curator in related["curators"]:
                 curator["study"] = study
                 Rating.objects.create(**curator)
@@ -384,7 +409,7 @@ def create_relations(self, study, related):
 
     def validate(self, attrs):
         if str(attrs.get("sid")).startswith("PKDB"):
-            _validate_requried_key(attrs, "date", extra_message="For a study with a '^PKDB\d+$' identifier "
+            _validate_required_key(attrs, "date", extra_message="For a study with a '^PKDB\d+$' identifier "
                                                                 "the date must be set in the study.json.")
         else:
             if attrs.get("date", None) is not None:
@@ -393,7 +418,7 @@ def validate(self, attrs):
 
         if "curators" in attrs and "creator" in attrs:
             if attrs["creator"] not in [curator["user"] for curator in attrs["curators"]]:
-                error_json = {"creator": "Creator must be in curator."}
+                error_json = {"curators": "Creator must be in curators."}
                 raise serializers.ValidationError(error_json)
 
         return super().validate(attrs)
@@ -472,26 +497,34 @@ class Meta:
             "output_calculated_count",
 
             "creator",
+            "curators",
             "substances",
         ]
         read_only_fields = fields
 
-
+@swagger_auto_schema(tags=['Studies'])
 class StudyElasticSerializer(serializers.ModelSerializer):
+    """
+    Study serializer.
+    """
+
     pk = serializers.CharField()
+    sid = serializers.CharField(help_text="This is the string id.")
 
     reference = ReferenceElasticSerializer()
 
-    name = serializers.CharField()
-    licence = serializers.CharField()
+    name = serializers.CharField(help_text="Name of the study. The convention is to deduce the name from the "
+                                           "reference with the following pattern "
+                                           "'[Author][PublicationYear][A-Z(optional)]'." )
+    licence = serializers.CharField(help_text="Licence",)
     access = serializers.CharField()
 
     curators = CuratorRatingElasticSerializer(many=True, )
     creator = UserElasticSerializer()
     collaborators = UserElasticSerializer(many=True, )
 
-    substances = SidLabelSerializer(many=True, )
+    substances = SidNameLabelSerializer(many=True, )
 
     files = serializers.SerializerMethodField()  # DataFileElasticSerializer(many=True, )
diff --git a/backend/pkdb_app/studies/views.py b/backend/pkdb_app/studies/views.py
index ea7bd4fb..d0e3391b 100644
--- a/backend/pkdb_app/studies/views.py
+++ b/backend/pkdb_app/studies/views.py
@@ -1,7 +1,9 @@
 import tempfile
+import uuid
 import zipfile
 
 from collections import namedtuple
+from datetime import datetime
 from io import StringIO
 from typing import Dict
 import time
@@ -13,23 +15,27 @@
 from django.core.exceptions import ObjectDoesNotExist
 from django.db.models import Q as DQ, Prefetch
 from django.http import JsonResponse, HttpResponse
+from django.utils.decorators import method_decorator
 from django.views.decorators.csrf import csrf_exempt
 from django_elasticsearch_dsl_drf.constants import LOOKUP_QUERY_IN
 from django_elasticsearch_dsl_drf.filter_backends import FilteringFilterBackend, \
-    OrderingFilterBackend, IdsFilterBackend, MultiMatchSearchFilterBackend, SearchFilterBackend
+    OrderingFilterBackend, IdsFilterBackend, MultiMatchSearchFilterBackend, CompoundSearchFilterBackend
 from django_elasticsearch_dsl_drf.viewsets import BaseDocumentViewSet, DocumentViewSet
+from drf_yasg import openapi
+from drf_yasg.utils import swagger_auto_schema
 from elasticsearch import helpers
 from elasticsearch_dsl.query import Q
 
 from pkdb_app.data.documents import DataAnalysisDocument, SubSetDocument
-from pkdb_app.data.serializers import DataAnalysisSerializer
+from pkdb_app.data.models import SubSet, Data, DataPoint
 from pkdb_app.data.views import SubSetViewSet, DataAnalysisViewSet
+from pkdb_app.documents import AccessView, UUID_PARAM
 from pkdb_app.interventions.serializers import InterventionElasticSerializerAnalysis
 from pkdb_app.outputs.serializers import OutputInterventionSerializer
 from pkdb_app.subjects.serializers import GroupCharacteristicaSerializer, IndividualCharacteristicaSerializer
 from rest_framework.generics import get_object_or_404
 from rest_framework.response import 
Response -from rest_framework import filters, status +from rest_framework import filters, status, serializers from rest_framework import viewsets from rest_framework.parsers import MultiPartParser, FormParser, JSONParser @@ -41,7 +47,7 @@ from pkdb_app.studies.documents import ReferenceDocument, StudyDocument from pkdb_app.subjects.documents import GroupDocument, IndividualDocument, \ GroupCharacteristicaDocument, IndividualCharacteristicaDocument -from pkdb_app.subjects.models import GroupCharacteristica, IndividualCharacteristica +from pkdb_app.subjects.models import GroupCharacteristica, IndividualCharacteristica, Group, Individual from pkdb_app.users.models import PUBLIC from pkdb_app.users.permissions import IsAdminOrCreatorOrCurator, StudyPermission, user_group from rest_framework.views import APIView @@ -59,13 +65,14 @@ from pkdb_app.outputs.models import Output from pkdb_app.interventions.models import Intervention from pkdb_app.outputs.views import ElasticOutputViewSet, OutputInterventionViewSet -from pkdb_app.studies.models import Study, Query, Reference +from pkdb_app.studies.models import Study, IdCollection, Reference from pkdb_app.subjects.views import GroupViewSet, IndividualViewSet, GroupCharacteristicaViewSet, \ IndividualCharacteristicaViewSet class ReferencesViewSet(viewsets.ModelViewSet): """ ReferenceViewSet """ + swagger_schema = None queryset = Reference.objects.all() parser_classes = (JSONParser, MultiPartParser, FormParser) serializer_class = ReferenceSerializer @@ -80,13 +87,15 @@ class ReferencesViewSet(viewsets.ModelViewSet): "pmid", "title", "abstract", - "journal") + "journal" + ) search_fields = filter_fields permission_classes = (IsAdminOrCreatorOrCurator,) class StudyViewSet(viewsets.ModelViewSet): """ StudyViewSet """ + swagger_schema = None queryset = Study.objects.all() serializer_class = StudySerializer filter_backends = ( @@ -98,7 +107,6 @@ class StudyViewSet(viewsets.ModelViewSet): lookup_field = "sid" permission_classes = (StudyPermission,) - @staticmethod def filter_on_permissions(user, queryset): @@ -216,8 +224,13 @@ def related_elastic_dict(study): docs_dict[ReferenceDocument] = study.reference return docs_dict +@method_decorator(name='list', decorator=swagger_auto_schema(manual_parameters=[UUID_PARAM])) +class ElasticStudyViewSet(BaseDocumentViewSet, APIView): + """ Endpoint to query studies -class ElasticStudyViewSet(BaseDocumentViewSet): + The studies endpoint gives access to the studies data. A study is a container of consistent + pharmacokinetics data. This container mostly contains data reported in a single scientific paper. 
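+
+    A typical workflow is to create a result set via the filter endpoint and to pass
+    the returned uuid to this endpoint (a sketch; the uuid value is a placeholder):
+    ```
+    /studies/?uuid=6a15733e-0659-4224-985a-9c71120911d5
+    ```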
+    """
     document_uid_field = "sid__raw"
     lookup_field = "sid"
     document = StudyDocument
@@ -243,38 +256,58 @@ class ElasticStudyViewSet(BaseDocumentViewSet):
         'files',
         'substances.sid'
         'substances.label'
-
     )
     multi_match_search_fields = {field: {"boost": 1} for field in search_fields}
     multi_match_options = {
         'operator': 'and'
     }
-
     filter_fields = {
         'sid': 'sid.raw',
-        'name': {'field': 'name.raw',
-                 'lookups':[ LOOKUP_QUERY_IN, ],},
-        'reference_name': {'field': 'reference.name.raw',
-                           'lookups': [LOOKUP_QUERY_IN, ], },
-        'creator': {'field': 'creator.username.raw',
-                    'lookups': [LOOKUP_QUERY_IN, ], },
-        'curators': {'field': 'curators.username.raw',
-                     'lookups': [LOOKUP_QUERY_IN, ], },
+        'name': {
+            'field': 'name.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'reference_name': {
+            'field': 'reference.name.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'creator': {
+            'field': 'creator.username.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'curators': {
+            'field': 'curators.username.raw',
+            'lookups': [LOOKUP_QUERY_IN]
+        },
         'collaborator': 'collaborators.name.raw',
-        'licence': 'licence.raw',
-        'access': 'access.raw',
+        'licence': {
+            'field': 'licence.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
+        'access': {
+            'field': 'access.raw',
+            'lookups': [LOOKUP_QUERY_IN],
+        },
         'substance': 'substances.name.raw',
     }
     ordering_fields = {
         'sid': 'sid',
    }
 
+    @swagger_auto_schema(responses={200: StudyElasticSerializer(many=False)})
+    def get_object(self):
+        """Retrieve a single study by sid."""
+        return super().get_object()
+
+    @swagger_auto_schema(responses={200: StudyElasticSerializer(many=True)}, manual_parameters=[UUID_PARAM])
     def get_queryset(self):
+        """Filter studies by user permissions and an optional uuid query."""
         group = user_group(self.request.user)
-        _hash = self.request.query_params.get("hash", [])
-        if _hash:
-            ids = list(get_object_or_404(Query,hash=_hash).ids)
+        _uuid = self.request.query_params.get("uuid", [])
+        if _uuid:
+            ids = list(get_object_or_404(IdCollection, uuid=_uuid, resource=self.document.Index.name).ids)
+
             _qs_kwargs = {'values': ids}
 
             self.search = self.search.query(
@@ -286,7 +319,6 @@ def get_queryset(self):
             return self.search.query()
 
         elif group == "basic":
-
             qs = self.search.query(
                 Q('match', access__raw=PUBLIC) |
                 Q('match', creator__username__raw=self.request.user.username) |
@@ -294,39 +326,40 @@ def get_queryset(self):
                 Q('match', collaborators__username__raw=self.request.user.username)
             )
-
             return qs
 
         elif group == "anonymous":
-
             qs = self.search.query(
                 'match', **{"access__raw": PUBLIC}
             )
-
             return qs
 
 
 class ElasticReferenceViewSet(BaseDocumentViewSet):
     """Read/query/search references.
""" + swagger_schema = None document_uid_field = "sid__raw" lookup_field = "sid" document = ReferenceDocument pagination_class = CustomPagination permission_classes = (IsAdminOrCreatorOrCurator,) serializer_class = ReferenceElasticSerializer - filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, SearchFilterBackend, MultiMatchSearchFilterBackend] - search_fields = [ + filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, CompoundSearchFilterBackend, MultiMatchSearchFilterBackend] + search_fields = ( 'sid', 'pmid', 'name', 'title', - 'abstract',] + 'abstract', + ) multi_match_search_fields = {field: {"boost": 1} for field in search_fields} multi_match_options = { 'operator': 'and' } - filter_fields = {'name': 'name.raw', } + filter_fields = { + 'name': 'name.raw' + } ordering_fields = { 'sid': 'sid', "pk": 'pk', @@ -343,13 +376,10 @@ class ElasticReferenceViewSet(BaseDocumentViewSet): class PKData(object): - """ - PKData represents a consistent set of pharmacokinetical data. - - returns a concise PKData - """ + """ PKData represents a consistent set of pharmacokinetic data. """ def __init__(self, request, + concise: bool = True, interventions_query: dict = None, groups_query: dict = None, individuals_query: dict = None, @@ -358,6 +388,7 @@ def __init__(self, ): # --- Init --- + time_start = time.time() self.request = request @@ -365,22 +396,25 @@ def __init__(self, time_init = time.time() - self.outputs = Output.objects.select_related("study__sid").prefetch_related( + self.outputs = Output.objects.filter(normed=True).select_related("study__sid").prefetch_related( Prefetch( 'interventions', queryset=Intervention.objects.only('id'))).only( - 'group_id', 'individual_id', "id", "interventions__id", "timecourse__id", "output_type") + 'group_id', 'individual_id', "id", "interventions__id", "subset__id", "output_type") + # --- Elastic --- if studies_query: self.studies_query = studies_query studies_pks = self.study_pks() time_elastic_studies = time.time() - self.outputs = self.outputs.filter(study__sid__in=studies_pks) + self.outputs = self.outputs.filter(study_id__in=studies_pks) else: - self.outputs = self.outputs.filter(study_id__in=Subquery(StudyViewSet.filter_on_permissions(request.user,Study.objects).values_list("id", flat=True))) + studies_pks = StudyViewSet.filter_on_permissions(request.user,Study.objects).values_list("id", flat=True) + self.outputs = self.outputs.filter(study_id__in=Subquery(studies_pks)) + self.studies = Study.objects.filter(id__in=studies_pks) if groups_query or individuals_query: self.groups_query = groups_query @@ -390,68 +424,96 @@ def __init__(self, self.individuals_query = individuals_query individuals_pks = self.individual_pks() time_elastic_individuals = time.time() + if concise: + self.outputs = self.outputs.filter( + DQ(group_id__in=groups_pks) | DQ(individual_id__in=individuals_pks)) + else: + self.studies = self.studies.filter(DQ(groups__id__in=groups_pks) | DQ(individuals__id__in=individuals_pks)) - self.outputs = self.outputs.filter( - DQ(group_id__in=groups_pks) | DQ(individual_id__in=individuals_pks)) if interventions_query: self.interventions_query = {"normed": "true", **interventions_query} interventions_pks = self.intervention_pks() time_elastic_interventions = time.time() - self.outputs = self.outputs.filter(interventions__id__in=interventions_pks) + if concise: + self.outputs = self.outputs.filter(interventions__id__in=interventions_pks) + else: + self.studies = 
self.studies.filter(interventions__id__in=interventions_pks) if outputs_query: self.outputs_query = {"normed": "true", **outputs_query} outputs_pks = self.output_pks() time_elastic_outputs = time.time() - self.outputs = self.outputs.filter(id__in=outputs_pks) - - time_elastic = time.time() - - studies = set() - groups = set() - individuals = set() - interventions = set() - outputs =set() - timecourses =set() - - time_loop_start = time.time() - - for output in self.outputs.filter(normed=True).values("study_id","group_id", "individual_id", "id", "interventions__id", "timecourse__id", "output_type"): - studies.add(output["study_id"]) - if output["group_id"]: - groups.add(output["group_id"]) + if concise: + self.outputs = self.outputs.filter(id__in=outputs_pks) else: - individuals.add(output["individual_id"]) - outputs.add(output["id"]) - - if (output["timecourse__id"] is not None) & (output["output_type"] == Output.OutputTypes.Timecourse): - timecourses.add(output["timecourse__id"]) - - if output["interventions__id"]: - interventions.add(output["interventions__id"]) - + self.studies = self.studies.filter(outputs__id__in=outputs_pks) - time_loop_end = time.time() - + time_elastic = time.time() + time_loop_start = time.time() + if concise: + studies = set() + groups = set() + individuals = set() + interventions = set() + outputs = set() + timecourses = set() + scatters = set() + + for output in self.outputs.values("study_id","group_id", "individual_id", "id", "interventions__id", "subset__id", "output_type"): + studies.add(output["study_id"]) + if output["group_id"]: + groups.add(output["group_id"]) + else: + individuals.add(output["individual_id"]) + outputs.add(output["id"]) + + if output["interventions__id"]: + interventions.add(output["interventions__id"]) + + if (output["subset__id"] is not None) & (output["output_type"] == Output.OutputTypes.Timecourse): + timecourses.add(output["subset__id"]) + + if (output["subset__id"] is not None) & (output["output_type"] == Output.OutputTypes.Array): + scatters.add(output["subset__id"]) + + self.ids = { + "studies": list(studies), + "groups": list(groups), + "individuals": list(individuals), + "interventions": list(interventions), + "outputs": list(outputs), + "timecourses": list(timecourses), + "scatters": list(scatters), + } - self.ids = { - "studies": list(studies), - "groups": list(groups), - "individuals": list(individuals), - "interventions": list(interventions), - "outputs": list(outputs), - "timecourses": list(timecourses), - } + else: + study_pks = self.studies.distinct().values_list("pk", flat=True) + + self.interventions = Intervention.objects.filter(study_id__in=study_pks, normed=True) + self.groups = Group.objects.filter(study_id__in=study_pks) + self.individuals = Individual.objects.filter(study_id__in=study_pks) + self.outputs = Output.objects.filter(study_id__in=study_pks, normed=True) + self.subset = SubSet.objects.filter(study_id__in=study_pks) + + self.ids = { + "studies": list(study_pks), + "groups": list(self.groups.values_list("pk", flat=True)), + "individuals": list(self.individuals.values_list("pk", flat=True)), + "interventions": list(self.interventions.values_list("pk", flat=True)), + "outputs": list(self.outputs.values_list("pk", flat=True)), + "timecourses": list(self.subset.filter(data__data_type=Data.DataTypes.Timecourse).values_list("pk", flat=True)), + "scatters": list(self.subset.filter(data__data_type=Data.DataTypes.Scatter).values_list("pk", flat=True)), + } + time_loop_end = time.time() time_django = time.time() - 
print("-" * 80) for q in connection.queries: print("db query:", q["time"]) @@ -467,7 +529,6 @@ def empty_get(self): """create an get request with no parameters in the url.""" return RequestFactory().get("/").GET.copy() - def intervention_pks(self): return self._pks(view_class=ElasticInterventionViewSet, query_dict=self.interventions_query) @@ -484,7 +545,7 @@ def subset_pks(self): return self._pks(view_class=SubSetViewSet, query_dict=self.subsets_query) def study_pks(self): - return self._pks(view_class=ElasticStudyViewSet, query_dict=self.studies_query, pk_field="sid") + return self._pks(view_class=ElasticStudyViewSet, query_dict=self.studies_query, pk_field="pk") def set_request_get(self, query_dict:Dict): """ @@ -500,7 +561,6 @@ def set_request_get(self, query_dict:Dict): def _pks(self, view_class: DocumentViewSet, query_dict: Dict, pk_field: str="pk", scan_size=10000): """ query elastic search for pks. - """ self.set_request_get(query_dict) view = view_class(request=self.request) @@ -512,21 +572,61 @@ def _pks(self, view_class: DocumentViewSet, query_dict: Dict, pk_field: str="pk" def data_by_query_dict(self,query_dict, viewset, serializer): view = viewset(request=self.request) queryset = view.filter_queryset(view.get_queryset()) - queryset = queryset.filter("terms",**query_dict) - return serializer(queryset.params(size=10000).scan(), many=True).data + queryset = queryset.filter("terms",**query_dict).source(serializer.Meta.fields) + return [hit.to_dict() for hit in queryset.params(size=10000).scan()] + +class ResponseSerializer(serializers.Serializer): + """Documentation of response schema.""" + uuid = serializers.UUIDField( + required=True, + allow_null=False, + help_text="The resulting queries can be accessed by adding this uuid as " + "an argument to the endpoints: /studies/, /groups/, /individuals/, /outputs/, /timecourses/, /subsets/." + ) + studies = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting studies.") + groups = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting groups.") + individuals = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting individuals.") + outputs = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting outputs.") + timecourses = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting timecourses.") + scatters = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting scatters.") class PKDataView(APIView): + """Endpoint to filter and query data. + + The filter endpoint is the main endpoint for complex queries, such as searches and filtering. A filter query returns + a unique id corresponding to the query, which allows to access the complete set of tables + (studies, groups, individuals and interventions, outputs, timecourses, and scatters) for the search. + In addition an overview of the counts in the tables is provided. + ``` + { + "uuid": "6a15733e-0659-4224-985a-9c71120911d5", + "studies": 430, + "groups": 887, + "individuals": 5748, + "interventions": 1291, + "outputs": 70636, + "timecourses": 2946, + "scatters": 37 + } + ``` + Two main parameters control the output of the filter query: + * `download`: which allows to download the results as zip archive + * `concise`: switching between concise and non-concise data + + The filter endpoint provides the option of filtering on any of the tables mentioned + early. 
+
+
+class ResponseSerializer(serializers.Serializer):
+    """Documentation of the response schema."""
+    uuid = serializers.UUIDField(
+        required=True,
+        allow_null=False,
+        help_text="The resulting queries can be accessed by adding this uuid as "
+                  "an argument to the endpoints: /studies/, /groups/, /individuals/, /interventions/, /outputs/, /timecourses/, /subsets/."
+    )
+    studies = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting studies.")
+    groups = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting groups.")
+    individuals = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting individuals.")
+    interventions = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting interventions.")
+    outputs = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting outputs.")
+    timecourses = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting timecourses.")
+    scatters = serializers.IntegerField(required=True, allow_null=False, help_text="Number of resulting scatters.")


 class PKDataView(APIView):
+    """Endpoint to filter and query data.
+
+    The filter endpoint is the main endpoint for complex queries, such as searches and
+    filtering. A filter query returns a unique id corresponding to the query, which
+    allows access to the complete set of tables (studies, groups, individuals,
+    interventions, outputs, timecourses, and scatters) for the search.
+    In addition, an overview of the counts in the tables is provided.
+    ```
+    {
+        "uuid": "6a15733e-0659-4224-985a-9c71120911d5",
+        "studies": 430,
+        "groups": 887,
+        "individuals": 5748,
+        "interventions": 1291,
+        "outputs": 70636,
+        "timecourses": 2946,
+        "scatters": 37
+    }
+    ```
+    Two main parameters control the output of the filter query:
+    * `download`: allows downloading the results as a zip archive
+    * `concise`: switches between concise and non-concise data
+
+    The filter endpoint provides the option of filtering on any of the tables mentioned
+    above. Arguments can be provided with the prefixes `['studies__', 'groups__',
+    'individuals__', 'interventions__', 'outputs__', 'subsets__']` for the respective
+    tables.
+    """
     EXTRA = {
         "study": "studies__",
-        "intervention": "interventions__",
         "group": "groups__",
         "individual": "individuals__",
+        "intervention": "interventions__",
         "output": "outputs__",
         "subsets": "subsets__",
-
     }

     def _get_param(self, key, request):
@@ -537,12 +637,55 @@ def _get_param(self, key, request):
             param[key_request[string_len:]] = value
         return param

+    # additional parameters
+    download__param = openapi.Parameter(
+        'download',
+        openapi.IN_QUERY,
+        description="The download parameter allows downloading the results of the filter query. "
+                    "If set to True, a zip archive is returned containing '.csv' files for all tables.",
+        type=openapi.TYPE_BOOLEAN,
+        default=False
+    )
+
+    concise__param = openapi.Parameter(
+        'concise',
+        openapi.IN_QUERY,
+        description="The concise parameter switches between returning the most concise set "
+                    "of instances in each table and returning the studies which meet the "
+                    "filter criteria together with their complete content (all related "
+                    "tables). E.g. filtering for “thalf -- elimination half life” with "
+                    "“concise:true” will return all studies containing “thalf” outputs, "
+                    "all interventions which have been applied before measuring thalf, "
+                    "and all groups and individuals for which thalf has been measured. "
+                    "Filtering for “thalf -- elimination half life” with “concise:false” "
+                    "will return all studies containing “thalf” outputs, all interventions "
+                    "which have been applied in these studies, and all groups and individuals "
+                    "in these studies.",
+        type=openapi.TYPE_BOOLEAN,
+        default=True
+    )
+
+    @swagger_auto_schema(
+        manual_parameters=[concise__param, download__param],
+        responses={
+            200: openapi.Response(
+                description="Returns a 'uuid' and the number of entries for each table. "
+                            "This 'uuid' can be used as an argument in the endpoints of the "
+                            "tables (studies, groups, individuals, interventions, outputs, subsets). "
+                            "For the subsets endpoint the 'data_type' ('timecourse' or 'scatter') "
+                            "has to be provided.",
+                schema=ResponseSerializer)
+        }
+    )
     def get(self, request, *args, **kw):
         time_start_request = time.time()
         request.GET = request.GET.copy()
         pkdata = PKData(
             request=request,
+            concise="false" != request.GET.get("concise", True),
             studies_query=self._get_param("study", request),
             groups_query=self._get_param("group", request),
             individuals_query=self._get_param("individual", request),
@@ -550,61 +693,93 @@
             outputs_query=self._get_param("output", request),
         )
-
-
-
-
-
-
         time_pkdata = time.time()
-        resources = {}
+        # calculation of uuid
         queries = []
+        delete_queries = IdCollection.objects.filter(expire__lte=datetime.now())
+        delete_queries.delete()
+        _uuid = uuid.uuid4()
+        resources = {"uuid": _uuid}
         for resource, ids in pkdata.ids.items():
-            query = Query(resource=resource, ids=ids)
+            query = IdCollection(resource=resource, ids=ids, uuid=_uuid)
             queries.append(query)
-            resources[resource] = {"hash": query.hash, "count": len(ids)}
-        Query.objects.bulk_create(queries)
+            resources[resource] = len(ids)
+        IdCollection.objects.bulk_create(queries)

-        time_hash = time.time()
+        time_uuid = time.time()

+        if request.GET.get("download") == "true":
-        if request.GET.get("download"):
-            Sheet = namedtuple("Sheet", ["sheet_name", "query_dict", "viewset", "serializer"])
-            table_content = {
-                "studies": Sheet("Studies", {"pk":pkdata.ids["studies"]}, ElasticStudyViewSet, StudyAnalysisSerializer),
-                "groups": Sheet("Groups", {"group_pk":pkdata.ids["groups"]}, GroupCharacteristicaViewSet, GroupCharacteristicaSerializer),
-                "individuals": Sheet("Individuals", {"individual_pk": pkdata.ids["individuals"]}, IndividualCharacteristicaViewSet,IndividualCharacteristicaSerializer),
-                "interventions": Sheet("Interventions",{"pk":pkdata.ids["interventions"]} ,ElasticInterventionAnalysisViewSet, InterventionElasticSerializerAnalysis),
-                "outputs": Sheet("Outputs",{"output_pk":pkdata.ids["outputs"]}, OutputInterventionViewSet, OutputInterventionSerializer),
-                "timecourses": Sheet("Timecourses", {"subset_pk": pkdata.ids["timecourses"]}, DataAnalysisViewSet,DataAnalysisSerializer),
+            def serialize_scatter(ids):
+                scatter_subsets = SubSet.objects.filter(id__in=ids).prefetch_related('data_points')
+                return [t.scatter_representation() for t in scatter_subsets]
+
+            Sheet = namedtuple("Sheet", ["sheet_name", "query_dict", "viewset", "serializer", "function"])
+            table_content = {
+                "studies": Sheet("Studies", {"pk": pkdata.ids["studies"]}, ElasticStudyViewSet, StudyAnalysisSerializer, None),
+                "groups": Sheet("Groups", {"group_pk": pkdata.ids["groups"]}, GroupCharacteristicaViewSet, GroupCharacteristicaSerializer, None),
+                "individuals": Sheet("Individuals", {"individual_pk": pkdata.ids["individuals"]}, IndividualCharacteristicaViewSet, IndividualCharacteristicaSerializer, None),
+                "interventions": Sheet("Interventions", {"pk": pkdata.ids["interventions"]}, ElasticInterventionAnalysisViewSet, InterventionElasticSerializerAnalysis, None),
+                "outputs": Sheet("Outputs", {"output_pk": pkdata.ids["outputs"]}, OutputInterventionViewSet, OutputInterventionSerializer, None),
+                #"timecourses": Sheet("Timecourses", {"subset_pk": pkdata.ids["timecourses"]}, None, None, serialize_timecourses),
+                "scatters": Sheet("Scatter", {"subset_pk": pkdata.ids["scatters"]}, None, None, serialize_scatter),
             }
+
+
         with tempfile.SpooledTemporaryFile() as tmp:
             with zipfile.ZipFile(tmp, 'w', zipfile.ZIP_DEFLATED) as archive:
+                download_times = {}
+                for key, sheet in table_content.items():
-                    print(key)
+                    download_time_start = time.time()
+
                     string_buffer = StringIO()
-                    data = pkdata.data_by_query_dict(sheet.query_dict, sheet.viewset, sheet.serializer)
-                    pd.DataFrame(data).to_csv(string_buffer)
+                    if sheet.function:
+                        df = pd.DataFrame(sheet.function(sheet.query_dict["subset_pk"]))
+                    else:
+                        data = pkdata.data_by_query_dict(sheet.query_dict, sheet.viewset, sheet.serializer)
+                        df = pd.DataFrame(data)
+
+                    def sorted_tuple(v):
+                        return sorted(tuple(v))
+
+                    if key == "outputs":
+                        # use a separate buffer for the timecourse sheet, otherwise the
+                        # outputs csv below would get the timecourse csv prepended
+                        timecourse_buffer = StringIO()
+                        timecourse_df = df[df["output_type"] == Output.OutputTypes.Timecourse]
+                        timecourse_df = pd.pivot_table(data=timecourse_df, index=["output_pk"], aggfunc=sorted_tuple).apply(SubSet.to_list)
+                        timecourse_df = pd.pivot_table(data=timecourse_df, index=["label", "study_name"], aggfunc=tuple).apply(SubSet.to_list)
+                        timecourse_df.to_csv(timecourse_buffer)
+                        archive.writestr(f'timecourse.csv', timecourse_buffer.getvalue())
+
+                    df.to_csv(string_buffer)
                     archive.writestr(f'{key}.csv', string_buffer.getvalue())
+                    download_times[key] = time.time() - download_time_start
+                archive.write('download_extra/README.md', 'README.md')
+                archive.write('download_extra/TERMS_OF_USE.md', 'TERMS_OF_USE.md')

             tmp.seek(0)
             resp = HttpResponse(tmp.read(), content_type='application/x-zip-compressed')
             resp['Content-Disposition'] = "attachment; filename=%s" % "pkdata.zip"
-            return resp
+            print("-" * 80)
+            print("File Creation")
+            for k, v in download_times.items():
+                print(k, v)
+            return resp

         response = Response(resources, status=status.HTTP_200_OK)
         time_response = time.time()
         print("-" * 80)
         print("pkdata:", time_pkdata - time_start_request)
-        print("hash:", time_hash - time_pkdata)
+        print("uuid:", time_uuid - time_pkdata)
         print("-" * 80)
         print("total:", time_response - time_start_request)
         print("-" * 80)
-
-
         return response
diff --git a/backend/pkdb_app/subjects/documents.py b/backend/pkdb_app/subjects/documents.py
index bcc64814..dca8aa30 100644
--- a/backend/pkdb_app/subjects/documents.py
+++ b/backend/pkdb_app/subjects/documents.py
@@ -29,6 +29,7 @@
         'cv': fields.FloatField(),
         'unit': string_field('unit'),
         'count': fields.IntegerField('count'),
+        'group_count': fields.IntegerField('group_count')
     },
     multi=True
 )
diff --git a/backend/pkdb_app/subjects/serializers.py b/backend/pkdb_app/subjects/serializers.py
index b224441c..96d2d9ed 100644
--- a/backend/pkdb_app/subjects/serializers.py
+++ b/backend/pkdb_app/subjects/serializers.py
@@ -1,10 +1,11 @@
 from django.core.exceptions import ObjectDoesNotExist, MultipleObjectsReturned
 from django.db.models import Q
+from drf_yasg.utils import swagger_serializer_method
 from rest_framework import serializers

 from pkdb_app.behaviours import map_field, MEASUREMENTTYPE_FIELDS, EX_MEASUREMENTTYPE_FIELDS
 from pkdb_app.info_nodes.serializers import MeasurementTypeableSerializer
-from pkdb_app.serializers import StudySmallElasticSerializer, SidLabelSerializer
+from pkdb_app.serializers import StudySmallElasticSerializer, SidNameLabelSerializer
 from .models import (
     Group,
     GroupSet,
@@ -19,7 +20,7 @@
 from ..comments.serializers import DescriptionSerializer, CommentSerializer, DescriptionElasticSerializer, \
     CommentElasticSerializer
 from ..serializers import WrongKeyValidationSerializer, ExSerializer, ReadSerializer
-from ..utils import list_of_pk, _validate_requried_key, create_multiple, _create
+from ..utils import list_of_pk, _validate_required_key, create_multiple, _create

 CHARACTERISTICA_FIELDS = ['count']
 CHARACTERISTICA_MAP_FIELDS = 
map_field(CHARACTERISTICA_FIELDS) @@ -86,7 +87,15 @@ def to_internal_value(self, data): self.validate_wrong_keys(data, additional_fields=CharacteristicaExSerializer.Meta.fields) return super(serializers.ModelSerializer, self).to_internal_value(data) + @staticmethod + def validate_count(count): + if count < 1: + raise serializers.ValidationError(f"count <{count}> has to be greater or equal to 1. ") + return count + + def validate(self, attrs): + try: # perform via dedicated function on categorials for info_node in ['substance', 'measurement_type']: @@ -121,12 +130,8 @@ def to_internal_value(self, data): data = self.retransform_map_fields(data) data = self.retransform_ex_fields(data) self.validate_wrong_keys(data, additional_fields=GroupExSerializer.Meta.fields) + _validate_required_key(data, 'count') - _validate_requried_key(data, 'count') - - for characteristica_single in data.get('characteristica', []): - disabled = ['value'] - self._validate_disabled_data(characteristica_single, disabled) return super(serializers.ModelSerializer, self).to_internal_value(data) @@ -143,6 +148,20 @@ def _validate_required_measurement_type(measurement_type, characteristica): f"on the `all` group.", 'details': characteristica} ) + @staticmethod + def _validate_group_characteristica_count(characteristica, group_count): + if int(characteristica.get("count",group_count)) > int(group_count): + raise serializers.ValidationError( + { + 'characteristica': f"A characteristica count has to be smaller or equal to its group 'count'.", + 'details': { + "characteristica":characteristica, + "group_count": group_count, + } + } + ) + + def validate(self, attrs): ''' validates species information on group with name all @@ -154,6 +173,11 @@ def validate(self, attrs): for measurement_type in ['species', 'healthy', 'sex']: self._validate_required_measurement_type(measurement_type, characteristica) + for characteristica_single in attrs.get('characteristica', []): + disabled = ['value'] + self._validate_disabled_data(characteristica_single, disabled) + self._validate_group_characteristica_count(characteristica_single, attrs.get("count")) + return super().validate(attrs) def to_representation(self, instance): @@ -609,13 +633,14 @@ class CharacteristicaElasticSerializer(serializers.ModelSerializer): sd = serializers.FloatField(allow_null=True) se = serializers.FloatField(allow_null=True) cv = serializers.FloatField(allow_null=True) - measurement_type = SidLabelSerializer() - substance = SidLabelSerializer(allow_null=True) - choice = SidLabelSerializer(allow_null=True) - + measurement_type = SidNameLabelSerializer() + substance = SidNameLabelSerializer(allow_null=True) + choice = SidNameLabelSerializer(allow_null=True) + group_count = serializers.IntegerField(allow_null=True) class Meta: model = Characteristica - fields = ['pk'] + CHARACTERISTICA_FIELDS + MEASUREMENTTYPE_FIELDS + ['normed'] # + ['access','allowed_users'] + fields = ['pk'] + CHARACTERISTICA_FIELDS + MEASUREMENTTYPE_FIELDS + ['group_count']+['normed'] # + ['access','allowed_users'] + read_only_fields = fields # Group related Serializer @@ -654,8 +679,7 @@ class Meta: 'characteristica', ) - # FIXME: Remove this. 
- + @swagger_serializer_method(CharacteristicaElasticSerializer(many=True)) def get_characteristica(self, instance): if instance.characteristica_all_normed: return CharacteristicaElasticSerializer(instance.characteristica_all_normed, many=True, read_only=True).data @@ -687,6 +711,12 @@ class IndividualElasticSerializer(serializers.ModelSerializer): group = GroupSmallElasticSerializer(read_only=True) characteristica = serializers.SerializerMethodField() + @swagger_serializer_method(serializer_or_field=CharacteristicaElasticSerializer) + def get_characteristica(self, instance): + if instance.characteristica_all_normed: + return CharacteristicaElasticSerializer(instance.characteristica_all_normed, many=True, read_only=True).data + return [] + class Meta: model = Individual fields = ( @@ -697,12 +727,6 @@ class Meta: 'characteristica', ) - # FIXME: Remove this. - def get_characteristica(self, instance): - if instance.characteristica_all_normed: - return CharacteristicaElasticSerializer(instance.characteristica_all_normed, many=True, read_only=True).data - return [] - class GroupCharacteristicaSerializer(serializers.ModelSerializer): class Meta: diff --git a/backend/pkdb_app/subjects/views.py b/backend/pkdb_app/subjects/views.py index d66ad88b..4364672f 100644 --- a/backend/pkdb_app/subjects/views.py +++ b/backend/pkdb_app/subjects/views.py @@ -1,18 +1,19 @@ from django_elasticsearch_dsl_drf.constants import LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE -############################################################ -# Elastic Search Views -########################################################### from django_elasticsearch_dsl_drf.filter_backends import ( FilteringFilterBackend, OrderingFilterBackend, IdsFilterBackend, - MultiMatchSearchFilterBackend, SearchFilterBackend) + MultiMatchSearchFilterBackend, CompoundSearchFilterBackend) from rest_framework import viewsets from pkdb_app.documents import AccessView from pkdb_app.pagination import CustomPagination -from pkdb_app.subjects.documents import IndividualDocument, GroupDocument, \ - GroupCharacteristicaDocument, IndividualCharacteristicaDocument +from pkdb_app.subjects.documents import ( + IndividualDocument, + GroupDocument, + GroupCharacteristicaDocument, + IndividualCharacteristicaDocument +) from pkdb_app.subjects.models import DataFile from pkdb_app.subjects.serializers import ( DataFileSerializer, @@ -23,7 +24,7 @@ ) from pkdb_app.users.permissions import StudyPermission -common_subject_fields = { +subject_filter_fields = { 'study': 'study.raw', 'name': 'name.raw', 'choice_sid': { @@ -39,16 +40,21 @@ LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE, ], - }, + }, } + + class GroupViewSet(AccessView): + """ Endpoint to query groups + + The groups endpoint gives access to the groups data. A group is a collection of individuals for which data was + reported collectively. 
+ """ document = GroupDocument serializer_class = GroupElasticSerializer lookup_field = 'id' filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, MultiMatchSearchFilterBackend] pagination_class = CustomPagination - - # Define search fields search_fields = ( 'characteristica_all_normed.measurement_type.label', 'characteristica_all_normed.choice.label', @@ -56,23 +62,17 @@ class GroupViewSet(AccessView): 'name', 'study.name', 'study.sid', - ) multi_match_search_fields = {field: {"boost": 1} for field in search_fields} multi_match_options = { 'operator': 'and' } - - # Filter fields filter_fields = { 'id': 'id', 'pk': 'pk', 'parent': 'group.name.raw', - **common_subject_fields - + **subject_filter_fields } - - # Define ordering fields ordering_fields = { 'id': 'id', 'study': 'study.raw', @@ -81,15 +81,16 @@ class GroupViewSet(AccessView): } - class IndividualViewSet(AccessView): + """ Endpoint to query individuals + + The individual endpoint gives access to the individual subjects data. + """ document = IndividualDocument serializer_class = IndividualElasticSerializer lookup_field = 'id' filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend, MultiMatchSearchFilterBackend] pagination_class = CustomPagination - - # Define search fields search_fields = ( 'characteristica_all_normed.measurement_type.label', 'characteristica_all_normed.choice.label', @@ -98,69 +99,63 @@ class IndividualViewSet(AccessView): 'study.name', 'study.sid', 'group.name', - ) multi_match_search_fields = {field: {"boost": 1} for field in search_fields} multi_match_options = { 'operator': 'and' } - - # Filter fields filter_fields = { 'pk': 'pk', 'id': 'id', 'name': 'name.raw', 'group_name': 'group.name.raw', - **common_subject_fields + **subject_filter_fields } - - # Define ordering fields ordering_fields = { 'id': 'id', 'group': 'group.raw', } -common_filter_fields = { - 'study_sid': {'field': 'study_sid.raw', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - - ], - }, - 'study_name': {'field': 'study_name.raw', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - - ], - }, +characteristica_filter_fields = { + 'study_sid': { + 'field': 'study_sid.raw', + 'lookups': [ + LOOKUP_QUERY_IN, + LOOKUP_QUERY_EXCLUDE, + ], + }, + 'study_name': { + 'field': 'study_name.raw', + 'lookups': [ + LOOKUP_QUERY_IN, + LOOKUP_QUERY_EXCLUDE, + ], + }, 'characteristica_pk': { - 'field': 'characteristica_pk', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - ], + 'field': 'characteristica_pk', + 'lookups': [ + LOOKUP_QUERY_IN, + LOOKUP_QUERY_EXCLUDE, + ], }, - 'count': 'count', 'measurement_type': 'measurement_type.raw', 'measurement_type_sid': { - 'field': 'measurement_type.sid.raw', - 'lookups': [ + 'field': 'measurement_type.sid.raw', + 'lookups': [ LOOKUP_QUERY_IN, LOOKUP_QUERY_EXCLUDE, - ], + ], }, 'choice': 'choice.raw', 'choice_sid': { - 'field': 'choice.sid.raw', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - ], + 'field': 'choice.sid.raw', + 'lookups': [ + LOOKUP_QUERY_IN, + LOOKUP_QUERY_EXCLUDE, + ], }, 'substance': 'substance.raw', 'value': 'value', @@ -172,51 +167,55 @@ class IndividualViewSet(AccessView): 'sd': 'sd', 'cv': 'cv', 'unit': 'unit.raw', - } +} + + class GroupCharacteristicaViewSet(AccessView): + """ Endpoint to query group characteristica + + The endpoint gives access to characteristica information for groups. 
+ """ + swagger_schema = None document = GroupCharacteristicaDocument serializer_class = GroupCharacteristicaSerializer pagination_class = CustomPagination lookup_field = 'id' - filter_backends = [FilteringFilterBackend, IdsFilterBackend, OrderingFilterBackend,SearchFilterBackend, MultiMatchSearchFilterBackend] - - search_fields = ( - ) + filter_backends = [ + FilteringFilterBackend, + IdsFilterBackend, + OrderingFilterBackend, + CompoundSearchFilterBackend, + MultiMatchSearchFilterBackend + ] + search_fields = () multi_match_search_fields = {field: {"boost": 1} for field in search_fields} multi_match_options = { 'operator': 'and' } - filter_fields = { - - - 'group_name': {'field': 'group_name', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - - ], - }, - - 'group_pk': {'field': 'group_pk', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - - ], - }, - 'group_parent_pk': {'field': 'group_parent_pk', - 'lookups': [ - LOOKUP_QUERY_IN, - LOOKUP_QUERY_EXCLUDE, - - ], - }, - + 'group_name': { + 'field': 'group_name', + 'lookups': [ + LOOKUP_QUERY_IN, + LOOKUP_QUERY_EXCLUDE, + ], + }, + 'group_pk': { + 'field': 'group_pk', + 'lookups': [ + LOOKUP_QUERY_IN, + LOOKUP_QUERY_EXCLUDE, + ], + }, + 'group_parent_pk': { + 'field': 'group_parent_pk', + 'lookups': [ + LOOKUP_QUERY_IN, + LOOKUP_QUERY_EXCLUDE, + ], + }, 'group_count': 'group_count', - - **common_filter_fields - + **characteristica_filter_fields } ordering_fields = { 'choice': 'choice.raw', @@ -225,6 +224,11 @@ class GroupCharacteristicaViewSet(AccessView): class IndividualCharacteristicaViewSet(AccessView): + """ Endpoint to query individual characteristica + + The endpoint gives access to characteristica information for individuals. + """ + swagger_schema = None document = IndividualCharacteristicaDocument serializer_class = IndividualCharacteristicaSerializer pagination_class = CustomPagination @@ -239,15 +243,12 @@ class IndividualCharacteristicaViewSet(AccessView): 'group.name', 'study.name', 'study.sid', - ) multi_match_search_fields = {field: {"boost": 1} for field in search_fields} multi_match_options = { 'operator': 'and' } - filter_fields = { - 'individual_name': { 'field': 'individual_name', 'lookups': [ @@ -269,7 +270,7 @@ class IndividualCharacteristicaViewSet(AccessView): LOOKUP_QUERY_EXCLUDE, ], }, - **common_filter_fields + **characteristica_filter_fields } ordering_fields = { @@ -282,6 +283,7 @@ class IndividualCharacteristicaViewSet(AccessView): # Views queried not from elastic search ########################################################### class DataFileViewSet(viewsets.ModelViewSet): + swagger_schema = None queryset = DataFile.objects.all() serializer_class = DataFileSerializer permission_classes = (StudyPermission,) diff --git a/backend/pkdb_app/urls.py b/backend/pkdb_app/urls.py index 308edee9..824c14e9 100755 --- a/backend/pkdb_app/urls.py +++ b/backend/pkdb_app/urls.py @@ -3,12 +3,16 @@ """ from django.conf.urls import url from django.urls import path, include +from drf_yasg.views import get_schema_view from pkdb_app.data.views import DataAnalysisViewSet, SubSetViewSet from rest_framework.authtoken.views import obtain_auth_token from rest_framework.routers import DefaultRouter +from .views import CustomOpenAPISchemaGenerator +from drf_yasg import openapi + from .statistics import ( - StatisticsViewSet, + StatisticsViewSet, SubstanceStatisticsViewSet, ) from .info_nodes.views import ( @@ -51,13 +55,13 @@ # Misc URLs # ----------------------------------------------------------------------------- 
router.register("statistics", StatisticsViewSet, basename="statistics")
+router.register("statistics/substances", SubstanceStatisticsViewSet, basename="statistics")

 # -----------------------------------------------------------------------------
 # Elastic URLs
 # -----------------------------------------------------------------------------
 router.register("studies", ElasticStudyViewSet, basename="studies")
 router.register("references", ElasticReferenceViewSet, basename="references")
-
 router.register("groups", GroupViewSet, basename="groups_elastic")
 router.register("individuals", IndividualViewSet, basename="individuals")
 router.register("interventions", ElasticInterventionViewSet, basename="interventions")
@@ -79,32 +83,80 @@
 router.register('_info_nodes', InfoNodeViewSet, basename="_info_nodes")  # django

-router.register("interventions_analysis", ElasticInterventionAnalysisViewSet, basename="interventions_analysis")
-router.register("groups_analysis", GroupCharacteristicaViewSet, basename="groups_analysis")
-router.register("individuals_analysis", IndividualCharacteristicaViewSet, basename="individuals_analysis")
-router.register("output_analysis", OutputInterventionViewSet, basename="output_analysis")
-router.register("data_analysis", DataAnalysisViewSet, basename="data_analysis")
+router.register("flat/interventions", ElasticInterventionAnalysisViewSet, basename="interventions_analysis")
+router.register("flat/groups", GroupCharacteristicaViewSet, basename="groups_analysis")
+router.register("flat/individuals", IndividualCharacteristicaViewSet, basename="individuals_analysis")
+router.register("flat/output", OutputInterventionViewSet, basename="output_analysis")
+router.register("flat/data", DataAnalysisViewSet, basename="data_analysis")

-#router.register("pkdata", PKDataView, basename="pkdata")
-urlpattern_views = []
 urlpatterns = [
-    # authentification
-    path('api-token-auth/', ObtainAuthTokenCustom.as_view()),
-    path('api-auth/', include("rest_framework.urls", namespace="rest_framework")),
-    # api
     path("api/v1/", include(router.urls)),
-    path('api/v1/pkdata/', PKDataView.as_view()),
+    path('api/v1/filter/', PKDataView.as_view()),
+]
+#router.register("pkdata", PKDataView, basename="pkdata")
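With the renaming above, the analysis viewsets move under the `flat/` prefix. A rough request sketch against a running instance (the values are illustrative; in django-elasticsearch-dsl-drf the `__in` lookup takes `__`-separated values):

```python
import requests

base_url = "https://pk-db.com/api/v1"

# former 'groups_analysis' route, now exposed as 'flat/groups'
r = requests.get(f"{base_url}/flat/groups/", params={"group_pk__in": "4__6"})
print(r.status_code)
```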

+schema_view = get_schema_view(
+    openapi.Info(
+        title="PK-DB REST API",
+        default_version='v1',
+        description="""
+        PK-DB provides web services based on REST to search, filter, retrieve and download data.
+
+        The data in PK-DB is structured based on **studies**, with a single study corresponding to a single source of information. In most cases such a study corresponds to a single publication or a single clinical trial.
+
+        A study in PK-DB reports pharmacokinetics information for the subjects under investigation in the study. These subjects are characterised by properties such as their *sex*, *age*, *body weight*, *ethnicity* or *health status*. Depending on the reported information, subject information is stored for **groups** and/or **individuals**.
+
+        A second class of information are the **interventions** which were performed on the subjects. Most interventions in pharmacokinetics studies are the application of a certain dose of a substance (e.g. 1 mg paracetamol orally as a tablet). In addition, interventions can consist of other conditions that were changed between the studied subjects or groups, such as the food that was given.
+
+        Finally, pharmacokinetics measurements are performed on the subject. These are often *concentration* measurements in a certain tissue of the subject. These can either be single measurements (**outputs**) or time profiles (**time courses**). Additionally, derived pharmacokinetics parameters such as *AUC*, *clearance*, or *half-lives* are commonly reported. Correlations between these outputs are often shown in the form of **scatter** plots.
+
+        Meta-information is encoded in the form of **info nodes**, which for a given field encode meta-data such as description, synonyms, annotations and database cross-references.
+
+        The REST API provides endpoints for
+        * overview of PK-DB statistics (`statistics`)
+        * searching and filtering of data (`filter`)
+        * accessing study information (`studies`)
+        * accessing groups (`groups`) and individuals (`individuals`)
+        * accessing interventions (`interventions`)
+        * accessing outputs (`outputs`) and subsets (`subsets`)
+        * accessing info_nodes information (`info_nodes`)
+
+        Data can be downloaded using the filter and search endpoint.
+
+        Python examples demonstrating the use of the API are available at
+        https://github.com/matthiaskoenig/pkdb/blob/develop/docs/pkdb_api.ipynb
+
+        If you are interested in contributing to the database please contact Matthias König.
+        """,
+        terms_of_service="https://github.com/matthiaskoenig/pkdb/blob/develop/TERMS_OF_USE.md",
+        contact=openapi.Contact(email="koenigmx@hu-berlin.de", name="Matthias König"),
+        license=openapi.License(name="GNU Lesser General Public License v3 (LGPLv3)"),
+    ),
+    generator_class=CustomOpenAPISchemaGenerator,
+    public=False,
+    patterns=urlpatterns,
+)
+
+urlpatterns = urlpatterns + [
     path("api/v1/update_index/", update_index_study),
     # media files
     url(r'^media/(?P.*)$', serve_protected_document, name='serve_protected_document'),
+    # authentication
+    path('api-token-auth/', ObtainAuthTokenCustom.as_view()),
+    path('api-auth/', include("rest_framework.urls", namespace="rest_framework")),
+
+    url(r'^accounts/', include('rest_email_auth.urls')),
     path('verify/?P[-\w]+)', obtain_auth_token),
     path('reset/?P[-\w]+)', obtain_auth_token),
+
+    url(r'^api/v1/swagger(?P\.json|\.yaml)$', schema_view.without_ui(cache_timeout=0), name='schema-json'),
+    url(r'^api/v1/swagger/$', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
+    url(r'^api/v1/redoc/$', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
 ]
diff --git a/backend/pkdb_app/users/views.py b/backend/pkdb_app/users/views.py
index 57d44b2f..94a6a3d5 100644
--- a/backend/pkdb_app/users/views.py
+++ b/backend/pkdb_app/users/views.py
@@ -14,13 +14,14 @@ class UserViewSet(
     """
     Updates and retrieves user accounts
     """
-
+    swagger_schema = None
     queryset = User.objects.all()
     serializer_class = UserSerializer
     permission_classes = (IsAdminUser,)


 class UserGroupViewSet(viewsets.ModelViewSet):
+    swagger_schema = None
     queryset = Group.objects.all()
     serializer_class = UserGroupSerializer
     permission_classes = (IsAdminUser,)
@@ -30,7 +31,7 @@ class UserCreateViewSet(mixins.CreateModelMixin, mixins.UpdateModelMixin, viewse
     """
     Creates user accounts
     """
-
+    swagger_schema = None
     queryset = User.objects.all()
     serializer_class = CreateUserSerializer
     permission_classes = (IsAdminUser,)
@@ -40,4 +41,5 @@ class UserCreateViewSet(mixins.CreateModelMixin, mixins.UpdateModelMixin, viewse


 class ObtainAuthTokenCustom(ObtainAuthToken):
+    swagger_schema = None
     serializer_class = AuthTokenSerializerCostum
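The drf-yasg routes added above serve the interactive documentation (`swagger`, `redoc`) as well as the raw schema document. For example, fetching the machine-readable schema (assuming the instance serves it to anonymous clients):

```python
import requests

# fetch the generated OpenAPI schema registered above
schema = requests.get("https://pk-db.com/api/v1/swagger.json").json()
print(schema["info"]["title"], schema["info"]["version"])
```

diff --git a/backend/pkdb_app/utils.py b/backend/pkdb_app/utils.py
index 7d49ad87..3eed0fe7 100644
--- a/backend/pkdb_app/utils.py
+++ b/backend/pkdb_app/utils.py
@@ -3,9 +3,7 @@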
 import copy
 import os
-
-from django.http import Http404
-from django.shortcuts import _get_queryset
+import pandas as pd

 from django.utils.translation import gettext_lazy as _
 from rest_framework import serializers
@@ -21,6 +19,7 @@ class SlugRelatedField(serializers.SlugRelatedField):


 def list_duplicates(seq):
+    # FIXME: use collections.Counter
     seen = set()
     seen_add = seen.add
     # adds all elements it doesn't know yet to seen and all other to seen_twice
@@ -82,7 +81,7 @@ def ensure_dir(file_path):
         os.makedirs(directory)


-def update_or_create_multiple(parent, children, related_name, lookup_fields=[]):
+def update_or_create_multiple(parent, children, related_name, lookup_fields=None):
     for child in children:
         lookup_dict = {}
         instance_child = getattr(parent, related_name)
@@ -93,8 +92,6 @@
         else:
             lookup_dict = child

-        # instance_child.update_or_create(**lookup_dict, defaults=child)
-
         try:
             if instance_child.model.__name__ in ["Choice", "Unit"]:
                 obj = instance_child.get(**lookup_dict)
@@ -110,17 +107,13 @@
                 obj.save()
-
-
         except instance_child.model.DoesNotExist:
             instance_dict = {**lookup_dict, **child}
             instance_child.create(**instance_dict)

-
 def create_multiple(parent, children, related_name):
     instance_child = getattr(parent, related_name)
-
     return [instance_child.create(**child) for child in children]
@@ -134,8 +127,10 @@ def create_multiple_bulk_normalized(notnormalized_instances, model_class):
     return model_class.objects.bulk_create(
         [initialize_normed(notnorm_instance) for notnorm_instance in notnormalized_instances])

-def _create(validated_data, model_manager=None, model_serializer= None, create_multiple_keys=[], add_multiple_keys=[], pop=[]):
-    poped_data = {related: validated_data.pop(related, []) for related in pop}
+
+def _create(validated_data, model_manager=None, model_serializer=None,
+            create_multiple_keys=[], add_multiple_keys=[], pop=[]):
+    popped_data = {related: validated_data.pop(related, []) for related in pop}
     related_data_create = {related: validated_data.pop(related, []) for related in create_multiple_keys}
     related_data_add = {related: validated_data.pop(related, []) for related in add_multiple_keys}
     if model_manager is not None:
@@ -149,9 +144,10 @@
         create_multiple(instance, item, key)

     for key, item in related_data_add.items():
-        getattr(instance,key).add(*item)
+        getattr(instance, key).add(*item)
+
+    return instance, popped_data

-    return instance, poped_data

 def initialize_normed(notnorm_instance):
     norm = copy.copy(notnorm_instance)
@@ -162,13 +158,11 @@
     try:
         norm.individual_id = notnorm_instance.individual.pk
-
     except AttributeError:
         pass

     try:
         norm.group_id = notnorm_instance.group.pk
-
     except AttributeError:
         pass
@@ -202,14 +196,21 @@ def set_keys(d, value, *keys):
         d = d[key]
     d[keys[-1]] = value

+def _validate_required_key_and_value(attrs, key, details=None, extra_message: str = ""):
+    if pd.isna(attrs.get(key, None)):  # pd.isnull is an alias of pd.isna, one check suffices
+        error_json = {key: f"The key <{key}> is required. 
{extra_message}"} + if details: + error_json["details"] = details + raise serializers.ValidationError(error_json) -def _validate_requried_key(attrs, key, details=None, extra_message=""): +def _validate_required_key(attrs, key, details=None, extra_message: str = ""): if key not in attrs: error_json = {key: f"The key <{key}> is required. {extra_message}"} if details: error_json["details"] = details raise serializers.ValidationError(error_json) + def _validate_not_allowed_key(attrs, key, details=None, extra_message=""): if key in attrs: error_json = {key: f"The key <{key}> is not allowed. {extra_message}"} diff --git a/backend/pkdb_app/views.py b/backend/pkdb_app/views.py index 7458c438..a81ecc20 100644 --- a/backend/pkdb_app/views.py +++ b/backend/pkdb_app/views.py @@ -2,13 +2,16 @@ Views """ import os +from copy import copy -from django.http import FileResponse, HttpResponseForbidden +from django.http import FileResponse, HttpResponseForbidden from django.shortcuts import get_object_or_404 from rest_framework.authentication import TokenAuthentication from pkdb_app.users.permissions import get_study_file_permission from .subjects.models import DataFile +from drf_yasg.generators import OpenAPISchemaGenerator + def serve_protected_document(request, file): try: @@ -23,8 +26,74 @@ def serve_protected_document(request, file): # Split the elements of the path response = FileResponse(datafile.file, ) response["Content-Disposition"] = "attachment; filename=" + file_name - return response - else: - return HttpResponseForbidden() + return HttpResponseForbidden() + + +class CustomOpenAPISchemaGenerator(OpenAPISchemaGenerator): + """Definition of additional API parameters.""" + + @staticmethod + def _params(table, swagger): + if swagger.paths.get(f'/{table}/'): + _params = [] + params = swagger.paths.get(f'/{table}/').get('get').get("parameters").copy() + + for p in params: + _p = copy(p) + if p.name not in ["ordering", "search_multi_match", "page", "page_size"]: + if table not in p.name: + _p.name = f'{table}__{p.name}' + _params.append(_p) + return _params + + def get_schema(self, request=None, public=False): + """Generate a :class:`.Swagger` object with custom tags""" + + swagger = super().get_schema(request, public) + swagger.tags = [ + { + "name": "filter", + "description": "Filter and search queries (corresponds to search in web interface)" + }, + { + "name": "groups", + "description": "Query groups" + }, + { + "name": "individuals", + "description": "Query individual subjects" + }, + { + "name": "info_nodes", + "description": "Query info nodes" + }, + { + "name": "outputs", + "description": "Query outputs" + }, + { + "name": "statistics", + "description": "Query PK-DB statistics" + }, + { + "name": "studies", + "description": "Query studies" + }, + { + "name": "interventions", + "description": "Query interventions" + }, + { + "name": "subsets", + "description": "Query subsets (timecourses and scatters)" + }, + + ] + + for table in ["studies", "individuals", "groups", "interventions", "outputs", "subsets"]: + if self._params(table, swagger): + swagger.paths.get('/filter/').get('get')["parameters"] += self._params(table, swagger) + + return swagger diff --git a/backend/requirements.txt b/backend/requirements.txt index d3d62d3d..3d71973e 100644 --- a/backend/requirements.txt +++ b/backend/requirements.txt @@ -5,7 +5,7 @@ json-logging>=1.2.6 psycopg2-binary>=2.8.5 # django -Django == 3.1 +Django == 3.1.1 django-model-utils>=4.0.0 django-extra-fields>=3.0.0 django-storages>=1.9.1 @@ -14,7 +14,8 @@ 
 django-cors-headers>=3.4.0
 django-rest-email-auth>=2.1.0

 # REST API
-djangorestframework>=3.11.1
+djangorestframework==3.11.1
+drf-yasg>=1.17.1
 elasticsearch-dsl==7.2.1
 django-elasticsearch-dsl==7.1.4
 django-elasticsearch-dsl-drf>=0.20.8
@@ -25,7 +26,7 @@
 pandas>=1.1.0
 numpy>=1.19.1
 scipy>=1.5.2
 matplotlib>=3.3.0
-pint>=0.14
+pint>=0.16.1
 pkdb-analysis>=0.1.5
diff --git a/docker-compose-develop.yml b/docker-compose-develop.yml
index ea923d32..a523e035 100644
--- a/docker-compose-develop.yml
+++ b/docker-compose-develop.yml
@@ -28,7 +28,7 @@ services:

   postgres:
     restart: always
-    image: postgres:12.3
+    image: postgres:13.0
     ports:
       - "5433:5432"
     volumes:
@@ -51,7 +51,7 @@ services:
       memlock:
         soft: -1
         hard: -1
-    image: elasticsearch:7.8.1
+    image: elasticsearch:7.9.2
     ports:
       - "9123:9200"
     volumes:
diff --git a/docker-compose-production.yml b/docker-compose-production.yml
index b92e4270..f8d6b4b4 100644
--- a/docker-compose-production.yml
+++ b/docker-compose-production.yml
@@ -31,7 +31,7 @@ volumes:

 services:
   postgres:
     restart: always
-    image: postgres:12.3
+    image: postgres:13.0
     ports:
       - "5433:5432"
     volumes:
@@ -43,7 +43,7 @@ services:

   elasticsearch:
     restart: always
-    image: elasticsearch:7.8.1
+    image: elasticsearch:7.9.2
     environment:
       - "ES_JAVA_OPTS=-Xms3g -Xmx3g"
       - bootstrap.memory_lock=true
@@ -94,7 +94,7 @@ services:

   nginx:
     restart: always
-    image: nginx:1.19.1
+    image: nginx:1.19.2
     ports:
       - 8888:80
     volumes:
diff --git a/docs/pkdb_api.ipynb b/docs/pkdb_api.ipynb
new file mode 100644
index 00000000..e0c39f98
--- /dev/null
+++ b/docs/pkdb_api.ipynb
@@ -0,0 +1,795 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "jupyter": {
+     "outputs_hidden": true
+    },
+    "pycharm": {
+     "name": "#%% md\n"
+    }
+   },
+   "source": [
+    "# PKDB-REST API\n",
+    "This document provides examples for querying data from PK-DB via the REST API.
In the following, the Python `requests` package is used to make the web service requests.\n",
+    "\n",
+    "The complete API documentation is available from https://pk-db.com/api/v1/swagger/.\n",
+    "\n",
+    "For questions and information please contact konigmatt@googlemail.com"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# API base url\n",
+    "base_url = \"https://pk-db.com/api/v1\" "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {
+    "jupyter": {
+     "outputs_hidden": false
+    },
+    "pycharm": {
+     "name": "#%%\n"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import requests\n",
+    "from requests import Response\n",
+    "from pprint import pprint\n",
+    "import pandas as pd\n",
+    "\n",
+    "\n",
+    "def json_print(r: Response):\n",
+    "    \"\"\"Simple print for JSON content of response.\"\"\"\n",
+    "    json = r.json()\n",
+    "    pprint(json, sort_dicts=False)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Statistics\n",
+    "The `/statistics/` endpoint allows retrieving a basic overview of the content of PK-DB, consisting of the counts and version information.\n",
+    "\n",
+    "To try the query in your browser use \n",
+    "https://pk-db.com/api/v1/statistics/?format=json"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {
+    "jupyter": {
+     "outputs_hidden": false
+    },
+    "pycharm": {
+     "name": "#%%\n"
+    }
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "{'version': '0.9.2a4',\n",
+      " 'study_count': 489,\n",
+      " 'reference_count': 489,\n",
+      " 'group_count': 1346,\n",
+      " 'individual_count': 5505,\n",
+      " 'intervention_count': 1329,\n",
+      " 'output_count': 67695,\n",
+      " 'output_calculated_count': 11473,\n",
+      " 'timecourse_count': 2957,\n",
+      " 'scatter_count': 36}\n"
+     ]
+    }
+   ],
+   "source": [
+    "# query endpoint and print results\n",
+    "r = requests.get(f'{base_url}/statistics/')\n",
+    "json_print(r)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Info nodes\n",
+    "Information in PK-DB is organized as info nodes. Meta-information is encoded in the form of info nodes, which for a given field encode meta-data such as description, synonyms, annotations and database cross-references. The information in the info nodes can be used to map data to other databases.\n",
+    "\n",
+    "### Get info node information\n",
+    "Information on info nodes can be retrieved using the `sid` with the `info_nodes` endpoint. An overview of the existing info nodes is available from the info nodes tab https://pk-db.com/curation.
\n", + "\n", + "In the following example we query the information for the substance `caffeine` with the`sid=caf`\n", + "\n", + "To try the query in your browser use \n", + "https://pk-db.com/api/v1/info_nodes/caf/?format=json" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'sid': 'caf',\n", + " 'name': 'caffeine',\n", + " 'label': 'caffeine',\n", + " 'deprecated': False,\n", + " 'ntype': 'substance',\n", + " 'dtype': 'undefined',\n", + " 'description': 'A methylxanthine alkaloid found in the seeds, nuts, or leaves '\n", + " 'of a number of plants native to South America and East Asia '\n", + " 'that is structurally related to adenosine and acts primarily '\n", + " 'as an adenosine receptor antagonist with psychotropic and '\n", + " 'anti-inflammatory activities.',\n", + " 'synonyms': ['1,3,7-TMX',\n", + " '1,3,7-Trimethylxanthine',\n", + " '1,3,7-trimethyl-2,6-dioxopurine',\n", + " '1,3,7-trimethyl-3,7-dihydro-1H-purine-2,6-dione',\n", + " '1,3,7-trimethylpurine-2,6-dione',\n", + " '1,3,7-trimethylxanthine',\n", + " '1-methyltheobromine',\n", + " '137MX',\n", + " '3,7-Dihydro-1,3,7-trimethyl-1H-purin-2,6-dion',\n", + " '3,7-Dihydro-1,3,7-trimethyl-1H-purine-2,6-dione',\n", + " '7-methyltheophylline',\n", + " 'CAF',\n", + " 'CAFFEINE',\n", + " 'Caffeine',\n", + " 'Coffein',\n", + " 'Koffein',\n", + " 'Methyltheobromine',\n", + " 'Thein',\n", + " 'Theine',\n", + " 'Trimethylxanthine',\n", + " 'anhydrous caffeine',\n", + " 'cafeina',\n", + " 'cafeine',\n", + " 'caffeine',\n", + " 'guaranine',\n", + " 'mateina',\n", + " 'methyltheobromine',\n", + " 'teina',\n", + " 'theine'],\n", + " 'parents': [],\n", + " 'annotations': [{'label': 'caffeine',\n", + " 'relation': 'BQB_IS',\n", + " 'term': 'CHEBI:27732',\n", + " 'collection': 'chebi',\n", + " 'description': 'A trimethylxanthine in which the three '\n", + " 'methyl groups are located at positions 1, 3, '\n", + " 'and 7. A purine alkaloid that occurs '\n", + " 'naturally in tea and coffee.',\n", + " 'url': 'https://www.ebi.ac.uk/chebi/searchId.do?chebiId=CHEBI:27732'},\n", + " {'label': 'Caffeine',\n", + " 'relation': 'BQB_IS',\n", + " 'term': 'C328',\n", + " 'collection': 'ncit',\n", + " 'description': 'A methylxanthine alkaloid found in the '\n", + " 'seeds, nuts, or leaves of a number of plants '\n", + " 'native to South America and East Asia that '\n", + " 'is structurally related to adenosine and '\n", + " 'acts primarily as an adenosine receptor '\n", + " 'antagonist with psychotropic and '\n", + " 'anti-inflammatory activities. Upon '\n", + " 'ingestion, caffeine binds to adenosine '\n", + " 'receptors in the central nervous system '\n", + " '(CNS), which inhibits adenosine binding. '\n", + " 'This inhibits the adenosine-mediated '\n", + " 'downregulation of CNS activity; thus, '\n", + " 'stimulating the activity of the medullary, '\n", + " 'vagal, vasomotor, and respiratory centers in '\n", + " 'the brain. This agent also promotes '\n", + " 'neurotransmitter release that further '\n", + " 'stimulates the CNS. The anti-inflammatory '\n", + " 'effects of caffeine are due the nonselective '\n", + " 'competitive inhibition of phosphodiesterases '\n", + " '(PDEs). 
Inhibition of PDEs raises the '\n", + " 'intracellular concentration of cyclic AMP '\n", + " '(cAMP), activates protein kinase A, and '\n", + " 'inhibits leukotriene synthesis, which leads '\n", + " 'to reduced inflammation and innate immunity.',\n", + " 'url': 'http://ncit.nci.nih.gov/ncitbrowser/ConceptReport.jsp?dictionary=NCI%20Thesaurus&code=C328'},\n", + " {'label': None,\n", + " 'relation': 'BQB_IS',\n", + " 'term': 'RYYVLZVUVIJVGH-UHFFFAOYSA-N',\n", + " 'collection': 'inchikey',\n", + " 'description': None,\n", + " 'url': 'http://www.chemspider.com/inchikey=RYYVLZVUVIJVGH-UHFFFAOYSA-N'}],\n", + " 'xrefs': [{'name': 'chembl',\n", + " 'accession': 'CHEMBL113',\n", + " 'url': 'https://www.ebi.ac.uk/chembldb/compound/inspect/CHEMBL113'},\n", + " {'name': 'drugbank',\n", + " 'accession': 'DB00201',\n", + " 'url': 'http://www.drugbank.ca/drugs/DB00201'},\n", + " {'name': 'pdb',\n", + " 'accession': 'CFF',\n", + " 'url': 'http://www.ebi.ac.uk/pdbe-srv/pdbechem/chemicalCompound/show/CFF'},\n", + " {'name': 'gtopdb',\n", + " 'accession': '407',\n", + " 'url': 'http://www.guidetopharmacology.org/GRAC/LigandDisplayForward?ligandId=407'},\n", + " {'name': 'kegg_ligand',\n", + " 'accession': 'C07481',\n", + " 'url': 'http://www.genome.jp/dbget-bin/www_bget?C07481'},\n", + " {'name': 'chebi',\n", + " 'accession': '27732',\n", + " 'url': 'http://www.ebi.ac.uk/chebi/searchId.do?chebiId=CHEBI%3A27732'},\n", + " {'name': 'zinc',\n", + " 'accession': 'ZINC000000001084',\n", + " 'url': 'http://zinc15.docking.org/substances/ZINC000000001084'},\n", + " {'name': 'emolecules',\n", + " 'accession': '27517656',\n", + " 'url': 'https://www.emolecules.com/cgi-bin/more?vid=27517656'},\n", + " {'name': 'emolecules',\n", + " 'accession': '493944',\n", + " 'url': 'https://www.emolecules.com/cgi-bin/more?vid=493944'},\n", + " {'name': 'ibm',\n", + " 'accession': 'F5DC77C5C625DA4D47FA47B7105235AE',\n", + " 'url': 'http://www-935.ibm.com/services/us/gbs/bao/siip/nih/?sid=F5DC77C5C625DA4D47FA47B7105235AE'},\n", + " {'name': 'atlas',\n", + " 'accession': 'caffeine',\n", + " 'url': 'http://www.ebi.ac.uk/gxa/query?conditionQuery=caffeine'},\n", + " {'name': 'fdasrs',\n", + " 'accession': '3G6A5W338E',\n", + " 'url': 'http://fdasis.nlm.nih.gov/srs/ProxyServlet?mergeData=true&objectHandle=DBMaint&APPLICATION_NAME=fdasrs&actionHandle=default&nextPage=jsp/srs/ResultScreen.jsp&TXTSUPERLISTID=3G6A5W338E'},\n", + " {'name': 'surechembl',\n", + " 'accession': 'SCHEMBL5671',\n", + " 'url': 'https://www.surechembl.org/chemical/SCHEMBL5671'},\n", + " {'name': 'pharmgkb',\n", + " 'accession': 'PA448710',\n", + " 'url': 'https://www.pharmgkb.org/drug/PA448710'},\n", + " {'name': 'hmdb',\n", + " 'accession': 'HMDB0001847',\n", + " 'url': 'http://www.hmdb.ca/metabolites/HMDB0001847'},\n", + " {'name': 'pubchem_tpharma',\n", + " 'accession': '14772978',\n", + " 'url': 'http://pubchem.ncbi.nlm.nih.gov/substance/14772978'},\n", + " {'name': 'pubchem',\n", + " 'accession': '2519',\n", + " 'url': 'http://pubchem.ncbi.nlm.nih.gov/compound/2519'},\n", + " {'name': 'mcule',\n", + " 'accession': 'MCULE-3362813910',\n", + " 'url': 'https://mcule.com/MCULE-3362813910'},\n", + " {'name': 'nmrshiftdb2',\n", + " 'accession': '10016316',\n", + " 'url': 'http://nmrshiftdb.org/molecule/10016316'},\n", + " {'name': 'lincs',\n", + " 'accession': 'LSM-2026',\n", + " 'url': 'http://identifiers.org/lincs.smallmolecule/LSM-2026'},\n", + " {'name': 'actor',\n", + " 'accession': '58-08-2',\n", + " 'url': 
'http://actor.epa.gov/actor/chemical.xhtml?casrn=58-08-2'},\n",
+      "           {'name': 'nikkaji',\n",
+      "            'accession': 'J2.330B',\n",
+      "            'url': 'http://jglobal.jst.go.jp/en/redirect?Nikkaji_No=J2.330B'},\n",
+      "           {'name': 'bindingdb',\n",
+      "            'accession': '10849',\n",
+      "            'url': 'http://www.bindingdb.org/bind/chemsearch/marvin/MolStructure.jsp?monomerid=10849'},\n",
+      "           {'name': 'comptox',\n",
+      "            'accession': 'DTXSID0020232',\n",
+      "            'url': 'https://comptox.epa.gov/dashboard/DTXSID0020232'},\n",
+      "           {'name': 'drugcentral',\n",
+      "            'accession': '463',\n",
+      "            'url': 'http://drugcentral.org/drugcard/463'},\n",
+      "           {'name': 'metabolights',\n",
+      "            'accession': 'MTBLC27732',\n",
+      "            'url': 'http://www.ebi.ac.uk/metabolights/MTBLC27732'},\n",
+      "           {'name': 'brenda',\n",
+      "            'accession': '207634',\n",
+      "            'url': 'https://www.brenda-enzymes.org/ligand.php?brenda_ligand_id=207634'},\n",
+      "           {'name': 'brenda',\n",
+      "            'accession': '207635',\n",
+      "            'url': 'https://www.brenda-enzymes.org/ligand.php?brenda_ligand_id=207635'},\n",
+      "           {'name': 'brenda',\n",
+      "            'accession': '51266',\n",
+      "            'url': 'https://www.brenda-enzymes.org/ligand.php?brenda_ligand_id=51266'},\n",
+      "           {'name': 'brenda',\n",
+      "            'accession': '7965',\n",
+      "            'url': 'https://www.brenda-enzymes.org/ligand.php?brenda_ligand_id=7965'},\n",
+      "           {'name': 'brenda',\n",
+      "            'accession': '882',\n",
+      "            'url': 'https://www.brenda-enzymes.org/ligand.php?brenda_ligand_id=882'},\n",
+      "           {'name': 'rhea',\n",
+      "            'accession': '27732',\n",
+      "            'url': 'http://www.rhea-db.org/searchresults?q=CHEBI:27732'},\n",
+      "           {'name': 'dailymed',\n",
+      "            'accession': 'CAFFEINE',\n",
+      "            'url': 'https://dailymed.nlm.nih.gov/dailymed/search.cfm?adv=1&labeltype=human&query=ACTIVEMOIETY:(CAFFEINE'},\n",
+      "           {'name': 'clinicaltrials',\n",
+      "            'accession': 'ANHYDROUS CAFFEINE',\n",
+      "            'url': 'https://www.clinicaltrials.gov/ct2/results?&type=Intr&intr=%22ANHYDROUS%20CAFFEINE%22'},\n",
+      "           {'name': 'clinicaltrials',\n",
+      "            'accession': 'CAFCIT',\n",
+      "            'url': 'https://www.clinicaltrials.gov/ct2/results?&type=Intr&intr=%22CAFCIT%22'},\n",
+      "           {'name': 'clinicaltrials',\n",
+      "            'accession': 'CAFFEINE',\n",
+      "            'url': 'https://www.clinicaltrials.gov/ct2/results?&type=Intr&intr=%22CAFFEINE%22'},\n",
+      "           {'name': 'clinicaltrials',\n",
+      "            'accession': 'CAFFEINE CITRATE',\n",
+      "            'url': 'https://www.clinicaltrials.gov/ct2/results?&type=Intr&intr=%22CAFFEINE%20CITRATE%22'},\n",
+      "           {'name': 'clinicaltrials',\n",
+      "            'accession': 'PEYONA',\n",
+      "            'url': 'https://www.clinicaltrials.gov/ct2/results?&type=Intr&intr=%22PEYONA%22'},\n",
+      "           {'name': 'InChIKey through ChemSpider',\n",
+      "            'accession': 'RYYVLZVUVIJVGH-UHFFFAOYSA-N',\n",
+      "            'url': 'http://www.chemspider.com/inchikey=RYYVLZVUVIJVGH-UHFFFAOYSA-N'},\n",
+      "           {'name': 'InChiKey resolver at NCI',\n",
+      "            'accession': 'RYYVLZVUVIJVGH-UHFFFAOYSA-N',\n",
+      "            'url': 'http://cactus.nci.nih.gov/chemical/structure/RYYVLZVUVIJVGH-UHFFFAOYSA-N/names'}],\n",
+      " 'measurement_type': None,\n",
+      " 'substance': {'mass': 194.19076, 'charge': 0.0, 'formula': 'C8H10N4O2'}}\n"
+     ]
+    }
+   ],
+   "source": [
+    "# query caffeine info_node\n",
+    "r = requests.get(f'{base_url}/info_nodes/caf/')\n",
+    "json_print(r)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Search info node\n",
+    "Info nodes can be searched via the `search` argument to the `/info_nodes/` endpoint. \n",
+    "\n",
+    "In the following example info nodes containing `caffeine` are searched.
The results are paginated and, if more than a single page of results exists, the results from multiple pages have to be combined. \n",
+    "We then parse the JSON response into a pandas DataFrame and display `sid`, `name`, `label` and `description` for the top 10 results.\n",
+    "\n",
+    "To try the query in your browser use \n",
+    "https://pk-db.com/api/v1/info_nodes/?search=caffeine&format=json"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Number of info nodes on page: 41\n"
+     ]
+    },
+    {
+     "data": {
+      "text/html": [
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
sidnamelabeldescription
0cafcaffeinecaffeineA methylxanthine alkaloid found in the seeds, ...
1caffeine-citratecaffeine citratecaffeine citrateCommercial citrate of caffeine, though not a d...
2caffeine-monohydratecaffeine monohydratecaffeine monohydrateCaffeine monohydrate.
317u17U17UMetabolite of caffeine.
4pxparaxanthineparaxanthineA dimethylxanthine having the two methyl group...
5tptheophyllinetheophyllineA natural alkaloid derivative of xanthine isol...
6137mu137MU137MUMetabolite of caffeine.
7137tmu137TMU137TMUMetabolite of caffeine.
813dmu13DMU13DMUMetabolite of caffeine.
913mu13MU13MUMetabolite of caffeine.
\n", + "
" + ], + "text/plain": [ + " sid name label \\\n", + "0 caf caffeine caffeine \n", + "1 caffeine-citrate caffeine citrate caffeine citrate \n", + "2 caffeine-monohydrate caffeine monohydrate caffeine monohydrate \n", + "3 17u 17U 17U \n", + "4 px paraxanthine paraxanthine \n", + "5 tp theophylline theophylline \n", + "6 137mu 137MU 137MU \n", + "7 137tmu 137TMU 137TMU \n", + "8 13dmu 13DMU 13DMU \n", + "9 13mu 13MU 13MU \n", + "\n", + " description \n", + "0 A methylxanthine alkaloid found in the seeds, ... \n", + "1 Commercial citrate of caffeine, though not a d... \n", + "2 Caffeine monohydrate. \n", + "3 Metabolite of caffeine. \n", + "4 A dimethylxanthine having the two methyl group... \n", + "5 A natural alkaloid derivative of xanthine isol... \n", + "6 Metabolite of caffeine. \n", + "7 Metabolite of caffeine. \n", + "8 Metabolite of caffeine. \n", + "9 Metabolite of caffeine. " + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# query info nodes about caffeine\n", + "r = requests.get(f'{base_url}/info_nodes/?search=caffeine')\n", + "json = r.json()\n", + "\n", + "# The 'data' key contains all the response data consisting of count and actual data\n", + "count = json[\"data\"][\"count\"]\n", + "print(f\"Number of info nodes on page: {count}\")\n", + "\n", + "# conversion of result data to a pandas DataFrame\n", + "data = json[\"data\"][\"data\"]\n", + "df = pd.DataFrame.from_dict(data)\n", + "\n", + "# printing selected columns\n", + "df[[\"sid\", \"name\", \"label\", \"description\"]].head(10)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Get all info nodes\n", + "To retrieve all available info nodes use the `/info_nodes/` endpoint.\n", + "\n", + "To try the query in your browser use \n", + "https://pk-db.com/api/v1/info_nodes/?format=json" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "jupyter": { + "outputs_hidden": false + }, + "pycharm": { + "name": "#%%\n" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Number of info nodes: 1030\n" + ] + } + ], + "source": [ + "r = requests.get(f'{base_url}/info_nodes/')\n", + "json = r.json()\n", + "print(f\"Number of info nodes: {json['data']['count']}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To access the next page of a paginated page use the `page` argument.\n", + "\n", + "For instance to access the page 2 use\n", + "https://pk-db.com/api/v1/info_nodes/?page=2&format=json" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Search and filter data\n", + "The `/filter/` endpoint is the main endpoint to search and filter data. The endpoint returns a `uuid` to access the information of the results and and overview of the counts. The `studies__*`, `groups__*`, `individuals_*`, ... arguments allow to search and filter on the respective information of the studies. These arguments correspond to the search flags in web search.\n", + "\n", + "In the following example we filter the information for the study with the name `Abernethy1982`. Importantly, the `uuid` is not permanent. 
To run the following queries yourself, a fresh `uuid` has to be retrieved via the filter query first.\n",
+    "\n",
+    "To try the query in your browser use \n",
+    "https://pk-db.com/api/v1/filter/?studies__name=Abernethy1982&format=json"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {
+    "jupyter": {
+     "outputs_hidden": false
+    },
+    "pycharm": {
+     "name": "#%%\n"
+    }
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "{'uuid': '0c178b63-6a93-4d75-92b0-e2f42372b0ea',\n",
+      " 'studies': 1,\n",
+      " 'groups': 4,\n",
+      " 'individuals': 46,\n",
+      " 'interventions': 1,\n",
+      " 'outputs': 147,\n",
+      " 'timecourses': 4,\n",
+      " 'scatter': 0}\n",
+      "\n",
+      "uuid: 0c178b63-6a93-4d75-92b0-e2f42372b0ea\n"
+     ]
+    }
+   ],
+   "source": [
+    "r = requests.get(f'{base_url}/filter/?studies__name=Abernethy1982')\n",
+    "json_print(r)\n",
+    "uuid = r.json()['uuid']\n",
+    "print(f\"\\nuuid: {uuid}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Accessing data for a search query \n",
+    "The `groups`, `individuals`, `interventions`, `outputs`, `timecourses` and `scatters` can now be loaded using the `uuid`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "https://pk-db.com/api/v1/groups/?uuid=0c178b63-6a93-4d75-92b0-e2f42372b0ea&format=json\n",
+      "groups: 4\n",
+      "https://pk-db.com/api/v1/individuals/?uuid=0c178b63-6a93-4d75-92b0-e2f42372b0ea&format=json\n",
+      "individuals: 46\n",
+      "https://pk-db.com/api/v1/interventions/?uuid=0c178b63-6a93-4d75-92b0-e2f42372b0ea&format=json\n",
+      "interventions: 1\n",
+      "https://pk-db.com/api/v1/outputs/?uuid=0c178b63-6a93-4d75-92b0-e2f42372b0ea&format=json\n",
+      "outputs: 147\n",
+      "https://pk-db.com/api/v1/subsets/?data_type=timecourse&uuid=0c178b63-6a93-4d75-92b0-e2f42372b0ea&format=json\n",
+      "timecourses: 4\n",
+      "https://pk-db.com/api/v1/subsets/?data_type=scatter&uuid=0c178b63-6a93-4d75-92b0-e2f42372b0ea&format=json\n",
+      "scatters: 0\n"
+     ]
+    }
+   ],
+   "source": [
+    "# query information via uuid\n",
+    "for endpoint in [\"groups\", \"individuals\", \"interventions\", \"outputs\"]:\n",
+    "    url = f\"{base_url}/{endpoint}/?uuid={uuid}&format=json\"\n",
+    "    print(url)\n",
+    "    r = requests.get(url)\n",
+    "    count = r.json()[\"data\"][\"count\"]\n",
+    "    print(f\"{endpoint}: {count}\")\n",
+    "    \n",
+    "# query timecourses and scatters\n",
+    "for data_type in [\"timecourse\", \"scatter\"]:\n",
+    "    url = f\"{base_url}/subsets/?data_type={data_type}&uuid={uuid}&format=json\"\n",
+    "    print(url)\n",
+    "    r = requests.get(url)\n",
+    "    count = r.json()[\"data\"][\"count\"]\n",
+    "    print(f\"{data_type}s: {count}\")\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Download data\n",
+    "Data can be downloaded using the `download` argument, returning the information as a zip archive.",
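+    "\n",
+    "The zip archive contains one '.csv' file per table together with README and TERMS_OF_USE files. The `download` argument can also be combined with `concise`; as a rough sketch (the `outputs__measurement_type` filter name is an assumption following the prefix convention of the filter endpoint):\n",
+    "\n",
+    "```python\n",
+    "r = requests.get(f'{base_url}/filter/', params={'outputs__measurement_type': 'thalf', 'concise': 'false', 'download': 'true'})\n",
+    "```"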
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Download data\n", + "Data can be downloaded using the `download` argument, which returns the information as a zip archive." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "https://pk-db.com/api/v1/filter/?studies__name=Abernethy1982&download=true\n", + "created temporary directory /tmp/tmpipkyxwfy\n", + "['interventions.csv', 'studies.csv', 'timecourses.csv', 'scatter.csv', 'groups.csv', 'individuals.csv', 'outputs.csv']\n", + " study_sid study_name output_pk intervention_pk group_pk \\\n", + "0 PKDB00198 Abernethy1982 29 2 6.0 \n", + "1 PKDB00198 Abernethy1982 31 2 4.0 \n", + "2 PKDB00198 Abernethy1982 23 2 3.0 \n", + "3 PKDB00198 Abernethy1982 32 2 4.0 \n", + "4 PKDB00198 Abernethy1982 30 2 6.0 \n", + ".. ... ... ... ... ... \n", + "142 PKDB00198 Abernethy1982 288 2 NaN \n", + "143 PKDB00198 Abernethy1982 289 2 NaN \n", + "144 PKDB00198 Abernethy1982 290 2 NaN \n", + "145 PKDB00198 Abernethy1982 293 2 NaN \n", + "146 PKDB00198 Abernethy1982 294 2 NaN \n", + "\n", + " individual_pk normed calculated tissue method ... substance \\\n", + "0 NaN True False plasma NaN ... paracetamol \n", + "1 NaN True False plasma NaN ... paracetamol \n", + "2 NaN True False plasma NaN ... paracetamol \n", + "3 NaN True False plasma NaN ... paracetamol \n", + "4 NaN True False plasma NaN ... paracetamol \n", + ".. ... ... ... ... ... ... ... \n", + "142 4.0 True True plasma NaN ... paracetamol \n", + "143 4.0 True True plasma NaN ... paracetamol \n", + "144 4.0 True True plasma NaN ... paracetamol \n", + "145 4.0 True True plasma NaN ... paracetamol \n", + "146 4.0 True True plasma NaN ... paracetamol \n", + "\n", + " value mean median min max sd se cv \\\n", + "0 NaN 19.380 NaN 11.9400 29.3400 NaN NaN NaN \n", + "1 NaN 2.320 NaN 1.7300 3.1700 NaN NaN NaN \n", + "2 NaN 0.810 NaN 0.5300 1.3100 NaN NaN NaN \n", + "3 NaN 61.400 NaN 47.0000 82.1000 NaN NaN NaN \n", + "4 NaN 0.273 NaN 0.2286 0.4164 NaN NaN NaN \n", + ".. ... ... ... ... ... .. .. .. \n", + "142 0.017373 NaN NaN NaN NaN NaN NaN NaN \n", + "143 37.415283 NaN NaN NaN NaN NaN NaN NaN \n", + "144 0.012036 NaN NaN NaN NaN NaN NaN NaN \n", + "145 122.530075 NaN NaN NaN NaN NaN NaN NaN \n", + "146 123.189330 NaN NaN NaN NaN NaN NaN NaN \n", + "\n", + " unit \n", + "0 liter / hour \n", + "1 hour \n", + "2 liter / kilogram \n", + "3 liter \n", + "4 liter / hour / kilogram \n", + ".. ... 
\n", + "142 gram * hour / liter \n", + "143 liter / hour \n", + "144 gram / liter \n", + "145 liter \n", + "146 liter \n", + "\n", + "[147 rows x 26 columns]\n" + ] + } + ], + "source": [ + "import os\n", + "import requests, zipfile, io\n", + "import tempfile\n", + "\n", + "url = f\"{base_url}/filter/?studies__name=Abernethy1982&download=true\"\n", + "print(url)\n", + "\n", + "r = requests.get(url)\n", + "z = zipfile.ZipFile(io.BytesIO(r.content))\n", + "\n", + "with tempfile.TemporaryDirectory() as tmpdir:\n", + " print('created temporary directory', tmpdir)\n", + " z.extractall(tmpdir)\n", + " \n", + " # zip contains information on studies, groups, individuals, interventions, outputs, timecourses, scatters\n", + " print(os.listdir(tmpdir))\n", + " \n", + " # loading the outputs as DataFrame\n", + " df = pd.read_csv(os.path.join(tmpdir, \"outputs.csv\"), index_col=0)\n", + " print(df)\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "pkdb_api", + "language": "python", + "name": "pkdb_api" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.2" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/docs/requirements_api.txt b/docs/requirements_api.txt new file mode 100644 index 00000000..898d94e0 --- /dev/null +++ b/docs/requirements_api.txt @@ -0,0 +1,10 @@ +# cd docs +# mkvirtualenv pkdb_api --python=python3.8 +# (pkdb_api) pip install -r requirements_api.txt +# (pkdb_api) ipython kernel install --name "pkdb_api" --user +# jupyter lab pkdb_api.ipynb + + +requests +pandas +jupyterlab \ No newline at end of file diff --git a/frontend/Dockerfile-develop b/frontend/Dockerfile-develop index 81348ab1..16025672 100644 --- a/frontend/Dockerfile-develop +++ b/frontend/Dockerfile-develop @@ -1,4 +1,4 @@ -FROM node:14.6.0 as build-stage +FROM node:14.11.0 as build-stage WORKDIR /app COPY package*.json /app/ RUN npm install diff --git a/frontend/Dockerfile-production b/frontend/Dockerfile-production index c74603bc..1a406441 100644 --- a/frontend/Dockerfile-production +++ b/frontend/Dockerfile-production @@ -1,5 +1,5 @@ # build stage -FROM node:14.6.0 as build-stage +FROM node:14.11.0 as build-stage WORKDIR /app COPY package*.json /app/ RUN npm install @@ -7,7 +7,7 @@ COPY . 
/app/ RUN npm run build # production stage -FROM alpine:3.9 as production-stage +FROM alpine:3.12 as production-stage RUN mkdir -p /vue COPY --from=build-stage /app/dist /vue diff --git a/frontend/package.json b/frontend/package.json index d9ccb59b..43bdf54f 100644 --- a/frontend/package.json +++ b/frontend/package.json @@ -1,6 +1,6 @@ { "name": "pkdb", - "version": "0.8.0", + "version": "0.9.2", "private": true, "scripts": { "serve": "vue-cli-service serve", @@ -10,38 +10,41 @@ }, "dependencies": { "@statnett/vue-plotly": "^0.3.2", - "acorn": "^7.1.0", - "axios": "^0.19.1", + "acorn": "^7.4.0", + "axios": "^0.19.2", "base-64": "^0.1.0", - "vega": "^5.9.1", - "vega-embed": "^6.2.2", - "vega-lite": "^4.0.2", - "vue": "^2.6.11", - "vue-auth-image": "0.0.3", - "vue-plotly": "^1.0.1", + "color-normalize": "1.5.0", + "color-rgba": "2.1.1", + "color-parse": "1.3.8", + "vega": "^5.16.1", + "vega-embed": "^6.12.2", + "vega-lite": "^4.16.7", + "vue": "^2.6.12", + "vue-auth-image": "^0.0.3", + "vue-multiselect": "^2.1.6", + "vue-plotly": "^1.1.0", "vue-resource": "^1.5.1", - "vue-router": "^3.0.6", + "vue-router": "^3.4.5", "vue-text-highlight": "^2.0.10", - "vuetify": "^2.2.9", - "vuex": "^3.1.0", - "vuex-persist": "^2.0.0", - "vue-multiselect": "^2.1.6" + "vuetify": "^2.3.10", + "vuex": "^3.5.1", + "vuex-persist": "^2.3.0" }, "devDependencies": { - "@fortawesome/fontawesome-free": "^5.12.0", + "@fortawesome/fontawesome-free": "^5.14.0", "@vue/cli-plugin-babel": "^3.12.1", "@vue/cli-plugin-eslint": "^3.12.1", "@vue/cli-plugin-unit-mocha": "^3.11.1", "@vue/cli-service": "^4.1.2", "@vue/test-utils": "^1.0.0-beta.30", - "chai": "^4.1.2", - "css-loader": "^3.4.2", - "node-sass": "^4.13.1", + "chai": "^4.2.0", + "css-loader": "^3.6.0", + "node-sass": "^4.14.1", "sass-loader": "^8.0.2", - "style-loader": "^1.1.3", - "stylus": "^0.54.7", + "style-loader": "^1.2.1", + "stylus": "^0.54.8", "stylus-loader": "^3.0.2", "vue-cli-plugin-vuetify": "^0.2.1", - "vue-template-compiler": "^2.6.11" + "vue-template-compiler": "^2.6.12" } } diff --git a/frontend/public/assets/images/logo-PKDB.png b/frontend/public/assets/images/logo-PKDB.png new file mode 100644 index 00000000..b0a92dcf Binary files /dev/null and b/frontend/public/assets/images/logo-PKDB.png differ diff --git a/frontend/public/assets/images/pkexample.png b/frontend/public/assets/images/pkexample.png new file mode 100644 index 00000000..4b06f4f2 Binary files /dev/null and b/frontend/public/assets/images/pkexample.png differ diff --git a/frontend/public/assets/images/pkexample2.png b/frontend/public/assets/images/pkexample2.png new file mode 100644 index 00000000..e6a4c656 Binary files /dev/null and b/frontend/public/assets/images/pkexample2.png differ diff --git a/frontend/src/App.vue b/frontend/src/App.vue index 9a382789..dd92f689 100644 --- a/frontend/src/App.vue +++ b/frontend/src/App.vue @@ -1,11 +1,8 @@ \ No newline at end of file + diff --git a/frontend/src/components/FooterBar.vue b/frontend/src/components/FooterBar.vue deleted file mode 100644 index e988773b..00000000 --- a/frontend/src/components/FooterBar.vue +++ /dev/null @@ -1,20 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/Home.vue b/frontend/src/components/Home.vue index beec1597..334678ff 100644 --- a/frontend/src/components/Home.vue +++ b/frontend/src/components/Home.vue @@ -1,30 +1,83 @@ diff --git a/frontend/src/components/ListView.vue b/frontend/src/components/ListView.vue deleted file mode 100644 index 85b918ed..00000000 --- 
a/frontend/src/components/ListView.vue +++ /dev/null @@ -1,33 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/Navigation.vue b/frontend/src/components/Navigation.vue deleted file mode 100644 index f71466ab..00000000 --- a/frontend/src/components/Navigation.vue +++ /dev/null @@ -1,120 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/Overview.vue b/frontend/src/components/Overview.vue deleted file mode 100644 index 09e59020..00000000 --- a/frontend/src/components/Overview.vue +++ /dev/null @@ -1,164 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/Search.vue b/frontend/src/components/Search.vue deleted file mode 100644 index 234eaf68..00000000 --- a/frontend/src/components/Search.vue +++ /dev/null @@ -1,225 +0,0 @@ - - - - - diff --git a/frontend/src/components/detail/StudyDetail.vue b/frontend/src/components/_deprecated/StudyDetail.vue similarity index 100% rename from frontend/src/components/detail/StudyDetail.vue rename to frontend/src/components/_deprecated/StudyDetail.vue diff --git a/frontend/src/components/deprecated/Group.vue b/frontend/src/components/deprecated/Group.vue deleted file mode 100644 index ff01a3e7..00000000 --- a/frontend/src/components/deprecated/Group.vue +++ /dev/null @@ -1,34 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Groups.vue b/frontend/src/components/deprecated/Groups.vue deleted file mode 100644 index e5f24643..00000000 --- a/frontend/src/components/deprecated/Groups.vue +++ /dev/null @@ -1,17 +0,0 @@ - - - - - diff --git a/frontend/src/components/deprecated/Individual.vue b/frontend/src/components/deprecated/Individual.vue deleted file mode 100644 index d45bfb84..00000000 --- a/frontend/src/components/deprecated/Individual.vue +++ /dev/null @@ -1,32 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Individuals.vue b/frontend/src/components/deprecated/Individuals.vue deleted file mode 100644 index b27c7ff7..00000000 --- a/frontend/src/components/deprecated/Individuals.vue +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/frontend/src/components/deprecated/InfoNode.vue b/frontend/src/components/deprecated/InfoNode.vue deleted file mode 100644 index d26b13fd..00000000 --- a/frontend/src/components/deprecated/InfoNode.vue +++ /dev/null @@ -1,40 +0,0 @@ - - - - diff --git a/frontend/src/components/deprecated/Intervention.vue b/frontend/src/components/deprecated/Intervention.vue deleted file mode 100644 index 4cb06f74..00000000 --- a/frontend/src/components/deprecated/Intervention.vue +++ /dev/null @@ -1,30 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Interventions.vue b/frontend/src/components/deprecated/Interventions.vue deleted file mode 100644 index e5797177..00000000 --- a/frontend/src/components/deprecated/Interventions.vue +++ /dev/null @@ -1,17 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/MeasurementTypeSearchChoice.vue b/frontend/src/components/deprecated/MeasurementTypeSearchChoice.vue deleted file mode 100644 index 3229238b..00000000 --- a/frontend/src/components/deprecated/MeasurementTypeSearchChoice.vue +++ /dev/null @@ -1,131 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/MeasurementTypeSearchSingle.vue b/frontend/src/components/deprecated/MeasurementTypeSearchSingle.vue deleted file mode 100644 index 
18b5c14a..00000000 --- a/frontend/src/components/deprecated/MeasurementTypeSearchSingle.vue +++ /dev/null @@ -1,125 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Output.vue b/frontend/src/components/deprecated/Output.vue deleted file mode 100644 index 467b1d47..00000000 --- a/frontend/src/components/deprecated/Output.vue +++ /dev/null @@ -1,30 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Outputs.vue b/frontend/src/components/deprecated/Outputs.vue deleted file mode 100644 index af51df99..00000000 --- a/frontend/src/components/deprecated/Outputs.vue +++ /dev/null @@ -1,18 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Reference.vue b/frontend/src/components/deprecated/Reference.vue deleted file mode 100644 index 8a2b7e9e..00000000 --- a/frontend/src/components/deprecated/Reference.vue +++ /dev/null @@ -1,31 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/References.vue b/frontend/src/components/deprecated/References.vue deleted file mode 100644 index f80b9df7..00000000 --- a/frontend/src/components/deprecated/References.vue +++ /dev/null @@ -1,18 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Studies.vue b/frontend/src/components/deprecated/Studies.vue deleted file mode 100644 index 0a6629e4..00000000 --- a/frontend/src/components/deprecated/Studies.vue +++ /dev/null @@ -1,16 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Study.vue b/frontend/src/components/deprecated/Study.vue deleted file mode 100644 index c01e2607..00000000 --- a/frontend/src/components/deprecated/Study.vue +++ /dev/null @@ -1,48 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Timecourse.vue b/frontend/src/components/deprecated/Timecourse.vue deleted file mode 100644 index 01cfd129..00000000 --- a/frontend/src/components/deprecated/Timecourse.vue +++ /dev/null @@ -1,30 +0,0 @@ - - - - \ No newline at end of file diff --git a/frontend/src/components/deprecated/Timecourses.vue b/frontend/src/components/deprecated/Timecourses.vue deleted file mode 100644 index 42af5d6d..00000000 --- a/frontend/src/components/deprecated/Timecourses.vue +++ /dev/null @@ -1,17 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/detail/CharacteristicaCard.vue b/frontend/src/components/detail/CharacteristicaCard.vue index e95641c8..1ab351c8 100644 --- a/frontend/src/components/detail/CharacteristicaCard.vue +++ b/frontend/src/components/detail/CharacteristicaCard.vue @@ -1,57 +1,90 @@ + + \ No newline at end of file diff --git a/frontend/src/components/detail/FileImageView.vue b/frontend/src/components/detail/FileImageView.vue index 1e624cea..02387985 100644 --- a/frontend/src/components/detail/FileImageView.vue +++ b/frontend/src/components/detail/FileImageView.vue @@ -1,37 +1,36 @@ - \ No newline at end of file diff --git a/frontend/src/components/detail/GroupDetail.vue b/frontend/src/components/detail/GroupDetail.vue deleted file mode 100644 index adb59a29..00000000 --- a/frontend/src/components/detail/GroupDetail.vue +++ /dev/null @@ -1,37 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/detail/IndividualDetail.vue b/frontend/src/components/detail/IndividualDetail.vue deleted file mode 100644 index 621ee3c3..00000000 --- 
a/frontend/src/components/detail/IndividualDetail.vue +++ /dev/null @@ -1,43 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/detail/InfoNodeDetail.vue b/frontend/src/components/detail/InfoNodeDetail.vue index 6055d842..f50f4277 100644 --- a/frontend/src/components/detail/InfoNodeDetail.vue +++ b/frontend/src/components/detail/InfoNodeDetail.vue @@ -1,29 +1,47 @@ @@ -69,9 +86,12 @@ import Annotation from "../info_node/Annotation"; import Xref from "../info_node/Xref"; +import {ApiInteractionMixin} from "../../apiInteraction"; +import {IconsMixin} from "../../icons"; export default { name: 'InfoNodeDetail', + mixins: [ApiInteractionMixin, IconsMixin], components: { Annotation, Xref, @@ -87,9 +107,18 @@ export default { } }, computed: { - highlight(){ - return this.$store.state.highlight + substance_class: function (){ + let label = "generic"; + let annotations = this.data.annotations + for (const annotation of annotations){ + if (annotation.collection == "inchikey"){ + label = "specific"; + break; + } + } + return label }, + parents_labels: function () { let labels = [] let parents = this.data.parents diff --git a/frontend/src/components/detail/InterventionDetail.vue b/frontend/src/components/detail/InterventionDetail.vue index 84a6749a..818bd5af 100644 --- a/frontend/src/components/detail/InterventionDetail.vue +++ b/frontend/src/components/detail/InterventionDetail.vue @@ -1,58 +1,61 @@ diff --git a/frontend/src/components/detail/ReferenceDetail.vue b/frontend/src/components/detail/ReferenceDetail.vue index 0a23f207..44168f3e 100644 --- a/frontend/src/components/detail/ReferenceDetail.vue +++ b/frontend/src/components/detail/ReferenceDetail.vue @@ -1,47 +1,50 @@ \ No newline at end of file diff --git a/frontend/src/components/detail/ScatterDetails.vue b/frontend/src/components/detail/ScatterDetails.vue new file mode 100644 index 00000000..cf22fe8e --- /dev/null +++ b/frontend/src/components/detail/ScatterDetails.vue @@ -0,0 +1,82 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/detail/ScatterIcon.vue b/frontend/src/components/detail/ScatterIcon.vue new file mode 100644 index 00000000..f6de9f28 --- /dev/null +++ b/frontend/src/components/detail/ScatterIcon.vue @@ -0,0 +1,126 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/detail/StudyOverview.vue b/frontend/src/components/detail/StudyOverview.vue index af6f349c..c9b9916c 100644 --- a/frontend/src/components/detail/StudyOverview.vue +++ b/frontend/src/components/detail/StudyOverview.vue @@ -1,158 +1,148 @@ \ No newline at end of file diff --git a/frontend/src/components/detail/SubjectDetail.vue b/frontend/src/components/detail/SubjectDetail.vue new file mode 100644 index 00000000..ff3f5958 --- /dev/null +++ b/frontend/src/components/detail/SubjectDetail.vue @@ -0,0 +1,82 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/dialogs/DownloadDialog.vue b/frontend/src/components/dialogs/DownloadDialog.vue new file mode 100644 index 00000000..f667bab3 --- /dev/null +++ b/frontend/src/components/dialogs/DownloadDialog.vue @@ -0,0 +1,56 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/dialogs/ReferenceDialog.vue b/frontend/src/components/dialogs/ReferenceDialog.vue deleted file mode 100644 index 2dfe8f98..00000000 --- a/frontend/src/components/dialogs/ReferenceDialog.vue +++ /dev/null @@ -1,64 +0,0 @@ - - - - - \ No newline at end of file diff --git a/frontend/src/components/About.vue 
b/frontend/src/components/home/About.vue similarity index 95% rename from frontend/src/components/About.vue rename to frontend/src/components/home/About.vue index b70fe89b..52b95a5d 100644 --- a/frontend/src/components/About.vue +++ b/frontend/src/components/home/About.vue @@ -1,7 +1,7 @@ diff --git a/frontend/src/components/info_node/Pubmed.vue b/frontend/src/components/info_node/Pubmed.vue new file mode 100644 index 00000000..81deb902 --- /dev/null +++ b/frontend/src/components/info_node/Pubmed.vue @@ -0,0 +1,25 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/info_node/Xref.vue b/frontend/src/components/info_node/Xref.vue index ab066151..f527ad02 100644 --- a/frontend/src/components/info_node/Xref.vue +++ b/frontend/src/components/info_node/Xref.vue @@ -1,21 +1,23 @@ diff --git a/frontend/src/components/lib/Annotations.vue b/frontend/src/components/lib/Annotations.vue index 51d7cfe9..0a6ca1fd 100644 --- a/frontend/src/components/lib/Annotations.vue +++ b/frontend/src/components/lib/Annotations.vue @@ -1,10 +1,17 @@ diff --git a/frontend/src/components/lib/Comments.vue b/frontend/src/components/lib/Comments.vue index cf15ef8f..d1aa9db2 100644 --- a/frontend/src/components/lib/Comments.vue +++ b/frontend/src/components/lib/Comments.vue @@ -1,16 +1,10 @@ diff --git a/frontend/src/components/lib/CountBadge.vue b/frontend/src/components/lib/CountBadge.vue index 5381236b..4cb2845b 100644 --- a/frontend/src/components/lib/CountBadge.vue +++ b/frontend/src/components/lib/CountBadge.vue @@ -1,7 +1,8 @@ + + + + \ No newline at end of file diff --git a/frontend/src/components/lib/buttons/DataButton.vue b/frontend/src/components/lib/buttons/DataButton.vue new file mode 100644 index 00000000..817d6c94 --- /dev/null +++ b/frontend/src/components/lib/buttons/DataButton.vue @@ -0,0 +1,25 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/lib/buttons/DownloadButton.vue b/frontend/src/components/lib/buttons/DownloadButton.vue new file mode 100644 index 00000000..613e15de --- /dev/null +++ b/frontend/src/components/lib/buttons/DownloadButton.vue @@ -0,0 +1,41 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/lib/buttons/HideSearchButton.vue b/frontend/src/components/lib/buttons/HideSearchButton.vue new file mode 100644 index 00000000..128e94fd --- /dev/null +++ b/frontend/src/components/lib/buttons/HideSearchButton.vue @@ -0,0 +1,43 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/lib/buttons/JsonButton.vue b/frontend/src/components/lib/buttons/JsonButton.vue index 8654fde1..581b5e97 100644 --- a/frontend/src/components/lib/buttons/JsonButton.vue +++ b/frontend/src/components/lib/buttons/JsonButton.vue @@ -10,6 +10,7 @@ :disabled="resource_url ? false : true" title="JSON for query" icon + target="_blank" > fas fa-code diff --git a/frontend/src/components/lib/buttons/LinkButton.vue b/frontend/src/components/lib/buttons/LinkButton.vue index f1c9aa6f..0395858b 100644 --- a/frontend/src/components/lib/buttons/LinkButton.vue +++ b/frontend/src/components/lib/buttons/LinkButton.vue @@ -6,7 +6,8 @@ :color="color" :to="to" :title="title" - :disabled="to ? 
false : true" + @click="update_details" + :disabled="disabled" > {{ faIcon(icon) }} @@ -14,11 +15,30 @@ diff --git a/frontend/src/components/lib/buttons/SearchHelpButton.vue b/frontend/src/components/lib/buttons/SearchHelpButton.vue new file mode 100644 index 00000000..77fe34b6 --- /dev/null +++ b/frontend/src/components/lib/buttons/SearchHelpButton.vue @@ -0,0 +1,35 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/lib/buttons/SingleStudyButton.vue b/frontend/src/components/lib/buttons/SingleStudyButton.vue new file mode 100644 index 00000000..d2314564 --- /dev/null +++ b/frontend/src/components/lib/buttons/SingleStudyButton.vue @@ -0,0 +1,31 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/lib/chips/CountChip.vue b/frontend/src/components/lib/chips/CountChip.vue index 2f58cfa1..dbf110b2 100644 --- a/frontend/src/components/lib/chips/CountChip.vue +++ b/frontend/src/components/lib/chips/CountChip.vue @@ -6,50 +6,51 @@ flat pill small - :to="to" > - - {{ faIcon }} - + + + {{ faIcon }} + {{ count }} diff --git a/frontend/src/components/lib/chips/FileChip.vue b/frontend/src/components/lib/chips/FileChip.vue index 96a73a88..dc3f9ebf 100644 --- a/frontend/src/components/lib/chips/FileChip.vue +++ b/frontend/src/components/lib/chips/FileChip.vue @@ -2,14 +2,14 @@ - - {{ faIcon('file_image') }} - {{ faIcon('file') }} + + {{ faIcon('file_image') }} + {{ faIcon('file') }} - {{ faIcon('file_excel') }} + {{ faIcon('file_excel') }} - {{ faIcon('file_pdf') }} - {{ faIcon('file') }} + {{ faIcon('file_pdf') }} + {{ faIcon('file') }}    {{ name(file) }} diff --git a/frontend/src/components/lib/chips/InfoNodeChip.vue b/frontend/src/components/lib/chips/InfoNodeChip.vue new file mode 100644 index 00000000..416440df --- /dev/null +++ b/frontend/src/components/lib/chips/InfoNodeChip.vue @@ -0,0 +1,36 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/lib/chips/ObjectChip.vue b/frontend/src/components/lib/chips/ObjectChip.vue index a78d2076..a78051df 100644 --- a/frontend/src/components/lib/chips/ObjectChip.vue +++ b/frontend/src/components/lib/chips/ObjectChip.vue @@ -1,127 +1,162 @@ - diff --git a/frontend/src/components/navigation/Account.vue b/frontend/src/components/navigation/Account.vue new file mode 100644 index 00000000..e22f8176 --- /dev/null +++ b/frontend/src/components/navigation/Account.vue @@ -0,0 +1,40 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/navigation/DetailDrawer.vue b/frontend/src/components/navigation/DetailDrawer.vue new file mode 100644 index 00000000..e91723f0 --- /dev/null +++ b/frontend/src/components/navigation/DetailDrawer.vue @@ -0,0 +1,51 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/navigation/DropDownMenu.vue b/frontend/src/components/navigation/DropDownMenu.vue new file mode 100644 index 00000000..f852b1ca --- /dev/null +++ b/frontend/src/components/navigation/DropDownMenu.vue @@ -0,0 +1,62 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/navigation/Navigation.vue b/frontend/src/components/navigation/Navigation.vue new file mode 100644 index 00000000..917056b3 --- /dev/null +++ b/frontend/src/components/navigation/Navigation.vue @@ -0,0 +1,99 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/plots/ScatterPlot.vue b/frontend/src/components/plots/ScatterPlot.vue new file mode 100644 index 00000000..61bf7d60 --- /dev/null +++ b/frontend/src/components/plots/ScatterPlot.vue 
@@ -0,0 +1,172 @@ + + + + + \ No newline at end of file diff --git a/frontend/src/components/plots/TimecoursePlot.vue b/frontend/src/components/plots/TimecoursePlot.vue index 8da88135..999f2c10 100644 --- a/frontend/src/components/plots/TimecoursePlot.vue +++ b/frontend/src/components/plots/TimecoursePlot.vue @@ -97,17 +97,18 @@ x: this.timecourse.time, y: this.values.y, type: 'scatter', + mode: 'markers+lines', error_y: { - type: 'data', - array: this.errors.y, - visible: true, - color: '#555555', - }, - marker: { - color: '#000000', - size: 8 - }, - }] + type: 'data', + array: this.errors.y, + visible: true, + color: '#555555', + }, + marker: { + color: '#000000', + size: 8 + }, + }] }, layout(){ var xaxis = { diff --git a/frontend/src/components/search/ConciseCheckBox.vue b/frontend/src/components/search/ConciseCheckBox.vue new file mode 100644 index 00000000..e447ccfe --- /dev/null +++ b/frontend/src/components/search/ConciseCheckBox.vue @@ -0,0 +1,25 @@ + + + + \ No newline at end of file diff --git a/frontend/src/components/search/InfoNodeSearch.vue b/frontend/src/components/search/InfoNodeSearch.vue index 743a0af4..42970aad 100644 --- a/frontend/src/components/search/InfoNodeSearch.vue +++ b/frontend/src/components/search/InfoNodeSearch.vue @@ -18,9 +18,7 @@ --> {{props.option.description}} - @@ -157,6 +155,7 @@ export default { .multiselect__content-wrapper { overflow-x: -moz-hidden-unscrollable !important; overflow-y: auto !important; + z-index: 100; width: 100% !important; } diff --git a/frontend/src/components/search/InterventionSearchForm.vue b/frontend/src/components/search/InterventionSearchForm.vue index d36efe39..1045b3a6 100644 --- a/frontend/src/components/search/InterventionSearchForm.vue +++ b/frontend/src/components/search/InterventionSearchForm.vue @@ -1,10 +1,10 @@ diff --git a/frontend/src/components/search/OutputSearchForm.vue b/frontend/src/components/search/OutputSearchForm.vue index 75ad5979..b7387f7f 100644 --- a/frontend/src/components/search/OutputSearchForm.vue +++ b/frontend/src/components/search/OutputSearchForm.vue @@ -1,13 +1,13 @@ @@ -43,6 +43,17 @@ export default { value: value }) }, }, + scatter_query: { + get(){ + return this.$store.state.queries_output_types.scatter_query + }, + set (value) { + this.$store.dispatch('updateQueryAction', { + query_type: "queries_output_types", + key: "scatter_query", + value: value }) + }, + }, }, } \ No newline at end of file diff --git a/frontend/src/components/search/ReferenceSearch.vue b/frontend/src/components/search/ReferenceSearch.vue index d2d2c596..648143f8 100644 --- a/frontend/src/components/search/ReferenceSearch.vue +++ b/frontend/src/components/search/ReferenceSearch.vue @@ -35,7 +35,7 @@ + + \ No newline at end of file diff --git a/frontend/src/components/search/StudySearch.vue b/frontend/src/components/search/StudySearch.vue index 0bbf1ccb..e93d9a51 100644 --- a/frontend/src/components/search/StudySearch.vue +++ b/frontend/src/components/search/StudySearch.vue @@ -35,6 +35,7 @@ block text large + color="black" text-left class="v-btn-multiselect" v-on:mouseover.native="mouseover(props.option)"> @@ -84,8 +85,8 @@ export default { }) }, mouseover(option) { - this.$store.state.show_type = "study" - this.$store.state.detail_info = option + this.$store.state.show_type = "study"; + this.$store.state.detail_info = option; this.$store.state.detail_display = true }, sync_search(search) diff --git a/frontend/src/components/search/StudySearchForm.vue b/frontend/src/components/search/StudySearchForm.vue index 
abe39665..3559599f 100644 --- a/frontend/src/components/search/StudySearchForm.vue +++ b/frontend/src/components/search/StudySearchForm.vue @@ -1,8 +1,13 @@ @@ -15,6 +20,31 @@ export default { components: { StudySearch, UserSearch, - } + }, + computed: { + licence_open: { + get(){ + return this.$store.state.licence_boolean.open + }, + set (value) { + this.$store.dispatch('updateQueryAction', { + query_type: "licence_boolean", + key: "open", + value: value, }) + } + }, + licence_closed: { + get(){ + return this.$store.state.licence_boolean.closed + }, + set (value) { + this.$store.dispatch('updateQueryAction', { + query_type: "licence_boolean", + key: "closed", + value: value, }) + }, + }, + }, } + \ No newline at end of file diff --git a/frontend/src/components/search/SubjectSearchForm.vue b/frontend/src/components/search/SubjectSearchForm.vue index 6faf1bfc..54b929ff 100644 --- a/frontend/src/components/search/SubjectSearchForm.vue +++ b/frontend/src/components/search/SubjectSearchForm.vue @@ -1,64 +1,28 @@ \ No newline at end of file diff --git a/frontend/src/components/search/UserSearch.vue b/frontend/src/components/search/UserSearch.vue index d72ca41d..82ed8cfe 100644 --- a/frontend/src/components/search/UserSearch.vue +++ b/frontend/src/components/search/UserSearch.vue @@ -14,7 +14,8 @@ vue-multiselect element :multiple="true" :custom-label="customLabel" @input = update_store - :searchable="true"> + :searchable="true" + > diff --git a/frontend/src/components/tables/GroupsTable.vue b/frontend/src/components/tables/GroupsTable.vue index d7dee722..24560421 100644 --- a/frontend/src/components/tables/GroupsTable.vue +++ b/frontend/src/components/tables/GroupsTable.vue @@ -1,9 +1,14 @@ diff --git a/frontend/src/components/tables/InfoNodeTable.vue b/frontend/src/components/tables/InfoNodeTable.vue index d8639be7..a65e45cf 100644 --- a/frontend/src/components/tables/InfoNodeTable.vue +++ b/frontend/src/components/tables/InfoNodeTable.vue @@ -1,20 +1,11 @@ @@ -90,31 +74,36 @@ Choices
- - - {{ choice.label }} - + + {{ choice.label }} + + + {{ choice.label }} | + {{ choice.name }} + +
- Mass: {{ + Mass: {{ item.substance.mass }}
Charge: {{ item.substance.charge }}
+ :queries="highlight.split(/[ ,]+/)">{{ item.substance.charge }}
Formula: {{ item.substance.formula }}
+ :queries="highlight.split(/[ ,]+/)">{{ item.substance.formula }}
@@ -143,20 +132,22 @@ import TableToolbar from '../tables/TableToolbar'; import NoData from '../tables/NoData'; import Annotation from "../info_node/Annotation"; import Xref from "../info_node/Xref"; +import InfoNodeChip from "../lib/chips/InfoNodeChip"; +import {StoreInteractionMixin} from "../../storeInteraction"; export default { name: "InfoNodeTable", components: { + InfoNodeChip, NoData, TableToolbar, Annotation, Xref, }, - mixins: [searchTableMixin, UrlMixin], + mixins: [searchTableMixin, UrlMixin, StoreInteractionMixin], data() { return { otype: "info_nodes", - ntypes: ["all", "info_node", "choice", "measurement_type", "application", "tissue", "method", "route", "form", "substance"], otype_single: "info_nodes", headers: [ {text: '', value: 'buttons', sortable: false}, diff --git a/frontend/src/components/tables/InterventionsTable.vue b/frontend/src/components/tables/InterventionsTable.vue index 33ac0ab3..7da91c9c 100644 --- a/frontend/src/components/tables/InterventionsTable.vue +++ b/frontend/src/components/tables/InterventionsTable.vue @@ -1,7 +1,10 @@ - - - \ No newline at end of file diff --git a/frontend/src/components/tables/ScatterTable.vue b/frontend/src/components/tables/ScatterTable.vue new file mode 100644 index 00000000..a04ca50b --- /dev/null +++ b/frontend/src/components/tables/ScatterTable.vue @@ -0,0 +1,136 @@ + + + + + diff --git a/frontend/src/components/tables/StudiesTable.vue b/frontend/src/components/tables/StudiesTable.vue index 641542cb..07b607bc 100644 --- a/frontend/src/components/tables/StudiesTable.vue +++ b/frontend/src/components/tables/StudiesTable.vue @@ -1,82 +1,81 @@ - \ No newline at end of file + + \ No newline at end of file diff --git a/frontend/src/components/tables/TableTabs.vue b/frontend/src/components/tables/TableTabs.vue new file mode 100644 index 00000000..c99113a9 --- /dev/null +++ b/frontend/src/components/tables/TableTabs.vue @@ -0,0 +1,144 @@ + + + + + + + \ No newline at end of file diff --git a/frontend/src/components/tables/TableToolbar.vue b/frontend/src/components/tables/TableToolbar.vue index e488fb64..95dc0998 100644 --- a/frontend/src/components/tables/TableToolbar.vue +++ b/frontend/src/components/tables/TableToolbar.vue @@ -19,9 +19,10 @@