
Figure S3 explanation #5

Open
atuldeshpande opened this issue Oct 9, 2020 · 2 comments

@atuldeshpande

We used dyngen to evaluate SINGE and other methods for identifying regulatory networks from dyngen data, and we have a few questions regarding Figure S3.
Unfortunately, we weren't able to run the scripts in this repository locally. Can you please clarify what the baseline precision (number of true edges / total number of possible edges) of the ground-truth network in Figure S3 is? Also, could you explain what the multiple points in the plots for each method correspond to? Are those for different hyperparameter settings?

@rcannood
Member

rcannood commented Oct 12, 2020

Hey Atul!

setup

Unfortunately, we weren't able to run the scripts in this repository locally.

Could you let me know what problems you are running into when setting up the dyngen manuscript codebase?

Normally it should just be a matter of cloning dyngen_manuscript and installing it with devtools (note that devtools::install() installs the package in the current working directory, so run it from inside the cloned repository):

git clone https://github.com/dynverse/dyngen_manuscript.git
cd dyngen_manuscript
Rscript -e 'devtools::install()'
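
Just a suggestion, not something from the repository's README: if cloning is inconvenient, installing straight from GitHub with remotes should also work, assuming the repository root is a standard R package:

# Hedged alternative: install directly from GitHub without cloning first.
# Assumes dynverse/dyngen_manuscript is a plain R package at the repo root.
Rscript -e 'remotes::install_github("dynverse/dyngen_manuscript")'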

points in plot

Also, could you explain what the multiple points in the plots for each method correspond to? Are those for different hyperparameter settings?

If you want, you can check out the results yourself. Once you open the RStudio project (dyngen_analysis.Rproj), you can load the evaluation results with the following code.

library(tidyverse)
library(dyngen.manuscript)

# point the analysis helper at the network-inference use case
exp <- start_analysis("usecase_network_inference")

# load the precomputed evaluation results
out <- exp$result("scores.rds")

# per-cell scores for every dataset and every evaluated method
out$aucs

out$aucs contains the scores for each individual cell, for each of the 14 datasets (with 3 replicates each) and for each evaluated method. Each point in Figure S3 corresponds to one row in out$summ, which contains the scores averaged across all cells, again for each of the different datasets and methods. For this analysis, no changes were made to the hyperparameters; every method was run with its defaults.
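
If you want to recreate something like Figure S3 from those summaries, here's a rough sketch (not code from the repository; the column names method and auroc are guesses, so check colnames(out$summ) for the actual names):

# Sketch: one jittered point per dataset (x replicate), per method,
# similar in spirit to Figure S3. Column names are assumptions.
out$summ %>%
  ggplot(aes(x = method, y = auroc)) +
  geom_jitter(width = 0.1) +
  theme_bw()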

baseline scores

Can you please clarify what the baseline precision (number of true edges / total number of possible edges) of the ground-truth network in Figure S3 is?

The baseline mean AUROC given a random prediction is 0.5 (by definition). The baseline AUPR equals the density of the ground-truth network, i.e. the fraction of true edges among all possible edges; for the AUPR it should be around 0.01, but this will vary from dataset to dataset. Would you like me to check out the values for this?
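
In the meantime, here's a rough way to compute that baseline yourself. The objects true_net and genes below are just placeholders for a dataset's ground-truth edge list and gene names, not objects from the repository:

# Sketch: baseline AUPR = density of the ground-truth network,
# i.e. number of true edges / number of possible directed edges.
# `true_net` (data frame of regulator-target pairs) and `genes`
# (character vector of gene names) are placeholder inputs.
n_genes    <- length(genes)
n_possible <- n_genes * (n_genes - 1)   # directed edges, no self-loops
baseline_aupr <- nrow(true_net) / n_possible
baseline_aupr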

Kind regards,
Robrecht

@atuldeshpande
Author

atuldeshpande commented Oct 16, 2020

For the AUPR it should be around 0.01, but this will vary from dataset to dataset. Would you like me to check out the values for this?

Yes, that would be great, thanks!
