pygetpapers
getpapers (https://github.com/petermr/openVirus/wiki/getpapers), the primary scraper that we've been using so far, is written in JavaScript and requires Node.js to run. Driven by the problems of maintaining and extending the Node-based getpapers, we've decided to re-write the whole thing in Python and call it pygetpapers.
- PMR
- Ayush
- Dheeraj
- Shweata
PMR: This project is well suited to a modular approach, both in content and functionality. For example, each target repo is a subproject, and as long as the framework is well designed it should be possible to add repos independently. An important aspect (missing at the moment) is "how to add a new repo"; a sketch of one possible interface follows.
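A minimal sketch of how each repo could sit behind a common interface so that new repos can be added without touching the framework. Nothing here is a committed design; all class and function names are hypothetical.

```python
# Hypothetical sketch: each repository (EPMC, Crossref, ...) as an
# independent module implementing one shared interface.
from abc import ABC, abstractmethod


class Repository(ABC):
    """Base class that every repo subproject would implement."""

    name = "base"

    @abstractmethod
    def build_query(self, user_query: str) -> str:
        """Translate the user's query into the repo's native query syntax."""

    @abstractmethod
    def download(self, query: str, limit: int) -> list:
        """Fetch up to `limit` hits and return their metadata records."""


REPOSITORIES = {}


def register(repo_class):
    """Adding a new repo = write one subclass and register it."""
    REPOSITORIES[repo_class.name] = repo_class
    return repo_class
```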
These are much too general. Who contributed them? Please expand:
- What does this mean?
- Which date?
- This is an EPMC option (I think). What is the current query format? We will need to customise this for the user.
- This is too general.
- Which raw files? Does EPMC have an interface? Do we want these files? Why? What are they used for?
- Why? This is not part of getpapers. It is already done by ami.
- Out of scope. This is ami-search.
getpapers had no default for the number of hits (the -k option). This often resulted in downloading the whole database. High priority.
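A minimal sketch of how a default hit count could be enforced at the command line. The flag names echo getpapers' -k, but the option names and the default value are assumptions, not a committed interface.

```python
# Sketch only: a built-in default means omitting -k can never pull the whole database.
import argparse

parser = argparse.ArgumentParser(prog="pygetpapers")
parser.add_argument("-q", "--query", required=True, help="search query")
parser.add_argument(
    "-k", "--limit",
    type=int,
    default=100,  # hypothetical default number of hits
    help="maximum number of hits to download (default: 100)",
)

args = parser.parse_args(["-q", "lantana"])  # example invocation
print(args.limit)  # -> 100 when -k is not given
```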
The user should be able to set the number of hits per page; this wasn't explicit in getpapers. pygetpapers may also allow restarting failed searches. Low priority.
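A sketch of page-size control and resumable paging, assuming the Europe PMC REST search endpoint and its cursorMark-based paging; persisting the cursor between runs is what would allow a failed search to be restarted. The function name and defaults are illustrative.

```python
# Sketch: user-controlled page size plus a cursor that can be saved and reused.
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"


def fetch_pages(query, page_size=25, cursor="*", max_hits=100):
    """Download up to max_hits results, page_size at a time."""
    hits = []
    while len(hits) < max_hits:
        response = requests.get(
            EPMC_SEARCH,
            params={
                "query": query,
                "format": "json",
                "pageSize": page_size,
                "cursorMark": cursor,  # persist this value to resume a failed run
            },
        )
        response.raise_for_status()
        data = response.json()
        hits.extend(data.get("resultList", {}).get("result", []))
        next_cursor = data.get("nextCursorMark")
        if not next_cursor or next_cursor == cursor:
            break  # no further pages
        cursor = next_cursor
    return hits[:max_hits], cursor
```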
Query construction should handle brackets and quotes for the user, since these can be confusing and lead to errors. This will also be useful when querying with a list of terms. Medium priority.
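A small sketch of how pygetpapers could build the bracketed, quoted query from a plain list of terms so the user never juggles quotes and parentheses by hand; the function name and the OR default are illustrative only.

```python
# Sketch: turn a list of terms into one quoted, bracketed boolean query.
def terms_to_query(terms, operator="OR"):
    quoted = ['"{}"'.format(term.replace('"', "")) for term in terms]
    return "(" + " {} ".format(operator).join(quoted) + ")"


print(terms_to_query(["essential oil", "terpene", "lavandula"]))
# -> ("essential oil" OR "terpene" OR "lavandula")
```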