handles end_value and row_order in sql_database #388
Merged
Conversation
* enable testing
* bump dlt version
* pushes correct gdrive path
* fixes formatting, removes az logs

Co-authored-by: Marcin Rudolf <[email protected]>
* skip facebook tests
* skip matomo tests
* skip personio tests
* replace postgres with duckdb
* return kafka, skip strapi
* Close queue
* Add requirements.txt
* Remove redundant config option
* Add revised README
* Make api simpler
* Add batching of results
* Add logging and batch size configuration
* Add pytest-mock and scrapy
* Close queue when exiting
* Check if queue close is called
* Log number of batches
* Fix linting issues
* Fix linting issues
* Mark scrapy source
* Fix linting issue
* Format code
* Yield!
* Adjust tests
* Add pytest-twisted
* Add twisted to scrapy dependencies
* Add twisted to dev dependencies
* Add review comments
* Add more checks and do not exit when queue is empty
* Create QueueClosedError and handle in listener to exit loop
* Simplify code
* Stop crawling if queue is closed
* Fix linting issues
* Fix linting issues
* Adjust tests and disable telnet server for scrapy
* Remove pytest-twisted
* Refactor scrapy item pipeline
* Eliminate custom spider
* Use pytest.mark.forked to run tests for ALL_DESTINATIONS
* Add pytest-forked
* Update lockfile
* Use scrapy signals
* Hide batching and retrieving logic inside queue
* Add more types
* Extend default scrapy settings
* Extract pipeline and scrapy runners
* Simplify helpers code
* Cleanup code
* Add start_urls_file configuration option
* Sync scrapy log level with dlt log level
* Expose simple scraping pipeline runner
* Adjust config file
* Connect signals in ScrapyRunner.init
* Register source and do cleanups
* Better scrapy setting passing and minor cleanups
* Remove redundant code comments
* Call engine_stopped callback in finally block
* Add more docstrings related to runners
* Adjust batch size
* Fix queue batching bugs
* Pass crawler instance to item_scraped callback
* Add advanced example to pipeline code
* Access settings override for scrapy
* Rewrite tests
* Small readme update for bing webmaster
* Adjust queue read timeout
* Extract test utils for scraping source
* Add stream generator to queue to handle generator exit exception
* Extract signal registering and tearing down as context manager
* Adjust and cleanup example pipeline source file
* Cleanup scraping helpers
* Adjust tests for scraping pipeline
* Add callback access to scraping resource
* Update readme
* Cleanup code
* Import ParamSpec from typing extensions
* Fix linting issues
* Fix linting issues
* Set encoding when opening the file with urls
* Adjust typing for scraping testing utils
* Use proper Union syntax
* Adjust mock patch module path for scraping tests
* Use latest dlt version
* Adjust mock patch module path for scraping tests
* Adjust tests and mark ones to skip
* Cleanup tests and utils for scraping source
* Re-use spy on queue.close calls
* Use append write_disposition by default for scraping source
* Update test skip reason
* Stop crawler manually
* Return self from __call__
* Check if crawler.stop is actually called
* Check if crawling has already been stopped
* Test to verify resource name generation and override
* Adjust resource name selection
* Add more docstrings and update readme
* Update readme
* Add scrapy configuration in example pipeline
* Shutdown twisted reactor after module tests
* Use simple run_pipeline
* Close the queue after timeout
* Rewrite a comment and use break instead of return in while loop
* Update comments
* Mock queue with alternative implementation
* Adjust mock patch path
* Add logging when scrapy stops and re-arrange code actions
* Stop crawler in on_engine_stopped
* Call on_engine_stopped from on_item_scraped if the queue is closed
* Skip test
* Rename new-verified-source.md to build-new-verified-source.md
* Rename source-request.md to request-new-source.md
* Update request-new-source.md
* fixes references

Co-authored-by: Marcin Rudolf <[email protected]>
Tell us what you do here

Handles two incremental arguments in the `sql_database` source:

- `end_value` to select the upper bound in the SQL WHERE clause
- `row_order` to order rows by the incremental cursor column
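As an illustration, here is a minimal sketch of how these arguments could be used together with the `sql_table` resource from this source and `dlt.sources.incremental`. The `chat_message` table, its `updated_at` cursor column, and the date bounds are hypothetical, and database credentials are assumed to be resolved from `secrets.toml` or environment variables:

```python
import dlt
from datetime import datetime, timezone

from sql_database import sql_table  # resource from this verified source

# Hypothetical table and cursor column. With end_value set, the load is
# bounded on both sides, roughly:
#   WHERE updated_at >= :initial_value AND updated_at < :end_value
# row_order="asc" additionally adds an ORDER BY on the cursor column.
messages = sql_table(
    table="chat_message",
    incremental=dlt.sources.incremental(
        "updated_at",
        initial_value=datetime(2024, 1, 1, tzinfo=timezone.utc),
        end_value=datetime(2024, 2, 1, tzinfo=timezone.utc),
        row_order="asc",
    ),
)

pipeline = dlt.pipeline(pipeline_name="sql_to_duckdb", destination="duckdb")
print(pipeline.run(messages))
```

Because both bounds are fixed, a run like this suits backfills: re-running it selects the same window instead of advancing from the last saved incremental state.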