
handles end_value and row_order in sql_database #388

Merged
15 commits merged into master from rfix/adds-end-date-sql-database on Mar 22, 2024

Conversation

@rudolfix (Contributor) commented Mar 3, 2024


  • supports end_value to select an upper bound in the SQL WHERE clause
  • this allows for backfill loads, e.g. on Airflow
  • orders the results according to the row_order incremental argument
  • to be merged once 0.4.6 (which adds row_order) is released
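In dlt, these bounds are presumably passed through the incremental argument (e.g. `dlt.sources.incremental(cursor_column, initial_value=..., end_value=..., row_order=...)`). The query shape this produces can be sketched with a hypothetical `build_select` helper; the actual source composes queries with SQLAlchemy, so this is only an illustration of the WHERE/ORDER BY logic, not the PR's code:

```python
def build_select(table, cursor_column, initial_value=None,
                 end_value=None, row_order=None):
    """Sketch: compose WHERE bounds and ORDER BY for an incremental load."""
    sql = f"SELECT * FROM {table}"
    clauses = []
    if initial_value is not None:
        # lower bound: rows at or after the last seen cursor value
        clauses.append(f"{cursor_column} >= :initial_value")
    if end_value is not None:
        # upper bound: makes a closed backfill window possible,
        # so an Airflow DAG can load one interval per run
        clauses.append(f"{cursor_column} < :end_value")
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    if row_order in ("asc", "desc"):
        sql += f" ORDER BY {cursor_column} {row_order.upper()}"
    return sql

# a backfill window for one scheduled run, ordered ascending
query = build_select("events", "updated_at",
                     initial_value="2024-01-01",
                     end_value="2024-02-01",
                     row_order="asc")
# → "SELECT * FROM events WHERE updated_at >= :initial_value
#    AND updated_at < :end_value ORDER BY updated_at ASC"
```

With only `initial_value` set, the query keeps an open upper end (normal incremental loading); adding `end_value` closes the window, which is what makes deterministic, repeatable backfill runs possible.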

@rudolfix rudolfix self-assigned this Mar 3, 2024
rudolfix and others added 9 commits March 6, 2024 08:52
* enable testing

* bump dlt version

* pushes correct gdrive path

* fixes formatting, removes az logs

---------

Co-authored-by: Marcin Rudolf <[email protected]>
* skip facebook tests

* skip matomo tests

* skip personio tests

* replace postgres with duckdb

* return kafka, skip strapi
Close queue

Add requirements.txt

Remove redundant config option

Add revised README

Make api simpler

* Add batching of results

Add logging and batch size configuration


* Add pytest-mock and scrapy

Close queue when exiting

Check if queue close is called

Log number of batches

Fix linting issues

Fix linting issues

Mark scrapy source

Fix linting issue

Format code

Yield!

* Adjust tests

* Add pytest-twisted

* Add twisted to scrapy dependencies

* Add twisted to dev dependencies

* Add review comments

* Add more checks and do not exit when queue is empty

* Create QueueClosedError and handle in listener to exit loop

* Simplify code

* Stop crawling if queue is closed

* Fix linting issues

* Fix linting issues

* Adjust tests and disable telnet server for scrapy

* Remove pytest-twisted

* Refactor scrapy item pipeline

* Eliminate custom spider

* Use pytest.mark.forked to run tests for ALL_DESTINATIONS

* Add pytest-forked

* Update lockfile

* Use scrapy signals

* Hide batching and retrieving logic inside queue

* Add more types

* Extend default scrapy settings

* Extract pipeline and scrapy runners

* Simplify helpers code

* Cleanup code

* Add start_urls_file configuration option

* Sync scrapy log level with dlt log level

* Expose simple scraping pipeline runner

* Adjust config file

* Connect signals in ScrapyRunner.init

* Register source and do cleanups

* Better scrapy setting passing and minor cleanups

* Remove redundant code comments

* Call engine_stopped callback in finally block

* Add more docstrings related to runners

* Adjust batch size

* Fix queue batching bugs

* Pass crawler instance to item_scraped callback

* Add advanced example to pipeline code

* Access settings override for scrapy

* Rewrite tests

* Small readme update for Bing Webmaster

* Adjust queue read timeout

* Extract test utils for scraping source

* Add stream generator to queue to handle generator exit exception

* Extract signal registering and tearing down as context manager

* Adjust and cleanup example pipeline source file

* Cleanup scraping helpers

* Adjust tests for scraping pipeline

* Add callback access to scraping resource

* Update readme

* Cleanup code

* Import ParamSpec from typing extensions

* Fix linting issues

* Fix linting issues

* Set encoding when opening the file with urls

* Adjust typing for scraping testing utils

* Use proper Union syntax

* Adjust mock patch module path for scraping tests

* Use latest dlt version

* Adjust mock patch module path for scraping tests

* Adjust tests and mark ones to skip

* Cleanup tests and utils for scraping source

* Re-use spy on queue.close calls

* Use append write_disposition by default for scraping source

* Update test skip reason

* Stop crawler manually

* Return self from __call__

* Check if crawler.stop is actually called

* Check if crawling has already been stopped

* Test to verify resource name generation and override

* Adjust resource name selection

* Add more docstrings and update readme

* Update readme

* Add scrapy configuration in example pipeline

* Shutdown twisted reactor after module tests

* Use simple run_pipeline

* Close the queue after timeout

* Rewrite a comment and use break instead of return in while loop

* Update comments

* Mock queue with alternative implementation

* Adjust mock patch path

* Add logging when scrapy stops and re-arrange code actions

* Stop crawler in on_engine_stopped

* Call on_engine_stopped from on_item_scraped if the queue is closed

* Skip test
* Rename new-verified-source.md to build-new-verified-source.md

* Rename source-request.md to request-new-source.md

* Update request-new-source.md

* fixes references

---------

Co-authored-by: Marcin Rudolf <[email protected]>
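Several of the scrapy commits above revolve around closing the result queue cleanly: a `QueueClosedError` is raised and handled in the listener so its loop exits instead of blocking forever. A minimal sketch of that pattern (all names other than `QueueClosedError` are hypothetical, and this is not the source's actual implementation) might look like:

```python
import queue
import threading

class QueueClosedError(Exception):
    """Raised when reading from a queue that has been closed."""

class ClosableQueue(queue.Queue):
    """A queue the producer can close; readers see QueueClosedError once drained."""
    _SENTINEL = object()

    def close(self):
        # enqueue a sentinel so blocked readers wake up and stop
        self.put(self._SENTINEL)

    def get_item(self):
        item = self.get()
        if item is self._SENTINEL:
            raise QueueClosedError
        return item

def listen(q, results):
    # listener loop: exit cleanly when the producer closes the queue
    while True:
        try:
            results.append(q.get_item())
        except QueueClosedError:
            break

q = ClosableQueue()
results = []
t = threading.Thread(target=listen, args=(q, results))
t.start()
for i in range(3):
    q.put(i)      # producer (e.g. scrapy's item_scraped callback) pushes items
q.close()         # signals end of crawl instead of abandoning the reader
t.join()
# results == [0, 1, 2]
```

Raising a dedicated exception (rather than returning a magic value) lets the listener distinguish "queue closed, stop crawling" from an ordinary empty queue, which matches the "do not exit when queue is empty" commit above.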
@rudolfix rudolfix marked this pull request as ready for review March 6, 2024 07:57
@rudolfix rudolfix merged commit 62d0330 into master Mar 22, 2024
14 checks passed
@rudolfix rudolfix deleted the rfix/adds-end-date-sql-database branch March 22, 2024 08:00
6 participants