
Postgres replication #392

Merged · 44 commits · May 2, 2024

Commits
cb58fbc
WIP
Feb 25, 2024
d218a3a
WIP
Mar 3, 2024
945da6d
Merge branch 'master' of https://github.com/dlt-hub/verified-sources …
Mar 3, 2024
fa9a4c1
move config to correct position
Mar 4, 2024
4cdf823
extend SQLAlchemy type mapping
Mar 12, 2024
8808812
add initial support for postgres replication
Mar 12, 2024
1914acf
add credentials instruction
Mar 12, 2024
36739ec
Merge branch 'master' of https://github.com/dlt-hub/verified-sources …
Mar 12, 2024
cc6a11d
undo adding secret
Mar 12, 2024
f815361
add module docstring
Mar 12, 2024
a318fee
use from import to prevent AttributeError when running test_dlt_init.py
Mar 13, 2024
8aed399
enable multiple tables per publication
Mar 15, 2024
9fc3c39
add support for schema replication
Mar 16, 2024
8c2f905
add support for unmapped data types
Mar 16, 2024
a0af605
add test for init_replication
Mar 17, 2024
051830c
update docstrings
Mar 17, 2024
656989a
return resource instead of single-element list
Mar 17, 2024
d014645
add example pipeline
Mar 18, 2024
269422e
add more example pipelines
Mar 18, 2024
c674f24
add nullability hints
Mar 18, 2024
a919c82
add README
Mar 18, 2024
57b5e1e
add sql_database dependency instruction
Mar 19, 2024
5636e07
batch data items per table and yield hints only once
Mar 22, 2024
2713464
postpone replication column hints to preserve order
Mar 22, 2024
eec75f0
refactor to use resource decorator
Mar 22, 2024
aae3754
Merge branch 'master' of https://github.com/dlt-hub/verified-sources …
Mar 22, 2024
493147d
add support for table schema changes
Mar 23, 2024
7bd211b
optimize message type detection for performance
Mar 25, 2024
48442ba
upgrade dlt to 0.4.8
Apr 9, 2024
d303efd
Merge branch 'master' of https://github.com/dlt-hub/verified-sources …
Apr 9, 2024
524945f
enables to run tests in parallel
rudolfix Apr 14, 2024
ab005a1
Merge branch 'master' into 933-postgres-replication
rudolfix Apr 14, 2024
c596180
fixes format
rudolfix Apr 14, 2024
34610b6
make test more specific to handle postgres version differences
Apr 15, 2024
7a07045
add postgres server version requirement for schema replication functi…
Apr 15, 2024
61712b4
removed whitespace
Apr 15, 2024
fd1d973
explicitly fetch credentials from pg_replication source
Apr 22, 2024
8bc4da3
add superuser check
Apr 22, 2024
796c980
Merge branch 'master' into 933-postgres-replication
rudolfix May 1, 2024
77fb1dd
updates lock file
rudolfix May 1, 2024
8a1d910
use psycopg2-binary instead of psycopg2
May 2, 2024
b0d2abb
use destination-specific escape identifier
May 2, 2024
f63ceff
replace string literal with int literal
May 2, 2024
22758fe
include pypgoutput decoders in library
May 2, 2024
50 changes: 49 additions & 1 deletion poetry.lock

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions pyproject.toml
@@ -31,10 +31,12 @@ black = "^23.3.0"
pypdf2 = "^3.0.1"
greenlet = "<3.0.0"
confluent-kafka = "^2.3.0"
types-psycopg2 = "^2.9.0"

[tool.poetry.group.sql_database.dependencies]
sqlalchemy = ">=1.4"
pymysql = "^1.0.3"
pypgoutput = "0.0.3"

[tool.poetry.group.google_sheets.dependencies]
google-api-python-client = "^2.78.0"
5 changes: 4 additions & 1 deletion sources/.dlt/example.secrets.toml
@@ -16,7 +16,10 @@ location = "US"
### Sources
[sources]

## local postgres as source
sql_database.credentials="postgresql://loader:loader@localhost:5432/dlt_data"

## chess pipeline
# the section below defines secrets for "chess_dlt_config_example" source in chess/__init__.py
[sources.chess]
secret_str="secret string" # a string secret
13 changes: 13 additions & 0 deletions sources/sql_database/pg_replication/README.md
@@ -0,0 +1,13 @@
## Prerequisites

The Postgres user needs to have the `LOGIN` and `REPLICATION` attributes assigned:

```sql
CREATE ROLE replication_user WITH LOGIN REPLICATION;
```

It also needs `CREATE` privilege on the database:

```sql
GRANT CREATE ON DATABASE dlt_data TO replication_user;
```
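To confirm the role is set up correctly, the attributes and privilege can be checked from `psql` (role and database names follow the examples above; both queries should report `t` when correctly configured):

```sql
-- check the LOGIN and REPLICATION attributes
SELECT rolcanlogin, rolreplication FROM pg_roles WHERE rolname = 'replication_user';

-- check the CREATE privilege on the database
SELECT has_database_privilege('replication_user', 'dlt_data', 'CREATE');
```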
64 changes: 64 additions & 0 deletions sources/sql_database/pg_replication/__init__.py
@@ -0,0 +1,64 @@
from typing import Optional, Sequence

import dlt

from dlt.common.schema.typing import TColumnNames
from dlt.sources import DltResource
from dlt.sources.credentials import ConnectionStringCredentials

from .helpers import table_replication_items, TableChangesResourceConfiguration


@dlt.sources.config.with_config(
sections=("sources", "sql_database"),
spec=TableChangesResourceConfiguration,
)
def table_changes(
credentials: ConnectionStringCredentials = dlt.secrets.value,
table: str = dlt.config.value,
primary_key: TColumnNames = None,
include_columns: Optional[Sequence[str]] = dlt.config.value,
slot_name: str = dlt.config.value,
publication_name: str = dlt.config.value,
upto_lsn: Optional[int] = None,
) -> DltResource:
"""Returns a dlt resource that yields data items for changes in a postgres table.

Relies on a dedicated replication slot and publication that publishes DML
operations (i.e. `insert`, `update`, and/or `delete`) for the table (helper
method `init_table_replication` can be used to set this up).
Uses `merge` write disposition to merge changes into destination table(s).

Args:
credentials (ConnectionStringCredentials): Postgres database credentials.
table (str): Name of the table that is replicated.
primary_key (TColumnNames): Names of one or multiple columns serving as
primary key on the table. Used to deduplicate data items in the `merge`
operation.
include_columns (Optional[Sequence[str]]): Optional sequence of names of
columns to include in the generated data items. Any columns not in the
sequence are excluded. If not provided, all columns are included.
slot_name (str): Name of the replication slot to consume replication
messages from. Each table is expected to have a dedicated slot.
publication_name (str): Name of the publication that published DML operations
for the table. Each table is expected to have a dedicated publication.
upto_lsn (Optional[int]): Optional integer LSN value up to which the replication
slot is consumed. If not provided, all messages in the slot are consumed,
ensuring all new changes in the source table are included.

Returns:
DltResource that yields data items for changes in the postgres table.
"""
return dlt.resource(
table_replication_items,
name=table,
write_disposition="merge",
Contributor commented:

> for tables with INSERT only (i.e. logs) append will also work?

Collaborator (author) replied:

> Yes, should work, didn't think of that. I can fetch details about the publication using the pg_publication catalog and use append if it only publishes insert. Will be more efficient than merge.
primary_key=primary_key,
rudolfix marked this conversation as resolved.
columns={"lsn": {"dedup_sort": "desc"}},
)(
credentials=credentials,
slot_name=slot_name,
publication_name=publication_name,
include_columns=include_columns,
upto_lsn=upto_lsn,
)
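The `{"lsn": {"dedup_sort": "desc"}}` column hint means that when several change records share a primary key, the merge keeps the record with the highest LSN. A minimal standalone sketch of that semantics (illustrative only, not dlt's actual merge implementation; all names are made up):

```python
def dedup_by_lsn(records, primary_key):
    """Keep only the latest (highest-lsn) change record per primary key,
    mimicking a merge with dedup_sort="desc" on the "lsn" column."""
    latest = {}
    for rec in records:
        key = rec[primary_key]
        # a record replaces an earlier one only if its lsn is higher
        if key not in latest or rec["lsn"] > latest[key]["lsn"]:
            latest[key] = rec
    # return surviving records ordered by lsn
    return sorted(latest.values(), key=lambda r: r["lsn"])

changes = [
    {"id": 1, "val": "a", "lsn": 10},
    {"id": 1, "val": "b", "lsn": 30},  # later update to the same row
    {"id": 2, "val": "c", "lsn": 20},
]
deduped = dedup_by_lsn(changes, "id")
```

Here `deduped` keeps one record per `id`, with the later update (`lsn` 30) winning over the earlier insert for `id` 1.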