fix internal docusaurus links
sh-rp committed Dec 11, 2024
1 parent 9962669 commit 1fc9891
Showing 5 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/website/docs/dlt-ecosystem/destinations/duckdb.md
@@ -118,7 +118,7 @@ to disable tz adjustments.

## Destination configuration

-By default, a DuckDB database will be created in the current working directory with a name `<pipeline_name>.duckdb` (`chess.duckdb` in the example above). After loading, it is available in `read/write` mode via `with pipeline.sql_client() as con:`, which is a wrapper over `DuckDBPyConnection`. See [duckdb docs](https://duckdb.org/docs/api/python/overview#persistent-storage) for details. If you want to read data, use [datasets](../general-usage/dataset-access/dataset) instead of the sql client.
+By default, a DuckDB database will be created in the current working directory with a name `<pipeline_name>.duckdb` (`chess.duckdb` in the example above). After loading, it is available in `read/write` mode via `with pipeline.sql_client() as con:`, which is a wrapper over `DuckDBPyConnection`. See [duckdb docs](https://duckdb.org/docs/api/python/overview#persistent-storage) for details. If you want to read data, use [datasets](../../general-usage/dataset-access/dataset) instead of the sql client.
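
A minimal sketch of what working with that client can look like, assuming the `chess` pipeline from the example above and a hypothetical `players` table:

```py
import dlt

pipeline = dlt.pipeline(pipeline_name="chess", destination="duckdb")

# the context manager opens the connection and closes it afterwards
with pipeline.sql_client() as con:
    # `players` is a placeholder table name for illustration
    rows = con.execute_sql("SELECT COUNT(*) FROM players")
    print(rows)
```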

The `duckdb` credentials do not require any secret values. [You are free to pass the credentials and configuration explicitly](../../general-usage/destination.md#pass-explicit-credentials). For example:
```py
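# A hedged sketch only: the original snippet is collapsed in this diff view,
# so the exact example shown here is an assumption. `dlt.destinations.duckdb`
# is the destination factory; passing a database path sets where the file lands.
import dlt

pipeline = dlt.pipeline(
    pipeline_name="chess",
    destination=dlt.destinations.duckdb("files/chess.duckdb"),
)
```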
6 changes: 3 additions & 3 deletions docs/website/docs/dlt-ecosystem/transformations/index.md
@@ -15,9 +15,9 @@ If you'd like to transform your data after a pipeline load, you have 3 options a

If you need to preprocess some of your data before it is loaded, you can learn about strategies to:

-* [Rename columns](../general-usage/customising-pipelines/renaming_columns)
-* [Pseudonymize columns](../general-usage/customising-pipelines/pseudonymizing_columns)
-* [Remove columns](../general-usage/customising-pipelines/removing_columns)
+* [Rename columns](../../general-usage/customising-pipelines/renaming_columns)
+* [Pseudonymize columns](../../general-usage/customising-pipelines/pseudonymizing_columns)
+* [Remove columns](../../general-usage/customising-pipelines/removing_columns)

This is particularly useful if you are trying to remove PII or other sensitive data, want to drop columns that are not needed for your use case, or are using a destination that does not support certain data types in your source data.
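
As a hedged sketch of the pattern behind these pages, here is how a column might be dropped with `dlt`'s `add_map` hook before it reaches the destination (the `users` resource and its fields are invented for illustration):

```py
import dlt

@dlt.resource
def users():
    # invented example rows
    yield {"name": "alice", "email": "alice@example.com", "age": 30}
    yield {"name": "bob", "email": "bob@example.com", "age": 42}

def remove_email(row):
    # strip the sensitive column from every row before load
    row.pop("email", None)
    return row

pipeline = dlt.pipeline(pipeline_name="users_pipeline", destination="duckdb")
pipeline.run(users().add_map(remove_email))
```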

4 changes: 2 additions & 2 deletions docs/website/docs/dlt-ecosystem/transformations/python.md
@@ -6,12 +6,12 @@ keywords: [transform, pandas]

# Transforming data in python with dataframes or arrow tables

-You can transform your data in python using pandas dataframes or arrow tables. To get started, please read the [dataset docs](../general-usage/dataset-access/dataset).
+You can transform your data in python using pandas dataframes or arrow tables. To get started, please read the [dataset docs](../../general-usage/dataset-access/dataset).


## Interactively transforming your data in python

-Using the methods explained in the [dataset docs](../general-usage/dataset-access/dataset), you can fetch data from your destination into a dataframe or arrow table in your local python process and work with it interactively. This even works for filesystem destinations:
+Using the methods explained in the [dataset docs](../../general-usage/dataset-access/dataset), you can fetch data from your destination into a dataframe or arrow table in your local python process and work with it interactively. This even works for filesystem destinations:
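
A minimal sketch of that flow, assuming a previously loaded pipeline (the `github_pipeline` name is a stand-in) and the `issues` table mentioned below:

```py
import dlt

pipeline = dlt.pipeline(pipeline_name="github_pipeline", destination="duckdb")

# pull the `issues` table into a pandas dataframe and work with it locally
issues_df = pipeline.dataset()["issues"].df()
print(issues_df.head())

# or fetch it as a pyarrow table instead
issues_table = pipeline.dataset()["issues"].arrow()
```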


The example below reads GitHub reactions data from the `issues` table and
4 changes: 2 additions & 2 deletions docs/website/docs/dlt-ecosystem/transformations/sql.md
@@ -14,8 +14,8 @@ connection.

:::info
* This method will work for all sql destinations supported by `dlt`, but not for the filesystem destination.
-* Read the [sql client docs](../general-usage/dataset-access/dataset) for more information on how to access data with the sql client.
-* If you are simply trying to read data, you should use the powerful [dataset interface](../general-usage/dataset-access/dataset) instead.
+* Read the [sql client docs](../../general-usage/dataset-access/dataset) for more information on how to access data with the sql client.
+* If you are simply trying to read data, you should use the powerful [dataset interface](../../general-usage/dataset-access/dataset) instead.
:::
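
As a hedged sketch of the pattern this page covers, assuming a `chess` pipeline on duckdb and an invented `games` table:

```py
import dlt

pipeline = dlt.pipeline(pipeline_name="chess", destination="duckdb")

# run a post-load transformation directly on the destination
with pipeline.sql_client() as client:
    client.execute_sql(
        "CREATE OR REPLACE TABLE games_per_winner AS "
        "SELECT winner, COUNT(*) AS games_won FROM games GROUP BY winner"
    )
```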


2 changes: 1 addition & 1 deletion docs/website/docs/general-usage/dataset-access/dataset.md
@@ -299,7 +299,7 @@ other_pipeline = dlt.pipeline(pipeline_name="other_pipeline", destination="duckd
other_pipeline.run(limited_items_relation.iter_arrow(chunk_size=10_000), table_name="limited_items")
```

-Learn more about [transforming data in python with dataframes or arrow tables](../dlt-ecosystem/transformations/python).
+Learn more about [transforming data in python with dataframes or arrow tables](../../dlt-ecosystem/transformations/python).

### Using `ibis` to query the data

