From 21c36f8f4a5aea7b944345de01bffa4bf24c87df Mon Sep 17 00:00:00 2001 From: Nick Woolmer <29717167+nwoolmer@users.noreply.github.com> Date: Mon, 7 Oct 2024 19:46:10 +0100 Subject: [PATCH 1/2] add new functions from recent releases (#60) Adds functions from 8.1.2 and some that were added earlier. --- reference/function/finance.md | 130 ++++++++++++++++++++++++++++++++-- reference/function/numeric.md | 50 +++++++++++++ 2 files changed, 176 insertions(+), 4 deletions(-) diff --git a/reference/function/finance.md b/reference/function/finance.md index 07739faf..c62207f2 100644 --- a/reference/function/finance.md +++ b/reference/function/finance.md @@ -28,9 +28,19 @@ Let's take the below order book as an example. A _buy market order_ with the size of 50 would wipe out the first two price levels of the _Ask_ side of the book, and would also trade on the third level. -The full price of the trade: `14 * $14.50 + 16 * $14.60 + (50 - 14 - 16) * $14.80 = $732.6` -The average price of the instrument in this trade: `$732.6 / 50 = $14.652` +The full price of the trade: +$$ +14 \cdot \$14.50 + 16 \cdot \$14.60 + (50 - 14 - 16) \cdot \$14.80 = \$732.60 +$$ + + + +The average price of the instrument in this trade: + +$$ +\$732.60 / 50 = \$14.652 +$$ This average trade price is the output of the function when executed with the parameters taken from the above example: @@ -115,11 +125,80 @@ SELECT ts, L2PRICE(100, askSize1, ask1, askSize2, ask2, askSize3, ask3) | 2024-05-22T09:40:15.175000Z | 0.565999999999 | | 2024-05-22T09:40:15.522000Z | 0.483 | + +## mid + +`mid(bid, ask)` - calculates the midpoint of a bidding price and asking price. + +Returns null if either argument is NaN or null. + +### Parameters + +- `bid` is any numeric bidding price value. +- `ask` is any numeric asking price value. + +### Return value + +Return value type is `double`. 
+ +### Examples + +```questdb-sql +SELECT mid(1.5760, 1.5763) +``` + +| mid | +|:-------------| +| 1.57615 | + + + +## spread_bps + +`spread_bps(bid, ask)` - calculates the quoted bid-ask spread, based on the highest bidding price, +and the lowest asking price. + +The result is provided in basis points, and the calculation is: + +$$ +\frac +{\text{spread}\left(\text{bid}, \text{ask}\right)} +{\text{mid}\left(\text{bid}, \text{ask}\right)} +\cdot +10\,000 +$$ + + +### Parameters + +- `bid` is any numeric bidding price value. +- `ask` is any numeric asking price value. + +### Return value + +Return value type is `double`. + +### Examples + +```questdb-sql +SELECT spread_bps(1.5760, 1.5763) +``` + +| spread_bps | +|:---------------| +| 1.903372140976 | + ## vwap `vwap(price, quantity)` - Calculates the volume-weighted average price (VWAP) -based on the given price and quantity columns. This is a handy replacement for -the `sum(price * quantity) / sum(quantity)` expression. +based on the given price and quantity columns. This is defined by the following formula: + +$$ +\text{vwap} = +\frac +{\text{sum}\left(\text{price} \cdot \text{quantity}\right)} +{\text{sum}\left(\text{quantity}\right)} +$$ ### Parameters @@ -140,3 +219,46 @@ FROM (SELECT x FROM long_sequence(100)); | vwap | | :--- | | 67 | + + +## wmid + +`wmid(bidSize, bidPrice, askPrice, askSize)` - calculates the weighted mid-price +for a sized bid/ask pair. + +It is calculated with these formulae: + +$$ +\text{imbalance} = +\frac +{ \text{bidSize} } +{ \left( \text{bidSize} + \text{askSize} \right) } +$$ + +$$ +\text{wmid} = \text{askPrice} \cdot \text{imbalance} ++ \text{bidPrice} +\cdot \left( 1 - \text{imbalance} \right) +$$ + +### Parameters + +- `bidSize` is any numeric value representing the size of the bid offer. +- `bidPrice` is any numeric value representing the bidding price. +- `askPrice` is any numeric value representing the asking price. 
+- `askSize` is any numeric value representing the size of the ask offer.
+
+### Return value
+
+Return value type is `double`.
+
+### Examples
+
+```questdb-sql
+SELECT wmid(100, 5, 6, 100)
+```
+
+| wmid |
+|:-----|
+| 5.5 |
+
diff --git a/reference/function/numeric.md b/reference/function/numeric.md
index 9e129e3e..00040edc 100644
--- a/reference/function/numeric.md
+++ b/reference/function/numeric.md
@@ -106,6 +106,56 @@ SELECT floor(15.75) as RoundedDown;
 | --------- |
 | 15 |
 
+
+## greatest
+
+`greatest(args...)` returns the largest entry in a series of numbers.
+
+**Arguments:**
+
+- `args...` is a variable-size list of `long` or `double` values.
+
+**Return value:**
+
+Return value type is `double` or `long`.
+
+**Examples:**
+
+```questdb-sql
+SELECT greatest(11, 3, 8, 15)
+```
+
+| greatest |
+|----------|
+| 15 |
+
+
+
+## least
+
+`least(args...)` returns the smallest entry in a series of numbers.
+
+**Arguments:**
+
+- `args...` is a variable-size list of `long` or `double` values.
+
+**Return value:**
+
+Return value type is `double` or `long`.
+
+**Examples:**
+
+```questdb-sql
+SELECT least(11, 3, 8, 15)
+```
+
+| least |
+|-------|
+| 3 |
+
+
+
+
 ## ln
 
 `ln(value)` returns the natural logarithm (**log*e***) of a given number.
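Because the finance functions documented above are plain arithmetic, the formulas and example outputs can be sanity-checked outside the database. Below is an illustrative Python sketch (not part of this patch; the function names simply mirror the SQL ones, and `l2price` here takes the book as an explicit list rather than the SQL function's flattened argument list):

```python
# Python sketch of the formulas documented above (illustrative only;
# QuestDB implements these natively as SQL functions).

def l2price(target_quantity, levels):
    """Average fill price of a market order walking the book.

    levels is a list of (size, price) tuples, best price first.
    Returns None when the book cannot fill the full quantity.
    """
    remaining = target_quantity
    cost = 0.0
    for size, price in levels:
        take = min(remaining, size)
        cost += take * price
        remaining -= take
        if remaining == 0:
            return cost / target_quantity
    return None  # not enough liquidity on the book

def mid(bid, ask):
    """Midpoint of the bid and ask prices."""
    return (bid + ask) / 2

def spread_bps(bid, ask):
    """Quoted bid-ask spread in basis points: spread / mid * 10,000."""
    return (ask - bid) / mid(bid, ask) * 10_000

def vwap(prices, quantities):
    """Volume-weighted average price: sum(price * quantity) / sum(quantity)."""
    return sum(p * q for p, q in zip(prices, quantities)) / sum(quantities)

def wmid(bid_size, bid_price, ask_price, ask_size):
    """Weighted mid-price: askPrice * imbalance + bidPrice * (1 - imbalance)."""
    imbalance = bid_size / (bid_size + ask_size)
    return ask_price * imbalance + bid_price * (1 - imbalance)

# Worked examples from the sections above:
print(mid(1.5760, 1.5763))                                   # ~1.57615
print(spread_bps(1.5760, 1.5763))                            # ~1.9034
print(vwap(range(1, 101), range(1, 101)))                    # 67.0
print(wmid(100, 5, 6, 100))                                  # 5.5
print(l2price(50, [(14, 14.50), (16, 14.60), (23, 14.80)]))  # ~14.652
```

Running this reproduces, up to floating-point rounding, the values in the example tables and the order-book walk-through above.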
From 6858c83e4b939b6e783ec961fa5240bbe64eb9ad Mon Sep 17 00:00:00 2001 From: goodroot <9484709+goodroot@users.noreply.github.com> Date: Mon, 7 Oct 2024 13:33:31 -0700 Subject: [PATCH 2/2] Add new interlinks (#61) * improves interlink coverage --- concept/root-directory-structure.md | 2 +- configuration.md | 44 +++++++-------- deployment/aws-official-ami.md | 14 ++--- deployment/capacity-planning.md | 4 +- deployment/digitalocean.md | 7 ++- deployment/docker.md | 54 +++++++++---------- deployment/google-cloud-platform.md | 4 +- deployment/systemd.md | 2 +- guides/active-directory-pingfederate.md | 14 ++--- guides/create-database.md | 2 +- guides/enterprise-quick-start.md | 10 ++-- guides/import-csv.md | 2 +- guides/replication-tuning.md | 2 +- ingestion-overview.md | 6 +-- operations/data-retention.md | 5 +- operations/design-for-performance.md | 2 +- .../openid-connect-oidc-integration.mdx | 28 +++++----- operations/rbac.md | 22 ++++---- operations/updating-data.md | 2 +- .../_options-not-windows.partial.mdx | 2 +- .../_options-windows.partial.mdx | 2 +- quick-start.mdx | 2 +- reference/api/rest.md | 3 +- reference/command-line-options.md | 2 +- reference/function/aggregation.md | 8 +-- reference/sql/cancel-query.md | 2 +- reference/sql/overview.md | 42 +++++++-------- reference/sql/sample-by.md | 2 +- reference/sql/select.md | 2 +- reference/sql/update.md | 2 +- sidebars.js | 2 +- .../{data-bento.md => databento.md} | 4 +- third-party-tools/flink.md | 2 +- third-party-tools/grafana.md | 4 +- third-party-tools/kafka/questdb-kafka.md | 6 +++ third-party-tools/mindsdb.md | 9 ++-- third-party-tools/overview.md | 6 +-- third-party-tools/prometheus.md | 12 ++--- third-party-tools/spark.md | 14 ++--- third-party-tools/telegraf.md | 4 +- web-console.md | 7 ++- 41 files changed, 183 insertions(+), 182 deletions(-) rename third-party-tools/{data-bento.md => databento.md} (92%) diff --git a/concept/root-directory-structure.md b/concept/root-directory-structure.md index 
d97c5c6f..2ea388fb 100644 --- a/concept/root-directory-structure.md +++ b/concept/root-directory-structure.md @@ -239,7 +239,7 @@ Log files look like this: ## `public` directory -Contains the web files for the Web Console: +Contains the web files for the [Web Console](/docs/web-console/): ```filestructure └── public diff --git a/configuration.md b/configuration.md index 9265c674..ae2b022e 100644 --- a/configuration.md +++ b/configuration.md @@ -4,17 +4,17 @@ description: Server configuration keys reference documentation. --- import { ConfigTable } from "@theme/ConfigTable" -import sharedWorkerConfig from "./configuration-utils/_shared-worker.config.json" -import httpConfig from "./configuration-utils/_http.config.json" -import cairoConfig from "./configuration-utils/_cairo.config.json" -import parallelSqlConfig from "./configuration-utils/_parallel-sql.config.json" -import walConfig from "./configuration-utils/_wal.config.json" -import csvImportConfig from "./configuration-utils/_csv-import.config.json" -import postgresConfig from "./configuration-utils/_postgres.config.json" -import tcpConfig from "./configuration-utils/_tcp.config.json" -import udpConfig from "./configuration-utils/_udp.config.json" -import replicationConfig from "./configuration-utils/_replication.config.json" -import oidcConfig from "./configuration-utils/_oidc.config.json" +import sharedWorkerConfig from "./configuration-utils/\_shared-worker.config.json" +import httpConfig from "./configuration-utils/\_http.config.json" +import cairoConfig from "./configuration-utils/\_cairo.config.json" +import parallelSqlConfig from "./configuration-utils/\_parallel-sql.config.json" +import walConfig from "./configuration-utils/\_wal.config.json" +import csvImportConfig from "./configuration-utils/\_csv-import.config.json" +import postgresConfig from "./configuration-utils/\_postgres.config.json" +import tcpConfig from "./configuration-utils/\_tcp.config.json" +import udpConfig from 
"./configuration-utils/\_udp.config.json" +import replicationConfig from "./configuration-utils/\_replication.config.json" +import oidcConfig from "./configuration-utils/\_oidc.config.json" This page describes methods for configuring QuestDB server settings. @@ -115,7 +115,7 @@ configuration) every other subsystem. ### HTTP server -This section describes configuration settings for the Web Console and the REST +This section describes configuration settings for the [Web Console](/docs/web-console/) and the REST API available by default on port `9000`. For details on the use of this component, refer to the [web console documentation](/docs/web-console/) page. @@ -143,16 +143,16 @@ CSV files. Settings for `COPY`: #### CSV import configuration for Docker diff --git a/deployment/aws-official-ami.md b/deployment/aws-official-ami.md index 9949b115..6544f0da 100644 --- a/deployment/aws-official-ami.md +++ b/deployment/aws-official-ami.md @@ -15,7 +15,7 @@ software vendors that runs on AWS. This guide describes how to launch QuestDB via the AWS Marketplace using the official listing. This document also describes usage instructions after you have launched the instance, including hints for authentication, the available interfaces, and tips for accessing the REST API -and web console. +and [Web Console](/docs/web-console/). 
## Prerequisites @@ -51,7 +51,7 @@ For details on the server properties and using this file, see the The default ports used by QuestDB interfaces are as follows: -- Web console & REST API is available on port `9000` +- [Web Console](/docs/web-console/) & REST API is available on port `9000` - PostgreSQL wire protocol available on `8812` - InfluxDB line protocol `9009` (TCP and UDP) - Health monitoring & Prometheus `/metrics` `9003` @@ -143,14 +143,14 @@ systemctl stop questdb.service - Download and copy over the new binary ( - - {`wget https://github.com/questdb/questdb/releases/download/${release.name}/questdb-${release.name}-no-jre-bin.tar.gz \\ +renderText={(release) => ( + +{`wget https://github.com/questdb/questdb/releases/download/${release.name}/questdb-${release.name}-no-jre-bin.tar.gz \\ tar xzvf questdb-${release.name}-no-jre-bin.tar.gz cp questdb-${release.name}-no-jre-bin/questdb.jar /usr/local/bin/questdb.jar cp questdb-${release.name}-no-jre-bin/questdb.jar /usr/local/bin/questdb-${release.name}.jar`} - - )} + +)} /> - Restart the service again: diff --git a/deployment/capacity-planning.md b/deployment/capacity-planning.md index ac2de541..72cb3c1d 100644 --- a/deployment/capacity-planning.md +++ b/deployment/capacity-planning.md @@ -358,9 +358,9 @@ If you are running the QuestDB using `systemd`, you will also need to set the `L If you have followed the [setup guide](https://questdb.io/docs/deployment/systemd/), then the file should be called `questdb.service` and be located at `~/.config/systemd/user/questdb.service`. -Add this property to the `[Service]` section, setting it to at least `1048576`, or higher if you have set higher OS-wide limits. +Add this property to the `[Service]` section, setting it to at least `1048576`, or higher if you have set higher OS-wide limits. -Then restart the service. If you have configured these settings correctly, any warnings in the web console should now be cleared. +Then restart the service. 
If you have configured these settings correctly, any warnings in the [Web Console](/docs/web-console/) should now be cleared. #### Setting system-wide open file limit on MacOS: diff --git a/deployment/digitalocean.md b/deployment/digitalocean.md index b3681bf6..c7125eac 100644 --- a/deployment/digitalocean.md +++ b/deployment/digitalocean.md @@ -1,8 +1,7 @@ --- title: Launch QuestDB on DigitalOcean sidebar_label: DigitalOcean Droplet -description: - This document describes how to launch DigitalOcean droplet with QuestDB +description: This document describes how to launch DigitalOcean droplet with QuestDB --- DigitalOcean is a platform with software listings from independent vendors that @@ -10,7 +9,7 @@ run on cloud resources. This guide describes how to launch QuestDB via the DigitalOcean marketplace using the official listing. This document also describes usage instructions after you have launched the instance, including hints for authentication, the available interfaces, and tips for accessing the -REST API and web console. +REST API and [Web Console](/docs/web-console/). 
## Prerequisites @@ -75,7 +74,7 @@ For details on the server properties and using this file, see the The default ports used by QuestDB interfaces are as follows: -- Web console & REST API is available on port `9000` +- [Web Console](/docs/web-console/) & REST API is available on port `9000` - PostgreSQL wire protocol available on `8812` - InfluxDB line protocol `9009` (TCP and UDP) - Health monitoring & Prometheus `/metrics` `9003` diff --git a/deployment/docker.md b/deployment/docker.md index 5dbc607b..15e6879a 100644 --- a/deployment/docker.md +++ b/deployment/docker.md @@ -27,13 +27,13 @@ Once Docker is installed, you will need to pull QuestDB's image from This can be done with a single command using: ( - - {`docker run \\ +renderText={(release) => ( + +{`docker run \\ -p 9000:9000 -p 9009:9009 -p 8812:8812 -p 9003:9003 \\ questdb/questdb:${release.name}`} - - )} + +)} /> This command starts a Docker container from `questdb/questdb` image. In @@ -79,11 +79,11 @@ By default, `questdb/questdb` points to the latest QuestDB version available on Docker. However, it is recommended to define the version used. ( - - {`questdb/questdb:${release.name}`} - - )} +renderText={(release) => ( + +{`questdb/questdb:${release.name}`} + +)} /> ## Environment variables @@ -177,16 +177,16 @@ following example demonstrated how to mount the current directory to a QuestDB container using the `-v` flag in a Docker `run` command: ( - - {`docker run -p 9000:9000 \\ +renderText={(release) => ( + +{`docker run -p 9000:9000 \\ -p 9009:9009 \\ -p 8812:8812 \\ -p 9003:9003 \\ -v "$(pwd):/var/lib/questdb" \\ questdb/questdb:${release.name}`} - - )} + +)} /> The current directory will then have data persisted to disk for convenient @@ -211,14 +211,14 @@ http.bind.to=0.0.0.0:4000 Running the container with the `-v` flag allows for mounting the current directory to QuestDB's `conf` directory in the container. 
With the server -configuration above, HTTP ports for the web console and REST API will be +configuration above, HTTP ports for the [Web Console](/docs/web-console/) and REST API will be available on [localhost:4000](http://localhost:4000): ```bash docker run -v "$(pwd):/var/lib/questdb/conf" -p 4000:4000 questdb/questdb ``` -:::note +:::note If you wish to use ZFS for your QuestDB deployment, with Docker, then you will need to enable ZFS on the host volume that Docker uses. @@ -261,21 +261,21 @@ docker rm dd363939f261 3. Download the latest QuestDB image: ( - - {`docker pull questdb/questdb:${release.name}`} - - )} +renderText={(release) => ( + +{`docker pull questdb/questdb:${release.name}`} + +)} /> 4. Start a new container with the new version and the same volume mounted: ( - - {`docker run -p 8812:8812 -p 9000:9000 -v "$(pwd):/var/lib/questdb" questdb/questdb:${release.name}`} - - )} +renderText={(release) => ( + +{`docker run -p 8812:8812 -p 9000:9000 -v "$(pwd):/var/lib/questdb" questdb/questdb:${release.name}`} + +)} /> ### Writing logs to disk diff --git a/deployment/google-cloud-platform.md b/deployment/google-cloud-platform.md index a73480ec..bfe8b253 100644 --- a/deployment/google-cloud-platform.md +++ b/deployment/google-cloud-platform.md @@ -124,7 +124,7 @@ tag** `questdb` will now have this firewall rule applied. The ports we have opened are: -- `9000` for the REST API and Web Console +- `9000` for the REST API and [Web Console](/docs/web-console/) - `8812` for the PostgreSQL wire protocol ## Verify the deployment @@ -145,7 +145,7 @@ To verify that the QuestDB deployment is operating as expected: 1. Copy the **External IP** of the instance 2. Navigate to `http://:9000` in a browser -The Web Console should now be visible: +The [Web Console](/docs/web-console/) should now be visible: -The QuestDB Web Console is a SPA (Single Page App). +The QuestDB [Web Console](/docs/web-console/) is a SPA (Single Page App). 
As a result, it cannot safely store a client secret.
@@ -33,7 +33,7 @@ Instead it can use PKCE (Proof Key for Code Exchange) to secure the flow.
 
 As shown above, leave the client authentication disabled.
 
-We also have to white list the URL of the Web Console as a redirection URL:
+We also have to whitelist the URL of the [Web Console](/docs/web-console/) as a redirection URL:
 
 We can instruct PingFederate to automatically authorize the scopes requested by
-the Web Console.
+the [Web Console](/docs/web-console/).
 The user will not be presented with the extra window asking for consent after
 authentication:
 
@@ -55,7 +55,7 @@ authentication:
   width={500}
 />
 
-The Web Console uses the
+The [Web Console](/docs/web-console/) uses the
 [Authorization Code Flow](/docs/operations/openid-connect-oidc-integration/#authentication-and-authorization-flow),
 and refreshes tokens automatically.
 
@@ -96,7 +96,7 @@ QuestDB does not require any special setup regarding the access token. We
 recommend that you do not use shorter tokens than the default 28 characters.
 
-As the QuestDB Web Console refreshes the token automatically, there is no need
+As the QuestDB [Web Console](/docs/web-console/) refreshes the token automatically, there is no need
 for long-lived tokens:
 
-It is also important to whitelist the Web Console's URL on the CORS list:
+It is also important to whitelist the [Web Console](/docs/web-console/)'s URL on the CORS list:
 
 For an example, click _Demo this query_ in the below snippet. This will run a
-query within our public demo instance and Web Console:
+query within our public demo instance and [Web Console](/docs/web-console/):
 
 ```questdb-sql title='Navigate time with SQL' demo
 SELECT
@@ -298,56 +298,56 @@ curl -G \
 
 The `node-fetch` package can be installed using `npm i node-fetch`.
```javascript -const fetch = require("node-fetch") +const fetch = require("node-fetch"); -const HOST = "http://localhost:9000" +const HOST = "http://localhost:9000"; async function createTable() { try { - const query = "CREATE TABLE IF NOT EXISTS trades (name VARCHAR, value INT)" + const query = "CREATE TABLE IF NOT EXISTS trades (name VARCHAR, value INT)"; const response = await fetch( `${HOST}/exec?query=${encodeURIComponent(query)}`, - ) - const json = await response.json() + ); + const json = await response.json(); - console.log(json) + console.log(json); } catch (error) { - console.log(error) + console.log(error); } } async function insertData() { try { - const query = "INSERT INTO trades VALUES('abc', 123456)" + const query = "INSERT INTO trades VALUES('abc', 123456)"; const response = await fetch( `${HOST}/exec?query=${encodeURIComponent(query)}`, - ) - const json = await response.json() + ); + const json = await response.json(); - console.log(json) + console.log(json); } catch (error) { - console.log(error) + console.log(error); } } async function updateData() { try { - const query = "UPDATE trades SET value = 9876 WHERE name = 'abc'" + const query = "UPDATE trades SET value = 9876 WHERE name = 'abc'"; const response = await fetch( `${HOST}/exec?query=${encodeURIComponent(query)}`, - ) - const json = await response.json() + ); + const json = await response.json(); - console.log(json) + console.log(json); } catch (error) { - console.log(error) + console.log(error); } } -createTable().then(insertData).then(updateData) +createTable().then(insertData).then(updateData); ``` @@ -429,7 +429,7 @@ For more information, see the Now... SQL! It's query time. -Whether you want to use the Web Console, PostgreSQL or REST HTTP (or both), +Whether you want to use the [Web Console](/docs/web-console/), PostgreSQL or REST HTTP (or both), query construction is rich. 
To brush up and learn what's unique in QuestDB, consider the following:
diff --git a/reference/sql/sample-by.md b/reference/sql/sample-by.md
index d15072b0..73d12eb4 100644
--- a/reference/sql/sample-by.md
+++ b/reference/sql/sample-by.md
@@ -4,7 +4,7 @@ sidebar_label: SAMPLE BY
 description: SAMPLE BY SQL keyword reference documentation.
 ---
 
-`SAMPLE BY` is used on time-series data to summarize large datasets into
+`SAMPLE BY` is used on [time-series data](/blog/what-is-time-series-data/) to summarize large datasets into
 aggregates of homogeneous time chunks as part of a
 [SELECT statement](/docs/reference/sql/select/).
 
diff --git a/reference/sql/select.md b/reference/sql/select.md
index a909ff9e..4da49320 100644
--- a/reference/sql/select.md
+++ b/reference/sql/select.md
@@ -302,7 +302,7 @@ For more information, please refer to the
 
 ### SAMPLE BY
 
-Aggregates time-series data into homogeneous time chunks. For example daily
+Aggregates [time-series data](/blog/what-is-time-series-data/) into homogeneous time chunks. For example, daily
 average, monthly maximum, etc. This function requires a
 [designated timestamp](/docs/concept/designated-timestamp/).
 
diff --git a/reference/sql/update.md b/reference/sql/update.md
index 989a9422..e349ca26 100644
--- a/reference/sql/update.md
+++ b/reference/sql/update.md
@@ -15,7 +15,7 @@ Updates data in a database table.
 
 - the same `columnName` cannot be specified multiple times after the SET
   keyword as it would be ambiguous
 - the designated timestamp column cannot be updated as it would lead to altering
-  history of the time-series data
+  history of the [time-series data](/blog/what-is-time-series-data/)
 - If the target partition is
   [attached by a symbolic link](/docs/reference/sql/alter-table-attach-partition/#symbolic-links),
   the partition is read-only.
`UPDATE` operation on a read-only partition will
diff --git a/sidebars.js b/sidebars.js
index f7c589e8..bf244341 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -430,7 +430,7 @@ module.exports = {
         id: "third-party-tools/overview",
       },
       "third-party-tools/cube",
-      "third-party-tools/data-bento",
+      "third-party-tools/databento",
       "third-party-tools/embeddable",
       "third-party-tools/flink",
       "third-party-tools/grafana",
diff --git a/third-party-tools/data-bento.md b/third-party-tools/databento.md
similarity index 92%
rename from third-party-tools/data-bento.md
rename to third-party-tools/databento.md
index 289f7365..a8d0180d 100644
--- a/third-party-tools/data-bento.md
+++ b/third-party-tools/databento.md
@@ -3,12 +3,12 @@ title: "Databento"
 description: Guide to ingest and analyze live multi-stream market data from
   Databento using QuestDB and Grafana.
 ---
 
-[Databento](https://databento.com/) is a market data aggregator that provides a single,
+[Databento](/docs/third-party-tools/databento/) is a market data aggregator that provides a single,
 normalized feed covering multiple venues, simplifying the process of ingesting
 live market data. It interfaces well with QuestDB for real-time data analysis
 and visualization in Grafana.
 
-This guide will show how to ingest live market data from Databento into QuestDB and visualize it using Grafana.
+This guide will show how to ingest live market data from [Databento](/docs/third-party-tools/databento/) into QuestDB and visualize it using Grafana.
 For a deeper dive, see our
 [Databento & QuestDB blog](/blog/ingesting-live-market-data-data-bento/).
 
diff --git a/third-party-tools/flink.md b/third-party-tools/flink.md
index 729b4220..c193d963 100644
--- a/third-party-tools/flink.md
+++ b/third-party-tools/flink.md
@@ -97,7 +97,7 @@ Flink. The overall steps are the following:
 
 This command used Flink SQL to insert a row into the `Orders` table in Flink.
 The table is connected to QuestDB, so the row is also inserted into QuestDB.
-- Go to the QuestDB web console [http://localhost:9000](http://localhost:9000) +- Go to the QuestDB [Web Console](/docs/web-console/) [http://localhost:9000](http://localhost:9000) and execute this query: ```questdb-sql diff --git a/third-party-tools/grafana.md b/third-party-tools/grafana.md index c0d30d11..644e55bd 100644 --- a/third-party-tools/grafana.md +++ b/third-party-tools/grafana.md @@ -8,7 +8,7 @@ description: import Screenshot from "@theme/Screenshot" [Grafana](https://grafana.com/) is a popular observability and monitoring -application used to visualize data and enable time-series data analysis. +application used to visualize data and enable [time-series data analysis](/glossary/time-series-analysis/). QuestDB is available within Grafana via the [official QuestDB plugin](https://grafana.com/grafana/plugins/questdb-questdb-datasource/). @@ -44,7 +44,7 @@ password:admin ## Start QuestDB The Docker version runs on port `8812` for the database connection and port -`9000` for the Web Console and REST interface: +`9000` for the [Web Console](/docs/web-console/) and REST interface: ```shell docker run --add-host=host.docker.internal:host-gateway \ diff --git a/third-party-tools/kafka/questdb-kafka.md b/third-party-tools/kafka/questdb-kafka.md index fa8aa52e..fdda0775 100644 --- a/third-party-tools/kafka/questdb-kafka.md +++ b/third-party-tools/kafka/questdb-kafka.md @@ -469,7 +469,9 @@ correct column types instead of relying on the connector to infer them. See the paragraph below. ### Target table considerations + #### Table name + By default, the target table name in QuestDB is the same as the Kafka topic name from which a message originates. When a connector is configured to read from multiple topics, it uses a separate table for each topic. @@ -478,6 +480,7 @@ Set the table configuration option to override this behavior. Once set, the conn regardless of the topic from which they originate. 
Example:
+
 ```shell title="Configuration file with an explicit table name"
 name=questdb-sink
 connector.class=io.questdb.kafka.QuestDBSinkConnector
@@ -493,6 +496,7 @@ variables in the table name: `${topic}`, `${key}`. The connector will replace
 these variables with the actual topic name and key value from the Kafka
 message.
 
 Example:
+
 ```shell title="Configuration file with a templated table name"
 name=questdb-sink
 connector.class=io.questdb.kafka.QuestDBSinkConnector
@@ -502,11 +506,13 @@ table=from_kafka_${topic}_${key}_suffix
 [...]
 ```
+
 The placeholder `${key}` will be replaced with the actual key value from the
 Kafka message. If the key is not present in the message, the placeholder will
 be replaced with the string `null`.
 
 #### Table schema
+
 When a target table does not exist in QuestDB, it will be created automatically.
 This is the recommended approach for development and testing.
 
diff --git a/third-party-tools/mindsdb.md b/third-party-tools/mindsdb.md
index 97051b8a..9fcf7208 100644
--- a/third-party-tools/mindsdb.md
+++ b/third-party-tools/mindsdb.md
@@ -1,7 +1,6 @@
 ---
 title: MindsDB
-description:
-  Guide for getting started in Machine Learning with MindsDB and QuestDB
+description: Guide for getting started in Machine Learning with MindsDB and QuestDB
 ---
 
 [MindsDB](https://mindsdb.com/questdb-machine-learning/) provides Machine
@@ -96,7 +95,7 @@ The container has these mount points:
 
 The container is running `Debian GNU/Linux 11 (bullseye)` and exposes these
 ports:
 
-- 9000: QuestDB Web Console
+- 9000: QuestDB [Web Console](/docs/web-console/)
 - 8812: QuestDB pg-wire
 - 9009: QuestDB InfluxDB Line Protocol ingress line protocol
 - 47334: MindsDB WebConsole
@@ -109,7 +108,7 @@ There are different ways to [insert data to QuestDB](/docs/ingestion-overview/).
 
 #### SQL
 
-We can access QuestDB's web console at
+We can access QuestDB's [Web Console](/docs/web-console/) at
 [http://localhost:9000](http://localhost:9000).
Run the following SQL query to create a simple table:
@@ -271,7 +270,7 @@ The result should be something like this:
 ```
 
 Beyond SELECT statements, for instance when we need to save the results of a
-query into a new table, we need to use QuestDB's web console available at
+query into a new table, we need to use QuestDB's [Web Console](/docs/web-console/) available at
 [http://localhost:9000](http://localhost:9000):
 
 ```questdb-sql
diff --git a/third-party-tools/overview.md b/third-party-tools/overview.md
index 90f6981c..28b0ae4c 100644
--- a/third-party-tools/overview.md
+++ b/third-party-tools/overview.md
@@ -16,7 +16,7 @@ Interact with and visualize your QuestDB data using these powerful
 visualization platforms:
 
 - **[Grafana](/docs/third-party-tools/grafana/):** Create stunning dashboards
-  and interactive graphs for time-series data visualization.
+  and interactive graphs for [time-series data](/blog/what-is-time-series-data/) visualization.
 - [Superset](/docs/third-party-tools/superset/): Build interactive
   visualizations and perform ad-hoc data analysis.
@@ -41,10 +41,10 @@ integrations:
 
 Enhance your data analysis and processing capabilities with QuestDB through
 these tools:
 
-- [Pandas](/docs/third-party-tools/pandas/): Analyze time-series data in Python
+- [Pandas](/docs/third-party-tools/pandas/): Analyze [time-series data](/blog/what-is-time-series-data/) in Python
   with powerful data structures.
 - [MindsDB](/docs/third-party-tools/mindsdb/): Build machine learning models for
-  predictive analytics on time-series data.
+  predictive analytics on [time-series data](/blog/what-is-time-series-data/).
 - [Embeddable](/docs/third-party-tools/embeddable/): Developer toolkit for
   building fast, interactive customer-facing analytics.
diff --git a/third-party-tools/prometheus.md b/third-party-tools/prometheus.md index 40f84074..f842e7fb 100644 --- a/third-party-tools/prometheus.md +++ b/third-party-tools/prometheus.md @@ -11,7 +11,7 @@ import InterpolateReleaseData from "../../src/components/InterpolateReleaseData" import CodeBlock from "@theme/CodeBlock" Prometheus is an open-source systems monitoring and alerting toolkit. Prometheus -collects and stores metrics as time-series data, i.e. metrics information is +collects and stores metrics as [time-series data](/blog/what-is-time-series-data/), i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels. @@ -51,15 +51,15 @@ When running QuestDB via Docker, port `9003` must be exposed and the metrics configuration can be enabled via the `QDB_METRICS_ENABLED` environment variable: ( - - {`docker run \\ +renderText={(release) => ( + +{`docker run \\ -e QDB_METRICS_ENABLED=TRUE \\ -p 8812:8812 -p 9000:9000 -p 9003:9003 -p 9009:9009 \\ -v "$(pwd):/var/lib/questdb" \\ questdb/questdb:${release.name}`} - - )} + +)} /> To verify that metrics are being exposed correctly by QuestDB, navigate to diff --git a/third-party-tools/spark.md b/third-party-tools/spark.md index 393d72fa..e912d917 100644 --- a/third-party-tools/spark.md +++ b/third-party-tools/spark.md @@ -93,18 +93,18 @@ postgresql-42.5.1.jar First, start QuestDB. If you are using Docker run the following command: ( - - {`docker run -p 9000:9000 -p 8812:8812 questdb/questdb:${release.name}`} - - )} +renderText={(release) => ( + +{`docker run -p 9000:9000 -p 8812:8812 questdb/questdb:${release.name}`} + +)} /> The port mappings allow us to connect to QuestDB's REST and PostgreSQL Wire Protocol endpoints. The former is required for opening the Web Console, and the latter is used by Spark to connect to the database. 
-Open the Web Console in your browser at +Open the [Web Console](/docs/web-console/) in your browser at [http://localhost:9000](http://localhost:9000). Run the following SQL commands using the console: @@ -210,7 +210,7 @@ After the execution is completed, we can check the content of the SELECT * FROM trades_enriched; ``` -The enriched data should be displayed in the Web Console. +The enriched data should be displayed in the [Web Console](/docs/web-console/). ## See also diff --git a/third-party-tools/telegraf.md b/third-party-tools/telegraf.md index af206ef0..74fe056b 100644 --- a/third-party-tools/telegraf.md +++ b/third-party-tools/telegraf.md @@ -95,7 +95,7 @@ load configuration files from and paste the following example: [[outputs.influxdb_v2]] # Use InfluxDB Line Protocol to write metrics to QuestDB urls = ["http://localhost:9000"] -# Disable gzip compression +# Disable gzip compression content_encoding = "identity" # -- INPUT PLUGINS -- # @@ -148,7 +148,7 @@ Telegraf should report the following if configured correctly: ## Verifying the integration -1. Navigate to the QuestDB Web Console at `http://127.0.0.1:9000/`. The Schema +1. Navigate to the QuestDB [Web Console](/docs/web-console/) at `http://127.0.0.1:9000/`. The Schema Navigator in the top left should display two new tables: - `cpu` generated from `inputs.cpu` diff --git a/web-console.md b/web-console.md index b19cc656..cef96064 100644 --- a/web-console.md +++ b/web-console.md @@ -1,7 +1,6 @@ --- title: Web Console -description: - Learn how to use the QuestDB Web Console. Launch queries, create +description: Learn how to use the QuestDB Web Console. Launch queries, create visualizations and more. Includes pictures and examples. --- @@ -9,7 +8,7 @@ import Screenshot from "@theme/Screenshot" ## Web Console -The Web Console is a client that allows you to interact with QuestDB. It +The [Web Console](/docs/web-console/) is a client that allows you to interact with QuestDB. 
It provides UI tools to query data and visualize the results in a table or plot.
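The REST `/exec` examples earlier in this patch use JavaScript (`node-fetch`) and `curl`. For readers working in Python, the same calls can be sketched with only the standard library. This is an illustrative sketch, not part of the patch; it assumes a QuestDB instance listening on `localhost:9000` when the `demo()` helper is actually invoked:

```python
# Python counterpart of the node-fetch /exec example shown earlier.
import json
import urllib.parse
import urllib.request

HOST = "http://localhost:9000"

def exec_url(host, query):
    """Build the /exec endpoint URL with the SQL statement percent-encoded."""
    return f"{host}/exec?query={urllib.parse.quote(query, safe='')}"

def run_query(query):
    """Send a SQL statement to QuestDB over REST and return the parsed JSON."""
    with urllib.request.urlopen(exec_url(HOST, query)) as response:
        return json.load(response)

def demo():
    """Same create/insert/update sequence as the JavaScript example."""
    for query in (
        "CREATE TABLE IF NOT EXISTS trades (name VARCHAR, value INT)",
        "INSERT INTO trades VALUES('abc', 123456)",
        "UPDATE trades SET value = 9876 WHERE name = 'abc'",
    ):
        print(run_query(query))

# demo()  # uncomment with a running QuestDB instance on localhost:9000
```

The `exec_url` helper plays the role of `encodeURIComponent` in the JavaScript snippet, ensuring spaces, quotes, and parentheses in the SQL survive the trip through the query string.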