diff --git a/clients/ingest-c-and-cpp.md b/clients/ingest-c-and-cpp.md
index 09b4d96f..78dfa00b 100644
--- a/clients/ingest-c-and-cpp.md
+++ b/clients/ingest-c-and-cpp.md
@@ -23,38 +23,39 @@ Key features of the QuestDB C & C++ client include:
health monitoring
- **Automatic write retries**: Reuse connections and retry after interruptions
-This guide aims to help you swiftly set up and begin using the QuestDB C++
-client.
-
-## C++
-
-
-
-Explore the full capabilities of the C++ client via the
-[C++ README](https://github.com/questdb/c-questdb-client/blob/main/doc/CPP.md).
### Requirements
-- Requires a C++ compiler and standard libraries.
+- Requires a C/C++ compiler and standard libraries.
- Assumes QuestDB is running. If it's not, refer to
[the general quick start](/docs/quick-start/).
### Client Installation
-Clone the GitHub repository and compile the source code:
+You need to add the client as a dependency to your project. Depending on your environment,
+you can do this in different ways. Please check the documentation at the
+[client's repository](https://github.com/questdb/c-questdb-client/blob/main/doc/DEPENDENCY.md).
-```bash
-git clone https://github.com/questdb/c-questdb-client.git
-cd c-questdb-client
-make
-```
-This will compile the client library, which can then be linked to your C++
-projects.
+## C++
-### Connection
+:::note
-The QuestDB C client supports basic connection and authentication
+This section is for the QuestDB C++ client.
+
+For the QuestDB C client, see the section below.
+
+:::
+
+
+
+Explore the full capabilities of the C++ client via the
+[C++ README](https://github.com/questdb/c-questdb-client/blob/main/doc/CPP.md).
+
+
+## Authentication
+
+The QuestDB C++ client supports basic connection and authentication
configurations.
Here is an example of how to configure and use the client for data ingestion:
@@ -69,46 +70,138 @@ auto sender = questdb::ingress::line_sender::from_conf(
```
+You can also pass the connection configuration via the `QDB_CLIENT_CONF` environment variable:
+
+```bash
+export QDB_CLIENT_CONF="http::addr=localhost:9000;username=admin;password=quest;"
+```
+
+Then you use it like this:
+
+```cpp
+auto sender = questdb::ingress::line_sender::from_env();
+```
+
+When using QuestDB Enterprise, authentication can also be done via REST token.
+Please check the [RBAC docs](/docs/operations/rbac/#authentication) for more info.
+
+
### Basic data insertion
+Basic insertion (no-auth):
+
+```cpp
-questdb::ingress::line_sender_buffer buffer;
-buffer
- .table("cpp_cars")
- .symbol("id", "d6e5fe92-d19f-482a-a97a-c105f547f721")
- .column("x", 30.5)
- .at(timestamp_nanos::now());
+// main.cpp
+#include <questdb/ingress/line_sender.hpp>
+
+int main()
+{
+ auto sender = questdb::ingress::line_sender::from_conf(
+ "http::addr=localhost:9000;");
+
+ questdb::ingress::line_sender_buffer buffer;
+ buffer
+ .table("trades")
+ .symbol("symbol","ETH-USD")
+ .symbol("side","sell")
+ .column("price", 2615.54)
+ .column("amount", 0.00044)
+ .at(questdb::ingress::timestamp_nanos::now());
-// To insert more records, call `buffer.table(..)...` again.
+ // To insert more records, call `buffer.table(..)...` again.
-sender.flush(buffer);
+ sender.flush(buffer);
+ return 0;
+}
```
-## C
+These are the main steps it takes:
-
+- Use `questdb::ingress::line_sender::from_conf` to get the `sender` object
+- Populate a `line_sender_buffer` with one or more rows of data
+- Send the buffer using `sender.flush()`
-Explore the full capabilities of the C client via the
-[C README](https://github.com/questdb/c-questdb-client/blob/main/doc/C.md).
+In this case, the designated timestamp will be the one at execution time.
-### Requirements
+Let's now see an example with explicit timestamps, a custom timeout, basic auth, and error control.
-- Requires a C compiler and standard libraries.
-- Assumes QuestDB is running. If it's not, refer to
- [the general quick start](/docs/quick-start/).
+```cpp
+#include <questdb/ingress/line_sender.hpp>
+#include <chrono>
+#include <iostream>
+
+int main()
+{
+ try
+ {
+ // Create a sender using HTTP protocol
+ auto sender = questdb::ingress::line_sender::from_conf(
+ "http::addr=localhost:9000;username=admin;password=quest;retry_timeout=20000;");
+
+ // Get the current time as a timestamp
+ auto now = std::chrono::system_clock::now();
+ auto duration = now.time_since_epoch();
+ auto nanos = std::chrono::duration_cast<std::chrono::nanoseconds>(duration).count();
+
+ // Add rows to the buffer of the sender with the same timestamp
+ questdb::ingress::line_sender_buffer buffer;
+ buffer
+ .table("trades")
+ .symbol("symbol", "ETH-USD")
+ .symbol("side", "sell")
+ .column("price", 2615.54)
+ .column("amount", 0.00044)
+ .at(questdb::ingress::timestamp_nanos(nanos));
+
+ buffer
+ .table("trades")
+ .symbol("symbol", "BTC-USD")
+ .symbol("side", "sell")
+ .column("price", 39269.98)
+ .column("amount", 0.001)
+ .at(questdb::ingress::timestamp_nanos(nanos));
+
+ // Transactionality check
+ if (!buffer.transactional()) {
+ std::cerr << "Buffer is not transactional" << std::endl;
+ sender.close();
+ return 1;
+ }
+
+ // Flush the buffer of the sender, sending the data to QuestDB
+ sender.flush(buffer);
+
+ // Close the connection after all rows ingested
+ sender.close();
+ return 0;
+ }
+ catch (const questdb::ingress::line_sender_error& err)
+ {
+ std::cerr << "Error running example: " << err.what() << std::endl;
+ return 1;
+ }
+}
+```
-### Client Installation
+As you can see, both events now use the same timestamp. We recommend using the original event timestamps when
+ingesting data into QuestDB. Using the current timestamp hinders the ability to deduplicate rows, which is
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
-Clone the GitHub repository and compile the source code:
-```bash
-git clone https://github.com/questdb/c-questdb-client.git
-cd c-questdb-client
-make
-```
+## C
+
+:::note
+
+This section is for the QuestDB C client.
+
+Skip to the bottom of this page for information relating to both the C and C++ clients.
+
+:::
+
+
+Explore the full capabilities of the C client via the
+[C README](https://github.com/questdb/c-questdb-client/blob/main/doc/C.md).
-This will compile the client library, which can then be linked to your C
-projects.
### Connection
@@ -124,68 +217,263 @@ data ingestion:
line_sender_utf8 conf = QDB_UTF8_LITERAL(
"http::addr=localhost:9000;");
-line_sender_error* err = NULL;
-line_sender* sender = sender = line_sender_from_conf(&err);
+line_sender_error *error = NULL;
+line_sender *sender = line_sender_from_conf(
+ conf, &error);
if (!sender) {
/* ... handle error ... */
}
```
-### Basic data insertion
+You can also pass the connection configuration via the `QDB_CLIENT_CONF` environment variable:
+```bash
+export QDB_CLIENT_CONF="http::addr=localhost:9000;username=admin;password=quest;"
+```
+
+Then you use it like this:
```c
-line_sender_table_name table_name = QDB_TABLE_NAME_LITERAL("c_cars");
-line_sender_column_name id_name = QDB_COLUMN_NAME_LITERAL("id");
-line_sender_column_name x_name = QDB_COLUMN_NAME_LITERAL("x");
+#include <questdb/ingress/line_sender.h>
+...
+line_sender *sender = line_sender_from_env(&error);
-line_sender_buffer* buffer = line_sender_buffer_new();
+```
-if (!line_sender_buffer_table(buffer, table_name, &err))
- goto on_error;
+### Basic data insertion
-line_sender_utf8 id_value = QDB_UTF8_LITERAL(
- "d6e5fe92-d19f-482a-a97a-c105f547f721");
-if (!line_sender_buffer_symbol(buffer, id_name, id_value, &err))
- goto on_error;
+```c
+// line_sender_trades_example.c
+#include <questdb/ingress/line_sender.h>
+#include <stdio.h>
+#include <stddef.h>
+
+int main() {
+ // Initialize line sender
+ line_sender_error *error = NULL;
+ line_sender *sender = line_sender_from_conf(
+ QDB_UTF8_LITERAL("http::addr=localhost:9000;username=admin;password=quest;"), &error);
+
+ if (error != NULL) {
+ size_t len;
+ const char *msg = line_sender_error_msg(error, &len);
+ fprintf(stderr, "Failed to create line sender: %.*s\n", (int)len, msg);
+ line_sender_error_free(error);
+ return 1;
+ }
+
+ // Print success message
+ printf("Line sender created successfully\n");
+
+ // Initialize line sender buffer
+ line_sender_buffer *buffer = line_sender_buffer_new();
+ if (buffer == NULL) {
+ fprintf(stderr, "Failed to create line sender buffer\n");
+ line_sender_close(sender);
+ return 1;
+ }
+
+ // Add data to buffer for ETH-USD trade
+ if (!line_sender_buffer_table(buffer, QDB_TABLE_NAME_LITERAL("trades"), &error)) goto error;
+ if (!line_sender_buffer_symbol(buffer, QDB_COLUMN_NAME_LITERAL("symbol"), QDB_UTF8_LITERAL("ETH-USD"), &error)) goto error;
+ if (!line_sender_buffer_symbol(buffer, QDB_COLUMN_NAME_LITERAL("side"), QDB_UTF8_LITERAL("sell"), &error)) goto error;
+ if (!line_sender_buffer_column_f64(buffer, QDB_COLUMN_NAME_LITERAL("price"), 2615.54, &error)) goto error;
+ if (!line_sender_buffer_column_f64(buffer, QDB_COLUMN_NAME_LITERAL("amount"), 0.00044, &error)) goto error;
+ if (!line_sender_buffer_at_nanos(buffer, line_sender_now_nanos(), &error)) goto error;
+
+
+ // Flush the buffer to QuestDB
+ if (!line_sender_flush(sender, buffer, &error)) {
+ size_t len;
+ const char *msg = line_sender_error_msg(error, &len);
+ fprintf(stderr, "Failed to flush data: %.*s\n", (int)len, msg);
+ line_sender_error_free(error);
+ line_sender_buffer_free(buffer);
+ line_sender_close(sender);
+ return 1;
+ }
+
+ // Print success message
+ printf("Data flushed successfully\n");
+
+ // Free resources
+ line_sender_buffer_free(buffer);
+ line_sender_close(sender);
+
+ return 0;
+
+error:
+ {
+ size_t len;
+ const char *msg = line_sender_error_msg(error, &len);
+ fprintf(stderr, "Error: %.*s\n", (int)len, msg);
+ line_sender_error_free(error);
+ line_sender_buffer_free(buffer);
+ line_sender_close(sender);
+ return 1;
+ }
+}
-if (!line_sender_buffer_column_f64(buffer, x_name, 30.5, &err))
- goto on_error;
+```
-if (!line_sender_buffer_at_nanos(buffer, line_sender_now_nanos(), &err))
- goto on_error;
+In this case, the designated timestamp will be the one at execution time.
-// To insert more records, call `line_sender_buffer_table(..)...` again.
+Let's now see an example with explicit timestamps, a custom timeout, basic auth, error control, and transactional
+awareness.
-if (!line_sender_flush(sender, buffer, &err))
- goto on_error;
-line_sender_close(sender);
+```c
+// line_sender_trades_example.c
+#include <questdb/ingress/line_sender.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <stddef.h>
+
+int main() {
+ // Initialize line sender
+ line_sender_error *error = NULL;
+ line_sender *sender = line_sender_from_conf(
+ QDB_UTF8_LITERAL(
+ "http::addr=localhost:9000;username=admin;password=quest;retry_timeout=20000;"
+ ), &error);
+
+ if (error != NULL) {
+ size_t len;
+ const char *msg = line_sender_error_msg(error, &len);
+ fprintf(stderr, "Failed to create line sender: %.*s\n", (int)len, msg);
+ line_sender_error_free(error);
+ return 1;
+ }
+
+ // Print success message
+ printf("Line sender created successfully\n");
+
+ // Initialize line sender buffer
+ line_sender_buffer *buffer = line_sender_buffer_new();
+ if (buffer == NULL) {
+ fprintf(stderr, "Failed to create line sender buffer\n");
+ line_sender_close(sender);
+ return 1;
+ }
+
+ // Get current time in nanoseconds
+ int64_t nanos = line_sender_now_nanos();
+
+ // Add data to buffer for ETH-USD trade
+ if (!line_sender_buffer_table(buffer, QDB_TABLE_NAME_LITERAL("trades"), &error)) goto error;
+ if (!line_sender_buffer_symbol(buffer, QDB_COLUMN_NAME_LITERAL("symbol"), QDB_UTF8_LITERAL("ETH-USD"), &error)) goto error;
+ if (!line_sender_buffer_symbol(buffer, QDB_COLUMN_NAME_LITERAL("side"), QDB_UTF8_LITERAL("sell"), &error)) goto error;
+ if (!line_sender_buffer_column_f64(buffer, QDB_COLUMN_NAME_LITERAL("price"), 2615.54, &error)) goto error;
+ if (!line_sender_buffer_column_f64(buffer, QDB_COLUMN_NAME_LITERAL("amount"), 0.00044, &error)) goto error;
+ if (!line_sender_buffer_at_nanos(buffer, nanos, &error)) goto error;
+
+ // Add data to buffer for BTC-USD trade
+ if (!line_sender_buffer_table(buffer, QDB_TABLE_NAME_LITERAL("trades"), &error)) goto error;
+ if (!line_sender_buffer_symbol(buffer, QDB_COLUMN_NAME_LITERAL("symbol"), QDB_UTF8_LITERAL("BTC-USD"), &error)) goto error;
+ if (!line_sender_buffer_symbol(buffer, QDB_COLUMN_NAME_LITERAL("side"), QDB_UTF8_LITERAL("sell"), &error)) goto error;
+ if (!line_sender_buffer_column_f64(buffer, QDB_COLUMN_NAME_LITERAL("price"), 39269.98, &error)) goto error;
+ if (!line_sender_buffer_column_f64(buffer, QDB_COLUMN_NAME_LITERAL("amount"), 0.001, &error)) goto error;
+ if (!line_sender_buffer_at_nanos(buffer, nanos, &error)) goto error;
+
+ // If we detect multiple tables within the same buffer, we abort to avoid potential
+ // inconsistency issues. Read below in this page for transaction details
+ if (!line_sender_buffer_transactional(buffer)) {
+ fprintf(stderr, "Buffer is not transactional\n");
+ line_sender_buffer_free(buffer);
+ line_sender_close(sender);
+ return 1;
+ }
+
+ // Flush the buffer to QuestDB
+ if (!line_sender_flush(sender, buffer, &error)) {
+ size_t len;
+ const char *msg = line_sender_error_msg(error, &len);
+ fprintf(stderr, "Failed to flush data: %.*s\n", (int)len, msg);
+ line_sender_error_free(error);
+ line_sender_buffer_free(buffer);
+ line_sender_close(sender);
+ return 1;
+ }
+
+ // Print success message
+ printf("Data flushed successfully\n");
+
+ // Free resources
+ line_sender_buffer_free(buffer);
+ line_sender_close(sender);
+
+ return 0;
+
+error:
+ {
+ size_t len;
+ const char *msg = line_sender_error_msg(error, &len);
+ fprintf(stderr, "Error: %.*s\n", (int)len, msg);
+ line_sender_error_free(error);
+ line_sender_buffer_free(buffer);
+ line_sender_close(sender);
+ return 1;
+ }
+}
+
```
-## Health check
+As you can see, both events use the same timestamp. We recommend using the original event timestamps when
+ingesting data into QuestDB. Using the current timestamp hinders the ability to deduplicate rows, which is
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
-To monitor your active connection, there is a `ping` endpoint:
-```shell
-curl -I http://localhost:9000/ping
-```
+## Other Considerations for both C and C++
+
+### Configuration options
-Returns (pong!):
+The easiest way to configure the line sender is the configuration string. The
+general structure is:
-```shell
-HTTP/1.1 204 OK
-Server: questDB/1.0
-Date: Fri, 2 Feb 2024 17:09:38 GMT
-Transfer-Encoding: chunked
-Content-Type: text/plain; charset=utf-8
-X-Influxdb-Version: v2.7.4
+```plain
+<transport>::addr=host:port;param1=val1;param2=val2;...
```
-Determine whether an instance is active and confirm the version of InfluxDB Line
-Protocol with which you are interacting.
+`transport` can be `http`, `https`, `tcp`, or `tcps`. The C/C++ and Rust clients share
+the same codebase. Please refer to the
+[Rust client's documentation](https://docs.rs/questdb-rs/latest/questdb/ingress) for the
+full details on configuration.
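+
+For example, a configuration string that connects over HTTP with basic
+authentication and a longer retry timeout (the values shown here are just
+illustrative) would look like this:
+
+```plain
+http::addr=localhost:9000;username=admin;password=quest;retry_timeout=20000;
+```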
+
+### Don't forget to flush
+
+The sender and buffer objects are entirely decoupled. This means that the sender
+won't get access to the data in the buffer until you explicitly call
+`sender.flush` or `line_sender_flush`.
+This may lead to a pitfall where you drop a buffer that still has some data in it,
+resulting in permanent data loss.
+
+Unlike other official QuestDB clients, the C and C++ clients do not support
+auto-flushing via configuration.
+
+A common technique is to flush periodically on a timer and/or once the buffer
+exceeds a certain size. You can check the buffer's size by calling
+`buffer.size()` or `line_sender_buffer_size(..)`.
+
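+As an illustration, here is a minimal C++ sketch of that pattern. It reuses the
+`trades` table from the earlier examples; the 64 KiB and 5-second thresholds are
+arbitrary values, not recommended defaults:
+
+```cpp
+#include <questdb/ingress/line_sender.hpp>
+#include <chrono>
+
+int main()
+{
+    auto sender = questdb::ingress::line_sender::from_conf(
+        "http::addr=localhost:9000;");
+    questdb::ingress::line_sender_buffer buffer;
+
+    auto last_flush = std::chrono::steady_clock::now();
+    for (int i = 0; i < 10000; ++i)
+    {
+        buffer
+            .table("trades")
+            .symbol("symbol", "ETH-USD")
+            .symbol("side", "sell")
+            .column("price", 2615.54)
+            .column("amount", 0.00044)
+            .at(questdb::ingress::timestamp_nanos::now());
+
+        // Flush once the buffer holds roughly 64 KiB, or at least every 5 seconds.
+        const bool size_exceeded = buffer.size() > 64 * 1024;
+        const bool interval_elapsed =
+            std::chrono::steady_clock::now() - last_flush > std::chrono::seconds(5);
+        if (size_exceeded || interval_elapsed)
+        {
+            sender.flush(buffer); // flush() also clears the buffer
+            last_flush = std::chrono::steady_clock::now();
+        }
+    }
+
+    // Send whatever is still in the buffer before closing,
+    // otherwise those rows would be lost.
+    if (buffer.size() > 0)
+        sender.flush(buffer);
+    sender.close();
+    return 0;
+}
+```
+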
+The default `flush()` method clears the buffer after sending its data. If you
+want to preserve its contents (for example, to send the same data to multiple
+QuestDB instances), call `sender.flush_and_keep(buffer)` or `line_sender_flush_and_keep(..)` instead.
+
+### Transactional flush
+
+As described in the [ILP overview](/docs/reference/api/ilp/overview#http-transaction-semantics),
+the HTTP transport has some support for transactions.
+
+To ensure in advance that a flush will not affect more than one table, call
+`buffer.transactional()` or `line_sender_buffer_transactional(buffer)`, as demonstrated in
+the examples in this document.
+
+This call will return false if the flush wouldn't be data-transactional.
## Next Steps
+Please refer to the [ILP overview](/docs/reference/api/ilp/overview) for details
+about transactions, error control, delivery guarantees, health checks, and table and
+column auto-creation.
+
With data flowing into QuestDB, now it's time to for analysis.
To learn _The Way_ of QuestDB SQL, see the
diff --git a/clients/ingest-dotnet.md b/clients/ingest-dotnet.md
index 71d5c577..0093029d 100644
--- a/clients/ingest-dotnet.md
+++ b/clients/ingest-dotnet.md
@@ -11,6 +11,16 @@ import { ILPClientsTable } from "@theme/ILPClientsTable"
QuestDB supports the .NET ecosystem with its dedicated .NET client, engineered
for high-throughput data ingestion, focusing on insert-only operations.
+Apart from blazing fast ingestion, our clients provide these key benefits:
+
+- **Automatic table creation**: No need to define your schema upfront.
+- **Concurrent schema changes**: Seamlessly handle multiple data streams with
+ on-the-fly schema modifications
+- **Optimized batching**: Use strong defaults or curate the size of your batches
+- **Health checks and feedback**: Ensure your system's integrity with built-in
+ health monitoring
+- **Automatic write retries**: Reuse connections and retry after interruptions
+
This quick start guide aims to familiarize you with the fundamental features of
the .NET client, including how to establish a connection, authenticate, and
perform basic insert operations.
@@ -23,7 +33,7 @@ perform basic insert operations.
- QuestDB must be running. If not, see
[the general quick start guide](/docs/quick-start/).
-## Quickstart
+## Client installation
The latest version of the library is
[2.0.0](https://www.nuget.org/packages/net-questdb-client/)
@@ -35,15 +45,6 @@ The NuGet package can be installed using the dotnet CLI:
dotnet add package net-questdb-client
```
-The .NET ILP client streams data to QuestDB using the ILP format.
-
-The format is a text protocol with the following form:
-
-`table,symbol=value column1=value1 column2=value2 nano_timestamp`
-
-The client provides a convenient API to manage the construction and sending of
-ILP rows.
-
:::note
`Sender` is single-threaded, and uses a single connection to the database.
@@ -53,33 +54,117 @@ tasking.
:::
-### Basic usage
+## Authentication
+
+### HTTP
+
+The HTTP protocol supports authentication via
+[Basic Authentication](https://datatracker.ietf.org/doc/html/rfc7617), and
+[Token Authentication](https://datatracker.ietf.org/doc/html/rfc6750).
+
+**Basic Authentication**
+
+Configure Basic Authentication with the `username` and `password` parameters:
```csharp
-using var sender = Sender.New("http::addr=localhost:9000;");
-await sender.Table("metric_name")
- .Symbol("Symbol", "value")
- .Column("number", 10)
- .Column("double", 12.23)
- .Column("string", "born to shine")
- .AtAsync(new DateTime(2021, 11, 25, 0, 46, 26));
+using QuestDB;
+ ...
+using var sender = Sender.New("http::addr=localhost:9000;username=admin;password=quest;");
+ ...
+```
+
+**Token Authentication**
+
+_QuestDB Enterprise Only_
+
+Configure Token Authentication with the `username` and `token` parameters:
+
+```csharp
+using var sender = Sender.New("http::addr=localhost:9000;username=admin;token=");
+```
+
+### TCP
+
+TCP authentication can be configured using JWK tokens:
+
+```csharp
+using var sender = Sender.New("tcp::addr=localhost:9000;username=admin;token=");
+```
+
+The connection string can also be built programmatically. See [Configuration](#configuration) for details.
+
+## Basic insert
+
+Basic insertion (no-auth):
+
+```csharp
+using System;
+using QuestDB;
+
+using var sender = Sender.New("http::addr=localhost:9000;");
+await sender.Table("trades")
+ .Symbol("symbol", "ETH-USD")
+ .Symbol("side", "sell")
+ .Column("price", 2615.54)
+ .Column("amount", 0.00044)
+ .AtNowAsync();
+await sender.Table("trades")
+ .Symbol("symbol", "BTC-USD")
+ .Symbol("side", "sell")
+ .Column("price", 39269.98)
+ .Column("amount", 0.001)
+ .AtNowAsync();
await sender.SendAsync();
```
-### Multi-line send (sync)
+In this case, the designated timestamp will be the one at execution time. Let's now see an example with explicit timestamps, custom auto-flushing, basic auth, and error reporting.
```csharp
-using var sender = Sender.New("http::addr=localhost:9000;auto_flush=off;");
-for(int i = 0; i < 100; i++)
+using QuestDB;
+using System;
+using System.Threading.Tasks;
+
+class Program
{
- sender.Table("metric_name")
- .Column("counter", i)
- .At(DateTime.UtcNow);
+ static async Task Main(string[] args)
+ {
+ using var sender = Sender.New("http::addr=localhost:9000;username=admin;password=quest;auto_flush_rows=100;auto_flush_interval=1000;");
+
+ var now = DateTime.UtcNow;
+ try
+ {
+ await sender.Table("trades")
+ .Symbol("symbol", "ETH-USD")
+ .Symbol("side", "sell")
+ .Column("price", 2615.54)
+ .Column("amount", 0.00044)
+ .AtAsync(now);
+
+ await sender.Table("trades")
+ .Symbol("symbol", "BTC-USD")
+ .Symbol("side", "sell")
+ .Column("price", 39269.98)
+ .Column("amount", 0.001)
+ .AtAsync(now);
+
+ await sender.SendAsync();
+
+ Console.WriteLine("Data flushed successfully.");
+ }
+ catch (Exception ex)
+ {
+ Console.Error.WriteLine($"Error: {ex.Message}");
+ }
+ }
}
-sender.Send();
```
-## Initialisation
+As you can see, both events use the same timestamp. We recommend using the original event timestamps when
+ingesting data into QuestDB. Using the current timestamp hinders the ability to deduplicate rows, which is
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
+
+
+## Configuration
Construct new Senders via the `Sender` factory.
@@ -87,12 +172,12 @@ It is mandatory to provide the `addr` config, as this defines the transport
protocol and the server location.
By default, the HTTP protocol uses `9000`, the same as the other HTTP endpoints.
-Optionally, TCP uses `9009'.
+Optionally, TCP uses `9009`.
### With a configuration string
It is recommended, where possible, to initialise the sender using a
-[configuration string](https://questdb.io/docs/reference/api/ilp/overview/#configuration-strings).
+[configuration string](https://questdb.io/docs/reference/api/ilp/overview/#client-side-configuration).
Configuration strings provide a convenient shorthand for defining client
properties, and are validated during construction of the `Sender`.
@@ -135,27 +220,6 @@ var options = new ConfigurationBuilder()
.Get();
```
-### Choosing a protocol
-
-The client currently supports streaming ILP data over HTTP and TCP transports.
-
-The sender performs some validation, but it is still possible that errors are
-present and the server will reject the data.
-
-With the TCP protocol, this will lead to a dropped connection and an error
-server-side.
-
-With the HTTP transport, errors will be returned via standard HTTP responses and
-propagated to the user via `IngressError`.
-
-HTTP transport also provides better guarantees around transactionality for
-submitted data.
-
-In general, it is recommended to use the HTTP transport. If the absolute highest
-performance is required, then in some cases, the TCP transport will be faster.
-However, it is important to use deduplication keys judiciously in your table
-schemas, as this will help guard against duplication of data in the error case.
-
## Preparing Data
Senders use an internal buffer to convert input values into an ILP-compatible
@@ -246,25 +310,6 @@ QuestDB's deduplication feature, and should be avoided where possible.
:::
-#### Designated timestamp
-
-QuestDB clusters the table around a
-[designated timestamp](/docs/concept/designated-timestamp/).
-
-The timestamp provided in the `At*` calls will be used as the designated
-timestamp.
-
-Choosing the right timestamp is critical for performance!
-
-#### Table creation
-
-If the table corresponding to the ILP submission does not exist, it will be
-automatically created, with a 'best guess' schema. This may not be optimal for
-your use case, but this functionality does provide flexibility in what the
-database will accept.
-
-It is recommended, when possible, to create your tables ahead of time using a
-thought-out schema. This can be done via APIs other than the ILP ingestion.
## Flushing
@@ -365,9 +410,15 @@ Server-side transactions are only for a single table. Therefore, a request
containing multiple tables will be split into a single transaction per table. If
a transaction fails for one table, other transactions may still complete.
-For true transactionality, one can use the transaction feature to enforce a
+For data transactionality, one can use the transaction feature to enforce a
batch only for a single table.
+:::caution
+
+As described in the [ILP overview](/docs/reference/api/ilp/overview#http-transaction-semantics), the HTTP transport has some limitations for transactions when adding new columns.
+
+:::
+
Transactions follow this flow:
```mermaid
@@ -483,6 +534,8 @@ sender.Clear(); // empties the internal buffer
## Security
+_QuestDB Enterprise offers native TLS support_
+
### TLS
Enable TLS via the `https` or `tcps` protocol, along with other associated
@@ -497,160 +550,33 @@ For development purposes, the verification of TLS certificates can be disabled:
using var sender = Sender.New("https::addr=localhost:9000;tls_verify=unsafe_off;");
```
-### Authentication
-
-The client supports both TLS encryption, and authentication.
-
-The authentication credentials can be set up by following the
-[RBAC](https://questdb.io/docs/operations/rbac/) documentation.
-
-#### HTTP
-
-The HTTP protocol supports authentication via
-[Basic Authentication](https://datatracker.ietf.org/doc/html/rfc7617), and
-[Token Authentication](https://datatracker.ietf.org/doc/html/rfc6750).
-
-**Basic Authentication**
-
-Configure Basic Authentication with the `username` and `password` parameters:
-
-```csharp
-using var sender = Sender.New("http::addr=localhost:9000;username=admin;password=quest;");
-```
-
-**Token Authentication**
-
-Configure Token Authentication with the `username` and `token` parameters:
-
-```csharp
-using var sender = Sender.New("http::addr=localhost:9000;username=admin;token=");
-```
-
-#### TCP
-
-TCP authentication can be configured using JWK tokens:
-
-```csharp
-using var sender = Sender.New("tcp::addr=localhost:9000;username=admin;token=");
-```
-
-## Examples
-
-### Basic Usage
-
-```csharp
-using System;
-using QuestDB;
-
-using var sender = Sender.New("http::addr=localhost:9000;");
-await sender.Table("trades")
- .Symbol("pair", "USDGBP")
- .Symbol("type", "buy")
- .Column("traded_price", 0.83)
- .Column("limit_price", 0.84)
- .Column("qty", 100)
- .Column("traded_ts", new DateTime(2022, 8, 6, 7, 35, 23, 189, DateTimeKind.Utc))
- .AtAsync(DateTime.UtcNow);
-await sender.Table("trades")
- .Symbol("pair", "GBPJPY")
- .Column("traded_price", 135.97)
- .Column("qty", 400)
- .AtAsync(DateTime.UtcNow);
-await sender.SendAsync();
-```
-
-### Streaming data
-
-```csharp
-using System.Diagnostics;
-using QuestDB;
-
-var rowsToSend = 1e6;
-
-using var sender = Sender.New("http::addr=localhost:9000;auto_flush=on;auto_flush_rows=75000;auto_flush_interval=off;");
-
-var timer = new Stopwatch();
-timer.Start();
-
-for (var i = 0; i < rowsToSend; i++)
-{
- await sender.Table("trades")
- .Symbol("pair", "USDGBP")
- .Symbol("type", "buy")
- .Column("traded_price", 0.83)
- .Column("limit_price", 0.84)
- .Column("qty", 100)
- .Column("traded_ts", new DateTime(
- 2022, 8, 6, 7, 35, 23, 189, DateTimeKind.Utc))
- .AtAsync(DateTime.UtcNow);
-}
-
-// Ensure no pending rows.
-await sender.SendAsync();
-
-timer.Stop();
-
-Console.WriteLine(
- $"Wrote {rowsToSend} rows in {timer.Elapsed.TotalSeconds} seconds at a rate of {rowsToSend / timer.Elapsed.TotalSeconds} rows/second.");
-```
-
### HTTP TLS with Basic Authentication
```csharp
-using QuestDB;
-
// Runs against QuestDB Enterprise, demonstrating HTTPS and Basic Authentication support.
using var sender =
Sender.New("https::addr=localhost:9000;tls_verify=unsafe_off;username=admin;password=quest;");
-await sender.Table("trades")
- .Symbol("pair", "USDGBP")
- .Symbol("type", "buy")
- .Column("traded_price", 0.83)
- .Column("limit_price", 0.84)
- .Column("qty", 100)
- .Column("traded_ts", new DateTime(
- 2022, 8, 6, 7, 35, 23, 189, DateTimeKind.Utc))
- .AtAsync(DateTime.UtcNow);
-await sender.Table("trades")
- .Symbol("pair", "GBPJPY")
- .Column("traded_price", 135.97)
- .Column("qty", 400)
- .AtAsync(DateTime.UtcNow);
-await sender.SendAsync();
```
### TCP TLS with JWK Authentication
```csharp
-using System;
-using QuestDB;
-
// Demonstrates TCPS connection against QuestDB Enterprise
using var sender =
Sender.New(
"tcps::addr=localhost:9009;tls_verify=unsafe_off;username=admin;token=NgdiOWDoQNUP18WOnb1xkkEG5TzPYMda5SiUOvT1K0U=;");
// See: https://questdb.io/docs/reference/api/ilp/authenticate
-await sender.Table("trades")
- .Symbol("pair", "USDGBP")
- .Symbol("type", "buy")
- .Column("traded_price", 0.83)
- .Column("limit_price", 0.84)
- .Column("qty", 100)
- .Column("traded_ts", new DateTime(
- 2022, 8, 6, 7, 35, 23, 189, DateTimeKind.Utc))
- .AtAsync(DateTime.UtcNow);
-await sender.Table("trades")
- .Symbol("pair", "GBPJPY")
- .Column("traded_price", 135.97)
- .Column("qty", 400)
- .AtAsync(DateTime.UtcNow);
-await sender.SendAsync();
+
```
## Next Steps
+Please refer to the [ILP overview](/docs/reference/api/ilp/overview) for details
+about transactions, error control, delivery guarantees, health checks, and table and
+column auto-creation.
+
Dive deeper into the .NET client capabilities by exploring more examples
provided in the
[GitHub repository](https://github.com/questdb/dotnet-questdb-client).
diff --git a/clients/ingest-go.md b/clients/ingest-go.md
index 2b17870f..f2ee0dba 100644
--- a/clients/ingest-go.md
+++ b/clients/ingest-go.md
@@ -71,18 +71,39 @@ Or, set the QDB_CLIENT_CONF environment variable and call
1. Export the configuration string as an environment variable:
```bash
- export QDB_CLIENT_CONF="addr=localhost:9000;username=admin;password=quest;"
+ export QDB_CLIENT_CONF="http::addr=localhost:9000;username=admin;password=quest;"
```
2. Then in your Go code:
```Go
client, err := questdb.LineSenderFromEnv(context.TODO())
```
+Alternatively, you can use the built-in Go API to specify the connection options.
+
+```go
+package main
+
+import (
+    "context"
+
+    qdb "github.com/questdb/go-questdb-client/v3"
+)
+
+func main() {
+    ctx := context.TODO()
+
+    client, err := qdb.NewLineSender(ctx, qdb.WithHttp(), qdb.WithAddress("localhost:9000"), qdb.WithBasicAuth("admin", "quest"))
+    if err != nil {
+        panic("Failed to create client")
+    }
+    defer client.Close(ctx)
+
+    // Use the client to ingest data; it is closed when main returns.
+}
+```
+
+
+When using QuestDB Enterprise, authentication can also be done via REST token.
+Please check the [RBAC docs](/docs/operations/rbac/#authentication) for more info.
+
## Basic Insert
-Example: inserting data from a temperature sensor.
+Example: inserting executed trades for cryptocurrencies.
-Without authentication:
+Without authentication and using the current timestamp:
```Go
package main
@@ -90,7 +111,6 @@ package main
import (
"context"
"github.com/questdb/go-questdb-client/v3"
- "time"
)
func main() {
@@ -101,12 +121,12 @@ func main() {
panic("Failed to create client")
}
- timestamp := time.Now()
- err = client.Table("sensors").
- Symbol("id", "toronto1").
- Float64Column("temperature", 20.0).
- Float64Column("humidity", 0.5).
- At(ctx, timestamp)
+ err = client.Table("trades").
+ Symbol("symbol", "ETH-USD").
+ Symbol("side", "sell").
+ Float64Column("price", 2615.54).
+ Float64Column("amount", 0.00044).
+ AtNow(ctx)
if err != nil {
panic("Failed to insert data")
@@ -119,57 +139,74 @@ func main() {
}
```
-## Limitations
+In this case, the designated timestamp will be the one at execution time. Let's now see an example with an explicit timestamp, custom auto-flushing, and basic auth.
+
+```Go
+package main
+
+import (
+ "context"
+ "github.com/questdb/go-questdb-client/v3"
+ "time"
+)
-### Transactionality
+func main() {
+ ctx := context.TODO()
-The Go client does not support full transactionality:
+ client, err := questdb.LineSenderFromConf(ctx, "http::addr=localhost:9000;username=admin;password=quest;auto_flush_rows=100;auto_flush_interval=1000;")
+ if err != nil {
+ panic("Failed to create client")
+ }
-- Data for the first table in an HTTP request will be committed even if the
- second table's commit fails.
-- An implicit commit occurs each time a new column is added to a table. This
- action cannot be rolled back if the request is aborted or encounters parse
- errors.
+ timestamp := time.Now()
+ err = client.Table("trades").
+ Symbol("symbol", "ETH-USD").
+ Symbol("side", "sell").
+ Float64Column("price", 2615.54).
+ Float64Column("amount", 0.00044).
+ At(ctx, timestamp)
-### Timestamp column
+ if err != nil {
+ panic("Failed to insert data")
+ }
-QuestDB's underlying InfluxDB Line Protocol (ILP) does not name timestamps,
-leading to an automatic column name of timestamp. To use a custom name,
-pre-create the table with the desired timestamp column name:
+ err = client.Flush(ctx)
+ // You can flush manually at any point.
+ // If you don't flush manually, the client will flush automatically
+ // when a row is added and either:
+ // * The buffer contains 75000 rows (if HTTP) or 600 rows (if TCP)
+ // * The last flush was more than 1000ms ago.
+ // Auto-flushing can be customized via the `auto_flush_..` params.
-```sql
-CREATE TABLE temperatures (
- ts timestamp,
- sensorID symbol,
- sensorLocation symbol,
- reading double
-) timestamp(my_ts);
+ if err != nil {
+ panic("Failed to flush data")
+ }
+}
```
+We recommend using user-assigned timestamps when ingesting data into QuestDB.
+Using the current timestamp hinders the ability to deduplicate rows, which is
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
-## Health check
+## Configuration options
-To monitor your active connection, there is a `ping` endpoint:
+The minimal configuration string needs to have the protocol, host, and port, as in:
-```shell
-curl -I http://localhost:9000/ping
+```
+http::addr=localhost:9000;
```
-Returns (pong!):
+In the Go client, you can set the configuration options via the standard config string,
+which is the same across all clients, or using [the built-in API](https://pkg.go.dev/github.com/questdb/go-questdb-client/v3#LineSenderOption).
-```shell
-HTTP/1.1 204 OK
-Server: questDB/1.0
-Date: Fri, 2 Feb 2024 17:09:38 GMT
-Transfer-Encoding: chunked
-Content-Type: text/plain; charset=utf-8
-X-Influxdb-Version: v2.7.4
-```
+For all the extra options you can use, please check [the client docs](https://pkg.go.dev/github.com/questdb/go-questdb-client/v3#LineSenderFromConf)
-Determine whether an instance is active and confirm the version of InfluxDB Line
-Protocol with which you are interacting.
## Next Steps
+Please refer to the [ILP overview](/docs/reference/api/ilp/overview) for details
+about transactions, error control, delivery guarantees, health checks, and table and
+column auto-creation.
+
Explore the full capabilities of the Go client via
[Go.dev](https://pkg.go.dev/github.com/questdb/go-questdb-client/v3).
diff --git a/clients/ingest-node.md b/clients/ingest-node.md
index 2aca8f8b..180e0a90 100644
--- a/clients/ingest-node.md
+++ b/clients/ingest-node.md
@@ -41,10 +41,41 @@ Install the QuestDB Node.js client via npm:
npm i -s @questdb/nodejs-client
```
-## Basic Usage
+## Authentication
+
+Passing in a configuration string with basic auth:
+
+```javascript
+const { Sender } = require("@questdb/nodejs-client");
+
+const conf = "http::addr=localhost:9000;username=admin;password=quest;"
+const sender = Sender.fromConfig(conf);
+ ...
+```
+
+Passing via the `QDB_CLIENT_CONF` env var:
+
+```bash
+export QDB_CLIENT_CONF="http::addr=localhost:9000;username=admin;password=quest;"
+```
+
+```javascript
+const { Sender } = require("@questdb/nodejs-client");
+
+
+const sender = Sender.fromEnv();
+ ...
+```
+
+When using QuestDB Enterprise, authentication can also be done via REST token.
+Please check the [RBAC docs](/docs/operations/rbac/#authentication) for more info.
+
+## Basic insert
+
+Example: inserting executed trades for cryptocurrencies.
+
+Without authentication and using the current timestamp:
-A simple example to connect to QuestDB, insert some data into a table, and flush
-the data:
```javascript
const { Sender } = require("@questdb/nodejs-client")
@@ -55,42 +86,96 @@ async function run() {
// add rows to the buffer of the sender
await sender
- .table("prices")
- .symbol("instrument", "EURUSD")
- .floatColumn("bid", 1.0195)
- .floatColumn("ask", 1.0221)
- .at(Date.now(), "ms")
- await sender
- .table("prices")
- .symbol("instrument", "GBPUSD")
- .floatColumn("bid", 1.2076)
- .floatColumn("ask", 1.2082)
- .at(Date.now(), "ms")
+ .table("trades")
+ .symbol("symbol", "ETH-USD")
+ .symbol("side", "sell")
+ .floatColumn("price", 2615.54)
+ .floatColumn("amount", 0.00044)
+ .atNow()
// flush the buffer of the sender, sending the data to QuestDB
// the buffer is cleared after the data is sent, and the sender is ready to accept new data
await sender.flush()
- // add rows to the buffer again, and send it to the server
+ // close the connection after all rows ingested
+ // unflushed data will be lost
+ await sender.close()
+}
+
+run().then(console.log).catch(console.error)
+```
+
+In this case, the designated timestamp will be the one at execution time. Let's now see an example with an explicit
+timestamp, custom auto-flushing, and basic auth.
+
+
+```javascript
+const { Sender } = require("@questdb/nodejs-client")
+
+async function run() {
+ // create a sender using HTTP protocol
+ const sender = Sender.fromConfig(
+ "http::addr=localhost:9000;username=admin;password=quest;auto_flush_rows=100;auto_flush_interval=1000;"
+ )
+
+ // Calculate the current timestamp. You could also parse a date from your source data.
+ const timestamp = Date.now();
+
+ // add rows to the buffer of the sender
+ await sender
+ .table("trades")
+ .symbol("symbol", "ETH-USD")
+ .symbol("side", "sell")
+ .floatColumn("price", 2615.54)
+ .floatColumn("amount", 0.00044)
+ .at(timestamp, "ms")
+
+ // add rows to the buffer of the sender
await sender
- .table("prices")
- .symbol("instrument", "EURUSD")
- .floatColumn("bid", 1.0197)
- .floatColumn("ask", 1.0224)
- .at(Date.now(), "ms")
+ .table("trades")
+ .symbol("symbol", "BTC-USD")
+ .symbol("side", "sell")
+ .floatColumn("price", 39269.98)
+ .floatColumn("amount", 0.001)
+ .at(timestamp, "ms")
+
+
+ // flush the buffer of the sender, sending the data to QuestDB
+ // the buffer is cleared after the data is sent, and the sender is ready to accept new data
await sender.flush()
+
// close the connection after all rows ingested
+ // unflushed data will be lost
await sender.close()
}
run().then(console.log).catch(console.error)
```
+As you can see, both events now use the same timestamp. We recommend using the original event timestamps when
+ingesting data into QuestDB. Using the current timestamp hinders the ability to deduplicate rows, which is
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
+
+
+## Configuration options
+
+The minimal configuration string needs to have the protocol, host, and port, as in:
+
+```
+http::addr=localhost:9000;
+```
+
+For all the extra options you can use, please check [the client docs](https://questdb.github.io/nodejs-questdb-client/SenderOptions.html)
+
+
## Next Steps
-Dive deeper into the Node.js client capabilities by exploring more examples
-provided in the
+Please refer to the [ILP overview](/docs/reference/api/ilp/overview) for details
+about transactions, error control, delivery guarantees, health checks, and table and
+column auto-creation.
+
+Dive deeper into the Node.js client capabilities, including TypeScript and Worker Threads examples, by exploring the
[GitHub repository](https://github.com/questdb/nodejs-questdb-client).
To learn _The Way_ of QuestDB SQL, see the
diff --git a/clients/ingest-python.md b/clients/ingest-python.md
index 9a635f52..7d048a76 100644
--- a/clients/ingest-python.md
+++ b/clients/ingest-python.md
@@ -84,13 +84,27 @@ with Sender.from_conf(conf) as sender:
Passing via the `QDB_CLIENT_CONF` env var:
-```python
+```bash
export QDB_CLIENT_CONF="http::addr=localhost:9000;username=admin;password=quest;"
```
-## Basic insert
+```python
+from questdb.ingress import Sender
+
+with Sender.from_env() as sender:
+ ...
+```
+
+You can also set up the connection parameters programmatically:
+
+```python
+from questdb.ingress import Sender, Protocol
+
+with Sender(Protocol.Http, 'localhost', 9000, username='admin', password='quest') as sender:
+    ...
+```
-Consider something such as a temperature sensor.
+When using QuestDB Enterprise, authentication can also be done via REST token.
+Please check the [RBAC docs](/docs/operations/rbac/#authentication) for more info.
+
+## Basic insert
Basic insertion (no-auth):
@@ -100,33 +114,19 @@ from questdb.ingress import Sender, TimestampNanos
conf = f'http::addr=localhost:9000;'
with Sender.from_conf(conf) as sender:
sender.row(
- 'sensors',
- symbols={'id': 'toronto1'},
- columns={'temperature': 20.0, 'humidity': 0.5},
+ 'trades',
+ symbols={'symbol': 'ETH-USD', 'side': 'sell'},
+ columns={'price': 2615.54, 'amount': 0.00044},
+ at=TimestampNanos.now())
+ sender.row(
+ 'trades',
+ symbols={'symbol': 'BTC-USD', 'side': 'sell'},
+ columns={'price': 39269.98, 'amount': 0.001},
at=TimestampNanos.now())
sender.flush()
```
-The same temperature senesor, but via a Pandas dataframe:
-
-```python
-import pandas as pd
-from questdb.ingress import Sender
-
-df = pd.DataFrame({
- 'id': pd.Categorical(['toronto1', 'paris3']),
- 'temperature': [20.0, 21.0],
- 'humidity': [0.5, 0.6],
- 'timestamp': pd.to_datetime(['2021-01-01', '2021-01-02'])})
-
-conf = f'http::addr=localhost:9000;'
-with Sender.from_conf(conf) as sender:
- sender.dataframe(df, table_name='sensors', at='timestamp')
-```
-
-What about market data?
-
-A "full" example, with timestamps and auto-flushing:
+In this case, the designated timestamp will be the one at execution time. Let's now see an example with explicit timestamps, custom auto-flushing, basic auth, and error reporting.
```python
from questdb.ingress import Sender, IngressError, TimestampNanos
@@ -136,25 +136,25 @@ import datetime
def example():
try:
- conf = f'http::addr=localhost:9000;'
+ conf = f'http::addr=localhost:9000;username=admin;password=quest;auto_flush_rows=100;auto_flush_interval=1000;'
with Sender.from_conf(conf) as sender:
# Record with provided designated timestamp (using the 'at' param)
# Notice the designated timestamp is expected in Nanoseconds,
# but timestamps in other columns are expected in Microseconds.
- # The API provides convenient functions
+ # You can use the TimestampNanos or TimestampMicros classes,
+ # or you can just pass a datetime object
sender.row(
'trades',
symbols={
- 'pair': 'USDGBP',
- 'type': 'buy'},
+ 'symbol': 'ETH-USD',
+ 'side': 'sell'},
columns={
- 'traded_price': 0.83,
- 'limit_price': 0.84,
- 'qty': 100,
- 'traded_ts': datetime.datetime(
- 2022, 8, 6, 7, 35, 23, 189062,
- tzinfo=datetime.timezone.utc)},
- at=TimestampNanos.now())
+ 'price': 2615.54,
+ 'amount': 0.00044,
+ },
+ at=datetime.datetime(
+ 2022, 3, 8, 18, 53, 57, 609765,
+ tzinfo=datetime.timezone.utc))
# You can call `sender.row` multiple times inside the same `with`
# block. The client will buffer the rows and send them in batches.
@@ -178,70 +178,77 @@ if __name__ == '__main__':
example()
```
-The above generates rows of InfluxDB Line Protocol (ILP) flavoured data:
+We recommend user-assigned timestamps when ingesting data into QuestDB.
+Using server-assigned timestamps hinders the ability to deduplicate rows, which is
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
+
+
+The same `trades` insert, but via a Pandas dataframe:
```python
-trades,pair=USDGBP,type=sell traded_price=0.82,limit_price=0.81,qty=150,traded_ts=1659784523190000000\n
-trades,pair=EURUSD,type=buy traded_price=1.18,limit_price=1.19,qty=200,traded_ts=1659784523191000000\n
-trades,pair=USDJPY,type=sell traded_price=110.5,limit_price=110.4,qty=80,traded_ts=1659784523192000000\n
-```
+import pandas as pd
+from questdb.ingress import Sender, TimestampNanos
-## Limitations
+df = pd.DataFrame({
+ 'symbol': pd.Categorical(['ETH-USD', 'BTC-USD']),
+ 'side': pd.Categorical(['sell', 'sell']),
+ 'price': [2615.54, 39269.98],
+ 'amount': [0.00044, 0.001],
+ 'timestamp': pd.to_datetime(['2022-03-08T18:03:57.609765Z', '2022-03-08T18:03:57.710419Z'])})
-### Transactionality
+conf = f'http::addr=localhost:9000;'
+with Sender.from_conf(conf) as sender:
+ sender.dataframe(df, table_name='trades', at=TimestampNanos.now())
+```
-The client does not provide full transactionality in all cases:
+Note that you can also add a column to your dataframe with your timestamps and
+reference that column in the `at` parameter:
-- Data for the first table in an HTTP request will be committed even if the
- second table's commit fails.
-- An implicit commit occurs each time a new column is added to a table. This
- action cannot be rolled back if the request is aborted or encounters parse
- errors.
+```python
+import pandas as pd
+from questdb.ingress import Sender
-### Timestamp column
+df = pd.DataFrame({
+ 'symbol': pd.Categorical(['ETH-USD', 'BTC-USD']),
+ 'side': pd.Categorical(['sell', 'sell']),
+ 'price': [2615.54, 39269.98],
+ 'amount': [0.00044, 0.001],
+ 'timestamp': pd.to_datetime(['2022-03-08T18:03:57.609765Z', '2022-03-08T18:03:57.710419Z'])})
-The underlying ILP protocol sends timestamps to QuestDB without a name.
+conf = f'http::addr=localhost:9000;'
+with Sender.from_conf(conf) as sender:
+ sender.dataframe(df, table_name='trades', at='timestamp')
+```
-Therefore, if you provide it one, say `my_ts`, you will find that the timestamp
-column is named `timestamp`.
+## Configuration options
-To address this, issue a CREATE TABLE statement to create the table in advance:
+The minimal configuration string needs to have the protocol, host, and port, as in:
-```questdb-sql title="Creating a timestamp named my_ts"
-CREATE TABLE temperatures (
- ts timestamp,
- sensorID symbol,
- sensorLocation symbol,
- reading double
-) timestamp(my_ts);
+```
+http::addr=localhost:9000;
```
-Now, when you can send data to the specified column.
+In the Python client, you can set the configuration options via the standard config string,
+which is the same across all clients, or using [the built-in API](https://py-questdb-client.readthedocs.io/en/latest/sender.html#sender-programmatic-construction).
-## Health check
-To monitor your active connection, there is a `ping` endpoint:
+For all the extra options you can use, please check [the client docs](https://py-questdb-client.readthedocs.io/en/latest/conf.html#sender-conf)
-```shell
-curl -I http://localhost:9000/ping
-```
-Returns (pong!):
+## Transactional flush
-```shell
-HTTP/1.1 204 OK
-Server: questDB/1.0
-Date: Fri, 2 Feb 2024 17:09:38 GMT
-Transfer-Encoding: chunked
-Content-Type: text/plain; charset=utf-8
-X-Influxdb-Version: v2.7.4
-```
+As described in the [ILP overview](/docs/reference/api/ilp/overview#http-transaction-semantics),
+the HTTP transport has some support for transactions.
-Determine whether an instance is active and confirm the version of InfluxDB Line
-Protocol with which you are interacting.
+The Python client exposes [an API](https://py-questdb-client.readthedocs.io/en/latest/sender.html#http-transactions)
+to make working with transactions more convenient.
## Next steps
+Please refer to the [ILP overview](/docs/reference/api/ilp/overview) for general details
+about transactions, error control, delivery guarantees, health checks, and table and
+column auto-creation. The [Python client docs](https://py-questdb-client.readthedocs.io/en/latest/sender.html) explain how to apply those concepts using the built-in API.
+
For full docs, checkout
[ReadTheDocs](https://py-questdb-client.readthedocs.io/en).
diff --git a/clients/ingest-rust.md b/clients/ingest-rust.md
index ed6f40d1..2a2c1611 100644
--- a/clients/ingest-rust.md
+++ b/clients/ingest-rust.md
@@ -33,10 +33,35 @@ Add the QuestDB client to your project using the command line:
cargo add questdb-rs
```
-## Quick example
+## Authentication
-This snippet connects to QuestDB running locally, creates the table `sensors`,
-and adds one row to it:
+This is how you'd set up the client to authenticate using HTTP Basic
+authentication:
+
+```rust
+let mut sender = Sender::from_conf(
+ "https::addr=localhost:9000;username=admin;password=quest;"
+)?;
+```
+
+You can also pass the connection configuration via the `QDB_CLIENT_CONF` environment variable:
+
+```bash
+export QDB_CLIENT_CONF="http::addr=localhost:9000;username=admin;password=quest;"
+```
+
+Then you use it like this:
+
+```rust
+let mut sender = Sender::from_env()?;
+```
+
+When using QuestDB Enterprise, authentication can also be done via REST token.
+Please check the [RBAC docs](/docs/operations/rbac/#authentication) for more info.
+
+## Basic insert
+
+Basic insertion (no-auth):
```rust
use questdb::{
@@ -50,10 +75,11 @@ fn main() -> Result<()> {
let mut sender = Sender::from_conf("http::addr=localhost:9000;")?;
let mut buffer = Buffer::new();
buffer
- .table("sensors")?
- .symbol("id", "toronto1")?
- .column_f64("temperature", 20.0)?
- .column_i64("humidity", 50)?
+ .table("trades")?
+ .symbol("symbol", "ETH-USD")?
+ .symbol("side", "sell")?
+ .column_f64("price", 2615.54)?
+ .column_f64("amount", 0.00044)?
.at(TimestampNanos::now())?;
sender.flush(&mut buffer)?;
Ok(())
@@ -62,11 +88,56 @@ fn main() -> Result<()> {
These are the main steps it takes:
-- Use `Sender::from_conf()` to get the `Sender` object
+- Use `Sender::from_conf()` to get the `sender` object
- Populate a `Buffer` with one or more rows of data
- Send the buffer using `sender.flush()`(`Sender::flush`)
-## Configuration string
+In this case, the designated timestamp will be the one at execution time.
+
+Let's now see an example with explicit timestamps using Chrono, a custom timeout, and basic auth.
+
+You need to enable the `chrono_timestamp` feature on the QuestDB crate and add the Chrono crate.
+
+```bash
+cargo add questdb-rs --features chrono_timestamp
+cargo add chrono
+```
+
+```rust
+use questdb::{
+ Result,
+ ingress::{
+ Sender,
+ Buffer,
+ TimestampNanos
+ },
+};
+use chrono::Utc;
+
+fn main() -> Result<()> {
+ let mut sender = Sender::from_conf(
+ "http::addr=localhost:9000;username=admin;password=quest;retry_timeout=20000;"
+ )?;
+ let mut buffer = Buffer::new();
+ let current_datetime = Utc::now();
+
+ buffer
+ .table("trades")?
+ .symbol("symbol", "ETH-USD")?
+ .symbol("side", "sell")?
+ .column_f64("price", 2615.54)?
+ .column_f64("amount", 0.00044)?
+ .at(TimestampNanos::from_datetime(current_datetime)?)?;
+
+ sender.flush(&mut buffer)?;
+ Ok(())
+}
+```
+
+Using the current timestamp hinders the ability to deduplicate rows, which is
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
+
+## Configuration options
The easiest way to configure the line sender is the configuration string. The
general structure is:
@@ -86,14 +157,28 @@ won't get access to the data in the buffer until you explicitly call
`sender.flush(&mut buffer)` or a variant. This may lead to a pitfall where you
drop a buffer that still has some data in it, resulting in permanent data loss.
+Unlike other official QuestDB clients, the Rust client does not support auto-flushing
+via configuration.
+
A common technique is to flush periodically on a timer and/or once the buffer
-exceeds a certain size. You can check the buffer's size by the calling
+exceeds a certain size. You can check the buffer's size by calling
`buffer.len()`.
The default `flush()` method clears the buffer after sending its data. If you
want to preserve its contents (for example, to send the same data to multiple
QuestDB instances), call `sender.flush_and_keep(&mut buffer)` instead.
+
+## Transactional flush
+
+As described in the [ILP overview](/docs/reference/api/ilp/overview#http-transaction-semantics),
+the HTTP transport has some support for transactions.
+
+In order to ensure in advance that a flush will not affect more than one table, call
+`sender.flush_and_keep_with_flags(&mut buffer, true)`.
+This call will refuse to flush a buffer if the flush wouldn't be data-transactional.
+
+
## Error handling
The two supported transport modes, HTTP and TCP, handle errors very differently.
@@ -121,33 +206,9 @@ on the reason. When this has happened, the sender transitions into an error
state, and it is permanently unusable. You must drop it and create a new sender.
You can inspect the sender's error state by calling `sender.must_close()`.
-## Authentication example: HTTP Basic
-
-This is how you'd set up the client to authenticate using the HTTP Basic
-authentication:
-
-```no_run
-let mut sender = Sender::from_conf(
- "https::addr=localhost:9000;username=testUser1;password=Yfym3fgMv0B9;"
-)?;
-```
-
-Go to [the docs](https://docs.rs/questdb-rs/latest/questdb/ingress) for the
-other available options.
+For more details about the HTTP and TCP transports, please refer to the
+[ILP overview](/docs/reference/api/ilp/overview#transport-selection).
-## Configure using the environment variable
-
-You can set the `QDB_CLIENT_CONF` environment variable:
-
-```bash
-export QDB_CLIENT_CONF="https::addr=localhost:9000;username=admin;password=quest;"
-```
-
-Then you use it like this:
-
-```rust
-let mut sender = Sender::from_env()?;
-```
## Crate features
@@ -171,67 +232,12 @@ These features are opt-in:
- `insecure-skip-verify`: Allows skipping server certificate validation in TLS
(this compromises security).
-## Usage considerations
-
-### Transactional flush
-
-When using HTTP, you can arrange that each `flush()` call happens within its own
-transaction. For this to work, your buffer must contain data that targets only
-one table. This is because QuestDB doesn't support multi-table transactions.
-
-In order to ensure in advance that a flush will be transactional, call
-`sender.flush_and_keep_with_flags(&mut buffer, true)`.
-This call will refuse to flush a buffer if the flush wouldn't be transactional.
-
-### When to choose the TCP transport?
-
-The TCP transport mode is raw and simplistic: it doesn't report any errors to
-the caller (the server just disconnects), has no automatic retries, requires
-manual handling of connection failures, and doesn't support transactional
-flushing.
-
-However, TCP has a lower overhead than HTTP and it's worthwhile to try out as an
-alternative in a scenario where you have a constantly high data rate and/or deal
-with a high-latency network connection.
-
-### Timestamp column name
-
-InfluxDB Line Protocol (ILP) does not give a name to the designated timestamp,
-so if you let this client auto-create the table, it will have the default name.
-To use a custom name, create the table using a DDL statement:
-
-```sql
-CREATE TABLE sensors (
- my_ts timestamp,
- id symbol,
- temperature double,
- humidity double,
-) timestamp(my_ts);
-```
-
-## Health check
-
-The QuestDB server has a "ping" endpoint you can access to see if it's alive,
-and confirm the version of InfluxDB Line Protocol with which you are
-interacting:
-
-```shell
-curl -I http://localhost:9000/ping
-```
-
-Example of the expected response:
-
-```shell
-HTTP/1.1 204 OK
-Server: questDB/1.0
-Date: Fri, 2 Feb 2024 17:09:38 GMT
-Transfer-Encoding: chunked
-Content-Type: text/plain; charset=utf-8
-X-Influxdb-Version: v2.7.4
-```
-
## Next steps
+Please refer to the [ILP overview](/docs/reference/api/ilp/overview) for details
+about transactions, error control, delivery guarantees, health checks, and table and
+column auto-creation.
+
Explore the full capabilities of the Rust client via the
[Crate API page](https://docs.rs/questdb-rs/latest/questdb/).
diff --git a/clients/java_ilp.md b/clients/java_ilp.md
index d730bb9a..b09dcda8 100644
--- a/clients/java_ilp.md
+++ b/clients/java_ilp.md
@@ -11,9 +11,16 @@ import CodeBlock from "@theme/CodeBlock"
import InterpolateReleaseData from "../../src/components/InterpolateReleaseData"
import { RemoteRepoExample } from "@theme/RemoteRepoExample"
-The QuestDB Java client is baked right into the QuestDB binary.
-It requires no additional configuration steps.
+:::note
+
+This is the reference for the QuestDB Java Client when QuestDB is used as a server.
+
+For embedded QuestDB, please check our [Java Embedded Guide](/docs/reference/api/java-embedded/).
+
+:::
+
+The QuestDB Java client is baked right into the QuestDB binary.
The client provides the following benefits:
@@ -94,6 +101,9 @@ This sample configures a client to use HTTP transport with TLS enabled for a
connection to a QuestDB server. It also instructs the client to authenticate
using HTTP Basic Authentication.
+When using QuestDB Enterprise, authentication can also be done via REST token.
+Please check the [RBAC docs](/docs/operations/rbac/#authentication) for more info.
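+
+For illustration, a minimal sketch of token-based configuration (the placeholder
+token and the use of the `token` configuration parameter are assumptions; adjust
+to the token issued by your Enterprise instance):
+
+```java
+// Hypothetical example: the token value is a placeholder, not a real credential.
+// Assumes io.questdb.client.Sender is imported.
+try (Sender sender = Sender.fromConfig(
+        "https::addr=localhost:9000;token=<your_rest_token>;")) {
+    sender.table("trades")
+            .symbol("symbol", "ETH-USD")
+            .doubleColumn("price", 2615.54)
+            .atNow();
+}
+```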
+
## Client instantiation
@@ -155,21 +165,6 @@ There are three ways to create a client instance:
7. Go to the step no. 2 to start a new row.
8. Use `close()` to dispose the Sender after you no longer need it.
-## Transport selection
-
-Client supports the following transport options:
-
-- HTTP (default port 9000)
-- TCP (default port 9009)
-
-The HTTP transport is recommended for most use cases. It provides feedback on
-errors, automatically retries failed requests, and is easier to configure. The
-TCP transport is kept for compatibility with older QuestDB versions. It has
-limited error feedback, no automatic retries, and requires manual handling of
-connection failures. However, while HTTP is recommended, TCP has a lower
-overhead than HTTP and may be useful in high-throughput scenarios in
-high-latency networks.
-
## Flushing
Client accumulates data into an internal buffer. Flushing the buffer sends the
@@ -183,17 +178,18 @@ An explicit flush can be done by calling the `flush()` method.
```java
try (Sender sender = Sender.fromConfig("http::addr=localhost:9000;")) {
- sender.table("weather_sensor")
- .symbol("id", "toronto1")
- .doubleColumn("temperature", 23.5)
- .doubleColumn("humidity", 0.49)
- .atNow();
- sender.flush();
- sender.table("weather_sensor")
- .symbol("id", "dubai2")
- .doubleColumn("temperature", 41.2)
- .doubleColumn("humidity", 0.34)
- .atNow();
+ sender.table("trades")
+ .symbol("symbol", "ETH-USD")
+ .symbol("side", "sell")
+ .doubleColumn("price", 2615.54)
+ .doubleColumn("amount", 0.00044)
+ .atNow();
+ sender.table("trades")
+        .symbol("symbol", "BTC-USD")
+ .symbol("side", "sell")
+ .doubleColumn("price", 39269.98)
+ .doubleColumn("amount", 0.001)
+ .atNow();
sender.flush();
}
```
@@ -257,27 +253,6 @@ client receives no additional error information from the server. This limitation
significantly contributes to the preference for HTTP transport over TCP
transport.
-### Exactly-once delivery vs at-least-once delivery
-
-The retrying behavior of the HTTP transport can lead to some data being sent to
-the server more than once.
-
-**Example**: Client sends a batch to the server, the server receives the batch,
-processes it, but fails to send a response back to the client due to a network
-error. The client will retry sending the batch to the server. This means the
-server will receive the batch again and process it again. This can lead to
-duplicated rows in the server.
-
-The are two ways to mitigate this issue:
-
-- Use [QuestDB deduplication feature](/docs/concept/deduplication/) to remove
- duplicated rows. QuestDB server can detect and remove duplicated rows
- automatically, resulting in exactly-once processing. This is recommended when
- using the HTTP transport with retrying enabled.
-- Disable retrying by setting `retry_timeout` to 0. This will make the client
- send the batch only once, failed requests will not be retried and the client
- will receive an error. This effectively turns the client into an at-most-once
- delivery.
## Designated timestamp considerations
@@ -290,11 +265,12 @@ There are two ways to assign a designated timestamp to a row:
```java
java.time.Instant timestamp = Instant.now(); // or any other timestamp
- sender.table("weather_sensor")
- .symbol("id", "toronto1")
- .doubleColumn("temperature", 23.5)
- .doubleColumn("humidity", 0.49)
- .at(timestamp);
+ sender.table("trades")
+ .symbol("symbol", "ETH-USD")
+ .symbol("side", "sell")
+ .doubleColumn("price", 2615.54)
+ .doubleColumn("amount", 0.00044)
+ .at(timestamp);
```
The `Instant` class is part of the `java.time` package and is used to
@@ -308,16 +284,17 @@ There are two ways to assign a designated timestamp to a row:
2. Server-assigned timestamp: The server automatically assigns a timestamp to
the row based on the server's wall-clock time. Example:
```java
- sender.table("weather_sensor")
- .symbol("id", "toronto1")
- .doubleColumn("temperature", 23.5)
- .doubleColumn("humidity", 0.49)
- .atNow();
+ sender.table("trades")
+ .symbol("symbol", "ETH-USD")
+ .symbol("side", "sell")
+ .doubleColumn("price", 2615.54)
+ .doubleColumn("amount", 0.00044)
+ .atNow();
```
 We recommend using user-assigned timestamps when ingesting data into QuestDB.
 Using server-assigned timestamps hinders the ability to deduplicate rows, which is
-[important for exactly-once processing](#exactly-once-delivery-vs-at-least-once-delivery).
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
:::note
@@ -326,31 +303,6 @@ rows with older timestamps are ingested before rows with newer timestamps.
:::
-## Table and column auto-creation
-
-When sending data to a table that does not exist, the server will create the
-table automatically. This also applies to columns that do not exist. The server
-will use the first row of data to determine the column types.
-
-If the table already exists, the server will validate that the columns match the
-existing table. If the columns do not match, the server will return a
-non-recoverable error which is propagated to the client as a
-`LineSenderException`.
-
-If you're using QuestDB Enterprise, you must grant further permissions to the
-authenticated user:
-
-```sql
-CREATE SERVICE ACCOUNT ingest_user; -- creates a service account to be used by a client
-GRANT ilp, create table TO ingest_user; -- grants permissions to ingest data and create tables
-GRANT add column, insert ON all tables TO ingest_user; -- grants permissions to add columns and insert data to all tables
--- OR
-GRANT add column, insert ON table1, table2 TO ingest_user; -- grants permissions to add columns and insert data to specific tables
-```
-
-Read more setup details in the
-[Enterprise quickstart](/docs/guides/enterprise-quick-start/#4-ingest-data-influxdb-line-protocol)
-and the [role-based access control](/docs/operations/rbac/) guides.
## Configuration options
@@ -373,20 +325,6 @@ When using the configuration string, the following options are available:
- `username`: Username for TCP authentication.
- `token`: Token for TCP authentication.
-### TLS encryption
-
-TLS in enabled by selecting the `https` or `tcps` protocol. The following
-options are available:
-
-- `tls_roots` : Path to a Java keystore file containing trusted root
- certificates. Defaults to the system default trust store.
-- `tls_roots_password` : Password for the keystore file. It's always required
- when `tls_roots` is set.
-- `tls_verify` : Whether to verify the server's certificate. This should only be
- used for testing as a last resort and never used in production as it makes the
- connection vulnerable to man-in-the-middle attacks. Options are `on` or
- `unsafe_off`. Defaults to `on`.
-
### Auto-flushing
- `auto_flush` : Global switch for the auto-flushing behavior. Options are `on`
@@ -424,8 +362,27 @@ controls the auto-flushing behavior of the TCP transport.
`request_timeout`. This is useful for large requests. You can set this value
to `0` to disable this logic.
+### TLS encryption
+
+To enable TLS, select the `https` or `tcps` protocol.
+
+The following options are available:
+
+- `tls_roots` : Path to a Java keystore file containing trusted root
+ certificates. Defaults to the system default trust store.
+- `tls_roots_password` : Password for the keystore file. It's always required
+ when `tls_roots` is set.
+- `tls_verify` : Whether to verify the server's certificate. This should only be
+ used for testing as a last resort and never used in production as it makes the
+ connection vulnerable to man-in-the-middle attacks. Options are `on` or
+ `unsafe_off`. Defaults to `on`.
+
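+For illustration, a minimal sketch of a TLS-enabled configuration string (the
+keystore path and password are placeholders):
+
+```java
+// Illustrative only: tls_roots points at your own truststore.
+// Assumes io.questdb.client.Sender is imported.
+try (Sender sender = Sender.fromConfig(
+        "https::addr=localhost:9000;"
+        + "tls_roots=/path/to/truststore.jks;"
+        + "tls_roots_password=changeit;")) {
+    sender.table("trades")
+            .symbol("symbol", "ETH-USD")
+            .doubleColumn("price", 2615.54)
+            .atNow();
+}
+```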
+
## Other considerations
+- Please refer to the [ILP overview](/docs/reference/api/ilp/overview) for details
+  about transactions, error control, delivery guarantees, health checks, and table
+  and column auto-creation.
- The Sender is not thread-safe. For multiple threads to send data to QuestDB,
each thread should have its own Sender instance. An object pool can also be
used to re-use Sender instances.
@@ -435,25 +392,3 @@ controls the auto-flushing behavior of the TCP transport.
pattern can be used to ensure that the Sender is closed.
- The method `flush()` can be called to force sending the internal buffer to a
server, even when the buffer is not full yet.
-
-### Health check
-
-To monitor your active connection, there is a `ping` endpoint:
-
-```shell
-curl -I http://localhost:9000/ping
-```
-
-Returns (pong!):
-
-```shell
-HTTP/1.1 204 OK
-Server: questDB/1.0
-Date: Fri, 2 Feb 2024 17:09:38 GMT
-Transfer-Encoding: chunked
-Content-Type: text/plain; charset=utf-8
-X-Influxdb-Version: v2.7.4
-```
-
-Determine whether an instance is active and confirm the version of InfluxDB Line
-Protocol with which you are interacting.
diff --git a/ingestion-overview.md b/ingestion-overview.md
index 70c7a7bf..a503967c 100644
--- a/ingestion-overview.md
+++ b/ingestion-overview.md
@@ -47,10 +47,9 @@ higher throughput. It also provides some key benefits:
An example of "data-in" - via the line - appears as:
```shell
-# temperature sensor example
-readings,city=London temperature=23.2 1465839830100400000\n
-readings,city=London temperature=23.6 1465839830100700000\n
-readings,make=Honeywell temperature=23.2,humidity=0.443 1465839830100800000\n
+trades,symbol=ETH-USD,side=sell price=2615.54,amount=0.00044 1646762637609765000\n
+trades,symbol=BTC-USD,side=sell price=39269.98,amount=0.001 1646762637710419000\n
+trades,symbol=ETH-USD,side=buy price=2615.4,amount=0.002 1646762637764098000\n
```
Once inside of QuestDB, it's yours to manipulate and query via extended SQL.
diff --git a/introduction.md b/introduction.md
index c7e0c119..aec0c1d0 100644
--- a/introduction.md
+++ b/introduction.md
@@ -13,7 +13,7 @@ import CodeBlock from "@theme/CodeBlock"
QuestDB is an Apache 2.0 open source columnar database that specializes in time
series. It offers category-leading ingestion throughput and fast SQL queries
with operational simplicity. QuestDB reduces operational costs and overcomes
-ingestion bottlenecks, offering greatly simplify overall ingress infrastructure.
+ingestion bottlenecks, offering greatly simplified overall ingress infrastructure.
This introduction provides a brief overview on:
@@ -93,10 +93,17 @@ efficiency and value.
 Writing blazing-fast queries and creating real-time
 [Grafana](/docs/third-party-tools/grafana/) dashboards is done via familiar SQL:
-```sql title="Navigate time with SQL"
-SELECT timestamp, sensorName, tempC
-FROM sensors LATEST ON timestamp
-PARTITION BY sensorName;
+```questdb-sql title='Navigate time with SQL' demo
+SELECT
+ timestamp, symbol,
+ first(price) AS open,
+ last(price) AS close,
+ min(price),
+ max(price),
+ sum(amount) AS volume
+FROM trades
+WHERE timestamp > dateadd('d', -1, now())
+SAMPLE BY 15m;
```
Intrigued? The best way to see whether QuestDB is right for you is to try it
diff --git a/reference/api/ilp/overview.md b/reference/api/ilp/overview.md
index d2372a3a..59d3aa32 100644
--- a/reference/api/ilp/overview.md
+++ b/reference/api/ilp/overview.md
@@ -27,9 +27,16 @@ This supporting document thus provides an overview to aid in client selection
and initial configuration:
1. [Client libraries](/docs/reference/api/ilp/overview/#client-libraries)
-2. [Configuration](/docs/reference/api/ilp/overview/#configuration)
-3. [Authentication](/docs/reference/api/ilp/overview/#authentication)
-4. [Transactionality caveat](/docs/reference/api/ilp/overview/#transactionality-caveat)
+2. [Server-Side Configuration](/docs/reference/api/ilp/overview/#server-side-configuration)
+3. [Transport Selection](/docs/reference/api/ilp/overview/#transport-selection)
+4. [Client-Side Configuration](/docs/reference/api/ilp/overview/#client-side-configuration)
+5. [Error handling](/docs/reference/api/ilp/overview/#error-handling)
+6. [Authentication](/docs/reference/api/ilp/overview/#authentication)
+7. [Table and Column Auto-creation](/docs/reference/api/ilp/overview/#table-and-column-auto-creation)
+8. [Timestamp Column Name](/docs/reference/api/ilp/overview/#timestamp-column-name)
+9. [HTTP Transaction semantics](/docs/reference/api/ilp/overview/#http-transaction-semantics)
+10. [Exactly-once delivery](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery)
+11. [Health Check](/docs/reference/api/ilp/overview/#health-check)
## Client libraries
@@ -58,7 +65,7 @@ following is set in `server.conf`:
line.http.enabled=true
```
-## Configuration
+## Server-Side Configuration
The HTTP receiver configuration can be completely customized using
[QuestDB configuration keys for ILP](/docs/configuration/#influxdb-line-protocol-ilp).
@@ -69,32 +76,27 @@ port, load balancing, and more.
For more guidance in how to tune QuestDB, see
[capacity planning](/docs/deployment/capacity-planning/).
-## Authentication
-
-:::note
+## Transport selection
-Using [QuestDB Enterprise](/enterprise/)?
+The ILP protocol in QuestDB supports the following transport options:
-Skip to [advanced security features](/docs/operations/rbac/) instead, which
-provides holistic security out-of-the-box.
+- HTTP (default port 9000)
+- TCP (default port 9009)
-:::
-
-InfluxDB Line Protocol supports authentication.
-
-A similar pattern is used across all client libraries.
-
-This document will break down and demonstrate the configuration keys and core
-configuration options.
+On QuestDB Enterprise, HTTPS and TCPS are also available.
-Once a client has been selected and configured, resume from your language client
-documentation.
+The HTTP(S) transport is recommended for most use cases. It provides feedback on
+errors, automatically retries failed requests, and is easier to configure. The
+TCP(S) transport is kept for compatibility with older QuestDB versions. It has
+limited error feedback, no automatic retries, and requires manual handling of
+connection failures. However, while HTTP is recommended, TCP has slightly lower
+overhead than HTTP and may be useful in high-throughput scenarios in
+high-latency networks.
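+
+For illustration, the transport is chosen by the schema at the start of the
+configuration string. A minimal sketch with the Java client (the same strings
+work across the other clients):
+
+```java
+// HTTP (port 9000): error feedback and automatic retries
+try (Sender overHttp = Sender.fromConfig("http::addr=localhost:9000;")) {
+    // ... ingest rows
+}
+
+// TCP (port 9009): lower overhead, but no error reporting or retries
+try (Sender overTcp = Sender.fromConfig("tcp::addr=localhost:9009;")) {
+    // ... ingest rows
+}
+```
+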
-### Configuration strings
-Configuration strings combine a set of key/value pairs.
+## Client-Side Configuration
-Assembling a string connects an ILP client to a QuestDB ILP server.
+Clients connect to QuestDB over ILP using a configuration string. Configuration strings combine a set of key/value pairs.
The standard configuration string pattern is:
@@ -107,7 +109,7 @@ schema::key1=value1;key2=value2;key3=value3;
It is made up of the following parts:
- **Schema**: One of the specified schemas in the
- [base values](/docs/reference/api/ilp/overview/#base-parameters) section below
+ [core parameters](/docs/reference/api/ilp/overview/#core-parameters) section below
- **Key=Value**: Each key-value pair sets a specific parameter for the client
- **Terminating semicolon**: A semicolon must follow the last key-value pair
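+
+For illustration, a configuration string assembled this way and passed to a
+client (Java shown; the parameter values are arbitrary examples and the
+individual keys are described below):
+
+```java
+// schema "http", then addr, then options, ending with the mandatory semicolon
+String conf = "http::addr=localhost:9000;auto_flush_rows=5000;retry_timeout=10000;";
+try (Sender sender = Sender.fromConfig(conf)) {
+    // ... build rows and flush
+}
+```
+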
@@ -118,10 +120,8 @@ Below is a list of common parameters that ILP clients will accept.
These params facilitate connection to QuestDB's ILP server and define
client-specific behaviors.
-Some are shared across all clients, while some are client specific.
-
-See the [Usage section](/docs/reference/api/ilp/overview/#usage) for write
-examples that use these schemas.
+Some are shared across all clients, while others are client-specific. Refer to
+each client's documentation for details.
:::warning
@@ -135,15 +135,11 @@ Exposing these values may expose your database to bad actors.
- **schema**: Specifies the transport method, with support for: `http`, `https`,
`tcp` & `tcps`
-- **addr**: The address and port of the QuestDB server.
+- **addr**: The address and port of the QuestDB server, as in `localhost:9000`.
#### HTTP Parameters
-- **username**: Username for HTTP authentication.
-- **password** (SENSITIVE): Password for HTTP Basic authentication.
-- **token** (SENSITIVE): Bearer token for HTTP Token authentication.
- - Open source HTTP users are unable to generate tokens. For TCP token auth,
- see the below section.
+- **password** (SENSITIVE): Password for HTTP Basic Authentication.
- **request_min_throughput**: Expected throughput for network send to the
database server, in bytes.
- Defaults to 100 KiB/s
@@ -156,6 +152,33 @@ Exposing these values may expose your database to bad actors.
milliseconds.
- Defaults to 10 seconds.
- Not all errors are retriable.
+- **token** (SENSITIVE): Bearer token for HTTP Token authentication.
+ - Open source HTTP users are unable to generate tokens. For TCP token auth,
+ see the below section.
+- **username**: Username for HTTP Basic Authentication.
+
+#### TCP Parameters
+
+:::note
+
+These parameters are only useful when using ILP over TCP with authentication
+enabled. Most users should use ILP over HTTP. These parameters are listed for
+completeness and for users who have specific requirements.
+
+:::
+
+_See the [Authentication](/docs/reference/api/ilp/overview/#authentication) section below for configuration._
+
+- **auth_timeout**: Timeout for TCP authentication with QuestDB server, in
+ milliseconds.
+ - Default 15 seconds.
+- **token** (SENSITIVE): TCP Authentication `d` parameter.
+ - **token_x** (SENSITIVE): TCP Authentication `x` parameter.
+ - Used in C/C++/Rust/Python clients.
+ - **token_y** (SENSITIVE): TCP Authentication `y` parameter.
+ - Used in C/C++/Rust/Python clients.
+- **username**: Username for TCP authentication.
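+
+For illustration, a TCP configuration string that uses the sample `testUser1` key
+generated in the [Authentication](/docs/reference/api/ilp/overview/#authentication)
+section below (Java shown; Java only needs `username` and `token`, while the
+C/C++/Rust/Python clients also take `token_x` and `token_y`):
+
+```java
+// The token is the sample JWK "d" parameter from this page, not a real credential.
+try (Sender sender = Sender.fromConfig(
+        "tcp::addr=localhost:9009;"
+        + "username=testUser1;"
+        + "token=5UjEMuA0Pj5pjK8a-fa24dyIf-Es5mYny3oE_Wmus48;")) {
+    // ... ingest rows
+}
+```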
+
#### Auto-flushing behavior
@@ -164,10 +187,8 @@ Exposing these values may expose your database to bad actors.
- Default is “on” for clients that support auto-flushing (all except C, C++ &
Rust).
-- **auto_flush_rows**: Auto-flushing is triggered above this row count.
-
- - Defaults to `75,000` for HTTP, and `600` for TCP.
- - If set, this implies “auto_flush=on”.
+- **auto_flush_bytes**: Auto-flushing is triggered above this buffer size.
+ - Disabled by default.
- **auto_flush_interval**: Auto-flushing is triggered after this time period has
elapsed since the last flush, in milliseconds.
@@ -176,19 +197,27 @@ Exposing these values may expose your database to bad actors.
- This is not a periodic timer - it will only be checked on the next row
creation.
-- **auto_flush_bytes** Auto-flushing is triggered above this buffer size.
- - Disabled by default.
+- **auto_flush_rows**: Auto-flushing is triggered above this row count.
-#### Network configuration
+ - Defaults to `75,000` for HTTP, and `600` for TCP.
+ - If set, this implies “auto_flush=on”.
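+
+For illustration, a sketch that enables row- and time-based auto-flushing via the
+configuration string (Java shown; the thresholds are arbitrary examples):
+
+```java
+// Flush automatically after 1,000 buffered rows or 5 seconds since the last
+// flush, whichever triggers first. Setting auto_flush_rows implies auto_flush=on.
+try (Sender sender = Sender.fromConfig(
+        "http::addr=localhost:9000;auto_flush_rows=1000;auto_flush_interval=5000;")) {
+    // ... ingest rows; no explicit flush() call is needed
+}
+```
+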
-_Optional._
+#### Buffer configuration
-- **bind_interface**: Specify the local network interface for outbound
- connections.
- - Not to be confused with the QuestDB port in the `addr` param.
+- **init_buf_size**: Set the initial (but growable) size of the buffer in bytes.
+ - Defaults to `64 KiB`.
+- **max_buf_size**: Sets the growth limit of the buffer in bytes.
+ - Defaults to `100 MiB`.
+ - Clients will error if this is exceeded.
+- **max_name_len**: The maximum allowable number of UTF-8 bytes in the table or
+ column names.
+ - Defaults to `127`.
+ - Related to length limits for filenames on the user's host OS.
#### TLS configuration
+_QuestDB Enterprise only._
+
- **tls_verify**: Toggle verification of TLS certificates. Default is `on`.
- **tls_roots**: Specify the source of bundled TLS certificates.
- The defaults and possible param values are client-specific.
@@ -201,39 +230,201 @@ _Optional._
clients.
- Java for instance would apply `tls_roots=/path/to/Java/key/store`
-#### Buffer configuration
+#### Network configuration
-- **init_buf_size**: Set the initial (but growable) size of the buffer in bytes.
- - Defaults to `64 KiB`.
-- **max_buf_size**: Sets the growth limit of the buffer in bytes.
- - Defaults to `100 MiB`.
- - Clients will error if this is exceeded.
-- **max_name_len**: The maximum alloable number of UTF-8 bytes in the table or
- column names.
- - Defaults to `127`.
- - Related to length limits for filenames on the user's host OS.
+- **bind_interface**: Optionally, specify the local network interface for outbound
+  connections. Useful if you have multiple interfaces or an accelerated network interface (e.g. Solarflare).
+ - Not to be confused with the QuestDB port in the `addr` param.
-#### TCP Parameters
+## Error handling
+
+The HTTP transport supports automatic retries for failed requests deemed
+recoverable. Recoverable errors include network errors, some server errors, and
+timeouts, while non-recoverable errors encompass invalid data, authentication
+errors, and other client-side errors.
+
+Retrying is particularly beneficial during network issues or when the server is
+temporarily unavailable. The retrying behavior can be configured through the
+`retry_timeout` configuration option or, in some clients, via their API.
+The client continues to retry recoverable errors until they either succeed or the specified timeout is
+reached.
+
+The TCP transport lacks support for error propagation from the server. In such
+cases, the server merely closes the connection upon encountering an error. Consequently, the
+client receives no additional error information from the server. This limitation
+significantly contributes to the preference for HTTP transport over TCP
+transport.
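+
+For illustration, a sketch that widens the retry window via the configuration
+string (Java shown; the value is an arbitrary example):
+
+```java
+// Retry recoverable errors for up to 20 seconds before surfacing them as errors
+try (Sender sender = Sender.fromConfig(
+        "http::addr=localhost:9000;retry_timeout=20000;")) {
+    // ... ingest rows
+}
+```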
+
+## Authentication
:::note
-These parameters are only useful when using ILP over TCP with authentication
-enabled. Most users should use ILP over HTTP. These parameters are listed for
-completeness and for users who have specific requirements.
+Using [QuestDB Enterprise](/enterprise/)?
+
+Skip to [advanced security features](/docs/operations/rbac/) instead, which
+provides holistic security out-of-the-box.
:::
-- **username**: Username for TCP authentication.
-- **token** (SENSITIVE): TCP Authentication `d` parameter.
- - **token_x** (SENSITIVE): TCP Authentication `x` parameter.
- - Used in C/C++/Rust/Python clients.
- - **token_y** (SENSITIVE): TCP Authentication `y` parameter.
- - Used in C/C++/Rust/Python clients.
-- **auth_timeout**: Timeout for TCP authentication with QuestDB server, in
- milliseconds.
- - Default 15 seconds.
+InfluxDB Line Protocol supports authentication via HTTP Basic Authentication, configured with [the HTTP Parameters](/docs/reference/api/ilp/overview/#http-parameters), or via token when using the TCP transport, configured with [the TCP Parameters](/docs/reference/api/ilp/overview/#tcp-parameters).
+
+A similar pattern is used across all client libraries. To use a TCP token, you first
+need to configure your QuestDB server. The sections below break down and demonstrate
+the relevant configuration keys and options.
+
+Once a client has been selected and configured, resume from your language client
+documentation.
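+
+For illustration, a sketch of HTTP Basic Authentication via the configuration
+string (Java shown; `admin`/`quest` are the well-known defaults and should be
+changed in production):
+
+```java
+try (Sender sender = Sender.fromConfig(
+        "http::addr=localhost:9000;username=admin;password=quest;")) {
+    // ... ingest rows
+}
+```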
+
+##### TCP token authentication setup
+
+Create `d`, `x` & `y` tokens for client usage.
+
+##### Prerequisites
+
+- `jose`: C-language implementation of Javascript Object Signing and Encryption.
+ Generates tokens.
+- `jq`: For pretty JSON output.
+
+
-## Transactionality caveat
+
+
+```bash
+brew install jose
+brew install jq
+```
+
+
+
+
+
+```bash
+yum install jose
+yum install jq
+```
+
+
+
+
+
+```bash
+apt install jose
+apt install jq
+```
+
+
+
+
+
+##### Server configuration
+
+Next, create an authentication file.
+
+Only elliptic curve (P-256) keys are supported (key type `ec-p-256-sha256`):
+
+```bash
+testUser1 ec-p-256-sha256 fLKYEaoEb9lrn3nkwLDA-M_xnuFOdSt9y0Z7_vWSHLU Dt5tbS1dEDMSYfym3fgMv0B99szno-dFc1rYF9t0aac
+# [key/user id] [key type] {keyX keyY}
+```
+
+Generate an authentication file using the `jose` utility:
+
+```bash
+jose jwk gen -i '{"alg":"ES256", "kid": "testUser1"}' -o /var/lib/questdb/conf/full_auth.json
+
+KID=$(cat /var/lib/questdb/conf/full_auth.json | jq -r '.kid')
+X=$(cat /var/lib/questdb/conf/full_auth.json | jq -r '.x')
+Y=$(cat /var/lib/questdb/conf/full_auth.json | jq -r '.y')
+
+echo "$KID ec-p-256-sha256 $X $Y" | tee /var/lib/questdb/conf/auth.txt
+```
+
+Once created, reference it in the server [configuration](/docs/configuration/):
+
+```ini title='/path/to/server.conf'
+line.tcp.auth.db.path=conf/auth.txt
+```
+
+##### Client keys
+
+For the server configuration above, the corresponding JSON Web Key must be
+stored on the client side.
+
+When sending a fully-composed JWK, it will have the following keys:
+
+```json
+{
+ "kty": "EC",
+ "d": "5UjEMuA0Pj5pjK8a-fa24dyIf-Es5mYny3oE_Wmus48",
+ "crv": "P-256",
+ "kid": "testUser1",
+ "x": "fLKYEaoEb9lrn3nkwLDA-M_xnuFOdSt9y0Z7_vWSHLU",
+ "y": "Dt5tbS1dEDMSYfym3fgMv0B99szno-dFc1rYF9t0aac"
+}
+```
+
+The `d` parameter is the private key, while `x` and `y` are the public key coordinates.
+
+For example, the Python client would be configured as outlined in the
+[Python docs](https://py-questdb-client.readthedocs.io/en/latest/conf.html#tcp-auth).
+
+## Table and column auto-creation
+
+When sending data to a table that does not exist, the server will create the
+table automatically. This also applies to columns that do not exist. The server
+will use the first row of data to determine the column types.
+
+If the table already exists, the server will validate that the columns match the
+existing table. If the columns do not match, the server will return a
+non-recoverable error which, when using the HTTP/HTTPS transport, is propagated to the client.
+
+You can disable table and/or column auto-creation by setting the `line.auto.create.new.columns`
+and `line.auto.create.new.tables` configuration parameters to `false`.
+
+If you're using QuestDB Enterprise, you must grant further permissions to the
+authenticated user:
+
+```sql
+CREATE SERVICE ACCOUNT ingest_user; -- creates a service account to be used by a client
+GRANT ilp, create table TO ingest_user; -- grants permissions to ingest data and create tables
+GRANT add column, insert ON all tables TO ingest_user; -- grants permissions to add columns and insert data to all tables
+-- OR
+GRANT add column, insert ON table1, table2 TO ingest_user; -- grants permissions to add columns and insert data to specific tables
+```
+
+Read more setup details in the
+[Enterprise quickstart](/docs/guides/enterprise-quick-start/#4-ingest-data-influxdb-line-protocol)
+and the [role-based access control](/docs/operations/rbac/) guides.
+
+## Timestamp Column Name
+
+The underlying ILP protocol sends timestamps to QuestDB without a column name.
+
+If your table has been created beforehand, the designated timestamp will be correctly
+assigned based on the payload sent by the client. But if your table does not
+exist, it will be automatically created and the timestamp column will be named
+`timestamp`. To use a custom name, say `my_ts`, pre-create the table with the desired
+timestamp column name.
+
+To do so, issue a `CREATE TABLE` statement to create the table in advance:
+
+```questdb-sql title="Creating a timestamp named my_ts"
+CREATE TABLE IF NOT EXISTS 'trades' (
+ symbol SYMBOL capacity 256 CACHE,
+ side SYMBOL capacity 256 CACHE,
+ price DOUBLE,
+ amount DOUBLE,
+ my_ts TIMESTAMP
+) timestamp (my_ts) PARTITION BY DAY WAL;
+```
+
+You can use the `CREATE TABLE IF NOT EXISTS` construct to make sure the table is
+created, but without raising an error if the table already exists.
+
+## HTTP transaction semantics
+
+The TCP endpoint does not support transactions. The HTTP ILP endpoint treats every request as an individual transaction, so long as it contains rows for a single table.
As of writing, the HTTP endpoint does not provide full transactionality in all
cases.
@@ -243,8 +434,65 @@ Specifically:
- If an HTTP request contains data for two tables and the final commit fails for
the second table, the data for the first table will still be committed. This
is a deviation from full transactionality, where a failure in any part of the
- transaction would result in the entire transaction being rolled back.
+  transaction would result in the entire transaction being rolled back. If data
+  transactionality is important to you, the best practice is to make sure you
+  flush data to the server in batches that contain rows for a single table, as
+  shown in the sketch after this list.
+
+- Even when you are sending data to a single table, an implicit commit occurs each
+  time a new column is dynamically added to the table. If the request is aborted or
+  has parse errors, no data will be inserted into the corresponding table, but the
+  new column will be added and will not be rolled back.
+
+- Some clients have built-in support for controlling transactions. These APIs help to
+  comply with the single-table-per-request prerequisite for HTTP transactions, but
+  they do not prevent new columns from being added.
+
+- As of writing, if you want to make sure you have data transactionality and
+ schema/metadata transactionality, you should disable `line.auto.create.new.columns` and
+  `line.auto.create.new.tables` in your configuration. Be aware that if you do this,
+ you will not have dynamic schema capabilities and you will need to create each table
+ and column before you try to ingest data, via [`CREATE TABLE`](/docs/reference/sql/create-table/) and/or [`ALTER TABLE ADD COLUMN`](/docs/reference/sql/alter-table-add-column/) SQL statements.
+
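+As a sketch of the single-table batching practice (Java shown; the second table
+name is hypothetical, and auto-flushing is disabled so that flushes happen only
+at the explicit points):
+
+```java
+try (Sender sender = Sender.fromConfig("http::addr=localhost:9000;auto_flush=off;")) {
+    sender.table("trades")
+            .symbol("symbol", "ETH-USD")
+            .doubleColumn("price", 2615.54)
+            .atNow();
+    sender.flush(); // this request contains only 'trades' rows
+
+    sender.table("weather") // hypothetical second table
+            .doubleColumn("tempC", 23.5)
+            .atNow();
+    sender.flush(); // this request contains only 'weather' rows
+}
+```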
+
+## Exactly-once delivery vs at-least-once delivery
+
+The retrying behavior of the HTTP transport can lead to some data being sent to
+the server more than once.
+
+**Example**: Client sends a batch to the server, the server receives the batch,
+processes it, but fails to send a response back to the client due to a network
+error. The client will retry sending the batch to the server. This means the
+server will receive the batch again and process it again. This can lead to
+duplicated rows in the server.
+
+There are two ways to mitigate this issue:
+
+- Use [QuestDB deduplication feature](/docs/concept/deduplication/) to remove
+ duplicated rows. QuestDB server can detect and remove duplicated rows
+ automatically, resulting in exactly-once processing. This is recommended when
+ using the HTTP transport with retrying enabled.
+- Disable retrying by setting `retry_timeout` to 0. The client will send each batch
+  only once; failed requests will not be retried and the client will receive an
+  error. This effectively turns the client into an at-most-once delivery client, as
+  sketched below.
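+
+A sketch of the second option (Java shown; the same key works in the other
+clients' configuration strings):
+
+```java
+// retry_timeout=0 disables retries: each batch is sent at most once and
+// failures surface as errors to the caller
+try (Sender sender = Sender.fromConfig(
+        "http::addr=localhost:9000;retry_timeout=0;")) {
+    // ... ingest rows
+}
+```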
+
+## Health Check
+
+To monitor your active connection, there is a `ping` endpoint:
+
+```shell
+curl -I http://localhost:9000/ping
+```
+
+Returns (pong!):
+
+```shell
+HTTP/1.1 204 OK
+Server: questDB/1.0
+Date: Fri, 2 Feb 2024 17:09:38 GMT
+Transfer-Encoding: chunked
+Content-Type: text/plain; charset=utf-8
+X-Influxdb-Version: v2.7.4
+```
-- When adding new columns to a table, an implicit commit occurs each time a new
- column is added. If the request is aborted or has parse errors, this commit
- cannot be rolled back.
+Determine whether an instance is active and confirm the version of InfluxDB Line
+Protocol with which you are interacting.
diff --git a/third-party-tools/cube.md b/third-party-tools/cube.md
index fe63366a..b67183cc 100644
--- a/third-party-tools/cube.md
+++ b/third-party-tools/cube.md
@@ -1,6 +1,7 @@
---
title: "Cube"
-description: Yaa
+description:
+ Guide for QuestDB and Cube integration.
---
Cube is middleware that connects your data sources and your data applications.
@@ -47,6 +48,7 @@ services:
image: "cubejs/cube:latest"
ports:
- "4000:4000"
+ env_file: "cube.env"
volumes:
- ".:/cube/conf"
questdb:
@@ -58,13 +60,13 @@ services:
- "8812:8812"
```
-Within your project directory, create an `.env` file.
+Within your project directory, create a `cube.env` file.
These variables will allow Cube to connect to your QuestDB deployment.
Remember: default passwords are dangerous! We recommend altering them.
-```shell title=.env
+```bash title=cube.env
CUBEJS_DB_HOST=questdb
CUBEJS_DB_PORT=8812
CUBEJS_DB_NAME=qdb
@@ -73,6 +75,12 @@ CUBEJS_DB_PASS=quest
CUBEJS_DB_TYPE=questdb
```
+Create a `model` directory to be used by Cube:
+
+```bash
+mkdir model
+```
+
Finally, bring it all up with Docker:
```bash title=shell
diff --git a/third-party-tools/embeddable.md b/third-party-tools/embeddable.md
index 1903dccf..c1d7ee68 100644
--- a/third-party-tools/embeddable.md
+++ b/third-party-tools/embeddable.md
@@ -9,11 +9,11 @@ Embeddable is a developer toolkit for building fast, interactive customer-facing
analytics. It works well with a high performance time-series database like
QuestDB.
-In [Embeddable](https://embeddable.com/) define
+In [Embeddable](https://embeddable.com/) you define
[Data Models](https://trevorio.notion.site/Data-modeling-35637bbbc01046a1bc47715456bfa1d8)
and
[Components](https://trevorio.notion.site/Using-components-761f52ac2d0743b488371088a1024e49)
-in code stored in your own code repository, then use the **SDK** to make these
+in code, which are stored in your own code repository, then use the **SDK** to make these
available for your team in the powerful Embeddable **no-code builder.** The end
result is the ability to deliver fast, interactive **customer-facing analytics**
directly into your product.
@@ -65,30 +65,16 @@ The above represents a `CREATE` action, but all `CRUD` operations are available.
The `apiKey` can be found by clicking “**Publish**” on one of your Embeddable
dashboards.
-The `name` is a unique name to identify this **connection**.
-
-- By default your **data models** will look for a **connection** called
- “default”, but you can supply models with different
- [**data_source**](https://cube.dev/docs/reference/data-model/cube#data_source)
- names to support connecting different **data models** to different
- **connections**. To do so , specify the
- **[data_source](https://cube.dev/docs/reference/data-model/cube#data_source)**
- name in the model.
-
-The `type` tells Embeddable which driver to use, in this case `questdb`. You can
-also connect multiple datasources like `postgres`, `bigquery` or `mongodb`. For
-a full list, see
-[the documentaiton](https://cube.dev/docs/product/configuration/data-sources).
-
-The `credentials` is a javascript object containing the credentials expected by
-the driver:
-
-- Credentials are securely encrypted and only used to retrieve exactly the data
- described in the data models.
-- Emeddable strongly encourages you to create a **read-only** database user for
- each connection. Embeddable will only ever read from your database, not write.
-
-To support connecting to different databases for prod, qa, test, etc, or to
-support different databases for different customers, you can assign each
-**connection** to an **environment**. For more information, see
-[Environments API](https://www.notion.so/Environments-API-497169036b5148b38f7936aa75e62949?pvs=21).
+The `name` is a unique name to identify this connection.
+
+- By default your data models will look for a connection called “default”, but you can supply your models with different `data_source` names to support connecting different data models to different connections (simply specify the `data_source` name in the model).
+
+The `type` tells Embeddable which driver to use.
+
+- Here you'll want to use `questdb`, but you can connect multiple different datasources to one Embeddable workspace, so you may also use others such as `postgres`, `bigquery`, `mongodb`, etc.
+
+The `credentials` is a JavaScript object containing the necessary credentials expected by the driver.
+
+- These are securely encrypted and only used to retrieve exactly the data you have described in your data models.
+- Embeddable strongly encourages you to create a read-only database user for each connection (Embeddable will only ever read from your database, not write).
+
+In order to support connecting to different databases for prod, qa, test, etc. (or to support different databases for different customers), you can assign each connection to an environment (see [Environments API](https://www.notion.so/Environments-API-497169036b5148b38f7936aa75e62949?pvs=21)).
diff --git a/third-party-tools/grafana.md b/third-party-tools/grafana.md
index 2003c20b..c0d30d11 100644
--- a/third-party-tools/grafana.md
+++ b/third-party-tools/grafana.md
@@ -2,7 +2,7 @@
title: Grafana
description:
Guide for fastest, high performance time-series data visualizations with
- QuestDB and Grafana
+ QuestDB and Grafana.
---
import Screenshot from "@theme/Screenshot"