diff --git a/.gitignore b/.gitignore
index 1213f6978..5594217f5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -15,4 +15,7 @@ dependency-reduced-pom.xml
# Others
.DS_Store
-*.swp
+*.swp
+**/local
+Scripts
+.dbeaver*
\ No newline at end of file
diff --git a/.travis.yml b/.travis.yml
index e8112099a..bebeeea41 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -11,4 +11,6 @@ matrix:
include:
- jdk: "oraclejdk8"
+before_script: ./jdbc-adapter/tools/version.sh verify
+
script: ./jdbc-adapter/integration-test-data/run_integration_tests.sh
diff --git a/README.md b/README.md
index c57721a79..d08debe70 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
[![Build Status](https://travis-ci.org/EXASOL/virtual-schemas.svg?branch=master)](https://travis-ci.org/EXASOL/virtual-schemas)
-###### Please note that this is an open source project which is officially supported by Exasol. For any question, you can contact our support team.
+
⚠ Please note that this is an open source project which is officially supported by Exasol. For any question, you can contact our support team.
Virtual schemas provide a powerful abstraction to conveniently access arbitrary data sources. Virtual schemas are a kind of read-only link to an external source and contain virtual tables which look like regular tables except that the actual data are not stored locally.
@@ -14,16 +14,14 @@ Please note that virtual schemas are part of the Advanced Edition of Exasol.
For further details about the concept, usage and examples, please see the corresponding chapter in our Exasol User Manual.
-
## API Specification
The subdirectory [doc](doc) contains the API specification for virtual schema adapters.
-
## JDBC Adapter
The subdirectory [jdbc-adapter](jdbc-adapter) contains the JDBC adapter which allows to integrate any kind of JDBC data source which provides a JDBC driver.
## Python Redis Demo Adapter
-The subdirectory [python-redis-demo-adapter](python-redis-demo-adapter) contains a demo adapter for Redis written in Python. This adapter was created to easily demonstrate the key concepts in a real, but very simple implementation. If you want to write your own adapter, this might be the right code to get a first impression what you'll have to develop.
+The subdirectory [python-redis-demo-adapter](python-redis-demo-adapter) contains a demo adapter for Redis written in Python. This adapter was created to easily demonstrate the key concepts in a real, but very simple implementation. If you want to write your own adapter, this might be the right code to get a first impression what you'll have to develop.
\ No newline at end of file
diff --git a/jdbc-adapter/README.md b/jdbc-adapter/README.md
index 7dc389ab4..cd912a583 100644
--- a/jdbc-adapter/README.md
+++ b/jdbc-adapter/README.md
@@ -1,41 +1,52 @@
# JDBC Adapter for Virtual Schemas
-[![Build Status](https://travis-ci.org/EXASOL/virtual-schemas.svg?branch=master)](https://travis-ci.org/EXASOL/virtual-schemas)
+[![Build Status](https://travis-ci.org/EXASOL/virtual-schemas.svg)](https://travis-ci.org/EXASOL/virtual-schemas)
+
+## Supported Dialects
+
+1. [EXASOL](doc/sql_dialects/exasol.md)
+1. [Hive](doc/sql_dialects/hive.md)
+1. [Impala](doc/sql_dialects/impala.md)
+1. [DB2](doc/sql_dialects/db2.md)
+1. [Oracle](doc/sql_dialects/oracle.md)
+1. [Teradata](doc/sql_dialects/teradata.md)
+1. [Redshift](doc/sql_dialects/redshift.md)
+1. [SQL Server](doc/sql_dialects/sql_server.md)
+1. [Sybase ASE](doc/sql_dialects/sybase.md)
+1. [PostgreSQL](doc/sql_dialects/postgresql.md)
+1. Generic
## Overview
The JDBC adapter for virtual schemas allows you to connect to JDBC data sources like Hive, Oracle, Teradata, Exasol or any other data source supporting JDBC. It uses the well proven ```IMPORT FROM JDBC``` Exasol statement behind the scenes to obtain the requested data, when running a query on a virtual table. The JDBC adapter also serves as the reference adapter for the Exasol virtual schema framework.
-The JDBC adapter currently supports the following SQL dialects and data sources. This list will be continuously extended based on the feedback from our users:
-* Exasol
-* Hive
-* Impala
-* Oracle
-* Teradata
-* Redshift
-* DB2
-* SQL Server
-* PostgreSQL
+Check the [SQL dialect list](doc/supported_sql_dialects.md) to learn which SQL dialects the JDBC adapter currently supports.
+
+This list will be continuously extended based on the feedback from our users.
Each such implementation of a dialect handles three major aspects:
+
* How to **map the tables** in the source systems to virtual tables in Exasol, including how to **map the data types** to Exasol data types.
-* How is the **SQL syntax** of the data source, including identifier quoting, case-sensitivity, function names, or special syntax like `LIMIT`/`TOP`.
+* What the **SQL syntax** of the data source looks like, including identifier quoting, case-sensitivity, function names, or special syntax like `LIMIT` / `TOP`.
* Which **capabilities** are supported by the data source. E.g. is it supported to run filters, to specify select list expressions, to run aggregation or scalar functions or to order or limit the result.
In addition to the aforementioned dialects there is the so called `GENERIC` dialect, which is designed to work with any JDBC driver. It derives the SQL dialect from the JDBC driver metadata. However, it does not support any capabilities and might fail if the data source has special syntax or data types, so it should only be used for evaluation purposes.
-If you are interested in a introduction to virtual schemas please refer to the Exasol user manual. You can find it in the [download area of the Exasol user portal](https://www.exasol.com/portal/display/DOWNLOAD/6.0).
+If you are interested in an introduction to virtual schemas please refer to the Exasol user manual. You can find it in the [download area of the Exasol user portal](https://www.exasol.com/portal/display/DOC/Database+User+Manual).
+## Before you Start
+
+Please note that the syntax for creating adapter scripts is not recognized by all SQL clients, [DBeaver](https://dbeaver.io/) for example. If you encounter such a problem, try a different client.
## Getting Started
Before you can start using the JDBC adapter for virtual schemas you have to deploy the adapter and the JDBC driver of your data source in your Exasol database.
-Please follow the [step-by-step deployment guide](doc/deploy-adapter.md).
-
+Please follow the [step-by-step deployment guide](doc/deploying_the_virtual_schema_adapter.md).
## Using the Adapter
The following statements demonstrate how you can use virtual schemas with the JDBC adapter to connect to a Hive system. Please scroll down to see a list of all properties supported by the JDBC adapter.
First we create a virtual schema using the JDBC adapter. The adapter will retrieve the metadata via JDBC and map them to virtual tables. The metadata (virtual tables, columns and data types) are then cached in Exasol.
+
```sql
CREATE CONNECTION hive_conn TO 'jdbc:hive2://localhost:10000/default' USER 'hive-usr' IDENTIFIED BY 'hive-pwd';
@@ -46,6 +57,7 @@ CREATE VIRTUAL SCHEMA hive USING adapter.jdbc_adapter WITH
```
We can now explore the tables in the virtual schema, just like for a regular schema:
+
```sql
OPEN SCHEMA hive;
SELECT * FROM cat;
@@ -53,40 +65,45 @@ DESCRIBE clicks;
```
And we can run arbitrary queries on the virtual tables:
+
```sql
SELECT count(*) FROM clicks;
SELECT DISTINCT USER_ID FROM clicks;
```
-Behind the scenes the Exasol command `IMPORT FROM JDBC` will be executed to obtain the data needed from the data source to fulfil the query. The Exasol database interacts with the adapter to pushdown as much as possible to the data source (e.g. filters, aggregations or `ORDER BY`/`LIMIT`), while considering the capabilities of the data source.
+Behind the scenes the Exasol command `IMPORT FROM JDBC` will be executed to obtain the data needed from the data source to fulfil the query. The Exasol database interacts with the adapter to pushdown as much as possible to the data source (e.g. filters, aggregations or `ORDER BY` / `LIMIT`), while considering the capabilities of the data source.
Let's combine a virtual and a native table in a query:
-```
+
+```sql
SELECT * from clicks JOIN native_schema.users on clicks.userid = users.id;
```
You can refresh the schemas metadata, e.g. if tables were added in the remote system:
+
```sql
ALTER VIRTUAL SCHEMA hive REFRESH;
ALTER VIRTUAL SCHEMA hive REFRESH TABLES t1 t2; -- refresh only these tables
```
-Or set properties. Depending on the adapter and the property you set this might update the metadata or not. In our example the metadata are affected, because afterwards the virtual schema will only expose two virtul tables.
+Or set properties. Depending on the adapter and the property you set, this might or might not update the metadata. In our example the metadata are affected, because afterwards the virtual schema will only expose two virtual tables.
+
```sql
ALTER VIRTUAL SCHEMA hive SET TABLE_FILTER='CUSTOMERS, CLICKS';
```
Finally you can unset properties:
+
```sql
ALTER VIRTUAL SCHEMA hive SET TABLE_FILTER=null;
```
Or drop the virtual schema:
+
```sql
DROP VIRTUAL SCHEMA hive CASCADE;
```
-
### Adapter Properties
The following properties can be used to control the behavior of the JDBC adapter. As you see above, these properties can be defined in `CREATE VIRTUAL SCHEMA` or changed afterwards via `ALTER VIRTUAL SCHEMA SET`. Note that properties are always strings, like `TABLE_FILTER='T1,T2'`.
@@ -129,14 +146,17 @@ Property | Value
## Debugging
+
To see all communication between the database and the adapter you can use the python script udf_debug.py located in the [tools](tools) directory.
First, start the `udf_debug.py` script, which will listen on the specified address and print all incoming text.
-```
+
+```sh
python tools/udf_debug.py -s myhost -p 3000
```
Then run following SQL statement in your session to redirect all stdout and stderr from the adapter script to the `udf_debug.py` script we started before.
+
```sql
ALTER SESSION SET SCRIPT_OUTPUT_ADDRESS='host-where-udf-debug-script-runs:3000'
```
@@ -145,12 +165,23 @@ You have to make sure that Exasol can connect to the host running the `udf_debug
## Frequent Issues
-* **Error: No suitable driver found for JDBC...**: The JDBC driver class was not discovered automatically. Either you have to add a `META-INF/services/java.sql.Driver` file with the classname to your jar, or you have to load the driver manually (see `JdbcMetadataReader.readRemoteMetadata()`).
+
+### Error: No suitable driver found for JDBC...
+
+The JDBC driver class was not discovered automatically. Either you have to add a `META-INF/services/java.sql.Driver` file with the class name to your JAR, or you have to load the driver manually (see `JdbcMetadataReader.readRemoteMetadata()`).
+
See https://docs.oracle.com/javase/7/docs/api/java/sql/DriverManager.html
-* **Very slow execution of queries with SCRIPT_OUTPUT_ADDRESS**: If `SCRIPT_OUTPUT_ADDRESS` is set as explained in the [debugging section](#debugging), verify that a service is actually listening at that address. Otherwise, if Exasol can not establish a connection, repeated connection attempts can be the cause for slowdowns.
-* **Very slow execution of queries**: Depending on which JDK version Exasol uses to execute Java user-defined functions, a blocking randomness source may be used by default. Especially cryptographic operations do not complete until the operating system has collected a sufficient amount of entropy. This problem seems to occur most often when Exasol is run in an isolated environment, e.g., a virtual machine or a container. A solution is to use a non-blocking randomness source.
- To do so, log in to EXAOperation and shutdown the database. Append `-etlJdbcJavaEnv -Djava.security.egd=/dev/./urandom` to the "Extra Database Parameters" input field and power the database on again.
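+
+As a sketch, such a services file contains nothing more than the fully qualified name of the driver class on a single line; the class name below is only a placeholder for the driver you actually use:
+
+```
+com.example.jdbc.Driver
+```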
+
+### Very Slow Execution of Queries With SCRIPT_OUTPUT_ADDRESS
+
+If `SCRIPT_OUTPUT_ADDRESS` is set as explained in the [debugging section](#debugging), verify that a service is actually listening at that address. Otherwise, if Exasol cannot establish a connection, repeated connection attempts can cause slowdowns.
+
+### Very Slow Execution of Queries
+
+Depending on which JDK version Exasol uses to execute Java user-defined functions, a blocking random-number source may be used by default. Cryptographic operations in particular do not complete until the operating system has collected a sufficient amount of entropy. This problem seems to occur most often when Exasol is run in an isolated environment, e.g., a virtual machine or a container. A solution is to use a non-blocking random-number source.
+
+To do so, log in to EXAOperation and shutdown the database. Append `-etlJdbcJavaEnv -Djava.security.egd=/dev/./urandom` to the "Extra Database Parameters" input field and power the database on again.
## Developing New Dialects
-If you want to contribute a new dialect please visit the guide [how to develop and test a dialect](doc/develop-dialect.md).
+If you want to contribute a new dialect, please read the guide [how to develop and test a dialect](doc/developing_an_sql_dialect.md).
\ No newline at end of file
diff --git a/jdbc-adapter/doc/deploy-adapter.md b/jdbc-adapter/doc/deploying_the_virtual_schema_adapter.md
similarity index 51%
rename from jdbc-adapter/doc/deploy-adapter.md
rename to jdbc-adapter/doc/deploying_the_virtual_schema_adapter.md
index f6eeb8e9f..2aefc8948 100644
--- a/jdbc-adapter/doc/deploy-adapter.md
+++ b/jdbc-adapter/doc/deploying_the_virtual_schema_adapter.md
@@ -1,56 +1,70 @@
-## Deploying the Adapter Step By Step
+# Deploying the Adapter Step By Step
Run the following steps to deploy your adapter:
-### 1. Prerequisites:
-* EXASOL >= 6.0
+## Prerequisites
+
+* Exasol Version 6.0 or later
* Advanced edition (which includes the ability to execute adapter scripts), or Free Small Business Edition
-* EXASOL must be able to connect to the host and port specified in the JDBC connection string. In case of problems you can use a [UDF to test the connectivity](https://www.exasol.com/support/browse/SOL-307).
-* If the JDBC driver requires Kerberos authentication (e.g. for Hive or Impala), the EXASOL database will authenticate using a keytab file. Each EXASOL node needs access to port 88 of the the Kerberos KDC (key distribution center).
+* Exasol must be able to connect to the host and port specified in the JDBC connection string. In case of problems you can use a [UDF to test the connectivity](https://www.exasol.com/support/browse/SOL-307).
+* If the JDBC driver requires Kerberos authentication (e.g. for Hive or Impala), the Exasol database will authenticate using a keytab file. Each Exasol node needs access to port 88 of the Kerberos KDC (key distribution center).
-### 2. Obtain Jar:
+## Obtaining JAR Archives
-First you have to obtain the so called fat jar (including all dependencies).
+First you have to obtain the so-called fat JAR (including all dependencies).
-The easiest way is to download the jar from the last [Release](https://github.com/EXASOL/virtual-schemas/releases).
+The easiest way is to download the JAR from the last [Release](https://github.com/Exasol/virtual-schemas/releases).
-Alternatively you can clone the repository and build the jar as follows:
-```
-git clone https://github.com/EXASOL/virtual-schemas.git
+Alternatively you can clone the repository and build the JAR as follows:
+
+```bash
+git clone https://github.com/Exasol/virtual-schemas.git
cd virtual-schemas/jdbc-adapter/
mvn clean -DskipTests package
```
-The resulting fat jar is stored in `virtualschema-jdbc-adapter-dist/target/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar`.
+The resulting fat JAR is stored in `virtualschema-jdbc-adapter-dist/target/virtualschema-jdbc-adapter-dist-1.1.0.jar`.
-### 3. Upload Adapter Jar
+## Uploading the Adapter JAR Archive
-You have to upload the jar of the adapter to a bucket of your choice in the EXASOL bucket file system (BucketFS). This will allow using the jar in the adapter script.
+You have to upload the JAR of the adapter to a bucket of your choice in the Exasol bucket file system (BucketFS). This will allow using the JAR in the adapter script.
Following steps are required to upload a file to a bucket:
-* Make sure you have a bucket file system (BucketFS) and you know the port for either http or https. This can be done in EXAOperation under "EXABuckets". E.g. the id could be `bucketfs1` and the http port 2580.
-* Check if you have a bucket in the BucketFS. Simply click on the name of the BucketFS in EXAOperation and add a bucket there, e.g. `bucket1`. Also make sure you know the write password. For simplicity we assume that the bucket is defined as a public bucket, i.e. it can be read by any script.
-* Now upload the file into this bucket, e.g. using curl (adapt the hostname, BucketFS port, bucket name and bucket write password).
-```
-curl -X PUT -T virtualschema-jdbc-adapter-dist/target/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar \
- http://w:write-password@your.exasol.host.com:2580/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar
+
+1. Make sure you have a bucket file system (BucketFS) and you know the port for either HTTP or HTTPS.
+
+ This can be done in EXAOperation under "EXABuckets". E.g. the id could be `bucketfs1` and the HTTP port 2580.
+
+1. Check if you have a bucket in the BucketFS. Simply click on the name of the BucketFS in EXAOperation and add a bucket there, e.g. `bucket1`.
+
+ Also make sure you know the write password. For simplicity we assume that the bucket is defined as a public bucket, i.e. it can be read by any script.
+
+1. Now upload the file into this bucket, e.g. using curl (adapt the hostname, BucketFS port, bucket name and bucket write password).
+
+```bash
+curl -X PUT -T virtualschema-jdbc-adapter-dist/target/virtualschema-jdbc-adapter-dist-1.1.0.jar \
+ http://w:write-password@your.exasol.host.com:2580/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar
```
See chapter 3.6.4. "The synchronous cluster file system BucketFS" in the EXASolution User Manual for more details about BucketFS.
-### 4. Upload JDBC Driver Files
+## Deploying JDBC Driver Files
+
+You have to upload the JDBC driver files of your remote database **twice**:
-You have to upload the JDBC driver files of your remote database **two times**:
-* Upload all files of the JDBC driver into a bucket of your choice, so that they can be accessed from the adapter script. This happens the same way as described above for the adapter jar. You can use the same bucket.
+* Upload all files of the JDBC driver into a bucket of your choice, so that they can be accessed from the adapter script.
+ This happens the same way as described above for the adapter JAR. You can use the same bucket.
* Upload all files of the JDBC driver as a JDBC driver in EXAOperation
- In EXAOperation go to Software -> JDBC Drivers
- - Add the JDBC driver by specifying the jdbc main class and the prefix of the JDBC connection string
+ - Add the JDBC driver by specifying the JDBC main class and the prefix of the JDBC connection string
  - Upload all files of the JDBC driver (one by one) to the newly added JDBC driver.
-Note that some JDBC drivers consist of several files and that you have to upload all of them. To find out which jar you need, consult the [supported dialects page](supported-dialects.md).
+Note that some JDBC drivers consist of several files and that you have to upload all of them. To find out which JAR you need, consult the [supported dialects page](supported_sql_dialects.md).
+
+## Deploying the Adapter Script
-### 5. Deploy Adapter Script
Then run the following SQL commands to deploy the adapter in the database:
+
```sql
-- The adapter is simply a script. It has to be stored in any regular schema.
CREATE SCHEMA adapter;
@@ -61,7 +75,7 @@ CREATE JAVA ADAPTER SCRIPT adapter.jdbc_adapter AS
// This will add the adapter jar to the classpath so that it can be used inside the adapter script
// Replace the names of the bucketfs and the bucket with the ones you used.
- %jar /buckets/your-bucket-fs/your-bucket/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
+ %jar /buckets/your-bucket-fs/your-bucket/virtualschema-jdbc-adapter-dist-1.1.0.jar;
// You have to add all files of the data source jdbc driver here (e.g. Hive JDBC driver files)
%jar /buckets/your-bucket-fs/your-bucket/name-of-data-source-jdbc-driver.jar;
diff --git a/jdbc-adapter/doc/develop-dialect.md b/jdbc-adapter/doc/develop-dialect.md
deleted file mode 100644
index 99792f768..000000000
--- a/jdbc-adapter/doc/develop-dialect.md
+++ /dev/null
@@ -1,111 +0,0 @@
-# How To Develop and Test a Dialect
-This page describes how you can develop and semi-automatically test a dialect for the JDBC adapter. The framework for testing a dialect is still work in progress.
-
-# Content
-* [How To Develop a Dialect](#how-to-develop-a-dialect)
-* [How To Start Integration Tests](#how-to-start-integration-tests)
-
-## How To Develop a Dialect
-You can implement a dialect by implementing the interface `com.exasol.adapter.dialects.SqlDialect`.
-We recommend to look at the following ressources to get started:
-* First have a look at the [SqlDialect interface source code](../virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialect.java). You can start with the comments of the interface and have a look at the methods you can override.
-* Second you can review the source code of one of the [dialect implementations](../virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl) as an inspiration. Ideally you should look at the dialect which is closest to your data source.
-
-To implement a full dialect for a typical data source you have to run all of the following steps. We recommend to follow the order proposed here.
-
-### Setup Data Source
-* Setup and start the database
-* Testdata: Create a test schema with a simple table (simple data types)
-
-### Setup EXASOL
-* Setup and start an EXASOL database with virtual schemas feature
-* Upload the JDBC drivers of the data source via EXAOperation
-* Manual test: query data from the data source via `IMPORT FROM JDBC`
-
-### Catalog, Schema & Table Mapping
-* Override the `SqlDialect` methods for catalog, schema and table mapping
-* Manual test: create a virtual schema by specifying the catalog and/or schema.
-
-### Data Type Mapping
-* Testdata: Create a table with all data types and at least one row of data
-* Override the `SqlDialect` method for data type mapping
-* Automatic test: sys tables show virtual table and columns with correctly mapped type
-* Automatic test: running `SELECT` on the virtual table returns the expected result
-
-### Identifier Case Handling & Quoting
-* Testdata: Create a schema/table/column with mixed case (if supported)
-* Automatic test: sys tables correct
-* Automatic test: `SELECT` works as expected
-
-### Projection Capability
-* Add capability
-* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`). Also test with mixed case columns.
-
-### Predicates and Literal Capabilities
-* Add capabilities for supported literals and predicates (e.g. `c1='foo'`)
-* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`) for all predicates & literals
-
-### Aggregation & Set Function Capabilities
-* Add capabilities for aggregations and aggregation functions
-* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`) for all set functions
-
-### Order By / Limit Capabilities
-* Testdata: Create a table with null values and non-null values, to check null collation.
-* Add capabilities for order by and/or limit
-* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`)
-* Automatic test: default null collation, explicit `NULLS FIRST/LAST`
-
-### Scalar Function Capabilities
-* Add capabilities for scalar functions
-* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`)
-
-### Views
-* Testdata: Create a simple view, e.g. joining two existing tables
-* Automatic test: Query the view, optionally e.g. with a filter.
-
-
-## How To Start Integration Tests
-We assume that you have a running EXASOL and data source database with all required test tables.
-
-We use following Maven phases for our integration tests:
-* pre-integration-test phase is used to automatically deploy the latest jdbc adapter jar (based on your latest code modifications)
-* integration-test phase is used to execute the actual integration tests
-
-Note that to check whether the integration-tests were successful, you have to run the verify Maven phase.
-
-You can start the integration tests as follows:
-```
-mvn clean package && mvn verify -Pit -Dintegrationtest.configfile=/path/to/your/integration-test-config.yaml
-```
-
-This will run all integration tests, i.e. all junit tests with the suffix "IT" in the filename. The yaml configuration file stores the information for your test environment like jdbc connection strings, paths and credentials.
-
-## Java Remote Debugging of Adapter script
-
-When developing a new dialect it's sometimes really helpful to debug the deployed adapter script inside the database.
-In a one node EXASOL environment setting up remote debugging is straight forward.
-First define the following env directive in your adapter script:
-
-```sql
-CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
- AS
-
- %env JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=y";
-
- // This is the class implementing the callback method of the adapter script
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- // This will add the adapter jar to the classpath so that it can be used inside the adapter script
- // Replace the names of the bucketfs and the bucket with the ones you used.
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-1.0.2-SNAPSHOT.jar;
-
- // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
-
- %jar /buckets/bucketfs1/bucket1/RedshiftJDBC42-1.2.1.1001.jar;
-
-/
-```
-
-In eclipse (or any other Java IDE) you can then attach remotely to the Java Adapter using the IP of your one node EXASOL environment and the port 8000.
-With `suspend=y` the Java-process will wait until the debugger connects to the Java UDF.
-
diff --git a/jdbc-adapter/doc/developing_an_sql_dialect.md b/jdbc-adapter/doc/developing_an_sql_dialect.md
new file mode 100644
index 000000000..609d86af0
--- /dev/null
+++ b/jdbc-adapter/doc/developing_an_sql_dialect.md
@@ -0,0 +1,270 @@
+# How To Develop and Test a Dialect
+This page describes how you can develop and semi-automatically test a dialect for the JDBC adapter. The framework for testing a dialect is still a work in progress.
+
+## Content
+
+* [Developing a Dialect](#developing-a-dialect)
+* [Integration Testing](#integration-testing)
+
+## Developing a Dialect
+
+You can implement a dialect by implementing the interface `com.exasol.adapter.dialects.SqlDialect`.
+We recommend looking at the following resources to get started:
+
+* First have a look at the [SqlDialect interface source code](../virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialect.java). You can start with the comments of the interface and have a look at the methods you can override.
+* Second you can review the source code of one of the [dialect implementations](../virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl) as an inspiration. Ideally you should look at the dialect which is closest to your data source.
+
+To implement a full dialect for a typical data source you have to complete all of the following steps. We recommend following the order proposed here.
+
+### Registering the Dialect
+
+The Virtual Schema adapter creates an instance of an SQL dialect on demand. You can pick any dialect that is listed in the `SqlDialects` registry.
+
+To register your new dialect add it to the list in [sql_dialects.properties](../virtualschema-jdbc-adapter/src/main/resources/sql_dialects.properties).
+
+```properties
+com.exasol.adapter.dialects.supported=\
+...
+com.exasol.adapter.dialects.impl.MyAweSomeSqlDialect
+```
+
+For tests, or in case you want to exclude existing dialects in certain scenarios, you can override the contents of this file by setting the system property `com.exasol.adapter.dialects.supported`.
+
+Please also remember to [list the supported dialect in the documentation](../README.md).
+
+### Setup Data Source
+
+* Set up and start the database
+* Testdata: Create a test schema with a simple table (simple data types)
+
+### Setup Exasol
+
+* Set up and start an Exasol database with the virtual schemas feature
+* Upload the JDBC drivers of the data source via EXAOperation
+* Manual test: query data from the data source via `IMPORT FROM JDBC` (see the example after this list)
+
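+A minimal connectivity check could look like the following statement. The connection name and the probe query are placeholders; adapt the query to the SQL syntax of your source (the dialect pages show concrete variants):
+
+```sql
+IMPORT FROM JDBC AT my_source_conn
+  STATEMENT 'SELECT 42';
+```
+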
+### Catalog, Schema & Table Mapping
+
+* Override the `SqlDialect` methods for catalog, schema and table mapping
+* Manual test: create a virtual schema by specifying the catalog and/or schema (see the example after this list).
+
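+For example, the manual test could be a statement along the following lines; all property values are placeholders, and whether you need `CATALOG_NAME`, `SCHEMA_NAME` or both depends on the source database:
+
+```sql
+CREATE VIRTUAL SCHEMA my_virtual_schema USING adapter.jdbc_adapter WITH
+  SQL_DIALECT     = 'MY_DIALECT'
+  CONNECTION_NAME = 'MY_CONN'
+  CATALOG_NAME    = 'my_catalog'
+  SCHEMA_NAME     = 'my_schema';
+```
+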
+### Data Type Mapping
+
+* Testdata: Create a table with all data types and at least one row of data
+* Override the `SqlDialect` method for data type mapping
+* Automatic test: sys tables show virtual table and columns with correctly mapped type (see the example after this list)
+* Automatic test: running `SELECT` on the virtual table returns the expected result
+
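+One way to inspect the mapped metadata manually is to describe the virtual table, for instance (schema and table names are placeholders):
+
+```sql
+OPEN SCHEMA my_virtual_schema;
+DESCRIBE all_data_types;
+```
+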
+### Identifier Case Handling & Quoting
+
+* Testdata: Create a schema/table/column with mixed case (if supported)
+* Automatic test: sys tables correct
+* Automatic test: `SELECT` works as expected
+
+### Projection Capability
+
+* Add capability
+* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`, see the example after this list). Also test with mixed case columns.
+
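+A sketch of such a check with placeholder names, using `EXPLAIN VIRTUAL` to show the query that would be pushed down to the data source:
+
+```sql
+EXPLAIN VIRTUAL SELECT my_column FROM my_virtual_schema.my_table;
+```
+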
+### Predicates and Literal Capabilities
+
+* Add capabilities for supported literals and predicates (e.g. `c1='foo'`)
+* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`) for all predicates & literals
+
+### Aggregation & Set Function Capabilities
+
+* Add capabilities for aggregations and aggregation functions
+* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`) for all set functions
+
+### Order By / Limit Capabilities
+
+* Testdata: Create a table with null values and non-null values, to check null collation.
+* Add capabilities for order by and/or limit
+* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`)
+* Automatic test: default null collation, explicit `NULLS FIRST/LAST` (see the example queries after this list)
+
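+The null collation tests could use queries along these lines (schema, table and column names are placeholders):
+
+```sql
+SELECT my_column FROM my_virtual_schema.my_table ORDER BY my_column;
+SELECT my_column FROM my_virtual_schema.my_table ORDER BY my_column NULLS LAST LIMIT 10;
+```
+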
+### Scalar Function Capabilities
+
+* Add capabilities for scalar functions
+* Automatic test: pushed down & correct result (incl. `EXPLAIN VIRTUAL`)
+
+### Views
+
+* Testdata: Create a simple view, e.g. joining two existing tables
+* Automatic test: Query the view, optionally e.g. with a filter.
+
+## Integration Testing
+
+### Security Considerations
+
+Please note that in the course of the integration tests you need to provide the test framework with access rights and credentials to the source database.
+
+In order not to create security issues:
+
+* Make sure the data in the source database is not confidential (demo data only)
+* Don't reuse credentials
+* Don't check in credentials
+
+### Prerequisites
+
+* Exasol running
+* Exasol accessible from within integration test environment
+* Source database running
+* Source database accessible from within integration test environment
+* Test data loaded into source database
+* [BucketFS HTTP port listening and reachable](https://www.exasol.com/support/browse/SOL-503?src=confmacro) (e.g. on port 2580)
+
+ ![BucketFS on port 2580](images/Screenshot_BucketFS_default_service.png)
+
+* Bucket on BucketFS prepared for holding JDBC drivers and virtual schema adapter
+
+ ![Integration test bucket](images/Screenshot_bucket_for_JARs.png)
+
+* JDBC driver JAR archives available for databases against which to run integration tests
+
+If BucketFS is new to you, there are nice [training videos on BucketFS](https://www.exasol.com/portal/display/TRAINING/BucketFS) available.
+
+### Preparing Integration Test
+
+1. Create a dedicated user in the source database that has the necessary access privileges
+2. Create credentials for the user under which the integration tests run at the source
+3. Make a local copy of the [sample integration test configuration file](../integration-test-data/integration-test-sample.yaml) in a place where you don't accidentally check this file in.
+4. Edit the credentials information
+5. [Deploy the JDBC driver(s)](deploying_the_virtual_schema_adapter.md#deploying-jdbc-driver-files) to the prepared bucket in Exasol's BucketFS
+
+#### Creating Your own Integration Test Configuration
+
+Directories called `local` are ignored by Git, so you can place your configuration there and avoid having it checked in.
+
+In the root directory of the adapter sources execute the following commands:
+
+```bash
+mkdir jdbc-adapter/local
+cp jdbc-adapter/integration-test-data/integration-test-sample.yaml jdbc-adapter/local/integration-test-config.yaml
+```
+
+Now edit the file `jdbc-adapter/local/integration-test-config.yaml` to adapt the settings to your local installation.
+
+### Executing Integration Tests
+
+We use the following [Maven lifecycle phases](https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html) for our integration tests:
+
+* `pre-integration-test` phase is used to **automatically deploy the latest [JDBC](https://www.exasol.com/support/secure/attachment/66315/EXASOL_JDBC-6.1.rc1.tar.gz) adapter JAR** (based on your latest code modifications)
+* `integration-test` phase is used to execute the actual integration tests
+
+Note that to check whether the integration tests were successful, you have to run the `verify` Maven phase.
+
+You can start the integration tests as follows:
+
+```bash
+mvn clean package && mvn verify -Pit -Dintegrationtest.configfile=/path/to/your/integration-test-config.yaml
+```
+
+This will run all integration tests, i.e. all JUnit tests with the suffix `IT` in the filename.
+
+The YAML configuration file stores the information for your test environment like JDBC connection strings, paths and credentials.
+
+## Observing Adapter Output
+
+You can either use [netcat](http://netcat.sourceforge.net/) or `exaoutput.py` from the [EXASolution Python Package](https://github.com/EXASOL/python-exasol). Since netcat is available on most Linux machines anyway, we will use this in the description here.
+
+First start netcat in listen-mode on a free TCP port on your machine.
+
+```bash
+nc -lkp 3000
+```
+
+The `-l` switch puts netcat into listen-mode. `-k` tells it to stay open after the peer closes a connection. `-p 3000` sets the TCP port netcat listens on.
+
+Next find out your IP address.
+
+Linux:
+
+```bash
+ip -br address
+```
+
+Windows:
+
+```cmd
+ipconfig /all
+```
+
+The next SQL command shows an example of declaring a virtual schema. Notice the IP address and port in the last line. This tells the adapter script where to direct the output to.
+
+```sql
+CREATE VIRTUAL SCHEMA VS_EXA_IT
+USING ADAPTER.JDBC_ADAPTER
+WITH CONNECTION_STRING='jdbc:exa:localhost:8563' USERNAME='sys' PASSWORD='exasol'
+ SCHEMA_NAME='NATIVE_EXA_IT' SQL_DIALECT='EXASOL' IS_LOCAL='true'
+ DEBUG_ADDRESS='10.44.1.228:3000' LOG_LEVEL='ALL';
+```
+
+The parameter `LOG_LEVEL` lets you pick a log level as defined in [java.util.logging.Level](https://docs.oracle.com/javase/8/docs/api/java/util/logging/Level.html).
+
+The recommended standard log levels are:
+
+* `INFO` in production
+* `ALL` for in-depth debugging
+
+You can tell that the connection works if you see the following message after executing the SQL command that installs a virtual schema:
+
+ Attached to output service
+
+## Java Remote Debugging of the Adapter Script
+
+When developing a new dialect it is sometimes really helpful to debug the deployed adapter script inside the database.
+In a one-node Exasol environment, setting up remote debugging is straightforward.
+First define the following `env` directive in your adapter script:
+
+```sql
+CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
+ AS
+
+ %env JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=y";
+
+ // This is the class implementing the callback method of the adapter script
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ // This will add the adapter jar to the classpath so that it can be used inside the adapter script
+ // Replace the names of the bucketfs and the bucket with the ones you used.
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
+
+ %jar /buckets/bucketfs1/bucket1/RedshiftJDBC42-1.2.1.1001.jar;
+
+/
+```
+
+In Eclipse (or any other Java IDE) you can then attach remotely to the Java adapter using the IP of your one-node Exasol environment and port 8000.
+
+The switch `suspend=y` tells the Java process to wait until the debugger connects to the Java UDF.
+
+## Version Management
+
+All dialects have the same version as the master project. In the master `pom.xml` file a property called `product-version` is set. Use this as the artifact version number in the JDBC adapter and all dialects.
+
+Run the script
+
+```bash
+jdbc-adapter/tools/version.sh verify
+```
+
+to check that all documentation and templates reference the same version number. This script is also used as a build breaker in the continuous integration build.
+
+To update documentation files run
+
+```bash
+jdbc-adapter/tools/version.sh unify
+```
+
+Note that the script must be run from the root directory of the virtual schema project.
+
+## Troubleshooting
+
+### Setting the Right IP Addresses for Database Connections
+
+Keep in mind that the adapter script is deployed in the Exasol database. If you want it to be able to make connections to other databases, you need to make sure that the IP addresses or host names are the ones that the database sees, not the ones your local machine sees. This is easily forgotten in the case of automated integration tests, since it feels like they run on your machine, which is only partially true.
+
+So a common source of error would be to specify `localhost` or `127.0.0.1` as the address of the remote database if you have it running in Docker or a VM on your local machine. But the Exasol database cannot reach the other database there unless it is running directly on the same machine (i.e. not behind a virtual network device).
\ No newline at end of file
diff --git a/jdbc-adapter/doc/images/Screenshot_BucketFS_default_service.png b/jdbc-adapter/doc/images/Screenshot_BucketFS_default_service.png
new file mode 100644
index 000000000..24b249da7
Binary files /dev/null and b/jdbc-adapter/doc/images/Screenshot_BucketFS_default_service.png differ
diff --git a/jdbc-adapter/doc/images/Screenshot_bucket_for_JARs.png b/jdbc-adapter/doc/images/Screenshot_bucket_for_JARs.png
new file mode 100644
index 000000000..a9b02f2f8
Binary files /dev/null and b/jdbc-adapter/doc/images/Screenshot_bucket_for_JARs.png differ
diff --git a/jdbc-adapter/doc/sql_dialects/db2.md b/jdbc-adapter/doc/sql_dialects/db2.md
new file mode 100644
index 000000000..7f27bf9b1
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/db2.md
@@ -0,0 +1,77 @@
+# DB2 SQL Dialect
+
+DB2 was tested with the IBM DB2 JCC drivers that come with DB2 LUW V10.1 and V11. Since these drivers have not seen any major changes in the past years, any DB2 driver back to V9.1 should work. The driver comes in two different implementations, `db2jcc.jar` and `db2jcc4.jar`. All tests were made with `db2jcc4.jar`.
+
+Additionally there are two license files for the DB2 driver:
+
+* `db2jcc_license_cu.jar` - license file for DB2 on Linux, UNIX and Windows
+* `db2jcc_license_cisuz.jar` - license file for DB2 on z/OS (mainframe)
+
+Make sure that you upload the necessary license file for the target platform you want to connect to.
+
+## Supported Capabilities
+
+The DB2 dialect casts some data types and rewrites some functions as described below.
+
+Casting of Data Types
+
+* `TIMESTAMP` and `TIMESTAMP(x)` will be cast to `VARCHAR` to not lose precision.
+* `VARCHAR` and `CHAR` for bit data will be cast to a hex string with double the original size
+* `TIME` will be cast to `VARCHAR(8)`
+* `XML` will be cast to `VARCHAR(DB2_MAX_LENGTH)`
+* `BLOB` is not supported
+
+Casting of Functions
+
+* `LIMIT` will be replaced by `FETCH FIRST x ROWS ONLY`
+* `OFFSET` is currently not supported as only DB2 V11 supports this natively
+* `ADD_DAYS`, `ADD_WEEKS` ... will be replaced by `COLUMN + DAYS`, `COLUMN + ....`
+
+
+## JDBC Driver
+
+You have to specify the following settings when adding the JDBC driver via EXAOperation:
+
+* Name: `DB2`
+* Main: `com.ibm.db2.jcc.DB2Driver`
+* Prefix: `jdbc:db2:`
+
+## Adapter Script
+
+```sql
+CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter AS
+
+ // This is the class implementing the callback method of the adapter script
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ // This will add the adapter jar to the classpath so that it can be used inside the adapter script
+ // Replace the names of the bucketfs and the bucket with the ones you used.
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ // DB2 Driver files
+ %jar /buckets/bucketfs1/bucket1/db2jcc4.jar;
+ %jar /buckets/bucketfs1/bucket1/db2jcc_license_cu.jar;
+ // uncomment for mainframe connection and upload db2jcc_license_cisuz.jar;
+ //%jar /buckets/bucketfs1/bucket1/db2jcc_license_cisuz.jar;
+/
+```
+
+## Creating a Virtual Schema
+
+You can now create a virtual schema as follows:
+
+```sql
+CREATE OR REPLACE CONNECTION DB2_CON TO 'jdbc:db2://host:port/database' USER 'db2-usr' IDENTIFIED BY 'db2-pwd';
+
+CREATE VIRTUAL SCHEMA db2 USING adapter.jdbc_adapter WITH
+    SQL_DIALECT = 'DB2'
+    CONNECTION_NAME = 'DB2_CON'
+    SCHEMA_NAME = '<schema>'
+;
+```
+
+`<schema>` has to be replaced by the actual DB2 schema you want to connect to.
+
+## Running the DB2 Integration Tests
+
+A how-to is included in the [setup SQL file](../../integration-test-data/db2-testdata.sql).
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/exasol.md b/jdbc-adapter/doc/sql_dialects/exasol.md
new file mode 100644
index 000000000..1dab6e236
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/exasol.md
@@ -0,0 +1,48 @@
+# Exasol SQL Dialect
+
+## Supported Capabilities
+
+The Exasol SQL dialect supports all capabilities that are supported by the virtual schema framework.
+
+## JDBC Driver
+
+Connecting to an Exasol database is the simplest way to start with virtual schemas.
+You don't have to install any JDBC driver, because it is already installed in the Exasol database and also included in the JAR of the JDBC adapter.
+
+## Adapter Script
+
+After uploading the adapter JAR, the adapter script can be created as follows:
+
+```sql
+CREATE SCHEMA adapter;
+CREATE JAVA ADAPTER SCRIPT adapter.jdbc_adapter AS
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+ %jar /buckets/your-bucket-fs/your-bucket/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+/
+```
+
+## Creating a Virtual Schema
+
+```sql
+CREATE CONNECTION exasol_conn TO 'jdbc:exa:exasol-host:1234' USER 'user' IDENTIFIED BY 'pwd';
+
+CREATE VIRTUAL SCHEMA virtual_exasol USING adapter.jdbc_adapter WITH
+ SQL_DIALECT = 'EXASOL'
+ CONNECTION_NAME = 'EXASOL_CONN'
+ SCHEMA_NAME = 'default';
+```
+
+## Using IMPORT FROM EXA Instead of IMPORT FROM JDBC
+
+Exasol provides the faster and parallel `IMPORT FROM EXA` command for loading data from Exasol. You can tell the adapter to use this command instead of `IMPORT FROM JDBC` by setting the `IMPORT_FROM_EXA` property. In this case you have to provide the additional `EXA_CONNECTION_STRING`, which is the connection string used for the internally executed `IMPORT FROM EXA` command (it also supports ranges like `192.168.6.11..14:8563`). Please note that the `CONNECTION` object must still have the JDBC connection string in `AT`, because the adapter script uses a JDBC connection to obtain the metadata when a schema is created or refreshed. For the internally used `IMPORT FROM EXA` statement, the address from `EXA_CONNECTION_STRING` and the user name and password from the connection will be used.
+
+```sql
+CREATE CONNECTION exasol_conn TO 'jdbc:exa:exasol-host:1234' USER 'user' IDENTIFIED BY 'pwd';
+
+CREATE VIRTUAL SCHEMA virtual_exasol USING adapter.jdbc_adapter WITH
+ SQL_DIALECT = 'EXASOL'
+ CONNECTION_NAME = 'EXASOL_CONN'
+ SCHEMA_NAME = 'default'
+ IMPORT_FROM_EXA = 'true'
+ EXA_CONNECTION_STRING = 'exasol-host:1234';
+```
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/hive.md b/jdbc-adapter/doc/sql_dialects/hive.md
new file mode 100644
index 000000000..1062584f7
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/hive.md
@@ -0,0 +1,106 @@
+# Hive SQL Dialect
+
+## JDBC Driver
+
+The dialect was tested with the Cloudera Hive JDBC driver available on the [Cloudera downloads page](http://www.cloudera.com/downloads). The driver is also available directly from [Simba technologies](http://www.simba.com/), who developed the driver.
+
+When you unpack the JDBC driver archive you will see that there are two variants, JDBC 4.0 and 4.1. We tested with the JDBC 4.1 variant.
+
+You have to specify the following settings when adding the JDBC driver via EXAOperation:
+
+* Name: `Hive`
+* Main: `com.cloudera.hive.jdbc41.HS2Driver`
+* Prefix: `jdbc:hive2:`
+
+Make sure you upload **all files** of the JDBC driver (over 10 at the time of writing) in EXAOperation **and** to the bucket.
+
+## Adapter Script
+
+You have to add all files of the JDBC driver to the classpath using `%jar` as follows (filenames may vary):
+
+```sql
+CREATE SCHEMA adapter;
+CREATE JAVA ADAPTER SCRIPT jdbc_adapter AS
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ %jar /buckets/bucketfs1/bucket1/hive_metastore.jar;
+ %jar /buckets/bucketfs1/bucket1/hive_service.jar;
+ %jar /buckets/bucketfs1/bucket1/HiveJDBC41.jar;
+ %jar /buckets/bucketfs1/bucket1/libfb303-0.9.0.jar;
+ %jar /buckets/bucketfs1/bucket1/libthrift-0.9.0.jar;
+ %jar /buckets/bucketfs1/bucket1/log4j-1.2.14.jar;
+ %jar /buckets/bucketfs1/bucket1/ql.jar;
+ %jar /buckets/bucketfs1/bucket1/slf4j-api-1.5.11.jar;
+ %jar /buckets/bucketfs1/bucket1/slf4j-log4j12-1.5.11.jar;
+ %jar /buckets/bucketfs1/bucket1/TCLIServiceClient.jar;
+ %jar /buckets/bucketfs1/bucket1/zookeeper-3.4.6.jar;
+/
+```
+
+### Creating a Virtual Schema
+
+```sql
+CREATE CONNECTION hive_conn TO 'jdbc:hive2://hive-host:10000' USER 'hive-usr' IDENTIFIED BY 'hive-pwd';
+
+CREATE VIRTUAL SCHEMA hive_default USING adapter.jdbc_adapter WITH
+ SQL_DIALECT = 'HIVE'
+ CONNECTION_NAME = 'HIVE_CONN'
+ SCHEMA_NAME = 'default';
+```
+
+### Connecting To a Kerberos Secured Hadoop
+
+Connecting to a Kerberos secured Impala or Hive service only differs in one aspect: you have to create a `CONNECTION` object which contains all the relevant information for the Kerberos authentication. This section describes how Kerberos authentication works and how to create such a `CONNECTION`.
+
+#### Understanding how it Works (Optional)
+
+Both the adapter script and the internally used `IMPORT FROM JDBC` statement support Kerberos authentication. They detect that the connection is a Kerberos connection by a special prefix in the `IDENTIFIED BY` field. In that case the authentication will happen using a Kerberos keytab and Kerberos configuration file (using the JAAS Java API).
+
+The `CONNECTION` object stores all relevant information and files in its fields:
+
+* The `TO` field contains the JDBC connection string
+* The `USER` field contains the Kerberos principal
+* The `IDENTIFIED BY` field contains the Kerberos configuration file and keytab file (base64 encoded) along with an internal prefix `ExaAuthType=Kerberos;` to identify the `CONNECTION` as a Kerberos `CONNECTION`.
+
+#### Generating the CREATE CONNECTION Statement
+
+In order to simplify the creation of Kerberos `CONNECTION` objects, the [`create_kerberos_conn.py`](https://github.com/EXASOL/hadoop-etl-udfs/blob/master/tools/create_kerberos_conn.py) Python script has been provided. The script requires 5 arguments:
+
+* `CONNECTION` name (arbitrary name for the new `CONNECTION`)
+* Kerberos principal for Hadoop (i.e., Hadoop user)
+* Kerberos configuration file path (e.g., `krb5.conf`)
+* Kerberos keytab file path, which contains keys for the Kerberos principal
+* JDBC connection string
+
+Example command:
+
+```sh
+python tools/create_kerberos_conn.py krb_conn krbuser@EXAMPLE.COM /etc/krb5.conf ./krbuser.keytab \
+ 'jdbc:hive2://hive-host.example.com:10000;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive'
+```
+
+Output:
+
+```sql
+CREATE CONNECTION krb_conn TO 'jdbc:hive2://hive-host.example.com:10000;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive' USER 'krbuser@EXAMPLE.COM' IDENTIFIED BY 'ExaAuthType=Kerberos;enp6Cg==;YWFhCg=='
+```
+
+#### Creating the CONNECTION
+
+You have to execute the generated `CREATE CONNECTION` statement directly in Exasol to actually create the Kerberos `CONNECTION` object. For more detailed information about the script, use the help option:
+
+```sh
+python tools/create_kerberos_conn.py -h
+```
+
+#### Using the Connection When Creating a Virtual Schema
+
+You can now create a virtual schema using the Kerberos connection created before.
+
+```sql
+CREATE VIRTUAL SCHEMA hive_default USING adapter.jdbc_adapter WITH
+ SQL_DIALECT = 'HIVE'
+ CONNECTION_NAME = 'KRB_CONN'
+ SCHEMA_NAME = 'default';
+```
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/impala.md b/jdbc-adapter/doc/sql_dialects/impala.md
new file mode 100644
index 000000000..18aa2daed
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/impala.md
@@ -0,0 +1,54 @@
+# Impala SQL Dialect
+
+The Impala dialect is similar to the Hive dialect in most aspects. For this reason we only highlight the differences in this section.
+
+## JDBC Driver
+
+You have to specify the following settings when adding the JDBC driver via EXAOperation:
+
+* Name: `Impala`
+* Main: `com.cloudera.impala.jdbc41.Driver`
+* Prefix: `jdbc:impala:`
+
+Make sure you upload **all files** of the JDBC driver (over 10 at the time of writing) in EXAOperation and to the bucket.
+
+## Adapter Script
+
+The adapter can be created similar to Hive:
+
+```sql
+
+CREATE SCHEMA adapter;
+CREATE JAVA ADAPTER SCRIPT jdbc_adapter AS
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ %jar /buckets/bucketfs1/bucket1/hive_metastore.jar;
+ %jar /buckets/bucketfs1/bucket1/hive_service.jar;
+ %jar /buckets/bucketfs1/bucket1/ImpalaJDBC41.jar;
+ %jar /buckets/bucketfs1/bucket1/libfb303-0.9.0.jar;
+ %jar /buckets/bucketfs1/bucket1/libthrift-0.9.0.jar;
+ %jar /buckets/bucketfs1/bucket1/log4j-1.2.14.jar;
+ %jar /buckets/bucketfs1/bucket1/ql.jar;
+ %jar /buckets/bucketfs1/bucket1/slf4j-api-1.5.11.jar;
+ %jar /buckets/bucketfs1/bucket1/slf4j-log4j12-1.5.11.jar;
+ %jar /buckets/bucketfs1/bucket1/TCLIServiceClient.jar;
+ %jar /buckets/bucketfs1/bucket1/zookeeper-3.4.6.jar;
+/
+```
+
+## Creating a Virtual Schema
+
+You can now create a virtual schema as follows:
+
+```sql
+CREATE CONNECTION impala_conn TO 'jdbc:impala://impala-host:21050' USER 'impala-usr' IDENTIFIED BY 'impala-pwd';
+
+CREATE VIRTUAL SCHEMA impala_default USING adapter.jdbc_adapter WITH
+ SQL_DIALECT = 'IMPALA'
+ CONNECTION_NAME = 'IMPALA_CONN'
+ SCHEMA_NAME = 'default';
+```
+
+Connecting to a Kerberos secured Impala works similarly to Hive and is described in the section [Connecting To a Kerberos Secured Hadoop](hive.md#connecting-to-a-kerberos-secured-hadoop).
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/oracle.md b/jdbc-adapter/doc/sql_dialects/oracle.md
new file mode 100644
index 000000000..0265f78e0
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/oracle.md
@@ -0,0 +1,111 @@
+# Oracle SQL Dialect
+
+## Supported Capabilities
+
+The Oracle dialect does not support all capabilities. A complete list can be found in [OracleSqlDialect.getCapabilities()](../../virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/OracleSqlDialect.java).
+
+Oracle data types are mapped to their equivalents in Exasol. The following exceptions apply:
+
+- `NUMBER`, `NUMBER with precision > 36` and `LONG` are cast to `VARCHAR` to prevent a loss of precision.
+- `DATE` is cast to `TIMESTAMP`. This data type is only supported for positive year values, i.e., years > 0001.
+- `TIMESTAMP WITH [LOCAL] TIME ZONE` is cast to `VARCHAR`. Exasol does not support timestamps with time zone information.
+- `INTERVAL` is cast to `VARCHAR`.
+- `CLOB`, `NCLOB` and `BLOB` are cast to `VARCHAR`.
+- `RAW` and `LONG RAW` are not supported.
+
+## JDBC Driver
+
+To setup a virtual schema that communicates with an Oracle database using JDBC, the JDBC driver, e.g., `ojdbc7-12.1.0.2.jar`, must first be installed in EXAoperation and deployed to BucketFS; see [this article](https://www.exasol.com/support/browse/SOL-179#WhichJDBCdriverforOracleshallIuse?) and [Deploying the Adapter Step By Step](deploying_the_virtual_schema_adapter.md) for instructions.
+
+## Adapter Script
+
+After uploading the adapter JAR we are ready to create an Oracle adapter script. Adapt the following script as indicated.
+
+```sql
+CREATE SCHEMA adapter;
+CREATE JAVA ADAPTER SCRIPT adapter.jdbc_oracle AS
+    %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+    // You need to replace `your-bucket-fs` and `your-bucket` to match the actual location
+    // of the adapter jar.
+    %jar /buckets/your-bucket-fs/your-bucket/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+    // Add the Oracle JDBC driver to the classpath
+    %jar /buckets/bucketfs1/bucket1/ojdbc7-12.1.0.2.jar;
+/
+```
+
+## JDBC Connection
+
+Next, create a JDBC connection to your Oracle database. Adjust the properties to match your environment.
+
+```sql
+CREATE CONNECTION jdbc_oracle
+  TO 'jdbc:oracle:thin:@//<host>:<port>/<service_name>'
+  USER '<user>'
+  IDENTIFIED BY '<password>';
+```
+
+A quick option to test the `JDBC_ORACLE` connection is to run an `IMPORT FROM JDBC` query. The connection works if `42` is returned.
+
+```sql
+IMPORT FROM JDBC AT jdbc_oracle
+ STATEMENT 'SELECT 42 FROM DUAL';
+```
+
+### Creating a Virtual Schema
+
+Having created both a JDBC adapter script and a JDBC Oracle connection, we are ready to create a virtual schema. Insert the name of the schema that you want to expose in Exasol.
+
+```sql
+CREATE VIRTUAL SCHEMA virt_oracle USING adapter.jdbc_oracle WITH
+  SQL_DIALECT = 'ORACLE'
+  CONNECTION_NAME = 'JDBC_ORACLE'
+  SCHEMA_NAME = '<schema>';
+```
+
+## Using IMPORT FROM ORA Instead of IMPORT FROM JDBC
+
+Exasol provides the `IMPORT FROM ORA` command for loading data from Oracle. It is possible to create a virtual schema that uses `IMPORT FROM ORA` instead of JDBC to communicate with Oracle. Both options are intended to support the same features. `IMPORT FROM ORA` almost always offers better performance since it is implemented natively.
+
+This behavior is toggled by the Boolean `IMPORT_FROM_ORA` variable. Note that a JDBC connection to Oracle is still required to fetch metadata. In addition, a "direct" connection to the Oracle database is needed.
+
+### Deploying the Oracle Instant Client
+
+To be able to communicate with Oracle, you first need to supply Exasol with the Oracle Instant Client, which can be obtained [directly from Oracle](http://www.oracle.com/technetwork/database/database-technologies/instant-client/overview/index.html). Open EXAoperation, visit Software -> "Upload Oracle Instant Client" and select the downloaded package. The latest version of Oracle Instant Client we tested is `instantclient-basic-linux.x64-12.1.0.2.0`.
+
+### Creating an Oracle Connection
+
+Having deployed the Oracle Instant Client, a connection to your Oracle database can be set up.
+
+```sql
+CREATE CONNECTION conn_oracle
+  TO '(DESCRIPTION =
+        (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)
+                                   (HOST = <host>)
+                                   (PORT = <port>)))
+        (CONNECT_DATA = (SERVER = DEDICATED)
+                        (SERVICE_NAME = <service_name>)))'
+  USER '<user>'
+  IDENTIFIED BY '<password>';
+```
+
+This connection can be tested using, e.g., the following SQL expression.
+
+```sql
+IMPORT FROM ORA at CONN_ORACLE
+ STATEMENT 'SELECT 42 FROM DUAL';
+```
+
+### Creating a Virtual Schema
+
+Assuming you have already set up the JDBC connection `JDBC_ORACLE` as shown in the previous section, you can continue with creating the virtual schema.
+
+```sql
+CREATE VIRTUAL SCHEMA virt_import_oracle USING adapter.jdbc_oracle WITH
+  SQL_DIALECT = 'ORACLE'
+  CONNECTION_NAME = 'JDBC_ORACLE'
+  SCHEMA_NAME = '<schema>'
+  IMPORT_FROM_ORA = 'true'
+  EXA_CONNECTION_NAME = 'CONN_ORACLE';
+```
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/postgresql.md b/jdbc-adapter/doc/sql_dialects/postgresql.md
new file mode 100644
index 000000000..e35cc2413
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/postgresql.md
@@ -0,0 +1,37 @@
+# PostgreSQL SQL Dialect
+
+## JDBC Driver
+
+The PostgreSQL dialect was tested with JDBC driver version 42.0.0 and PostgreSQL 9.6.2.
+
+## Adapter Script
+
+```sql
+CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
+ AS
+
+ // This is the class implementing the callback method of the adapter script
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ // This will add the adapter jar to the classpath so that it can be used inside the adapter script
+ // Replace the names of the bucketfs and the bucket with the ones you used.
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
+ %jar /buckets/bucketfs1/bucket1/postgresql-42.0.0.jar;
+
+/
+```
+
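+## JDBC Connection
+
+The virtual schema below refers to a connection named `POSTGRES_DOCKER`. A minimal sketch of creating it is shown here; host, port, database name, and credentials are placeholders you need to adapt to your environment.
+
+```sql
+CREATE CONNECTION postgres_docker
+    TO 'jdbc:postgresql://postgres-host:5432/postgres'
+    USER 'pg-user'
+    IDENTIFIED BY 'pg-password';
+```
+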
+## Creating a Virtual Schema
+
+```sql
+CREATE VIRTUAL SCHEMA postgres
+ USING adapter.jdbc_adapter
+ WITH
+ SQL_DIALECT = 'POSTGRESQL'
+ CATALOG_NAME = 'postgres'
+ SCHEMA_NAME = 'public'
+ CONNECTION_NAME = 'POSTGRES_DOCKER'
+ ;
+```
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/redshift.md b/jdbc-adapter/doc/sql_dialects/redshift.md
new file mode 100644
index 000000000..21c153da9
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/redshift.md
@@ -0,0 +1,44 @@
+# Redshift SQL Dialect
+
+## JDBC Driver
+
+You have to specify the following settings when adding the JDBC driver via EXAOperation:
+* Name: `REDSHIFT`
+* Main: `com.amazon.redshift.jdbc.Driver`
+* Prefix: `jdbc:redshift:`
+* Files: `RedshiftJDBC42-1.2.1.1001.jar`
+
+Please also upload the driver jar into a bucket for the adapter script.
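+
+One way to get the jar into a bucket is an HTTP PUT against BucketFS, as sketched below. The bucket name, port, and write password are placeholders; use the values configured in your EXAOperation installation.
+
+```bash
+# Hypothetical upload; "w" is the fixed user name for BucketFS write access,
+# "write-password" the bucket's write password, 2580 the BucketFS HTTP port.
+curl -X PUT -T RedshiftJDBC42-1.2.1.1001.jar \
+  http://w:write-password@exasol-host:2580/bucket1/RedshiftJDBC42-1.2.1.1001.jar
+```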
+
+## Adapter Script
+
+```sql
+CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
+ AS
+
+ // This is the class implementing the callback method of the adapter script
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ // This will add the adapter jar to the classpath so that it can be used inside the adapter script
+ // Replace the names of the bucketfs and the bucket with the ones you used.
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
+
+ %jar /buckets/bucketfs1/bucket1/RedshiftJDBC42-1.2.1.1001.jar;
+
+/
+```
+
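+## JDBC Connection
+
+The virtual schema below refers to a connection named `REDSHIFT_CONNECTION`. A minimal sketch of creating it is shown here; cluster endpoint, port, database name, and credentials are placeholders you need to adapt.
+
+```sql
+CREATE CONNECTION redshift_connection
+    TO 'jdbc:redshift://redshift-host:5439/database_name'
+    USER 'redshift-user'
+    IDENTIFIED BY 'redshift-password';
+```
+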
+## Creating a Virtual Schema
+
+```sql
+CREATE VIRTUAL SCHEMA redshift_tickit
+ USING adapter.jdbc_adapter
+ WITH
+ SQL_DIALECT = 'REDSHIFT'
+ CONNECTION_NAME = 'REDSHIFT_CONNECTION'
+ CATALOG_NAME = 'database_name'
+ SCHEMA_NAME = 'public'
+ ;
+```
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/sql_server.md b/jdbc-adapter/doc/sql_dialects/sql_server.md
new file mode 100644
index 000000000..41a2efeac
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/sql_server.md
@@ -0,0 +1,37 @@
+# SQL Server SQL Dialect
+
+## JDBC driver
+
+The SQL Server dialect was tested with the jTDS 1.3.1 JDBC driver and SQL Server 2014.
+Since the jTDS driver is already pre-installed for the `IMPORT` command itself, you only
+need to upload `jtds.jar` to a bucket for the adapter script.
+
+## Adapter Script
+
+```sql
+CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.sql_server_jdbc_adapter
+ AS
+
+ // This is the class implementing the callback method of the adapter script
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ // This will add the adapter jar to the classpath so that it can be used inside the adapter script
+ // Replace the names of the bucketfs and the bucket with the ones you used.
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ // You have to add all files of the data source jdbc driver here
+ %jar /buckets/bucketfs1/bucket1/jtds.jar;
+/
+```
+
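+## JDBC Connection
+
+The virtual schema below refers to a connection named `SQLSERVER_CONNECTION`. A minimal sketch using the jTDS URL format is shown here; host, port, database name, and credentials are placeholders you need to adapt.
+
+```sql
+CREATE CONNECTION sqlserver_connection
+    TO 'jdbc:jtds:sqlserver://sqlserver-host:1433/MyDatabase'
+    USER 'sqlserver-user'
+    IDENTIFIED BY 'sqlserver-password';
+```
+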
+## Creating a Virtual Schema
+
+```sql
+CREATE VIRTUAL SCHEMA VS_SQLSERVER USING adapter.sql_server_jdbc_adapter
+WITH
+ SQL_DIALECT = 'SQLSERVER'
+ CONNECTION_NAME = 'SQLSERVER_CONNECTION'
+ CATALOG_NAME = 'MyDatabase'
+ SCHEMA_NAME = 'dbo'
+;
+```
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/sybase.md b/jdbc-adapter/doc/sql_dialects/sybase.md
new file mode 100644
index 000000000..255311f82
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/sybase.md
@@ -0,0 +1,50 @@
+# Sybase SQL Dialect
+
+## JDBC driver
+
+The Sybase dialect was tested with the [jTDS 1.3.1 JDBC driver](https://sourceforge.net/projects/jtds/files/jtds/1.3.1/) and Sybase 16.0.
+While the jTDS driver is pre-installed in EXAOperation, you still need to upload the jTDS jar (`jtds-1.3.1.jar` in the example below) to BucketFS.
+
+You can check the Sybase version with the following SQL command:
+
+```sql
+SELECT @@version;
+```
+
+## Adapter script
+
+```sql
+CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
+ AS
+
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+ %jar /buckets/bucketfs1/virtualschema/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+ %jar /buckets/bucketfs1/virtualschema/jtds-1.3.1.jar;
+/
+```
+
+## Installing the Test Data
+
+Create and populate the test database using the [sybase-testdata.sql](../integration-test-data/sybase-testdata.sql) SQL script.
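+
+For example, the script can be run with the `isql` command line client that ships with Sybase ASE. Server name, login, and password below are placeholders for your own setup.
+
+```bash
+# Hypothetical invocation; adjust server name and credentials.
+isql -S MYSYBASE -U tester -P pass -i sybase-testdata.sql
+```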
+
+## Creating a Virtual Schema
+
+```sql
+CREATE OR REPLACE CONNECTION conn_sybase
+    TO 'jdbc:jtds:sybase://172.17.0.1:5000/testdb'
+    USER 'tester'
+    IDENTIFIED BY 'pass';
+
+CREATE VIRTUAL SCHEMA sybase USING adapter.jdbc_adapter WITH
+ SQL_DIALECT = 'SYBASE'
+ CONNECTION_NAME = 'CONN_SYBASE'
+ CATALOG_NAME = 'testdb'
+ SCHEMA_NAME = 'tester';
+```
+
+## Supported Data Types
+
+* `NUMERIC/DECIMAL(precision, scale)`: Sybase supports precision values up to 38, Exasol only up to 36. Columns with precision <= 36 are mapped to Exasol's `DECIMAL` type; columns with greater precision are mapped to a `VARCHAR` column.
+* The Sybase data type `CHAR(n > 2000)` is mapped to Exasol's `VARCHAR(n)`, because Exasol only supports `n <= 2000` for the `CHAR` data type.
+* The Sybase data types `TEXT` and `UNITEXT` are mapped to `VARCHAR(2000000) UTF8`. If a query on the virtual schema hits a row whose text value exceeds Exasol's maximum column size, an error is raised.
+* The Sybase data types `BINARY`, `VARBINARY`, and `IMAGE` are not supported.
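+
+A quick way to check how these mappings come out in practice is to describe one of the virtual tables after creating the schema. This is a hypothetical sketch assuming the `sybase` virtual schema and the `decimaltypes` test table from above; the reported `VARCHAR` sizes depend on the source column definitions.
+
+```sql
+-- Columns with precision <= 36 keep a DECIMAL type, larger ones fall back to VARCHAR.
+DESCRIBE sybase.decimaltypes;
+```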
\ No newline at end of file
diff --git a/jdbc-adapter/doc/sql_dialects/teradata.md b/jdbc-adapter/doc/sql_dialects/teradata.md
new file mode 100644
index 000000000..7afdb78de
--- /dev/null
+++ b/jdbc-adapter/doc/sql_dialects/teradata.md
@@ -0,0 +1,43 @@
+# Teradata SQL Dialect
+
+## JDBC Driver
+
+You have to specify the following settings when adding the JDBC driver via EXAOperation:
+
+* Name: `TERADATA`
+* Main: `com.teradata.jdbc.TeraDriver`
+* Prefix: `jdbc:teradata:`
+* Files: `terajdbc4.jar`, `tdgssconfig.jar`
+
+Please also upload the jar files to a bucket for the adapter script.
+
+## Adapter script
+
+```sql
+CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
+ AS
+
+ // This is the class implementing the callback method of the adapter script
+ %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
+
+ // This will add the adapter jar to the classpath so that it can be used inside the adapter script
+ // Replace the names of the bucketfs and the bucket with the ones you used.
+ %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar;
+
+ // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
+ %jar /buckets/bucketfs1/bucket1/terajdbc4.jar;
+ %jar /buckets/bucketfs1/bucket1/tdgssconfig.jar;
+
+/
+```
+
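+## JDBC Connection
+
+The virtual schema below refers to a connection named `TERADATA_CONNECTION`. A minimal sketch of creating it is shown here; host and credentials are placeholders you need to adapt.
+
+```sql
+CREATE CONNECTION teradata_connection
+    TO 'jdbc:teradata://teradata-host/CHARSET=UTF8'
+    USER 'teradata-user'
+    IDENTIFIED BY 'teradata-password';
+```
+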
+## Creating a Virtual Schema
+
+```sql
+CREATE VIRTUAL SCHEMA TERADATA_financial USING adapter.jdbc_adapter
+WITH
+ SQL_DIALECT = 'TERADATA'
+ CONNECTION_NAME = 'TERADATA_CONNECTION'
+ SCHEMA_NAME = 'financial'
+;
+```
\ No newline at end of file
diff --git a/jdbc-adapter/doc/supported-dialects.md b/jdbc-adapter/doc/supported-dialects.md
deleted file mode 100644
index 894050221..000000000
--- a/jdbc-adapter/doc/supported-dialects.md
+++ /dev/null
@@ -1,524 +0,0 @@
-# Supported Dialects
-
-The purpose of this page is to provide detailed instructions for each of the supported dialects on how to get started. Typical questions are
-* Which **JDBC driver** is used, which files have to be uploaded and included when creating the adapter script.
-* How does the **CREATE VIRTUAL SCHEMA** statement look like, i.e. which properties are required.
-* **Data source specific notes**, like authentication with Kerberos, supported capabilities or things to consider regarding the data type mapping.
-
-As an entry point we recommend to follow the [step-by-step deployment guide](deploy-adapter.md) which will link to this page whenever needed.
-
-## Table of Contents
-
-1. [EXASOL](#exasol)
-2. [Hive](#hive)
- - [Connecting To a Kerberos Secured Hadoop](#connecting-to-a-kerberos-secured-hadoop)
-3. [Impala](#impala)
-4. [DB2](#db2)
-5. [Oracle](#oracle)
-6. [Teradata](#teradata)
-7. [Redshift](#redshift)
-8. [SQL Server](#sql-server)
-8. [PostgresSQL](#postgresql)
-10. [Generic](#generic)
-
-## EXASOL
-
-**Supported capabilities**:
-The EXASOL SQL dialect supports all capabilities that are supported by the virtual schema framework.
-
-**JDBC driver**:
-Connecting to an EXASOL database is the simplest way to start with virtual schemas.
-You don't have to install any JDBC driver, because it is already installed in the EXASOL database and also included in the jar of the JDBC adapter.
-
-**Adapter script**:
-After uploading the adapter jar, the adapter script can be created as follows:
-```sql
-CREATE SCHEMA adapter;
-CREATE JAVA ADAPTER SCRIPT adapter.jdbc_adapter AS
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
- %jar /buckets/your-bucket-fs/your-bucket/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-/
-```
-**Create a virtual schema:**
-
-```sql
-CREATE CONNECTION exasol_conn TO 'jdbc:exa:exasol-host:1234' USER 'user' IDENTIFIED BY 'pwd';
-
-CREATE VIRTUAL SCHEMA virtual_exasol USING adapter.jdbc_adapter WITH
- SQL_DIALECT = 'EXASOL'
- CONNECTION_NAME = 'EXASOL_CONN'
- SCHEMA_NAME = 'default';
-```
-
-**Use IMPORT FROM EXA instead of IMPORT FROM JDBC**
-
-EXASOL provides the faster and parallel `IMPORT FROM EXA` command for loading data from EXASOL. You can tell the adapter to use this command instead of `IMPORT FROM JDBC` by setting the `IMPORT_FROM_EXA` property. In this case you have to provide the additional `EXA_CONNECTION_STRING` which is the connection string used for the internally used `IMPORT FROM EXA` command (it also supports ranges like `192.168.6.11..14:8563`). Please note, that the `CONNECTION` object must still have the jdbc connection string in `AT`, because the Adapter Script uses a JDBC connection to obtain the metadata when a schema is created or refreshed. For the internally used `IMPORT FROM EXA` statement, the address from `EXA_CONNECTION_STRING` and the username and password from the connection will be used.
-```sql
-CREATE CONNECTION exasol_conn TO 'jdbc:exa:exasol-host:1234' USER 'user' IDENTIFIED BY 'pwd';
-
-CREATE VIRTUAL SCHEMA virtual_exasol USING adapter.jdbc_adapter WITH
- SQL_DIALECT = 'EXASOL'
- CONNECTION_NAME = 'EXASOL_CONN'
- SCHEMA_NAME = 'default'
- IMPORT_FROM_EXA = 'true'
- EXA_CONNECTION_STRING = 'exasol-host:1234';
-```
-
-## Hive
-
-**JDBC driver**:
-The dialect was tested with the Cloudera Hive JDBC driver available on the [Cloudera downloads page](http://www.cloudera.com/downloads). The driver is also available directly from [Simba technologies](http://www.simba.com/), who developed the driver.
-
-When you unpack the JDBC driver archive you will see that there are two variants, JDBC 4.0 and 4.1. We tested with the JDBC 4.1 variant.
-
-You have to specify the following settings when adding the JDBC driver via EXAOperation:
-* Name: `Hive`
-* Main: `com.cloudera.hive.jdbc41.HS2Driver`
-* Prefix: `jdbc:hive2:`
-
-Make sure you upload **all files** of the JDBC driver (over 10 at the time of writing) in EXAOperation **and** to the bucket.
-
-**Adapter script**:
-You have to add all files of the JDBC driver to the classpath using `%jar` as follows (filenames may vary):
-```sql
-CREATE SCHEMA adapter;
-CREATE JAVA ADAPTER SCRIPT jdbc_adapter AS
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- %jar /buckets/bucketfs1/bucket1/hive_metastore.jar;
- %jar /buckets/bucketfs1/bucket1/hive_service.jar;
- %jar /buckets/bucketfs1/bucket1/HiveJDBC41.jar;
- %jar /buckets/bucketfs1/bucket1/libfb303-0.9.0.jar;
- %jar /buckets/bucketfs1/bucket1/libthrift-0.9.0.jar;
- %jar /buckets/bucketfs1/bucket1/log4j-1.2.14.jar;
- %jar /buckets/bucketfs1/bucket1/ql.jar;
- %jar /buckets/bucketfs1/bucket1/slf4j-api-1.5.11.jar;
- %jar /buckets/bucketfs1/bucket1/slf4j-log4j12-1.5.11.jar;
- %jar /buckets/bucketfs1/bucket1/TCLIServiceClient.jar;
- %jar /buckets/bucketfs1/bucket1/zookeeper-3.4.6.jar;
-/
-```
-**Create a virtual schema:**
-```sql
-CREATE CONNECTION hive_conn TO 'jdbc:hive2://hive-host:10000' USER 'hive-usr' IDENTIFIED BY 'hive-pwd';
-
-CREATE VIRTUAL SCHEMA hive_default USING adapter.jdbc_adapter WITH
- SQL_DIALECT = 'HIVE'
- CONNECTION_NAME = 'HIVE_CONN'
- SCHEMA_NAME = 'default';
-```
-
-### Connecting To a Kerberos Secured Hadoop:
-
-Connecting to a Kerberos secured Impala or Hive service only differs in one aspect: You have to a `CONNECTION` object which contains all the relevant information for the Kerberos authentication. This section describes how Kerberos authentication works and how to create such a `CONNECTION`.
-
-#### 0. Understand how it works (optional)
-Both the adapter script and the internally used `IMPORT FROM JDBC` statement support Kerberos authentication. They detect, that the connection is a Kerberos connection by a special prefix in the `IDENTIFIED BY` field. In such case, the authentication will happen using a Kerberos keytab and Kerberos config file (using the JAAS Java API).
-
-The `CONNECTION` object stores all relevant information and files in its fields:
-* The `TO` field contains the JDBC connection string
-* The `USER` field contains the Kerberos principal
-* The `IDENTIFIED BY` field contains the Kerberos configuration file and keytab file (base64 encoded) along with an internal prefix `ExaAuthType=Kerberos;` to identify the `CONNECTION` as a Kerberos `CONNECTION`.
-
-#### 1. Generate the CREATE CONNECTION statement
-In order to simplify the creation of Kerberos `CONNECTION` objects, the [`create_kerberos_conn.py`](https://github.com/EXASOL/hadoop-etl-udfs/blob/master/tools/create_kerberos_conn.py) Python script has been provided. The script requires 5 arguments:
-* `CONNECTION` name (arbitrary name for the new `CONNECTION`)
-* Kerberos principal for Hadoop (i.e., Hadoop user)
-* Kerberos configuration file path (e.g., `krb5.conf`)
-* Kerberos keytab file path, which contains keys for the Kerberos principal
-* JDBC connection string
-
-Example command:
-```
-python tools/create_kerberos_conn.py krb_conn krbuser@EXAMPLE.COM /etc/krb5.conf ./krbuser.keytab \
- 'jdbc:hive2://hive-host.example.com:10000;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive'
-```
-Output:
-```sql
-CREATE CONNECTION krb_conn TO 'jdbc:hive2://hive-host.example.com:10000;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive' USER 'krbuser@EXAMPLE.COM' IDENTIFIED BY 'ExaAuthType=Kerberos;enp6Cg==;YWFhCg=='
-```
-
-#### 2. Create the CONNECTION
-You have to execute the generated `CREATE CONNECTION` statement directly in EXASOL to actually create the Kerberos `CONNECTION` object. For more detailed information about the script, use the help option:
-```
-python tools/create_kerberos_conn.py -h
-```
-
-#### 3. Use the connection when creating a virtual schema
-You can now create a virtual schema using the Kerberos connection created before.
-```sql
-CREATE VIRTUAL SCHEMA hive_default USING adapter.jdbc_adapter WITH
- SQL_DIALECT = 'HIVE'
- CONNECTION_NAME = 'KRB_CONN'
- SCHEMA_NAME = 'default';
-```
-
-## Impala
-
-The Impala dialect is similar to the Hive dialect in most aspects. For this reason we only highlight the differences in this section.
-
-**JDBC driver:**
-
-You have to specify the following settings when adding the JDBC driver via EXAOperation:
-* Name: `Hive`
-* Main: `com.cloudera.impala.jdbc41.Driver`
-* Prefix: `jdbc:impala:`
-
-Make sure you upload **all files** of the JDBC driver (over 10 at the time of writing) in EXAOperation and to the bucket.
-
-**Adapter script**:
-The adapter can be created similar to Hive:
-```sql
-
-CREATE SCHEMA adapter;
-CREATE JAVA ADAPTER SCRIPT jdbc_adapter AS
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- %jar /buckets/bucketfs1/bucket1/hive_metastore.jar;
- %jar /buckets/bucketfs1/bucket1/hive_service.jar;
- %jar /buckets/bucketfs1/bucket1/ImpalaJDBC41.jar;
- %jar /buckets/bucketfs1/bucket1/libfb303-0.9.0.jar;
- %jar /buckets/bucketfs1/bucket1/libthrift-0.9.0.jar;
- %jar /buckets/bucketfs1/bucket1/log4j-1.2.14.jar;
- %jar /buckets/bucketfs1/bucket1/ql.jar;
- %jar /buckets/bucketfs1/bucket1/slf4j-api-1.5.11.jar;
- %jar /buckets/bucketfs1/bucket1/slf4j-log4j12-1.5.11.jar;
- %jar /buckets/bucketfs1/bucket1/TCLIServiceClient.jar;
- %jar /buckets/bucketfs1/bucket1/zookeeper-3.4.6.jar;
-/
-```
-
-**Create a virtual schema:**
-You can now create a virtual schema as follows:
-```sql
-CREATE CONNECTION impala_conn TO 'jdbc:impala://impala-host:21050' USER 'impala-usr' IDENTIFIED BY 'impala-pwd';
-
-CREATE VIRTUAL SCHEMA impala_default USING adapter.jdbc_adapter WITH
- SQL_DIALECT = 'IMPALA'
- CONNECTION_NAME = 'IMPALA_CONN'
- SCHEMA_NAME = 'default';
-```
-
-Connecting to a Kerberos secured Impala works similar as for Hive and is described in the section [Connecting To a Kerberos Secured Hadoop](#connecting-to-a-kerberos-secured-hadoop).
-
-## DB2
-
-DB2 was tested with the IBM DB2 JCC Drivers that come with DB2 LUW V10.1 and V11. As these drivers didn't have any major changes in the past years any DB2 driver should work (back to V9.1). The driver comes with 2 different implementations `db2jcc.jar` and `db2jcc4.jar`. All tests were made with the `db2jcc4.jar`.
-
-Additionally there are 2 files for the DB2 Driver.
-* `db2jcc_license_cu.jar` - License File for DB2 on Linux Unix and Windows
-* `db2jcc_license_cisuz.jar` - License File for DB2 on zOS (Mainframe)
-
-Make sure that you upload the necessary license file for the target platform you want to connect to.
-
-**Supported capabilities**:
-The db2 dialect handles some casts in regards of time data types and functions.
-
-Casting of Data Types
-* `TIMESTAMP` and `TIMESTAMP(x)` will be cast to `VARCHAR` to not lose precision.
-* `VARCHAR` and `CHAR` for bit data will be cast to a hex string with double the original size
-* `TIME` will be cast to `VARCHAR(8)`
-* `XML` will be cast to `VARCHAR(DB2_MAX_LENGTH)`
-* `BLOB` is not supported
-
-Casting of Functions
-* `LIMIT` will replaced by `FETCH FIRST x ROWS ONLY`
-* `OFFSET` is currently not supported as only DB2 V11 support this nativly
-* `ADD_DAYS`, `ADD_WEEKS` ... will be replaced by `COLUMN + DAYS`, `COLUMN + ....`
-
-
-**JDBC driver:**
-You have to specify the following settings when adding the JDBC driver via EXAOperation:
-* Name: `DB2`
-* Main: `com.ibm.db2.jcc.DB2Driver`
-* Prefix: `jdbc:db2:`
-
-**Adapter script**
-```sql
-CREATE or replace JAVA ADAPTER SCRIPT adapter.jdbc_adapter AS
-
- // This is the class implementing the callback method of the adapter script
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- // This will add the adapter jar to the classpath so that it can be used inside the adapter script
- // Replace the names of the bucketfs and the bucket with the ones you used.
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- // DB2 Driver files
- %jar /buckets/bucketfs1/bucket1/db2jcc4.jar;
- %jar /buckets/bucketfs1/bucket1/db2jcc_license_cu.jar;
- // uncomment for mainframe connection and upload db2jcc_license_cisuz.jar;
- //%jar /buckets/bucketfs1/bucket1/db2jcc_license_cisuz.jar;
-/
-```
-
-**Create a virtual schema**
-You can now create a virtual schema as follows:
-```sql
-create or replace connection DB2_CON to 'jdbc:db2://host:port/database' user 'db2-usr' identified by 'db2-pwd';
-
-create virtual schema db2 using adapter.jdbc_adapter with
- SQL_DIALECT = 'DB2'
- CONNECTION_NAME = 'DB2_CON'
- SCHEMA_NAME = ''
-;
-```
-
-`` has to be replaced by the actual db2 schema you want to connect to.
-
-**Running the DB2 integration tests**
-A how to has been included in the [setup sql file](../integration-test-data/db2-testdata.sql)
-
-## Oracle
-**Supported capabilities**:
-The Oracle dialect does not support all capabilities. A complete list can be found in [OracleSqlDialect.getCapabilities()](../virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/OracleSqlDialect.java).
-
-Oracle datatypes are mapped to their equivalents in Exasol. The following exceptions apply:
-- `NUMBER`, `NUMBER with precision > 36` and `LONG` are casted to `VARCHAR` to prevent a loss of precision.
-- `DATE` is casted to `TIMESTAMP`. This datatype is only supported for positive year values, i.e., years > 0001.
-- `TIMESTAMP WITH [LOCAL] TIME ZONE` is casted to `VARCHAR`. Exasol does not support timestamps with time zone information.
-- `INTERVAL` is casted to `VARCHAR`.
-- `CLOB`, `NCLOB` and `BLOB` are casted to `VARCHAR`.
-- `RAW` and `LONG RAW` are not supported.
-
-
-### JDBC driver
-To setup a virtual schema that communicates with an Oracle database using JDBC, the JDBC driver, e.g., `ojdbc7-12.1.0.2.jar`, must first be installed in EXAoperation and deployed to BucketFS; see [this article](https://www.exasol.com/support/browse/SOL-179#WhichJDBCdriverforOracleshallIuse?) and [Deploying the Adapter Step By Step](deploy-adapter.md) for instructions.
-
-**Adapter script**:
-After uploading the adapter jar we are ready to create an Oracle adapter script. Adapt the following script as indicated.
-```sql
-CREATE SCHEMA adapter;
-CREATE JAVA ADAPTER SCRIPT adapter.jdbc_oracle AS
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- // You need to replace `your-bucket-fs` and `your-bucket` to match the actual location
- // of the adapter jar.
- %jar /buckets/your-bucket-fs/your-bucket/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- // Add the oracle jdbc driver to the classpath
- %jar /buckets/bucketfs1/bucket1/ojdbc7-12.1.0.2.jar
-/
-```
-
-**JDBC Connection**:
-Next, create a JDBC connection to your Oracle database. Adjust the properties to match your environment.
-```sql
-CREATE CONNECTION jdbc_oracle
- TO 'jdbc:oracle:thin:@//:/'
- USER ''
- IDENTIFIED BY '';
-```
-
-A quick option to test the `JDBC_ORACLE` connection is to run an `IMPORT FROM JDBC` query. The connection works, if `42` is returned.
-```sql
-IMPORT FROM JDBC AT jdbc_oracle
- STATEMENT 'SELECT 42 FROM DUAL';
-```
-
-**Virtual schema:**
-Having created both a JDBC adapter script and a JDBC oracle connection, we are ready to create a virtual schema. Insert the name of the schema that you want to expose in Exasol.
-```sql
-CREATE VIRTUAL SCHEMA virt_oracle USING adapter.jdbc_oracle WITH
- SQL_DIALECT = 'ORACLE'
- CONNECTION_NAME = 'JDBC_ORACLE'
- SCHEMA_NAME = '';
-```
-
-### Use IMPORT FROM ORA instead of IMPORT FROM JDBC**
-Exasol provides the `IMPORT FROM ORA` command for loading data from Oracle. It is possible to create a virtual schema that uses `IMPORT FROM ORA` instead of JDBC to communicate with Oracle. Both options are indented to support the same features. `IMPORT FROM ORA` almost always offers better performance since it is implemented natively.
-
-This behaviour is toggled by the Boolean `IMPORT_FROM_ORA` variable. Note that a JDBC connection to Oracle is still required to fetch metadata. In addition, a "direct" connection to the Oracle database is needed.
-
-**Deploy the Oracle Instant Client**:
-To be able to communicate with Oracle, you first need to supply Exasol with the Oracle Instant Client, which can be obtained [directly from Oracle](http://www.oracle.com/technetwork/database/database-technologies/instant-client/overview/index.html). Open EXAoperation, visit Software -> "Upload Oracle Instant Client" and select the downloaded package. The latest version of Oracle Instant Client we tested is `instantclient-basic-linux.x64-12.1.0.2.0`.
-
-**Create an Oracle Connection**:
-Having deployed the Oracle Instant Client, a connection to your Oracle database can be set up.
-```sql
-CREATE CONNECTION conn_oracle
- TO '(DESCRIPTION =
- (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)
- (HOST = )
- (PORT = )))
- (CONNECT_DATA = (SERVER = DEDICATED)
- (SERVICE_NAME = )))'
- USER ''
- IDENTIFIED BY '';
-```
-
-This connection can be tested using, e.g., the following SQL expression.
-```sql
-IMPORT FROM ORA at CONN_ORACLE
- STATEMENT 'SELECT 42 FROM DUAL';
-```
-
-**Virtual schema**:
-Assuming you already setup the JDBC connection `JDBC_ORACLE` as shown in the previous section, you can continue with creating the virtual schema.
-```sql
-CREATE VIRTUAL SCHEMA virt_import_oracle USING adapter.jdbc_oracle WITH
- SQL_DIALECT = 'ORACLE'
- CONNECTION_NAME = 'JDBC_ORACLE'
- SCHEMA_NAME = ''
- IMPORT_FROM_ORA = 'true'
- EXA_CONNECTION_NAME = 'CONN_ORACLE';
-```
-
-## Teradata
-
-**JDBC driver:**
-You have to specify the following settings when adding the JDBC driver via EXAOperation:
-* Name: `TERADATA`
-* Main: `com.teradata.jdbc.TeraDriver`
-* Prefix: `jdbc:teradata:`
-* Files: `terajdbc4.jar`, `tdgssconfig.jar`
-
-Please also upload the jar files to a bucket for the adapter script.
-
-**Adapter script**
-```sql
-CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
- AS
-
- // This is the class implementing the callback method of the adapter script
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- // This will add the adapter jar to the classpath so that it can be used inside the adapter script
- // Replace the names of the bucketfs and the bucket with the ones you used.
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
- %jar /buckets/bucketfs1/bucket1/terajdbc4.jar;
- %jar /buckets/bucketfs1/bucket1/tdgssconfig.jar;
-
-/
-```
-
-**Create a virtual schema**
-```sql
-CREATE VIRTUAL SCHEMA TERADATA_financial USING adapter.jdbc_adapter
-WITH
- SQL_DIALECT = 'TERADATA'
- CONNECTION_NAME = 'TERADATA_CONNECTION'
- SCHEMA_NAME = 'financial'
-;
-```
-
-## Redshift
-
-**JDBC driver:**
-
-You have to specify the following settings when adding the JDBC driver via EXAOperation:
-* Name: `REDSHIFT`
-* Main: `com.amazon.redshift.jdbc.Driver`
-* Prefix: `jdbc:redshift:`
-* Files: `RedshiftJDBC42-1.2.1.1001.jar`
-
-Please also upload the driver jar into a bucket for the adapter script.
-
-**Adapter script**
-```sql
-CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
- AS
-
- // This is the class implementing the callback method of the adapter script
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- // This will add the adapter jar to the classpath so that it can be used inside the adapter script
- // Replace the names of the bucketfs and the bucket with the ones you used.
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
-
- %jar /buckets/bucketfs1/bucket1/RedshiftJDBC42-1.2.1.1001.jar;
-
-/
-```
-
-**Create a virtual schema**
-```sql
-CREATE VIRTUAL SCHEMA redshift_tickit
- USING adapter.jdbc_adapter
- WITH
- SQL_DIALECT = 'REDSHIFT'
- CONNECTION_NAME = 'REDSHIFT_CONNECTION'
- CATALOG_NAME = 'database_name'
- SCHEMA_NAME = 'public'
- ;
-```
-
-## Sql Server
-
-**JDBC driver:**
-The Sql Server Dialect was tested with the jdts 1.3.1 JDBC driver and Sql Server 2014.
-As the jdts driver is already preinstalled for the `IMPORT` command itself you only need
-to upload the `jdts.jar` to a bucket for the adapter script.
-
-**Adapter script**
-```sql
-CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.sql_server_jdbc_adapter
- AS
-
- // This is the class implementing the callback method of the adapter script
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- // This will add the adapter jar to the classpath so that it can be used inside the adapter script
- // Replace the names of the bucketfs and the bucket with the ones you used.
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- // You have to add all files of the data source jdbc driver here
- %jar /buckets/bucketfs1/bucket1/jtds.jar;
-/
-```
-
-**Create a virtual schema**
-```sql
-CREATE VIRTUAL SCHEMA VS_SQLSERVER USING adapter.sql_server_jdbc_adapter
-WITH
- SQL_DIALECT = 'SQLSERVER'
- CONNECTION_NAME = 'SQLSERVER_CONNECTION'
- CATALOG_NAME = 'MyDatabase'
- SCHEMA_NAME = 'dbo'
-;
-```
-
-## PostgreSQL
-
-**JDBC driver:**
-The PostgreSQL dialect was tested with JDBC driver version 42.0.0 and PostgreSQL 9.6.2 .
-
-**Adapter script**
-```sql
-CREATE OR REPLACE JAVA ADAPTER SCRIPT adapter.jdbc_adapter
- AS
-
- // This is the class implementing the callback method of the adapter script
- %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;
-
- // This will add the adapter jar to the classpath so that it can be used inside the adapter script
- // Replace the names of the bucketfs and the bucket with the ones you used.
- %jar /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar;
-
- // You have to add all files of the data source jdbc driver here (e.g. MySQL or Hive)
- %jar /buckets/bucketfs1/bucket1/postgresql-42.0.0.jar;
-
-/
-```
-
-**Create a virtual schema**
-```sql
-CREATE VIRTUAL SCHEMA postgres
- USING adapter.jdbc_adapter
- WITH
- SQL_DIALECT = 'POSTGRESQL'
- CATALOG_NAME = 'postgres'
- SCHEMA_NAME = 'public'
- CONNECTION_NAME = 'POSTGRES_DOCKER'
- ;
-```
-
-## Generic
diff --git a/jdbc-adapter/doc/supported_sql_dialects.md b/jdbc-adapter/doc/supported_sql_dialects.md
new file mode 100644
index 000000000..34c04dcb3
--- /dev/null
+++ b/jdbc-adapter/doc/supported_sql_dialects.md
@@ -0,0 +1,26 @@
+# Supported Dialects
+
+The purpose of this page is to provide detailed instructions on how to get started with each of the supported dialects. Typical questions are:
+* Which **JDBC driver** is used, and which files have to be uploaded and included when creating the adapter script?
+* What does the **CREATE VIRTUAL SCHEMA** statement look like, i.e., which properties are required?
+* **Data source specific notes**, like authentication with Kerberos, supported capabilities, or things to consider regarding the data type mapping.
+
+As an entry point, we recommend that you follow the [step-by-step deployment guide](deploying_the_virtual_schema_adapter.md), which links to this page whenever needed.
+
+## Before you Start
+
+Please note that not all SQL clients recognize the syntax for creating adapter scripts, [DBeaver](https://dbeaver.io/) being one example. If you run into this problem, try a different client.
+
+## List of Supported Dialects
+
+1. [EXASOL](sql_dialects/exasol.md)
+1. [Hive](sql_dialects/hive.md)
+1. [Impala](sql_dialects/impala.md)
+1. [DB2](sql_dialects/db2.md)
+1. [Oracle](sql_dialects/oracle.md)
+1. [Teradata](sql_dialects/teradata.md)
+1. [Redshift](sql_dialects/redshift.md)
+1. [SQL Server](sql_dialects/sql_server.md)
+1. [Sybase ASE](sql_dialects/sybase.md)
+1. [PostgreSQL](sql_dialects/postgresql.md)
+1. Generic
\ No newline at end of file
diff --git a/jdbc-adapter/integration-test-data/integration-test-db2.yaml b/jdbc-adapter/integration-test-data/integration-test-db2.yaml
index 7ab4f7dd8..020c678a7 100644
--- a/jdbc-adapter/integration-test-data/integration-test-db2.yaml
+++ b/jdbc-adapter/integration-test-data/integration-test-db2.yaml
@@ -5,7 +5,7 @@ general:
debugAddress: '192.168.0.12:3000' # Address which will be defined as DEBUG_ADDRESS in the virtual schemas
bucketFsUrl: http://exasol-host:2580/bucket1
bucketFsPassword: bucket1
- jdbcAdapterPath: /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar
+ jdbcAdapterPath: /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar
exasol:
runIntegrationTests: true
diff --git a/jdbc-adapter/integration-test-data/integration-test-sample.yaml b/jdbc-adapter/integration-test-data/integration-test-sample.yaml
index adb74b1d6..95cda3fe2 100644
--- a/jdbc-adapter/integration-test-data/integration-test-sample.yaml
+++ b/jdbc-adapter/integration-test-data/integration-test-sample.yaml
@@ -5,7 +5,7 @@ general:
debugAddress: '192.168.0.12:3000' # Address which will be defined as DEBUG_ADDRESS in the virtual schemas
bucketFsUrl: http://exasol-host:2580/bucket1
bucketFsPassword: bucket1
- jdbcAdapterPath: /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar
+ jdbcAdapterPath: /buckets/bucketfs1/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar
exasol:
runIntegrationTests: true
@@ -87,3 +87,9 @@ hive:
user: user
password: pass
+sybase:
+ runIntegrationTests: false
+ jdbcDriverPath: /buckets/mybucketfs/mybucket/jtds-1.3.1.jar
+ connectionString: jdbc:jtds:sybase://127.0.0.1:5000/db
+ user: sybase-user
+ password: sybase-password
diff --git a/jdbc-adapter/integration-test-data/integration-test-travis.yaml b/jdbc-adapter/integration-test-data/integration-test-travis.yaml
index 415e6ae83..fff001e94 100644
--- a/jdbc-adapter/integration-test-data/integration-test-travis.yaml
+++ b/jdbc-adapter/integration-test-data/integration-test-travis.yaml
@@ -5,7 +5,7 @@ general:
debugAddress: ''
bucketFsUrl: http://127.0.0.1:6594/default
bucketFsPassword: write
- jdbcAdapterPath: /buckets/bfsdefault/default/virtualschema-jdbc-adapter-dist-1.0.2-SNAPSHOT.jar
+ jdbcAdapterPath: /buckets/bfsdefault/default/virtualschema-jdbc-adapter-dist-1.1.0.jar
exasol:
runIntegrationTests: true
diff --git a/jdbc-adapter/integration-test-data/run_integration_tests.sh b/jdbc-adapter/integration-test-data/run_integration_tests.sh
index 004cac16f..ca03b8935 100755
--- a/jdbc-adapter/integration-test-data/run_integration_tests.sh
+++ b/jdbc-adapter/integration-test-data/run_integration_tests.sh
@@ -1,51 +1,79 @@
#!/usr/bin/env bash
-
# This script executes integration tests as defined in
# integration-test-travis.yaml (currently only Exasol integration tests).
-
+#
# An Exasol instance is run using the exasol/docker-db image. Therefore, a
# working installation of Docker and sudo privileges are required.
set -eux
-
cd "$(dirname "$0")/.."
-config="$(pwd)/integration-test-data/integration-test-travis.yaml"
+readonly config="$(pwd)/integration-test-data/integration-test-travis.yaml"
+readonly exasol_docker_image_version="6.0.10-d1"
+readonly docker_image="exasol/docker-db:$exasol_docker_image_version"
+readonly docker_name="exasoldb"
+readonly tmp="$(mktemp -td exasol-vs-adapter-integration.XXXXXX)" || exit 1
function cleanup() {
docker rm -f exasoldb || true
- sudo rm -rf integration-test-data/exa || true
+ sudo rm -rf "$tmp" || true
}
trap cleanup EXIT
-# Setup directory "exa" with pre-configured EXAConf to attach it to the exasoldb docker container
-mkdir -p integration-test-data/exa/{etc,data/storage}
-cp integration-test-data/EXAConf integration-test-data/exa/etc/EXAConf
-dd if=/dev/zero of=integration-test-data/exa/data/storage/dev.1.data bs=1 count=1 seek=4G
-touch integration-test-data/exa/data/storage/dev.1.meta
-
-docker pull exasol/docker-db:latest
-docker run \
- --name exasoldb \
- -p 8899:8888 \
- -p 6594:6583 \
- --detach \
- --privileged \
- -v "$(pwd)/integration-test-data/exa:/exa" \
- exasol/docker-db:latest \
- init-sc --node-id 11
-
-docker logs -f exasoldb &
-
-# Wait until database is ready
-(docker logs -f --tail 0 exasoldb &) 2>&1 | grep -q -i 'stage4: All stages finished'
-sleep 30
-
-mvn -q clean package
-
-# Load virtualschema-jdbc-adapter jar into BucketFS and wait until it's available.
-mvn -q pre-integration-test -DskipTests -Pit -Dintegrationtest.configfile="$config"
-(docker exec exasoldb sh -c 'tail -f -n +0 /exa/logs/cored/*bucket*' &) | \
- grep -q -i 'File.*virtualschema-jdbc-adapter.*linked'
-
-mvn -q verify -Pit -Dintegrationtest.configfile="$config" -Dintegrationtest.skipTestSetup=true
+main() {
+ prepare_configuration_dir "$tmp/etc"
+ prepare_data_dir "$tmp/data/storage"
+ init_docker
+ check_docker_ready
+ build
+ upload_jar_to_bucket
+ run_tests
+}
+
+prepare_configuration_dir() {
+ mkdir -p "$1"
+ cp integration-test-data/EXAConf "$1/EXAConf"
+}
+
+prepare_data_dir() {
+ mkdir -p "$1"
+ dd if=/dev/zero of="$1/dev.1.data" bs=1 count=1 seek=4G
+ touch "$1/dev.1.meta"
+}
+
+init_docker() {
+ docker pull "$docker_image"
+ docker run \
+ --name "$docker_name" \
+ -p 8899:8888 \
+ -p 6594:6583 \
+ --detach \
+ --privileged \
+ -v "$tmp:/exa" \
+ "$docker_image" \
+ init-sc --node-id 11
+ docker logs -f "$docker_name" &
+}
+
+check_docker_ready() {
+ # Wait until database is ready
+ (docker logs -f --tail 0 "$docker_name" &) 2>&1 | grep -q -i 'stage4: All stages finished'
+ sleep 30
+}
+
+build() {
+ mvn -q clean package
+}
+
+upload_jar_to_bucket() {
+ mvn -q pre-integration-test -DskipTests -Pit -Dintegrationtest.configfile="$config"
+ (docker exec "$docker_name" sh -c 'tail -f -n +0 /exa/logs/cored/*bucket*' &) | \
+ grep -q -i 'File.*virtualschema-jdbc-adapter.*linked'
+}
+
+run_tests() {
+ mvn -q verify -Pit -Dintegrationtest.configfile="$config" -Dintegrationtest.skipTestSetup=true
+}
+
+main "$@"
\ No newline at end of file
diff --git a/jdbc-adapter/integration-test-data/sybase.sql b/jdbc-adapter/integration-test-data/sybase.sql
new file mode 100644
index 000000000..350e08aae
--- /dev/null
+++ b/jdbc-adapter/integration-test-data/sybase.sql
@@ -0,0 +1,165 @@
+DROP TABLE testdb.tester.ittable go
+CREATE TABLE testdb.tester.ittable (
+ a varchar(100),
+ b decimal
+) go
+
+INSERT INTO testdb.tester.ittable (a, b) VALUES('e', 2)
+INSERT INTO testdb.tester.ittable (a, b) VALUES('b', 3)
+INSERT INTO testdb.tester.ittable (a, b) VALUES(NULL, -1)
+INSERT INTO testdb.tester.ittable (a, b) VALUES('a', NULL)
+INSERT INTO testdb.tester.ittable (a, b) VALUES('z', 0)
+INSERT INTO testdb.tester.ittable (a, b) VALUES('z', 0) go
+
+DROP TABLE testdb.tester.timetypes go
+CREATE TABLE testdb.tester.timetypes (
+ c_smalldatetime smalldatetime,
+ c_datetime datetime,
+ c_date date,
+ c_time time,
+ c_bigdatetime bigdatetime, -- error data truncation
+ c_bigtime bigtime
+) go
+
+INSERT INTO testdb.tester.timetypes
+ VALUES('1.1.1900 01:02',
+ '1.1.1753 01:02:03.100',
+ '12/3/2032',
+ '11:22:33.456',
+ '6.4.1553 11:11:11.111111',
+ '11:11:11.111111'
+ )
+go
+
+
+-- https://help.sap.com/viewer/b65d6a040c4a4709afd93068071b2a76/16.0.3.5/en-US/aa354eb4bc2b101495d29877b5bd3c5b.html
+DROP TABLE testdb.tester.integertypes go
+CREATE TABLE testdb.tester.integertypes (
+ c_bigint bigint,
+ c_int int,
+ c_smallint smallint,
+ c_ubigint unsigned bigint,
+ c_uint unsigned int,
+ c_usmallint unsigned smallint
+) go
+
+INSERT INTO testdb.tester.integertypes
+ VALUES(-9223372036854775808,
+ -2147483648,
+ -32768,
+ 0,
+ 0,
+ 0
+ )
+INSERT INTO testdb.tester.integertypes
+ VALUES(9223372036854775807,
+ 2147483647,
+ 32767,
+ 18446744073709551615,
+ 4294967295,
+ 65535
+ )
+go
+
+
+-- https://help.sap.com/viewer/b65d6a040c4a4709afd93068071b2a76/16.0.3.5/en-US/aa357b76bc2b1014ba159ac9d0074e1d.html
+DROP TABLE testdb.tester.decimaltypes go
+CREATE TABLE testdb.tester.decimaltypes (
+ c_numeric_36_0 numeric(36, 0),
+ c_numeric_38_0 numeric(38, 0),
+ c_decimal_20_10 decimal(20, 10),
+ c_decimal_37_10 decimal(37, 10)
+) go
+
+INSERT INTO testdb.tester.decimaltypes
+VALUES(12345678901234567890123456,
+ 1234567890123456789012345678,
+ 1234567890.0123456789,
+ 12345678901234567.0123456789
+)
+INSERT INTO testdb.tester.decimaltypes
+ VALUES(-12345678901234567890123456,
+ -1234567890123456789012345678,
+ -1234567890.0123456789,
+ -12345678901234567.0123456789
+ )
+go
+
+
+-- https://help.sap.com/viewer/b65d6a040c4a4709afd93068071b2a76/16.0.3.5/en-US/aa357b76bc2b1014ba159ac9d0074e1d.html
+-- FLOAT(p) is alias for either DOUBLE PRECISION or REAL. If p < 16, FLOAT is stored as REAL, if p >= 16, FLOAT is stored as DOUBLE PRECISION.
+DROP TABLE testdb.tester.approxtypes go
+CREATE TABLE testdb.tester.approxtypes (
+ c_double double precision,
+ c_real real,
+) go
+
+INSERT INTO testdb.tester.approxtypes VALUES(
+ 2.2250738585072014e-308,
+ 1.175494351e-38
+)
+INSERT INTO testdb.tester.approxtypes VALUES(
+ 1.797693134862315708e+308,
+ 3.402823466e+38
+)
+go
+
+
+DROP TABLE testdb.tester.moneytypes go
+CREATE TABLE testdb.tester.moneytypes (
+ c_smallmoney smallmoney,
+ c_money money,
+) go
+
+INSERT INTO testdb.tester.moneytypes VALUES(
+ 214748.3647,
+ 922337203685477.5807
+)
+INSERT INTO testdb.tester.moneytypes VALUES(
+ -214748.3648,
+ -922337203685477.5808
+)
+go
+
+
+-- https://help.sap.com/viewer/b65d6a040c4a4709afd93068071b2a76/16.0.3.5/en-US/aa362f6cbc2b1014b1ed808e2a54e693.html
+DROP TABLE testdb.tester.chartypes go
+CREATE TABLE testdb.tester.chartypes (
+ c_char_10 char(10),
+ c_char_toobig char(2001),
+ c_varchar varchar(10), -- maximum size in Sybase is 16384 -> smaller than Exasol's limit
+ c_unichar_10 unichar(10), -- NOT right-padded with spaces
+ c_unichar_toobig unichar(8192), -- NOT right-padded with spaces
+ c_univarchar univarchar(10), -- maximum size is 8192
+ c_nchar nchar(10), -- maximum size in Sybase is 16384. NOT right-padded with spaces.
+ c_nvarchar nvarchar(10), -- maximum size in Sybase is 16384
+ c_text text,
+ c_unitext unitext
+) go
+
+INSERT INTO testdb.tester.chartypes VALUES(
+ 'abcd',
+ 'Lorem ipsum dolor sit amet... rest is zero.',
+ 'Lorem.',
+ 'Ipsum.',
+ 'xyz',
+ 'Dolor.',
+ 'Sit.',
+ 'Amet.',
+ 'Text. A wall of text.',
+ 'Text. A wall of Unicode text.'
+) go
+
+
+DROP TABLE testdb.tester.misctypes go
+CREATE TABLE testdb.tester.misctypes (
+ c_binary binary(10), -- n <= 255
+ c_varbinary varbinary(10),
+ c_image image,
+ c_bit bit NOT NULL
+) go
+
+INSERT INTO testdb.tester.misctypes VALUES(
+ 0xdeadbeef,
+ 0xdeadbeef,
+ 0xdeadbeef,
+ 0
+) go
diff --git a/jdbc-adapter/integration-test-data/sybase/sybase-create-tables.sql b/jdbc-adapter/integration-test-data/sybase/sybase-create-tables.sql
new file mode 100644
index 000000000..e3fb3cbf4
--- /dev/null
+++ b/jdbc-adapter/integration-test-data/sybase/sybase-create-tables.sql
@@ -0,0 +1,69 @@
+USE testdb go
+sp_adduser 'tester' go
+SETUSER 'tester' go
+CREATE SCHEMA AUTHORIZATION tester
+ CREATE TABLE ittable (
+ a varchar(100) null,
+ b decimal null
+ )
+ CREATE TABLE timetypes (
+ c_smalldatetime smalldatetime,
+ c_datetime datetime,
+ c_date date,
+ c_time time,
+ c_bigdatetime bigdatetime, -- error data truncation
+ c_bigtime bigtime
+ )
+ -- https://help.sap.com/viewer/b65d6a040c4a4709afd93068071b2a76/16.0.3.5/en-US/aa357b76bc2b1014ba159ac9d0074e1d.html
+ -- FLOAT(p) is alias for either DOUBLE PRECISION or REAL.
+ -- If p < 16, FLOAT is stored as REAL, if p >= 16, FLOAT is stored as DOUBLE PRECISION.
+ CREATE TABLE approxtypes (
+ c_double double precision,
+ c_real real,
+ )
+ -- https://help.sap.com/viewer/b65d6a040c4a4709afd93068071b2a76/16.0.3.5/en-US/aa357b76bc2b1014ba159ac9d0074e1d.html
+ CREATE TABLE decimaltypes (
+ c_numeric_36_0 numeric(36, 0),
+ c_numeric_38_0 numeric(38, 0),
+ c_decimal_20_10 decimal(20, 10),
+ c_decimal_37_10 decimal(37, 10)
+ )
+ CREATE TABLE integertypes (
+ c_bigint bigint,
+ c_int int,
+ c_smallint smallint,
+ c_ubigint unsigned bigint,
+ c_uint unsigned int,
+ c_usmallint unsigned smallint
+ )
+ CREATE TABLE moneytypes (
+ c_smallmoney smallmoney,
+ c_money money,
+ )
+ -- https://help.sap.com/viewer/b65d6a040c4a4709afd93068071b2a76/16.0.3.5/en-US/aa362f6cbc2b1014b1ed808e2a54e693.html
+ CREATE TABLE chartypes (
+ c_char_10 char(10),
+ c_char_toobig char(2001),
+ c_varchar varchar(10), -- maximum size in Sybase is 16384 -> smaller than Exasol's limit
+ c_unichar_10 unichar(10),
+ c_univarchar univarchar(10), -- maximum size is 8192
+ c_nchar nchar(10), -- maximum size in Sybase is 16384. NOT right-padded with spaces.
+ c_nvarchar nvarchar(10), -- maximum size in Sybase is 16384
+ )
+ -- NOT right-padded with spaces.
+ -- While the theoretical maximum is 8192 unichars, effectively only 8148 are possible because
+ -- Sybase otherwise complains that the maximum row width is exceeded.
+ CREATE TABLE fatunichartypes (
+ c_unichar_toobig unichar(8148)
+ )
+ CREATE TABLE texttypes (
+ c_text text,
+ c_unitext unitext
+ )
+ CREATE TABLE misctypes (
+ c_binary binary(10), -- n <= 255
+ c_varbinary varbinary(10),
+ c_image image,
+ c_bit bit NOT NULL
+ )
+go
\ No newline at end of file
diff --git a/jdbc-adapter/integration-test-data/sybase/sybase-drop-tables.sql b/jdbc-adapter/integration-test-data/sybase/sybase-drop-tables.sql
new file mode 100644
index 000000000..7d4c763a5
--- /dev/null
+++ b/jdbc-adapter/integration-test-data/sybase/sybase-drop-tables.sql
@@ -0,0 +1,13 @@
+USE testdb go
+
+DROP TABLE ittable go
+DROP TABLE timetypes go
+DROP TABLE integertypes go
+DROP TABLE decimaltypes go
+DROP TABLE approxtypes go
+DROP TABLE moneytypes go
+DROP TABLE chartypes go
+DROP TABLE fatunichartypes go
+DROP TABLE texttypes go
+DROP TABLE misctypes go
+SELECT * FROM sysobjects WHERE type = 'U' go
\ No newline at end of file
diff --git a/jdbc-adapter/integration-test-data/sybase/sybase-populate-tables.sql b/jdbc-adapter/integration-test-data/sybase/sybase-populate-tables.sql
new file mode 100644
index 000000000..72da153f3
--- /dev/null
+++ b/jdbc-adapter/integration-test-data/sybase/sybase-populate-tables.sql
@@ -0,0 +1,94 @@
+USE testdb go
+
+TRUNCATE TABLE tester.ittable go
+TRUNCATE TABLE tester.timetypes go
+TRUNCATE TABLE tester.integertypes go
+TRUNCATE TABLE tester.decimaltypes go
+TRUNCATE TABLE tester.approxtypes go
+TRUNCATE TABLE tester.moneytypes go
+TRUNCATE TABLE tester.chartypes go
+TRUNCATE TABLE tester.fatunichartypes go
+TRUNCATE TABLE tester.texttypes go
+TRUNCATE TABLE tester.misctypes go
+
+INSERT INTO tester.ittable (a, b) VALUES('e', 2) go
+INSERT INTO tester.ittable (a, b) VALUES('b', 3) go
+INSERT INTO tester.ittable (a, b) VALUES(NULL, -1) go
+INSERT INTO tester.ittable (a, b) VALUES('a', NULL) go
+INSERT INTO tester.ittable (a, b) VALUES('z', 0) go
+INSERT INTO tester.ittable (a, b) VALUES('z', 0) go
+INSERT INTO tester.timetypes VALUES(
+ '1.1.1900 01:02',
+ '1.1.1753 01:02:03.100',
+ '12/3/2032',
+ '11:22:33.456',
+ '6.4.1553 11:11:11.111111',
+ '11:11:11.111111'
+) go
+INSERT INTO tester.approxtypes VALUES(
+ 2.2250738585072014e-308,
+ 1.175494351e-38
+) go
+INSERT INTO tester.approxtypes VALUES(
+ 1.797693134862315708e+308,
+ 3.402823466e+38
+) go
+INSERT INTO tester.decimaltypes VALUES(
+ 12345678901234567890123456,
+ 1234567890123456789012345678,
+ 1234567890.0123456789,
+ 12345678901234567.0123456789
+) go
+INSERT INTO tester.decimaltypes VALUES(
+ -12345678901234567890123456,
+ -1234567890123456789012345678,
+ -1234567890.0123456789,
+ -12345678901234567.0123456789
+) go
+INSERT INTO tester.integertypes VALUES(
+ -9223372036854775808,
+ -2147483648,
+ -32768,
+ 0,
+ 0,
+ 0
+) go
+INSERT INTO tester.integertypes VALUES(
+ 9223372036854775807,
+ 2147483647,
+ 32767,
+ 18446744073709551615,
+ 4294967295,
+ 65535
+) go
+INSERT INTO tester.moneytypes VALUES(
+ 214748.3647,
+ 922337203685477.5807
+) go
+INSERT INTO tester.moneytypes VALUES(
+ -214748.3648,
+ -922337203685477.5808
+) go
+INSERT INTO tester.chartypes VALUES(
+ 'c10',
+ 'c2001',
+ 'vc10',
+ 'uc10',
+ 'uvc10',
+ 'nc10',
+ 'nvc10'
+) go
+INSERT INTO tester.fatunichartypes VALUES(
+ 'xyz'
+) go
+INSERT INTO tester.texttypes VALUES(
+ 'Text. A wall of text.',
+ 'Text. A wall of Unicode text.'
+) go
+INSERT INTO tester.misctypes VALUES(
+ 0xdeadbeef,
+ 0xdeadbeef,
+ 0xdeadbeef,
+ 0
+) go
+COMMIT go
\ No newline at end of file
diff --git a/jdbc-adapter/integration-test-data/sybase/sybase-prepare-database.sql b/jdbc-adapter/integration-test-data/sybase/sybase-prepare-database.sql
new file mode 100644
index 000000000..763af100d
--- /dev/null
+++ b/jdbc-adapter/integration-test-data/sybase/sybase-prepare-database.sql
@@ -0,0 +1,19 @@
+USE master go
+
+-- Initialize a data partition
+DISK INIT
+ name = 'data_dev1',
+ physname = 'data_dev1.dat',
+ size = '100M'
+go
+
+-- Initialize a database log partition
+DISK INIT
+ name = 'log_dev1',
+ physname = 'log_dev1.dat',
+ size = '25M'
+go
+
+--DROP DATABASE testdb go
+CREATE DATABASE testdb ON data_dev1='25M' LOG ON log_dev1='5M' go
+sp_addlogin 'tester', 'tester' go
\ No newline at end of file
diff --git a/jdbc-adapter/launch/Virtual-Schema_all_tests.launch b/jdbc-adapter/launch/Virtual-Schema_all_tests.launch
new file mode 100644
index 000000000..1704dc40c
--- /dev/null
+++ b/jdbc-adapter/launch/Virtual-Schema_all_tests.launch
@@ -0,0 +1,22 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/jdbc-adapter/local/integration-test-config.yaml b/jdbc-adapter/local/integration-test-config.yaml
new file mode 100644
index 000000000..f75ec8db0
--- /dev/null
+++ b/jdbc-adapter/local/integration-test-config.yaml
@@ -0,0 +1,89 @@
+# Configuration file for integration tests
+
+general:
+ debug: false
+ debugAddress: '10.44.1.228:3000' # Address which will be defined as DEBUG_ADDRESS in the virtual schemas
+ bucketFsUrl: http://localhost:2580/jars
+ bucketFsPassword: public
+ jdbcAdapterPath: /buckets/bfsdefault/jars/virtualschema-jdbc-adapter-dist-1.1.0.jar
+
+exasol:
+ runIntegrationTests: true
+ address: localhost:8563
+ user: sys
+ password: exasol
+
+
+
+# Generic sql dialect is tested via MySQL
+generic:
+ runIntegrationTests: false
+ jdbcDriverPath: /buckets/bfsdefault/jars/mysql-connector-java-8.0.12.jar
+ connectionString: jdbc:mysql://localhost/virtual-schema-integration-test
+ user: virtual-schema-integration-test
+ password: password
+
+oracle:
+ runIntegrationTests: false
+ jdbcDriverPath: /buckets/mybucketfs/mybucket/oracle/ojdbc7.jar
+ connectionString: jdbc:oracle:thin:@oracle-host:1521:orcl
+ user: myuser
+ password: mypass
+
+
+impala:
+ runIntegrationTests: false
+ connectionString: jdbc:impala://impala-host:21050;AuthMech=0
+ jdbcDriverPath: /buckets/mybucketfs/mybucket/Cloudera_Impala_JDBC_2_5_28.1047_Driver/
+ jdbcDriverJars:
+ - hive_metastore.jar
+ - hive_service.jar
+ - ImpalaJDBC41.jar
+ - libfb303-0.9.0.jar
+ - libthrift-0.9.0.jar
+ - log4j-1.2.14.jar
+ - ql.jar
+ - slf4j-api-1.5.11.jar
+ - slf4j-log4j12-1.5.11.jar
+ - TCLIServiceClient.jar
+ - zookeeper-3.4.6.jar
+
+
+kerberos:
+ runIntegrationTests: false
+ jdbcDriverPath: /buckets/mybucketfs/mybucket/cloudera-hive-jdbc-driver/
+ jdbcDriverJars:
+ - HiveJDBC41.jar
+ - hive_metastore.jar
+ - hive_service.jar
+ - libfb303-0.9.0.jar
+ - libthrift-0.9.0.jar
+ - log4j-1.2.14.jar
+ - ql.jar
+ - slf4j-api-1.5.11.jar
+ - slf4j-log4j12-1.5.11.jar
+ - TCLIServiceClient.jar
+ - zookeeper-3.4.6.jar
+ connectionString: jdbc:hive2://hadoop-host.yourcompany.com:10000/;AuthMech=1;KrbRealm=YOURCOMPANY.COM;KrbHostFQDN=hadoop-host.yourcompany.com;KrbServiceName=hive
+ user: testuser@YOURCOMPANY.COM
+ password: ExaAuthType=Kerberos;X3xpYmRlZmF1bHRzXQpkZWZhdWx0X3JlYWxtID0gT01HLkRFVi5FWEFTT0wuQ09NCmRuc19jYW5vbmljYWxpemVfaG9zdG5hbWUgPSBmYWxzZQpkbnNfbG9va3VwX2tkYyA9IGZhbHNlCmRuc19sb29rdXBfcmVhbG0gPSBmYWxzZQp0aWNrZXRfbGlmZXRpbWUgPSA4NjQwMApyZW5ld19saWZldGltZSA9IDYwNDgwMApmb3J3YXJkYWJsZSA9IHRydWUKZGVmYXVsdF90Z3NfZW5jdHlwZXMgPSBhcmNmb3VyLWhtYWMKZGVmYXVsdF90a3RfZW5jdHlwZXMgPSBhcmNmb3VyLWhtYWMKcGVybWl0dGVkX2VuY3R5cGVzID0gYXJjZm91ci1obWFjCnVkcF9wcmVmZXJlbmNlX2xpbWl0ID0gMQpbcmVhbG1zXQpPTUcuREVWLkVYQVNPTC5DT00gPSB7CmtkYyA9IGhhZG9vcDAxLm9tZy5kZXYuZXhhc29sLmNvbQphZG1pbl9zZXJ2ZXIgPSBoYWRvb3AwMS5vbWcuZGV2LmV4YXNvbC5jb20KfQo=;BQIAAABBAAEAEk9NRy5ERVYuRVhBU09MLkNPTQAMaGFkb29wdGVzdGVyAAAAAVYo0X0BABcAEGuPtGr6sYdhUEbTqhYQ3E0=
+
+hive:
+ runIntegrationTests: false
+ jdbcDriverPath: /buckets/mybucketfs/mybucket/cloudera-hive-jdbc-driver/
+ jdbcDriverJars:
+ - HiveJDBC41.jar
+ - hive_metastore.jar
+ - hive_service.jar
+ - libfb303-0.9.0.jar
+ - libthrift-0.9.0.jar
+ - log4j-1.2.14.jar
+ - ql.jar
+ - slf4j-api-1.5.11.jar
+ - slf4j-log4j12-1.5.11.jar
+ - TCLIServiceClient.jar
+ - zookeeper-3.4.6.jar
+ connectionString: jdbc:hive2://hive-host:10000
+ user: user
+ password: pass
+
diff --git a/jdbc-adapter/local/logging.properties b/jdbc-adapter/local/logging.properties
new file mode 100644
index 000000000..b0d506f9f
--- /dev/null
+++ b/jdbc-adapter/local/logging.properties
@@ -0,0 +1,14 @@
+handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
+.level=INFO
+java.util.logging.ConsoleHandler.level=ALL
+java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
+
+java.util.logging.FileHandler.level = ALL
+java.util.logging.FileHandler.pattern=/home/seb/logs/virtual_schema.log
+java.util.logging.FileHandler.limit=50000
+java.util.logging.FileHandler.count=1
+java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
+
+java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS.%1$tL %4$-7s [%3$s] %5$s %6$s%n
+
+com.exasol.level=FINE
\ No newline at end of file
diff --git a/jdbc-adapter/pom.xml b/jdbc-adapter/pom.xml
index d4e1ca4e6..09ada3232 100644
--- a/jdbc-adapter/pom.xml
+++ b/jdbc-adapter/pom.xml
@@ -3,7 +3,7 @@
4.0.0com.exasolvirtualschema-jdbc-adapter-main
- 1.0.2-SNAPSHOT
+ ${product.version}pom
@@ -14,7 +14,9 @@
+ 1.1.0UTF-8
+ UTF-81.8
@@ -55,7 +57,7 @@
junitjunit
- 4.11
+ 4.12test
@@ -64,6 +66,12 @@
2.0.52-beta
+
+ org.hamcrest
+ hamcrest-junit
+ 2.0.0.0
+ test
+
diff --git a/jdbc-adapter/tools/increment_version.sh b/jdbc-adapter/tools/increment_version.sh
deleted file mode 100755
index 3a128798b..000000000
--- a/jdbc-adapter/tools/increment_version.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-if [ $# -ne 2 ]; then
- echo "Usage example: $0 0.0.1-SNAPSHOT 0.0.1"
- exit 1;
-fi
-
-BASEDIR=$(dirname "$0")
-PARENTDIR=$(dirname "$BASEDIR")
-OLD_VERSION="$1"
-OLD_VERSION="${OLD_VERSION//./\\.}"
-NEW_VERSION="$2"
-
-echo "Substitute $OLD_VERSION with $NEW_VERSION in $PARENTDIR"
-
-find $PARENTDIR -type f | xargs sed -i -e "s/$OLD_VERSION/$NEW_VERSION/g"
-
diff --git a/jdbc-adapter/tools/version.sh b/jdbc-adapter/tools/version.sh
new file mode 100755
index 000000000..e495d30ef
--- /dev/null
+++ b/jdbc-adapter/tools/version.sh
@@ -0,0 +1,100 @@
+#!/bin/bash
+readonly vs_jar_prefix='virtualschema-jdbc-adapter-dist'
+readonly jar_suffix='jar'
+readonly vs_jar_pattern="$vs_jar_prefix-.*\.$jar_suffix"
+readonly root_dir='virtual-schemas'
+readonly master_pom='jdbc-adapter/pom.xml'
+readonly file_find_regex='.*\.(md|yaml)'
+readonly script=$(basename $0)
+
+main() {
+ case "$1" in
+ help)
+ usage
+ ;;
+ verify)
+ verify
+ ;;
+ unify)
+ unify
+ ;;
+ *)
+ log "Unknown command: \"$1\""
+ log
+ usage
+ exit 1
+ ;;
+ esac
+}
+
+usage () {
+ log "Usage: $script help"
+ log " $script verify"
+ log " $script unify"
+ log
+ log "Run from the root directory \"$root_dir\""
+ log
+ log "This script can serve as a checkpoint using 'verify' as command. The exit value"
+ log "is zero when all detected version numbers match the ones on the master POM file."
+ log "It is non-zero if there is a mismatch."
+ log
+ log "Used with the command 'unify' this script rewrites all occurrences of divergent"
+ log "version numbers with the one found in the master POM file."
+}
+
+verify () {
+ prepare
+ verify_no_other_version_numbers "$version"
+}
+
+prepare() {
+ verify_current_directory "$root_dir"
+ readonly version=$(extract_product_version "$master_pom")
+ log "Found version $version in master file \"$master_pom\""
+}
+
+verify_current_directory() {
+ if [[ "$(basename $PWD)" != "$root_dir" ]]
+ then
+ log "Must be in root directory '$root_dir' to execute this script."
+ exit 1
+ fi
+}
+
+extract_product_version() {
+    grep -oP "product\.version>[^<]*<" "$1" | sed -e's/^.*>\s*//' -e's/\s*<$//'
+}
+
+log () {
+ echo "$@"
+}
+
+verify_no_other_version_numbers() {
+ find -type f -regextype posix-extended -regex "$file_find_regex" \
+ -exec grep -Hnor $vs_jar_pattern {} \; | grep -v "$1"
+ if [[ $? -eq 0 ]]
+ then
+ log
+ log "Verification failed."
+ log "Found version mismatches that need to be fixed. Try the following command"
+ log
+ log " $script unify"
+ exit 1
+ else
+ log "Verification successful."
+ fi
+}
+
+unify() {
+ prepare
+ update_documentation
+}
+
+update_documentation() {
+log "Checking all files matching \"$file_find_regex\""
+ find -type f -regextype posix-extended -regex "$file_find_regex" \
+ -exec echo "Processing \"{}\"" \; \
+ -exec sed -i s/"$vs_jar_pattern"/"$vs_jar_prefix-$version.$jar_suffix"/g {} \;
+}
+
+main "$@"
\ No newline at end of file
diff --git a/jdbc-adapter/virtualschema-common/pom.xml b/jdbc-adapter/virtualschema-common/pom.xml
index 67d4b5db7..45c948f1f 100644
--- a/jdbc-adapter/virtualschema-common/pom.xml
+++ b/jdbc-adapter/virtualschema-common/pom.xml
@@ -5,7 +5,7 @@
<groupId>com.exasol</groupId>
<artifactId>virtualschema-jdbc-adapter-main</artifactId>
- <version>1.0.2-SNAPSHOT</version>
+ <version>${product.version}</version>
</parent>
<artifactId>virtualschema-common</artifactId>
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter-dist/pom.xml b/jdbc-adapter/virtualschema-jdbc-adapter-dist/pom.xml
index 21d7b4ca0..9190e5df0 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter-dist/pom.xml
+++ b/jdbc-adapter/virtualschema-jdbc-adapter-dist/pom.xml
@@ -5,7 +5,7 @@
<groupId>com.exasol</groupId>
<artifactId>virtualschema-jdbc-adapter-main</artifactId>
- <version>1.0.2-SNAPSHOT</version>
+ <version>${product.version}</version>
</parent>
<artifactId>virtualschema-jdbc-adapter-dist</artifactId>
@@ -21,12 +21,12 @@
<dependency>
<groupId>com.exasol</groupId>
<artifactId>virtualschema-common</artifactId>
- <version>1.0.2-SNAPSHOT</version>
+ <version>${product.version}</version>
</dependency>
<dependency>
<groupId>com.exasol</groupId>
<artifactId>virtualschema-jdbc-adapter</artifactId>
- <version>1.0.2-SNAPSHOT</version>
+ <version>${product.version}</version>
</dependency>
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/pom.xml b/jdbc-adapter/virtualschema-jdbc-adapter/pom.xml
index 7c22a1729..f1b02ec8c 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/pom.xml
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/pom.xml
@@ -5,7 +5,7 @@
<groupId>com.exasol</groupId>
<artifactId>virtualschema-jdbc-adapter-main</artifactId>
- <version>1.0.2-SNAPSHOT</version>
+ <version>${product.version}</version>
</parent>
<artifactId>virtualschema-jdbc-adapter</artifactId>
@@ -95,7 +95,7 @@
<groupId>com.exasol</groupId>
<artifactId>virtualschema-common</artifactId>
- <version>1.0.2-SNAPSHOT</version>
+ <version>${product.version}</version>
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/AbstractSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/AbstractSqlDialect.java
index b5fc00efb..8c28c94cf 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/AbstractSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/AbstractSqlDialect.java
@@ -1,5 +1,14 @@
package com.exasol.adapter.dialects;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.util.EnumMap;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
import com.exasol.adapter.jdbc.ColumnAdapterNotes;
import com.exasol.adapter.jdbc.JdbcAdapterProperties;
import com.exasol.adapter.metadata.ColumnMetadata;
@@ -7,23 +16,21 @@
import com.exasol.adapter.sql.AggregateFunction;
import com.exasol.adapter.sql.ScalarFunction;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Types;
-import java.util.*;
-
/**
- * Abstract implementation of a dialect. We recommend that every dialect should extend this abstract class.
+ * Abstract implementation of a dialect. We recommend that every dialect should
+ * extend this abstract class.
*
- * TODO Find solution to handle unsupported types (e.g. exceeding varchar size). E.g. skip column or always truncate or add const-null column or throw error or make configurable
+ * TODO Find solution to handle unsupported types (e.g. exceeding varchar size).
+ * E.g. skip column or always truncate or add const-null column or throw error
+ * or make configurable
*/
public abstract class AbstractSqlDialect implements SqlDialect {
protected Set<ScalarFunction> omitParenthesesMap = new HashSet<>();
- private SqlDialectContext context;
+ private final SqlDialectContext context;
- public AbstractSqlDialect(SqlDialectContext context) {
+ public AbstractSqlDialect(final SqlDialectContext context) {
this.context = context;
}
@@ -33,73 +40,70 @@ public String getTableCatalogAndSchemaSeparator() {
}
@Override
- public MappedTable mapTable(ResultSet tables) throws SQLException {
-// for (int i=1; i<=tables.getMetaData().getColumnCount(); ++i) {
-// System.out.println(" - " + tables.getMetaData().getColumnName(i) + ": " + tables.getString(i));
-// }
+ public MappedTable mapTable(final ResultSet tables) throws SQLException {
String commentString = tables.getString("REMARKS");
if (commentString == null) {
commentString = "";
}
- String tableName = changeIdentifierCaseIfNeeded(tables.getString("TABLE_NAME"));
- return MappedTable.createMappedTable(tableName,tables.getString("TABLE_NAME"), commentString);
+ final String tableName = changeIdentifierCaseIfNeeded(tables.getString("TABLE_NAME"));
+ return MappedTable.createMappedTable(tableName, tables.getString("TABLE_NAME"), commentString);
}
@Override
- public ColumnMetadata mapColumn(ResultSet columns) throws SQLException {
- String colName = changeIdentifierCaseIfNeeded(columns.getString("COLUMN_NAME"));
- int jdbcType = columns.getInt("DATA_TYPE");
- int decimalScale = columns.getInt("DECIMAL_DIGITS");
- int precisionOrSize = columns.getInt("COLUMN_SIZE");
- int charOctedLength = columns.getInt("CHAR_OCTET_LENGTH");
- String typeName = columns.getString("TYPE_NAME");
- JdbcTypeDescription jdbcTypeDescription = new JdbcTypeDescription(jdbcType,
- decimalScale, precisionOrSize, charOctedLength, typeName);
+ public ColumnMetadata mapColumn(final ResultSet columns) throws SQLException {
+ final String colName = changeIdentifierCaseIfNeeded(columns.getString("COLUMN_NAME"));
+ final int jdbcType = columns.getInt("DATA_TYPE");
+ final int decimalScale = columns.getInt("DECIMAL_DIGITS");
+ final int precisionOrSize = columns.getInt("COLUMN_SIZE");
+ final int charOctedLength = columns.getInt("CHAR_OCTET_LENGTH");
+ final String typeName = columns.getString("TYPE_NAME");
+ final JdbcTypeDescription jdbcTypeDescription = new JdbcTypeDescription(jdbcType, decimalScale, precisionOrSize,
+ charOctedLength, typeName);
// Check if dialect wants to handle this row
- DataType colType = mapJdbcType(jdbcTypeDescription);
+ final DataType colType = mapJdbcType(jdbcTypeDescription);
// Nullable
boolean isNullable = true;
try {
- String nullable = columns.getString("IS_NULLABLE");
+ final String nullable = columns.getString("IS_NULLABLE");
if (nullable != null && nullable.toLowerCase().equals("no")) {
isNullable = false;
}
- } catch (SQLException ex) {
+ } catch (final SQLException ex) {
// ignore me
}
// Identity
-
+
boolean isIdentity = false;
try {
- String identity = columns.getString("IS_AUTOINCREMENT");
+ final String identity = columns.getString("IS_AUTOINCREMENT");
if (identity != null && identity.toLowerCase().equals("yes")) {
isIdentity = true;
}
- } catch (SQLException ex) {
+ } catch (final SQLException ex) {
// ignore me --some older JDBC drivers (Java 1.5) don't support IS_AUTOINCREMENT
}
// Default
String defaultValue = "";
try {
- String defaultString = columns.getString("COLUMN_DEF");
+ final String defaultString = columns.getString("COLUMN_DEF");
if (defaultString != null) {
defaultValue = defaultString;
}
- } catch (SQLException ex) {
+ } catch (final SQLException ex) {
// ignore me
}
// Comment
String comment = "";
try {
- String commentString = columns.getString("REMARKS");
+ final String commentString = columns.getString("REMARKS");
if (commentString != null && !commentString.isEmpty()) {
comment = commentString;
}
- } catch (SQLException ex) {
+ } catch (final SQLException ex) {
// ignore me
}
@@ -108,123 +112,132 @@ public ColumnMetadata mapColumn(ResultSet columns) throws SQLException {
if (columnTypeName == null) {
columnTypeName = "";
}
- String adapterNotes = ColumnAdapterNotes.serialize(new ColumnAdapterNotes(jdbcType, columnTypeName));;
+ final String adapterNotes = ColumnAdapterNotes.serialize(new ColumnAdapterNotes(jdbcType, columnTypeName));
+ ;
return new ColumnMetadata(colName, adapterNotes, colType, isNullable, isIdentity, defaultValue, comment);
}
- private static DataType getExaTypeFromJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ private static DataType getExaTypeFromJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType;
switch (jdbcTypeDescription.getJdbcType()) {
- case Types.TINYINT:
- case Types.SMALLINT:
- if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
- int precision = jdbcTypeDescription.getPrecisionOrSize() == 0 ? 9 : jdbcTypeDescription.getPrecisionOrSize();
- colType = DataType.createDecimal(precision, 0);
- } else {
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- }
- break;
- case Types.INTEGER:
- if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
- int precision = jdbcTypeDescription.getPrecisionOrSize() == 0 ? 18 : jdbcTypeDescription.getPrecisionOrSize();
- colType = DataType.createDecimal(precision, 0);
- } else {
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- }
- break;
- case Types.BIGINT: // Java type long
- if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
- int precision = jdbcTypeDescription.getPrecisionOrSize() == 0 ? 36 : jdbcTypeDescription.getPrecisionOrSize();
- colType = DataType.createDecimal(precision, 0);
- } else {
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- }
- break;
- case Types.DECIMAL:
+ case Types.TINYINT:
+ case Types.SMALLINT:
+ if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
+ final int precision = jdbcTypeDescription.getPrecisionOrSize() == 0 ? 9
+ : jdbcTypeDescription.getPrecisionOrSize();
+ colType = DataType.createDecimal(precision, 0);
+ } else {
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ }
+ break;
+ case Types.INTEGER:
+ if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
+ final int precision = jdbcTypeDescription.getPrecisionOrSize() == 0 ? 18
+ : jdbcTypeDescription.getPrecisionOrSize();
+ colType = DataType.createDecimal(precision, 0);
+ } else {
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ }
+ break;
+ case Types.BIGINT: // Java type long
+ if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
+ final int precision = jdbcTypeDescription.getPrecisionOrSize() == 0 ? 36
+ : jdbcTypeDescription.getPrecisionOrSize();
+ colType = DataType.createDecimal(precision, 0);
+ } else {
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ }
+ break;
+ case Types.DECIMAL:
- if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
- colType = DataType.createDecimal(jdbcTypeDescription.getPrecisionOrSize(), jdbcTypeDescription.getDecimalScale());
- } else {
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- }
- break;
- case Types.NUMERIC: // Java BigInteger
+ if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolDecimalPrecision) {
+ colType = DataType.createDecimal(jdbcTypeDescription.getPrecisionOrSize(),
+ jdbcTypeDescription.getDecimalScale());
+ } else {
colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- break;
- case Types.REAL:
- case Types.FLOAT:
- case Types.DOUBLE:
- colType = DataType.createDouble();
- break;
- case Types.VARCHAR:
- case Types.NVARCHAR:
- case Types.LONGVARCHAR:
- case Types.LONGNVARCHAR: {
- DataType.ExaCharset charset = (jdbcTypeDescription.getCharOctedLength() == jdbcTypeDescription.getPrecisionOrSize()) ? DataType.ExaCharset.ASCII : DataType.ExaCharset.UTF8;
+ }
+ break;
+ case Types.NUMERIC: // Java BigInteger
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+ case Types.REAL:
+ case Types.FLOAT:
+ case Types.DOUBLE:
+ colType = DataType.createDouble();
+ break;
+ case Types.VARCHAR:
+ case Types.NVARCHAR:
+ case Types.LONGVARCHAR:
+ case Types.LONGNVARCHAR: {
+ final DataType.ExaCharset charset = (jdbcTypeDescription.getCharOctedLength() == jdbcTypeDescription
+ .getPrecisionOrSize()) ? DataType.ExaCharset.ASCII : DataType.ExaCharset.UTF8;
+ if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolVarcharSize) {
+ final int precision = jdbcTypeDescription.getPrecisionOrSize() == 0 ? DataType.maxExasolVarcharSize
+ : jdbcTypeDescription.getPrecisionOrSize();
+ colType = DataType.createVarChar(precision, charset);
+ } else {
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, charset);
+ }
+ break;
+ }
+ case Types.CHAR:
+ case Types.NCHAR: {
+ final DataType.ExaCharset charset = (jdbcTypeDescription.getCharOctedLength() == jdbcTypeDescription
+ .getPrecisionOrSize()) ? DataType.ExaCharset.ASCII : DataType.ExaCharset.UTF8;
+ if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolCharSize) {
+ colType = DataType.createChar(jdbcTypeDescription.getPrecisionOrSize(), charset);
+ } else {
if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolVarcharSize) {
- int precision = jdbcTypeDescription.getPrecisionOrSize() == 0
- ? DataType.maxExasolVarcharSize : jdbcTypeDescription.getPrecisionOrSize();
- colType = DataType.createVarChar(precision, charset);
+ colType = DataType.createVarChar(jdbcTypeDescription.getPrecisionOrSize(), charset);
} else {
colType = DataType.createVarChar(DataType.maxExasolVarcharSize, charset);
}
- break;
- }
- case Types.CHAR:
- case Types.NCHAR: {
- DataType.ExaCharset charset = (jdbcTypeDescription.getCharOctedLength() == jdbcTypeDescription.getPrecisionOrSize()) ? DataType.ExaCharset.ASCII : DataType.ExaCharset.UTF8;
- if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolCharSize) {
- colType = DataType.createChar(jdbcTypeDescription.getPrecisionOrSize(), charset);
- } else {
- if (jdbcTypeDescription.getPrecisionOrSize() <= DataType.maxExasolVarcharSize) {
- colType = DataType.createVarChar(jdbcTypeDescription.getPrecisionOrSize(), charset);
- } else {
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, charset);
- }
- }
- break;
}
- case Types.DATE:
- colType = DataType.createDate();
- break;
- case Types.TIMESTAMP:
- colType = DataType.createTimestamp(false);
- break;
- case Types.TIME:
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- break;
- case Types.BIT:
- case Types.BOOLEAN:
- colType = DataType.createBool();
- break;
- case Types.BINARY:
- case Types.VARBINARY:
- case Types.LONGVARBINARY:
- case Types.BLOB:
- case Types.CLOB:
- case Types.NCLOB:
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- break;
- case Types.OTHER:
- case Types.JAVA_OBJECT:
- case Types.DISTINCT:
- case Types.STRUCT:
- case Types.ARRAY:
- case Types.REF:
- case Types.DATALINK:
- case Types.SQLXML:
- case Types.NULL:
- default:
- throw new RuntimeException("Unsupported data type (" + jdbcTypeDescription.getJdbcType() + ") found in source system, should never happen");
+ break;
}
- assert(colType != null);
+ case Types.DATE:
+ colType = DataType.createDate();
+ break;
+ case Types.TIMESTAMP:
+ colType = DataType.createTimestamp(false);
+ break;
+ case Types.TIME:
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+ case Types.BIT:
+ case Types.BOOLEAN:
+ colType = DataType.createBool();
+ break;
+ case Types.BINARY:
+ case Types.VARBINARY:
+ case Types.LONGVARBINARY:
+ case Types.BLOB:
+ case Types.CLOB:
+ case Types.NCLOB:
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+ case Types.OTHER:
+ case Types.JAVA_OBJECT:
+ case Types.DISTINCT:
+ case Types.STRUCT:
+ case Types.ARRAY:
+ case Types.REF:
+ case Types.DATALINK:
+ case Types.SQLXML:
+ case Types.NULL:
+ default:
+ throw new RuntimeException("Unsupported data type (" + jdbcTypeDescription.getJdbcType()
+ + ") found in source system, should never happen");
+ }
+ assert (colType != null);
return colType;
}
- public String changeIdentifierCaseIfNeeded(String identifier) {
+ public String changeIdentifierCaseIfNeeded(final String identifier) {
if (getQuotedIdentifierHandling() == getUnquotedIdentifierHandling()) {
if (getQuotedIdentifierHandling() != IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE) {
- // Completely case-insensitive. We can store everything uppercase to allow working with unquoted identifiers in EXASOL
+ // Completely case-insensitive. We can store everything uppercase to allow
+ // working with unquoted identifiers in EXASOL
return identifier.toUpperCase();
}
}
@@ -232,12 +245,12 @@ public String changeIdentifierCaseIfNeeded(String identifier) {
}
@Override
- public boolean omitParentheses(ScalarFunction function) {
- return omitParenthesesMap.contains(function);
+ public boolean omitParentheses(final ScalarFunction function) {
+ return this.omitParenthesesMap.contains(function);
}
@Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new SqlGenerationVisitor(this, context);
}
@@ -245,7 +258,7 @@ public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context
public abstract DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcType) throws SQLException;
@Override
- public final DataType mapJdbcType(JdbcTypeDescription jdbcType) throws SQLException {
+ public final DataType mapJdbcType(final JdbcTypeDescription jdbcType) throws SQLException {
DataType type = dialectSpecificMapJdbcType(jdbcType);
if (type == null) {
type = getExaTypeFromJdbcType(jdbcType);
@@ -260,7 +273,7 @@ public Map getScalarFunctionAliases() {
@Override
public Map<AggregateFunction, String> getAggregateFunctionAliases() {
- Map<AggregateFunction, String> aliases = new HashMap<>();
+ final Map<AggregateFunction, String> aliases = new HashMap<>();
aliases.put(AggregateFunction.GEO_INTERSECTION_AGGREGATE, "ST_INTERSECTION");
aliases.put(AggregateFunction.GEO_UNION_AGGREGATE, "ST_UNION");
return aliases;
@@ -268,7 +281,7 @@ public Map getAggregateFunctionAliases() {
@Override
public Map<ScalarFunction, String> getBinaryInfixFunctionAliases() {
- Map<ScalarFunction, String> aliases = new HashMap<>();
+ final Map<ScalarFunction, String> aliases = new HashMap<>();
aliases.put(ScalarFunction.ADD, "+");
aliases.put(ScalarFunction.SUB, "-");
aliases.put(ScalarFunction.MULT, "*");
@@ -278,17 +291,18 @@ public Map getBinaryInfixFunctionAliases() {
@Override
public Map<ScalarFunction, String> getPrefixFunctionAliases() {
- Map<ScalarFunction, String> aliases = new HashMap<>();
+ final Map<ScalarFunction, String> aliases = new HashMap<>();
aliases.put(ScalarFunction.NEG, "-");
return aliases;
}
public SqlDialectContext getContext() {
- return context;
+ return this.context;
}
- public void handleException(SQLException exception,
- JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException {
+ @Override
+ public void handleException(final SQLException exception,
+ final JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException {
throw exception;
};
}
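To make the default JDBC-to-Exasol type mapping above easier to follow, here is a condensed, standalone sketch of the same idea. It is not the adapter's DataType API: plain strings stand in for the factory calls, the precision-zero defaults are omitted, and the limits 36 and 2,000,000 reflect what I understand maxExasolDecimalPrecision and maxExasolVarcharSize to be.

import java.sql.Types;

// Standalone sketch of the fallback mapping; strings stand in for
// DataType.createDecimal(), createDouble(), createVarChar() and so on.
public class JdbcTypeMappingSketch {
    static String mapJdbcType(final int jdbcType, final int precision, final int scale) {
        switch (jdbcType) {
        case Types.TINYINT:
        case Types.SMALLINT:
        case Types.INTEGER:
        case Types.BIGINT:
            // Integer-like types become DECIMAL(p, 0) while p fits, otherwise VARCHAR.
            return (precision <= 36) ? "DECIMAL(" + precision + ",0)" : "VARCHAR(2000000) UTF8";
        case Types.DECIMAL:
            return (precision <= 36) ? "DECIMAL(" + precision + "," + scale + ")" : "VARCHAR(2000000) UTF8";
        case Types.REAL:
        case Types.FLOAT:
        case Types.DOUBLE:
            return "DOUBLE";
        case Types.DATE:
            return "DATE";
        case Types.TIMESTAMP:
            return "TIMESTAMP";
        case Types.BIT:
        case Types.BOOLEAN:
            return "BOOLEAN";
        default:
            // Character and LOB types fall back to VARCHAR; the real implementation
            // additionally throws for types it does not support at all.
            return "VARCHAR(2000000) UTF8";
        }
    }

    public static void main(final String[] args) {
        System.out.println(mapJdbcType(Types.INTEGER, 10, 0)); // DECIMAL(10,0)
        System.out.println(mapJdbcType(Types.DOUBLE, 0, 0));   // DOUBLE
    }
}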
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialect.java
index 249a0cb5f..c6394ae1f 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialect.java
@@ -1,5 +1,10 @@
package com.exasol.adapter.dialects;
+import java.sql.DatabaseMetaData;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Map;
+
import com.exasol.adapter.capabilities.Capabilities;
import com.exasol.adapter.jdbc.JdbcAdapterProperties;
import com.exasol.adapter.metadata.ColumnMetadata;
@@ -7,13 +12,9 @@
import com.exasol.adapter.sql.AggregateFunction;
import com.exasol.adapter.sql.ScalarFunction;
-import java.sql.DatabaseMetaData;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.Map;
-
/**
- * Interface for the implementation of a SQL dialect. All data source specific logic is specified here.
+ * Interface for the implementation of a SQL dialect. All data source specific
+ * logic is specified here.
*
*
* The responsibilities of a dialect can be divided into 3 areas:
@@ -21,43 +22,57 @@
*
*
* 1. Capabilities:
- * The dialect defines the set of supported capabilities. See {@link #getCapabilities()} for details.
+ * The dialect defines the set of supported capabilities. See
+ * {@link #getCapabilities()} for details.
*
*
*
* 2. Data Type Mapping:
- * The dialect defines, how the tables in the data source are mapped to EXASOL virtual tables.
- * In particular the data types have to be mapped to EXASOL data types. See {@link #mapJdbcType(JdbcTypeDescription)} for details.
+ * The dialect defines, how the tables in the data source are mapped to EXASOL
+ * virtual tables. In particular the data types have to be mapped to EXASOL data
+ * types. See {@link #mapJdbcType(JdbcTypeDescription)} for details.
*
*
*
* 3. SQL Generation:
* The dialect defines how to generate SQL statements in the data source syntax.
- * The dialect provides several methods to customize quoting, case-sensitivity, function name aliases,
- * and other aspects of the syntax.
+ * The dialect provides several methods to customize quoting, case-sensitivity,
+ * function name aliases, and other aspects of the syntax.
*
- * The actual SQL generation is done by the separate class {@link SqlGenerationVisitor} (it uses the visitor pattern).
- * For things like quoting and case-sensitivity, the SQL generation visitor will ask the dialect how to handle them.
+ * The actual SQL generation is done by the separate class
+ * {@link SqlGenerationVisitor} (it uses the visitor pattern). For things like
+ * quoting and case-sensitivity, the SQL generation visitor will ask the dialect
+ * how to handle them.
*
- * If your dialect has a special SQL syntax which cannot be realized using the methods of {@link SqlDialect}, then you can
- * implement your own SQL generation visitor which extends {@link SqlGenerationVisitor}.
- * Your custom visitor must then be returned by {@link #getSqlGenerationVisitor(SqlGenerationContext)}.
- * For an example look at {@link com.exasol.adapter.dialects.impl.OracleSqlGenerationVisitor}.
+ * If your dialect has a special SQL syntax which cannot be realized using the
+ * methods of {@link SqlDialect}, then you can implement your own SQL generation
+ * visitor which extends {@link SqlGenerationVisitor}. Your custom visitor must
+ * then be returned by {@link #getSqlGenerationVisitor(SqlGenerationContext)}.
+ * For an example look at
+ * {@link com.exasol.adapter.dialects.impl.OracleSqlGenerationVisitor}.
*
*
* Notes for developing a dialect
*
- *
Create a class for your integration test, with the suffix IT.java.
+ *
+ * Create a class for your integration test, with the suffix IT.java.
+ *
*
- *
We recommend to extend the abstract class {@link AbstractSqlDialect} instead of directly implementing {@link SqlDialect}.
+ *
+ * We recommend to extend the abstract class {@link AbstractSqlDialect} instead
+ * of directly implementing {@link SqlDialect}.
+ *
*/
public interface SqlDialect {
-
+
/**
- * @return the name that can be used to choose this dialect (user can give this name). Case insensitive.
+ * @return the name that can be used to choose this dialect (user can give this
+ * name). Case insensitive.
*/
- String getPublicName();
-
+ public static String getPublicName() {
+ return "SqlDialect interface";
+ };
+
//
// CAPABILITIES
//
@@ -65,146 +80,178 @@ public interface SqlDialect {
/**
* @return The set of capabilities supported by this SQL-Dialect
*/
- Capabilities getCapabilities();
+ public Capabilities getCapabilities();
//
// MAPPING OF METADATA: CATALOGS, SCHEMAS, TABLES AND DATA TYPES
//
- enum SchemaOrCatalogSupport {
- SUPPORTED,
- UNSUPPORTED,
- UNKNOWN
+ public enum SchemaOrCatalogSupport {
+ SUPPORTED, UNSUPPORTED, UNKNOWN
}
/**
- * @return True, if the database "truly" supports the concept of JDBC catalogs (not just a single dummy catalog). If true, the user must specify the catalog.
- * False, if the database does not have a catalog concept, e.g. if it has no catalogs, or a single dummy catalog, or even if it throws an Exception for {@link DatabaseMetaData#getCatalogs()}. If false, the user must not specify the catalog.
+ * @return True, if the database "truly" supports the concept of JDBC catalogs
+ * (not just a single dummy catalog). If true, the user must specify the
+ * catalog. False, if the database does not have a catalog concept, e.g.
+ * if it has no catalogs, or a single dummy catalog, or even if it
+ * throws an Exception for {@link DatabaseMetaData#getCatalogs()}. If
+ * false, the user must not specify the catalog.
*/
- SchemaOrCatalogSupport supportsJdbcCatalogs();
+ public SchemaOrCatalogSupport supportsJdbcCatalogs();
/**
- * @return True, if the database "truly" supports the concept of JDBC schemas (not just a single dummy schema). If true, the user must specify the schema.
- * False, if the database does not have a schema concept, e.g. if it has no schemas, or a single dummy schemas, or even if it throws an Exception for {@link DatabaseMetaData#getSchemas()}. If false, the user must not specify the schema.
+ * @return True, if the database "truly" supports the concept of JDBC schemas
+ * (not just a single dummy schema). If true, the user must specify the
+ * schema. False, if the database does not have a schema concept, e.g.
+ * if it has no schemas, or a single dummy schemas, or even if it throws
+ * an Exception for {@link DatabaseMetaData#getSchemas()}. If false, the
+ * user must not specify the schema.
*/
- SchemaOrCatalogSupport supportsJdbcSchemas();
+ public SchemaOrCatalogSupport supportsJdbcSchemas();
- class MappedTable {
+ public class MappedTable {
private boolean isIgnored = false;
private String tableName = "";
private String originalName = "";
private String tableComment = "";
- public static MappedTable createMappedTable(String tableName, String originalName, String tableComment) {
- MappedTable t = new MappedTable();
+
+ public static MappedTable createMappedTable(final String tableName, final String originalName,
+ final String tableComment) {
+ final MappedTable t = new MappedTable();
t.isIgnored = false;
t.tableName = tableName;
t.originalName = originalName;
t.tableComment = tableComment;
return t;
}
+
public static MappedTable createIgnoredTable() {
- MappedTable t = new MappedTable();
+ final MappedTable t = new MappedTable();
t.isIgnored = true;
return t;
}
- public boolean isIgnored() { return isIgnored; }
- public String getTableName() { return tableName; }
- public String getOriginalTableName () { return originalName;}
- public String getTableComment() { return tableComment; }
+
+ public boolean isIgnored() {
+ return isIgnored;
+ }
+
+ public String getTableName() {
+ return tableName;
+ }
+
+ public String getOriginalTableName() {
+ return originalName;
+ }
+
+ public String getTableComment() {
+ return tableComment;
+ }
}
/**
- * @param tables A jdbc Resultset for the {@link DatabaseMetaData#getTables(String, String, String, String[])} call, pointing to the current table.
+ * @param tables A jdbc Resultset for the
+ * {@link DatabaseMetaData#getTables(String, String, String, String[])}
+ * call, pointing to the current table.
* @return An instance of {@link MappedTable} describing the mapped table.
*/
- MappedTable mapTable(ResultSet tables) throws SQLException;
+ public MappedTable mapTable(ResultSet tables) throws SQLException;
/**
- * @param columns A jdbc Resultset for the {@link DatabaseMetaData#getColumns(String, String, String, String)} call, pointing to the current column.
+ * @param columns A jdbc Resultset for the
+ * {@link DatabaseMetaData#getColumns(String, String, String, String)}
+ * call, pointing to the current column.
* @return The mapped column
* @throws SQLException
*/
- ColumnMetadata mapColumn(ResultSet columns) throws SQLException;
-
+ public ColumnMetadata mapColumn(ResultSet columns) throws SQLException;
/**
- * Maps the jdbc datatype information of a column to the EXASOL datatype of the column.
- * The dialect can also return null, so that the default mapping occurs.
- * This method will be called by {@link #mapJdbcType(JdbcTypeDescription)} in the default implementation.
+ * Maps the jdbc datatype information of a column to the EXASOL datatype of the
+ * column. The dialect can also return null, so that the default mapping occurs.
+ * This method will be called by {@link #mapJdbcType(JdbcTypeDescription)} in
+ * the default implementation.
*
* @param jdbcType A jdbc type description
- * @return Either null, if the default datatype mapping shall be applied,
- * or the datatype which the current column shall be mapped to.
- * This datatype will be used as the datatype in the virtual table and in the pushdown sql.
+ * @return Either null, if the default datatype mapping shall be applied, or the
+ * datatype which the current column shall be mapped to. This datatype
+ * will be used as the datatype in the virtual table and in the pushdown
+ * sql.
*
*/
- DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcType) throws SQLException;
+ public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcType) throws SQLException;
/**
- * Maps the jdbc datatype information of a column to the EXASOL datatype of the column.
- * This method will be called by {@link #mapColumn(ResultSet)} in the default implementation.
+ * Maps the jdbc datatype information of a column to the EXASOL datatype of the
+ * column. This method will be called by {@link #mapColumn(ResultSet)} in the
+ * default implementation.
*
* @param jdbcType A jdbc type description
- * @return Either null, if the default datatype mapping shall be applied,
- * or the datatype which the current column shall be mapped to.
- * This datatype will be used as the datatype in the virtual table and in the pushdown sql.
+ * @return Either null, if the default datatype mapping shall be applied, or the
+ * datatype which the current column shall be mapped to. This datatype
+ * will be used as the datatype in the virtual table and in the pushdown
+ * sql.
*
*/
- DataType mapJdbcType(JdbcTypeDescription jdbcType) throws SQLException;
+ public DataType mapJdbcType(JdbcTypeDescription jdbcType) throws SQLException;
//
// SQL GENERATION
//
-
+
/**
* How unquoted or quoted identifiers in queries or DDLs are handled
*/
- enum IdentifierCaseHandling {
- INTERPRET_AS_LOWER,
- INTERPRET_AS_UPPER,
- INTERPRET_CASE_SENSITIVE
+ public enum IdentifierCaseHandling {
+ INTERPRET_AS_LOWER, INTERPRET_AS_UPPER, INTERPRET_CASE_SENSITIVE
}
/**
* @return How to handle case sensitivity of unquoted identifiers
*/
- IdentifierCaseHandling getUnquotedIdentifierHandling();
+ public IdentifierCaseHandling getUnquotedIdentifierHandling();
/**
- * @return How to handle case sensitivity of quoted identifiers
+ * @return How to handle case sensitivity of quoted identifiers
*/
- IdentifierCaseHandling getQuotedIdentifierHandling();
+ public IdentifierCaseHandling getQuotedIdentifierHandling();
/**
- * @param identifier The name of an identifier (table or column). If identifiers are case sensitive, the identifier must be passed case-sensitive of course.
+ * @param identifier The name of an identifier (table or column). If identifiers
+ * are case sensitive, the identifier must be passed
+ * case-sensitive of course.
* @return the quoted identifier, also if quoting is not required
*/
- String applyQuote(String identifier);
+ public String applyQuote(String identifier);
/**
* @param identifier The name of an identifier (table or column).
- * @return the quoted identifier, if this name requires quoting, or the unquoted identifier, if no quoting is required.
+ * @return the quoted identifier, if this name requires quoting, or the unquoted
+ * identifier, if no quoting is required.
*/
- String applyQuoteIfNeeded(String identifier);
-
+ public String applyQuoteIfNeeded(String identifier);
+
/**
- * @return True if table names must be catalog-qualified, e.g. SELECT * FROM MY_CATALOG.MY_TABLE, otherwise false.
- * Can be combined with {@link #requiresSchemaQualifiedTableNames(SqlGenerationContext)}
+ * @return True if table names must be catalog-qualified, e.g. SELECT * FROM
+ * MY_CATALOG.MY_TABLE, otherwise false. Can be combined with
+ * {@link #requiresSchemaQualifiedTableNames(SqlGenerationContext)}
*/
- boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context);
+ public boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context);
/**
- * @return True if table names must be schema-qualified, e.g. SELECT * FROM MY_SCHEMA.MY_TABLE, otherwise false.
- * Can be combined with {@link #requiresCatalogQualifiedTableNames(SqlGenerationContext)}
+ * @return True if table names must be schema-qualified, e.g. SELECT * FROM
+ * MY_SCHEMA.MY_TABLE, otherwise false. Can be combined with
+ * {@link #requiresCatalogQualifiedTableNames(SqlGenerationContext)}
*/
- boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context);
+ public boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context);
/**
- * @return String that is used to separate the catalog and/or the schema from the tablename. In many cases this is a dot.
+ * @return String that is used to separate the catalog and/or the schema from
+ * the tablename. In many cases this is a dot.
*/
- String getTableCatalogAndSchemaSeparator();
+ public String getTableCatalogAndSchemaSeparator();
- enum NullSorting {
+ public enum NullSorting {
// NULL values are sorted at the end regardless of sort order
NULLS_SORTED_AT_END,
@@ -219,61 +266,73 @@ enum NullSorting {
}
/**
- * @return The behavior how nulls are sorted in an ORDER BY. If the null sorting behavior is
- * not {@link NullSorting#NULLS_SORTED_AT_END} and your dialects has the order by
- * capability but you cannot explicitly specify NULLS FIRST or NULLS LAST, then you must
- * overwrite the SQL generation to somehow obtain the desired semantic.
+ * @return The behavior how nulls are sorted in an ORDER BY. If the null sorting
+ * behavior is not {@link NullSorting#NULLS_SORTED_AT_END} and your
+ * dialects has the order by capability but you cannot explicitly
+ * specify NULLS FIRST or NULLS LAST, then you must overwrite the SQL
+ * generation to somehow obtain the desired semantic.
*/
- NullSorting getDefaultNullSorting();
+ public NullSorting getDefaultNullSorting();
/**
* @param value a string literal value
- * @return the string literal in valid SQL syntax, e.g. "value" becomes "'value'". This might include escaping
+ * @return the string literal in valid SQL syntax, e.g. "value" becomes
+ * "'value'". This might include escaping
*/
- String getStringLiteral(String value);
+ public String getStringLiteral(String value);
/**
- * @return aliases for scalar functions. To be defined for each function that has the same semantic but a different name in the data source.
- * If an alias for the same function is defined in {@link #getBinaryInfixFunctionAliases()}, than the infix alias will be ignored.
+ * @return aliases for scalar functions. To be defined for each function that
+ * has the same semantic but a different name in the data source. If an
+ * alias for the same function is defined in
+ * {@link #getBinaryInfixFunctionAliases()}, then the infix alias will
+ * be ignored.
*/
- Map<ScalarFunction, String> getScalarFunctionAliases();
+ public Map<ScalarFunction, String> getScalarFunctionAliases();
/**
- * @return Defines which binary scalar functions should be treated infix and how. E.g. a map entry ("ADD", "+") causes a function call "ADD(1,2)" to be written as "1 + 2".
+ * @return Defines which binary scalar functions should be treated infix and
+ * how. E.g. a map entry ("ADD", "+") causes a function call "ADD(1,2)"
+ * to be written as "1 + 2".
*/
- Map<ScalarFunction, String> getBinaryInfixFunctionAliases();
+ public Map<ScalarFunction, String> getBinaryInfixFunctionAliases();
/**
- * @return Defines which unary scalar functions should be treated prefix and how. E.g. a map entry ("NEG", "-") causes a function call "NEG(2)" to be written as "-2".
+ * @return Defines which unary scalar functions should be treated prefix and
+ * how. E.g. a map entry ("NEG", "-") causes a function call "NEG(2)" to
+ * be written as "-2".
*/
- Map<ScalarFunction, String> getPrefixFunctionAliases();
+ public Map<ScalarFunction, String> getPrefixFunctionAliases();
/**
- * @return aliases for aggregate functions. To be defined for each function that has the same semantic but a different name in the data source.
+ * @return aliases for aggregate functions. To be defined for each function that
+ * has the same semantic but a different name in the data source.
*/
- Map<AggregateFunction, String> getAggregateFunctionAliases();
+ public Map<AggregateFunction, String> getAggregateFunctionAliases();
/**
- * @return Returns true for functions with zero arguments if they do not require parentheses (e.g. SYSDATE).
+ * @return Returns true for functions with zero arguments if they do not require
+ * parentheses (e.g. SYSDATE).
*/
- boolean omitParentheses(ScalarFunction function);
+ public boolean omitParentheses(ScalarFunction function);
/**
- * Returns the Visitor to be used for SQL generation.
- * Use this only if you need to, i.e. if you have requirements which cannot
- * be realized via the other methods provided by {@link SqlDialect}.
+ * Returns the Visitor to be used for SQL generation. Use this only if you need
+ * to, i.e. if you have requirements which cannot be realized via the other
+ * methods provided by {@link SqlDialect}.
*
* @param context context information for the sql generation visitor
* @return the SqlGenerationVisitor to be used for this dialect
*/
- SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context);
+ public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context);
/**
* Allows dialect specific handling of different exceptions.
- * @param exception the catched exception
+ *
+ * @param exception the caught exception
* @param exceptionMode exception mode of the adapter
* @throws SQLException
*/
- void handleException(SQLException exception,
- JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException;
+ public void handleException(SQLException exception, JdbcAdapterProperties.ExceptionHandlingMode exceptionMode)
+ throws SQLException;
}
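The infix alias contract described in the Javadoc above can be illustrated with a small standalone sketch. The map below mimics entries such as ADD -> "+" from getBinaryInfixFunctionAliases(), using plain strings instead of the ScalarFunction enum; the rendering logic is simplified for illustration.

import java.util.HashMap;
import java.util.Map;

// Standalone sketch: an alias map entry like ADD -> "+" turns ADD(1, 2) into "1 + 2".
public class InfixAliasSketch {
    public static void main(final String[] args) {
        final Map<String, String> infixAliases = new HashMap<>();
        infixAliases.put("ADD", "+");
        infixAliases.put("SUB", "-");
        infixAliases.put("MULT", "*");

        final String function = "ADD";
        final String left = "1";
        final String right = "2";
        final String sql = infixAliases.containsKey(function)
                ? left + " " + infixAliases.get(function) + " " + right
                : function + "(" + left + ", " + right + ")";
        System.out.println(sql); // prints: 1 + 2
    }
}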
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialects.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialects.java
index f8603e467..c444bf9b6 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialects.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialects.java
@@ -1,74 +1,159 @@
package com.exasol.adapter.dialects;
-import com.exasol.adapter.dialects.impl.*;
-
-import java.util.List;
+import java.io.IOException;
+import java.io.InputStream;
+import java.lang.reflect.InvocationTargetException;
+import java.util.HashSet;
+import java.util.Optional;
+import java.util.Properties;
+import java.util.Set;
+import java.util.logging.Logger;
+import java.util.stream.Collectors;
/**
- * Manages a set of supported SqlDialects.
+ * This class implements a registry for supported SQL dialects.
*/
-public class SqlDialects {
+public final class SqlDialects {
+ public static final String SQL_DIALECTS_PROPERTY = "com.exasol.adapter.dialects.supported";
+ private static final String GET_PUBLIC_NAME_METHOD = "getPublicName";
+ private static final String DIALECTS_PROPERTIES_FILE = "sql_dialects.properties";
+ private static SqlDialects instance = null;
+ private final Set<Class<? extends SqlDialect>> supportedDialects = new HashSet<>();
+ private static final Logger LOGGER = Logger.getLogger(SqlDialects.class.getName());
- private List supportedDialects;
+ /**
+ * Get an instance of the {@link SqlDialects} class
+ *
+ * @return the instance
+ */
+ public static synchronized SqlDialects getInstance() {
+ if (instance == null) {
+ instance = new SqlDialects();
+ instance.registerDialectsFromProperty();
+ }
+ return instance;
+ }
- private List<Class<? extends SqlDialect>> dialects;
+ private SqlDialects() {
+ // prevent instantiation outside of singleton.
+ }
- public SqlDialects(List supportedDialects) {
- this.supportedDialects = supportedDialects;
+ private void registerDialectsFromProperty() {
+ final String sqlDialects = (System.getProperty(SQL_DIALECTS_PROPERTY) == null)
+ ? readDialectListFromPropertyFile()
+ : System.getProperty(SQL_DIALECTS_PROPERTY);
+ registerDialects(sqlDialects);
}
- public boolean isSupported(String dialectName) {
- for (String curName : supportedDialects) {
- if (curName.equalsIgnoreCase(dialectName)) {
- return true;
- }
+ private String readDialectListFromPropertyFile() {
+ final Properties properties = new Properties();
+ final ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
+ try (final InputStream stream = contextClassLoader.getResourceAsStream(DIALECTS_PROPERTIES_FILE)) {
+ properties.load(stream);
+ return properties.getProperty(SQL_DIALECTS_PROPERTY);
+ } catch (final IOException e) {
+ throw new SqlDialectsRegistryException(
+ "Unable to load list of SQL dialect from " + DIALECTS_PROPERTIES_FILE);
}
- return false;
}
- public SqlDialect getDialectByName(String name, SqlDialectContext context) {
- if (name.equalsIgnoreCase(GenericSqlDialect.NAME)) {
- return new GenericSqlDialect(context);
- } else if (name.equalsIgnoreCase(ExasolSqlDialect.NAME)) {
- return new ExasolSqlDialect(context);
- } else if (name.equalsIgnoreCase(HiveSqlDialect.NAME)) {
- return new HiveSqlDialect(context);
- } else if (name.equalsIgnoreCase(ImpalaSqlDialect.NAME)) {
- return new ImpalaSqlDialect(context);
- } else if (name.equalsIgnoreCase(MysqlSqlDialect.NAME)) {
- return new MysqlSqlDialect(context);
- } else if (name.equalsIgnoreCase(OracleSqlDialect.NAME)) {
- return new OracleSqlDialect(context);
- } else if (name.equalsIgnoreCase(TeradataSqlDialect.NAME)) {
- return new TeradataSqlDialect(context);
- } else if (name.equalsIgnoreCase(RedshiftSqlDialect.NAME)) {
- return new RedshiftSqlDialect(context);
- } else if (name.equalsIgnoreCase(DB2SqlDialect.NAME)) {
- return new DB2SqlDialect(context);
- } else if (name.equalsIgnoreCase(SqlServerSqlDialect.NAME)) {
- return new SqlServerSqlDialect(context);
- } else if (name.equalsIgnoreCase(PostgreSQLSqlDialect.NAME)) {
- return new PostgreSQLSqlDialect(context);
+ private void registerDialects(final String sqlDialects) {
+ for (final String className : sqlDialects.split("\\s*,\\s*")) {
+ registerDialect(className);
}
- else {
- return null;
+ }
+
+ private void registerDialect(final String className) {
+ try {
+ @SuppressWarnings("unchecked")
+ final Class<? extends SqlDialect> dialect = (Class<? extends SqlDialect>) Class.forName(className);
+ this.supportedDialects.add(dialect);
+ LOGGER.fine(() -> "Registered SQL dialect implementation class \"" + className + "\"");
+ } catch (final ClassNotFoundException e) {
+ throw new SqlDialectsRegistryException("Unable to find SQL dialect implementation class " + className);
}
}
- public List<Class<? extends SqlDialect>> getDialects() {
- return dialects;
+ /**
+ * Check whether a dialect is supported
+ *
+ * @param wantedDialectName the name of the dialect
+ * @return true if the dialect is supported
+ */
+ public boolean isSupported(final String wantedDialectName) {
+ return this.supportedDialects.stream().anyMatch(dialect -> {
+ return getNameForDialectClass(dialect).equalsIgnoreCase(wantedDialectName);
+ });
}
- public String getDialectsString() {
- StringBuilder builder = new StringBuilder();
- boolean first = true;
- for (String curName : supportedDialects) {
- if (!first) {
- builder.append(", ");
- }
- builder.append(curName);
- first = false;
+ private String getNameForDialectClass(final Class<? extends SqlDialect> dialect) {
+ String dialectName;
+ try {
+ dialectName = (String) dialect.getMethod(GET_PUBLIC_NAME_METHOD).invoke(null);
+ } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException | NoSuchMethodException
+ | SecurityException e) {
+ throw new SqlDialectsRegistryException(
+ "Unable to invoke " + GET_PUBLIC_NAME_METHOD + " trying to determine SQL dialect name");
+ }
+ return dialectName;
+ }
+
+ /**
+ * Finds an SQL dialect by its name and hands back an instance of the according
+ * dialect implementation.
+ *
+ * @param name name of the dialect to be instantiated
+ * @param context the context to be handed to the instance.
+ * @return a new instance of the dialect
+ *
+ * @throws SqlDialectsRegistryException if the dialect is not found or cannot be
+ * instantiated.
+ */
+ public SqlDialect getDialectInstanceForNameWithContext(final String name, final SqlDialectContext context)
+ throws SqlDialectsRegistryException {
+ final Optional<Class<? extends SqlDialect>> foundDialect = findDialectByName(name);
+ return instantiateDialect(name, foundDialect, context);
+ }
+
+ private Optional<Class<? extends SqlDialect>> findDialectByName(final String name) {
+ final Optional<Class<? extends SqlDialect>> foundDialect = this.supportedDialects.stream()
+ .filter(dialect -> getNameForDialectClass(dialect).equalsIgnoreCase(name)) //
+ .findFirst();
+ if (!foundDialect.isPresent()) {
+ throw new SqlDialectsRegistryException("SQL dialect \"" + name + "\" not found in the dialects registry.");
+ }
+ return foundDialect;
+ }
+
+ private SqlDialect instantiateDialect(final String name, final Optional<Class<? extends SqlDialect>> foundDialect,
+ final SqlDialectContext context) throws SqlDialectsRegistryException {
+ SqlDialect instance;
+ try {
+ final Class<? extends SqlDialect> dialectClass = foundDialect.get();
+ instance = dialectClass.getConstructor(SqlDialectContext.class).newInstance(context);
+ } catch (InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException
+ | NoSuchMethodException | SecurityException e) {
+ throw new SqlDialectsRegistryException("Unable to instanciate SQL dialect \"" + name + "\".", e);
}
- return builder.toString();
+ return instance;
+ }
+
+ /**
+ * Get a comma-separated, alphabetically sorted list of supported dialects.
+ *
+ * @return comma-separated list of dialect names.
+ */
+ public String getDialectsString() {
+ return this.supportedDialects.stream() //
+ .map(dialect -> getNameForDialectClass(dialect)) //
+ .sorted() //
+ .collect(Collectors.joining(", "));
+ }
+
+ /**
+ * Delete the singleton instance (necessary for tests)
+ */
+ public static void deleteInstance() {
+ instance = null;
}
-}
+}
\ No newline at end of file
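A hedged usage sketch of the new registry: the dialect implementation classes are read from the com.exasol.adapter.dialects.supported system property, falling back to sql_dialects.properties on the classpath. The dialect class listed below and the name "EXASOL" are assumptions for illustration, and the snippet only compiles with the adapter classes on the classpath.

import com.exasol.adapter.dialects.SqlDialects;

// Illustrative only: assumes com.exasol.adapter.dialects.impl.ExasolSqlDialect is
// available and reports a public name such as "EXASOL" via its static getPublicName().
public class DialectRegistryUsageSketch {
    public static void main(final String[] args) {
        System.setProperty(SqlDialects.SQL_DIALECTS_PROPERTY,
                "com.exasol.adapter.dialects.impl.ExasolSqlDialect");
        final SqlDialects dialects = SqlDialects.getInstance();
        System.out.println(dialects.getDialectsString());
        System.out.println(dialects.isSupported("EXASOL"));
    }
}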
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialectsRegistryException.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialectsRegistryException.java
new file mode 100644
index 000000000..1b1792662
--- /dev/null
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlDialectsRegistryException.java
@@ -0,0 +1,27 @@
+package com.exasol.adapter.dialects;
+
+/**
+ * This class provides runtime exceptions for the SQL dialects registry.
+ */
+public class SqlDialectsRegistryException extends RuntimeException {
+ private static final long serialVersionUID = -5603866366083182805L;
+
+ /**
+ * Create a new instance of the {@link SqlDialectsRegistryException}
+ *
+ * @param message message to be displayed
+ */
+ public SqlDialectsRegistryException(final String message) {
+ super(message);
+ }
+
+ /**
+ * Create a new instance of the {@link SqlDialectsRegistryException}
+ *
+ * @param message message to be displayed
+ * @param cause root cause
+ */
+ public SqlDialectsRegistryException(final String message, final Throwable cause) {
+ super(message, cause);
+ }
+}
\ No newline at end of file
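For illustration, a minimal sketch of how the cause-preserving constructor above is typically used; the class name being loaded is hypothetical.

import com.exasol.adapter.dialects.SqlDialectsRegistryException;

// Hypothetical usage of SqlDialectsRegistryException wrapping a lower-level cause.
public class RegistryExceptionSketch {
    public static void main(final String[] args) {
        try {
            Class.forName("com.example.UnknownDialect"); // deliberately missing class
        } catch (final ClassNotFoundException exception) {
            throw new SqlDialectsRegistryException(
                    "Unable to find SQL dialect implementation class com.example.UnknownDialect", exception);
        }
    }
}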
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlGenerationVisitor.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlGenerationVisitor.java
index f65eb3549..cf9e8ce81 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlGenerationVisitor.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/SqlGenerationVisitor.java
@@ -1,39 +1,80 @@
package com.exasol.adapter.dialects;
+import java.util.ArrayList;
+import java.util.List;
+
import com.exasol.adapter.AdapterException;
import com.exasol.adapter.metadata.DataType;
-import com.exasol.adapter.sql.*;
+import com.exasol.adapter.sql.AggregateFunction;
+import com.exasol.adapter.sql.ScalarFunction;
+import com.exasol.adapter.sql.SqlColumn;
+import com.exasol.adapter.sql.SqlFunctionAggregate;
+import com.exasol.adapter.sql.SqlFunctionAggregateGroupConcat;
+import com.exasol.adapter.sql.SqlFunctionScalar;
+import com.exasol.adapter.sql.SqlFunctionScalarCase;
+import com.exasol.adapter.sql.SqlFunctionScalarCast;
+import com.exasol.adapter.sql.SqlFunctionScalarExtract;
+import com.exasol.adapter.sql.SqlGroupBy;
+import com.exasol.adapter.sql.SqlLimit;
+import com.exasol.adapter.sql.SqlLiteralBool;
+import com.exasol.adapter.sql.SqlLiteralDate;
+import com.exasol.adapter.sql.SqlLiteralDouble;
+import com.exasol.adapter.sql.SqlLiteralExactnumeric;
+import com.exasol.adapter.sql.SqlLiteralInterval;
+import com.exasol.adapter.sql.SqlLiteralNull;
+import com.exasol.adapter.sql.SqlLiteralString;
+import com.exasol.adapter.sql.SqlLiteralTimestamp;
+import com.exasol.adapter.sql.SqlLiteralTimestampUtc;
+import com.exasol.adapter.sql.SqlNode;
+import com.exasol.adapter.sql.SqlNodeVisitor;
+import com.exasol.adapter.sql.SqlOrderBy;
+import com.exasol.adapter.sql.SqlPredicateAnd;
+import com.exasol.adapter.sql.SqlPredicateBetween;
+import com.exasol.adapter.sql.SqlPredicateEqual;
+import com.exasol.adapter.sql.SqlPredicateInConstList;
+import com.exasol.adapter.sql.SqlPredicateIsNotNull;
+import com.exasol.adapter.sql.SqlPredicateIsNull;
+import com.exasol.adapter.sql.SqlPredicateLess;
+import com.exasol.adapter.sql.SqlPredicateLessEqual;
+import com.exasol.adapter.sql.SqlPredicateLike;
+import com.exasol.adapter.sql.SqlPredicateLikeRegexp;
+import com.exasol.adapter.sql.SqlPredicateNot;
+import com.exasol.adapter.sql.SqlPredicateNotEqual;
+import com.exasol.adapter.sql.SqlPredicateOr;
+import com.exasol.adapter.sql.SqlSelectList;
+import com.exasol.adapter.sql.SqlStatementSelect;
+import com.exasol.adapter.sql.SqlTable;
import com.google.common.base.Joiner;
-import java.util.ArrayList;
-import java.util.List;
-
/**
- * This class has the logic to generate SQL queries based on a graph of {@link SqlNode} elements.
- * It uses the visitor pattern.
- * This class interacts with the dialects in some situations, e.g. to find out how to handle quoting,
+ * This class has the logic to generate SQL queries based on a graph of
+ * {@link SqlNode} elements. It uses the visitor pattern. This class interacts
+ * with the dialects in some situations, e.g. to find out how to handle quoting,
* case-sensitivity.
*
*
- * If this class is not sufficiently customizable for your use case, you can extend
- * this class and override the required methods. You also have to return your custom
- * visitor class then in the method {@link SqlDialect#getSqlGenerationVisitor(SqlGenerationContext)}.
- * See {@link com.exasol.adapter.dialects.impl.OracleSqlGenerationVisitor} for an example.
+ * If this class is not sufficiently customizable for your use case, you can
+ * extend this class and override the required methods. You also have to return
+ * your custom visitor class then in the method
+ * {@link SqlDialect#getSqlGenerationVisitor(SqlGenerationContext)}. See
+ * {@link com.exasol.adapter.dialects.impl.OracleSqlGenerationVisitor} for an
+ * example.
*
*
* Note on operator associativity and parenthesis generation: Currently we use
- * parenthesis almost always. Without parenthesis, two SqlNode graphs with different
- * semantic lead to "select 1 = 1 - 1 + 1". Also "SELECT NOT NOT TRUE" needs to be written
- * as "SELECT NOT (NOT TRUE)" to work at all, whereas SELECT NOT TRUE works fine
- * without parentheses. Currently we make inflationary use of parenthesis to to enforce
- * the right semantic, but hopefully there is a better way.
+ * parenthesis almost always. Without parenthesis, two SqlNode graphs with
+ * different semantic lead to "select 1 = 1 - 1 + 1". Also "SELECT NOT NOT TRUE"
+ * needs to be written as "SELECT NOT (NOT TRUE)" to work at all, whereas SELECT
+ * NOT TRUE works fine without parentheses. Currently we make inflationary use
+ * of parentheses to enforce the right semantic, but hopefully there is a
+ * better way.
*/
public class SqlGenerationVisitor implements SqlNodeVisitor<String> {
- private SqlDialect dialect;
- private SqlGenerationContext context;
+ private final SqlDialect dialect;
+ private final SqlGenerationContext context;
- public SqlGenerationVisitor(SqlDialect dialect, SqlGenerationContext context) {
+ public SqlGenerationVisitor(final SqlDialect dialect, final SqlGenerationContext context) {
this.dialect = dialect;
this.context = context;
@@ -42,21 +83,25 @@ public SqlGenerationVisitor(SqlDialect dialect, SqlGenerationContext context) {
protected void checkDialectAliases() {
// Check if dialect provided invalid aliases, which would never be applied.
- for (ScalarFunction function : dialect.getScalarFunctionAliases().keySet()) {
+ for (final ScalarFunction function : this.dialect.getScalarFunctionAliases().keySet()) {
if (!function.isSimple()) {
- throw new RuntimeException("The dialect " + dialect.getPublicName() + " provided an alias for the non-simple scalar function " + function.name() + ". This alias will never be considered.");
+ throw new RuntimeException("The dialect " + SqlDialect.getPublicName()
+ + " provided an alias for the non-simple scalar function " + function.name()
+ + ". This alias will never be considered.");
}
}
- for (AggregateFunction function : dialect.getAggregateFunctionAliases().keySet()) {
+ for (final AggregateFunction function : this.dialect.getAggregateFunctionAliases().keySet()) {
if (!function.isSimple()) {
- throw new RuntimeException("The dialect " + dialect.getPublicName() + " provided an alias for the non-simple aggregate function " + function.name() + ". This alias will never be considered.");
+ throw new RuntimeException("The dialect " + SqlDialect.getPublicName()
+ + " provided an alias for the non-simple aggregate function " + function.name()
+ + ". This alias will never be considered.");
}
}
}
@Override
- public String visit(SqlStatementSelect select) throws AdapterException {
- StringBuilder sql = new StringBuilder();
+ public String visit(final SqlStatementSelect select) throws AdapterException {
+ final StringBuilder sql = new StringBuilder();
sql.append("SELECT ");
sql.append(select.getSelectList().accept(this));
sql.append(" FROM ");
@@ -85,15 +130,15 @@ public String visit(SqlStatementSelect select) throws AdapterException {
}
@Override
- public String visit(SqlSelectList selectList) throws AdapterException {
- List<String> selectElement = new ArrayList<>();
+ public String visit(final SqlSelectList selectList) throws AdapterException {
+ final List<String> selectElement = new ArrayList<>();
if (selectList.isRequestAnyColumn()) {
// The system requested any column
selectElement.add("true");
} else if (selectList.isSelectStar()) {
selectElement.add("*");
} else {
- for (SqlNode node : selectList.getExpressions()) {
+ for (final SqlNode node : selectList.getExpressions()) {
selectElement.add(node.accept(this));
}
}
@@ -101,41 +146,42 @@ public String visit(SqlSelectList selectList) throws AdapterException {
}
@Override
- public String visit(SqlColumn column) throws AdapterException {
- return dialect.applyQuoteIfNeeded(column.getName());
+ public String visit(final SqlColumn column) throws AdapterException {
+ return this.dialect.applyQuoteIfNeeded(column.getName());
}
@Override
- public String visit(SqlTable table) {
+ public String visit(final SqlTable table) {
String schemaPrefix = "";
- if (dialect.requiresCatalogQualifiedTableNames(context) && context.getCatalogName() != null && !context.getCatalogName().isEmpty()) {
- schemaPrefix = dialect.applyQuoteIfNeeded(context.getCatalogName())
- + dialect.getTableCatalogAndSchemaSeparator();
+ if (this.dialect.requiresCatalogQualifiedTableNames(this.context) && this.context.getCatalogName() != null
+ && !this.context.getCatalogName().isEmpty()) {
+ schemaPrefix = this.dialect.applyQuoteIfNeeded(this.context.getCatalogName())
+ + this.dialect.getTableCatalogAndSchemaSeparator();
}
- if (dialect.requiresSchemaQualifiedTableNames(context) && context.getSchemaName() != null && !context.getSchemaName().isEmpty()) {
- schemaPrefix += dialect.applyQuoteIfNeeded(context.getSchemaName())
- + dialect.getTableCatalogAndSchemaSeparator();
+ if (this.dialect.requiresSchemaQualifiedTableNames(this.context) && this.context.getSchemaName() != null
+ && !this.context.getSchemaName().isEmpty()) {
+ schemaPrefix += this.dialect.applyQuoteIfNeeded(this.context.getSchemaName())
+ + this.dialect.getTableCatalogAndSchemaSeparator();
}
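+ // e.g. with "." as separator and both qualifiers required this yields
+ // something like "CATALOG"."SCHEMA"."TAB" (actual quoting depends on the dialect)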
- return schemaPrefix + dialect.applyQuoteIfNeeded(table.getName());
+ return schemaPrefix + this.dialect.applyQuoteIfNeeded(table.getName());
}
@Override
- public String visit(SqlGroupBy groupBy) throws AdapterException {
+ public String visit(final SqlGroupBy groupBy) throws AdapterException {
if (groupBy.getExpressions() == null || groupBy.getExpressions().isEmpty()) {
- throw new RuntimeException(
- "Unexpected internal state (empty group by)");
+ throw new RuntimeException("Unexpected internal state (empty group by)");
}
- List<String> selectElement = new ArrayList<>();
- for (SqlNode node : groupBy.getExpressions()) {
+ final List<String> selectElement = new ArrayList<>();
+ for (final SqlNode node : groupBy.getExpressions()) {
selectElement.add(node.accept(this));
}
return Joiner.on(", ").join(selectElement);
}
@Override
- public String visit(SqlFunctionAggregate function) throws AdapterException {
- List<String> argumentsSql = new ArrayList<>();
- for (SqlNode node : function.getArguments()) {
+ public String visit(final SqlFunctionAggregate function) throws AdapterException {
+ final List<String> argumentsSql = new ArrayList<>();
+ for (final SqlNode node : function.getArguments()) {
argumentsSql.add(node.accept(this));
}
if (function.getFunctionName().equalsIgnoreCase("count") && argumentsSql.size() == 0) {
@@ -146,26 +192,25 @@ public String visit(SqlFunctionAggregate function) throws AdapterException {
distinctSql = "DISTINCT ";
}
String functionNameInSourceSystem = function.getFunctionName();
- if (dialect.getAggregateFunctionAliases().containsKey(function.getFunction())) {
- functionNameInSourceSystem = dialect.getAggregateFunctionAliases().get(function.getFunction());
+ if (this.dialect.getAggregateFunctionAliases().containsKey(function.getFunction())) {
+ functionNameInSourceSystem = this.dialect.getAggregateFunctionAliases().get(function.getFunction());
}
- return functionNameInSourceSystem + "(" + distinctSql
- + Joiner.on(", ").join(argumentsSql) + ")";
+ return functionNameInSourceSystem + "(" + distinctSql + Joiner.on(", ").join(argumentsSql) + ")";
}
@Override
- public String visit(SqlFunctionAggregateGroupConcat function) throws AdapterException {
- StringBuilder builder = new StringBuilder();
+ public String visit(final SqlFunctionAggregateGroupConcat function) throws AdapterException {
+ final StringBuilder builder = new StringBuilder();
builder.append(function.getFunctionName());
builder.append("(");
if (function.hasDistinct()) {
builder.append("DISTINCT ");
}
- assert(function.getArguments().size() == 1 && function.getArguments().get(0) != null);
+ assert (function.getArguments().size() == 1 && function.getArguments().get(0) != null);
builder.append(function.getArguments().get(0).accept(this));
if (function.hasOrderBy()) {
builder.append(" ");
- String orderByString = function.getOrderBy().accept(this);
+ final String orderByString = function.getOrderBy().accept(this);
builder.append(orderByString);
}
if (function.getSeparator() != null) {
@@ -179,35 +224,33 @@ public String visit(SqlFunctionAggregateGroupConcat function) throws AdapterExce
}
@Override
- public String visit(SqlFunctionScalar function) throws AdapterException {
- List<String> argumentsSql = new ArrayList<>();
- for (SqlNode node : function.getArguments()) {
+ public String visit(final SqlFunctionScalar function) throws AdapterException {
+ final List<String> argumentsSql = new ArrayList<>();
+ for (final SqlNode node : function.getArguments()) {
argumentsSql.add(node.accept(this));
}
String functionNameInSourceSystem = function.getFunctionName();
- if (dialect.getScalarFunctionAliases().containsKey(function.getFunction())) {
+ if (this.dialect.getScalarFunctionAliases().containsKey(function.getFunction())) {
// Take alias if one is defined - will overwrite the infix
- functionNameInSourceSystem = dialect.getScalarFunctionAliases().get(function.getFunction());
+ functionNameInSourceSystem = this.dialect.getScalarFunctionAliases().get(function.getFunction());
} else {
- if (dialect.getBinaryInfixFunctionAliases().containsKey(function.getFunction())) {
+ if (this.dialect.getBinaryInfixFunctionAliases().containsKey(function.getFunction())) {
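+ // e.g. an infix alias such as "||" makes this branch render "(arg1 || arg2)"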
assert (argumentsSql.size() == 2);
String realFunctionName = function.getFunctionName();
- if (dialect.getBinaryInfixFunctionAliases().containsKey(function.getFunction())) {
- realFunctionName = dialect.getBinaryInfixFunctionAliases().get(function.getFunction());
+ if (this.dialect.getBinaryInfixFunctionAliases().containsKey(function.getFunction())) {
+ realFunctionName = this.dialect.getBinaryInfixFunctionAliases().get(function.getFunction());
}
- return "(" + argumentsSql.get(0) + " " + realFunctionName + " "
- + argumentsSql.get(1) + ")";
- } else if (dialect.getPrefixFunctionAliases().containsKey(function.getFunction())) {
+ return "(" + argumentsSql.get(0) + " " + realFunctionName + " " + argumentsSql.get(1) + ")";
+ } else if (this.dialect.getPrefixFunctionAliases().containsKey(function.getFunction())) {
assert (argumentsSql.size() == 1);
String realFunctionName = function.getFunctionName();
- if (dialect.getPrefixFunctionAliases().containsKey(function.getFunction())) {
- realFunctionName = dialect.getPrefixFunctionAliases().get(function.getFunction());
+ if (this.dialect.getPrefixFunctionAliases().containsKey(function.getFunction())) {
+ realFunctionName = this.dialect.getPrefixFunctionAliases().get(function.getFunction());
}
- return "(" + realFunctionName
- + argumentsSql.get(0) + ")";
+ return "(" + realFunctionName + argumentsSql.get(0) + ")";
}
}
- if (argumentsSql.size() == 0 && dialect.omitParentheses(function.getFunction())) {
+ if (argumentsSql.size() == 0 && this.dialect.omitParentheses(function.getFunction())) {
return functionNameInSourceSystem;
} else {
return functionNameInSourceSystem + "(" + Joiner.on(", ").join(argumentsSql) + ")";
@@ -215,16 +258,16 @@ public String visit(SqlFunctionScalar function) throws AdapterException {
}
@Override
- public String visit(SqlFunctionScalarCase function) throws AdapterException {
- StringBuilder builder = new StringBuilder();
+ public String visit(final SqlFunctionScalarCase function) throws AdapterException {
+ final StringBuilder builder = new StringBuilder();
builder.append("CASE");
if (function.getBasis() != null) {
builder.append(" ");
builder.append(function.getBasis().accept(this));
}
for (int i = 0; i < function.getArguments().size(); i++) {
- SqlNode node = function.getArguments().get(i);
- SqlNode result = function.getResults().get(i);
+ final SqlNode node = function.getArguments().get(i);
+ final SqlNode result = function.getResults().get(i);
builder.append(" WHEN ");
builder.append(node.accept(this));
builder.append(" THEN ");
@@ -239,12 +282,12 @@ public String visit(SqlFunctionScalarCase function) throws AdapterException {
}
@Override
- public String visit(SqlFunctionScalarCast function) throws AdapterException {
+ public String visit(final SqlFunctionScalarCast function) throws AdapterException {
- StringBuilder builder = new StringBuilder();
+ final StringBuilder builder = new StringBuilder();
builder.append("CAST");
builder.append("(");
- assert(function.getArguments().size() == 1 && function.getArguments().get(0) != null);
+ assert (function.getArguments().size() == 1 && function.getArguments().get(0) != null);
builder.append(function.getArguments().get(0).accept(this));
builder.append(" AS ");
builder.append(function.getDataType());
@@ -253,14 +296,14 @@ public String visit(SqlFunctionScalarCast function) throws AdapterException {
}
@Override
- public String visit(SqlFunctionScalarExtract function) throws AdapterException {
- assert(function.getArguments().size() == 1 && function.getArguments().get(0) != null);
- String expression = function.getArguments().get(0).accept(this);
- return function.getFunctionName() + "(" + function.getToExtract() + " FROM "+ expression + ")";
+ public String visit(final SqlFunctionScalarExtract function) throws AdapterException {
+ assert (function.getArguments().size() == 1 && function.getArguments().get(0) != null);
+ final String expression = function.getArguments().get(0).accept(this);
+ return function.getFunctionName() + "(" + function.getToExtract() + " FROM " + expression + ")";
}
@Override
- public String visit(SqlLimit limit) {
+ public String visit(final SqlLimit limit) {
String offsetSql = "";
if (limit.getOffset() != 0) {
offsetSql = " OFFSET " + limit.getOffset();
@@ -269,7 +312,7 @@ public String visit(SqlLimit limit) {
}
@Override
- public String visit(SqlLiteralBool literal) {
+ public String visit(final SqlLiteralBool literal) {
if (literal.getValue()) {
return "true";
} else {
@@ -278,50 +321,50 @@ public String visit(SqlLiteralBool literal) {
}
@Override
- public String visit(SqlLiteralDate literal) {
+ public String visit(final SqlLiteralDate literal) {
return "DATE '" + literal.getValue() + "'"; // This gets always executed
// as
// TO_DATE('2015-02-01','YYYY-MM-DD')
}
@Override
- public String visit(SqlLiteralDouble literal) {
+ public String visit(final SqlLiteralDouble literal) {
return Double.toString(literal.getValue());
}
@Override
- public String visit(SqlLiteralExactnumeric literal) {
+ public String visit(final SqlLiteralExactnumeric literal) {
return literal.getValue().toString();
}
@Override
- public String visit(SqlLiteralNull literal) {
+ public String visit(final SqlLiteralNull literal) {
return "NULL";
}
@Override
- public String visit(SqlLiteralString literal) {
- return dialect.getStringLiteral(literal.getValue());
+ public String visit(final SqlLiteralString literal) {
+ return this.dialect.getStringLiteral(literal.getValue());
}
@Override
- public String visit(SqlLiteralTimestamp literal) {
+ public String visit(final SqlLiteralTimestamp literal) {
// TODO Allow dialect to modify behavior
return "TIMESTAMP '" + literal.getValue().toString() + "'";
}
@Override
- public String visit(SqlLiteralTimestampUtc literal) {
+ public String visit(final SqlLiteralTimestampUtc literal) {
// TODO Allow dialect to modify behavior
return "TIMESTAMP '" + literal.getValue().toString() + "'";
}
@Override
- public String visit(SqlLiteralInterval literal) {
+ public String visit(final SqlLiteralInterval literal) {
// TODO Allow dialect to modify behavior
if (literal.getDataType().getIntervalType() == DataType.IntervalType.YEAR_TO_MONTH) {
- return "INTERVAL '" + literal.getValue().toString()
- + "' YEAR (" + literal.getDataType().getPrecision() + ") TO MONTH";
+ return "INTERVAL '" + literal.getValue().toString() + "' YEAR (" + literal.getDataType().getPrecision()
+ + ") TO MONTH";
} else {
return "INTERVAL '" + literal.getValue().toString() + "' DAY (" + literal.getDataType().getPrecision()
+ ") TO SECOND (" + literal.getDataType().getIntervalFraction() + ")";
@@ -329,18 +372,18 @@ public String visit(SqlLiteralInterval literal) {
}
@Override
- public String visit(SqlOrderBy orderBy) throws AdapterException {
+ public String visit(final SqlOrderBy orderBy) throws AdapterException {
// ORDER BY [ASC/DESC] [NULLS FIRST/LAST]
// ASC and NULLS LAST are default in EXASOL
- List<String> sqlOrderElement = new ArrayList<>();
+ final List<String> sqlOrderElement = new ArrayList<>();
for (int i = 0; i < orderBy.getExpressions().size(); ++i) {
String elementSql = orderBy.getExpressions().get(i).accept(this);
- boolean shallNullsBeAtTheEnd = orderBy.nullsLast().get(i);
- boolean isAscending = orderBy.isAscending().get(i);
+ final boolean shallNullsBeAtTheEnd = orderBy.nullsLast().get(i);
+ final boolean isAscending = orderBy.isAscending().get(i);
if (!isAscending) {
elementSql += " DESC";
}
- if (shallNullsBeAtTheEnd != nullsAreAtEndByDefault(isAscending, dialect.getDefaultNullSorting())) {
+ if (shallNullsBeAtTheEnd != nullsAreAtEndByDefault(isAscending, this.dialect.getDefaultNullSorting())) {
// we have to specify null positioning explicitly, otherwise it would be wrong
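// e.g. appending " NULLS FIRST" here can yield "<expression> DESC NULLS FIRST"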
elementSql += (shallNullsBeAtTheEnd) ? " NULLS LAST" : " NULLS FIRST";
}
@@ -350,11 +393,13 @@ public String visit(SqlOrderBy orderBy) throws AdapterException {
}
/**
- * @param isAscending true if the desired sort order is ascending, false if descending
- * @param defaultNullSorting default null sorting of dialect
- * @return true, if the data source would position nulls at end of the resultset if NULLS FIRST/LAST is not specified explicitly.
+ * @param isAscending true if the desired sort order is ascending, false
+ * if descending
+ * @param defaultNullSorting default null sorting of dialect
+ * @return true if the data source would position nulls at the end of the result
+ * set if NULLS FIRST/LAST is not specified explicitly.
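+ *
+ * Example: if the dialect reports NULLS_SORTED_AT_END, this method returns
+ * true regardless of the requested sort order.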
*/
- private boolean nullsAreAtEndByDefault(boolean isAscending, SqlDialect.NullSorting defaultNullSorting) {
+ private boolean nullsAreAtEndByDefault(final boolean isAscending, final SqlDialect.NullSorting defaultNullSorting) {
if (defaultNullSorting == SqlDialect.NullSorting.NULLS_SORTED_AT_END) {
return true;
} else if (defaultNullSorting == SqlDialect.NullSorting.NULLS_SORTED_AT_START) {
@@ -369,53 +414,47 @@ private boolean nullsAreAtEndByDefault(boolean isAscending, SqlDialect.NullSorti
}
@Override
- public String visit(SqlPredicateAnd predicate) throws AdapterException {
- List<String> operandsSql = new ArrayList<>();
- for (SqlNode node : predicate.getAndedPredicates()) {
+ public String visit(final SqlPredicateAnd predicate) throws AdapterException {
+ final List<String> operandsSql = new ArrayList<>();
+ for (final SqlNode node : predicate.getAndedPredicates()) {
operandsSql.add(node.accept(this));
}
return "(" + Joiner.on(" AND ").join(operandsSql) + ")";
}
@Override
- public String visit(SqlPredicateBetween predicate) throws AdapterException {
- return predicate.getExpression().accept(this) + " BETWEEN "
- + predicate.getBetweenLeft().accept(this) + " AND "
+ public String visit(final SqlPredicateBetween predicate) throws AdapterException {
+ return predicate.getExpression().accept(this) + " BETWEEN " + predicate.getBetweenLeft().accept(this) + " AND "
+ predicate.getBetweenRight().accept(this);
}
@Override
- public String visit(SqlPredicateEqual predicate) throws AdapterException {
- return predicate.getLeft().accept(this) + " = "
- + predicate.getRight().accept(this);
+ public String visit(final SqlPredicateEqual predicate) throws AdapterException {
+ return predicate.getLeft().accept(this) + " = " + predicate.getRight().accept(this);
}
@Override
- public String visit(SqlPredicateInConstList predicate) throws AdapterException {
- List<String> argumentsSql = new ArrayList<>();
- for (SqlNode node : predicate.getInArguments()) {
+ public String visit(final SqlPredicateInConstList predicate) throws AdapterException {
+ final List<String> argumentsSql = new ArrayList<>();
+ for (final SqlNode node : predicate.getInArguments()) {
argumentsSql.add(node.accept(this));
}
- return predicate.getExpression().accept(this) + " IN ("
- + Joiner.on(", ").join(argumentsSql) + ")";
+ return predicate.getExpression().accept(this) + " IN (" + Joiner.on(", ").join(argumentsSql) + ")";
}
@Override
- public String visit(SqlPredicateLess predicate) throws AdapterException {
- return predicate.getLeft().accept(this) + " < "
- + predicate.getRight().accept(this);
+ public String visit(final SqlPredicateLess predicate) throws AdapterException {
+ return predicate.getLeft().accept(this) + " < " + predicate.getRight().accept(this);
}
@Override
- public String visit(SqlPredicateLessEqual predicate) throws AdapterException {
- return predicate.getLeft().accept(this) + " <= "
- + predicate.getRight().accept(this);
+ public String visit(final SqlPredicateLessEqual predicate) throws AdapterException {
+ return predicate.getLeft().accept(this) + " <= " + predicate.getRight().accept(this);
}
@Override
- public String visit(SqlPredicateLike predicate) throws AdapterException {
- String sql = predicate.getLeft().accept(this) + " LIKE "
- + predicate.getPattern().accept(this);
+ public String visit(final SqlPredicateLike predicate) throws AdapterException {
+ String sql = predicate.getLeft().accept(this) + " LIKE " + predicate.getPattern().accept(this);
if (predicate.getEscapeChar() != null) {
sql += " ESCAPE " + predicate.getEscapeChar().accept(this);
}
@@ -423,39 +462,37 @@ public String visit(SqlPredicateLike predicate) throws AdapterException {
}
@Override
- public String visit(SqlPredicateLikeRegexp predicate) throws AdapterException {
- return predicate.getLeft().accept(this) + " REGEXP_LIKE "
- + predicate.getPattern().accept(this);
+ public String visit(final SqlPredicateLikeRegexp predicate) throws AdapterException {
+ return predicate.getLeft().accept(this) + " REGEXP_LIKE " + predicate.getPattern().accept(this);
}
@Override
- public String visit(SqlPredicateNot predicate) throws AdapterException {
+ public String visit(final SqlPredicateNot predicate) throws AdapterException {
// "SELECT NOT NOT TRUE" is invalid syntax, "SELECT NOT (NOT TRUE)" works.
return "NOT (" + predicate.getExpression().accept(this) + ")";
}
@Override
- public String visit(SqlPredicateNotEqual predicate) throws AdapterException {
- return predicate.getLeft().accept(this) + " <> "
- + predicate.getRight().accept(this);
+ public String visit(final SqlPredicateNotEqual predicate) throws AdapterException {
+ return predicate.getLeft().accept(this) + " <> " + predicate.getRight().accept(this);
}
@Override
- public String visit(SqlPredicateOr predicate) throws AdapterException {
- List<String> operandsSql = new ArrayList<>();
- for (SqlNode node : predicate.getOrPredicates()) {
+ public String visit(final SqlPredicateOr predicate) throws AdapterException {
+ final List<String> operandsSql = new ArrayList<>();
+ for (final SqlNode node : predicate.getOrPredicates()) {
operandsSql.add(node.accept(this));
}
return "(" + Joiner.on(" OR ").join(operandsSql) + ")";
}
@Override
- public String visit(SqlPredicateIsNull predicate) throws AdapterException {
+ public String visit(final SqlPredicateIsNull predicate) throws AdapterException {
return predicate.getExpression().accept(this) + " IS NULL";
}
@Override
- public String visit(SqlPredicateIsNotNull predicate) throws AdapterException {
+ public String visit(final SqlPredicateIsNotNull predicate) throws AdapterException {
return predicate.getExpression().accept(this) + " IS NOT NULL";
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/DB2SqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/DB2SqlDialect.java
index 384192fe6..745526f7a 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/DB2SqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/DB2SqlDialect.java
@@ -1,8 +1,5 @@
package com.exasol.adapter.dialects.impl;
-import com.exasol.adapter.dialects.*;
-
-import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
@@ -12,33 +9,33 @@
import com.exasol.adapter.capabilities.MainCapability;
import com.exasol.adapter.capabilities.PredicateCapability;
import com.exasol.adapter.capabilities.ScalarFunctionCapability;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
import com.exasol.adapter.metadata.DataType;
/**
- * Dialect for DB2 using the DB2 Connector jdbc driver.
+ * Dialect for DB2 using the DB2 Connector JDBC driver.
*
* @author Karl Griesser (fullref@gmail.com)
*/
public class DB2SqlDialect extends AbstractSqlDialect {
+ private static final String NAME = "DB2";
- public DB2SqlDialect(SqlDialectContext context)
- {
+ public DB2SqlDialect(final SqlDialectContext context) {
super(context);
}
- public static final String NAME = "DB2";
-
- @Override
- public String getPublicName()
- {
+ public static String getPublicName() {
return NAME;
}
@Override
- public Capabilities getCapabilities()
- {
- Capabilities cap = new Capabilities();
+ public Capabilities getCapabilities() {
+ final Capabilities cap = new Capabilities();
// Capabilities
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
@@ -101,7 +98,8 @@ public Capabilities getCapabilities()
cap.supportAggregateFunction(AggregateFunctionCapability.FIRST_VALUE);
cap.supportAggregateFunction(AggregateFunctionCapability.LAST_VALUE);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV);
- // not supported cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_DISTINCT);
+ // not supported
+ // cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_POP);
// STDDEV_POP_DISTINCT
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_SAMP);
@@ -153,7 +151,8 @@ public Capabilities getCapabilities()
// CONCAT is not supported. Number of arguments can be different.
// DUMP is not supported. Output is different.
- // EDIT_DISTANCE is not supported. Output is different. UTL_MATCH.EDIT_DISTANCE returns -1 with NULL argument.
+ // EDIT_DISTANCE is not supported. Output is different. UTL_MATCH.EDIT_DISTANCE
+ // returns -1 with NULL argument.
// INSERT is not supported.
cap.supportScalarFunction(ScalarFunctionCapability.INSTR);
cap.supportScalarFunction(ScalarFunctionCapability.LENGTH);
@@ -162,9 +161,12 @@ public Capabilities getCapabilities()
cap.supportScalarFunction(ScalarFunctionCapability.LPAD);
cap.supportScalarFunction(ScalarFunctionCapability.LTRIM);
// OCTET_LENGTH is not supported. Can be different for Unicode characters.
- // not supported cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_INSTR);
- // not supported cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_REPLACE);
- // not supported cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_SUBSTR);
+ // not supported
+ // cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_INSTR);
+ // not supported
+ // cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_REPLACE);
+ // not supported
+ // cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_SUBSTR);
cap.supportScalarFunction(ScalarFunctionCapability.REPEAT);
cap.supportScalarFunction(ScalarFunctionCapability.REPLACE);
// REVERSE is not supported
@@ -200,8 +202,8 @@ public Capabilities getCapabilities()
// MINUTES_BETWEEN is not supported. EXTRACT does not work on strings.
// MONTH is not supported. EXTRACT does not work on strings.
// MONTHS_BETWEEN is not supported. EXTRACT does not work on strings.
- //cap.supportScalarFunction(ScalarFunctionCapability.NUMTODSINTERVAL);
- //cap.supportScalarFunction(ScalarFunctionCapability.NUMTOYMINTERVAL);
+ // cap.supportScalarFunction(ScalarFunctionCapability.NUMTODSINTERVAL);
+ // cap.supportScalarFunction(ScalarFunctionCapability.NUMTOYMINTERVAL);
// POSIX_TIME is not supported. Does not work on strings.
// SECOND is not supported. EXTRACT does not work on strings.
// SECONDS_BETWEEN is not supported. EXTRACT does not work on strings.
@@ -278,42 +280,38 @@ public Capabilities getCapabilities()
}
@Override
- public SchemaOrCatalogSupport supportsJdbcCatalogs()
- {
+ public SchemaOrCatalogSupport supportsJdbcCatalogs() {
return SchemaOrCatalogSupport.UNSUPPORTED;
}
@Override
- public SchemaOrCatalogSupport supportsJdbcSchemas()
- {
+ public SchemaOrCatalogSupport supportsJdbcSchemas() {
return SchemaOrCatalogSupport.SUPPORTED;
}
@Override
- public IdentifierCaseHandling getUnquotedIdentifierHandling()
- {
+ public IdentifierCaseHandling getUnquotedIdentifierHandling() {
return IdentifierCaseHandling.INTERPRET_AS_UPPER;
}
@Override
- public IdentifierCaseHandling getQuotedIdentifierHandling()
- {
+ public IdentifierCaseHandling getQuotedIdentifierHandling() {
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
}
@Override
- public String applyQuote(String identifier)
- {
- // If identifier contains double quotation marks ", it needs to be espaced by another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
+ public String applyQuote(final String identifier) {
+ // If identifier contains double quotation marks ", it needs to be escaped by
+ // another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
return "\"" + identifier.replace("\"", "\"\"") + "\"";
}
@Override
- public String applyQuoteIfNeeded(String identifier)
- {
+ public String applyQuoteIfNeeded(final String identifier) {
// Quoted identifiers can contain any unicode char except dot (.).
- // This is a simplified rule, which might cause that some identifiers are quoted although not needed
- boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ // This is a simplified rule, which may cause some identifiers to be quoted
+ // even though that is not necessary
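+ // e.g. MY_TABLE is returned as-is, while my_table or 1TAB would get quoted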
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
if (isSimpleIdentifier) {
return identifier;
} else {
@@ -322,44 +320,38 @@ public String applyQuoteIfNeeded(String identifier)
}
@Override
- public boolean requiresCatalogQualifiedTableNames(
- SqlGenerationContext context)
- {
- //DB2 does not know catalogs
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
+ // DB2 does not know catalogs
return false;
}
@Override
- public boolean requiresSchemaQualifiedTableNames(
- SqlGenerationContext context)
- {
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
return true;
}
-
+
@Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new DB2SqlGenerationVisitor(this, context);
}
@Override
- public NullSorting getDefaultNullSorting()
- {
- //default db2 behaviour is to set nulls to the end of the result
+ public NullSorting getDefaultNullSorting() {
+ // The default DB2 behaviour is to sort nulls at the end of the result set
return NullSorting.NULLS_SORTED_AT_END;
}
@Override
- public String getStringLiteral(String value)
- {
+ public String getStringLiteral(final String value) {
// Don't forget to escape single quote
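// e.g. the value it's is rendered as 'it''s'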
return "'" + value.replace("'", "''") + "'";
}
-
+
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType = null;
- int jdbcType = jdbcTypeDescription.getJdbcType();
-
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
+
switch (jdbcType) {
case Types.CLOB:
colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
@@ -372,16 +364,17 @@ public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescripti
case Types.TIMESTAMP:
colType = DataType.createVarChar(32, DataType.ExaCharset.UTF8);
break;
-
- // db2 driver always delivers UTF8 Characters no matter what encoding is specified for var + char data
+
+ // The DB2 driver always delivers UTF-8 characters, no matter which encoding is
+ // specified for VARCHAR and CHAR data
case Types.VARCHAR:
case Types.NVARCHAR:
case Types.LONGVARCHAR:
case Types.CHAR:
case Types.NCHAR:
case Types.LONGNVARCHAR: {
- int size = jdbcTypeDescription.getPrecisionOrSize();
- DataType.ExaCharset charset = DataType.ExaCharset.UTF8;
+ final int size = jdbcTypeDescription.getPrecisionOrSize();
+ final DataType.ExaCharset charset = DataType.ExaCharset.UTF8;
if (size <= DataType.maxExasolVarcharSize) {
colType = DataType.createVarChar(size, charset);
} else {
@@ -389,12 +382,13 @@ public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescripti
}
break;
}
-
- // VARCHAR and CHAR for bit data -> will be converted to hex string so we have to double the size
+
+ // VARCHAR and CHAR "FOR BIT DATA" columns are converted to hex strings, so we
+ // have to double the size
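+ // (e.g. a column of size 10 is mapped to size 20 on the Exasol side)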
case -2:
- colType = DataType.createChar(jdbcTypeDescription.getPrecisionOrSize()*2, DataType.ExaCharset.ASCII);
+ colType = DataType.createChar(jdbcTypeDescription.getPrecisionOrSize() * 2, DataType.ExaCharset.ASCII);
+ break;
case -3:
- colType = DataType.createVarChar(jdbcTypeDescription.getPrecisionOrSize()*2, DataType.ExaCharset.ASCII);
+ colType = DataType.createVarChar(jdbcTypeDescription.getPrecisionOrSize() * 2, DataType.ExaCharset.ASCII);
break;
}
return colType;
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ExasolSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ExasolSqlDialect.java
index adec7f1fb..5a620d5e2 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ExasolSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ExasolSqlDialect.java
@@ -1,5 +1,7 @@
package com.exasol.adapter.dialects.impl;
+import java.sql.SQLException;
+
import com.exasol.adapter.capabilities.Capabilities;
import com.exasol.adapter.dialects.AbstractSqlDialect;
import com.exasol.adapter.dialects.JdbcTypeDescription;
@@ -8,30 +10,33 @@
import com.exasol.adapter.metadata.DataType;
import com.exasol.adapter.sql.ScalarFunction;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-
/**
* This class is work-in-progress
*
- * TODO The precision of interval type columns is hardcoded, because it cannot be retrieved via JDBC. Should be retrieved from system table.
- * TODO The srid of geometry type columns is hardcoded, because it cannot be retrieved via JDBC. Should be retrieved from system table.
+ * TODO The precision of interval type columns is hardcoded, because it cannot
+ * be retrieved via JDBC. Should be retrieved from system table.
+ * TODO The srid of geometry type columns is hardcoded, because it cannot be
+ * retrieved via JDBC. Should be retrieved from system table.
*/
public class ExasolSqlDialect extends AbstractSqlDialect {
+ private static final String NAME = "EXASOL";
- public ExasolSqlDialect(SqlDialectContext context) {
+ public ExasolSqlDialect(final SqlDialectContext context) {
super(context);
- omitParenthesesMap.add(ScalarFunction.SYSDATE);
- omitParenthesesMap.add(ScalarFunction.SYSTIMESTAMP);
- omitParenthesesMap.add(ScalarFunction.CURRENT_SCHEMA);
- omitParenthesesMap.add(ScalarFunction.CURRENT_SESSION);
- omitParenthesesMap.add(ScalarFunction.CURRENT_STATEMENT);
- omitParenthesesMap.add(ScalarFunction.CURRENT_USER);
+ this.omitParenthesesMap.add(ScalarFunction.SYSDATE);
+ this.omitParenthesesMap.add(ScalarFunction.SYSTIMESTAMP);
+ this.omitParenthesesMap.add(ScalarFunction.CURRENT_SCHEMA);
+ this.omitParenthesesMap.add(ScalarFunction.CURRENT_SESSION);
+ this.omitParenthesesMap.add(ScalarFunction.CURRENT_STATEMENT);
+ this.omitParenthesesMap.add(ScalarFunction.CURRENT_USER);
}
- public static final String NAME = "EXASOL";
-
- public String getPublicName() {
+ /**
+ * Get the name under which the dialect is listed.
+ *
+ * @return name of the dialect
+ */
+ public static String getPublicName() {
return NAME;
}
@@ -46,26 +51,29 @@ public SchemaOrCatalogSupport supportsJdbcSchemas() {
}
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType = null;
- int jdbcType = jdbcTypeDescription.getJdbcType();
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
switch (jdbcType) {
- case -104:
- // Currently precision is hardcoded, because we cannot retrieve it via EXASOL jdbc driver.
- colType = DataType.createIntervalDaySecond(2,3);
- break;
- case -103:
- // Currently precision is hardcoded, because we cannot retrieve it via EXASOL jdbc driver.
- colType = DataType.createIntervalYearMonth(2);
- break;
- case 123:
- // Currently srid is hardcoded, because we cannot retrieve it via EXASOL jdbc driver.
- colType = DataType.createGeometry(3857);
- break;
- case 124:
- colType = DataType.createTimestamp(true);
- break;
+ case -104:
+ // Currently precision is hardcoded, because we cannot retrieve it via EXASOL
+ // jdbc driver.
+ colType = DataType.createIntervalDaySecond(2, 3);
+ break;
+ case -103:
+ // Currently precision is hardcoded, because we cannot retrieve it via EXASOL
+ // jdbc driver.
+ colType = DataType.createIntervalYearMonth(2);
+ break;
+ case 123:
+ // Currently srid is hardcoded, because we cannot retrieve it via EXASOL jdbc
+ // driver.
+ colType = DataType.createGeometry(3857);
+ break;
+ case 124:
+ colType = DataType.createTimestamp(true);
+ break;
}
return colType;
}
@@ -73,7 +81,7 @@ public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescripti
@Override
public Capabilities getCapabilities() {
// Supports all capabilities
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
cap.supportAllCapabilities();
return cap;
}
@@ -89,16 +97,18 @@ public IdentifierCaseHandling getQuotedIdentifierHandling() {
}
@Override
- public String applyQuote(String identifier) {
- // If identifier contains double quotation marks ", it needs to be espaced by another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
+ public String applyQuote(final String identifier) {
+ // If identifier contains double quotation marks ", it needs to be escaped by
+ // another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
return "\"" + identifier.replace("\"", "\"\"") + "\"";
}
@Override
- public String applyQuoteIfNeeded(String identifier) {
+ public String applyQuoteIfNeeded(final String identifier) {
// Quoted identifiers can contain any unicode char except dot (.).
- // This is a simplified rule, which might cause that some identifiers are quoted although not needed
- boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ // This is a simplified rule, which may cause some identifiers to be quoted
+ // even though that is not necessary
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
if (isSimpleIdentifier) {
return identifier;
} else {
@@ -107,26 +117,28 @@ public String applyQuoteIfNeeded(String identifier) {
}
@Override
- public boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
return false;
}
@Override
- public boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context) {
- // We need schema qualifiers a) if we are in IS_LOCAL mode, i.e. we run statements directly in a subselect without IMPORT FROM JDBC
- // and b) if we don't have the schema in the jdbc connection string (like "jdbc:exa:localhost:5555;schema=native")
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ // We need schema qualifiers a) if we are in IS_LOCAL mode, i.e. we run
+ // statements directly in a subselect without IMPORT FROM JDBC
+ // and b) if we don't have the schema in the jdbc connection string (like
+ // "jdbc:exa:localhost:5555;schema=native")
return true;
// return context.isLocal();
}
@Override
public NullSorting getDefaultNullSorting() {
- assert(getContext().getSchemaAdapterNotes().isNullsAreSortedHigh());
+ assert (getContext().getSchemaAdapterNotes().isNullsAreSortedHigh());
return NullSorting.NULLS_SORTED_HIGH;
}
@Override
- public String getStringLiteral(String value) {
+ public String getStringLiteral(final String value) {
// Don't forget to escape single quote
return "'" + value.replace("'", "''") + "'";
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/GenericSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/GenericSqlDialect.java
index d2e4df945..bdc9109d7 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/GenericSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/GenericSqlDialect.java
@@ -1,5 +1,7 @@
package com.exasol.adapter.dialects.impl;
+import java.sql.SQLException;
+
import com.exasol.adapter.capabilities.Capabilities;
import com.exasol.adapter.dialects.AbstractSqlDialect;
import com.exasol.adapter.dialects.JdbcTypeDescription;
@@ -8,27 +10,25 @@
import com.exasol.adapter.jdbc.SchemaAdapterNotes;
import com.exasol.adapter.metadata.DataType;
-import java.sql.SQLException;
-
/**
- * This dialect can be used for data sources where a custom dialect implementation does not yet exists.
- * It will obtain all information from the JDBC Metadata.
+ * This dialect can be used for data sources where a custom dialect
+ * implementation does not yet exist. It will obtain all information from the
+ * JDBC Metadata.
*/
public class GenericSqlDialect extends AbstractSqlDialect {
-
- public GenericSqlDialect(SqlDialectContext context) {
+ public GenericSqlDialect(final SqlDialectContext context) {
super(context);
}
- public static final String NAME = "GENERIC";
+ private static final String NAME = "GENERIC";
- public String getPublicName() {
+ public static String getPublicName() {
return NAME;
}
@Override
public Capabilities getCapabilities() {
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
return cap;
}
@@ -44,7 +44,7 @@ public SchemaOrCatalogSupport supportsJdbcSchemas() {
@Override
public IdentifierCaseHandling getUnquotedIdentifierHandling() {
- SchemaAdapterNotes adapterNotes = getContext().getSchemaAdapterNotes();
+ final SchemaAdapterNotes adapterNotes = getContext().getSchemaAdapterNotes();
if (adapterNotes.isSupportsMixedCaseIdentifiers()) {
// Unquoted identifiers are treated case-sensitive and stored mixed case
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
@@ -57,14 +57,15 @@ public IdentifierCaseHandling getUnquotedIdentifierHandling() {
// This case is a bit strange - case insensitive, but still stores it mixed case
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
} else {
- throw new RuntimeException("Unexpected quote behavior. Adapternotes: " + SchemaAdapterNotes.serialize(adapterNotes));
+ throw new RuntimeException(
+ "Unexpected quote behavior. Adapternotes: " + SchemaAdapterNotes.serialize(adapterNotes));
}
}
}
@Override
public IdentifierCaseHandling getQuotedIdentifierHandling() {
- SchemaAdapterNotes adapterNotes = getContext().getSchemaAdapterNotes();
+ final SchemaAdapterNotes adapterNotes = getContext().getSchemaAdapterNotes();
if (adapterNotes.isSupportsMixedCaseQuotedIdentifiers()) {
// Quoted identifiers are treated case-sensitive and stored mixed case
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
@@ -77,38 +78,41 @@ public IdentifierCaseHandling getQuotedIdentifierHandling() {
// This case is a bit strange - case insensitive, but still stores it mixed case
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
} else {
- throw new RuntimeException("Unexpected quote behavior. Adapternotes: " + SchemaAdapterNotes.serialize(adapterNotes));
+ throw new RuntimeException(
+ "Unexpected quote behavior. Adapternotes: " + SchemaAdapterNotes.serialize(adapterNotes));
}
}
}
@Override
- public String applyQuote(String identifier) {
- String quoteString = getContext().getSchemaAdapterNotes().getIdentifierQuoteString();
+ public String applyQuote(final String identifier) {
+ final String quoteString = getContext().getSchemaAdapterNotes().getIdentifierQuoteString();
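+ // e.g. with an identifier quote string of " the identifier tab becomes "tab"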
return quoteString + identifier + quoteString;
}
@Override
- public String applyQuoteIfNeeded(String identifier) {
+ public String applyQuoteIfNeeded(final String identifier) {
// We could consider getExtraNameCharacters() here as well to do less quoting
return applyQuote(identifier);
}
@Override
- public boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
return true;
}
@Override
- public boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context) {
- // See getCatalogSeparator(): String that this database uses as the separator between a catalog and table name.
- // See isCatalogAtStart(): whether a catalog appears at the start of a fully qualified table name
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ // See getCatalogSeparator(): String that this database uses as the separator
+ // between a catalog and table name.
+ // See isCatalogAtStart(): whether a catalog appears at the start of a fully
+ // qualified table name
return true;
}
@Override
public NullSorting getDefaultNullSorting() {
- SchemaAdapterNotes notes = getContext().getSchemaAdapterNotes();
+ final SchemaAdapterNotes notes = getContext().getSchemaAdapterNotes();
if (notes.isNullsAreSortedAtEnd()) {
return NullSorting.NULLS_SORTED_AT_END;
} else if (notes.isNullsAreSortedAtStart()) {
@@ -122,12 +126,12 @@ public NullSorting getDefaultNullSorting() {
}
@Override
- public String getStringLiteral(String value) {
+ public String getStringLiteral(final String value) {
return "'" + value.replace("'", "''") + "'";
}
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcType) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcType) throws SQLException {
return null;
}
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/HiveSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/HiveSqlDialect.java
index 3a50cee54..c4bfc33c7 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/HiveSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/HiveSqlDialect.java
@@ -1,34 +1,41 @@
package com.exasol.adapter.dialects.impl;
-import com.exasol.adapter.capabilities.*;
-import com.exasol.adapter.dialects.*;
-import com.exasol.adapter.metadata.DataType;
-import com.exasol.adapter.sql.ScalarFunction;
-
import java.sql.SQLException;
import java.util.EnumMap;
import java.util.Map;
+import com.exasol.adapter.capabilities.AggregateFunctionCapability;
+import com.exasol.adapter.capabilities.Capabilities;
+import com.exasol.adapter.capabilities.LiteralCapability;
+import com.exasol.adapter.capabilities.MainCapability;
+import com.exasol.adapter.capabilities.PredicateCapability;
+import com.exasol.adapter.capabilities.ScalarFunctionCapability;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
+import com.exasol.adapter.metadata.DataType;
+import com.exasol.adapter.sql.ScalarFunction;
+
/**
- * Dialect for Hive, using the Cloudera Hive JDBC Driver/Connector (developed by Simba).
- * Only supports Hive 2.1.0 and later because of the order by (nulls first/last option)
- * TODO Finish implementation of this dialect and add as a supported dialect
+ * Dialect for Hive, using the Cloudera Hive JDBC Driver/Connector (developed by
+ * Simba). Only supports Hive 2.1.0 and later because of the ORDER BY (NULLS
+ * FIRST/LAST) option.
+ *
+ * TODO Finish implementation of this dialect and add as a supported dialect
*/
public class HiveSqlDialect extends AbstractSqlDialect {
-
- public HiveSqlDialect(SqlDialectContext context) {
+ public HiveSqlDialect(final SqlDialectContext context) {
super(context);
}
- public static final String NAME = "HIVE";
-
- public String getPublicName() {
- return NAME;
+ public static String getPublicName() {
+ return "HIVE";
}
@Override
public Capabilities getCapabilities() {
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
cap.supportMainCapability(MainCapability.FILTER_EXPRESSIONS);
@@ -131,7 +138,7 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.SECOND);
cap.supportScalarFunction(ScalarFunctionCapability.WEEK);
- /*hive doesn't support geospatial functions*/
+ /* hive doesn't support geospatial functions */
cap.supportScalarFunction(ScalarFunctionCapability.CAST);
@@ -145,11 +152,12 @@ public Capabilities getCapabilities() {
}
/**
- * Quote from user manual The Cloudera JDBC Driver for Apache Hive supports both catalogs and schemas to make it easy for
- * the driver to work with various JDBC applications. Since Hive only organizes tables into
- * schemas/databases, the driver provides a synthetic catalog called “HIVE” under which all of the
- * schemas/databases are organized. The driver also maps the JDBC schema to the Hive
- * schema/database.
+ * Quote from the user manual: The Cloudera JDBC Driver for Apache Hive supports both
+ * catalogs and schemas to make it easy for the driver to work with various JDBC
+ * applications. Since Hive only organizes tables into schemas/databases, the
+ * driver provides a synthetic catalog called “HIVE” under which all of the
+ * schemas/databases are organized. The driver also maps the JDBC schema to the
+ * Hive schema/database.
*/
@Override
public SchemaOrCatalogSupport supportsJdbcCatalogs() {
@@ -172,26 +180,30 @@ public IdentifierCaseHandling getQuotedIdentifierHandling() {
}
@Override
- public String applyQuote(String identifier) {
- // If identifier contains double quotation marks ", it needs to be escaped by another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
+ public String applyQuote(final String identifier) {
+ // Hive identifiers are quoted with backticks; escaping of backticks inside the
+ // identifier is not handled here.
return "`" + identifier + "`";
}
@Override
- public String applyQuoteIfNeeded(String identifier) {
- // We need to apply quotes only in case of reserved keywords. Since we don't know these (could look up in JDBC Metadata...) we always quote.
+ public String applyQuoteIfNeeded(final String identifier) {
+ // We need to apply quotes only in case of reserved keywords. Since we don't
+ // know these (could look up in JDBC Metadata...) we always quote.
return applyQuote(identifier);
}
@Override
- public boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
return false;
}
@Override
- public boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context) {
- // We need schema qualifiers a) if we are in IS_LOCAL mode, i.e. we run statements directly in a subselect without IMPORT FROM JDBC
- // and b) if we don't have the schema in the jdbc connection string (like "jdbc:exa:localhost:5555;schema=native")
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ // We need schema qualifiers a) if we are in IS_LOCAL mode, i.e. we run
+ // statements directly in a subselect without IMPORT FROM JDBC
+ // and b) if we don't have the schema in the jdbc connection string (like
+ // "jdbc:exa:localhost:5555;schema=native")
return true;
// return context.isLocal();
}
@@ -207,20 +219,20 @@ public NullSorting getDefaultNullSorting() {
}
@Override
- public String getStringLiteral(String value) {
+ public String getStringLiteral(final String value) {
// Don't forget to escape single quote
return "'" + value.replace("'", "''") + "'";
}
@Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new HiveSqlGenerationVisitor(this, context);
}
@Override
public Map<ScalarFunction, String> getScalarFunctionAliases() {
- Map<ScalarFunction, String> scalarAliases = new EnumMap<>(ScalarFunction.class);
+ final Map<ScalarFunction, String> scalarAliases = new EnumMap<>(ScalarFunction.class);
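+ // e.g. ADD_DAYS is rendered as Hive's DATE_ADD by the SQL generation visitor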
scalarAliases.put(ScalarFunction.ADD_DAYS, "DATE_ADD");
scalarAliases.put(ScalarFunction.DAYS_BETWEEN, "DATEDIFF");
@@ -232,7 +244,7 @@ public Map<ScalarFunction, String> getScalarFunctionAliases() {
}
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcType) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcType) throws SQLException {
return null;
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ImpalaSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ImpalaSqlDialect.java
index 426c131b0..202bf0382 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ImpalaSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/ImpalaSqlDialect.java
@@ -1,32 +1,41 @@
package com.exasol.adapter.dialects.impl;
-import com.exasol.adapter.capabilities.*;
-import com.exasol.adapter.dialects.*;
-import com.exasol.adapter.metadata.DataType;
-
import java.sql.SQLException;
+import com.exasol.adapter.capabilities.AggregateFunctionCapability;
+import com.exasol.adapter.capabilities.Capabilities;
+import com.exasol.adapter.capabilities.LiteralCapability;
+import com.exasol.adapter.capabilities.MainCapability;
+import com.exasol.adapter.capabilities.PredicateCapability;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
+import com.exasol.adapter.metadata.DataType;
+
/**
- * Dialect for Impala, using the Cloudera Impala JDBC Driver/Connector (developed by Simba).
- *
- * See http://www.cloudera.com/documentation/enterprise/latest/topics/impala_langref.html
+ * Dialect for Impala, using the Cloudera Impala JDBC Driver/Connector
+ * (developed by Simba).
+ *
+ * See
+ * http://www.cloudera.com/documentation/enterprise/latest/topics/impala_langref.html
*/
public class ImpalaSqlDialect extends AbstractSqlDialect {
-
- public ImpalaSqlDialect(SqlDialectContext context) {
+ public ImpalaSqlDialect(final SqlDialectContext context) {
super(context);
}
- public static final String NAME = "IMPALA";
+ private static final String NAME = "IMPALA";
- public String getPublicName() {
+ public static String getPublicName() {
return NAME;
}
@Override
public Capabilities getCapabilities() {
// Main capabilities
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
cap.supportMainCapability(MainCapability.FILTER_EXPRESSIONS);
@@ -80,7 +89,7 @@ public Capabilities getCapabilities() {
cap.supportAggregateFunction(AggregateFunctionCapability.SUM_DISTINCT);
// TODO Scalar Functions
-
+
return cap;
}
@@ -96,10 +105,10 @@ public SchemaOrCatalogSupport supportsJdbcSchemas() {
/**
* Note from Impala documentation: Impala identifiers are always
- * case-insensitive. That is, tables named t1 and T1 always refer to the
- * same table, regardless of quote characters. Internally, Impala always
- * folds all specified table and column names to lowercase. This is why the
- * column headers in query output are always displayed in lowercase.
+ * case-insensitive. That is, tables named t1 and T1 always refer to the same
+ * table, regardless of quote characters. Internally, Impala always folds all
+ * specified table and column names to lowercase. This is why the column headers
+ * in query output are always displayed in lowercase.
*/
@Override
public IdentifierCaseHandling getUnquotedIdentifierHandling() {
@@ -112,55 +121,64 @@ public IdentifierCaseHandling getQuotedIdentifierHandling() {
}
@Override
- public String applyQuote(String identifier) {
- // If identifier contains double quotation marks ", it needs to be espaced by another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
+ public String applyQuote(final String identifier) {
+ // Impala identifiers are quoted with backticks; escaping of backticks inside
+ // the identifier is not handled here.
return "`" + identifier + "`";
}
@Override
- public String applyQuoteIfNeeded(String identifier) {
- // We need to apply quotes only in case of reserved keywords. Since we don't know these (could look up in JDBC Metadata...) we always quote.
+ public String applyQuoteIfNeeded(final String identifier) {
+ // We need to apply quotes only in case of reserved keywords. Since we don't
+ // know these (could look up in JDBC Metadata...) we always quote.
return applyQuote(identifier);
}
@Override
- public boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
return false;
}
@Override
- public boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context) {
- // We need schema qualifiers a) if we are in IS_LOCAL mode, i.e. we run statements directly in a subselect without IMPORT FROM JDBC
- // and b) if we don't have the schema in the jdbc connection string (like "jdbc:exa:localhost:5555;schema=native")
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ // We need schema qualifiers a) if we are in IS_LOCAL mode, i.e. we run
+ // statements directly in a subselect without IMPORT FROM JDBC
+ // and b) if we don't have the schema in the jdbc connection string (like
+ // "jdbc:exa:localhost:5555;schema=native")
return true;
// return context.isLocal();
}
@Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new ImpalaSqlGenerationVisitor(this, context);
}
@Override
public NullSorting getDefaultNullSorting() {
- // In Impala 1.2.1 and higher, all NULL values come at the end of the result set for ORDER BY ... ASC queries,
+ // In Impala 1.2.1 and higher, all NULL values come at the end of the result set
+ // for ORDER BY ... ASC queries,
// and at the beginning of the result set for ORDER BY ... DESC queries.
- // In effect, NULL is considered greater than all other values for sorting purposes.
- // The original Impala behavior always put NULL values at the end, even for ORDER BY ... DESC queries.
- // The new behavior in Impala 1.2.1 makes Impala more compatible with other popular database systems.
- // In Impala 1.2.1 and higher, you can override or specify the sorting behavior for NULL by adding the clause
+ // In effect, NULL is considered greater than all other values for sorting
+ // purposes.
+ // The original Impala behavior always put NULL values at the end, even for
+ // ORDER BY ... DESC queries.
+ // The new behavior in Impala 1.2.1 makes Impala more compatible with other
+ // popular database systems.
+ // In Impala 1.2.1 and higher, you can override or specify the sorting behavior
+ // for NULL by adding the clause
// NULLS FIRST or NULLS LAST at the end of the ORDER BY clause.
return NullSorting.NULLS_SORTED_HIGH;
}
@Override
- public String getStringLiteral(String value) {
+ public String getStringLiteral(final String value) {
// Don't forget to escape single quote
return "'" + value.replace("'", "''") + "'";
}
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcType) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcType) throws SQLException {
return null;
}
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/MysqlSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/MysqlSqlDialect.java
index a3b38ac4c..e939963fd 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/MysqlSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/MysqlSqlDialect.java
@@ -1,5 +1,7 @@
package com.exasol.adapter.dialects.impl;
+import java.sql.SQLException;
+
import com.exasol.adapter.capabilities.Capabilities;
import com.exasol.adapter.dialects.AbstractSqlDialect;
import com.exasol.adapter.dialects.JdbcTypeDescription;
@@ -7,28 +9,25 @@
import com.exasol.adapter.dialects.SqlGenerationContext;
import com.exasol.adapter.metadata.DataType;
-import java.sql.SQLException;
-
/**
* Dialect for MySQL using the MySQL Connector jdbc driver.
*
* TODO Finish implementation of this dialect and add as a supported dialect
*/
public class MysqlSqlDialect extends AbstractSqlDialect {
-
- public MysqlSqlDialect(SqlDialectContext context) {
+ public MysqlSqlDialect(final SqlDialectContext context) {
super(context);
}
- public static final String NAME = "MYSQL";
+ private static final String NAME = "MYSQL";
- public String getPublicName() {
+ public static String getPublicName() {
return NAME;
}
@Override
public Capabilities getCapabilities() {
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
return cap;
}
@@ -53,42 +52,45 @@ public IdentifierCaseHandling getQuotedIdentifierHandling() {
}
@Override
- public String applyQuote(String identifier) {
- // TODO ANSI_QUOTES option. Must be obtained from JDBC DatabaseMetadata. http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_ansi_quotes
- CharSequence quoteChar = "`";
+ public String applyQuote(final String identifier) {
+ // TODO ANSI_QUOTES option. Must be obtained from JDBC DatabaseMetadata.
+ // http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_ansi_quotes
+ final CharSequence quoteChar = "`";
return quoteChar + identifier.replace(quoteChar, quoteChar + "" + quoteChar) + quoteChar;
}
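// Illustration: applyQuote("order") yields `order` and applyQuote("my`table") yields
// `my``table`, i.e. embedded backticks are doubled. Under the ANSI_QUOTES mode mentioned
// in the TODO above, double quotes would have to be used instead.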
@Override
- public String applyQuoteIfNeeded(String identifier) {
+ public String applyQuoteIfNeeded(final String identifier) {
return applyQuote(identifier);
}
@Override
- public boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
return true;
}
@Override
- public boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
return false;
}
@Override
public NullSorting getDefaultNullSorting() {
- // See http://stackoverflow.com/questions/2051602/mysql-orderby-a-number-nulls-last
- // and also http://stackoverflow.com/questions/9307613/mysql-order-by-null-first-and-desc-after
- assert(getContext().getSchemaAdapterNotes().isNullsAreSortedLow());
+ // See
+ // http://stackoverflow.com/questions/2051602/mysql-orderby-a-number-nulls-last
+ // and also
+ // http://stackoverflow.com/questions/9307613/mysql-order-by-null-first-and-desc-after
+ assert (getContext().getSchemaAdapterNotes().isNullsAreSortedLow());
return NullSorting.NULLS_SORTED_LOW;
}
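// Illustration: because MySQL sorts NULLs low (the flag asserted above), a plain
// ORDER BY c ASC returns NULL rows first and ORDER BY c DESC returns them last, which is
// exactly what NULLS_SORTED_LOW announces to the pushdown logic.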
@Override
- public String getStringLiteral(String value) {
+ public String getStringLiteral(final String value) {
return "'" + value.replace("'", "''") + "'";
}
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcType) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcType) throws SQLException {
return null;
}
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/OracleSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/OracleSqlDialect.java
index c891c1ba2..7f4bc71b8 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/OracleSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/OracleSqlDialect.java
@@ -1,48 +1,56 @@
package com.exasol.adapter.dialects.impl;
-import com.exasol.adapter.capabilities.*;
-import com.exasol.adapter.dialects.*;
-import com.exasol.adapter.metadata.DataType;
-import com.exasol.adapter.sql.AggregateFunction;
-import com.exasol.adapter.sql.ScalarFunction;
-
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import java.util.EnumMap;
import java.util.Map;
+import com.exasol.adapter.capabilities.AggregateFunctionCapability;
+import com.exasol.adapter.capabilities.Capabilities;
+import com.exasol.adapter.capabilities.LiteralCapability;
+import com.exasol.adapter.capabilities.MainCapability;
+import com.exasol.adapter.capabilities.PredicateCapability;
+import com.exasol.adapter.capabilities.ScalarFunctionCapability;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
+import com.exasol.adapter.metadata.DataType;
+import com.exasol.adapter.sql.AggregateFunction;
+import com.exasol.adapter.sql.ScalarFunction;
+
/**
* Work in Progress
*/
public class OracleSqlDialect extends AbstractSqlDialect {
+ private final boolean castAggFuncToFloat = true;
+ private final boolean castScalarFuncToFloat = true;
- private boolean castAggFuncToFloat = true;
- private boolean castScalarFuncToFloat = true;
-
- public OracleSqlDialect(SqlDialectContext context) {
+ public OracleSqlDialect(final SqlDialectContext context) {
super(context);
- omitParenthesesMap.add(ScalarFunction.SYSDATE);
- omitParenthesesMap.add(ScalarFunction.SYSTIMESTAMP);
+ this.omitParenthesesMap.add(ScalarFunction.SYSDATE);
+ this.omitParenthesesMap.add(ScalarFunction.SYSTIMESTAMP);
}
- public static final String NAME = "ORACLE";
+ private static final String NAME = "ORACLE";
- public String getPublicName() {
+ public static String getPublicName() {
return NAME;
}
public boolean getCastAggFuncToFloat() {
- return castAggFuncToFloat;
+ return this.castAggFuncToFloat;
}
public boolean getCastScalarFuncToFloat() {
- return castScalarFuncToFloat;
+ return this.castScalarFuncToFloat;
}
@Override
public Capabilities getCapabilities() {
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
// Capabilities
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
@@ -96,7 +104,7 @@ public Capabilities getCapabilities() {
// GEO_INTERSECTION_AGGREGATE is not supported
// GEO_UNION_AGGREGATE is not supported
// APPROXIMATE_COUNT_DISTINCT supported with version >= 12.1.0.2
- if (castAggFuncToFloat) {
+ if (this.castAggFuncToFloat) {
// Cast result to FLOAT because result set precision = 0, scale = 0
cap.supportAggregateFunction(AggregateFunctionCapability.SUM);
cap.supportAggregateFunction(AggregateFunctionCapability.SUM_DISTINCT);
@@ -125,10 +133,12 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.CEIL);
cap.supportScalarFunction(ScalarFunctionCapability.DIV);
cap.supportScalarFunction(ScalarFunctionCapability.FLOOR);
- // ROUND is not supported. DATETIME could be pushed down, NUMBER would have to be rounded.
+ // ROUND is not supported. DATETIME could be pushed down, NUMBER would have to
+ // be rounded.
cap.supportScalarFunction(ScalarFunctionCapability.SIGN);
- // TRUNC is not supported. DATETIME could be pushed down, NUMBER would have to be rounded.
- if (castScalarFuncToFloat) {
+ // TRUNC is not supported. DATETIME could be pushed down, NUMBER would have to
+ // be rounded.
+ if (this.castScalarFuncToFloat) {
// Cast result to FLOAT because result set precision = 0, scale = 0
cap.supportScalarFunction(ScalarFunctionCapability.ADD);
cap.supportScalarFunction(ScalarFunctionCapability.SUB);
@@ -165,7 +175,8 @@ public Capabilities getCapabilities() {
// COLOGNE_PHONETIC is not supported.
// CONCAT is not supported. Number of arguments can be different.
// DUMP is not supported. Output is different.
- // EDIT_DISTANCE is not supported. Output is different. UTL_MATCH.EDIT_DISTANCE returns -1 with NULL argument.
+ // EDIT_DISTANCE is not supported. Output is different. UTL_MATCH.EDIT_DISTANCE
+ // returns -1 with NULL argument.
// INSERT is not supported.
cap.supportScalarFunction(ScalarFunctionCapability.INSTR);
cap.supportScalarFunction(ScalarFunctionCapability.LENGTH);
@@ -180,7 +191,8 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.REPEAT);
cap.supportScalarFunction(ScalarFunctionCapability.REPLACE);
cap.supportScalarFunction(ScalarFunctionCapability.REVERSE);
- // RIGHT is not supported. Possible solution with SUBSTRING (must handle corner cases correctly).
+ // RIGHT is not supported. Possible solution with SUBSTRING (must handle corner
+ // cases correctly).
cap.supportScalarFunction(ScalarFunctionCapability.RPAD);
cap.supportScalarFunction(ScalarFunctionCapability.RTRIM);
cap.supportScalarFunction(ScalarFunctionCapability.SOUNDEX);
@@ -301,9 +313,10 @@ public Capabilities getCapabilities() {
@Override
public Map getAggregateFunctionAliases() {
- Map aggregationAliases = new EnumMap<>(AggregateFunction.class);
+ final Map aggregationAliases = new EnumMap<>(AggregateFunction.class);
// APPROXIMATE_COUNT_DISTINCT supported with version >= 12.1.0.2
- // aggregationAliases.put(AggregateFunction.APPROXIMATE_COUNT_DISTINCT, "APPROX_COUNT_DISTINCT");
+ // aggregationAliases.put(AggregateFunction.APPROXIMATE_COUNT_DISTINCT,
+ // "APPROX_COUNT_DISTINCT");
return aggregationAliases;
}
@@ -318,10 +331,13 @@ public SchemaOrCatalogSupport supportsJdbcSchemas() {
}
@Override
- public MappedTable mapTable(ResultSet tables) throws SQLException {
- String tableName = tables.getString("TABLE_NAME");
+ public MappedTable mapTable(final ResultSet tables) throws SQLException {
+ final String tableName = tables.getString("TABLE_NAME");
if (tableName.startsWith("BIN$")) {
- // In case of Oracle we may see deleted tables with strange names (BIN$OeQco6jg/drgUDAKzmRzgA==$0). Should be filtered out. Squirrel also doesn't see them for unknown reasons. See http://stackoverflow.com/questions/2446053/what-are-the-bin-tables-in-oracles-all-tab-columns-table
+ // In case of Oracle we may see deleted tables with strange names
+ // (BIN$OeQco6jg/drgUDAKzmRzgA==$0). Should be filtered out. Squirrel also
+ // doesn't see them for unknown reasons. See
+ // http://stackoverflow.com/questions/2446053/what-are-the-bin-tables-in-oracles-all-tab-columns-table
System.out.println("Skip table: " + tableName);
return MappedTable.createIgnoredTable();
} else {
@@ -330,52 +346,53 @@ public MappedTable mapTable(ResultSet tables) throws SQLException {
}
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType = null;
- int jdbcType = jdbcTypeDescription.getJdbcType();
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
switch (jdbcType) {
- case Types.DECIMAL:
- int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
- int decimalScale = jdbcTypeDescription.getDecimalScale();
- if (decimalScale == -127) {
- // Oracle JDBC driver returns scale -127 if NUMBER data type was specified without scale and precision. Convert to VARCHAR.
- // See http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i16209
- // and https://docs.oracle.com/cd/E19501-01/819-3659/gcmaz/
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- break;
- }
- if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
- colType = DataType.createDecimal(decimalPrec, decimalScale);
- } else {
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- }
- break;
- case Types.OTHER:
- // Oracle JDBC uses OTHER as CLOB
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- break;
- case -103:
- // INTERVAL YEAR TO MONTH
- case -104:
- // INTERVAL DAY TO SECOND
+ case Types.DECIMAL:
+ final int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
+ final int decimalScale = jdbcTypeDescription.getDecimalScale();
+ if (decimalScale == -127) {
+ // Oracle JDBC driver returns scale -127 if NUMBER data type was specified
+ // without scale and precision. Convert to VARCHAR.
+ // See http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i16209
+ // and https://docs.oracle.com/cd/E19501-01/819-3659/gcmaz/
colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
break;
- case -102:
- case -101:
- // -101 and -102 is TIMESTAMP WITH (LOCAL) TIMEZONE in Oracle.
+ }
+ if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
+ colType = DataType.createDecimal(decimalPrec, decimalScale);
+ } else {
colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- break;
- case 100:
- case 101:
- // 100 and 101 are BINARY_FLOAT and BINARY_DOUBLE in Oracle.
- colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
- break;
+ }
+ break;
+ case Types.OTHER:
+ // Oracle JDBC uses OTHER as CLOB
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+ case -103:
+ // INTERVAL YEAR TO MONTH
+ case -104:
+ // INTERVAL DAY TO SECOND
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+ case -102:
+ case -101:
+ // -101 and -102 are TIMESTAMP WITH (LOCAL) TIMEZONE in Oracle.
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+ case 100:
+ case 101:
+ // 100 and 101 are BINARY_FLOAT and BINARY_DOUBLE in Oracle.
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
}
return colType;
}
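// Illustrative examples of the mapping above (hypothetical columns):
// - a column reported with JDBC type DECIMAL, precision 10, scale 2     -> Exasol DECIMAL(10,2)
// - NUMBER declared without precision/scale (driver reports scale -127) -> Exasol VARCHAR, UTF-8
// - driver type codes -103/-104 (INTERVAL), -101/-102 (TIMESTAMP WITH (LOCAL) TIME ZONE)
//   and 100/101 (BINARY_FLOAT/BINARY_DOUBLE)                             -> Exasol VARCHAR, UTF-8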
@Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new OracleSqlGenerationVisitor(this, context);
}
@@ -390,14 +407,15 @@ public IdentifierCaseHandling getQuotedIdentifierHandling() {
}
@Override
- public String applyQuote(String identifier) {
- // If identifier contains double quotation marks ", it needs to be escaped by another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
+ public String applyQuote(final String identifier) {
+ // If identifier contains double quotation marks ", it needs to be escaped by
+ // another double quotation mark. E.g. "a""b" is the identifier a"b in the db.
return "\"" + identifier.replace("\"", "\"\"") + "\"";
}
@Override
- public String applyQuoteIfNeeded(String identifier) {
- boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ public String applyQuoteIfNeeded(final String identifier) {
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
if (isSimpleIdentifier) {
return identifier;
} else {
@@ -406,12 +424,12 @@ public String applyQuoteIfNeeded(String identifier) {
}
@Override
- public boolean requiresCatalogQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
return false;
}
@Override
- public boolean requiresSchemaQualifiedTableNames(SqlGenerationContext context) {
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
return true;
}
@@ -421,7 +439,7 @@ public NullSorting getDefaultNullSorting() {
}
@Override
- public String getStringLiteral(String value) {
+ public String getStringLiteral(final String value) {
return "'" + value.replace("'", "''") + "'";
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/PostgreSQLSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/PostgreSQLSqlDialect.java
index de877b5d1..fdacac552 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/PostgreSQLSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/PostgreSQLSqlDialect.java
@@ -1,6 +1,5 @@
package com.exasol.adapter.dialects.impl;
-import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import java.util.EnumMap;
@@ -12,31 +11,32 @@
import com.exasol.adapter.capabilities.MainCapability;
import com.exasol.adapter.capabilities.PredicateCapability;
import com.exasol.adapter.capabilities.ScalarFunctionCapability;
-import com.exasol.adapter.dialects.*;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
import com.exasol.adapter.metadata.DataType;
-import com.exasol.adapter.sql.AggregateFunction;
import com.exasol.adapter.sql.ScalarFunction;
-public class PostgreSQLSqlDialect extends AbstractSqlDialect{
+public class PostgreSQLSqlDialect extends AbstractSqlDialect {
+ public PostgreSQLSqlDialect(final SqlDialectContext context) {
+ super(context);
+ }
+
+ private static final String NAME = "POSTGRESQL";
+ public static int maxPostgresSQLVarcharSize = 2000000; // The PostgreSQL limit is actually 1 GB, so we use the
+ // EXASOL limit as the maximum
- public PostgreSQLSqlDialect(SqlDialectContext context) {
- super(context);
- }
+ public static String getPublicName() {
+ return NAME;
+ }
- public static final String NAME = "POSTGRESQL";
-
- public static int maxPostgresSQLVarcharSize = 2000000; // Postgres limit actually is 1 GB, so we use as max the EXASOL limit
-
- @Override
- public String getPublicName() {
- return NAME;
- }
+ @Override
+ public Capabilities getCapabilities() {
- @Override
- public Capabilities getCapabilities() {
-
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
@@ -50,7 +50,7 @@ public Capabilities getCapabilities() {
cap.supportMainCapability(MainCapability.ORDER_BY_EXPRESSION);
cap.supportMainCapability(MainCapability.LIMIT);
cap.supportMainCapability(MainCapability.LIMIT_WITH_OFFSET);
-
+
// Predicates
cap.supportPredicate(PredicateCapability.AND);
cap.supportPredicate(PredicateCapability.OR);
@@ -66,7 +66,7 @@ public Capabilities getCapabilities() {
cap.supportPredicate(PredicateCapability.IN_CONSTLIST);
cap.supportPredicate(PredicateCapability.IS_NULL);
cap.supportPredicate(PredicateCapability.IS_NOT_NULL);
-
+
// Literals
// BOOL is not supported
cap.supportLiteral(LiteralCapability.BOOL);
@@ -77,43 +77,41 @@ public Capabilities getCapabilities() {
cap.supportLiteral(LiteralCapability.DOUBLE);
cap.supportLiteral(LiteralCapability.EXACTNUMERIC);
cap.supportLiteral(LiteralCapability.STRING);
- //cap.supportLiteral(LiteralCapability.INTERVAL);
-
-
+ // cap.supportLiteral(LiteralCapability.INTERVAL);
+
// Aggregate functions
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT);
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_STAR);
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_DISTINCT);
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.SUM);
cap.supportAggregateFunction(AggregateFunctionCapability.SUM_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.MIN);
cap.supportAggregateFunction(AggregateFunctionCapability.MAX);
cap.supportAggregateFunction(AggregateFunctionCapability.AVG);
cap.supportAggregateFunction(AggregateFunctionCapability.AVG_DISTINCT);
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.MEDIAN);
cap.supportAggregateFunction(AggregateFunctionCapability.FIRST_VALUE);
cap.supportAggregateFunction(AggregateFunctionCapability.LAST_VALUE);
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_POP);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_POP_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_SAMP);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_SAMP_DISTINCT);
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE);
cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP);
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_SAMP);
- cap.supportAggregateFunction(AggregateFunctionCapability.VAR_SAMP_DISTINCT) ;
-
+ cap.supportAggregateFunction(AggregateFunctionCapability.VAR_SAMP_DISTINCT);
+
cap.supportAggregateFunction(AggregateFunctionCapability.GROUP_CONCAT); // translated to string_agg
-
- //math functions
+ // math functions
// Standard Arithmetic Operators
cap.supportScalarFunction(ScalarFunctionCapability.ADD);
cap.supportScalarFunction(ScalarFunctionCapability.SUB);
@@ -153,45 +151,44 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.TAN);
cap.supportScalarFunction(ScalarFunctionCapability.TANH);
cap.supportScalarFunction(ScalarFunctionCapability.TRUNC);
-
-
+
// String Functions
cap.supportScalarFunction(ScalarFunctionCapability.ASCII);
cap.supportScalarFunction(ScalarFunctionCapability.BIT_LENGTH);
cap.supportScalarFunction(ScalarFunctionCapability.CHR);
- //cap.supportScalarFunction(ScalarFunctionCapability.COLOGNE_PHONETIC);
+ // cap.supportScalarFunction(ScalarFunctionCapability.COLOGNE_PHONETIC);
cap.supportScalarFunction(ScalarFunctionCapability.CONCAT);
- //cap.supportScalarFunction(ScalarFunctionCapability.DUMP);
- //cap.supportScalarFunction(ScalarFunctionCapability.EDIT_DISTANCE);
- //cap.supportScalarFunction(ScalarFunctionCapability.INSERT);
+ // cap.supportScalarFunction(ScalarFunctionCapability.DUMP);
+ // cap.supportScalarFunction(ScalarFunctionCapability.EDIT_DISTANCE);
+ // cap.supportScalarFunction(ScalarFunctionCapability.INSERT);
cap.supportScalarFunction(ScalarFunctionCapability.INSTR);
cap.supportScalarFunction(ScalarFunctionCapability.LENGTH);
- //cap.supportScalarFunction(ScalarFunctionCapability.LOCATE);
+ // cap.supportScalarFunction(ScalarFunctionCapability.LOCATE);
cap.supportScalarFunction(ScalarFunctionCapability.LOWER);
cap.supportScalarFunction(ScalarFunctionCapability.LPAD);
cap.supportScalarFunction(ScalarFunctionCapability.LTRIM);
cap.supportScalarFunction(ScalarFunctionCapability.OCTET_LENGTH);
- //cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_INSTR);
+ // cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_INSTR);
cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_REPLACE);
- //cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_SUBSTR);
+ // cap.supportScalarFunction(ScalarFunctionCapability.REGEXP_SUBSTR);
cap.supportScalarFunction(ScalarFunctionCapability.REPEAT);
cap.supportScalarFunction(ScalarFunctionCapability.REPLACE);
cap.supportScalarFunction(ScalarFunctionCapability.REVERSE);
cap.supportScalarFunction(ScalarFunctionCapability.RIGHT);
cap.supportScalarFunction(ScalarFunctionCapability.RPAD);
cap.supportScalarFunction(ScalarFunctionCapability.RTRIM);
- //cap.supportScalarFunction(ScalarFunctionCapability.SOUNDEX);
- //cap.supportScalarFunction(ScalarFunctionCapability.SPACE);
+ // cap.supportScalarFunction(ScalarFunctionCapability.SOUNDEX);
+ // cap.supportScalarFunction(ScalarFunctionCapability.SPACE);
cap.supportScalarFunction(ScalarFunctionCapability.SUBSTR);
cap.supportScalarFunction(ScalarFunctionCapability.TRANSLATE);
cap.supportScalarFunction(ScalarFunctionCapability.TRIM);
cap.supportScalarFunction(ScalarFunctionCapability.UNICODE);
cap.supportScalarFunction(ScalarFunctionCapability.UNICODECHR);
cap.supportScalarFunction(ScalarFunctionCapability.UPPER);
-
+
// Date/Time Functions
-
- //The following functions will be rewrited to + operator in the Visitor
+
+ // The following functions will be rewritten to the + operator in the Visitor
cap.supportScalarFunction(ScalarFunctionCapability.ADD_DAYS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_HOURS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_MINUTES);
@@ -199,19 +196,19 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.ADD_SECONDS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_WEEKS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_YEARS);
-
- //cap.supportScalarFunction(ScalarFunctionCapability.CONVERT_TZ);
-
-
- //handled via Visitor and transformed to e.g. date_part('day',age('2012-03-05','2010-04-01' ))
+
+ // cap.supportScalarFunction(ScalarFunctionCapability.CONVERT_TZ);
+
+ // handled via Visitor and transformed to e.g.
+ // date_part('day',age('2012-03-05','2010-04-01' ))
cap.supportScalarFunction(ScalarFunctionCapability.SECONDS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.MINUTES_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.HOURS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.DAYS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.MONTHS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.YEARS_BETWEEN);
-
- //handled via Visitor and transformed to e.g. date_part
+
+ // handled via Visitor and transformed to e.g. date_part
cap.supportScalarFunction(ScalarFunctionCapability.MINUTE);
cap.supportScalarFunction(ScalarFunctionCapability.SECOND);
cap.supportScalarFunction(ScalarFunctionCapability.DAY);
@@ -219,20 +216,19 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.MONTH);
cap.supportScalarFunction(ScalarFunctionCapability.YEAR);
-
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_DATE);
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_TIMESTAMP);
cap.supportScalarFunction(ScalarFunctionCapability.DATE_TRUNC);
-
- //cap.supportScalarFunction(ScalarFunctionCapability.DBTIMEZONE);
+
+ // cap.supportScalarFunction(ScalarFunctionCapability.DBTIMEZONE);
cap.supportScalarFunction(ScalarFunctionCapability.EXTRACT);
cap.supportScalarFunction(ScalarFunctionCapability.LOCALTIMESTAMP);
- //cap.supportScalarFunction(ScalarFunctionCapability.NUMTODSINTERVAL);
- //cap.supportScalarFunction(ScalarFunctionCapability.NUMTOYMINTERVAL);
- cap.supportScalarFunction(ScalarFunctionCapability.POSIX_TIME); //converted to extract(epoche
- //cap.supportScalarFunction(ScalarFunctionCapability.SESSIONTIMEZONE);
- //cap.supportScalarFunction(ScalarFunctionCapability.SYSDATE);
- //cap.supportScalarFunction(ScalarFunctionCapability.SYSTIMESTAMP);
+ // cap.supportScalarFunction(ScalarFunctionCapability.NUMTODSINTERVAL);
+ // cap.supportScalarFunction(ScalarFunctionCapability.NUMTOYMINTERVAL);
+ cap.supportScalarFunction(ScalarFunctionCapability.POSIX_TIME); // converted to extract(epoch
+ // cap.supportScalarFunction(ScalarFunctionCapability.SESSIONTIMEZONE);
+ // cap.supportScalarFunction(ScalarFunctionCapability.SYSDATE);
+ // cap.supportScalarFunction(ScalarFunctionCapability.SYSTIMESTAMP);
// Conversion functions
// cap.supportScalarFunction(ScalarFunctionCapability.IS_NUMBER);
@@ -247,7 +243,7 @@ public Capabilities getCapabilities() {
// cap.supportScalarFunction(ScalarFunctionCapability.TO_YMINTERVAL);
// cap.supportScalarFunction(ScalarFunctionCapability.TO_NUMBER);
// cap.supportScalarFunction(ScalarFunctionCapability.TO_TIMESTAMP);
-
+
// Bitwise functions
// cap.supportScalarFunction(ScalarFunctionCapability.BIT_AND);
// cap.supportScalarFunction(ScalarFunctionCapability.BIT_CHECK);
@@ -256,9 +252,8 @@ public Capabilities getCapabilities() {
// cap.supportScalarFunction(ScalarFunctionCapability.BIT_SET);
// cap.supportScalarFunction(ScalarFunctionCapability.BIT_TO_NUM);
// cap.supportScalarFunction(ScalarFunctionCapability.BIT_XOR);
-
-
- // Other functions
+
+ // Other functions
cap.supportScalarFunction(ScalarFunctionCapability.CASE);
// cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_SCHEMA);
// cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_SESSION);
@@ -271,124 +266,120 @@ public Capabilities getCapabilities() {
// cap.supportScalarFunction(ScalarFunctionCapability.NULLIFZERO);
// cap.supportScalarFunction(ScalarFunctionCapability.SYS_GUID);
// cap.supportScalarFunction(ScalarFunctionCapability.ZEROIFNULL);
-
+
return cap;
- }
-
- @Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ }
+
+ @Override
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType = null;
- int jdbcType = jdbcTypeDescription.getJdbcType();
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
switch (jdbcType) {
- case Types.OTHER:
- String columnTypeName = jdbcTypeDescription.getTypeName();
-
- if(columnTypeName.equals("varbit")){
- int n = jdbcTypeDescription.getPrecisionOrSize();
- colType = DataType.createVarChar(n, DataType.ExaCharset.UTF8);
- }
- else
- colType = DataType.createVarChar(PostgreSQLSqlDialect.maxPostgresSQLVarcharSize, DataType.ExaCharset.UTF8);
- break;
- case Types.SQLXML:
- colType = DataType.createVarChar(PostgreSQLSqlDialect.maxPostgresSQLVarcharSize, DataType.ExaCharset.UTF8);
- break;
- case Types.DISTINCT:
- colType=DataType.createVarChar(PostgreSQLSqlDialect.maxPostgresSQLVarcharSize, DataType.ExaCharset.UTF8);
- break;
+ case Types.OTHER:
+ final String columnTypeName = jdbcTypeDescription.getTypeName();
+
+ if (columnTypeName.equals("varbit")) {
+ final int n = jdbcTypeDescription.getPrecisionOrSize();
+ colType = DataType.createVarChar(n, DataType.ExaCharset.UTF8);
+ } else {
+ colType = DataType.createVarChar(PostgreSQLSqlDialect.maxPostgresSQLVarcharSize,
+ DataType.ExaCharset.UTF8);
+ }
+ break;
+ case Types.SQLXML:
+ colType = DataType.createVarChar(PostgreSQLSqlDialect.maxPostgresSQLVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+ case Types.DISTINCT:
+ colType = DataType.createVarChar(PostgreSQLSqlDialect.maxPostgresSQLVarcharSize, DataType.ExaCharset.UTF8);
+ break;
}
-
+
return colType;
}
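// Illustrative examples of the mapping above (hypothetical columns):
// - a varbit(n) column (JDBC type OTHER, type name "varbit")  -> VARCHAR(n), UTF-8
// - any other OTHER, SQLXML or DISTINCT column                -> VARCHAR(2000000), UTF-8
//   (the EXASOL maximum held in maxPostgresSQLVarcharSize above)
// All remaining JDBC types leave colType == null, so they fall back to the non-dialect-specific mapping.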
-
- @Override
+
+ @Override
public Map getScalarFunctionAliases() {
-
- Map scalarAliases = new EnumMap<>(ScalarFunction.class);
-
- scalarAliases.put(ScalarFunction.SUBSTR,"SUBSTRING");
- scalarAliases.put(ScalarFunction.HASH_MD5, "MD5");
-
- return scalarAliases;
-
- }
-
-
- @Override
- public SchemaOrCatalogSupport supportsJdbcCatalogs() {
+
+ final Map scalarAliases = new EnumMap<>(ScalarFunction.class);
+
+ scalarAliases.put(ScalarFunction.SUBSTR, "SUBSTRING");
+ scalarAliases.put(ScalarFunction.HASH_MD5, "MD5");
+
+ return scalarAliases;
+
+ }
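// Illustration: with these aliases a pushed-down SUBSTR(s, 1, 3) would be rendered as
// SUBSTRING(s, 1, 3) and HASH_MD5(x) as MD5(x) in the SQL sent to PostgreSQL; functions
// without an alias keep their Exasol name.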
+
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcCatalogs() {
return SchemaOrCatalogSupport.SUPPORTED;
- }
+ }
- @Override
- public SchemaOrCatalogSupport supportsJdbcSchemas() {
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcSchemas() {
return SchemaOrCatalogSupport.SUPPORTED;
- }
-
- @Override
- public String changeIdentifierCaseIfNeeded(String identifier) {
-
- boolean isSimplePostgresIdentifier = identifier.matches("^[a-z][0-9a-z_]*");
-
- if(isSimplePostgresIdentifier)
- return identifier.toUpperCase();
- else
- return identifier;
-
}
-
- @Override
- public IdentifierCaseHandling getUnquotedIdentifierHandling() {
- return IdentifierCaseHandling.INTERPRET_AS_LOWER;
- }
-
- @Override
- public IdentifierCaseHandling getQuotedIdentifierHandling() {
+
+ @Override
+ public String changeIdentifierCaseIfNeeded(final String identifier) {
+
+ final boolean isSimplePostgresIdentifier = identifier.matches("^[a-z][0-9a-z_]*");
+
+ if (isSimplePostgresIdentifier) {
+ return identifier.toUpperCase();
+ } else {
+ return identifier;
+ }
+
+ }
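// Illustration: only simple lower-case PostgreSQL identifiers are folded, e.g.
// changeIdentifierCaseIfNeeded("my_table") -> "MY_TABLE" (it matches ^[a-z][0-9a-z_]*),
// while a mixed-case name such as "MyTable" is returned unchanged.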
+
+ @Override
+ public IdentifierCaseHandling getUnquotedIdentifierHandling() {
+ return IdentifierCaseHandling.INTERPRET_AS_LOWER;
+ }
+
+ @Override
+ public IdentifierCaseHandling getQuotedIdentifierHandling() {
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
- }
-
-
- @Override
- public String applyQuote(String identifier) {
- return "\"" + identifier.replace("\"", "\"\"") + "\"";
- }
-
- @Override
- public String applyQuoteIfNeeded(String identifier) {
- boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
- if (isSimpleIdentifier) {
- return identifier;
- } else {
- return applyQuote(identifier);
- }
- }
-
- @Override
- public boolean requiresCatalogQualifiedTableNames(
- SqlGenerationContext context) {
- return false;
- }
-
-
-
- @Override
- public boolean requiresSchemaQualifiedTableNames(
- SqlGenerationContext context) {
- return true;
- }
-
- @Override
- public NullSorting getDefaultNullSorting() {
- return NullSorting.NULLS_SORTED_AT_END;
- }
-
- @Override
- public String getStringLiteral(String value) {
- return "'" + value.replace("'", "''") + "'";
- }
-
- @Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ }
+
+ @Override
+ public String applyQuote(final String identifier) {
+ return "\"" + identifier.replace("\"", "\"\"") + "\"";
+ }
+
+ @Override
+ public String applyQuoteIfNeeded(final String identifier) {
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ if (isSimpleIdentifier) {
+ return identifier;
+ } else {
+ return applyQuote(identifier);
+ }
+ }
+
+ @Override
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
+ return false;
+ }
+
+ @Override
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ return true;
+ }
+
+ @Override
+ public NullSorting getDefaultNullSorting() {
+ return NullSorting.NULLS_SORTED_AT_END;
+ }
+
+ @Override
+ public String getStringLiteral(final String value) {
+ return "'" + value.replace("'", "''") + "'";
+ }
+
+ @Override
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new PostgresSQLSqlGenerationVisitor(this, context);
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/RedshiftSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/RedshiftSqlDialect.java
index 6d531997a..21e176ae6 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/RedshiftSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/RedshiftSqlDialect.java
@@ -1,6 +1,5 @@
package com.exasol.adapter.dialects.impl;
-import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import java.util.EnumMap;
@@ -12,30 +11,31 @@
import com.exasol.adapter.capabilities.MainCapability;
import com.exasol.adapter.capabilities.PredicateCapability;
import com.exasol.adapter.capabilities.ScalarFunctionCapability;
-import com.exasol.adapter.dialects.*;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
import com.exasol.adapter.metadata.DataType;
import com.exasol.adapter.sql.AggregateFunction;
import com.exasol.adapter.sql.ScalarFunction;
+public class RedshiftSqlDialect extends AbstractSqlDialect {
-public class RedshiftSqlDialect extends AbstractSqlDialect{
+ public RedshiftSqlDialect(final SqlDialectContext context) {
+ super(context);
+ }
+ private static final String NAME = "REDSHIFT";
- public RedshiftSqlDialect(SqlDialectContext context) {
- super(context);
- }
+ public static String getPublicName() {
+ return NAME;
+ }
- public static final String NAME = "REDSHIFT";
-
- @Override
- public String getPublicName() {
- return NAME;
- }
+ @Override
+ public Capabilities getCapabilities() {
- @Override
- public Capabilities getCapabilities() {
-
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
@@ -49,7 +49,7 @@ public Capabilities getCapabilities() {
cap.supportMainCapability(MainCapability.ORDER_BY_EXPRESSION);
cap.supportMainCapability(MainCapability.LIMIT);
cap.supportMainCapability(MainCapability.LIMIT_WITH_OFFSET);
-
+
// Predicates
cap.supportPredicate(PredicateCapability.AND);
cap.supportPredicate(PredicateCapability.OR);
@@ -65,7 +65,7 @@ public Capabilities getCapabilities() {
cap.supportPredicate(PredicateCapability.IN_CONSTLIST);
cap.supportPredicate(PredicateCapability.IS_NULL);
cap.supportPredicate(PredicateCapability.IS_NOT_NULL);
-
+
// Literals
// BOOL is not supported
cap.supportLiteral(LiteralCapability.BOOL);
@@ -77,14 +77,13 @@ public Capabilities getCapabilities() {
cap.supportLiteral(LiteralCapability.EXACTNUMERIC);
cap.supportLiteral(LiteralCapability.STRING);
cap.supportLiteral(LiteralCapability.INTERVAL);
-
-
+
// Aggregate functions
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT);
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_STAR);
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.GROUP_CONCAT);
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.SUM);
cap.supportAggregateFunction(AggregateFunctionCapability.SUM_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.MIN);
@@ -105,10 +104,9 @@ public Capabilities getCapabilities() {
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP);
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_SAMP);
- cap.supportAggregateFunction(AggregateFunctionCapability.VAR_SAMP_DISTINCT) ;
-
-
- //math functions
+ cap.supportAggregateFunction(AggregateFunctionCapability.VAR_SAMP_DISTINCT);
+
+ // math functions
cap.supportScalarFunction(ScalarFunctionCapability.CEIL);
cap.supportScalarFunction(ScalarFunctionCapability.DIV);
cap.supportScalarFunction(ScalarFunctionCapability.FLOOR);
@@ -141,13 +139,12 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.SQRT);
cap.supportScalarFunction(ScalarFunctionCapability.TAN);
cap.supportScalarFunction(ScalarFunctionCapability.TANH);
- cap.supportScalarFunction(ScalarFunctionCapability.ASCII);
+ cap.supportScalarFunction(ScalarFunctionCapability.ASCII);
cap.supportScalarFunction(ScalarFunctionCapability.CHR);
cap.supportScalarFunction(ScalarFunctionCapability.INSTR);
cap.supportScalarFunction(ScalarFunctionCapability.LENGTH);
cap.supportScalarFunction(ScalarFunctionCapability.SIGN);
-
-
+
cap.supportScalarFunction(ScalarFunctionCapability.CONCAT);
cap.supportScalarFunction(ScalarFunctionCapability.LOCATE);
cap.supportScalarFunction(ScalarFunctionCapability.LOWER);
@@ -167,148 +164,138 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.TRIM);
cap.supportScalarFunction(ScalarFunctionCapability.UPPER);
-
- //Bit functions
+ // Bit functions
cap.supportScalarFunction(ScalarFunctionCapability.BIT_AND);
cap.supportScalarFunction(ScalarFunctionCapability.BIT_OR);
- //Date and Time Functions
+ // Date and Time Functions
cap.supportScalarFunction(ScalarFunctionCapability.ADD_MONTHS);
cap.supportScalarFunction(ScalarFunctionCapability.MONTHS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_DATE);
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_TIMESTAMP);
cap.supportScalarFunction(ScalarFunctionCapability.CONVERT_TZ);
cap.supportScalarFunction(ScalarFunctionCapability.SYSDATE);
-
- cap.supportScalarFunction(ScalarFunctionCapability.YEAR);
+
+ cap.supportScalarFunction(ScalarFunctionCapability.YEAR);
cap.supportScalarFunction(ScalarFunctionCapability.EXTRACT);
-
-
- //Convertion functions
+
+ // Conversion functions
cap.supportScalarFunction(ScalarFunctionCapability.CAST);
cap.supportScalarFunction(ScalarFunctionCapability.TO_NUMBER);
cap.supportScalarFunction(ScalarFunctionCapability.TO_TIMESTAMP);
cap.supportScalarFunction(ScalarFunctionCapability.TO_DATE);
-
-
- //hash functions
+
+ // hash functions
cap.supportScalarFunction(ScalarFunctionCapability.HASH_MD5);
cap.supportScalarFunction(ScalarFunctionCapability.HASH_SHA1);
-
-
- //system information functions
+
+ // system information functions
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_SCHEMA);
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_USER);
-
+
return cap;
- }
-
- @Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ }
+
+ @Override
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType = null;
- int jdbcType = jdbcTypeDescription.getJdbcType();
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
switch (jdbcType) {
- case Types.NUMERIC:
- int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
- int decimalScale = jdbcTypeDescription.getDecimalScale();
-
- if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
- colType = DataType.createDecimal(decimalPrec, decimalScale);
- } else {
- colType = DataType.createDouble();
- }
- break;
-
+ case Types.NUMERIC:
+ final int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
+ final int decimalScale = jdbcTypeDescription.getDecimalScale();
+
+ if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
+ colType = DataType.createDecimal(decimalPrec, decimalScale);
+ } else {
+ colType = DataType.createDouble();
+ }
+ break;
+
}
return colType;
}
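// Illustrative examples of the mapping above (hypothetical columns):
// - NUMERIC(18,4) fits the Exasol DECIMAL limit  -> DECIMAL(18,4)
// - NUMERIC(38,4) exceeds it                     -> DOUBLE
// Other JDBC types leave colType == null and fall back to the default mapping.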
-
- @Override
+
+ @Override
public Map getScalarFunctionAliases() {
-
- Map scalarAliases = new EnumMap<>(ScalarFunction.class);
-
- scalarAliases.put(ScalarFunction.YEAR, "DATE_PART_YEAR");
- scalarAliases.put(ScalarFunction.CONVERT_TZ, "CONVERT_TIMEZONE");
- scalarAliases.put(ScalarFunction.HASH_MD5, "MD5");
- scalarAliases.put(ScalarFunction.HASH_SHA1, "FUNC_SHA1");
-
- scalarAliases.put(ScalarFunction.SUBSTR,"SUBSTRING");
-
- return scalarAliases;
-
- }
-
-
- @Override
+
+ final Map scalarAliases = new EnumMap<>(ScalarFunction.class);
+
+ scalarAliases.put(ScalarFunction.YEAR, "DATE_PART_YEAR");
+ scalarAliases.put(ScalarFunction.CONVERT_TZ, "CONVERT_TIMEZONE");
+ scalarAliases.put(ScalarFunction.HASH_MD5, "MD5");
+ scalarAliases.put(ScalarFunction.HASH_SHA1, "FUNC_SHA1");
+
+ scalarAliases.put(ScalarFunction.SUBSTR, "SUBSTRING");
+
+ return scalarAliases;
+
+ }
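// Illustration: with these aliases the pushdown SQL uses the Redshift names, e.g.
// YEAR(d) -> DATE_PART_YEAR(d), CONVERT_TZ(...) -> CONVERT_TIMEZONE(...),
// HASH_MD5(x) -> MD5(x), HASH_SHA1(x) -> FUNC_SHA1(x), SUBSTR(s, 1, 3) -> SUBSTRING(s, 1, 3).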
+
+ @Override
public Map getAggregateFunctionAliases() {
- Map aggregationAliases = new EnumMap<>(AggregateFunction.class);
-
+ final Map aggregationAliases = new EnumMap<>(AggregateFunction.class);
+
return aggregationAliases;
}
-
- @Override
- public SchemaOrCatalogSupport supportsJdbcCatalogs() {
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcCatalogs() {
return SchemaOrCatalogSupport.SUPPORTED;
- }
+ }
- @Override
- public SchemaOrCatalogSupport supportsJdbcSchemas() {
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcSchemas() {
return SchemaOrCatalogSupport.SUPPORTED;
- }
+ }
- @Override
- public IdentifierCaseHandling getUnquotedIdentifierHandling() {
- return IdentifierCaseHandling.INTERPRET_AS_UPPER;
- }
+ @Override
+ public IdentifierCaseHandling getUnquotedIdentifierHandling() {
+ return IdentifierCaseHandling.INTERPRET_AS_UPPER;
+ }
- @Override
- public IdentifierCaseHandling getQuotedIdentifierHandling() {
+ @Override
+ public IdentifierCaseHandling getQuotedIdentifierHandling() {
return IdentifierCaseHandling.INTERPRET_AS_UPPER;
- }
-
- @Override
- public String applyQuote(String identifier) {
- return "\"" + identifier.replace("\"", "\"\"") + "\"";
- }
-
- @Override
- public String applyQuoteIfNeeded(String identifier) {
- boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
- if (isSimpleIdentifier) {
- return identifier;
- } else {
- return applyQuote(identifier);
- }
- }
-
- @Override
- public boolean requiresCatalogQualifiedTableNames(
- SqlGenerationContext context) {
- return false;
- }
-
-
-
- @Override
- public boolean requiresSchemaQualifiedTableNames(
- SqlGenerationContext context) {
- return true;
- }
-
- @Override
- public NullSorting getDefaultNullSorting() {
- return NullSorting.NULLS_SORTED_AT_END;
- }
-
- @Override
- public String getStringLiteral(String value) {
- return "'" + value.replace("'", "''") + "'";
- }
-
- @Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ }
+
+ @Override
+ public String applyQuote(final String identifier) {
+ return "\"" + identifier.replace("\"", "\"\"") + "\"";
+ }
+
+ @Override
+ public String applyQuoteIfNeeded(final String identifier) {
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ if (isSimpleIdentifier) {
+ return identifier;
+ } else {
+ return applyQuote(identifier);
+ }
+ }
+
+ @Override
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
+ return false;
+ }
+
+ @Override
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ return true;
+ }
+
+ @Override
+ public NullSorting getDefaultNullSorting() {
+ return NullSorting.NULLS_SORTED_AT_END;
+ }
+
+ @Override
+ public String getStringLiteral(final String value) {
+ return "'" + value.replace("'", "''") + "'";
+ }
+
+ @Override
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new RedshiftSqlGenerationVisitor(this, context);
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SqlServerSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SqlServerSqlDialect.java
index 76651cf37..bade1267e 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SqlServerSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SqlServerSqlDialect.java
@@ -1,6 +1,5 @@
package com.exasol.adapter.dialects.impl;
-import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import java.util.EnumMap;
@@ -12,38 +11,34 @@
import com.exasol.adapter.capabilities.MainCapability;
import com.exasol.adapter.capabilities.PredicateCapability;
import com.exasol.adapter.capabilities.ScalarFunctionCapability;
-import com.exasol.adapter.dialects.*;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
import com.exasol.adapter.metadata.DataType;
import com.exasol.adapter.sql.AggregateFunction;
import com.exasol.adapter.sql.ScalarFunction;
+public class SqlServerSqlDialect extends AbstractSqlDialect {
+ // Tested SQL Server versions: SQL Server 2014
+ // Tested JDBC drivers: jtds-1.3.1 (https://sourceforge.net/projects/jtds/)
+ public final static int maxSqlServerVarcharSize = 8000;
+ public final static int maxSqlServerNVarcharSize = 4000;
+ private static final String NAME = "SQLSERVER";
-public class SqlServerSqlDialect extends AbstractSqlDialect{
-
-
- // Tested SQL Server versions: SQL Server 2014
- // Tested JDBC drivers: jtds-1.3.1 (https://sourceforge.net/projects/jtds/)
-
- public final static int maxSqlServerVarcharSize = 8000;
-
- public final static int maxSqlServerNVarcharSize = 4000;
+ public SqlServerSqlDialect(final SqlDialectContext context) {
+ super(context);
+ }
-
- public SqlServerSqlDialect(SqlDialectContext context) {
- super(context);
- }
+ public static String getPublicName() {
+ return NAME;
+ }
- public static final String NAME = "SQLSERVER";
-
- @Override
- public String getPublicName() {
- return NAME;
- }
+ @Override
+ public Capabilities getCapabilities() {
- @Override
- public Capabilities getCapabilities() {
-
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
@@ -55,8 +50,8 @@ public Capabilities getCapabilities() {
cap.supportMainCapability(MainCapability.AGGREGATE_HAVING);
cap.supportMainCapability(MainCapability.ORDER_BY_COLUMN);
cap.supportMainCapability(MainCapability.ORDER_BY_EXPRESSION);
- cap.supportMainCapability(MainCapability.LIMIT); // LIMIT will be translated to TOP in SqlServerSqlGenerationVisitor.java
-
+ cap.supportMainCapability(MainCapability.LIMIT); // LIMIT will be translated to TOP in
+ // SqlServerSqlGenerationVisitor.java
// Predicates
cap.supportPredicate(PredicateCapability.AND);
@@ -73,7 +68,7 @@ public Capabilities getCapabilities() {
cap.supportPredicate(PredicateCapability.IN_CONSTLIST);
cap.supportPredicate(PredicateCapability.IS_NULL);
cap.supportPredicate(PredicateCapability.IS_NOT_NULL);
-
+
// Literals
cap.supportLiteral(LiteralCapability.BOOL);
cap.supportLiteral(LiteralCapability.NULL);
@@ -84,7 +79,7 @@ public Capabilities getCapabilities() {
cap.supportLiteral(LiteralCapability.EXACTNUMERIC);
cap.supportLiteral(LiteralCapability.STRING);
cap.supportLiteral(LiteralCapability.INTERVAL);
-
+
// Aggregate functions
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT);
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_STAR);
@@ -99,7 +94,7 @@ public Capabilities getCapabilities() {
cap.supportAggregateFunction(AggregateFunctionCapability.MEDIAN);
cap.supportAggregateFunction(AggregateFunctionCapability.FIRST_VALUE);
cap.supportAggregateFunction(AggregateFunctionCapability.LAST_VALUE);
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_POP);
@@ -107,25 +102,22 @@ public Capabilities getCapabilities() {
// STDDEV_SAMP
// STDDEV_SAMP_DISTINCT
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE);
cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE_DISTINCT);
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP);
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP_DISTINCT);
-
- // GROUP_CONCAT,
- // GROUP_CONCAT_DISTINCT (AggregateFunction.GROUP_CONCAT),
- // GROUP_CONCAT_SEPARATOR (AggregateFunction.GROUP_CONCAT),
- // GROUP_CONCAT_ORDER_BY (AggregateFunction.GROUP_CONCAT),
- //
- // GEO_INTERSECTION_AGGREGATE,
- // GEO_UNION_AGGREGATE,
- //
- // APPROXIMATE_COUNT_DISTINCT;
-
-
+ // GROUP_CONCAT,
+ // GROUP_CONCAT_DISTINCT (AggregateFunction.GROUP_CONCAT),
+ // GROUP_CONCAT_SEPARATOR (AggregateFunction.GROUP_CONCAT),
+ // GROUP_CONCAT_ORDER_BY (AggregateFunction.GROUP_CONCAT),
+ //
+ // GEO_INTERSECTION_AGGREGATE,
+ // GEO_UNION_AGGREGATE,
+ //
+ // APPROXIMATE_COUNT_DISTINCT;
// Standard Arithmetic Operators
cap.supportScalarFunction(ScalarFunctionCapability.ADD);
@@ -136,23 +128,24 @@ public Capabilities getCapabilities() {
// Unary prefix operators
cap.supportScalarFunction(ScalarFunctionCapability.NEG);
- // Numeric functions https://msdn.microsoft.com/en-us/library/ms177516(v=sql.110).aspx
+ // Numeric functions
+ // https://msdn.microsoft.com/en-us/library/ms177516(v=sql.110).aspx
cap.supportScalarFunction(ScalarFunctionCapability.ABS);
cap.supportScalarFunction(ScalarFunctionCapability.ACOS);
cap.supportScalarFunction(ScalarFunctionCapability.ASIN);
cap.supportScalarFunction(ScalarFunctionCapability.ATAN);
cap.supportScalarFunction(ScalarFunctionCapability.ATAN2); // added alias ATN2
- cap.supportScalarFunction(ScalarFunctionCapability.CEIL); //alias CEILING
+ cap.supportScalarFunction(ScalarFunctionCapability.CEIL); // alias CEILING
cap.supportScalarFunction(ScalarFunctionCapability.COS);
- //COSH
+ // COSH
cap.supportScalarFunction(ScalarFunctionCapability.COT);
cap.supportScalarFunction(ScalarFunctionCapability.DEGREES);
- //DIV,
+ // DIV,
cap.supportScalarFunction(ScalarFunctionCapability.EXP);
cap.supportScalarFunction(ScalarFunctionCapability.FLOOR);
- //GREATEST,
- //LEAST,
- //LN,
+ // GREATEST,
+ // LEAST,
+ // LN,
cap.supportScalarFunction(ScalarFunctionCapability.LOG);
cap.supportScalarFunction(ScalarFunctionCapability.MOD);
cap.supportScalarFunction(ScalarFunctionCapability.POWER);
@@ -161,76 +154,75 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.ROUND);
cap.supportScalarFunction(ScalarFunctionCapability.SIGN);
cap.supportScalarFunction(ScalarFunctionCapability.SIN);
- //SINH,
+ // SINH,
cap.supportScalarFunction(ScalarFunctionCapability.SQRT);
cap.supportScalarFunction(ScalarFunctionCapability.TAN);
- //TANH,
+ // TANH,
cap.supportScalarFunction(ScalarFunctionCapability.TRUNC);
-
- // String Functions
+
+ // String Functions
cap.supportScalarFunction(ScalarFunctionCapability.ASCII);
- //BIT_LENGTH,
- cap.supportScalarFunction(ScalarFunctionCapability.CHR); //CHAR
- //COLOGNE_PHONETIC,
+ // BIT_LENGTH,
+ cap.supportScalarFunction(ScalarFunctionCapability.CHR); // CHAR
+ // COLOGNE_PHONETIC,
cap.supportScalarFunction(ScalarFunctionCapability.CONCAT);
- //DUMP,
- //EDIT_DISTANCE,
- //INSERT,
- cap.supportScalarFunction(ScalarFunctionCapability.INSTR); // translated to CHARINDEX in Visitor with Argument switch
- cap.supportScalarFunction(ScalarFunctionCapability.LENGTH); //alias LEN
- cap.supportScalarFunction(ScalarFunctionCapability.LOCATE); // CHARINDEX alias
+ // DUMP,
+ // EDIT_DISTANCE,
+ // INSERT,
+ cap.supportScalarFunction(ScalarFunctionCapability.INSTR); // translated to CHARINDEX in Visitor with Argument
+ // switch
+ cap.supportScalarFunction(ScalarFunctionCapability.LENGTH); // alias LEN
+ cap.supportScalarFunction(ScalarFunctionCapability.LOCATE); // CHARINDEX alias
cap.supportScalarFunction(ScalarFunctionCapability.LOWER);
- cap.supportScalarFunction(ScalarFunctionCapability.LPAD); //transformed in Visitor
+ cap.supportScalarFunction(ScalarFunctionCapability.LPAD); // transformed in Visitor
cap.supportScalarFunction(ScalarFunctionCapability.LTRIM);
- //OCTET_LENGTH,
- //REGEXP_INSTR,
- //REGEXP_REPLACE,
- //REGEXP_SUBSTR,
- cap.supportScalarFunction(ScalarFunctionCapability.REPEAT); //REPLICATE
+ // OCTET_LENGTH,
+ // REGEXP_INSTR,
+ // REGEXP_REPLACE,
+ // REGEXP_SUBSTR,
+ cap.supportScalarFunction(ScalarFunctionCapability.REPEAT); // REPLICATE
cap.supportScalarFunction(ScalarFunctionCapability.REPLACE);
cap.supportScalarFunction(ScalarFunctionCapability.REVERSE);
cap.supportScalarFunction(ScalarFunctionCapability.RIGHT);
- cap.supportScalarFunction(ScalarFunctionCapability.RPAD);
+ cap.supportScalarFunction(ScalarFunctionCapability.RPAD);
cap.supportScalarFunction(ScalarFunctionCapability.RTRIM);
- cap.supportScalarFunction(ScalarFunctionCapability.SOUNDEX);
+ cap.supportScalarFunction(ScalarFunctionCapability.SOUNDEX);
cap.supportScalarFunction(ScalarFunctionCapability.SPACE);
- cap.supportScalarFunction(ScalarFunctionCapability.SUBSTR); //SUBSTRING
- //TRANSLATE,
+ cap.supportScalarFunction(ScalarFunctionCapability.SUBSTR); // SUBSTRING
+ // TRANSLATE,
cap.supportScalarFunction(ScalarFunctionCapability.TRIM);
cap.supportScalarFunction(ScalarFunctionCapability.UNICODE);
- //UNICODECHR,
+ // UNICODECHR,
cap.supportScalarFunction(ScalarFunctionCapability.UPPER);
-
-
+
// Date/Time Functions
-
-
- // the following functions are translated to DATEADD(datepart,number,date) in Visitor
- cap.supportScalarFunction(ScalarFunctionCapability.ADD_DAYS);
- cap.supportScalarFunction(ScalarFunctionCapability.ADD_HOURS);
+
+ // the following functions are translated to DATEADD(datepart,number,date) in
+ // Visitor
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_DAYS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_HOURS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_MINUTES);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_MONTHS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_SECONDS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_WEEKS);
- cap.supportScalarFunction(ScalarFunctionCapability.ADD_YEARS);
-
- //CONVERT_TZ,
-
- cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_DATE);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_YEARS);
+
+ // CONVERT_TZ,
+
+ cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_DATE);
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_TIMESTAMP);
-
- //DATE_TRUNC,
+
+ // DATE_TRUNC,
cap.supportScalarFunction(ScalarFunctionCapability.DAY);
-
- //the following functions are translated to DATEDIFF in Visitor
+
+ // the following functions are translated to DATEDIFF in Visitor
cap.supportScalarFunction(ScalarFunctionCapability.SECONDS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.MINUTES_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.HOURS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.DAYS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.MONTHS_BETWEEN);
cap.supportScalarFunction(ScalarFunctionCapability.YEARS_BETWEEN);
-
-
+
// DBTIMEZONE,
// EXTRACT,
// LOCALTIMESTAMP,
@@ -246,11 +238,10 @@ public Capabilities getCapabilities() {
// SESSIONTIMEZONE,
cap.supportScalarFunction(ScalarFunctionCapability.SYSDATE);
cap.supportScalarFunction(ScalarFunctionCapability.SYSTIMESTAMP);
-
+
// WEEK,
-
+
cap.supportScalarFunction(ScalarFunctionCapability.YEAR);
-
// Geospatial
// - Point Functions
@@ -285,20 +276,20 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.ST_DISTANCE);
cap.supportScalarFunction(ScalarFunctionCapability.ST_ENVELOPE);
cap.supportScalarFunction(ScalarFunctionCapability.ST_EQUALS);
- //cap.supportScalarFunction(ScalarFunctionCapability.ST_FORCE2D);
+ // cap.supportScalarFunction(ScalarFunctionCapability.ST_FORCE2D);
cap.supportScalarFunction(ScalarFunctionCapability.ST_GEOMETRYTYPE);
cap.supportScalarFunction(ScalarFunctionCapability.ST_INTERSECTION);
cap.supportScalarFunction(ScalarFunctionCapability.ST_INTERSECTS);
cap.supportScalarFunction(ScalarFunctionCapability.ST_ISEMPTY);
cap.supportScalarFunction(ScalarFunctionCapability.ST_ISSIMPLE);
cap.supportScalarFunction(ScalarFunctionCapability.ST_OVERLAPS);
- //cap.supportScalarFunction(ScalarFunctionCapability.ST_SETSRID);
+ // cap.supportScalarFunction(ScalarFunctionCapability.ST_SETSRID);
cap.supportScalarFunction(ScalarFunctionCapability.ST_SYMDIFFERENCE);
cap.supportScalarFunction(ScalarFunctionCapability.ST_TOUCHES);
- //cap.supportScalarFunction(ScalarFunctionCapability.ST_TRANSFORM);
+ // cap.supportScalarFunction(ScalarFunctionCapability.ST_TRANSFORM);
cap.supportScalarFunction(ScalarFunctionCapability.ST_UNION);
cap.supportScalarFunction(ScalarFunctionCapability.ST_WITHIN);
-
+
// Conversion functions
// CAST, // Has alias CONVERT
// IS_NUMBER
@@ -313,7 +304,7 @@ public Capabilities getCapabilities() {
// TO_YMINTERVAL,
// TO_NUMBER,
// TO_TIMESTAMP,
-
+
// Bitwise functions
cap.supportScalarFunction(ScalarFunctionCapability.BIT_AND);
// BIT_CHECK,
@@ -329,186 +320,179 @@ public Capabilities getCapabilities() {
// CURRENT_SESSION,
// CURRENT_STATEMENT,
// CURRENT_USER,
- cap.supportScalarFunction(ScalarFunctionCapability.HASH_MD5); //translated to HASHBYTES
- cap.supportScalarFunction(ScalarFunctionCapability.HASH_SHA); //translated to HASHBYTES
- cap.supportScalarFunction(ScalarFunctionCapability.HASH_SHA1); //translated to HASHBYTES
-// HASH_TIGER,
- cap.supportScalarFunction(ScalarFunctionCapability.NULLIFZERO); //alias NULLIF
+ cap.supportScalarFunction(ScalarFunctionCapability.HASH_MD5); // translated to HASHBYTES
+ cap.supportScalarFunction(ScalarFunctionCapability.HASH_SHA); // translated to HASHBYTES
+ cap.supportScalarFunction(ScalarFunctionCapability.HASH_SHA1); // translated to HASHBYTES
+// HASH_TIGER,
+ cap.supportScalarFunction(ScalarFunctionCapability.NULLIFZERO); // alias NULLIF
// SYS_GUID,
- cap.supportScalarFunction(ScalarFunctionCapability.ZEROIFNULL); //translated to ISNULL(exp1, exp2) in Visitor
+ cap.supportScalarFunction(ScalarFunctionCapability.ZEROIFNULL); // translated to ISNULL(exp1, exp2) in Visitor
return cap;
- }
+ }
-
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType = null;
- int jdbcType = jdbcTypeDescription.getJdbcType();
- String columnTypeName = jdbcTypeDescription.getTypeName();
-
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
+ final String columnTypeName = jdbcTypeDescription.getTypeName();
+
switch (jdbcType) {
-
- case Types.VARCHAR: //the JTDS JDBC Type for date, time, datetime2, datetimeoffset is 12
- if(columnTypeName.equalsIgnoreCase("date")) {
- colType = DataType.createDate();
- }
- else if(columnTypeName.equalsIgnoreCase("datetime2")) {
- colType = DataType.createTimestamp(false);
- }
-
- //note: time and datetimeoffset are converted to varchar by default mapping
-
- break;
- case Types.TIME:
- colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
- break;
- case 2013: //Types.TIME_WITH_TIMEZONE is Java 1.8 specific
- colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
- break;
- case Types.NUMERIC:
- int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
- int decimalScale = jdbcTypeDescription.getDecimalScale();
-
- if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
- colType = DataType.createDecimal(decimalPrec, decimalScale);
- } else {
- colType = DataType.createDouble();
- }
- break;
- case Types.OTHER:
-
- //TODO
- colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerVarcharSize, DataType.ExaCharset.UTF8);
- break;
-
- case Types.SQLXML:
-
- colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerVarcharSize, DataType.ExaCharset.UTF8);
- break;
-
- case Types.CLOB: //xml type in SQL Server
-
- colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerNVarcharSize, DataType.ExaCharset.UTF8);
- break;
-
- case Types.BLOB:
- if(columnTypeName.equalsIgnoreCase("hierarchyid")) {
- colType = DataType.createVarChar(4000, DataType.ExaCharset.UTF8);
- }
- if(columnTypeName.equalsIgnoreCase("geometry")) {
- colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerVarcharSize, DataType.ExaCharset.UTF8);
- }
- else{
- colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
- }
- break;
- case Types.VARBINARY:
- case Types.BINARY:
- colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
- break;
- case Types.DISTINCT:
- colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
- break;
+
+ case Types.VARCHAR: // the JTDS JDBC Type for date, time, datetime2, datetimeoffset is 12
+ if (columnTypeName.equalsIgnoreCase("date")) {
+ colType = DataType.createDate();
+ } else if (columnTypeName.equalsIgnoreCase("datetime2")) {
+ colType = DataType.createTimestamp(false);
+ }
+
+ // note: time and datetimeoffset are converted to varchar by default mapping
+
+ break;
+ case Types.TIME:
+ colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
+ break;
+ case 2013: // Types.TIME_WITH_TIMEZONE is Java 1.8 specific
+ colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
+ break;
+ case Types.NUMERIC:
+ final int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
+ final int decimalScale = jdbcTypeDescription.getDecimalScale();
+
+ if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
+ colType = DataType.createDecimal(decimalPrec, decimalScale);
+ } else {
+ colType = DataType.createDouble();
+ }
+ break;
+ case Types.OTHER:
+
+ // TODO
+ colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.SQLXML:
+
+ colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.CLOB: // xml type in SQL Server
+
+ colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerNVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.BLOB:
+ if (columnTypeName.equalsIgnoreCase("hierarchyid")) {
+ colType = DataType.createVarChar(4000, DataType.ExaCharset.UTF8);
+ } else if (columnTypeName.equalsIgnoreCase("geometry")) {
+ colType = DataType.createVarChar(SqlServerSqlDialect.maxSqlServerVarcharSize, DataType.ExaCharset.UTF8);
+ } else {
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ }
+ break;
+ case Types.VARBINARY:
+ case Types.BINARY:
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ break;
+ case Types.DISTINCT:
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ break;
}
return colType;
}
-
-
-
- @Override
+
+ @Override
public Map<ScalarFunction, String> getScalarFunctionAliases() {
-
- Map<ScalarFunction, String> scalarAliases = new EnumMap<>(ScalarFunction.class);
-
- scalarAliases.put(ScalarFunction.ATAN2, "ATN2");
- scalarAliases.put(ScalarFunction.CEIL, "CEILING");
- scalarAliases.put(ScalarFunction.CHR, "CHAR");
- scalarAliases.put(ScalarFunction.LENGTH, "LEN");
- scalarAliases.put(ScalarFunction.LOCATE, "CHARINDEX");
- scalarAliases.put(ScalarFunction.REPEAT, "REPLICATE");
- scalarAliases.put(ScalarFunction.SUBSTR, "SUBSTRING");
- scalarAliases.put(ScalarFunction.NULLIFZERO, "NULLIF");
-
- return scalarAliases;
-
- }
-
- @Override
+
+ final Map<ScalarFunction, String> scalarAliases = new EnumMap<>(ScalarFunction.class);
+
+ scalarAliases.put(ScalarFunction.ATAN2, "ATN2");
+ scalarAliases.put(ScalarFunction.CEIL, "CEILING");
+ scalarAliases.put(ScalarFunction.CHR, "CHAR");
+ scalarAliases.put(ScalarFunction.LENGTH, "LEN");
+ scalarAliases.put(ScalarFunction.LOCATE, "CHARINDEX");
+ scalarAliases.put(ScalarFunction.REPEAT, "REPLICATE");
+ scalarAliases.put(ScalarFunction.SUBSTR, "SUBSTRING");
+ scalarAliases.put(ScalarFunction.NULLIFZERO, "NULLIF");
+
+ return scalarAliases;
+
+ }
+
+ @Override
public Map<AggregateFunction, String> getAggregateFunctionAliases() {
- Map<AggregateFunction, String> aggregationAliases = new EnumMap<>(AggregateFunction.class);
-
+ final Map<AggregateFunction, String> aggregationAliases = new EnumMap<>(AggregateFunction.class);
+
aggregationAliases.put(AggregateFunction.STDDEV, "STDEV");
aggregationAliases.put(AggregateFunction.STDDEV_POP, "STDEVP");
-
+
aggregationAliases.put(AggregateFunction.VARIANCE, "VAR");
-
+
aggregationAliases.put(AggregateFunction.VAR_POP, "VARP");
-
+
return aggregationAliases;
}
-
- @Override
- public SchemaOrCatalogSupport supportsJdbcCatalogs() {
+
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcCatalogs() {
return SchemaOrCatalogSupport.SUPPORTED;
- }
+ }
- @Override
- public SchemaOrCatalogSupport supportsJdbcSchemas() {
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcSchemas() {
return SchemaOrCatalogSupport.SUPPORTED;
- }
+ }
- @Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ @Override
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new SqlServerSqlGenerationVisitor(this, context);
}
-
- @Override
- public IdentifierCaseHandling getUnquotedIdentifierHandling() {
- return IdentifierCaseHandling.INTERPRET_AS_UPPER;
- }
-
- @Override
- public IdentifierCaseHandling getQuotedIdentifierHandling() {
+
+ @Override
+ public IdentifierCaseHandling getUnquotedIdentifierHandling() {
+ return IdentifierCaseHandling.INTERPRET_AS_UPPER;
+ }
+
+ @Override
+ public IdentifierCaseHandling getQuotedIdentifierHandling() {
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
- }
-
- @Override
- public String applyQuote(String identifier) {
- return "[" + identifier + "]";
- }
-
- @Override
- public String applyQuoteIfNeeded(String identifier) {
- boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
- if (isSimpleIdentifier) {
- return identifier;
- } else {
- return applyQuote(identifier);
- }
- }
-
- @Override
- public boolean requiresCatalogQualifiedTableNames(
- SqlGenerationContext context) {
- return true;
- }
-
- @Override
- public boolean requiresSchemaQualifiedTableNames(
- SqlGenerationContext context) {
- return true;
- }
-
- @Override
- public NullSorting getDefaultNullSorting() {
- return NullSorting.NULLS_SORTED_AT_START;
- }
-
- @Override
- public String getStringLiteral(String value) {
- return "'" + value.replace("'", "''") + "'";
- }
+ }
+
+ @Override
+ public String applyQuote(final String identifier) {
+ return "[" + identifier + "]";
+ }
+
+ @Override
+ public String applyQuoteIfNeeded(final String identifier) {
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ if (isSimpleIdentifier) {
+ return identifier;
+ } else {
+ return applyQuote(identifier);
+ }
+ }
+
+ @Override
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
+ return true;
+ }
+
+ @Override
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ return true;
+ }
+
+ @Override
+ public NullSorting getDefaultNullSorting() {
+ return NullSorting.NULLS_SORTED_AT_START;
+ }
+
+ @Override
+ public String getStringLiteral(final String value) {
+ return "'" + value.replace("'", "''") + "'";
+ }
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SybaseSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SybaseSqlDialect.java
new file mode 100644
index 000000000..2cafc56a4
--- /dev/null
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SybaseSqlDialect.java
@@ -0,0 +1,506 @@
+package com.exasol.adapter.dialects.impl;
+
+import java.sql.SQLException;
+import java.sql.Types;
+import java.util.EnumMap;
+import java.util.Map;
+
+import com.exasol.adapter.capabilities.AggregateFunctionCapability;
+import com.exasol.adapter.capabilities.Capabilities;
+import com.exasol.adapter.capabilities.LiteralCapability;
+import com.exasol.adapter.capabilities.MainCapability;
+import com.exasol.adapter.capabilities.PredicateCapability;
+import com.exasol.adapter.capabilities.ScalarFunctionCapability;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
+import com.exasol.adapter.metadata.DataType;
+import com.exasol.adapter.sql.AggregateFunction;
+import com.exasol.adapter.sql.ScalarFunction;
+
+public class SybaseSqlDialect extends AbstractSqlDialect {
+ // The Sybase dialect started as a copy of the SQL Server dialect.
+ // Tested Sybase version: ASE 16.0
+ // Tested JDBC drivers: jtds-1.3.1 (https://sourceforge.net/projects/jtds/)
+ // Documentation:
+ // http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.help.ase.16.0/doc/html/title.html
+ // https://help.sap.com/viewer/p/SAP_ASE
+ public final static int maxSybaseVarcharSize = 8000;
+ public final static int maxSybaseNVarcharSize = 4000;
+ private static final String NAME = "SYBASE";
+
+ public SybaseSqlDialect(final SqlDialectContext context) {
+ super(context);
+ }
+
+ public static String getPublicName() {
+ return NAME;
+ }
+
+ @Override
+ public Capabilities getCapabilities() {
+
+ final Capabilities cap = new Capabilities();
+
+ cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
+ cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
+ cap.supportMainCapability(MainCapability.FILTER_EXPRESSIONS);
+ cap.supportMainCapability(MainCapability.AGGREGATE_SINGLE_GROUP);
+ cap.supportMainCapability(MainCapability.AGGREGATE_GROUP_BY_COLUMN);
+ cap.supportMainCapability(MainCapability.AGGREGATE_GROUP_BY_EXPRESSION);
+ cap.supportMainCapability(MainCapability.AGGREGATE_GROUP_BY_TUPLE);
+ cap.supportMainCapability(MainCapability.AGGREGATE_HAVING);
+ cap.supportMainCapability(MainCapability.ORDER_BY_COLUMN);
+ cap.supportMainCapability(MainCapability.ORDER_BY_EXPRESSION);
+ cap.supportMainCapability(MainCapability.LIMIT); // LIMIT will be translated to TOP in
+ // SybaseSqlGenerationVisitor.java
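+ // Illustration: a pushed-down "... LIMIT 10" is emitted as "SELECT TOP 10 ..."
+ // by SybaseSqlGenerationVisitor.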
+
+ // Predicates
+ cap.supportPredicate(PredicateCapability.AND);
+ cap.supportPredicate(PredicateCapability.OR);
+ cap.supportPredicate(PredicateCapability.NOT);
+ cap.supportPredicate(PredicateCapability.EQUAL);
+ cap.supportPredicate(PredicateCapability.NOTEQUAL);
+ cap.supportPredicate(PredicateCapability.LESS);
+ cap.supportPredicate(PredicateCapability.LESSEQUAL);
+ cap.supportPredicate(PredicateCapability.LIKE);
+ cap.supportPredicate(PredicateCapability.LIKE_ESCAPE);
+ cap.supportPredicate(PredicateCapability.REGEXP_LIKE);
+ cap.supportPredicate(PredicateCapability.BETWEEN);
+ cap.supportPredicate(PredicateCapability.IN_CONSTLIST);
+ cap.supportPredicate(PredicateCapability.IS_NULL);
+ cap.supportPredicate(PredicateCapability.IS_NOT_NULL);
+
+ // Literals
+ cap.supportLiteral(LiteralCapability.BOOL);
+ cap.supportLiteral(LiteralCapability.NULL);
+ cap.supportLiteral(LiteralCapability.DATE);
+ cap.supportLiteral(LiteralCapability.TIMESTAMP);
+ cap.supportLiteral(LiteralCapability.TIMESTAMP_UTC);
+ cap.supportLiteral(LiteralCapability.DOUBLE);
+ cap.supportLiteral(LiteralCapability.EXACTNUMERIC);
+ cap.supportLiteral(LiteralCapability.STRING);
+ cap.supportLiteral(LiteralCapability.INTERVAL);
+
+ // Aggregate functions
+ cap.supportAggregateFunction(AggregateFunctionCapability.COUNT);
+ cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_STAR);
+ cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_DISTINCT);
+
+ cap.supportAggregateFunction(AggregateFunctionCapability.SUM); // works
+ cap.supportAggregateFunction(AggregateFunctionCapability.SUM_DISTINCT);
+ cap.supportAggregateFunction(AggregateFunctionCapability.MIN);
+ cap.supportAggregateFunction(AggregateFunctionCapability.MAX);
+ cap.supportAggregateFunction(AggregateFunctionCapability.AVG);
+ cap.supportAggregateFunction(AggregateFunctionCapability.AVG_DISTINCT);
+ cap.supportAggregateFunction(AggregateFunctionCapability.MEDIAN);
+ cap.supportAggregateFunction(AggregateFunctionCapability.FIRST_VALUE);
+ cap.supportAggregateFunction(AggregateFunctionCapability.LAST_VALUE);
+
+ cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV);
+ cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_DISTINCT);
+ cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_POP);
+ cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_POP_DISTINCT);
+
+ // STDDEV_SAMP
+ // STDDEV_SAMP_DISTINCT
+
+ cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE);
+ cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE_DISTINCT);
+
+ cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP);
+ cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP_DISTINCT);
+
+ // GROUP_CONCAT,
+ // GROUP_CONCAT_DISTINCT (AggregateFunction.GROUP_CONCAT),
+ // GROUP_CONCAT_SEPARATOR (AggregateFunction.GROUP_CONCAT),
+ // GROUP_CONCAT_ORDER_BY (AggregateFunction.GROUP_CONCAT),
+ //
+ // GEO_INTERSECTION_AGGREGATE,
+ // GEO_UNION_AGGREGATE,
+ //
+ // APPROXIMATE_COUNT_DISTINCT;
+
+ // Standard Arithmetic Operators
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD); // works
+ cap.supportScalarFunction(ScalarFunctionCapability.SUB);
+ cap.supportScalarFunction(ScalarFunctionCapability.MULT);
+ cap.supportScalarFunction(ScalarFunctionCapability.FLOAT_DIV);
+
+ // Unary prefix operators
+ cap.supportScalarFunction(ScalarFunctionCapability.NEG);
+
+ // Numeric functions
+ // https://msdn.microsoft.com/en-us/library/ms177516(v=sql.110).aspx
+ cap.supportScalarFunction(ScalarFunctionCapability.ABS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ACOS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ASIN);
+ cap.supportScalarFunction(ScalarFunctionCapability.ATAN);
+ cap.supportScalarFunction(ScalarFunctionCapability.ATAN2); // added alias ATN2
+ cap.supportScalarFunction(ScalarFunctionCapability.CEIL); // alias CEILING
+ cap.supportScalarFunction(ScalarFunctionCapability.COS);
+ // COSH
+ cap.supportScalarFunction(ScalarFunctionCapability.COT);
+ cap.supportScalarFunction(ScalarFunctionCapability.DEGREES);
+ // DIV,
+ cap.supportScalarFunction(ScalarFunctionCapability.EXP);
+ cap.supportScalarFunction(ScalarFunctionCapability.FLOOR);
+ // GREATEST,
+ // LEAST,
+ // LN,
+ cap.supportScalarFunction(ScalarFunctionCapability.LOG);
+ cap.supportScalarFunction(ScalarFunctionCapability.MOD);
+ cap.supportScalarFunction(ScalarFunctionCapability.POWER);
+ cap.supportScalarFunction(ScalarFunctionCapability.RADIANS);
+ cap.supportScalarFunction(ScalarFunctionCapability.RAND);
+ cap.supportScalarFunction(ScalarFunctionCapability.ROUND);
+ cap.supportScalarFunction(ScalarFunctionCapability.SIGN);
+ cap.supportScalarFunction(ScalarFunctionCapability.SIN);
+ // SINH,
+ cap.supportScalarFunction(ScalarFunctionCapability.SQRT);
+ cap.supportScalarFunction(ScalarFunctionCapability.TAN);
+ // TANH,
+ cap.supportScalarFunction(ScalarFunctionCapability.TRUNC);
+
+ // String Functions
+ cap.supportScalarFunction(ScalarFunctionCapability.ASCII);
+ // BIT_LENGTH,
+ cap.supportScalarFunction(ScalarFunctionCapability.CHR); // CHAR
+ // COLOGNE_PHONETIC,
+ cap.supportScalarFunction(ScalarFunctionCapability.CONCAT);
+ // DUMP,
+ // EDIT_DISTANCE,
+ // INSERT,
+ cap.supportScalarFunction(ScalarFunctionCapability.INSTR); // translated to CHARINDEX in Visitor with Argument
+ // switch
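+ // Illustration with hypothetical arguments: INSTR(some_string, search_term) is
+ // rendered as CHARINDEX(search_term, some_string).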
+ cap.supportScalarFunction(ScalarFunctionCapability.LENGTH); // alias LEN
+ cap.supportScalarFunction(ScalarFunctionCapability.LOCATE); // CHARINDEX alias
+ cap.supportScalarFunction(ScalarFunctionCapability.LOWER);
+ cap.supportScalarFunction(ScalarFunctionCapability.LPAD); // transformed in Visitor
+ cap.supportScalarFunction(ScalarFunctionCapability.LTRIM);
+ // OCTET_LENGTH,
+ // REGEXP_INSTR,
+ // REGEXP_REPLACE,
+ // REGEXP_SUBSTR,
+ cap.supportScalarFunction(ScalarFunctionCapability.REPEAT); // REPLICATE
+ cap.supportScalarFunction(ScalarFunctionCapability.REPLACE);
+ cap.supportScalarFunction(ScalarFunctionCapability.REVERSE);
+ cap.supportScalarFunction(ScalarFunctionCapability.RIGHT);
+ cap.supportScalarFunction(ScalarFunctionCapability.RPAD);
+ cap.supportScalarFunction(ScalarFunctionCapability.RTRIM);
+ cap.supportScalarFunction(ScalarFunctionCapability.SOUNDEX);
+ cap.supportScalarFunction(ScalarFunctionCapability.SPACE);
+ cap.supportScalarFunction(ScalarFunctionCapability.SUBSTR); // SUBSTRING
+ // TRANSLATE,
+ cap.supportScalarFunction(ScalarFunctionCapability.TRIM);
+ cap.supportScalarFunction(ScalarFunctionCapability.UNICODE);
+ // UNICODECHR,
+ cap.supportScalarFunction(ScalarFunctionCapability.UPPER);
+
+ // Date/Time Functions
+
+ // the following functions are translated to DATEADD(datepart,number,date) in
+ // Visitor
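+ // Illustration (hypothetical column): ADD_DAYS(order_date, 3) is rendered as
+ // DATEADD(DAY, 3, order_date).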
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_DAYS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_HOURS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_MINUTES);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_MONTHS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_SECONDS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_WEEKS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ADD_YEARS);
+
+ // CONVERT_TZ,
+
+ cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_DATE);
+ cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_TIMESTAMP);
+
+ // DATE_TRUNC,
+ cap.supportScalarFunction(ScalarFunctionCapability.DAY);
+
+ // the following functions are translated to DATEDIFF in Visitor
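+ // Illustration (hypothetical columns): DAYS_BETWEEN(end_date, start_date)
+ // becomes DATEDIFF(DAY, start_date, end_date).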
+ cap.supportScalarFunction(ScalarFunctionCapability.SECONDS_BETWEEN);
+ cap.supportScalarFunction(ScalarFunctionCapability.MINUTES_BETWEEN);
+ cap.supportScalarFunction(ScalarFunctionCapability.HOURS_BETWEEN);
+ cap.supportScalarFunction(ScalarFunctionCapability.DAYS_BETWEEN);
+ cap.supportScalarFunction(ScalarFunctionCapability.MONTHS_BETWEEN);
+ cap.supportScalarFunction(ScalarFunctionCapability.YEARS_BETWEEN);
+
+// DBTIMEZONE,
+// EXTRACT,
+// LOCALTIMESTAMP,
+// MINUTE,
+
+ cap.supportScalarFunction(ScalarFunctionCapability.MONTH);
+
+// NUMTODSINTERVAL,
+// NUMTOYMINTERVAL,
+// POSIX_TIME,
+// SECOND,
+
+// SESSIONTIMEZONE,
+ cap.supportScalarFunction(ScalarFunctionCapability.SYSDATE);
+ cap.supportScalarFunction(ScalarFunctionCapability.SYSTIMESTAMP);
+
+// WEEK,
+
+ cap.supportScalarFunction(ScalarFunctionCapability.YEAR);
+
+ // Geospatial
+ // - Point Functions
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_X);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_Y);
+// // - (Multi-)LineString Functions
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_ENDPOINT);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_ISCLOSED);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_ISRING);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_LENGTH);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_NUMPOINTS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_POINTN);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_STARTPOINT);
+// // - (Multi-)Polygon Functions
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_AREA);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_EXTERIORRING);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_INTERIORRINGN);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_NUMINTERIORRINGS);
+// // - GeometryCollection Functions
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_GEOMETRYN);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_NUMGEOMETRIES);
+// // - General Functions
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_BOUNDARY);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_BUFFER);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_CENTROID);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_CONTAINS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_CONVEXHULL);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_CROSSES);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_DIFFERENCE);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_DIMENSION);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_DISJOINT);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_DISTANCE);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_ENVELOPE);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_EQUALS);
+ // cap.supportScalarFunction(ScalarFunctionCapability.ST_FORCE2D);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_GEOMETRYTYPE);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_INTERSECTION);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_INTERSECTS);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_ISEMPTY);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_ISSIMPLE);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_OVERLAPS);
+ // cap.supportScalarFunction(ScalarFunctionCapability.ST_SETSRID);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_SYMDIFFERENCE);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_TOUCHES);
+ // cap.supportScalarFunction(ScalarFunctionCapability.ST_TRANSFORM);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_UNION);
+ cap.supportScalarFunction(ScalarFunctionCapability.ST_WITHIN);
+
+ // Conversion functions
+// CAST, // Has alias CONVERT
+// IS_NUMBER
+// IS_BOOLEAN,
+// IS_DATE,
+// IS_DSINTERVAL,
+// IS_YMINTERVAL,
+// IS_TIMESTAMP,
+// TO_CHAR,
+// TO_DATE,
+// TO_DSINTERVAL,
+// TO_YMINTERVAL,
+// TO_NUMBER,
+// TO_TIMESTAMP,
+
+ // Bitwise functions
+ cap.supportScalarFunction(ScalarFunctionCapability.BIT_AND);
+// BIT_CHECK,
+ cap.supportScalarFunction(ScalarFunctionCapability.BIT_NOT);
+ cap.supportScalarFunction(ScalarFunctionCapability.BIT_OR);
+// BIT_SET,
+// BIT_TO_NUM,
+ cap.supportScalarFunction(ScalarFunctionCapability.BIT_XOR);
+
+ // Other functions
+ cap.supportScalarFunction(ScalarFunctionCapability.CASE);
+// CURRENT_SCHEMA,
+// CURRENT_SESSION,
+// CURRENT_STATEMENT,
+// CURRENT_USER,
+ cap.supportScalarFunction(ScalarFunctionCapability.HASH_MD5); // translated to HASHBYTES
+ cap.supportScalarFunction(ScalarFunctionCapability.HASH_SHA); // translated to HASHBYTES
+ cap.supportScalarFunction(ScalarFunctionCapability.HASH_SHA1); // translated to HASHBYTES
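+ // Illustration (hypothetical column): HASH_MD5(payload) is rendered as
+ // CONVERT(Char, HASHBYTES('MD5', payload), 2).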
+// HASH_TIGER,
+ cap.supportScalarFunction(ScalarFunctionCapability.NULLIFZERO); // alias NULLIF
+// SYS_GUID,
+ cap.supportScalarFunction(ScalarFunctionCapability.ZEROIFNULL); // translated to ISNULL(exp1, exp2) in Visitor
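+ // Illustration (hypothetical column): ZEROIFNULL(amount) is rendered as
+ // ISNULL(amount, 0).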
+
+ return cap;
+ }
+
+ @Override
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ DataType colType = null;
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
+ final String columnTypeName = jdbcTypeDescription.getTypeName();
+
+ switch (jdbcType) {
+
+ case Types.VARCHAR: // the JTDS JDBC Type for date, time, datetime2, datetimeoffset is 12
+ if (columnTypeName.equalsIgnoreCase("date")) {
+ colType = DataType.createDate();
+ } else if (columnTypeName.equalsIgnoreCase("datetime2")) {
+ colType = DataType.createTimestamp(false);
+ }
+
+ // note: time and datetimeoffset are converted to varchar by default mapping
+
+ break;
+ case Types.TIME:
+ colType = DataType.createVarChar(210, DataType.ExaCharset.UTF8);
+ break;
+ case 2013: // Types.TIME_WITH_TIMEZONE is Java 1.8 specific
+ colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
+ break;
+ case Types.DATE:
+ colType = DataType.createDate();
+ break;
+ case Types.NUMERIC:
+ case Types.DECIMAL:
+ final int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
+ final int decimalScale = jdbcTypeDescription.getDecimalScale();
+
+ if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
+ colType = DataType.createDecimal(decimalPrec, decimalScale);
+ } else {
+ int size = decimalPrec + 1;
+ if (decimalScale > 0) {
+ size++;
+ }
+ colType = DataType.createVarChar(size, DataType.ExaCharset.UTF8);
+ }
+ break;
+ case Types.OTHER:
+
+ // TODO
+ colType = DataType.createVarChar(SybaseSqlDialect.maxSybaseVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.SQLXML:
+
+ colType = DataType.createVarChar(SybaseSqlDialect.maxSybaseVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.CLOB: // TEXT and UNITEXT types in Sybase
+
+ colType = DataType.createVarChar(DataType.maxExasolVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.BLOB:
+ if (columnTypeName.equalsIgnoreCase("hierarchyid")) {
+ colType = DataType.createVarChar(4000, DataType.ExaCharset.UTF8);
+ } else if (columnTypeName.equalsIgnoreCase("geometry")) {
+ colType = DataType.createVarChar(SybaseSqlDialect.maxSybaseVarcharSize, DataType.ExaCharset.UTF8);
+ } else {
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ }
+ break;
+ case Types.DISTINCT:
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ break;
+ }
+ return colType;
+ }
+
+ @Override
+ public Map<ScalarFunction, String> getScalarFunctionAliases() {
+
+ final Map<ScalarFunction, String> scalarAliases = new EnumMap<>(ScalarFunction.class);
+
+ scalarAliases.put(ScalarFunction.ATAN2, "ATN2");
+ scalarAliases.put(ScalarFunction.CEIL, "CEILING");
+ scalarAliases.put(ScalarFunction.CHR, "CHAR");
+ scalarAliases.put(ScalarFunction.LENGTH, "LEN");
+ scalarAliases.put(ScalarFunction.LOCATE, "CHARINDEX");
+ scalarAliases.put(ScalarFunction.REPEAT, "REPLICATE");
+ scalarAliases.put(ScalarFunction.SUBSTR, "SUBSTRING");
+ scalarAliases.put(ScalarFunction.NULLIFZERO, "NULLIF");
+
+ return scalarAliases;
+
+ }
+
+ @Override
+ public Map<AggregateFunction, String> getAggregateFunctionAliases() {
+ final Map<AggregateFunction, String> aggregationAliases = new EnumMap<>(AggregateFunction.class);
+
+ aggregationAliases.put(AggregateFunction.STDDEV, "STDEV");
+
+ aggregationAliases.put(AggregateFunction.STDDEV_POP, "STDEVP");
+
+ aggregationAliases.put(AggregateFunction.VARIANCE, "VAR");
+
+ aggregationAliases.put(AggregateFunction.VAR_POP, "VARP");
+
+ return aggregationAliases;
+ }
+
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcCatalogs() {
+ return SchemaOrCatalogSupport.SUPPORTED;
+ }
+
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcSchemas() {
+ return SchemaOrCatalogSupport.SUPPORTED;
+ }
+
+ @Override
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
+ return new SybaseSqlGenerationVisitor(this, context);
+ }
+
+ @Override
+ public IdentifierCaseHandling getUnquotedIdentifierHandling() {
+ return IdentifierCaseHandling.INTERPRET_AS_UPPER;
+ }
+
+ @Override
+ public IdentifierCaseHandling getQuotedIdentifierHandling() {
+ return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
+ }
+
+ @Override
+ public String applyQuote(final String identifier) {
+ return "[" + identifier + "]";
+ }
+
+ @Override
+ public String applyQuoteIfNeeded(final String identifier) {
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ if (isSimpleIdentifier) {
+ return identifier;
+ } else {
+ return applyQuote(identifier);
+ }
+ }
+
+ @Override
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
+ return true;
+ }
+
+ @Override
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ return true;
+ }
+
+ @Override
+ public NullSorting getDefaultNullSorting() {
+ return NullSorting.NULLS_SORTED_LOW;
+ }
+
+ @Override
+ public String getStringLiteral(final String value) {
+ return "'" + value.replace("'", "''") + "'";
+ }
+
+}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SybaseSqlGenerationVisitor.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SybaseSqlGenerationVisitor.java
new file mode 100644
index 000000000..c58b2d512
--- /dev/null
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/SybaseSqlGenerationVisitor.java
@@ -0,0 +1,634 @@
+package com.exasol.adapter.dialects.impl;
+
+import com.exasol.adapter.AdapterException;
+import com.exasol.adapter.dialects.SqlDialect;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
+import com.exasol.adapter.jdbc.ColumnAdapterNotes;
+import com.exasol.adapter.metadata.ColumnMetadata;
+import com.exasol.adapter.sql.*;
+import com.google.common.base.Joiner;
+import com.google.common.collect.ImmutableList;
+
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class SybaseSqlGenerationVisitor extends SqlGenerationVisitor {
+
+ public SybaseSqlGenerationVisitor(SqlDialect dialect, SqlGenerationContext context) {
+ super(dialect, context);
+
+ }
+
+ @Override
+ public String visit(SqlSelectList selectList) throws AdapterException {
+ if (selectList.isRequestAnyColumn()) {
+ // The system requested any column
+ return "true";
+ }
+ List<String> selectListElements = new ArrayList<>();
+ if (selectList.isSelectStar()) {
+ if (selectListRequiresCasts(selectList)) {
+
+ // Act as if the user had specified all columns in the select list
+ SqlStatementSelect select = (SqlStatementSelect) selectList.getParent();
+
+ int columnId = 0;
+ for (ColumnMetadata columnMeta : select.getFromClause().getMetadata().getColumns()) {
+ SqlColumn sqlColumn = new SqlColumn(columnId, columnMeta);
+ selectListElements.add( getColumnProjectionStringNoCheck(sqlColumn, super.visit(sqlColumn) ) );
+ ++columnId;
+ }
+
+ } else {
+ selectListElements.add("*");
+ }
+ } else {
+ for (SqlNode node : selectList.getExpressions()) {
+ selectListElements.add(node.accept(this));
+ }
+ }
+
+ return Joiner.on(", ").join(selectListElements);
+ }
+
+
+ @Override
+ public String visit(SqlStatementSelect select) throws AdapterException {
+ if (!select.hasLimit()) {
+ return super.visit(select);
+ } else {
+ SqlLimit limit = select.getLimit();
+
+ StringBuilder sql = new StringBuilder();
+ sql.append("SELECT TOP "+limit.getLimit()+ " ");
+
+ sql.append(select.getSelectList().accept(this));
+ sql.append(" FROM ");
+ sql.append(select.getFromClause().accept(this));
+ if (select.hasFilter()) {
+ sql.append(" WHERE ");
+ sql.append(select.getWhereClause().accept(this));
+ }
+ if (select.hasGroupBy()) {
+ sql.append(" GROUP BY ");
+ sql.append(select.getGroupBy().accept(this));
+ }
+ if (select.hasHaving()) {
+ sql.append(" HAVING ");
+ sql.append(select.getHaving().accept(this));
+ }
+ if (select.hasOrderBy()) {
+ sql.append(" ");
+ sql.append(select.getOrderBy().accept(this));
+ }
+
+ return sql.toString();
+ }
+ }
+
+
+ @Override
+ public String visit(SqlColumn column) throws AdapterException {
+ return getColumnProjectionString(column, super.visit(column));
+ }
+
+ private String getColumnProjectionString(SqlColumn column, String projString) throws AdapterException {
+ boolean isDirectlyInSelectList = (column.hasParent() && column.getParent().getType() == SqlNodeType.SELECT_LIST);
+ if (!isDirectlyInSelectList) {
+ return projString;
+ }
+ String typeName = ColumnAdapterNotes.deserialize(column.getMetadata().getAdapterNotes(),
+ column.getMetadata().getName()).getTypeName();
+ return getColumnProjectionStringNoCheckImpl(typeName, column, projString);
+ }
+
+
+ private String getColumnProjectionStringNoCheck(SqlColumn column, String projString) throws AdapterException {
+ String typeName = ColumnAdapterNotes.deserialize(column.getMetadata().getAdapterNotes(),
+ column.getMetadata().getName()).getTypeName();
+ return getColumnProjectionStringNoCheckImpl(typeName, column, projString);
+ }
+
+ private String getColumnProjectionStringNoCheckImpl(String typeName, SqlColumn column, String projString) {
+ if ( typeName.startsWith("text") ) {
+ projString = "CAST(" + projString + " as NVARCHAR("+SybaseSqlDialect.maxSybaseNVarcharSize+") )";
+ } else if (typeName.equals("time") ){
+ projString = "CONVERT(VARCHAR(12), " + projString + ", 137)";
+ } else if (typeName.equals("bigtime") ){
+ projString = "CONVERT(VARCHAR(16), " + projString + ", 137)";
+ } else if (typeName.startsWith("xml")) {
+ projString = "CAST(" + projString + " as NVARCHAR("+SybaseSqlDialect.maxSybaseNVarcharSize+") )";
+ } else if (TYPE_NAME_NOT_SUPPORTED.contains(typeName)){
+ projString = "'"+typeName+" NOT SUPPORTED'"; //returning a string constant for unsupported data types
+ }
+
+ return projString;
+ }
+
+ private static final List<String> TYPE_NAMES_REQUIRING_CAST =
+ ImmutableList.of("text", "time", "bigtime", "xml");
+
+ private static final List<String> TYPE_NAME_NOT_SUPPORTED = ImmutableList.of("varbinary", "binary", "image");
+
+ private boolean nodeRequiresCast(SqlNode node) throws AdapterException {
+ if (node.getType() == SqlNodeType.COLUMN) {
+ SqlColumn column = (SqlColumn)node;
+ String typeName = ColumnAdapterNotes.deserialize(column.getMetadata().getAdapterNotes(),
+ column.getMetadata().getName()).getTypeName();
+ return TYPE_NAMES_REQUIRING_CAST.contains(typeName) || TYPE_NAME_NOT_SUPPORTED.contains(typeName) ;
+ }
+ return false;
+ }
+
+ private boolean selectListRequiresCasts(SqlSelectList selectList) throws AdapterException {
+ boolean requiresCasts = false;
+
+ // Act as if the user had specified all columns in the select list
+ SqlStatementSelect select = (SqlStatementSelect) selectList.getParent();
+ int columnId = 0;
+ for (ColumnMetadata columnMeta : select.getFromClause().getMetadata().getColumns()) {
+ if (nodeRequiresCast(new SqlColumn(columnId, columnMeta))) {
+ requiresCasts = true;
+ }
+ ++columnId;
+ }
+
+ return requiresCasts;
+ }
+
+
+ @Override
+ public String visit(SqlFunctionScalar function) throws AdapterException {
+
+ String sql = super.visit(function);
+ List<String> argumentsSql = new ArrayList<>();
+ for (SqlNode node : function.getArguments()) {
+ argumentsSql.add(node.accept(this));
+ }
+ StringBuilder builder = new StringBuilder();
+
+ switch (function.getFunction()) {
+ case INSTR: {
+
+ builder.append("CHARINDEX(");
+ builder.append(argumentsSql.get(1));
+ builder.append(", ");
+ builder.append(argumentsSql.get(0));
+ if (argumentsSql.size() > 2) {
+ builder.append(", ");
+ builder.append(argumentsSql.get(2));
+ }
+ builder.append(")");
+ sql = builder.toString();
+ break;
+ }
+
+
+
+ case LPAD: { //RIGHT(REPLICATE(pad_char, length) + LEFT(string, length), length)
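+ // Illustration (hypothetical arguments): LPAD(name, 10, '*') is rewritten to
+ // RIGHT(REPLICATE('*', 10) + LEFT(name, 10), 10).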
+
+ String padChar = "' '";
+
+ if (argumentsSql.size() > 2) {
+ padChar = argumentsSql.get(2);
+ }
+
+
+ String string = argumentsSql.get(0);
+
+ String length = argumentsSql.get(1);
+
+
+ builder.append("RIGHT ( REPLICATE(");
+ builder.append(padChar);
+ builder.append(",");
+ builder.append(length);
+ builder.append(") + LEFT(");
+ builder.append(string);
+ builder.append(",");
+ builder.append(length);
+ builder.append("),");
+ builder.append(length);
+ builder.append(")");
+ sql = builder.toString();
+ break;
+ }
+
+
+ case RPAD: { //LEFT(RIGHT(string, length) + REPLICATE(pad_char, length) , length);
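+ // Illustration (hypothetical arguments): RPAD(name, 10, '*') is rewritten to
+ // LEFT(RIGHT(name, 10) + REPLICATE('*', 10), 10).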
+
+ String padChar = "' '";
+
+ if (argumentsSql.size() > 2) {
+ padChar = argumentsSql.get(2);
+ }
+
+ String string = argumentsSql.get(0);
+
+ String length = argumentsSql.get(1);
+
+ builder.append("LEFT(RIGHT(");
+ builder.append(string);
+ builder.append(",");
+ builder.append(length);
+ builder.append(") + REPLICATE(");
+ builder.append(padChar);
+ builder.append(",");
+ builder.append(length);
+ builder.append("),");
+ builder.append(length);
+ builder.append(")");
+ sql = builder.toString();
+ break;
+
+ }
+ case ADD_DAYS:
+ case ADD_HOURS:
+ case ADD_MINUTES:
+ case ADD_SECONDS:
+ case ADD_WEEKS:
+ case ADD_YEARS: { //DATEADD(datepart,number,date)
+
+ builder.append("DATEADD(");
+
+ switch (function.getFunction()) {
+ case ADD_DAYS:
+ builder.append("DAY");
+ break;
+ case ADD_HOURS:
+ builder.append("HOUR");
+ break;
+ case ADD_MINUTES:
+ builder.append("MINUTE");
+ break;
+ case ADD_SECONDS:
+ builder.append("SECOND");
+ break;
+ case ADD_WEEKS:
+ builder.append("WEEK");
+ break;
+ case ADD_YEARS:
+ builder.append("YEAR");
+ break;
+ default:
+ break;
+ }
+
+ builder.append(",");
+ builder.append( argumentsSql.get(1) );
+ builder.append(",");
+ builder.append( argumentsSql.get(0) );
+ builder.append(")");
+ sql = builder.toString();
+ break;
+ }
+ case SECONDS_BETWEEN:
+ case MINUTES_BETWEEN:
+ case HOURS_BETWEEN:
+ case DAYS_BETWEEN:
+ case MONTHS_BETWEEN:
+ case YEARS_BETWEEN: {
+
+ builder.append("DATEDIFF(");
+
+ switch (function.getFunction()) {
+ case SECONDS_BETWEEN:
+ builder.append("SECOND");
+ break;
+ case MINUTES_BETWEEN:
+ builder.append("MINUTE");
+ break;
+ case HOURS_BETWEEN:
+ builder.append("HOUR");
+ break;
+ case DAYS_BETWEEN:
+ builder.append("DAY");
+ break;
+ case MONTHS_BETWEEN:
+ builder.append("MONTH");
+ break;
+ case YEARS_BETWEEN:
+ builder.append("YEAR");
+ break;
+ default:
+ break;
+ }
+
+ builder.append(",");
+ builder.append( argumentsSql.get(1) );
+ builder.append(",");
+ builder.append( argumentsSql.get(0) );
+ builder.append(")");
+ sql = builder.toString();
+ break;
+ }
+ case CURRENT_DATE:
+ sql = "CAST( GETDATE() AS DATE)";
+ break;
+
+ case CURRENT_TIMESTAMP:
+ sql = "GETDATE()";
+ break;
+
+ case SYSDATE:
+ sql = "CAST( SYSDATETIME() AS DATE)";
+ break;
+
+ case SYSTIMESTAMP:
+ sql = "SYSDATETIME()";
+ break;
+
+
+ case ST_X:
+ builder.append(argumentsSql.get(0)+".STX") ;
+ sql = builder.toString();
+ break;
+
+ case ST_Y:
+ builder.append(argumentsSql.get(0)+".STY") ;
+ sql = builder.toString();
+ break;
+
+ case ST_ENDPOINT:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STEndPoint()") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_ISCLOSED:
+ builder.append(argumentsSql.get(0)+".STIsClosed()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_ISRING:
+ builder.append(argumentsSql.get(0)+".STIsRing()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_LENGTH:
+ builder.append(argumentsSql.get(0)+".STLength()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_NUMPOINTS:
+ builder.append(argumentsSql.get(0)+".STNumPoints()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_POINTN:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STPointN("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_STARTPOINT:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STStartPoint()") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_AREA:
+ builder.append(argumentsSql.get(0)+".STArea()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_EXTERIORRING:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STExteriorRing()") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_INTERIORRINGN:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STInteriorRingN ("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_NUMINTERIORRINGS:
+ builder.append(argumentsSql.get(0)+".STNumInteriorRing()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_GEOMETRYN:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STGeometryN("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_NUMGEOMETRIES:
+ builder.append(argumentsSql.get(0)+".STNumGeometries()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_BOUNDARY:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STBoundary()") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_BUFFER:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STBuffer("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_CENTROID:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STCentroid()") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_CONTAINS:
+ builder.append(argumentsSql.get(0)+".STContains("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+ case ST_CONVEXHULL:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STConvexHull()") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_CROSSES:
+ builder.append(argumentsSql.get(0)+".STCrosses("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+ case ST_DIFFERENCE:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STDifference("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_DIMENSION:
+ builder.append(argumentsSql.get(0)+".STDimension()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_DISJOINT:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STDisjoint("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_DISTANCE:
+ builder.append(argumentsSql.get(0)+".STDistance("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+ case ST_ENVELOPE:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STEnvelope()") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_EQUALS:
+ builder.append(argumentsSql.get(0)+".STEquals("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+
+ case ST_GEOMETRYTYPE:
+ builder.append(argumentsSql.get(0)+".STGeometryType()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_INTERSECTION:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STIntersection("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_INTERSECTS:
+ builder.append(argumentsSql.get(0)+".STIntersects("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+ case ST_ISEMPTY:
+ builder.append(argumentsSql.get(0)+".STIsEmpty()") ;
+ sql = builder.toString();
+ break;
+
+ case ST_ISSIMPLE:
+ builder.append(argumentsSql.get(0)+".STIsSimple()") ;
+ sql = builder.toString();
+ break;
+ case ST_OVERLAPS:
+ builder.append(argumentsSql.get(0)+".STOverlaps("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+ case ST_SYMDIFFERENCE:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STSymDifference ("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_TOUCHES:
+ builder.append(argumentsSql.get(0)+".STTouches("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+ case ST_UNION:
+ builder.append("CAST(");
+ builder.append(argumentsSql.get(0)+".STUnion("+argumentsSql.get(1)+")") ;
+ builder.append("as VARCHAR("+SybaseSqlDialect.maxSybaseVarcharSize+") )");
+ sql = builder.toString();
+ break;
+
+ case ST_WITHIN:
+ builder.append(argumentsSql.get(0)+".STWithin("+argumentsSql.get(1)+")") ;
+ sql = builder.toString();
+ break;
+
+ case BIT_AND:
+ builder.append(argumentsSql.get(0)+" & "+argumentsSql.get(1));
+ sql = builder.toString();
+ break;
+
+ case BIT_OR:
+ builder.append(argumentsSql.get(0)+" | "+argumentsSql.get(1));
+ sql = builder.toString();
+ break;
+
+ case BIT_XOR:
+ builder.append(argumentsSql.get(0)+" ^ "+argumentsSql.get(1));
+ sql = builder.toString();
+ break;
+
+ case BIT_NOT:
+ builder.append("~ "+argumentsSql.get(0));
+ sql = builder.toString();
+ break;
+
+ case HASH_MD5:
+ builder.append("CONVERT(Char, HASHBYTES('MD5',"+argumentsSql.get(0)+"), 2)");
+ sql = builder.toString();
+ break;
+ case HASH_SHA1:
+ builder.append("CONVERT(Char, HASHBYTES('SHA1',"+argumentsSql.get(0)+"), 2)");
+ sql = builder.toString();
+ break;
+
+ case HASH_SHA:
+ builder.append("CONVERT(Char, HASHBYTES('SHA',"+argumentsSql.get(0)+"), 2)");
+ sql = builder.toString();
+ break;
+
+ case ZEROIFNULL:
+ builder.append("ISNULL("+argumentsSql.get(0)+",0)");
+ sql = builder.toString();
+ break;
+
+ default:
+ break;
+ }
+
+
+ return sql;
+ }
+
+ @Override
+ public String visit(SqlOrderBy orderBy) throws AdapterException {
+ // ORDER BY [ASC/DESC] [NULLS FIRST/LAST]
+ // ASC and NULLS LAST are the defaults in EXASOL
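+ // Illustration (hypothetical column): a descending sort with NULLS FIRST is
+ // emitted as "(CASE WHEN col IS NULL THEN 0 ELSE 1 END), col DESC".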
+ List<String> sqlOrderElement = new ArrayList<>();
+ for (int i = 0; i < orderBy.getExpressions().size(); ++i) {
+ String elementSql = orderBy.getExpressions().get(i).accept(this);
+ boolean isNullsLast = orderBy.nullsLast().get(i);
+ boolean isAscending = orderBy.isAscending().get(i);
+
+ if (!isAscending && !isNullsLast) {
+ elementSql = "(CASE WHEN " + elementSql + " IS NULL THEN 0 ELSE 1 END), " + elementSql;
+ }
+
+ if (isAscending && isNullsLast) {
+ elementSql = "(CASE WHEN " + elementSql + " IS NULL THEN 1 ELSE 0 END), " + elementSql;
+ }
+
+ if (!isAscending) {
+ elementSql += " DESC";
+ }
+
+ sqlOrderElement.add(elementSql);
+ }
+ return "ORDER BY " + Joiner.on(", ").join(sqlOrderElement);
+ }
+
+}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/TeradataSqlDialect.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/TeradataSqlDialect.java
index 8bb50dd80..25d2cb180 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/TeradataSqlDialect.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/dialects/impl/TeradataSqlDialect.java
@@ -9,30 +9,30 @@
import com.exasol.adapter.capabilities.MainCapability;
import com.exasol.adapter.capabilities.PredicateCapability;
import com.exasol.adapter.capabilities.ScalarFunctionCapability;
-import com.exasol.adapter.dialects.*;
+import com.exasol.adapter.dialects.AbstractSqlDialect;
+import com.exasol.adapter.dialects.JdbcTypeDescription;
+import com.exasol.adapter.dialects.SqlDialectContext;
+import com.exasol.adapter.dialects.SqlGenerationContext;
+import com.exasol.adapter.dialects.SqlGenerationVisitor;
import com.exasol.adapter.jdbc.JdbcAdapterProperties;
import com.exasol.adapter.metadata.DataType;
+public class TeradataSqlDialect extends AbstractSqlDialect {
+ public final static int maxTeradataVarcharSize = 32000;
+ private static final String NAME = "TERADATA";
-public class TeradataSqlDialect extends AbstractSqlDialect{
+ public TeradataSqlDialect(final SqlDialectContext context) {
+ super(context);
+ }
- public final static int maxTeradataVarcharSize = 32000;
-
- public TeradataSqlDialect(SqlDialectContext context) {
- super(context);
- }
+ public static String getPublicName() {
+ return NAME;
+ }
- public static final String NAME = "TERADATA";
-
- @Override
- public String getPublicName() {
- return NAME;
- }
+ @Override
+ public Capabilities getCapabilities() {
- @Override
- public Capabilities getCapabilities() {
-
- Capabilities cap = new Capabilities();
+ final Capabilities cap = new Capabilities();
cap.supportMainCapability(MainCapability.SELECTLIST_PROJECTION);
cap.supportMainCapability(MainCapability.SELECTLIST_EXPRESSIONS);
@@ -45,7 +45,6 @@ public Capabilities getCapabilities() {
cap.supportMainCapability(MainCapability.ORDER_BY_COLUMN);
cap.supportMainCapability(MainCapability.ORDER_BY_EXPRESSION);
cap.supportMainCapability(MainCapability.LIMIT);
-
// Predicates
cap.supportPredicate(PredicateCapability.AND);
@@ -62,7 +61,7 @@ public Capabilities getCapabilities() {
cap.supportPredicate(PredicateCapability.IN_CONSTLIST);
cap.supportPredicate(PredicateCapability.IS_NULL);
cap.supportPredicate(PredicateCapability.IS_NOT_NULL);
-
+
// Literals
// BOOL is not supported
cap.supportLiteral(LiteralCapability.NULL);
@@ -73,8 +72,7 @@ public Capabilities getCapabilities() {
cap.supportLiteral(LiteralCapability.EXACTNUMERIC);
cap.supportLiteral(LiteralCapability.STRING);
cap.supportLiteral(LiteralCapability.INTERVAL);
-
-
+
// Aggregate functions
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT);
cap.supportAggregateFunction(AggregateFunctionCapability.COUNT_STAR);
@@ -83,7 +81,7 @@ public Capabilities getCapabilities() {
// GEO_INTERSECTION_AGGREGATE is not supported
// GEO_UNION_AGGREGATE is not supported
// APPROXIMATE_COUNT_DISTINCT not supported
-
+
cap.supportAggregateFunction(AggregateFunctionCapability.SUM);
cap.supportAggregateFunction(AggregateFunctionCapability.SUM_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.MIN);
@@ -93,26 +91,26 @@ public Capabilities getCapabilities() {
cap.supportAggregateFunction(AggregateFunctionCapability.MEDIAN);
cap.supportAggregateFunction(AggregateFunctionCapability.FIRST_VALUE);
cap.supportAggregateFunction(AggregateFunctionCapability.LAST_VALUE);
- //cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV);
- //cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_DISTINCT);
+ // cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV);
+ // cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_POP);
// STDDEV_POP_DISTINCT
cap.supportAggregateFunction(AggregateFunctionCapability.STDDEV_SAMP);
// STDDEV_SAMP_DISTINCT
- //cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE);
- //cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE_DISTINCT);
+ // cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE);
+ // cap.supportAggregateFunction(AggregateFunctionCapability.VARIANCE_DISTINCT);
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_POP);
// VAR_POP_DISTINCT
cap.supportAggregateFunction(AggregateFunctionCapability.VAR_SAMP);
// VAR_SAMP_DISTINCT
-
+
cap.supportScalarFunction(ScalarFunctionCapability.CEIL);
cap.supportScalarFunction(ScalarFunctionCapability.DIV);
cap.supportScalarFunction(ScalarFunctionCapability.FLOOR);
cap.supportScalarFunction(ScalarFunctionCapability.ROUND);
cap.supportScalarFunction(ScalarFunctionCapability.SIGN);
cap.supportScalarFunction(ScalarFunctionCapability.TRUNC);
-
+
cap.supportScalarFunction(ScalarFunctionCapability.ADD);
cap.supportScalarFunction(ScalarFunctionCapability.SUB);
cap.supportScalarFunction(ScalarFunctionCapability.MULT);
@@ -141,15 +139,15 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.SQRT);
cap.supportScalarFunction(ScalarFunctionCapability.TAN);
cap.supportScalarFunction(ScalarFunctionCapability.TANH);
-
-
- cap.supportScalarFunction(ScalarFunctionCapability.ASCII);
+
+ cap.supportScalarFunction(ScalarFunctionCapability.ASCII);
// BIT_LENGTH is not supported. Can be different for Unicode characters.
cap.supportScalarFunction(ScalarFunctionCapability.CHR);
// COLOGNE_PHONETIC is not supported.
// CONCAT is not supported. Number of arguments can be different.
// DUMP is not supported. Output is different.
- // EDIT_DISTANCE is not supported. Output is different. UTL_MATCH.EDIT_DISTANCE returns -1 with NULL argument.
+ // EDIT_DISTANCE is not supported. Output is different. UTL_MATCH.EDIT_DISTANCE
+ // returns -1 with NULL argument.
// INSERT is not supported.
cap.supportScalarFunction(ScalarFunctionCapability.INSTR);
cap.supportScalarFunction(ScalarFunctionCapability.LENGTH);
@@ -164,7 +162,8 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.REPEAT);
cap.supportScalarFunction(ScalarFunctionCapability.REPLACE);
cap.supportScalarFunction(ScalarFunctionCapability.REVERSE);
- // RIGHT is not supported. Possible solution with SUBSTRING (must handle corner cases correctly).
+ // RIGHT is not supported. Possible solution with SUBSTRING (must handle corner
+ // cases correctly).
cap.supportScalarFunction(ScalarFunctionCapability.RPAD);
cap.supportScalarFunction(ScalarFunctionCapability.RTRIM);
cap.supportScalarFunction(ScalarFunctionCapability.SOUNDEX);
@@ -182,142 +181,141 @@ public Capabilities getCapabilities() {
cap.supportScalarFunction(ScalarFunctionCapability.ADD_SECONDS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_WEEKS);
cap.supportScalarFunction(ScalarFunctionCapability.ADD_YEARS);
-
+
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_DATE);
cap.supportScalarFunction(ScalarFunctionCapability.CURRENT_TIMESTAMP);
-
+
cap.supportScalarFunction(ScalarFunctionCapability.NULLIFZERO);
cap.supportScalarFunction(ScalarFunctionCapability.ZEROIFNULL);
-
+
return cap;
- }
+ }
-
@Override
- public DataType dialectSpecificMapJdbcType(JdbcTypeDescription jdbcTypeDescription) throws SQLException {
+ public DataType dialectSpecificMapJdbcType(final JdbcTypeDescription jdbcTypeDescription) throws SQLException {
DataType colType = null;
- int jdbcType = jdbcTypeDescription.getJdbcType();
+ final int jdbcType = jdbcTypeDescription.getJdbcType();
switch (jdbcType) {
- case Types.TIME:
- colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
- break;
- case 2013: //Types.TIME_WITH_TIMEZONE is Java 1.8 specific
- colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
- break;
- case Types.NUMERIC:
- int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
- int decimalScale = jdbcTypeDescription.getDecimalScale();
-
- if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
- colType = DataType.createDecimal(decimalPrec, decimalScale);
- } else {
- colType = DataType.createDouble();
- }
- break;
- case Types.OTHER: // Teradata JDBC uses OTHER for several data types GEOMETRY, INTERVAL etc...
- String columnTypeName = jdbcTypeDescription.getTypeName();
-
- if ( columnTypeName.equals("GEOMETRY") )
- colType = DataType.createVarChar(jdbcTypeDescription.getPrecisionOrSize(), DataType.ExaCharset.UTF8);
- else if (columnTypeName.startsWith("INTERVAL") )
- colType = DataType.createVarChar(30, DataType.ExaCharset.UTF8); //TODO verify that varchar 30 is sufficient in all cases
- else if (columnTypeName.startsWith("PERIOD") )
- colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
- else
- colType = DataType.createVarChar(TeradataSqlDialect.maxTeradataVarcharSize, DataType.ExaCharset.UTF8);
- break;
-
- case Types.SQLXML:
- colType = DataType.createVarChar(TeradataSqlDialect.maxTeradataVarcharSize, DataType.ExaCharset.UTF8);
- break;
-
- case Types.CLOB:
- colType = DataType.createVarChar(TeradataSqlDialect.maxTeradataVarcharSize, DataType.ExaCharset.UTF8);
- break;
-
- case Types.BLOB:
- case Types.VARBINARY:
- case Types.BINARY:
- colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
- break;
- case Types.DISTINCT:
- colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
- break;
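+        // Teradata-specific types without a direct Exasol counterpart (TIME, GEOMETRY,
+        // INTERVAL, PERIOD, XML and LOB types) are mapped to VARCHAR columns of a suitable size.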
+ case Types.TIME:
+ colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
+ break;
+ case 2013: // Types.TIME_WITH_TIMEZONE is Java 1.8 specific
+ colType = DataType.createVarChar(21, DataType.ExaCharset.UTF8);
+ break;
+ case Types.NUMERIC:
+ final int decimalPrec = jdbcTypeDescription.getPrecisionOrSize();
+ final int decimalScale = jdbcTypeDescription.getDecimalScale();
+
+ if (decimalPrec <= DataType.maxExasolDecimalPrecision) {
+ colType = DataType.createDecimal(decimalPrec, decimalScale);
+ } else {
+ colType = DataType.createDouble();
+ }
+ break;
+        case Types.OTHER: // Teradata JDBC uses OTHER for several data types such as GEOMETRY, INTERVAL, etc.
+ final String columnTypeName = jdbcTypeDescription.getTypeName();
+
+ if (columnTypeName.equals("GEOMETRY")) {
+ colType = DataType.createVarChar(jdbcTypeDescription.getPrecisionOrSize(), DataType.ExaCharset.UTF8);
+ } else if (columnTypeName.startsWith("INTERVAL")) {
+ colType = DataType.createVarChar(30, DataType.ExaCharset.UTF8); // TODO verify that varchar 30 is
+ // sufficient in all cases
+ } else if (columnTypeName.startsWith("PERIOD")) {
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ } else {
+ colType = DataType.createVarChar(TeradataSqlDialect.maxTeradataVarcharSize, DataType.ExaCharset.UTF8);
+ }
+ break;
+
+ case Types.SQLXML:
+ colType = DataType.createVarChar(TeradataSqlDialect.maxTeradataVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.CLOB:
+ colType = DataType.createVarChar(TeradataSqlDialect.maxTeradataVarcharSize, DataType.ExaCharset.UTF8);
+ break;
+
+ case Types.BLOB:
+ case Types.VARBINARY:
+ case Types.BINARY:
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ break;
+ case Types.DISTINCT:
+ colType = DataType.createVarChar(100, DataType.ExaCharset.UTF8);
+ break;
}
return colType;
}
-
-
- @Override
- public SchemaOrCatalogSupport supportsJdbcCatalogs() {
+
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcCatalogs() {
return SchemaOrCatalogSupport.UNSUPPORTED;
- }
+ }
- @Override
- public SchemaOrCatalogSupport supportsJdbcSchemas() {
+ @Override
+ public SchemaOrCatalogSupport supportsJdbcSchemas() {
return SchemaOrCatalogSupport.SUPPORTED;
- }
+ }
- @Override
- public SqlGenerationVisitor getSqlGenerationVisitor(SqlGenerationContext context) {
+ @Override
+ public SqlGenerationVisitor getSqlGenerationVisitor(final SqlGenerationContext context) {
return new TeradataSqlGenerationVisitor(this, context);
}
-
- @Override
- public IdentifierCaseHandling getUnquotedIdentifierHandling() {
- return IdentifierCaseHandling.INTERPRET_AS_UPPER;
- }
-
- @Override
- public IdentifierCaseHandling getQuotedIdentifierHandling() {
+
+ @Override
+ public IdentifierCaseHandling getUnquotedIdentifierHandling() {
+ return IdentifierCaseHandling.INTERPRET_AS_UPPER;
+ }
+
+ @Override
+ public IdentifierCaseHandling getQuotedIdentifierHandling() {
return IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
- }
-
- @Override
- public String applyQuote(String identifier) {
- return "\"" + identifier.replace("\"", "\"\"") + "\"";
- }
-
- @Override
- public String applyQuoteIfNeeded(String identifier) {
- boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
- if (isSimpleIdentifier) {
- return identifier;
- } else {
- return applyQuote(identifier);
- }
- }
-
- @Override
- public boolean requiresCatalogQualifiedTableNames(
- SqlGenerationContext context) {
- return false;
- }
-
- @Override
- public boolean requiresSchemaQualifiedTableNames(
- SqlGenerationContext context) {
- return true;
- }
-
- @Override
- public NullSorting getDefaultNullSorting() {
- return NullSorting.NULLS_SORTED_HIGH;
- }
-
- @Override
- public String getStringLiteral(String value) {
- return "'" + value.replace("'", "''") + "'";
- }
-
- @Override
- public void handleException(SQLException exception, JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException {
- if (exceptionMode == JdbcAdapterProperties.ExceptionHandlingMode.IGNORE_INVALID_VIEWS) {
- if (exception.getMessage().contains("Teradata Database") && exception.getMessage().contains("Error 3807")) {
- return;
+ }
+
+ @Override
+ public String applyQuote(final String identifier) {
+ return "\"" + identifier.replace("\"", "\"\"") + "\"";
+ }
+
+ @Override
+ public String applyQuoteIfNeeded(final String identifier) {
+ final boolean isSimpleIdentifier = identifier.matches("^[A-Z][0-9A-Z_]*");
+ if (isSimpleIdentifier) {
+ return identifier;
+ } else {
+ return applyQuote(identifier);
+ }
+ }
+
+ @Override
+ public boolean requiresCatalogQualifiedTableNames(final SqlGenerationContext context) {
+ return false;
+ }
+
+ @Override
+ public boolean requiresSchemaQualifiedTableNames(final SqlGenerationContext context) {
+ return true;
+ }
+
+ @Override
+ public NullSorting getDefaultNullSorting() {
+ return NullSorting.NULLS_SORTED_HIGH;
+ }
+
+ @Override
+ public String getStringLiteral(final String value) {
+ return "'" + value.replace("'", "''") + "'";
+ }
+
+ @Override
+ public void handleException(final SQLException exception,
+ final JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException {
+ if (exceptionMode == JdbcAdapterProperties.ExceptionHandlingMode.IGNORE_INVALID_VIEWS) {
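+            // Teradata error 3807 ("object does not exist") typically points to an invalid
+            // view, e.g. one referencing a dropped table, so it is ignored in this mode.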
+ if (exception.getMessage().contains("Teradata Database") && exception.getMessage().contains("Error 3807")) {
+ return;
}
}
- throw exception;
- };
+ throw exception;
+ };
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapter.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapter.java
index 9010ec5bb..8f70e52cf 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapter.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapter.java
@@ -1,129 +1,148 @@
package com.exasol.adapter.jdbc;
+import java.io.OutputStream;
+import java.sql.*;
+import java.util.List;
+import java.util.Map;
+import java.util.logging.*;
+
import com.exasol.ExaConnectionInformation;
import com.exasol.ExaMetadata;
import com.exasol.adapter.AdapterException;
import com.exasol.adapter.capabilities.*;
import com.exasol.adapter.dialects.*;
-import com.exasol.adapter.dialects.impl.*;
import com.exasol.adapter.json.RequestJsonParser;
import com.exasol.adapter.json.ResponseJsonSerializer;
-import com.exasol.adapter.metadata.DataType;
-import com.exasol.adapter.metadata.SchemaMetadata;
-import com.exasol.adapter.metadata.SchemaMetadataInfo;
+import com.exasol.adapter.metadata.*;
import com.exasol.adapter.request.*;
+import com.exasol.logging.CompactFormatter;
import com.exasol.utils.JsonHelper;
import com.exasol.utils.UdfUtils;
-import com.google.common.collect.ImmutableList;
-
-import java.sql.*;
-import java.util.List;
-import java.util.Map;
public class JdbcAdapter {
-
public static final int MAX_STRING_CHAR_LENGTH = 2000000;
-
- final static SqlDialects supportedDialects;
- static {
- supportedDialects = new SqlDialects(
- ImmutableList.of(
- GenericSqlDialect.NAME,
- ExasolSqlDialect.NAME,
- ImpalaSqlDialect.NAME,
- OracleSqlDialect.NAME,
- TeradataSqlDialect.NAME,
- RedshiftSqlDialect.NAME,
- HiveSqlDialect.NAME,
- DB2SqlDialect.NAME,
- SqlServerSqlDialect.NAME,
- PostgreSQLSqlDialect.NAME));
- }
+ private static Logger logger = null;
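+    // The logger is initialized lazily in configureLogger(), because the log output stream
+    // (remote debug socket or STDOUT) is only known once the first request arrives.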
/**
- * This method gets called by the database during interactions with the
- * virtual schema.
+ * This method gets called by the database during interactions with the virtual
+ * schema.
*
- * @param meta
- * Metadata object
- * @param input
- * json request, as defined in the Adapter Script API
- * @return json response, as defined in the Adapter Script API
+ * @param meta Metadata object
+ * @param input JSON request, as defined in the Adapter Script API
+ * @return JSON response, as defined in the Adapter Script API
*/
- public static String adapterCall(ExaMetadata meta, String input) throws Exception {
+ public static String adapterCall(final ExaMetadata meta, final String input) throws Exception {
String result = "";
try {
- AdapterRequest request = new RequestJsonParser().parseRequest(input);
- tryAttachToOutputService(request.getSchemaMetadataInfo());
- System.out.println("----------\nAdapter Request:\n----------\n" + input);
-
+ final AdapterRequest request = new RequestJsonParser().parseRequest(input);
+ final SchemaMetadataInfo schemaMetadata = request.getSchemaMetadataInfo();
+ configureLogOutput(schemaMetadata);
+ logger.fine(() -> "Adapter request:\n" + input);
+
switch (request.getType()) {
case CREATE_VIRTUAL_SCHEMA:
- result = handleCreateVirtualSchema((CreateVirtualSchemaRequest)request, meta);
+ result = handleCreateVirtualSchema((CreateVirtualSchemaRequest) request, meta);
break;
case DROP_VIRTUAL_SCHEMA:
- result = handleDropVirtualSchema((DropVirtualSchemaRequest)request);
+ result = handleDropVirtualSchema((DropVirtualSchemaRequest) request);
break;
case REFRESH:
- result = handleRefresh((RefreshRequest)request, meta);
+ result = handleRefresh((RefreshRequest) request, meta);
break;
case SET_PROPERTIES:
- result = handleSetProperty((SetPropertiesRequest)request, meta);
+ result = handleSetProperty((SetPropertiesRequest) request, meta);
break;
case GET_CAPABILITIES:
- result = handleGetCapabilities((GetCapabilitiesRequest)request);
+ result = handleGetCapabilities((GetCapabilitiesRequest) request);
break;
case PUSHDOWN:
- result = handlePushdownRequest((PushdownRequest)request, meta);
+ result = handlePushdownRequest((PushdownRequest) request, meta);
break;
default:
throw new RuntimeException("Request Type not supported: " + request.getType());
}
- assert(result.isEmpty());
- System.out.println("----------\nResponse:\n----------\n" + JsonHelper.prettyJson(JsonHelper.getJsonObject(result)));
+ assert (result.isEmpty());
+ logger.fine("Response:\n" + JsonHelper.prettyJson(JsonHelper.getJsonObject(result)));
return result;
- } catch (AdapterException ex) {
- throw ex;
+ } catch (final AdapterException e) {
+ throw e;
+ } catch (final Exception e) {
+ throw new Exception("Unexpected error in adapter for following request: " + input + "\nResponse: " + result,
+ e);
}
- catch (Exception ex) {
- String stacktrace = UdfUtils.traceToString(ex);
- throw new Exception("Unexpected error in adapter: " + ex.getMessage() + "\nStacktrace: " + stacktrace + "\nFor following request: " + input + "\nResponse: " + result);
+ }
+
+ private static void configureLogOutput(final SchemaMetadataInfo schemaMetadata)
+ throws AdapterException, InvalidPropertyException {
+ final OutputStream out = tryAttachToOutputService(schemaMetadata);
+ if (out == null) {
+ // Fall back to regular STDOUT in case the socket output stream is not
+ // available. In most cases (except unit test scenarios) this will mean that
+ // logs will not be available.
+ configureLogger(System.out, schemaMetadata.getProperties());
+ } else {
+ configureLogger(out, schemaMetadata.getProperties());
+ }
+
+ }
+
+ private static synchronized void configureLogger(final OutputStream out, final Map properties)
+ throws InvalidPropertyException {
+ if (logger == null) {
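+            // One-time setup: attach a StreamHandler with the compact formatter to the
+            // "com.exasol" parent logger so that all adapter classes write to the same stream.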
+ final Level logLevel = determineLogLevel(properties);
+ final Formatter formatter = new CompactFormatter();
+ final StreamHandler handler = new StreamHandler(out, formatter);
+ handler.setFormatter(formatter);
+ handler.setLevel(logLevel);
+ final Logger baseLogger = Logger.getLogger("com.exasol");
+ baseLogger.setLevel(logLevel);
+ baseLogger.addHandler(handler);
+ logger = Logger.getLogger(JdbcAdapter.class.getName());
+ logger.info(() -> "Attached to output service with log level " + logLevel + ".");
}
}
- private static String handleCreateVirtualSchema(CreateVirtualSchemaRequest request, ExaMetadata meta) throws SQLException, AdapterException {
- JdbcAdapterProperties.checkPropertyConsistency(request.getSchemaMetadataInfo().getProperties(), supportedDialects);
- SchemaMetadata remoteMeta = readMetadata(request.getSchemaMetadataInfo(), meta);
+ private static Level determineLogLevel(final Map properties) throws InvalidPropertyException {
+ return (JdbcAdapterProperties.getLogLevel(properties) == null) //
+ ? Level.INFO //
+ : JdbcAdapterProperties.getLogLevel(properties);
+ }
+
+ private static String handleCreateVirtualSchema(final CreateVirtualSchemaRequest request, final ExaMetadata meta)
+ throws SQLException, AdapterException {
+ final SchemaMetadataInfo schemaMetadata = request.getSchemaMetadataInfo();
+ final Map properties = schemaMetadata.getProperties();
+ configureLogOutput(schemaMetadata);
+ JdbcAdapterProperties.checkPropertyConsistency(properties);
+ final SchemaMetadata remoteMeta = readMetadata(schemaMetadata, meta);
return ResponseJsonSerializer.makeCreateVirtualSchemaResponse(remoteMeta);
}
-
- private static SchemaMetadata readMetadata(SchemaMetadataInfo schemaMeta, ExaMetadata meta) throws SQLException, AdapterException {
- List tables = JdbcAdapterProperties.getTableFilter(schemaMeta.getProperties());
+
+ private static SchemaMetadata readMetadata(final SchemaMetadataInfo schemaMeta, final ExaMetadata meta)
+ throws SQLException, AdapterException {
+ final List tables = JdbcAdapterProperties.getTableFilter(schemaMeta.getProperties());
return readMetadata(schemaMeta, tables, meta);
}
- private static SchemaMetadata readMetadata(SchemaMetadataInfo meta, List tables, ExaMetadata exaMeta) throws SQLException, AdapterException {
+ private static SchemaMetadata readMetadata(final SchemaMetadataInfo meta, final List tables,
+ final ExaMetadata exaMeta) throws SQLException, AdapterException {
// Connect via JDBC and read metadata
- ExaConnectionInformation connection = JdbcAdapterProperties.getConnectionInformation(meta.getProperties(), exaMeta);
- String catalog = JdbcAdapterProperties.getCatalog(meta.getProperties());
- String schema = JdbcAdapterProperties.getSchema(meta.getProperties());
- return JdbcMetadataReader.readRemoteMetadata(
- connection.getAddress(),
- connection.getUser(),
- connection.getPassword(),
- catalog,
- schema,
- tables,
- supportedDialects,
- JdbcAdapterProperties.getSqlDialectName(meta.getProperties(), supportedDialects),
+ final ExaConnectionInformation connection = JdbcAdapterProperties.getConnectionInformation(meta.getProperties(),
+ exaMeta);
+ final String catalog = JdbcAdapterProperties.getCatalog(meta.getProperties());
+ final String schema = JdbcAdapterProperties.getSchema(meta.getProperties());
+ return JdbcMetadataReader.readRemoteMetadata(connection.getAddress(), connection.getUser(),
+ connection.getPassword(), catalog, schema, tables,
+ JdbcAdapterProperties.getSqlDialectName(meta.getProperties()),
JdbcAdapterProperties.getExceptionHandlingMode(meta.getProperties()));
}
-
- private static String handleRefresh(RefreshRequest request, ExaMetadata meta) throws SQLException, AdapterException {
+
+ private static String handleRefresh(final RefreshRequest request, final ExaMetadata meta)
+ throws SQLException, AdapterException {
SchemaMetadata remoteMeta;
- JdbcAdapterProperties.checkPropertyConsistency(request.getSchemaMetadataInfo().getProperties(), supportedDialects);
+ JdbcAdapterProperties.checkPropertyConsistency(request.getSchemaMetadataInfo().getProperties());
if (request.isRefreshForTables()) {
- List tables = request.getTables();
+ final List tables = request.getTables();
remoteMeta = readMetadata(request.getSchemaMetadataInfo(), tables, meta);
} else {
remoteMeta = readMetadata(request.getSchemaMetadataInfo(), meta);
@@ -131,59 +150,61 @@ private static String handleRefresh(RefreshRequest request, ExaMetadata meta) th
return ResponseJsonSerializer.makeRefreshResponse(remoteMeta);
}
- private static String handleSetProperty(SetPropertiesRequest request, ExaMetadata exaMeta) throws SQLException, AdapterException {
- Map changedProperties = request.getProperties();
- Map newSchemaMeta = JdbcAdapterProperties.getNewProperties(
- request.getSchemaMetadataInfo().getProperties(), changedProperties);
- JdbcAdapterProperties.checkPropertyConsistency(newSchemaMeta, supportedDialects);
+ private static String handleSetProperty(final SetPropertiesRequest request, final ExaMetadata exaMeta)
+ throws SQLException, AdapterException {
+ final Map changedProperties = request.getProperties();
+ final Map newSchemaMeta = JdbcAdapterProperties
+ .getNewProperties(request.getSchemaMetadataInfo().getProperties(), changedProperties);
+ JdbcAdapterProperties.checkPropertyConsistency(newSchemaMeta);
if (JdbcAdapterProperties.isRefreshNeeded(changedProperties)) {
- ExaConnectionInformation connection = JdbcAdapterProperties.getConnectionInformation(newSchemaMeta, exaMeta);
- List tableFilter = JdbcAdapterProperties.getTableFilter(newSchemaMeta);
- SchemaMetadata remoteMeta = JdbcMetadataReader.readRemoteMetadata(
- connection.getAddress(),
- connection.getUser(),
- connection.getPassword(),
- JdbcAdapterProperties.getCatalog(newSchemaMeta),
- JdbcAdapterProperties.getSchema(newSchemaMeta),
- tableFilter,
- supportedDialects,
- JdbcAdapterProperties.getSqlDialectName(newSchemaMeta, supportedDialects),
+ final ExaConnectionInformation connection = JdbcAdapterProperties.getConnectionInformation(newSchemaMeta,
+ exaMeta);
+ final List tableFilter = JdbcAdapterProperties.getTableFilter(newSchemaMeta);
+ final SchemaMetadata remoteMeta = JdbcMetadataReader.readRemoteMetadata(connection.getAddress(),
+ connection.getUser(), connection.getPassword(), JdbcAdapterProperties.getCatalog(newSchemaMeta),
+ JdbcAdapterProperties.getSchema(newSchemaMeta), tableFilter,
+ JdbcAdapterProperties.getSqlDialectName(newSchemaMeta),
JdbcAdapterProperties.getExceptionHandlingMode(newSchemaMeta));
return ResponseJsonSerializer.makeSetPropertiesResponse(remoteMeta);
}
return ResponseJsonSerializer.makeSetPropertiesResponse(null);
}
- private static String handleDropVirtualSchema(DropVirtualSchemaRequest request) {
+ private static String handleDropVirtualSchema(final DropVirtualSchemaRequest request) {
return ResponseJsonSerializer.makeDropVirtualSchemaResponse();
}
-
- public static String handleGetCapabilities(GetCapabilitiesRequest request) throws AdapterException {
- SqlDialectContext dialectContext = new SqlDialectContext(SchemaAdapterNotes.deserialize(request.getSchemaMetadataInfo().getAdapterNotes(), request.getSchemaMetadataInfo().getSchemaName()));
- SqlDialect dialect = JdbcAdapterProperties.getSqlDialect(request.getSchemaMetadataInfo().getProperties(), supportedDialects, dialectContext);
- Capabilities capabilities = dialect.getCapabilities();
- Capabilities excludedCapabilities = parseExcludedCapabilities(
+
+ public static String handleGetCapabilities(final GetCapabilitiesRequest request) throws AdapterException {
+ final SqlDialectContext dialectContext = new SqlDialectContext(SchemaAdapterNotes.deserialize(
+ request.getSchemaMetadataInfo().getAdapterNotes(), request.getSchemaMetadataInfo().getSchemaName()));
+ final SqlDialect dialect = JdbcAdapterProperties.getSqlDialect(request.getSchemaMetadataInfo().getProperties(),
+ dialectContext);
+ final Capabilities capabilities = dialect.getCapabilities();
+ final Capabilities excludedCapabilities = parseExcludedCapabilities(
JdbcAdapterProperties.getExcludedCapabilities(request.getSchemaMetadataInfo().getProperties()));
capabilities.subtractCapabilities(excludedCapabilities);
return ResponseJsonSerializer.makeGetCapabilitiesResponse(capabilities);
}
-
- private static Capabilities parseExcludedCapabilities(String excludedCapabilitiesStr) {
- System.out.println("Excluded Capabilities: " + excludedCapabilitiesStr);
- Capabilities excludedCapabilities = new Capabilities();
- for (String cap : excludedCapabilitiesStr.split(",")) {
+
+ private static Capabilities parseExcludedCapabilities(final String excludedCapabilitiesStr) {
+ logger.info(() -> "Excluded Capabilities: "
+ + (excludedCapabilitiesStr.isEmpty() ? "none" : excludedCapabilitiesStr));
+ final Capabilities excludedCapabilities = new Capabilities();
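+        // The excluded capabilities are a comma-separated list; prefixes distinguish literal,
+        // aggregate function and scalar function capabilities from high-level capabilities.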
+ for (final String cap : excludedCapabilitiesStr.split(",")) {
if (cap.trim().isEmpty()) {
continue;
}
if (cap.startsWith(ResponseJsonSerializer.LITERAL_PREFIX)) {
- String literalCap = cap.replaceFirst(ResponseJsonSerializer.LITERAL_PREFIX, "");
+ final String literalCap = cap.replaceFirst(ResponseJsonSerializer.LITERAL_PREFIX, "");
excludedCapabilities.supportLiteral(LiteralCapability.valueOf(literalCap));
} else if (cap.startsWith(ResponseJsonSerializer.AGGREGATE_FUNCTION_PREFIX)) {
// Aggregate functions must be checked before scalar functions
- String aggregateFunctionCap = cap.replaceFirst(ResponseJsonSerializer.AGGREGATE_FUNCTION_PREFIX, "");
- excludedCapabilities.supportAggregateFunction(AggregateFunctionCapability.valueOf(aggregateFunctionCap));
+ final String aggregateFunctionCap = cap.replaceFirst(ResponseJsonSerializer.AGGREGATE_FUNCTION_PREFIX,
+ "");
+ excludedCapabilities
+ .supportAggregateFunction(AggregateFunctionCapability.valueOf(aggregateFunctionCap));
} else if (cap.startsWith(ResponseJsonSerializer.SCALAR_FUNCTION_PREFIX)) {
- String scalarFunctionCap = cap.replaceFirst(ResponseJsonSerializer.SCALAR_FUNCTION_PREFIX, "");
+ final String scalarFunctionCap = cap.replaceFirst(ResponseJsonSerializer.SCALAR_FUNCTION_PREFIX, "");
excludedCapabilities.supportScalarFunction(ScalarFunctionCapability.valueOf(scalarFunctionCap));
} else {
// High Level Capability
@@ -193,16 +214,23 @@ private static Capabilities parseExcludedCapabilities(String excludedCapabilitie
return excludedCapabilities;
}
- private static String handlePushdownRequest(PushdownRequest request, ExaMetadata exaMeta) throws AdapterException {
+ private static String handlePushdownRequest(final PushdownRequest request, final ExaMetadata exaMeta)
+ throws AdapterException {
// Generate SQL pushdown query
- SchemaMetadataInfo meta = request.getSchemaMetadataInfo();
- SqlDialectContext dialectContext = new SqlDialectContext(SchemaAdapterNotes.deserialize(request.getSchemaMetadataInfo().getAdapterNotes(), request.getSchemaMetadataInfo().getSchemaName()));
- SqlDialect dialect = JdbcAdapterProperties.getSqlDialect(request.getSchemaMetadataInfo().getProperties(), supportedDialects, dialectContext);
- SqlGenerationContext context = new SqlGenerationContext(JdbcAdapterProperties.getCatalog(meta.getProperties()), JdbcAdapterProperties.getSchema(meta.getProperties()), JdbcAdapterProperties.isLocal(meta.getProperties()));
- SqlGenerationVisitor sqlGeneratorVisitor = dialect.getSqlGenerationVisitor(context);
- String pushdownQuery = request.getSelect().accept(sqlGeneratorVisitor);
+ final SchemaMetadataInfo meta = request.getSchemaMetadataInfo();
+ final SqlDialectContext dialectContext = new SqlDialectContext(SchemaAdapterNotes.deserialize(
+ request.getSchemaMetadataInfo().getAdapterNotes(), request.getSchemaMetadataInfo().getSchemaName()));
+ final SqlDialect dialect = JdbcAdapterProperties.getSqlDialect(request.getSchemaMetadataInfo().getProperties(),
+ dialectContext);
+ final SqlGenerationContext context = new SqlGenerationContext(
+ JdbcAdapterProperties.getCatalog(meta.getProperties()),
+ JdbcAdapterProperties.getSchema(meta.getProperties()),
+ JdbcAdapterProperties.isLocal(meta.getProperties()));
+ final SqlGenerationVisitor sqlGeneratorVisitor = dialect.getSqlGenerationVisitor(context);
+ final String pushdownQuery = request.getSelect().accept(sqlGeneratorVisitor);
- ExaConnectionInformation connection = JdbcAdapterProperties.getConnectionInformation(meta.getProperties(), exaMeta);
+ final ExaConnectionInformation connection = JdbcAdapterProperties.getConnectionInformation(meta.getProperties(),
+ exaMeta);
String credentials = "";
if (connection.getUser() != null || connection.getPassword() != null) {
credentials = "USER '" + connection.getUser() + "' IDENTIFIED BY '" + connection.getPassword() + "'";
@@ -213,13 +241,11 @@ private static String handlePushdownRequest(PushdownRequest request, ExaMetadata
sql = pushdownQuery;
} else if (JdbcAdapterProperties.isImportFromExa(meta.getProperties())) {
sql = String.format("IMPORT FROM EXA AT '%s' %s STATEMENT '%s'",
- JdbcAdapterProperties.getExaConnectionString(meta.getProperties()),
- credentials,
+ JdbcAdapterProperties.getExaConnectionString(meta.getProperties()), credentials,
pushdownQuery.replace("'", "''"));
} else if (JdbcAdapterProperties.isImportFromOra(meta.getProperties())) {
sql = String.format("IMPORT FROM ORA AT %s %s STATEMENT '%s'",
- JdbcAdapterProperties.getOraConnectionName(meta.getProperties()),
- credentials,
+ JdbcAdapterProperties.getOraConnectionName(meta.getProperties()), credentials,
pushdownQuery.replace("'", "''"));
} else {
if (JdbcAdapterProperties.userSpecifiedConnection(meta.getProperties())) {
@@ -228,53 +254,48 @@ private static String handlePushdownRequest(PushdownRequest request, ExaMetadata
credentials = "'" + connection.getAddress() + "' " + credentials;
}
- String columnDescription = createColumnDescription(exaMeta, meta, pushdownQuery, dialect);
+ final String columnDescription = createColumnDescription(exaMeta, meta, pushdownQuery, dialect);
if (columnDescription == null) {
- sql = String.format("IMPORT FROM JDBC AT %s STATEMENT '%s'",
- credentials,
+ sql = String.format("IMPORT FROM JDBC AT %s STATEMENT '%s'", credentials,
pushdownQuery.replace("'", "''"));
} else {
- sql = String.format("IMPORT INTO %s FROM JDBC AT %s STATEMENT '%s'",
- columnDescription,
- credentials,
+ sql = String.format("IMPORT INTO %s FROM JDBC AT %s STATEMENT '%s'", columnDescription, credentials,
pushdownQuery.replace("'", "''"));
}
}
-
+
return ResponseJsonSerializer.makePushdownResponse(sql);
}
- private static String createColumnDescription(ExaMetadata exaMeta,
- SchemaMetadataInfo meta,
- String pushdownQuery,
- SqlDialect dialect) throws AdapterException {
+ private static String createColumnDescription(final ExaMetadata exaMeta, final SchemaMetadataInfo meta,
+ final String pushdownQuery, final SqlDialect dialect) throws AdapterException {
PreparedStatement ps = null;
- ExaConnectionInformation connectionInformation = JdbcAdapterProperties.getConnectionInformation(meta.getProperties(), exaMeta);
+ final ExaConnectionInformation connectionInformation = JdbcAdapterProperties
+ .getConnectionInformation(meta.getProperties(), exaMeta);
Connection connection = null;
- int val = -1;
try {
connection = establishConnection(connectionInformation);
+ logger.fine(() -> "createColumnDescription: " + pushdownQuery);
ps = connection.prepareStatement(pushdownQuery);
- ResultSetMetaData metadata=ps.getMetaData();
- if (metadata==null){
+ ResultSetMetaData metadata = ps.getMetaData();
+ if (metadata == null) {
ps.execute();
- metadata=ps.getMetaData();
- if (metadata==null) {
+ metadata = ps.getMetaData();
+ if (metadata == null) {
throw new SQLException("getMetaData() failed");
}
}
- DataType[] internalTypes = new DataType[metadata.getColumnCount()];
- for(int col=1; col <= metadata.getColumnCount(); ++col) {
- int jdbcType = metadata.getColumnType(col);
- int jdbcPrecisions = metadata.getPrecision(col);
- int jdbcScales = metadata.getScale(col);
- JdbcTypeDescription description = new JdbcTypeDescription(jdbcType,
- jdbcScales, jdbcPrecisions, 0,
+ final DataType[] internalTypes = new DataType[metadata.getColumnCount()];
+ for (int col = 1; col <= metadata.getColumnCount(); ++col) {
+ final int jdbcType = metadata.getColumnType(col);
+ final int jdbcPrecisions = metadata.getPrecision(col);
+ final int jdbcScales = metadata.getScale(col);
+ final JdbcTypeDescription description = new JdbcTypeDescription(jdbcType, jdbcScales, jdbcPrecisions, 0,
metadata.getColumnTypeName(col));
internalTypes[col - 1] = dialect.mapJdbcType(description);
}
- StringBuffer buffer = new StringBuffer();
+ final StringBuffer buffer = new StringBuffer();
buffer.append('(');
for (int i = 0; i < internalTypes.length; i++) {
buffer.append("c");
@@ -288,19 +309,18 @@ private static String createColumnDescription(ExaMetadata exaMeta,
buffer.append(')');
return buffer.toString();
- } catch (SQLException e) {
- throw new RuntimeException("Cannot resolve column types." + e.getMessage());
-
+ } catch (final SQLException e) {
+ throw new RuntimeException("Cannot resolve column types.", e);
}
}
- private static Connection establishConnection(ExaConnectionInformation connection) throws SQLException {
+ private static Connection establishConnection(final ExaConnectionInformation connection) throws SQLException {
final String connectionString = connection.getAddress();
final String user = connection.getUser();
final String password = connection.getPassword();
- System.out.println("conn: " + connectionString);
+ logger.fine(() -> "Connection parameters: " + connectionString);
- java.util.Properties info = new java.util.Properties();
+ final java.util.Properties info = new java.util.Properties();
if (user != null) {
info.put("user", user);
}
@@ -310,8 +330,7 @@ private static Connection establishConnection(ExaConnectionInformation connectio
if (KerberosUtils.isKerberosAuth(password)) {
try {
KerberosUtils.configKerberos(user, password);
- }
- catch (Exception e) {
+ } catch (final Exception e) {
e.printStackTrace();
throw new RuntimeException("Error configuring Kerberos: " + e.getMessage(), e);
}
@@ -320,17 +339,18 @@ private static Connection establishConnection(ExaConnectionInformation connectio
}
// Forward stdout to an external output service
- private static void tryAttachToOutputService(SchemaMetadataInfo meta) throws AdapterException {
- String debugAddress = JdbcAdapterProperties.getDebugAddress(meta.getProperties());
+ private static OutputStream tryAttachToOutputService(final SchemaMetadataInfo meta) throws AdapterException {
+ final String debugAddress = JdbcAdapterProperties.getDebugAddress(meta.getProperties());
if (!debugAddress.isEmpty()) {
try {
- String debugHost = debugAddress.split(":")[0];
- int debugPort = Integer.parseInt(debugAddress.split(":")[1]);
- UdfUtils.tryAttachToOutputService(debugHost, debugPort);
- } catch (Exception ex) {
- throw new AdapterException("You have to specify a valid hostname and port for the udf debug service, e.g. 'hostname:3000'");
+ final String debugHost = debugAddress.split(":")[0];
+ final int debugPort = Integer.parseInt(debugAddress.split(":")[1]);
+ return UdfUtils.tryAttachToOutputService(debugHost, debugPort);
+ } catch (final Exception ex) {
+ throw new AdapterException(
+                        "You have to specify a valid hostname and port for the UDF debug service, e.g. 'hostname:3000'");
}
}
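+        // No debug address configured: the caller falls back to logging via STDOUT.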
+ return null;
}
-
-}
+}
\ No newline at end of file
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapterProperties.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapterProperties.java
index 33e4d9067..f73cd97c1 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapterProperties.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcAdapterProperties.java
@@ -1,5 +1,13 @@
package com.exasol.adapter.jdbc;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.logging.Level;
+import java.util.logging.Logger;
+
import com.exasol.ExaConnectionAccessException;
import com.exasol.ExaConnectionInformation;
import com.exasol.ExaMetadata;
@@ -8,12 +16,12 @@
import com.exasol.adapter.dialects.SqlDialectContext;
import com.exasol.adapter.dialects.SqlDialects;
-import java.util.*;
-
/**
- * Class to expose a nice interface to properties. Casts to the correct data types, checks for valid property values and consistency.
+ * Class to expose a nice interface to properties. Casts to the correct data
+ * types, checks for valid property values and consistency.
*/
-public class JdbcAdapterProperties {
+public final class JdbcAdapterProperties {
+ static final Logger LOGGER = Logger.getLogger(JdbcAdapterProperties.class.getName());
// One of the following needs to be set
static final String PROP_CATALOG_NAME = "CATALOG_NAME";
@@ -34,86 +42,96 @@ public class JdbcAdapterProperties {
static final String PROP_ORA_CONNECTION_NAME = "ORA_CONNECTION_NAME";
static final String PROP_EXCLUDED_CAPABILITIES = "EXCLUDED_CAPABILITIES";
static final String PROP_EXCEPTION_HANDLING = "EXCEPTION_HANDLING";
+ static final String PROP_LOG_LEVEL = "LOG_LEVEL";
+
+ private static final String DEFAULT_LOG_LEVEL = "INFO";
+
+ private JdbcAdapterProperties() {
+ // prevent instantiation of static helper class
+ }
// Specifies different exception handling strategies
public enum ExceptionHandlingMode {
- IGNORE_INVALID_VIEWS,
- NONE
+ IGNORE_INVALID_VIEWS, NONE
}
- private static String getProperty(Map properties, String name, String defaultValue) {
+ private static String getProperty(final Map properties, final String name,
+ final String defaultValue) {
if (properties.containsKey(name)) {
return properties.get(name);
} else {
return defaultValue;
}
}
-
- public static String getCatalog(Map properties) {
+
+ public static String getCatalog(final Map properties) {
return getProperty(properties, PROP_CATALOG_NAME, "");
}
-
- public static String getSchema(Map properties) {
+
+ public static String getSchema(final Map properties) {
return getProperty(properties, PROP_SCHEMA_NAME, "");
}
- public static boolean userSpecifiedConnection(Map properties) {
- String connName = getProperty(properties, PROP_CONNECTION_NAME, "");
+ public static boolean userSpecifiedConnection(final Map properties) {
+ final String connName = getProperty(properties, PROP_CONNECTION_NAME, "");
return (connName != null && !connName.isEmpty());
}
- public static String getConnectionName(Map properties) {
- String connName = getProperty(properties, PROP_CONNECTION_NAME, "");
- assert(connName != null && !connName.isEmpty());
+ public static String getConnectionName(final Map properties) {
+ final String connName = getProperty(properties, PROP_CONNECTION_NAME, "");
+ assert (connName != null && !connName.isEmpty());
return connName;
}
/**
- * Returns the credentials for the remote system. These are either directly specified
- * in the properties or obtained from a connection (requires privilege to access the connection
- * .
+ * Returns the credentials for the remote system. These are either directly
+     * specified in the properties or obtained from a connection (requires the
+     * privilege to access the connection).
*/
- public static ExaConnectionInformation getConnectionInformation(Map properties, ExaMetadata exaMeta) {
- String connName = getProperty(properties, PROP_CONNECTION_NAME, "");
+ public static ExaConnectionInformation getConnectionInformation(final Map properties,
+ final ExaMetadata exaMeta) {
+ final String connName = getProperty(properties, PROP_CONNECTION_NAME, "");
if (connName != null && !connName.isEmpty()) {
try {
- ExaConnectionInformation connInfo = exaMeta.getConnection(connName);
+ final ExaConnectionInformation connInfo = exaMeta.getConnection(connName);
return connInfo;
- } catch (ExaConnectionAccessException e) {
- throw new RuntimeException("Could not access the connection information of connection " + connName + ". Error: " + e.toString());
+ } catch (final ExaConnectionAccessException e) {
+ throw new RuntimeException("Could not access the connection information of connection " + connName
+ + ". Error: " + e.toString());
}
} else {
- String connectionString = properties.get(PROP_CONNECTION_STRING);
- String user = properties.get(PROP_USERNAME);
- String password = properties.get(PROP_PASSWORD);
+ final String connectionString = properties.get(PROP_CONNECTION_STRING);
+ final String user = properties.get(PROP_USERNAME);
+ final String password = properties.get(PROP_PASSWORD);
return new ExaConnectionInformationJdbc(connectionString, user, password);
}
}
-
- public static void checkPropertyConsistency(Map properties, SqlDialects supportedDialects) throws AdapterException {
- validatePropertyValues(properties);
-
- checkMandatoryProperties(properties, supportedDialects);
+ public static void checkPropertyConsistency(final Map properties) throws AdapterException {
+ validatePropertyValues(properties);
+ checkMandatoryProperties(properties);
checkImportPropertyConsistency(properties, PROP_IMPORT_FROM_EXA, PROP_EXA_CONNECTION_STRING);
checkImportPropertyConsistency(properties, PROP_IMPORT_FROM_ORA, PROP_ORA_CONNECTION_NAME);
}
- private static void checkImportPropertyConsistency(Map properties, String propImportFromX, String propConnection) throws InvalidPropertyException {
- boolean isImport = getProperty(properties, propImportFromX, "").toUpperCase().equals("TRUE");
- boolean connectionIsEmpty = getProperty(properties, propConnection, "").isEmpty();
+ private static void checkImportPropertyConsistency(final Map properties,
+ final String propImportFromX, final String propConnection) throws InvalidPropertyException {
+ final boolean isImport = getProperty(properties, propImportFromX, "").toUpperCase().equals("TRUE");
+ final boolean connectionIsEmpty = getProperty(properties, propConnection, "").isEmpty();
if (isImport) {
if (connectionIsEmpty) {
- throw new InvalidPropertyException("You defined the property " + propImportFromX + ", please also define " + propConnection);
+ throw new InvalidPropertyException(
+ "You defined the property " + propImportFromX + ", please also define " + propConnection);
}
} else {
if (!connectionIsEmpty) {
- throw new InvalidPropertyException("You defined the property " + propConnection + " without setting " + propImportFromX + " to 'TRUE'. This is not allowed");
+ throw new InvalidPropertyException("You defined the property " + propConnection + " without setting "
+ + propImportFromX + " to 'TRUE'. This is not allowed");
}
}
}
- private static void validatePropertyValues(Map properties) throws AdapterException {
+ private static void validatePropertyValues(final Map properties) throws AdapterException {
validateBooleanProperty(properties, PROP_IS_LOCAL);
validateBooleanProperty(properties, PROP_IMPORT_FROM_EXA);
validateBooleanProperty(properties, PROP_IMPORT_FROM_ORA);
@@ -124,81 +142,90 @@ private static void validatePropertyValues(Map properties) throw
validateExceptionHandling(properties.get(PROP_EXCEPTION_HANDLING));
}
}
-
- private static void validateBooleanProperty(Map properties, String property) throws AdapterException {
+
+ private static void validateBooleanProperty(final Map properties, final String property)
+ throws AdapterException {
if (properties.containsKey(property)) {
if (!properties.get(property).toUpperCase().matches("^TRUE$|^FALSE$")) {
- throw new InvalidPropertyException("The value '" + properties.get(property) + "' for the property " + property + " is invalid. It has to be either 'true' or 'false' (case insensitive).");
+ throw new InvalidPropertyException("The value '" + properties.get(property) + "' for the property "
+ + property + " is invalid. It has to be either 'true' or 'false' (case insensitive).");
}
}
}
- private static void validateDebugOutputAddress(String debugAddress) throws AdapterException {
+ private static void validateDebugOutputAddress(final String debugAddress) throws AdapterException {
if (!debugAddress.isEmpty()) {
- String error = "You specified an invalid hostname and port for the udf debug service (" + PROP_DEBUG_ADDRESS + "). Please provide a valid value, e.g. 'hostname:3000'";
- try {
- String debugHost = debugAddress.split(":")[0];
- int debugPort = Integer.parseInt(debugAddress.split(":")[1]);
- } catch (Exception ex) {
+            final String error = "You specified an invalid hostname and port for the UDF debug service ("
+ + PROP_DEBUG_ADDRESS + "). Please provide a valid value, e.g. 'hostname:3000'";
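+            // The debug address must consist of exactly one host and one numeric port part.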
+ if (debugAddress.split(":").length != 2) {
throw new AdapterException(error);
}
- if (debugAddress.split(":").length != 2) {
+ try {
+ Integer.parseInt(debugAddress.split(":")[1]);
+ } catch (final Exception ex) {
throw new AdapterException(error);
}
}
}
- private static void validateExceptionHandling(String exceptionHandling) throws AdapterException {
+ private static void validateExceptionHandling(final String exceptionHandling) throws AdapterException {
if (!(exceptionHandling == null || exceptionHandling.isEmpty())) {
- for (ExceptionHandlingMode mode : ExceptionHandlingMode.values()) {
+ for (final ExceptionHandlingMode mode : ExceptionHandlingMode.values()) {
if (mode.name().equals(exceptionHandling)) {
return;
}
}
- String error = "You specified an invalid exception mode (" + exceptionHandling + ").";
+ final String error = "You specified an invalid exception mode (" + exceptionHandling + ").";
throw new AdapterException(error);
}
}
- private static void checkMandatoryProperties(Map properties, SqlDialects supportedDialects) throws AdapterException {
+ private static void checkMandatoryProperties(final Map properties) throws AdapterException {
+ final String availableDialects = "Available dialects: " + SqlDialects.getInstance().getDialectsString();
if (!properties.containsKey(PROP_SQL_DIALECT)) {
- throw new InvalidPropertyException("You have to specify the SQL dialect (" + PROP_SQL_DIALECT + "). Available dialects: " + supportedDialects.getDialectsString());
+ throw new InvalidPropertyException(
+ "You have to specify the SQL dialect (" + PROP_SQL_DIALECT + "). " + availableDialects);
}
- if (!supportedDialects.isSupported(properties.get(PROP_SQL_DIALECT))) {
- throw new InvalidPropertyException("SQL Dialect not supported: " + properties.get(PROP_SQL_DIALECT) + ". Available dialects: " + supportedDialects.getDialectsString());
+ if (!SqlDialects.getInstance().isSupported(properties.get(PROP_SQL_DIALECT))) {
+ throw new InvalidPropertyException(
+ "SQL Dialect \"" + properties.get(PROP_SQL_DIALECT) + "\" is not supported. " + availableDialects);
}
if (properties.containsKey(PROP_CONNECTION_NAME)) {
- if (properties.containsKey(PROP_CONNECTION_STRING) || properties.containsKey(PROP_USERNAME) || properties.containsKey(PROP_PASSWORD) ) {
- throw new InvalidPropertyException("You specified a connection (" + PROP_CONNECTION_NAME + ") and therefore may not specify the properties " + PROP_CONNECTION_STRING + ", " + PROP_USERNAME + " and " + PROP_PASSWORD);
+ if (properties.containsKey(PROP_CONNECTION_STRING) || properties.containsKey(PROP_USERNAME)
+ || properties.containsKey(PROP_PASSWORD)) {
+ throw new InvalidPropertyException("You specified a connection (" + PROP_CONNECTION_NAME
+ + ") and therefore may not specify the properties " + PROP_CONNECTION_STRING + ", "
+ + PROP_USERNAME + " and " + PROP_PASSWORD);
}
} else {
if (!properties.containsKey(PROP_CONNECTION_STRING)) {
- throw new InvalidPropertyException("You did not specify a connection (" + PROP_CONNECTION_NAME + ") and therefore have to specify the property " + PROP_CONNECTION_STRING);
+ throw new InvalidPropertyException("You did not specify a connection (" + PROP_CONNECTION_NAME
+ + ") and therefore have to specify the property " + PROP_CONNECTION_STRING);
}
}
}
-
- public static boolean isImportFromExa(Map properties) {
+
+ public static boolean isImportFromExa(final Map properties) {
return getProperty(properties, PROP_IMPORT_FROM_EXA, "").toUpperCase().equals("TRUE");
}
- public static boolean isImportFromOra(Map properties) {
+ public static boolean isImportFromOra(final Map properties) {
return getProperty(properties, PROP_IMPORT_FROM_ORA, "").toUpperCase().equals("TRUE");
}
- public static String getExaConnectionString(Map properties) {
+ public static String getExaConnectionString(final Map properties) {
return getProperty(properties, PROP_EXA_CONNECTION_STRING, "");
}
- public static String getOraConnectionName(Map properties) {
+ public static String getOraConnectionName(final Map properties) {
return getProperty(properties, PROP_ORA_CONNECTION_NAME, "");
}
- public static List getTableFilter(Map properties) {
- String tableNames = getProperty(properties, PROP_TABLES, "");
+ public static List getTableFilter(final Map properties) {
+ final String tableNames = getProperty(properties, PROP_TABLES, "");
if (!tableNames.isEmpty()) {
- List tables = Arrays.asList(tableNames.split(","));
- for (int i=0; i tables = Arrays.asList(tableNames.split(","));
+ for (int i = 0; i < tables.size(); ++i) {
tables.set(i, tables.get(i).trim());
}
return tables;
@@ -207,37 +234,40 @@ public static List getTableFilter(Map properties) {
}
}
- public static String getExcludedCapabilities(Map properties) {
+ public static String getExcludedCapabilities(final Map properties) {
return getProperty(properties, PROP_EXCLUDED_CAPABILITIES, "");
}
- public static String getDebugAddress(Map properties) {
+ public static String getDebugAddress(final Map properties) {
return getProperty(properties, PROP_DEBUG_ADDRESS, "");
}
- public static boolean isLocal(Map properties) {
+ public static boolean isLocal(final Map properties) {
return getProperty(properties, PROP_IS_LOCAL, "").toUpperCase().equals("TRUE");
}
- public static String getSqlDialectName(Map properties, SqlDialects supportedDialects) {
+ public static String getSqlDialectName(final Map properties) {
return getProperty(properties, PROP_SQL_DIALECT, "");
}
- public static SqlDialect getSqlDialect(Map properties, SqlDialects supportedDialects, SqlDialectContext dialectContext) throws AdapterException {
- String dialectName = getProperty(properties, PROP_SQL_DIALECT, "");
- SqlDialect dialect = supportedDialects.getDialectByName(dialectName, dialectContext);
+ public static SqlDialect getSqlDialect(final Map properties, final SqlDialectContext dialectContext)
+ throws InvalidPropertyException {
+ final String dialectName = getProperty(properties, PROP_SQL_DIALECT, "");
+ final SqlDialect dialect = SqlDialects.getInstance().getDialectInstanceForNameWithContext(dialectName,
+ dialectContext);
if (dialect == null) {
- throw new InvalidPropertyException("SQL Dialect not supported: " + dialectName + " - all dialects: " + supportedDialects.getDialectsString());
+ throw new InvalidPropertyException("SQL Dialect not supported: " + dialectName + " - all dialects: "
+ + SqlDialects.getInstance().getDialectsString());
}
return dialect;
}
- public static ExceptionHandlingMode getExceptionHandlingMode(Map properties) {
- String propertyValue = getProperty(properties, PROP_EXCEPTION_HANDLING, "");
+ public static ExceptionHandlingMode getExceptionHandlingMode(final Map properties) {
+ final String propertyValue = getProperty(properties, PROP_EXCEPTION_HANDLING, "");
if (propertyValue == null || propertyValue.isEmpty()) {
return ExceptionHandlingMode.NONE;
}
- for (ExceptionHandlingMode mode : ExceptionHandlingMode.values()) {
+ for (final ExceptionHandlingMode mode : ExceptionHandlingMode.values()) {
if (mode.name().equals(propertyValue)) {
return mode;
}
@@ -245,23 +275,29 @@ public static ExceptionHandlingMode getExceptionHandlingMode(Map
return ExceptionHandlingMode.NONE;
}
- public static boolean isRefreshNeeded(Map newProperties) {
- return newProperties.containsKey(PROP_CONNECTION_STRING)
- || newProperties.containsKey(PROP_CONNECTION_NAME)
- || newProperties.containsKey(PROP_USERNAME)
- || newProperties.containsKey(PROP_PASSWORD)
- || newProperties.containsKey(PROP_SCHEMA_NAME)
- || newProperties.containsKey(PROP_CATALOG_NAME)
+ public static Level getLogLevel(final Map properties) throws InvalidPropertyException {
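+        // Translate the optional LOG_LEVEL property into a java.util.logging.Level,
+        // falling back to the default level INFO if the property is not set.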
+ final String levelAsText = getProperty(properties, PROP_LOG_LEVEL, DEFAULT_LOG_LEVEL);
+ try {
+ return Level.parse(levelAsText);
+ } catch (IllegalArgumentException | NullPointerException e) {
+ throw new InvalidPropertyException("Unable to set log level \"" + levelAsText + "\"");
+ }
+ }
+
+ public static boolean isRefreshNeeded(final Map newProperties) {
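+        // Any change to connection or scope defining properties invalidates the cached
+        // metadata, so the virtual schema must be refreshed.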
+ return newProperties.containsKey(PROP_CONNECTION_STRING) || newProperties.containsKey(PROP_CONNECTION_NAME)
+ || newProperties.containsKey(PROP_USERNAME) || newProperties.containsKey(PROP_PASSWORD)
+ || newProperties.containsKey(PROP_SCHEMA_NAME) || newProperties.containsKey(PROP_CATALOG_NAME)
|| newProperties.containsKey(PROP_TABLES);
}
-
+
public static class ExaConnectionInformationJdbc implements ExaConnectionInformation {
-
- private String address;
- private String user; // can be null
- private String password; // can be null
-
- public ExaConnectionInformationJdbc(String address, String user, String password) {
+
+ private final String address;
+ private final String user; // can be null
+ private final String password; // can be null
+
+ public ExaConnectionInformationJdbc(final String address, final String user, final String password) {
this.address = address;
this.user = user;
this.password = password;
@@ -289,14 +325,16 @@ public String getPassword() {
}
/**
- * Returns the properties as they would be after successfully applying the changes to the existing (old) set of properties.
+ * Returns the properties as they would be after successfully applying the
+ * changes to the existing (old) set of properties.
*/
- public static Map getNewProperties (
- Map oldProperties, Map changedProperties) {
- Map newCompleteProperties = new HashMap<>(oldProperties);
- for (Map.Entry changedProperty : changedProperties.entrySet()) {
+ public static Map getNewProperties(final Map oldProperties,
+ final Map changedProperties) {
+ final Map newCompleteProperties = new HashMap<>(oldProperties);
+ for (final Map.Entry changedProperty : changedProperties.entrySet()) {
if (changedProperty.getValue() == null) {
- // Null values represent properties which are deleted by the user (might also have never existed actually)
+                // Null values represent properties which the user deleted (they might also
+                // never have existed in the first place)
newCompleteProperties.remove(changedProperty.getKey());
} else {
newCompleteProperties.put(changedProperty.getKey(), changedProperty.getValue());
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcMetadataReader.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcMetadataReader.java
index 6aff45af0..a8acf9167 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcMetadataReader.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/adapter/jdbc/JdbcMetadataReader.java
@@ -1,75 +1,64 @@
package com.exasol.adapter.jdbc;
-import com.exasol.adapter.AdapterException;
-import com.exasol.adapter.dialects.SqlDialect;
-import com.exasol.adapter.dialects.SqlDialectContext;
-import com.exasol.adapter.dialects.SqlDialects;
-import com.exasol.adapter.metadata.ColumnMetadata;
-import com.exasol.adapter.metadata.SchemaMetadata;
-import com.exasol.adapter.metadata.TableMetadata;
-import com.google.common.base.Joiner;
-
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
+import java.util.logging.Logger;
+
+import com.exasol.adapter.AdapterException;
+import com.exasol.adapter.dialects.*;
+import com.exasol.adapter.metadata.*;
+import com.google.common.base.Joiner;
/**
- * TODO Find good solutions to handle tables with unsupported data types, or tables that generate exceptions. Ideas: Skip such tables by adding a boolean property like IGNORE_INVALID_TABLES.
+ * TODO Find good solutions to handle tables with unsupported data types, or
+ * tables that generate exceptions. Ideas: Skip such tables by adding a boolean
+ * property like IGNORE_INVALID_TABLES.
*/
public class JdbcMetadataReader {
+ private static final Logger LOGGER = Logger.getLogger(JdbcMetadataReader.class.getName());
- public static SchemaMetadata readRemoteMetadata(String connectionString,
- String user,
- String password,
- String catalog,
- String schema,
- List tableFilter,
- SqlDialects dialects,
- String dialectName,
- JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException, AdapterException {
+ public static SchemaMetadata readRemoteMetadata(final String connectionString, final String user,
+ final String password, String catalog, String schema, final List tableFilter,
+ final String dialectName, final JdbcAdapterProperties.ExceptionHandlingMode exceptionMode)
+ throws SQLException, AdapterException {
assert (catalog != null);
assert (schema != null);
try {
- Connection conn = establishConnection(connectionString, user, password);
- DatabaseMetaData dbMeta = conn.getMetaData();
+ final Connection conn = establishConnection(connectionString, user, password);
+ final DatabaseMetaData dbMeta = conn.getMetaData();
- // Retrieve relevant parts of DatabaseMetadata. Will be cached in adapternotes of the schema.
- SchemaAdapterNotes schemaAdapterNotes = new SchemaAdapterNotes(
- dbMeta.getCatalogSeparator(),
- dbMeta.getIdentifierQuoteString(),
- dbMeta.storesLowerCaseIdentifiers(),
- dbMeta.storesUpperCaseIdentifiers(),
- dbMeta.storesMixedCaseIdentifiers(),
- dbMeta.supportsMixedCaseIdentifiers(),
- dbMeta.storesLowerCaseQuotedIdentifiers(),
- dbMeta.storesUpperCaseQuotedIdentifiers(),
- dbMeta.storesMixedCaseQuotedIdentifiers(),
- dbMeta.supportsMixedCaseQuotedIdentifiers(),
- dbMeta.nullsAreSortedAtEnd(),
- dbMeta.nullsAreSortedAtStart(),
- dbMeta.nullsAreSortedHigh(),
- dbMeta.nullsAreSortedLow());
+ // Retrieve relevant parts of DatabaseMetadata. Will be cached in adapternotes
+ // of the schema.
+ final SchemaAdapterNotes schemaAdapterNotes = new SchemaAdapterNotes(dbMeta.getCatalogSeparator(),
+ dbMeta.getIdentifierQuoteString(), dbMeta.storesLowerCaseIdentifiers(),
+ dbMeta.storesUpperCaseIdentifiers(), dbMeta.storesMixedCaseIdentifiers(),
+ dbMeta.supportsMixedCaseIdentifiers(), dbMeta.storesLowerCaseQuotedIdentifiers(),
+ dbMeta.storesUpperCaseQuotedIdentifiers(), dbMeta.storesMixedCaseQuotedIdentifiers(),
+ dbMeta.supportsMixedCaseQuotedIdentifiers(), dbMeta.nullsAreSortedAtEnd(),
+ dbMeta.nullsAreSortedAtStart(), dbMeta.nullsAreSortedHigh(), dbMeta.nullsAreSortedLow());
- SqlDialect dialect = dialects.getDialectByName(dialectName, new SqlDialectContext(schemaAdapterNotes));
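+            // Resolve the dialect by name through the SqlDialects registry; the supported
+            // dialect classes are listed in the sql_dialects.properties resource added in this change.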
+ final SqlDialect dialect = SqlDialects.getInstance().getDialectInstanceForNameWithContext(dialectName,
+ new SqlDialectContext(schemaAdapterNotes));
catalog = findCatalog(catalog, dbMeta, dialect);
schema = findSchema(schema, dbMeta, dialect);
-            List<TableMetadata> tables = findTables(catalog, schema, tableFilter, dbMeta, dialect, exceptionMode);
+            final List<TableMetadata> tables = findTables(catalog, schema, tableFilter, dbMeta, dialect, exceptionMode);
conn.close();
return new SchemaMetadata(SchemaAdapterNotes.serialize(schemaAdapterNotes), tables);
- } catch (SQLException e) {
+ } catch (final SQLException e) {
e.printStackTrace();
throw e;
}
}
- private static Connection establishConnection(String connectionString, String user, String password) throws SQLException {
- System.out.println("conn: " + connectionString);
-
- java.util.Properties info = new java.util.Properties();
+ private static Connection establishConnection(final String connectionString, final String user,
+ final String password) throws SQLException {
+        LOGGER.fine(() -> "Establishing connection with parameters: " + connectionString);
+ final java.util.Properties info = new java.util.Properties();
if (user != null) {
info.put("user", user);
}
@@ -79,8 +68,7 @@ private static Connection establishConnection(String connectionString, String us
if (KerberosUtils.isKerberosAuth(password)) {
try {
KerberosUtils.configKerberos(user, password);
- }
- catch (Exception e) {
+ } catch (final Exception e) {
e.printStackTrace();
throw new RuntimeException("Error configuring Kerberos: " + e.getMessage(), e);
}
@@ -88,23 +76,24 @@ private static Connection establishConnection(String connectionString, String us
return DriverManager.getConnection(connectionString, info);
}
- private static String findCatalog(String catalog, DatabaseMetaData dbMeta, SqlDialect dialect) throws SQLException, AdapterException {
+ private static String findCatalog(final String catalog, final DatabaseMetaData dbMeta, final SqlDialect dialect)
+ throws SQLException, AdapterException {
boolean foundCatalog = false;
String curCatalog = "";
int numCatalogs = 0;
-        List<String> allCatalogs = new ArrayList<>();
+        final List<String> allCatalogs = new ArrayList<>();
ResultSet res = null;
try {
res = dbMeta.getCatalogs();
while (res.next()) {
- curCatalog = res.getString("TABLE_CAT"); // EXA_DB in case of EXASOL
+ curCatalog = res.getString("TABLE_CAT"); // EXA_DB in case of EXASOL
allCatalogs.add(curCatalog);
if (curCatalog.equals(catalog)) {
foundCatalog = true;
}
- ++ numCatalogs;
+ ++numCatalogs;
}
- } catch (Exception ex) {
+ } catch (final Exception ex) {
if (dialect.supportsJdbcCatalogs() == SqlDialect.SchemaOrCatalogSupport.SUPPORTED) {
throw new RuntimeException("Unexpected exception when accessing the catalogs: " + ex.getMessage(), ex);
} else if (dialect.supportsJdbcCatalogs() == SqlDialect.SchemaOrCatalogSupport.UNSUPPORTED) {
@@ -112,17 +101,20 @@ private static String findCatalog(String catalog, DatabaseMetaData dbMeta, SqlDi
ex.printStackTrace();
return null;
} else {
- // We don't know if system supports catalogs. If user specified an catalog, we have a problem, otherwise we ignore the error
+                // We don't know whether the system supports catalogs. If the user specified a
+                // catalog, we have a problem; otherwise we ignore the error.
if (!catalog.isEmpty()) {
- throw new RuntimeException("Unexpected exception when accessing the catalogs: " + ex.getMessage(), ex);
+ throw new RuntimeException("Unexpected exception when accessing the catalogs: " + ex.getMessage(),
+ ex);
} else {
ex.printStackTrace();
return null;
}
}
} finally {
- if(res != null)
- res.close();
+ if (res != null) {
+ res.close();
+ }
}
if (dialect.supportsJdbcCatalogs() == SqlDialect.SchemaOrCatalogSupport.SUPPORTED
|| dialect.supportsJdbcCatalogs() == SqlDialect.SchemaOrCatalogSupport.UNKNOWN) {
@@ -131,43 +123,50 @@ private static String findCatalog(String catalog, DatabaseMetaData dbMeta, SqlDi
} else {
if (catalog.isEmpty()) {
if (dialect.supportsJdbcCatalogs() == SqlDialect.SchemaOrCatalogSupport.SUPPORTED) {
- throw new AdapterException("You have to specify a catalog. Available catalogs: " + Joiner.on(", ").join(allCatalogs));
+ throw new AdapterException("You have to specify a catalog. Available catalogs: "
+ + Joiner.on(", ").join(allCatalogs));
} else {
if (numCatalogs == 0) {
return null;
} else {
- throw new AdapterException("You have to specify a catalog. Available catalogs: " + Joiner.on(", ").join(allCatalogs));
+ throw new AdapterException("You have to specify a catalog. Available catalogs: "
+ + Joiner.on(", ").join(allCatalogs));
}
}
} else {
- throw new AdapterException("Catalog " + catalog + " does not exist. Available catalogs: " + Joiner.on(", ").join(allCatalogs));
+ throw new AdapterException("Catalog " + catalog + " does not exist. Available catalogs: "
+ + Joiner.on(", ").join(allCatalogs));
}
}
} else {
- assert(dialect.supportsJdbcCatalogs() == SqlDialect.SchemaOrCatalogSupport.UNSUPPORTED);
+ assert (dialect.supportsJdbcCatalogs() == SqlDialect.SchemaOrCatalogSupport.UNSUPPORTED);
if (catalog.isEmpty()) {
if (numCatalogs == 0) {
return null;
- } else if (numCatalogs == 1) {
- // Take the one and only catalog (in case of EXASOL this is always EXA_DB). Returning null would probably also work fine.
+ } else if (numCatalogs == 1) {
+ // Take the one and only catalog (in case of EXASOL this is always EXA_DB).
+ // Returning null would probably also work fine.
return curCatalog;
} else {
- throw new AdapterException("The data source is not expected to support catalogs, but has " + numCatalogs + " catalogs: " + Joiner.on(", ").join(allCatalogs));
+ throw new AdapterException("The data source is not expected to support catalogs, but has "
+ + numCatalogs + " catalogs: " + Joiner.on(", ").join(allCatalogs));
}
} else {
- throw new AdapterException("You specified a catalog, however the data source does not support the concept of catalogs.");
+ throw new AdapterException(
+ "You specified a catalog, however the data source does not support the concept of catalogs.");
}
}
}
- private static String findSchema(String schema, DatabaseMetaData dbMeta, SqlDialect dialect) throws SQLException, AdapterException {
+ private static String findSchema(final String schema, final DatabaseMetaData dbMeta, final SqlDialect dialect)
+ throws SQLException, AdapterException {
// Check if schema exists
boolean foundSchema = false;
-        List<String> allSchemas = new ArrayList<>();
+        final List<String> allSchemas = new ArrayList<>();
int numSchemas = 0;
String curSchema = "";
ResultSet schemas = null;
-
+
try {
schemas = dbMeta.getSchemas();
while (schemas.next()) {
@@ -178,7 +177,7 @@ private static String findSchema(String schema, DatabaseMetaData dbMeta, SqlDial
}
++numSchemas;
}
- } catch (Exception ex) {
+ } catch (final Exception ex) {
if (dialect.supportsJdbcSchemas() == SqlDialect.SchemaOrCatalogSupport.SUPPORTED) {
throw new RuntimeException("Unexpected exception when accessing the schema: " + ex.getMessage(), ex);
} else if (dialect.supportsJdbcSchemas() == SqlDialect.SchemaOrCatalogSupport.UNSUPPORTED) {
@@ -188,17 +187,19 @@ private static String findSchema(String schema, DatabaseMetaData dbMeta, SqlDial
} else {
// We don't know if system supports schemas.
if (!schema.isEmpty()) {
- throw new RuntimeException("Unexpected exception when accessing the schemas: " + ex.getMessage(), ex);
+ throw new RuntimeException("Unexpected exception when accessing the schemas: " + ex.getMessage(),
+ ex);
} else {
ex.printStackTrace();
return null;
}
}
} finally {
- if (schemas != null)
- schemas.close();
+ if (schemas != null) {
+ schemas.close();
+ }
}
-
+
if (dialect.supportsJdbcSchemas() == SqlDialect.SchemaOrCatalogSupport.SUPPORTED
|| dialect.supportsJdbcSchemas() == SqlDialect.SchemaOrCatalogSupport.UNKNOWN) {
if (foundSchema) {
@@ -206,65 +207,69 @@ private static String findSchema(String schema, DatabaseMetaData dbMeta, SqlDial
} else {
if (schema.isEmpty()) {
if (dialect.supportsJdbcSchemas() == SqlDialect.SchemaOrCatalogSupport.SUPPORTED) {
- throw new AdapterException("You have to specify a schema. Available schemas: " + Joiner.on(", ").join(allSchemas));
+ throw new AdapterException(
+ "You have to specify a schema. Available schemas: " + Joiner.on(", ").join(allSchemas));
} else {
if (numSchemas == 0) {
return null;
} else {
- throw new AdapterException("You have to specify a schema. Available schemas: " + Joiner.on(", ").join(allSchemas));
+ throw new AdapterException("You have to specify a schema. Available schemas: "
+ + Joiner.on(", ").join(allSchemas));
}
}
} else {
- throw new AdapterException("Schema " + schema + " does not exist. Available schemas: " + Joiner.on(", ").join(allSchemas));
+ throw new AdapterException("Schema " + schema + " does not exist. Available schemas: "
+ + Joiner.on(", ").join(allSchemas));
}
}
} else {
- assert(dialect.supportsJdbcSchemas() == SqlDialect.SchemaOrCatalogSupport.UNSUPPORTED);
+ assert (dialect.supportsJdbcSchemas() == SqlDialect.SchemaOrCatalogSupport.UNSUPPORTED);
if (schema.isEmpty()) {
if (numSchemas == 0) {
return null;
- } else if (numSchemas == 1) {
+ } else if (numSchemas == 1) {
// Take the one and only schema. Returning null would probably also work fine.
return curSchema;
} else {
- throw new AdapterException("The data source is not expected to support schemas, but has " + numSchemas + " schemas: " + Joiner.on(", ").join(allSchemas));
+ throw new AdapterException("The data source is not expected to support schemas, but has "
+ + numSchemas + " schemas: " + Joiner.on(", ").join(allSchemas));
}
} else {
- throw new AdapterException("You specified a schema, however the data source does not support the concept of schemas.");
+ throw new AdapterException(
+ "You specified a schema, however the data source does not support the concept of schemas.");
}
}
}
-    private static List<TableMetadata> findTables(String catalog, String schema, List<String> tableFilter,
- DatabaseMetaData dbMeta, SqlDialect dialect,
- JdbcAdapterProperties.ExceptionHandlingMode exceptionMode)
- throws SQLException {
-        List<TableMetadata> tables = new ArrayList<>();
-
- String[] supportedTableTypes = {"TABLE", "VIEW", "SYSTEM TABLE"};
-
- ResultSet resTables = dbMeta.getTables(catalog, schema, null, supportedTableTypes);
- List< SqlDialect.MappedTable> tablesMapped = new ArrayList<>();
-        //List<String> tableComments = new ArrayList<>();
+    private static List<TableMetadata> findTables(final String catalog, final String schema,
+            final List<String> tableFilter, final DatabaseMetaData dbMeta, final SqlDialect dialect,
+ final JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException {
+        final List<TableMetadata> tables = new ArrayList<>();
+
+ final String[] supportedTableTypes = { "TABLE", "VIEW", "SYSTEM TABLE" };
+
+ final ResultSet resTables = dbMeta.getTables(catalog, schema, null, supportedTableTypes);
+        final List<SqlDialect.MappedTable> tablesMapped = new ArrayList<>();
+        // List<String> tableComments = new ArrayList<>();
while (resTables.next()) {
- SqlDialect.MappedTable mappedTable = dialect.mapTable(resTables);
+ final SqlDialect.MappedTable mappedTable = dialect.mapTable(resTables);
if (!mappedTable.isIgnored()) {
- tablesMapped.add(mappedTable);
- //tableComments.add(mappedTable.getTableComment());
+ tablesMapped.add(mappedTable);
+ // tableComments.add(mappedTable.getTableComment());
}
}
-
+
resTables.close();
// Columns
-        for (int i=0; i<tablesMapped.size(); ++i) {
-            SqlDialect.MappedTable table = tablesMapped.get(i);
+        for (final SqlDialect.MappedTable table : tablesMapped) {
+            LOGGER.finest(() -> "Processing columns for table \"" + table + "\"");
try {
if (!tableFilter.isEmpty()) {
boolean isInFilter = false;
if (identifiersAreCaseInsensitive(dialect)) {
- for (String curTable : tableFilter) {
+ for (final String curTable : tableFilter) {
if (curTable.equalsIgnoreCase(table.getTableName())) {
isInFilter = true;
}
@@ -273,44 +278,44 @@ private static List findTables(String catalog, String schema, Lis
isInFilter = tableFilter.contains(table.getTableName());
}
if (!isInFilter) {
- System.out.println("Skip table: " + table);
+ LOGGER.finest(() -> "Skipping table \"" + table + "\"");
continue;
}
}
-            List<ColumnMetadata> columns = readColumns(dbMeta, catalog, schema, table.getOriginalTableName(),
+            final List<ColumnMetadata> columns = readColumns(dbMeta, catalog, schema, table.getOriginalTableName(),
dialect, exceptionMode);
if (columns != null) {
tables.add(new TableMetadata(table.getTableName(), "", columns, table.getTableComment()));
}
- } catch (Exception ex) {
+ } catch (final Exception ex) {
throw new RuntimeException("Exception for table " + table.getOriginalTableName(), ex);
}
}
return tables;
}
- private static boolean identifiersAreCaseInsensitive(SqlDialect dialect) {
+ private static boolean identifiersAreCaseInsensitive(final SqlDialect dialect) {
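+        // Identifiers are treated as case insensitive when quoted and unquoted identifiers share the same handling and that handling is not INTERPRET_CASE_SENSITIVE.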
return (dialect.getQuotedIdentifierHandling() == dialect.getUnquotedIdentifierHandling())
&& dialect.getQuotedIdentifierHandling() != SqlDialect.IdentifierCaseHandling.INTERPRET_CASE_SENSITIVE;
}
-    private static List<ColumnMetadata> readColumns(DatabaseMetaData dbMeta, String catalog, String schema,
- String table, SqlDialect dialect,
- JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException {
-        List<ColumnMetadata> columns = new ArrayList<>();
+    private static List<ColumnMetadata> readColumns(final DatabaseMetaData dbMeta, final String catalog,
+ final String schema, final String table, final SqlDialect dialect,
+ final JdbcAdapterProperties.ExceptionHandlingMode exceptionMode) throws SQLException {
+        final List<ColumnMetadata> columns = new ArrayList<>();
try {
- ResultSet cols = dbMeta.getColumns(catalog, schema, table, null);
+ final ResultSet cols = dbMeta.getColumns(catalog, schema, table, null);
while (cols.next()) {
columns.add(dialect.mapColumn(cols));
}
if (columns.isEmpty()) {
- System.out.println("Warning: Found a table without columns: " + table);
+ LOGGER.warning(() -> "Found a table \"" + table + "\" that has no columns.");
}
cols.close();
- } catch (SQLException exception) {
+ } catch (final SQLException exception) {
dialect.handleException(exception, exceptionMode);
return null;
}
return columns;
}
-}
+}
\ No newline at end of file
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/logging/CompactFormatter.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/logging/CompactFormatter.java
new file mode 100644
index 000000000..8990a46c1
--- /dev/null
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/logging/CompactFormatter.java
@@ -0,0 +1,65 @@
+package com.exasol.logging;
+
+import java.time.Instant;
+import java.time.ZoneId;
+import java.time.format.DateTimeFormatter;
+import java.util.logging.Formatter;
+import java.util.logging.LogRecord;
+
+/**
+ * Formatter for compact log messages.
+ */
+public class CompactFormatter extends Formatter {
+ private static final String LOG_LEVEL_FORMAT = "%-8s";
+ private final DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
+
+ /**
+     * Formats a log record in a compact manner.
+ *
+ * The parts of the package name between the dots are abbreviated with their
+ * first letter. Timestamps are displayed as 24h UTC+0.
+ *
+ * yyyy-MM-dd HH:mm:ss.SSS LEVEL [c.e.ClassName] The message.
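+     *
+     * For example (illustrative values), an INFO record logged from
+     * com.exasol.adapter.jdbc.JdbcMetadataReader is rendered as:
+     * 2018-01-01 12:00:00.000 INFO    [c.e.a.j.JdbcMetadataReader] The message.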
+ */
+ @Override
+ public String format(final LogRecord record) {
+ final StringBuilder builder = new StringBuilder();
+ builder.append(formatTimestamp(record.getMillis()));
+ builder.append(" ");
+ builder.append(String.format(LOG_LEVEL_FORMAT, record.getLevel()));
+ appendClassName(record.getSourceClassName(), builder);
+ builder.append(record.getMessage());
+ builder.append(System.lineSeparator());
+ return builder.toString();
+ }
+
+ private void appendClassName(final String className, final StringBuilder builder) {
+ if (className != null && !className.isEmpty()) {
+ builder.append("[");
+ appendNonEmptyClassName(className, builder);
+ builder.append("] ");
+ }
+ }
+
+ private void appendNonEmptyClassName(final String className, final StringBuilder builder) {
+ int lastPosition = -1;
+ int position = className.indexOf(".");
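+        // Abbreviate each package segment to its first letter; the simple class name after the last dot is kept in full.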
+ while (position > 0) {
+ final String characterAfterDot = className.substring(lastPosition + 1, lastPosition + 2);
+ if (!characterAfterDot.equals(".")) {
+ builder.append(characterAfterDot);
+ }
+ builder.append(".");
+ lastPosition = position;
+ position = className.indexOf(".", position + 1);
+ }
+ if (lastPosition < className.length()) {
+ builder.append(className.substring(lastPosition + 1));
+ }
+ }
+
+ private String formatTimestamp(final long millis) {
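+        // Render the epoch milliseconds in UTC ("Z") so the timestamp does not depend on the local time zone.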
+ final Instant instant = Instant.ofEpochMilli(millis);
+ return this.dateTimeFormatter.format(instant.atZone(ZoneId.of("Z")));
+ }
+}
\ No newline at end of file
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/utils/UdfUtils.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/utils/UdfUtils.java
index 480f263cb..5d37ccc15 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/utils/UdfUtils.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/java/com/exasol/utils/UdfUtils.java
@@ -1,25 +1,24 @@
package com.exasol.utils;
-import java.io.PrintStream;
+import java.io.OutputStream;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.net.Socket;
public class UdfUtils {
-
- public static void tryAttachToOutputService(String ip, int port) {
+ public static OutputStream tryAttachToOutputService(final String ip, final int port) {
// Start before: udf_debug.py
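+        // Return the socket's output stream on success, or null when the debug output service is not reachable.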
try {
@SuppressWarnings("resource")
- Socket socket = new Socket(ip, port);
- PrintStream out = new PrintStream(socket.getOutputStream(), true);
- System.setOut(out);
- System.out.println("\n\n\nAttached to outputservice");
- } catch (Exception ex) {} // could not start output server}
+ final Socket socket = new Socket(ip, port);
+ return socket.getOutputStream();
+ } catch (final Exception ex) {
+ return null;
+        } // could not start output server
}
- public static String traceToString(Exception ex) {
- StringWriter errors = new StringWriter();
+ public static String traceToString(final Exception ex) {
+ final StringWriter errors = new StringWriter();
ex.printStackTrace(new PrintWriter(errors));
return errors.toString();
}
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/main/resources/sql_dialects.properties b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/resources/sql_dialects.properties
new file mode 100644
index 000000000..0b10e0766
--- /dev/null
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/main/resources/sql_dialects.properties
@@ -0,0 +1,13 @@
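+# SQL dialect implementations available to the JDBC adapter, given as a single
+# comma-separated list of fully qualified class names (trailing backslashes are
+# standard .properties line continuations).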
+com.exasol.adapter.dialects.supported=\
+com.exasol.adapter.dialects.impl.DB2SqlDialect,\
+com.exasol.adapter.dialects.impl.ExasolSqlDialect,\
+com.exasol.adapter.dialects.impl.GenericSqlDialect,\
+com.exasol.adapter.dialects.impl.HiveSqlDialect,\
+com.exasol.adapter.dialects.impl.ImpalaSqlDialect,\
+com.exasol.adapter.dialects.impl.MysqlSqlDialect,\
+com.exasol.adapter.dialects.impl.OracleSqlDialect,\
+com.exasol.adapter.dialects.impl.PostgreSQLSqlDialect,\
+com.exasol.adapter.dialects.impl.RedshiftSqlDialect,\
+com.exasol.adapter.dialects.impl.SqlServerSqlDialect,\
+com.exasol.adapter.dialects.impl.SybaseSqlDialect,\
+com.exasol.adapter.dialects.impl.TeradataSqlDialect
\ No newline at end of file
diff --git a/jdbc-adapter/virtualschema-jdbc-adapter/src/test/java/com/exasol/adapter/dialects/AbstractIntegrationTest.java b/jdbc-adapter/virtualschema-jdbc-adapter/src/test/java/com/exasol/adapter/dialects/AbstractIntegrationTest.java
index 14123e12c..5e0561ddc 100644
--- a/jdbc-adapter/virtualschema-jdbc-adapter/src/test/java/com/exasol/adapter/dialects/AbstractIntegrationTest.java
+++ b/jdbc-adapter/virtualschema-jdbc-adapter/src/test/java/com/exasol/adapter/dialects/AbstractIntegrationTest.java
@@ -1,7 +1,18 @@
package com.exasol.adapter.dialects;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
import java.io.FileNotFoundException;
-import java.sql.*;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.GregorianCalendar;
@@ -9,8 +20,6 @@
import java.util.regex.Matcher;
import java.util.regex.Pattern;
-import static org.junit.Assert.*;
-
public class AbstractIntegrationTest {
private static Connection connection;
@@ -25,9 +34,10 @@ public static IntegrationTestConfig getConfig() throws FileNotFoundException {
}
/**
- * You have to call this method with a connection to your EXASOL database during the @BeforeClass method of your integration test
+ * You have to call this method with a connection to your EXASOL database during
+ * the @BeforeClass method of your integration test
*/
- public static void setConnection(Connection connection) {
+ public static void setConnection(final Connection connection) {
AbstractIntegrationTest.connection = connection;
}
@@ -40,49 +50,54 @@ private static void checkConnection() {
}
public static Connection connectToExa() throws ClassNotFoundException, SQLException, FileNotFoundException {
- String user = config.getExasolUser();
- String password = config.getExasolPassword();
+ final String user = config.getExasolUser();
+ final String password = config.getExasolPassword();
return connectToExa(user, password);
}
- public static Connection connectToExa(String user, String password) throws ClassNotFoundException, SQLException, FileNotFoundException {
- String exaAddress = config.getExasolAddress();
+ public static Connection connectToExa(final String user, final String password)
+ throws ClassNotFoundException, SQLException, FileNotFoundException {
+ final String exaAddress = config.getExasolAddress();
Class.forName("com.exasol.jdbc.EXADriver");
return DriverManager.getConnection("jdbc:exa:" + exaAddress, user, password);
}
- public ResultSet executeQuery(Connection conn, String query) throws SQLException {
+ public ResultSet executeQuery(final Connection conn, final String query) throws SQLException {
return conn.createStatement().executeQuery(query);
}
- public ResultSet executeQuery(String query) throws SQLException {
+ public ResultSet executeQuery(final String query) throws SQLException {
checkConnection();
return executeQuery(connection, query);
}
- public int executeUpdate(String query) throws SQLException {
+ public int executeUpdate(final String query) throws SQLException {
checkConnection();
return connection.createStatement().executeUpdate(query);
}
-    public static void createJDBCAdapter(Connection conn, List<String> jarIncludes) throws SQLException {
- Statement stmt = conn.createStatement();
+
+    public static void createJDBCAdapter(final Connection conn, final List<String> jarIncludes) throws SQLException {
+ final Statement stmt = conn.createStatement();
stmt.execute("CREATE SCHEMA IF NOT EXISTS ADAPTER");
String sql = "CREATE OR REPLACE JAVA ADAPTER SCRIPT ADAPTER.JDBC_ADAPTER AS\n";
sql += " %scriptclass com.exasol.adapter.jdbc.JdbcAdapter;\n";
- for (String includePath : jarIncludes) {
+ for (final String includePath : jarIncludes) {
sql += " %jar " + includePath + ";\n";
}
- //sql += " %jvmoption -Xms64m -Xmx64m;";
+ // sql += " %jvmoption -Xms64m -Xmx64m;";
sql += "/";
stmt.execute(sql);
}
-    public static void createJDBCAdapter(List<String> jarIncludes) throws SQLException {
+    public static void createJDBCAdapter(final List<String> jarIncludes) throws SQLException {
checkConnection();
createJDBCAdapter(connection, jarIncludes);
}
- public static void createVirtualSchema(Connection conn, String virtualSchemaName, String dialect, String remoteCatalog, String remoteSchema, String connectionName, String user, String password, String adapter, String remoteConnectionString, boolean isLocal, String debugAddress, String tableFilter, String suffix) throws SQLException {
+ public static void createVirtualSchema(final Connection conn, final String virtualSchemaName, final String dialect,
+ final String remoteCatalog, final String remoteSchema, final String connectionName, final String user,
+ final String password, final String adapter, final String remoteConnectionString, final boolean isLocal,
+ final String debugAddress, final String tableFilter, final String suffix) throws SQLException {
removeVirtualSchema(conn, virtualSchemaName);
String sql = "CREATE VIRTUAL SCHEMA " + virtualSchemaName;
sql += " USING " + adapter;
@@ -113,6 +128,7 @@ public static void createVirtualSchema(Connection conn, String virtualSchemaName
if (!debugAddress.isEmpty()) {
sql += " DEBUG_ADDRESS='" + debugAddress + "'";
}
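+        // Always request the most verbose adapter logging so integration test runs capture the full trace.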
+ sql += " LOG_LEVEL='ALL'";
if (!tableFilter.isEmpty()) {
sql += " TABLE_FILTER='" + tableFilter + "'";
}
@@ -122,27 +138,34 @@ public static void createVirtualSchema(Connection conn, String virtualSchemaName
conn.createStatement().execute(sql);
}
- public static void createVirtualSchema(String virtualSchemaName, String dialect, String remoteCatalog, String remoteSchema, String connectionName, String user, String password, String adapter, String remoteConnectionString, boolean isLocal, String debugAddress, String tableFilter, String suffix) throws SQLException {
+ public static void createVirtualSchema(final String virtualSchemaName, final String dialect,
+ final String remoteCatalog, final String remoteSchema, final String connectionName, final String user,
+ final String password, final String adapter, final String remoteConnectionString, final boolean isLocal,
+ final String debugAddress, final String tableFilter, final String suffix) throws SQLException {
checkConnection();
- createVirtualSchema(connection, virtualSchemaName, dialect, remoteCatalog, remoteSchema, connectionName, user, password, adapter, remoteConnectionString, isLocal, debugAddress, tableFilter, suffix);
+ createVirtualSchema(connection, virtualSchemaName, dialect, remoteCatalog, remoteSchema, connectionName, user,
+ password, adapter, remoteConnectionString, isLocal, debugAddress, tableFilter, suffix);
}
- public static void createConnection(Connection conn, String connectionName, String connectionString, String user, String password) throws SQLException {
+ public static void createConnection(final Connection conn, final String connectionName,
+ final String connectionString, final String user, final String password) throws SQLException {
removeConnection(conn, connectionName);
String sql = "CREATE CONNECTION " + connectionName;
sql += " TO '" + connectionString + "'";
sql += " USER '" + user + "'";
- sql += " IDENTIFIED BY '" + password +"'";
+ sql += " IDENTIFIED BY '" + password + "'";
conn.createStatement().execute(sql);
}
- public static void createConnection(String connectionName, String connectionString, String user, String password) throws SQLException {
+ public static void createConnection(final String connectionName, final String connectionString, final String user,
+ final String password) throws SQLException {
checkConnection();
createConnection(connection, connectionName, connectionString, user, password);
}
- public static String getPortOfConnectedDatabase(Connection conn) throws SQLException {
- ResultSet result = conn.createStatement().executeQuery("SELECT PARAM_VALUE FROM EXA_COMMANDLINE where PARAM_NAME = 'port'");
+ public static String getPortOfConnectedDatabase(final Connection conn) throws SQLException {
+ final ResultSet result = conn.createStatement()
+ .executeQuery("SELECT PARAM_VALUE FROM EXA_COMMANDLINE where PARAM_NAME = 'port'");
result.next();
return result.getString("PARAM_VALUE");
}
@@ -152,37 +175,42 @@ public static String getPortOfConnectedDatabase() throws SQLException {
return getPortOfConnectedDatabase(connection);
}
- public static void matchNextRow(ResultSet result, Object... expectedElements) throws SQLException {
+ public static void matchNextRow(final ResultSet result, final Object... expectedElements) throws SQLException {
result.next();
- assertEquals(getDiffWithTypes(Arrays.asList(expectedElements), rowToObject(result)), Arrays.asList(expectedElements), rowToObject(result));
+ assertEquals(getDiffWithTypes(Arrays.asList(expectedElements), rowToObject(result)),
+ Arrays.asList(expectedElements), rowToObject(result));
}
- public static void matchLastRow(ResultSet result, Object... expectedElements) throws SQLException {
+ public static void matchLastRow(final ResultSet result, final Object... expectedElements) throws SQLException {
matchNextRow(result, expectedElements);
assertFalse(result.next());
}
- private static void removeConnection(Connection conn, String connectionName) throws SQLException {
- Statement stmt = conn.createStatement();
- String sql = "DROP CONNECTION IF EXISTS " + connectionName;
+ private static void removeConnection(final Connection conn, final String connectionName) throws SQLException {
+ final Statement stmt = conn.createStatement();
+ final String sql = "DROP CONNECTION IF EXISTS " + connectionName;
stmt.execute(sql);
}
- private static void removeVirtualSchema(Connection conn, String schemaName) throws SQLException {
- Statement stmt = conn.createStatement();
- String sql = "DROP VIRTUAL SCHEMA IF EXISTS " + schemaName + " CASCADE";
+ private static void removeVirtualSchema(final Connection conn, final String schemaName) throws SQLException {
+ final Statement stmt = conn.createStatement();
+ final String sql = "DROP VIRTUAL SCHEMA IF EXISTS " + schemaName + " CASCADE";
stmt.execute(sql);
}
/**
- * This method shows the diff with the types. Normally, only the String representation is shown in the diff, so you cannot distinguish between (int)1 and (long)1.
+ * This method shows the diff with the types. Normally, only the String
+ * representation is shown in the diff, so you cannot distinguish between (int)1
+ * and (long)1.
*/
- private static String getDiffWithTypes(List