* Sybase Virtual Schema (#32)
* Use SqlServer dialect as starting point
* Add sybase dialect and override for ORDER BY
* Setup Sybase integration tests: includes order by tests
* Add WHERE test
* Date/time types: handle conversion and add tests
* Add sybase integer datatype tests
* Add a whole bunch of data type tests
* Add binary, varbinary, image and bit tests
* Add Sybase to the supported dialects
* Pmi 73 take over sybase from hendrik (#33)
* PMI-16: Set fixed database version for CI build. Cleaned up integration test script.
* PMI-16: Fixed quoting.
* PMI-16: Corrected docker image version.
* PMI-76: Cherry-picked integration test script improvements.
* Pmi 74 improve travis ci build (#35)
* PMI-74: Used "mktemp". Improved configurability and readability.
* PMI-74: Parameterized more hard-coded paths.
* PMI-74: Change temp dir to rooted temp dir.
* PMI-74: Improved readability by splitting file into functions.
* PMI-74: Travis badge now points to default branch.
* Update README.md
* Added DBeaver files to ".gitignore"
* Feature/pmi 69 invert dialect dependencies (#31)
* refactor(PMI-69): Turned hard-coded SQL dialect list into registry.
* feat(PMI-16): Dialect registry now uses scanning.
* fix(PMI-16): Explicitly set report output encoding to avoid warning.
* PMI-16: Added "local" directory to .gitignore
* PMI-16: Got new SQL dialect registry running and improved logging.
* PMI-16: Got properties-controlled dialects registry running.
* PMI-16: Set fixed database version for CI build. Cleaned up integration test script.
* PMI-16: Fixed quoting.
* PMI-16: Corrected docker image version.
* PMI-69: Fixed review findings of Andre Hacker.
* PMI-69: Removed superfluous "@Override".
* Pmi 75 sybase integration test (#36)
* Rename EXASOL to Exasol
* Update supported-dialects.md
* SQL generation to use SQL standard "<>" for not-equal predicate
* Support for native import from Oracle (issue #26) (#27)
* Increment to version 1.0.1 for releasing
* Increment version to 6.0.2-SNAPSHOT for next development iteration
* Readme fixes (#29)
* Display SQL statements, etc. in monospace and clean up syntax
* Add newline after heading
* Add script to run Exasol integration tests on Travis CI (#28): a new shell script `run_integration_tests.sh` executes the integration tests as defined in `integration-test-travis.yaml`. It uses the exasol/docker-db image to spin up an Exasol instance and execute the Exasol dialect integration tests. Travis CI automatically executes the test script for each new commit.
* refactor(PMI-69): Turned hard-coded SQL dialect list into registry.
* feat(PMI-16): Dialect registry now uses scanning.
* fix(PMI-16): Explicitly set report output encoding to avoid warning.
* PMI-16: Added "local" directory to .gitignore
* PMI-16: Got new SQL dialect registry running and improved logging.
* PMI-16: Got properties-controlled dialects registry running.
* PMI-16: Set fixed database version for CI build. Cleaned up integration test script.
* PMI-16: Fixed quoting.
* PMI-16: Corrected docker image version.
* Sybase Virtual Schema (#32)
* Use SqlServer dialect as starting point
* Add sybase dialect and override for ORDER BY
* Setup Sybase integration tests: includes order by tests
* Add WHERE test
* Date/time types: handle conversion and add tests
* Add sybase integer datatype tests
* Add a whole bunch of data type tests
* Add binary, varbinary, image and bit tests
* Add Sybase to the supported dialects
* Pmi 73 take over sybase from hendrik (#33)
* PMI-16: Set fixed database version for CI build. Cleaned up integration test script.
* PMI-16: Fixed quoting.
* PMI-16: Corrected docker image version.
* PMI-76: Cherry-picked integration test script improvements.
* PMI-74: Used "mktemp". Improved configurability and readability.
* PMI-74: Parameterized more hard-coded paths.
* PMI-74: Change temp dir to rooted temp dir.
* PMI-74: Improved readability by splitting file into functions.
* PMI-74: Travis badge now points to default branch.
* PMI-75: Adapted to new SQL dialect registry.
* Added local and Scripts to .gitignore
* Added .dbeaver* to .gitignore
* PMI-75: Cleaned up and split documentation.
* PMI-75: Fixed integration tests. Fixed database preparation scripts
* PMI-75: Refactored IntegrationTestSetup.java for better readability and to reduce static methods to a minimum.
* PMI-75: Got remote logging running.
* PMI-75: Improved log message formatting. Added unit tests for custom formatter.
* PMI-75: Set default log level explicitly.
* PMI-75: Added fallback to STDOUT in case socket output stream is not available.
* PMI-75: Improved documentation.
* PMI-75: Removed distracting introduction.
* Feature/pmi 91 update version to 1.1.0 (#37)
* Deleted `increment-version.sh` script
* Created `version.sh` script that has a verification and an updater mode
* Introduced `product.version` property in master `pom.xml`
* Used that property in all child-POM files
* Used `version.sh` to update documentation
* Added `version.sh verify` as build breaker in `.travis.yml`
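The `version.sh` script introduced in #37 is used both to update and to verify the product version. A minimal usage sketch, assuming the script lives in the repository root (the exact path and the updater mode's argument are not shown in the commit message):

```bash
# Verification mode: exits non-zero if POM files and documentation
# reference diverging versions; wired into .travis.yml as a build breaker.
./version.sh verify
```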
1 parent 9d4a955 · commit efee3c7 · 82 changed files with 6,895 additions and 3,740 deletions
`.gitignore`

```diff
@@ -15,4 +15,7 @@ dependency-reduced-pom.xml
 
 # Others
 .DS_Store
 *.swp
+**/local
+Scripts
+.dbeaver*
```
# Deploying the Adapter Step By Step

Run the following steps to deploy your adapter:

## Prerequisites

* Exasol Version 6.0 or later
* Advanced edition (which includes the ability to execute adapter scripts), or Free Small Business Edition
* Exasol must be able to connect to the host and port specified in the JDBC connection string. In case of problems you can use a [UDF to test the connectivity](https://www.exasol.com/support/browse/SOL-307); see the sketch after this list.
* If the JDBC driver requires Kerberos authentication (e.g. for Hive or Impala), the Exasol database will authenticate using a keytab file. Each Exasol node needs access to port 88 of the Kerberos KDC (key distribution center).
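A minimal connectivity-test UDF might look like the following. This is a sketch, not the UDF from the linked article: it assumes Python scalar scripts are enabled and that a plain TCP connect to the host and port is a sufficient check.

```sql
-- Hypothetical helper: returns 'OK' if a TCP connection to host:port succeeds.
CREATE OR REPLACE PYTHON SCALAR SCRIPT test_connectivity(host VARCHAR(256), port DECIMAL(5,0))
RETURNS VARCHAR(2000) AS
import socket

def run(ctx):
    try:
        sock = socket.create_connection((ctx.host, int(ctx.port)), timeout=5)
        sock.close()
        return 'OK'
    except Exception as exc:
        return 'FAILED: ' + str(exc)
/

SELECT test_connectivity('192.168.6.75', 10000);
```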
## Obtaining JAR Archives

First you have to obtain the so-called fat JAR (including all dependencies).

The easiest way is to download the JAR from the latest [release](https://github.com/Exasol/virtual-schemas/releases).

Alternatively you can clone the repository and build the JAR as follows:

```bash
git clone https://github.com/Exasol/virtual-schemas.git
cd virtual-schemas/jdbc-adapter/
mvn clean -DskipTests package
```

The resulting fat JAR is stored in `virtualschema-jdbc-adapter-dist/target/virtualschema-jdbc-adapter-dist-1.1.0.jar`.
## Uploading the Adapter JAR Archive

You have to upload the JAR of the adapter to a bucket of your choice in the Exasol bucket file system (BucketFS). This will allow using the JAR in the adapter script.

The following steps are required to upload a file to a bucket:

1. Make sure you have a bucket file system (BucketFS) and you know the port for either HTTP or HTTPS.

   This can be done in EXAOperation under "EXABuckets". E.g. the ID could be `bucketfs1` and the HTTP port 2580.

1. Check if you have a bucket in the BucketFS. Simply click on the name of the BucketFS in EXAOperation and add a bucket there, e.g. `bucket1`.

   Also make sure you know the write password. For simplicity we assume that the bucket is defined as a public bucket, i.e. it can be read by any script.

1. Now upload the file into this bucket, e.g. using curl (adapt the host name, BucketFS port, bucket name and bucket write password).

   ```bash
   curl -X PUT -T virtualschema-jdbc-adapter-dist/target/virtualschema-jdbc-adapter-dist-1.1.0.jar \
     http://w:[email protected]:2580/bucket1/virtualschema-jdbc-adapter-dist-1.1.0.jar
   ```

See chapter 3.6.4 "The synchronous cluster file system BucketFS" in the EXASolution User Manual for more details about BucketFS.
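To verify the upload, you can read the bucket contents back. A minimal sketch, assuming the public bucket and host from the example above (for a non-public bucket you would add the read password):

```bash
# GET on the bucket path lists the files it contains;
# the uploaded JAR should appear in the output.
curl http://192.168.6.75:2580/bucket1
```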
## Deploying JDBC Driver Files

You have to upload the JDBC driver files of your remote database **twice** (a sketch of the first upload follows this list):

* Upload all files of the JDBC driver into a bucket of your choice, so that they can be accessed from the adapter script.
  This happens the same way as described above for the adapter JAR. You can use the same bucket.
* Upload all files of the JDBC driver as a JDBC driver in EXAOperation
  - In EXAOperation go to Software -> JDBC Drivers
  - Add the JDBC driver by specifying the JDBC main class and the prefix of the JDBC connection string
  - Upload all files (one by one) to the newly added JDBC driver.

Note that some JDBC drivers consist of several files and that you have to upload all of them. To find out which JAR you need, consult the [supported dialects page](supported_sql_dialects.md).
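The bucket half of this upload follows the same curl pattern as the adapter JAR. A sketch, assuming a hypothetical Hive driver file name and the bucket from the example above:

```bash
# The file name is a placeholder; use the actual driver files listed
# on the supported dialects page for your data source.
curl -X PUT -T hive-jdbc-standalone.jar \
  http://w:[email protected]:2580/bucket1/hive-jdbc-standalone.jar
```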
## Deploying the Adapter Script

Then run the following SQL commands to deploy the adapter in the database:

```sql
-- The adapter is simply a script. It has to be stored in any regular schema.
CREATE SCHEMA adapter;

CREATE JAVA ADAPTER SCRIPT adapter.jdbc_adapter AS
  ...

  // This will add the adapter JAR to the classpath so that it can be used inside the adapter script
  // Replace the names of the BucketFS and the bucket with the ones you used.
  %jar /buckets/your-bucket-fs/your-bucket/virtualschema-jdbc-adapter-dist-1.1.0.jar;

  // You have to add all files of the data source JDBC driver here (e.g. Hive JDBC driver files)
  %jar /buckets/your-bucket-fs/your-bucket/name-of-data-source-jdbc-driver.jar;
  ...
```
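With the adapter script in place, the next step (outside this page excerpt) is to create a connection and a virtual schema that use it. A hedged sketch; the dialect name, connection string, credentials, and schema name are placeholder assumptions, not values from this commit:

```sql
-- Placeholder connection to the remote data source; adapt URL and credentials.
CREATE CONNECTION hive_conn
  TO 'jdbc:hive2://hive-host:10000'
  USER 'hive-user' IDENTIFIED BY 'hive-password';

-- Virtual schema backed by the adapter script deployed above.
CREATE VIRTUAL SCHEMA hive_reports
  USING adapter.jdbc_adapter
  WITH
    SQL_DIALECT     = 'HIVE'
    CONNECTION_NAME = 'HIVE_CONN'
    SCHEMA_NAME     = 'default';
```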