- [cos] Fixing Spark streaming write flows by adding support for object rename
- [bug] Fixing globber. The filter's accept method may skip the parent's trailing slash and thus reject certain objects.
- [bug] fixing Spark streaming write into COS
- [swift] code cleanup
- update HTTP libraries
- [cos] data cleanup for failed parts
- [cos] adding tests for data cleanup
- [cos] more unit tests
- [cos-ibm-sdk] support for token provided via URL
- [cos] improvements to fault tolerance algorithm
- fixes to globber with brackets
- [cos] array output stream; fixing package names
- support for bracket globber mode
- [test] adding COS tests for globber
- [test] refactoring
- Upgrade JOSS version from 0.10.1 to 0.10.2
- [cos] list with UTF-8 encoding to resolve issues with the '+' sign
- remove getFileStatus call within mkdirs
- update powermock dependency
- [bug] Additional globber fixes
- fixes to globber to bypass more patterns
- [bug] fixing partitioning that may fail with CSV files
- [swift] Allow configuring the TLS protocol version
- [cos] Ignore the exception if stocator.properties is not found
- [cos] Fixing issues with unified object name extraction
- Fixes to globber
- Preserve file extensions for data parts
- [swift] Allow Swift container name to contain spaces
- Add the shade-generated POM to .gitignore
- [swift] Pass back Keystone authentication failures
- Update thirdparty dependencies to the latest compatible versions
- [swift] Update PasswordScopeAccessProvider.java
- [cos] Fixing BUFFER_DIR so it defines the folders for temporary files
- [swift, cos] Introducing object store flat globber
- [cos] Partial support for '+' in object names
- Avoid mkdirs on a directory that already exists. This resolves a TeraGen bug where mkdir writes into a folder with existing data
- [swift] Connection improvements (connect when data is ready, not on create)
- Update dependencies
- [cos] Improve exceptions when IAM credentials are not valid
- [cos] Fixing list on the root level
- [cos] Use the Statistics class to report bytes read and written
- [cos] Remove preconditions check to avoid issues with dependencies
- Use SHA-256 for temp file names. This prevents issues with long names
- Fixing a NullPointerException when running with output committer version 1
- [cos] New configuration key to define Guava cache size
- [cos] Fixing content type for block uploads
- Improve object read flows
- Align listing with Hadoop connectors. A new configuration flag selects between the previous flat listing and the new nested listing
- Introduce Guava caching for frequently accessed objects. This greatly reduces the number of HEAD calls (a minimal caching sketch appears after this list)
- Bucket names now can include dots
- Support for partitions, introduced in Spark 2.0.X
- Improve temp file generation for write flows
- Improve Object Store Globber
- Improve error responses. This fixes a bug that reported an NPE instead of an authentication failure.
- Disk-based block upload for the COS connector
- Implementation for isFile / isDirectory methods
- Move to Hadoop 2.7.3
- Code cleanup
- Remove dependence on FSExceptionMessages
- Adjust copyright headers
- Fix parallel bucket creation
- Custom user agent
- Stocator COS support
- Fixing Stocator user agent
- Fixing issues with streaming
- Properly handle the filter for list operations
- Moving JOSS to 0.9.15
- Avoid duplicate get-container calls
- Make the API work with non-US locales
- Better debug prints
- Reducing number of GET requests
- Fixing list status
- Fixing getFileStatus on the temp object
- Remove duplicate call to get object length
- Support for temp URLs
- Added a thread pool for the create method
- Support spaces in names
- Modified JOSS to disable HEAD on the account when accessing containers. The account-level HEAD caused issues when a user has access only at the container level, not at the account level.
- Fixed a regression caused by consumeQuietly. This fix improved read performance threefold
- Added a cache for object length and last-modified timestamp. The cache is populated during list operations and is useful for Spark flows.
- Removed the need to HEAD an object before GET. This reduces the number of HEAD requests.
- Continued improvements to container listing
- Object upload is now based on Apache HttpClient 4.5.2
- New configuration keys to tune connection properties
- Moving Hadoop to 2.7.2
- Adapting Stocator to work with Hadoop TestDFSIO. This includes support for certain flows required by Hadoop.
- Continued improvements to logging.
- Fixing object store globber. Resolving issues with container listings
- Introducing SwiftConnectionManager, based on PoolingHttpClientConnectionManager. This improves connection utilization for both SwiftAPIDirect and JOSS.
- Resolving issues with 16-minute timeouts. Using a custom retry handler to retry failed attempts (a minimal pooled-client sketch appears after this list)
- Redesign SwiftOutputStream. This resolves various Parquet-related issues, such as the EOF bug
- Fixing double authentication calls during SwiftAPIClient init method
- Supporting multiple schemes
- Improving error messages
- Better logging
- Improving unit tests
- Checking for 100-continue in write operations before uploading the data.
- Fixing token expiration issues in write and read operations
- Removed the object store HEAD request on the _temporary object
- Improving unit tests
- Added the capability to support different schemes, not just swift2d://
- Moving JOSS to 0.9.12
- Applying Apache Trademark guidelines to Readme
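
For reference, the caching entries above (Guava cache for frequent objects, and the cache of object length and last-modified time) amount to fronting object-store HEAD requests with a bounded in-memory cache. The sketch below illustrates the idea only; the `ObjectMeta` holder, the cache limits, and the `headObject` helper are illustrative assumptions, not the connector's actual code.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class MetadataCacheSketch {

  // Hypothetical holder for the metadata being cached (length, last modified).
  static final class ObjectMeta {
    final long length;
    final long lastModified;
    ObjectMeta(long length, long lastModified) {
      this.length = length;
      this.lastModified = lastModified;
    }
  }

  // Bounded cache so frequently accessed objects do not trigger repeated HEAD calls.
  private final Cache<String, ObjectMeta> cache = CacheBuilder.newBuilder()
      .maximumSize(2000)                       // illustrative; the real size is configurable
      .expireAfterWrite(30, TimeUnit.SECONDS)  // short TTL to tolerate external changes
      .build();

  ObjectMeta getMeta(String key) {
    ObjectMeta meta = cache.getIfPresent(key);
    if (meta == null) {
      meta = headObject(key);  // hypothetical HEAD request against the object store
      cache.put(key, meta);
    }
    return meta;
  }

  private ObjectMeta headObject(String key) {
    // Placeholder for an actual HEAD request.
    return new ObjectMeta(0L, System.currentTimeMillis());
  }
}
```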
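Similarly, the pooled connection manager and custom retry handler mentioned above can be sketched with Apache HttpClient 4.5.x as shown below. The pool sizes and retry count are illustrative assumptions, not the connector's defaults.

```java
import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PooledClientSketch {

  public static CloseableHttpClient build() {
    // Share a pool of connections across requests instead of opening one per request.
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    cm.setMaxTotal(100);           // illustrative limits, not the connector's defaults
    cm.setDefaultMaxPerRoute(25);

    // Retry transient I/O failures a few times instead of failing the operation immediately.
    HttpRequestRetryHandler retryHandler =
        (exception, executionCount, context) -> executionCount <= 3;

    return HttpClients.custom()
        .setConnectionManager(cm)
        .setRetryHandler(retryHandler)
        .build();
  }
}
```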