Commit

Address bucket replication comments
Signed-off-by: Ben <[email protected]>
Neon-White committed Nov 25, 2024
1 parent 619c810 commit dfa3840
43 changes: 29 additions & 14 deletions doc/bucket-replication.md
@@ -1,46 +1,61 @@
[NooBaa Operator](../README.md) /

# Bucket Replication
Bucket replication is a NooBaa feature that allows a user to set a replication policy for all or some objects. The goal of replication policies is simple - to define a target bucket for objects to be copied to.

To utilize bucket replication, we first need to decide which bucket will be our source and which will be our target. The source bucket is the bucket that contains the objects that we want to replicate, and the target bucket is the bucket that will contain the replicated objects. The replication policy is set on the source bucket, and it defines the target bucket(s) and the rules for replication.

In general, a replication policy is a JSON-compliant string which defines an array containing at least one rule -
- Each rule is an object containing a `rule_id`, a `destination_bucket`, and an optional `filter` key that contains a `prefix` field.
- When a filter with a prefix is provided - only object keys that match the prefix will be replicated (see the minimal sketch below)
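For example, a minimal policy with a single prefix-filtered rule might look like the following sketch (the rule ID, bucket name and prefix are placeholders):
```json
{
  "rules": [
    {
      "rule_id": "rule-1",
      "destination_bucket": "first.bucket",
      "filter": { "prefix": "images/" }
    }
  ]
}
```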

Behind the scenes, bucket replication essentially works by comparing object lists. NooBaa lists all objects on the source and target buckets, and checks which objects are missing on the target bucket. It then copies the missing objects from the source to the target bucket (while adhering to any provided rules).
It is possible to accelerate replication by utilizing logs - at the time of writing, AWS S3 server access logging or Azure Monitor. This mechanism allows NooBaa to copy only objects that have been created or modified since the feature was turned on, while the rest replicate in the background. This allows users to get up to speed with recent objects, while the classic replication mechanism catches up with the rest.

## Bucket Class Replication
Bucket replication policies can also be applied to bucketclasses. In those cases, the policy will automatically be 'inherited' by all bucket claims that utilize the bucketclass in the future.

## Replication Policy Parameters
As stated above, a replication policy is a JSON-compliant string that defines an array of rules (examples are provided in the Examples section below)
- Each rule is an object that contains the following keys (a combined sketch follows this list):
  - `rule_id` - a unique, user-chosen ID that identifies the rule. The ID should consist of alphanumeric characters (a-zA-Z0-9). Note that it is not possible to create several rules with the same ID.
  - `destination_bucket` - dictates the target NooBaa bucket that the objects will be copied to
  - (Optional) `{"filter": {"prefix": <>}}` - if the user wishes to filter the objects that are replicated, the value of this field can be set to a prefix string
  - (Optional, log-based optimization) `sync_deletions` - can be set to a boolean value to indicate whether deletions should be replicated (i.e. objects that were deleted on the source bucket will also be deleted on the target bucket)
  - (Optional, log-based optimization) `sync_versions` - can be set to a boolean value to indicate whether object versions should be replicated (i.e. if the source bucket has versioning enabled, the target bucket will also have versioning enabled, and all object versions will be synced)
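For instance, a single rule that uses all of the keys above could be sketched as follows (the rule ID, bucket name and prefix are placeholders; the sync flags only take effect when log-based optimization is configured, as described below):
```json
{
  "rule_id": "rule-2",
  "destination_bucket": "second.bucket",
  "filter": { "prefix": "logs/" },
  "sync_deletions": true,
  "sync_versions": true
}
```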

In addition, when the bucketclass is backed by namespacestores, each policy can be set to optimize replication by utilizing logs (configured and supplied by the user; currently, only AWS S3 and Azure Blob are supported):
- (Optional, only supported on namespace buckets) `log_replication_info` - an object that contains data related to log-based replication optimization (a sketch follows this list) -
  - (Necessary on Azure) `endpoint_type` - this field should be set to an appropriate endpoint type (currently, only AZURE is supported)
  - (Necessary on AWS) `{"logs_location": {"logs_bucket": <>}}` - this field should be set to the location of the AWS S3 server access logs
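Putting the pieces together, a log-based policy for an AWS-backed namespace bucket might be structured like the sketch below (the logs bucket name is a placeholder; equivalent single-line policies appear in the Examples section):
```json
{
  "rules": [
    {
      "rule_id": "aws-rule-1",
      "destination_bucket": "first.bucket",
      "sync_deletions": true
    }
  ],
  "log_replication_info": {
    "logs_location": { "logs_bucket": "access-logs-bucket" }
  }
}
```
For an Azure-backed namespace bucket, `log_replication_info` would instead contain `{"endpoint_type": "AZURE"}`, as shown in the Azure example below.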

## Examples
Note that the example policies below can also be saved as files and passed to the NooBaa CLI. In that case, it's necessary to omit the outer single quotes.
### AWS replication policy:

`'{"rules":[{"rule_id":"aws-rule-1", "destination_bucket":"first.bucket", "filter": {"prefix": "a."}}]}'`

### AWS replication policy with log optimization:

`'{"rules":[{"rule_id":"aws-rule-1", "destination_bucket":"first.bucket", "filter": {"prefix": "a."}}], "log_replication_info": {"logs_location": {"logs_bucket": "logsarehere"}}}'`

### Azure replication policy with log optimization, deletion and version sync:

`'{"rules":[{"rule_id":"azure-rule-1", "sync_deletions": true, "sync_versions": false, "destination_bucket":"first.bucket"}], "log_replication_info": {"endpoint_type": "AZURE"}}'`

### Namespace bucketclass creation with replication to first.bucket:

With the NooBaa CLI -

```shell
noobaa -n app-namespace bucketclass create namespace-bucketclass single bc --resource azure-blob-ns --replication-policy=/path/to/json-file.json
```
/path/to/json-file.json is the path to a JSON file which defines the replication policy, e.g. -
```json
{"rules":[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "d"}} ]}
```

With a YAML file:
```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
