options:
loglevel:
type: int
default: 1
description: OSD debug level. Max is 20.
source:
type: string
default: caracal
description: |
Optional configuration to support use of additional sources such as:
.
- ppa:myteam/ppa
- cloud:bionic-ussuri
- cloud:xenial-proposed/queens
- http://my.archive.com/ubuntu main
.
The last option should be used in conjunction with the key configuration
option.
key:
type: string
default:
description: |
Key ID to import to the apt keyring to support use with arbitrary source
configuration from outside of Launchpad archives or PPAs.
The accepted format is either a GPG key in ASCII armor format, including
BEGIN and END markers, or a key ID.
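# A minimal usage sketch for the two options above (assumptions: a deployed
# application named "ceph-osd"; the key ID shown is an illustrative
# placeholder, not a real value):
#
#   juju config ceph-osd source='http://my.archive.com/ubuntu main'
#   juju config ceph-osd key='A1B2C3D4E5F6'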
use-syslog:
type: boolean
default: False
description: |
If set to True, supporting services will log to syslog.
harden:
type: string
default:
description: |
Apply system hardening. Supports a space-delimited list of modules
to run. Supported modules currently include os, ssh, apache and mysql.
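# A hedged example of applying hardening modules (assumes an application
# named "ceph-osd"; the module list shown is illustrative):
#
#   juju config ceph-osd harden="os ssh"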
config-flags:
type: string
default:
description: |
User-provided Ceph configuration. Supports a string representation of
a Python dictionary where each top-level key represents a section in
the ceph.conf template. You may only use sections supported in the
template.
.
WARNING: this is not the recommended way to configure the underlying
services that this charm installs and is used at the user's own risk.
This option is mainly provided as a stop-gap for users that either
want to test the effect of modifying some config or who have found
a critical bug in the way the charm has configured their services
and need it fixed immediately. We ask that whenever this is used, the
user consider opening a bug on this charm at
http://bugs.launchpad.net/charms providing an explanation of why the
config was needed so that we may consider it for inclusion as a
natively supported config in the charm.
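# An illustrative sketch only (assumes an application named "ceph-osd"; the
# section and key shown are examples, and valid sections depend on the
# ceph.conf template shipped with the charm):
#
#   juju config ceph-osd config-flags="{'osd': {'osd max write size': 256}}"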
osd-devices:
type: string
default:
description: |
The devices to format and set up as OSD volumes, space separated.
.
These are the devices that will be checked for and used across all
service units, in addition to any volumes attached via the --storage
flag during deployment. Any devices not found will be ignored.
.
For Ceph < 14.2.0 (Nautilus) these can also be directories instead of
devices. If a value does not start with "/dev" then it will be
interpreted as a directory.
NOTE: if a value does not start with "/dev" then the apparmor
"enforce" profile is not supported.
bdev-enable-discard:
type: string
default: auto
description: |
Enables async discard on devices. This option will enable/disable both
bdev-enable-discard and bdev-async-discard options in ceph configuration
at the same time. The default value "auto" will try to autodetect and
should work in most cases. If you need to force a behaviour you can
set it to "enable" or "disable". Only applies for Ceph Mimic or later.
osd-journal:
type: string
default:
description: |
The devices to use as shared journal drives for all OSDs on a node, space separated.
By default a journal partition will be created on each OSD volume device for
use by that OSD. The default behaviour is also the fallback for the case
where the specified journal device does not exist on a node.
.
Only supported with ceph >= 0.48.3.
bluestore-wal:
type: string
default:
description: |
Path to BlueStore WAL block devices or files, space separated. Should
only be set if using a separate physical device that is faster than the
DB device (such as an NVDIMM or faster SSD). Otherwise BlueStore
automatically maintains the WAL inside of the DB device. This block
device is used as an LVM PV and then space is allocated for each block
device as needed based on the bluestore-block-wal-size setting.
bluestore-db:
type: string
default:
description: |
Path to BlueStore DB block devices or files, space separated. If a
separate physical device faster than the data block device is provided,
all of the filesystem metadata (RocksDB) will be stored there, and it
will also hold the Write-Ahead Log (WAL) unless a further separate
bluestore-wal device is configured (only needed if that device is
faster again than the bluestore-db device). This block device is used
as an LVM PV and then space is allocated for each block device as
needed based on the bluestore-block-db-size setting.
osd-journal-size:
type: int
default: 1024
description: |
Ceph OSD journal size. The journal size should be at least twice the
product of the expected drive speed and the filestore max sync
interval. However, the most common practice is to partition the journal
drive (often an SSD), and mount it such that Ceph uses the entire
partition for the journal.
.
Only supported with ceph >= 0.48.3.
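# A worked example of the sizing guidance above (assumed figures: a drive
# sustaining 100 MB/s and a filestore max sync interval of 5 seconds):
#
#   journal size >= 2 * (100 MB/s * 5 s) = 1000 MB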
bluestore-block-wal-size:
type: int
default: 0
description: |
Size (in bytes) of a partition, file or LV to use for
BlueStore WAL (RocksDB WAL), provided on a per backend device basis.
.
Example: 128 GB device, 8 data devices provided in "osd-devices"
gives 128 / 8 GB = 16 GB = 16000000000 bytes per device.
.
A default value is not set as it is calculated by ceph-disk (before Luminous)
or the charm itself, when ceph-volume is used (Luminous and above).
bluestore-block-db-size:
type: int
default: 0
description: |
Size (in bytes) of a partition, file or LV to use for BlueStore
metadata or RocksDB SSTs, provided on a per backend device basis.
.
Example: 128 GB device, 8 data devices provided in "osd-devices"
gives 128 / 8 GB = 16 GB = 16000000000 bytes per device.
.
A default value is not set as it is calculated by ceph-disk (before Luminous)
or the charm itself, when ceph-volume is used (Luminous and above).
osd-format:
type: string
default: xfs
description: |
Format of filesystem to use for OSD devices. Supported formats include:
.
xfs (Default with >= ceph 0.48.3)
ext4 (Only option < ceph 0.48.3)
btrfs (experimental and not recommended)
.
Only supported with >= ceph 0.48.3.
.
Used with FileStore storage backend.
.
Always applies prior to ceph 12.2.0. Otherwise, only applies when the
"bluestore" option is False.
osd-encrypt:
type: boolean
default: False
description: |
By default, the charm will not encrypt Ceph OSD devices; however, by
setting osd-encrypt to True, Ceph's dmcrypt support will be used to
encrypt OSD devices.
.
Specifying this option on a running Ceph OSD node will have no effect
until new disks are added, at which point new disks will be encrypted.
osd-encrypt-keymanager:
type: string
default: ceph
description: |
Keymanager to use for storage of dm-crypt keys used for OSD devices;
by default 'ceph' itself will be used for storage of keys, making use
of the key/value storage provided by the ceph-mon cluster.
.
Alternatively 'vault' may be used for storage of dm-crypt keys; this
requires a relation to the vault charm. Both approaches ensure that
keys are never written to the local filesystem.
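# A hedged sketch of using vault as the keymanager (assumes deployed
# applications named "ceph-osd" and "vault"; the relation endpoint names
# are assumptions and may differ between charm revisions):
#
#   juju config ceph-osd osd-encrypt=true osd-encrypt-keymanager=vault
#   juju integrate ceph-osd:secrets-storage vault:secrets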
crush-initial-weight:
type: float
default:
description: |
The initial CRUSH weight for newly added OSDs in the CRUSH map. Use this
option only if you wish to set the weight for newly added OSDs in order
to gradually increase the weight over time. Be aware that setting this
overrides the default setting, which can lead to imbalance in the
cluster, especially if there are OSDs of different sizes in use. By
default, the initial CRUSH weight for a newly added OSD is set to its
volume size in TB. Leave this option unset to use the default provided
by Ceph itself. This option only affects NEW OSDs, not existing ones.
osd-max-backfills:
type: int
default:
description: |
The maximum number of backfills allowed to or from a single OSD.
.
Setting this option on a running Ceph OSD node will not affect running
OSD devices, but will add the setting to ceph.conf for the next restart.
osd-recovery-max-active:
type: int
default:
description: |
The number of active recovery requests per OSD at one time. More requests
will accelerate recovery, but the requests place an increased load on the
cluster.
.
Setting this option on a running Ceph OSD node will not affect running
OSD devices, but will add the setting to ceph.conf for the next restart.
tune-osd-memory-target:
type: string
default:
description: |
Set to tune the value of osd_memory_target. If unset or set to an empty
string, the charm will not update the value for Ceph. This means that a
new deployment with this value unset will use Ceph's default (4GB), and
if a value was set but later unset, Ceph will remain configured with the
last set value. This allows the value to be configured manually in Ceph
without interference from the charm.
If set to "{n}%" (where n is an integer), the value will be set as
follows:
total ram * (n/100) / number of osds on the host
If set to "{n}GB" (n is an integer), osd_memory_target will be set per
OSD directly.
Take care when choosing a value that it both provides enough memory for
Ceph and leaves enough memory for the system and other workloads to
function. For common cases it is recommended to stay within the bounds
of 4GB < value < 90% of system memory. If these bounds are broken, a
warning will be emitted by the charm, but the value will still be set.
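# A worked example of the "{n}%" form (assumed figures: a host with 128 GB
# of RAM hosting 8 OSDs, and an application named "ceph-osd"):
#
#   juju config ceph-osd tune-osd-memory-target=50%
#   # osd_memory_target = 128 GB * (50/100) / 8 = 8 GB per OSD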
ignore-device-errors:
type: boolean
default: False
description: |
By default, the charm will raise an error if a whitelisted device is
found but, for some reason, the charm is unable to initialize the device
for use by Ceph.
.
Setting this option to 'True' will result in the charm classifying such
problems as warnings only and will not result in a hook error.
ephemeral-unmount:
type: string
default:
description: |
Cloud instances provide ephemeral storage which is normally mounted
on /mnt.
.
Setting this option to the path of the ephemeral mountpoint will force
an unmount of the corresponding device so that it can be used as an OSD
storage device. This is useful for testing purposes (cloud deployment
is not a typical use case).
ceph-public-network:
type: string
default:
description: |
The IP address and netmask of the public (front-side) network (e.g.,
192.168.0.0/24)
.
If multiple networks are to be used, a space-delimited list of a.b.c.d/x
can be provided.
ceph-cluster-network:
type: string
default:
description: |
The IP address and netmask of the cluster (back-side) network (e.g.,
192.168.0.0/24)
.
If multiple networks are to be used, a space-delimited list of a.b.c.d/x
can be provided.
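# A minimal sketch of configuring both networks (assumes an application
# named "ceph-osd"; the CIDRs shown are illustrative):
#
#   juju config ceph-osd ceph-public-network=10.10.0.0/24 \
#       ceph-cluster-network=192.168.100.0/24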
prefer-ipv6:
type: boolean
default: False
description: |
If True, enables IPv6 support. The charm will expect network interfaces
to be configured with an IPv6 address. If set to False (the default),
IPv4 is expected.
.
NOTE: these charms do not currently support the IPv6 privacy extension.
In order for this charm to function correctly, the privacy extension
must be disabled and a non-temporary address must be configured/available
on your network interface.
sysctl:
type: string
default: '{ kernel.pid_max : 2097152, vm.max_map_count : 524288,
kernel.threads-max: 2097152 }'
description: |
YAML-formatted associative array of sysctl key/value pairs to be set
persistently. By default we set pid_max, max_map_count and
threads-max to a high value to avoid problems with large numbers (>20)
of OSDs recovering. Very large clusters should set those values even
higher (e.g. the max for kernel.pid_max is 4194303).
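# A hedged example of overriding a single sysctl (assumes an application
# named "ceph-osd"; the value shown is the kernel.pid_max maximum noted
# above):
#
#   juju config ceph-osd sysctl="{ kernel.pid_max: 4194303 }"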
customize-failure-domain:
type: boolean
default: false
description: |
Setting this to true will tell Ceph to replicate across Juju's
Availability Zones instead of specifically by host.
availability_zone:
type: string
default:
description: |
Custom availability zone to provide to Ceph for OSD placement.
max-sectors-kb:
type: int
default: 1048576
description: |
This parameter will adjust every block device in your server to allow
greater IO operation sizes. If you have a RAID card with cache on it,
consider tuning this much higher than the 1MB default. 1MB is a safe
default for spinning HDDs that don't have much cache.
nagios_context:
type: string
default: "juju"
description: |
Used by the nrpe-external-master subordinate charm.
A string that will be prepended to the instance name to set the hostname
in Nagios. So for instance the hostname would be something like:
.
juju-myservice-0
.
If you're running multiple environments with the same services in them
this allows you to differentiate between them.
nagios_servicegroups:
type: string
default: ""
description: |
A comma-separated list of Nagios servicegroups.
If left empty, the nagios_context will be used as the servicegroup.
use-direct-io:
type: boolean
default: True
description: Configure use of direct IO for OSD journals.
autotune:
type: boolean
default: False
description: |
Enabling this option will attempt to tune your network card sysctls and
hard drive settings. This changes hard drive read ahead settings and
max_sectors_kb. For the network card this will detect the link speed
and make appropriate sysctl changes.
WARNING: This option is DEPRECATED and will be removed in the next release.
Exercise caution when enabling this feature; examine and
confirm sysctl values are appropriate for your environment. See
http://pad.lv/1798794 for a full discussion.
aa-profile-mode:
type: string
default: 'disable'
description: |
Enable apparmor profile. Valid settings: 'complain', 'enforce' or
'disable'.
.
NOTE: changing the value of this option is disruptive to a running Ceph
cluster as all ceph-osd processes must be restarted as part of changing
the apparmor profile enforcement mode. Always test in pre-production
before enabling AppArmor on a live cluster.
NOTE: the apparmor 'enforce' profile is supported only if the osd-devices
entries start with "/dev".
bluestore-compression-algorithm:
type: string
default: lz4
description: |
The default compressor to use (if any) if the per-pool property
compression_algorithm is not set.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
bluestore-compression-mode:
type: string
default:
description: |
The default policy for using compression if the per-pool property
compression_mode is not set. 'none' means never use compression.
'passive' means use compression when clients hint that data is
compressible. 'aggressive' means use compression unless clients hint that
data is not compressible. 'force' means use compression under all
circumstances even if the clients hint that the data is not compressible.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
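# A hedged sketch of enabling compression via this charm (assumes an
# application named "ceph-osd"; note that, per the NOTE above, this
# affects ALL pools on the OSDs managed by that application):
#
#   juju config ceph-osd bluestore-compression-algorithm=lz4 \
#       bluestore-compression-mode=aggressive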
bluestore-compression-required-ratio:
type: float
default:
description: |
The ratio of the size of the data chunk after compression relative to the
original size must be at least this small in order to store the
compressed version. The per-pool property `compression-required-ratio`
overrides this setting.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
bluestore-compression-min-blob-size:
type: int
default:
description: |
Chunks smaller than this are never compressed. The per-pool property
`compression_min_blob_size` overrides this setting.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
bluestore-compression-min-blob-size-hdd:
type: int
default:
description: |
Default value of bluestore compression min blob size for rotational
media. The per-pool property `compression-min-blob-size-hdd` overrides
this setting.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
bluestore-compression-min-blob-size-ssd:
type: int
default:
description: |
Default value of bluestore compression min blob size for solid state
media. The per-pool property `compression-min-blob-size-ssd` overrides
this setting.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
bluestore-compression-max-blob-size:
type: int
default:
description: |
Chunks larger than this are broken into smaller blobs no larger than
bluestore compression max blob size before being compressed. The per-pool
property `compression_max_blob_size` overrides this setting.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
bluestore-compression-max-blob-size-hdd:
type: int
default:
description: |
Default value of bluestore compression max blob size for rotational
media. The per-pool property `compression-max-blob-size-hdd` overrides
this setting.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.
bluestore-compression-max-blob-size-ssd:
type: int
default:
description: |
Default value of bluestore compression max blob size for solid state
media. The per-pool property `compression-max-blob-size-ssd` overrides
this setting.
.
NOTE: The recommended approach is to adjust this configuration option on
the charm responsible for creating the specific pool you are interested
in tuning. Changing the configuration option on the ceph-osd charm will
affect ALL pools on the OSDs managed by the named application of the
ceph-osd charm in the Juju model.