feat(ansible_snippets#Ansible add a sleep): Ansible add a sleep
```yaml
- name: Pause for 5 minutes to build app cache
  ansible.builtin.pause:
    minutes: 5
```
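
The module also accepts a `seconds` parameter when you need a shorter wait:

```yaml
- name: Wait 30 seconds for the service to come up
  ansible.builtin.pause:
    seconds: 30
```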

feat(beautifulsoup#Searching by attribute and value): Searching by attribute and value

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, "html.parser")
results = soup.find_all("td", {"valign": "top"})
```
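
The same search can also be written with a CSS selector:

```python
results = soup.select('td[valign="top"]')
```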

fix(pydantic#Initialize attributes at object creation): Initialize attributes at object creation

`pydantic` recommends [using root validators](pydantic.md#using-root-validators), but it's difficult to understand how to do it and to debug the errors. You also don't have easy access to the default values of the model. I'd rather [overwrite the `__init__` method](pydantic.md#overwriting-the-__init__-method).

```python
class fish(BaseModel):
    name: str
    color: str

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        print("Fish initialization successful!")
        self.color = complex_function()
```
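
A minimal, self-contained sketch of the pattern; `complex_function`, the default for `color` and the sample values are made up for the example:

```python
from pydantic import BaseModel


def complex_function() -> str:
    # Hypothetical stand-in for whatever logic computes the attribute's value
    return "red"


class Fish(BaseModel):
    name: str
    color: str = ""  # default so callers don't need to pass it

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.color = complex_function()


fish = Fish(name="Nemo")
print(fish.color)  # "red"
```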

feat(python_snippets#Read file with Pathlib): Read file with Pathlib

```python
from pathlib import Path

file_ = Path('/to/some/file')
file_.read_text()
```

feat(python_snippets#Get changed time of a file): Get changed time of a file

```python
import os

os.path.getmtime(path)
```
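
To turn the timestamp into a `datetime` object:

```python
import datetime
import os

mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
```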

feat(docker#Installation): Install a specific version of Docker

Follow [these instructions](https://docs.docker.com/engine/install/debian/).

If that doesn't install the version of `docker-compose` that you want, use [the next snippet](https://stackoverflow.com/questions/49839028/how-to-upgrade-docker-compose-to-latest-version):

```bash
VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
DESTINATION=/usr/local/bin/docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
sudo chmod 755 $DESTINATION
```

If you don't want the latest version, set the `VERSION` variable to the release you need.
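
For example (the tag below is only illustrative, check the releases page for the one you want):

```bash
VERSION=v2.20.2
```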

feat(gardening#Fertilizing with manure): Fertilizing with manure

Manure is one of the best organic fertilizers for plants. It's made from the accumulated excrement of bats, sea birds and seals, and it usually doesn't contain additives or synthetic chemical components.

This fertilizer is rich in nitrogen, phosphorus and potassium, which are key minerals for the growth of plants. These components help regenerate the soil, enrich it with nutrients and also act as a fungicide, preventing pests.

Manure is a slow-release fertilizer, which means its nutrients reach the plants at an efficient, controlled and slow pace. That way the plants take up the nutrients when they need them.

The best moment to use it is in spring, and depending on the type of plant you should apply it every one and a half to three months. Its use in winter is not recommended, as it may burn the plant's roots.

Manure can be obtained in powder or liquid form. The first is perfect to scatter directly over the soil, while the second works better for plant pots. You don't need to use much; in fact, a couple of spoonfuls per pot is enough. Apply it around the base of the plant, avoiding contact with leaves, stems or exposed roots, as it may burn them. After applying it remember to water often; keep in mind that it's like a heavy, greasy sandwich for the plants, and they need water to digest it.

For my indoor plants I'm going to apply a small dose (one spoon per plant) at the start of Autumn (first days of September), and two spoons at the start of spring (first days of March).

fix(grafana): Improve installation method

Add more configuration values such as:

```bash
GF_SERVER_ENABLE_GZIP="true"

GF_AUTH_GENERIC_OAUTH_ALLOW_ASSIGN_GRAFANA_ADMIN="true"

GF_LOG_MODE="console file"
GF_LOG_LEVEL="info"
```

fix(grafana#Configure datasources): Warning when configuring datasources

Be careful to set the `timeInterval` variable to the interval at which you scrape the data from the node exporter, to avoid [this issue](https://github.com/rfmoz/grafana-dashboards/issues/137).
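
For reference, this is roughly what it looks like in the datasource provisioning file (the datasource name and the 30s value are just examples; match `timeInterval` to your scrape interval):

```yaml
datasources:
  - name: Prometheus
    type: prometheus
    jsonData:
      httpMethod: POST
      manageAlerts: true
      timeInterval: 30s
```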

feat(zfs#Repair a DEGRADED pool): Repair a DEGRADED pool

First let’s offline the device we are going to replace:

```bash
zpool offline tank0 ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx
```

Now let us have a look at the pool status.

```bash
zpool status

NAME                                            STATE     READ WRITE CKSUM
tank0                                           DEGRADED     0     0     0
  raidz2-1                                      DEGRADED     0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx           ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
    ata-ST4000VX007-2DT166_xxxxxxxx             ONLINE       0     0     0
```

Sweet, the device is offline (last time it didn't show as offline for me, but the offline command returned a status code of 0).

Time to shut the server down and physically replace the disk.

```bash
shutdown -h now
```

When you start the server again, it’s time to instruct ZFS to replace the removed device with the disk we just installed.

```bash
zpool replace tank0 \
    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx \
    /dev/disk/by-id/ata-TOSHIBA_HDWG180_xxxxxxxxxxxx
```

```bash
zpool status tank0

pool: main
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Sep 22 12:40:28 2023
        4.00T scanned at 6.85G/s, 222G issued at 380M/s, 24.3T total
        54.7G resilvered, 0.89% done, 18:28:03 to go
NAME                                              STATE     READ WRITE CKSUM
tank0                                             DEGRADED     0     0     0
  raidz2-1                                        DEGRADED     0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx             ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
    replacing-6                                   DEGRADED     0     0     0
      ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
      ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0  (resilvering)
    ata-ST4000VX007-2DT166_xxxxxxxx               ONLINE       0     0     0
```

The disk is replaced and getting resilvered, which may take a long time to run (18 hours for an 8TB disk in my case).
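
You can keep an eye on the resilver progress with something like:

```bash
watch -n 60 zpool status tank0
```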

Once the resilvering is done, this is what the pool looks like.

```bash
zpool list

NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank0    43.5T  33.0T  10.5T     14.5T     7%    75%  1.00x  ONLINE  -
```

If you want to read other blogs that have covered the same topic, check out [1](https://madaboutbrighton.net/articles/replace-disk-in-zfs-pool).

feat(zfs): Stop a ZFS scrub

```bash
zpool scrub -s my_pool
```

feat(linux_snippets#Limit the resources a docker is using): Limit the resources a docker is using

You can either set limits on the `docker` service itself, see [1](https://unix.stackexchange.com/questions/537645/how-to-limit-docker-total-resources) and [2](https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html).

And/or you can set limits per container, see [1](https://www.baeldung.com/ops/docker-memory-limit) and [2](https://docs.docker.com/config/containers/resource_constraints/).
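
As a sketch of the per-container option, these are the standard `docker run` flags (the image name and the values are only examples):

```bash
docker run -d \
    --memory=512m \
    --memory-swap=1g \
    --cpus=1.5 \
    some-image:latest
```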

feat(orgmode): Refile from the capture window

Until [this issue](https://github.com/joaomsa/telescope-orgmode.nvim/issues/4) is solved, if you refile from the capture window your task will be refiled but the capture window won't be closed.

Be careful: it only refiles the first task there is, so you need to close the capture before refiling the next one.

feat(python_gnupg): Receive keys from a keyserver

```python
import_result = gpg.recv_keys('server-name', 'keyid1', 'keyid2', ...)
```
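
A slightly fuller sketch; the GnuPG home directory, keyserver and key ID are placeholders:

```python
import gnupg

gpg = gnupg.GPG(gnupghome="/home/user/.gnupg")
import_result = gpg.recv_keys("keyserver.ubuntu.com", "0x12345678DEADBEEF")
print(import_result.count, import_result.fingerprints)
```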

feat(questionary#Autocomplete answers): Autocomplete answers

If you want autocompletion with fuzzy finding, use:

```python
import questionary
from prompt_toolkit.completion import FuzzyWordCompleter

# Hypothetical list of destinations to autocomplete from
destination_directories = ["/tmp", "/home/user/documents", "/srv/media"]

questionary.autocomplete(
    "Save to (q to cancel): ",
    choices=destination_directories,
    completer=FuzzyWordCompleter(destination_directories),
).ask()
```

feat(vial): Introduce Vial

[Vial](https://get.vial.today/) is an open-source cross-platform (Windows, Linux and Mac) GUI and a QMK fork for configuring your keyboard in real time.

Even though you can use a [web version](https://vial.rocks/), you can also install it locally through an [AppImage](https://itsfoss.com/use-appimage-linux/):

- Download [the latest version](https://get.vial.today/download/)
- Give it execution permissions
- Add the file somewhere in your `$PATH`

On Linux you [need to configure a `udev` rule](https://get.vial.today/manual/linux-udev.html).

For a universal access rule for any device with Vial firmware, run this in your shell while logged in as your user (it requires `sudo` to be installed):

```bash
export USER_GID=`id -g`; sudo --preserve-env=USER_GID sh -c 'echo "KERNEL==\"hidraw*\", SUBSYSTEM==\"hidraw\", ATTRS{serial}==\"*vial:f64c2b3c*\", MODE=\"0660\", GROUP=\"$USER_GID\", TAG+=\"uaccess\", TAG+=\"udev-acl\"" > /etc/udev/rules.d/99-vial.rules && udevadm control --reload && udevadm trigger'
```

This command will automatically create a `udev` rule and reload the `udev` system.
lyz-code committed Sep 25, 2023
1 parent 2ef339f commit 7e8761e
Showing 17 changed files with 290 additions and 10 deletions.
2 changes: 1 addition & 1 deletion docs/aleph.md
@@ -244,5 +244,5 @@ conditional that only matches one of both threads.

# References

- [Source](https://github.com/alephdata/aleph)
- [Docs](http://docs.alephdata.org/)
- [Git](https://github.com/alephdata/aleph)
8 changes: 8 additions & 0 deletions docs/ansible_snippets.md
@@ -4,6 +4,14 @@ date: 20220119
author: Lyz
---

# Ansible add a sleep

```yaml
- name: Pause for 5 minutes to build app cache
  ansible.builtin.pause:
    minutes: 5
```
# Ansible condition that uses a regexp
7 changes: 7 additions & 0 deletions docs/beautifulsoup.md
@@ -654,6 +654,13 @@

```python
soup.find_all("a", string="Elsie")
# [<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>]
```

#### [Searching by attribute and value](https://stackoverflow.com/questions/8933863/how-to-find-tags-with-only-certain-attributes-beautifulsoup)

```python
soup = BeautifulSoup(html)
results = soup.findAll("td", {"valign" : "top"})
```

#### The limit argument

`find_all()` returns all the tags and strings that match your filters. This can
19 changes: 18 additions & 1 deletion docs/coding/python/pydantic.md
@@ -485,7 +485,24 @@ This is a deliberate decision of *pydantic*, and in general it's the most useful
approach. See [here](https://github.com/samuelcolvin/pydantic/issues/578) for a
longer discussion on the subject.

## [Initialize attributes at object creation](https://stackoverflow.com/questions/60695759/creating-objects-with-id-and-populating-other-fields)
## Initialize attributes at object creation

`pydantic` recommends [using root validators](#using-root-validators), but it's difficult to understand how to do it and to debug the errors. You also don't have easy access to the default values of the model. I'd rather [overwrite the `__init__` method](#overwriting-the-__init__-method).

### [Overwriting the `__init__` method](https://stackoverflow.com/questions/76286148/how-do-custom-init-functions-work-in-pydantic-with-inheritance)

```python
class fish(BaseModel):
    name: str
    color: str

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        print("Fish initialization successful!")
        self.color = complex_function()
```

### [Using root validators](https://stackoverflow.com/questions/60695759/creating-objects-with-id-and-populating-other-fields)

If you want to initialize attributes of the object automatically at object
creation, similar of what you'd do with the `__init__` method of the class, you
16 changes: 16 additions & 0 deletions docs/coding/python/python_snippets.md
@@ -4,6 +4,20 @@ date: 20200717
author: Lyz
---

# Read file with Pathlib

```python
file_ = Path('/to/some/file')
file_.read_text()
```

# [Get changed time of a file](https://stackoverflow.com/questions/237079/how-do-i-get-file-creation-and-modification-date-times)

```python
import os

os.path.getmtime(path)
```
# [Sort the returned paths of glob](https://stackoverflow.com/questions/6773584/how-are-glob-globs-return-values-ordered)


@@ -1054,6 +1068,8 @@ print(html2text.html2text(html))

# [Parse a datetime from a string](https://stackoverflow.com/questions/466345/converting-string-into-datetime)

Convert a string to a datetime.

```python
from dateutil import parser

```
15 changes: 15 additions & 0 deletions docs/docker.md
@@ -12,6 +12,21 @@ they can communicate with each other through well-defined channels. Because
all of the containers share the services of a single operating system kernel,
they use fewer resources than virtual machines.

# Installation

Follow [these instructions](https://docs.docker.com/engine/install/debian/)

If that doesn't install the version of `docker-compose` that you want, use [the next snippet](https://stackoverflow.com/questions/49839028/how-to-upgrade-docker-compose-to-latest-version):

```bash
VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
DESTINATION=/usr/local/bin/docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
sudo chmod 755 $DESTINATION
```

If you don't want the latest version, set the `VERSION` variable.

# How to keep containers updated

## [With Renovate](renovate.md)
17 changes: 17 additions & 0 deletions docs/gardening.md
@@ -0,0 +1,17 @@
# [Fertilizing with manure](https://cuidatucactus.com/que-es-el-guano-abono/)

Manure is one of the best organic fertilizers for plants. It's made from the accumulated excrement of bats, sea birds and seals, and it usually doesn't contain additives or synthetic chemical components.

This fertilizer is rich in nitrogen, phosphorus and potassium, which are key minerals for the growth of plants. These components help regenerate the soil, enrich it with nutrients and also act as a fungicide, preventing pests.

Manure is a slow-release fertilizer, which means its nutrients reach the plants at an efficient, controlled and slow pace. That way the plants take up the nutrients when they need them.

## When to fertilize with manure

The best moment to use it is in spring, and depending on the type of plant you should apply it every one and a half to three months. Its use in winter is not recommended, as it may burn the plant's roots.

## How to fertilize with manure

Manure can be obtained in powder or liquid form. The first is perfect to scatter directly over the soil, while the second works better for plant pots. You don't need to use much; in fact, a couple of spoonfuls per pot is enough. Apply it around the base of the plant, avoiding contact with leaves, stems or exposed roots, as it may burn them. After applying it remember to water often; keep in mind that it's like a heavy, greasy sandwich for the plants, and they need water to digest it.

For my indoor plants I'm going to apply a small dose (one spoon per plant) at the start of Autumn (first days of September), and two spoons at the start of spring (first days of March).
2 changes: 0 additions & 2 deletions docs/gitea.md
@@ -536,8 +536,6 @@ gitea --config /etc/gitea/app.ini admin user change-password -u username -p pass
- Until [#542](https://gitea.com/gitea/tea/issues/542) is fixed manually create a token with all the permissions
- Run `tea login add` to set your credentials.



# References

* [Home](https://gitea.io/en-us/)
48 changes: 46 additions & 2 deletions docs/grafana.md
@@ -93,13 +93,14 @@ you can use the next docker-compose file.

```yaml
---
version: "3.8"
version: "3.3"
services:
  grafana:
    image: grafana/grafana-oss:${GRAFANA_VERSION:-latest}
    container_name: grafana
    restart: unless-stopped
    volumes:
      - config:/etc/grafana
      - data:/var/lib/grafana
    networks:
      - grafana
```

@@ -136,12 +137,18 @@ networks:
name: swag

```yaml
volumes:
  config:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/app/config
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/app
      device: /data/grafana/app/data
  db-data:
    driver: local
    driver_opts:
```
@@ -226,14 +233,27 @@ export GF_FEATURE_TOGGLES_ENABLE=newNavigation
And in the docker compose you can edit the `.env` file. Mine looks similar to:

```bash
# Check all configuration options at:
# https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana
# -----------------------------
# --- General configuration ---
# -----------------------------
GRAFANA_VERSION=latest
GF_DEFAULT_INSTANCE_NAME="production"
GF_SERVER_ROOT_URL="https://your.domain.org"
# Set this option to true to enable HTTP compression, this can improve transfer
# speed and bandwidth utilization. It is recommended that most users set it to
# true. By default it is set to false for compatibility reasons.
GF_SERVER_ENABLE_GZIP="true"
# ------------------------------
# --- Database configuration ---
# ------------------------------
DATABASE_VERSION=15
GF_DATABASE_TYPE=postgres
DATABASE_VERSION=15
GF_DATABASE_HOST=grafana-db:5432
@@ -257,8 +277,29 @@ GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/
GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o/<Slug of the application from above>/end-session/"
# Optionally enable auto-login (bypasses Grafana login screen)
GF_AUTH_OAUTH_AUTO_LOGIN="true"
# Set to true to enable automatic sync of the Grafana server administrator
# role. If this option is set to true and the result of evaluating
# role_attribute_path for a user is GrafanaAdmin, Grafana grants the user the
# server administrator privileges and organization administrator role. If this
# option is set to false and the result of evaluating role_attribute_path for a
# user is GrafanaAdmin, Grafana grants the user only organization administrator
# role.
GF_AUTH_GENERIC_OAUTH_ALLOW_ASSIGN_GRAFANA_ADMIN="true"
# Optionally enable auto-login (bypasses Grafana login screen)
# Optionally map user groups to Grafana roles
# Optionally map user groups to Grafana roles
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
# Set to true to disable (hide) the login form, useful if you use OAuth. Default is false.
GF_AUTH_DISABLE_LOGIN_FORM="true"
# -------------------------
# --- Log configuration ---
# -------------------------
# Options are “console”, “file”, and “syslog”. Default is “console” and “file”. Use spaces to separate multiple modes, e.g. console file.
GF_LOG_MODE="console file"
# Options are “debug”, “info”, “warn”, “error”, and “critical”. Default is info.
GF_LOG_LEVEL="info"
```

### [Configure datasources](https://grafana.com/docs/grafana/latest/administration/provisioning/#data-sources)
@@ -281,6 +322,7 @@ datasources:

```yaml
    jsonData:
      httpMethod: POST
      manageAlerts: true
      timeInterval: 30s
      prometheusType: Prometheus
      prometheusVersion: 2.44.0
      cacheLevel: 'High'
@@ -289,6 +331,8 @@ datasources:
      exemplarTraceIdDestinations: []
```

Be careful to set the `timeInterval` variable to the value of how often you scrape the data from the node exporter to avoid [this issue](https://github.com/rfmoz/grafana-dashboards/issues/137).

### [Configure dashboards](https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards)

You can manage dashboards in Grafana by adding one or more YAML config files in the `provisioning/dashboards` directory. Each config file can contain a list of dashboards providers that load dashboards into Grafana from the local filesystem.
107 changes: 107 additions & 0 deletions docs/linux/zfs.md
@@ -198,6 +198,87 @@

```
users/home/neil 18K 16.5G 18K /users/home/neil
users/home/neil@2daysago 0 - 18K -
```
## [Repair a DEGRADED pool](https://blog.cavelab.dev/2021/01/zfs-replace-disk-expand-pool/)

First let’s offline the device we are going to replace:

```bash
zpool offline tank0 ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx
```

Now let us have a look at the pool status.

```bash
zpool status

NAME                                            STATE     READ WRITE CKSUM
tank0                                           DEGRADED     0     0     0
  raidz2-1                                      DEGRADED     0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx           ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0
    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
    ata-ST4000VX007-2DT166_xxxxxxxx             ONLINE       0     0     0
```

Sweet, the device is offline (last time it didn't show as offline for me, but the offline command returned a status code of 0).

Time to shut the server down and physically replace the disk.

```bash
shutdown -h now
```

When you start the server again, it’s time to instruct ZFS to replace the removed device with the disk we just installed.

```bash
zpool replace tank0 \
    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx \
    /dev/disk/by-id/ata-TOSHIBA_HDWG180_xxxxxxxxxxxx
```

```bash
zpool status tank0

pool: main
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Sep 22 12:40:28 2023
        4.00T scanned at 6.85G/s, 222G issued at 380M/s, 24.3T total
        54.7G resilvered, 0.89% done, 18:28:03 to go
NAME                                              STATE     READ WRITE CKSUM
tank0                                             DEGRADED     0     0     0
  raidz2-1                                        DEGRADED     0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx             ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx              ONLINE       0     0     0
    replacing-6                                   DEGRADED     0     0     0
      ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx  OFFLINE      0     0     0
      ata-TOSHIBA_HDWG180_xxxxxxxxxxxx            ONLINE       0     0     0  (resilvering)
    ata-ST4000VX007-2DT166_xxxxxxxx               ONLINE       0     0     0
```

The disk is replaced and getting resilvered, which may take a long time to run (18 hours for an 8TB disk in my case).

Once the resilvering is done, this is what the pool looks like.

```bash
zpool list

NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank0    43.5T  33.0T  10.5T     14.5T     7%    75%  1.00x  ONLINE  -
```

If you want to read other blogs that have covered the same topic, check out [1](https://madaboutbrighton.net/articles/replace-disk-in-zfs-pool).
# Installation
## Install the required programs
@@ -503,8 +584,28 @@ The following table summarizes the file or directory changes that are identified
If you've used the `-o keyformat=raw -o keylocation=file:///etc/zfs/keys/home.key` arguments to encrypt your datasets you can't use a `keyformat=passphrase` encryption on the cold storage device. You need to copy those keys on the disk. One way of doing it is to:

- Create a 100M LUKS partition protected with a passphrase where you store the keys.
- The rest of the space is left for a partition for the zpool.

WARNING: substitute `/dev/sde` for the partition you need to work on in the next snippets

To do it:

- Create the partitions:

  ```bash
  fdisk /dev/sde
  n
  +100M
  n
  w
  ```

- Create the zpool

  ```bash
  zpool create cold-backup-01 /dev/sde2
  ```
# Troubleshooting

## [Clear a permanent ZFS error in a healthy pool](https://serverfault.com/questions/576898/clear-a-permanent-zfs-error-in-a-healthy-pool)

@@ -517,6 +618,12 @@ You can read [this long discussion](https://github.com/openzfs/zfs/discussions/9

It takes a long time to run, so be patient.

If you want [to stop a scrub](https://sotechdesign.com.au/zfs-stopping-a-scrub/) run:

```bash
zpool scrub -s my_pool
```

## ZFS pool is in suspended mode

Probably because you've unplugged a device without unmounting it.
6 changes: 6 additions & 0 deletions docs/linux_snippets.md
@@ -4,6 +4,12 @@ date: 20200826
author: Lyz
---

# Limit the resources a docker is using

You can either set limits on the `docker` service itself, see [1](https://unix.stackexchange.com/questions/537645/how-to-limit-docker-total-resources) and [2](https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html).

And/or you can set limits per container, see [1](https://www.baeldung.com/ops/docker-memory-limit) and [2](https://docs.docker.com/config/containers/resource_constraints/).

# [Get the current git branch](https://stackoverflow.com/questions/6245570/how-do-i-get-the-current-branch-name-in-git)
