diff --git a/docs/aleph.md b/docs/aleph.md
index 8cf1cf41dbc..039b12e313f 100644
--- a/docs/aleph.md
+++ b/docs/aleph.md
@@ -244,5 +244,5 @@ conditional that only matches one of both threads.
 
 # References
 
+- [Source](https://github.com/alephdata/aleph)
 - [Docs](http://docs.alephdata.org/)
-- [Git](https://github.com/alephdata/aleph)
diff --git a/docs/ansible_snippets.md b/docs/ansible_snippets.md
index a4872fada45..5e2fb92b630 100644
--- a/docs/ansible_snippets.md
+++ b/docs/ansible_snippets.md
@@ -4,6 +4,14 @@ date: 20220119
 author: Lyz
 ---
 
+# Ansible add a sleep
+
+```yaml
+- name: Pause for 5 minutes to build app cache
+  ansible.builtin.pause:
+    minutes: 5
+```
+
 # Ansible condition that uses a regexp
 
 ```yaml
diff --git a/docs/beautifulsoup.md b/docs/beautifulsoup.md
index eb76158731d..9ad70c18351 100644
--- a/docs/beautifulsoup.md
+++ b/docs/beautifulsoup.md
@@ -654,6 +654,13 @@ soup.find_all("a", string="Elsie")
 # [Elsie]
 ```
 
+#### [Searching by attribute and value](https://stackoverflow.com/questions/8933863/how-to-find-tags-with-only-certain-attributes-beautifulsoup)
+
+```python
+soup = BeautifulSoup(html, "html.parser")
+results = soup.find_all("td", {"valign": "top"})
+```
+
 #### The limit argument
 
 `find_all()` returns all the tags and strings that match your filters. This can
diff --git a/docs/coding/python/pydantic.md b/docs/coding/python/pydantic.md
index 731880e3be3..3596d2b6e10 100644
--- a/docs/coding/python/pydantic.md
+++ b/docs/coding/python/pydantic.md
@@ -485,7 +485,24 @@ This is a deliberate decision of *pydantic*, and in general it's the most useful
 approach. See [here](https://github.com/samuelcolvin/pydantic/issues/578)
 for a longer discussion on the subject.
-## [Initialize attributes at object creation](https://stackoverflow.com/questions/60695759/creating-objects-with-id-and-populating-other-fields)
+## Initialize attributes at object creation
+
+`pydantic` recommends [using root validators](#using-root-validators), but it's difficult to understand how to do it and to debug the errors. You also don't have easy access to the default values of the model. I'd rather [overwrite the `__init__` method](#overwriting-the-__init__-method).
+
+### [Overwriting the `__init__` method](https://stackoverflow.com/questions/76286148/how-do-custom-init-functions-work-in-pydantic-with-inheritance)
+
+```python
+class Fish(BaseModel):
+    name: str
+    color: str
+
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        print("Fish initialization successful!")
+        self.color = complex_function()  # complex_function is defined elsewhere
+```
+
+### [Using root validators](https://stackoverflow.com/questions/60695759/creating-objects-with-id-and-populating-other-fields)
 
 If you want to initialize attributes of the object automatically at object
 creation, similar to what you'd do with the `__init__` method of the class, you
diff --git a/docs/coding/python/python_snippets.md b/docs/coding/python/python_snippets.md
index 39cef27198f..e796604a333 100644
--- a/docs/coding/python/python_snippets.md
+++ b/docs/coding/python/python_snippets.md
@@ -4,6 +4,20 @@ date: 20200717
 author: Lyz
 ---
 
+# Read file with Pathlib
+
+```python
+from pathlib import Path
+
+file_ = Path('/to/some/file')
+file_.read_text()
+```
+
+# [Get changed time of a file](https://stackoverflow.com/questions/237079/how-do-i-get-file-creation-and-modification-date-times)
+
+```python
+import os
+
+os.path.getmtime(path)
+```
+
 # [Sort the returned paths of glob](https://stackoverflow.com/questions/6773584/how-are-glob-globs-return-values-ordered)
@@ -1054,6 +1068,8 @@ print(html2text.html2text(html))
 
 # [Parse a datetime from a string](https://stackoverflow.com/questions/466345/converting-string-into-datetime)
 
+Convert a string to a
datetime.
+
 ```python
 from dateutil import parser
 
diff --git a/docs/docker.md b/docs/docker.md
index c183ded515c..c9af3181160 100644
--- a/docs/docker.md
+++ b/docs/docker.md
@@ -12,6 +12,21 @@ they can communicate with each other through well-defined channels. Because all
 of the containers share the services of a single operating system kernel, they
 use fewer resources than virtual machines.
 
+# Installation
+
+Follow [these instructions](https://docs.docker.com/engine/install/debian/).
+
+If that doesn't install the version of `docker-compose` that you want, use [the next snippet](https://stackoverflow.com/questions/49839028/how-to-upgrade-docker-compose-to-latest-version):
+
+```bash
+VERSION=$(curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')
+DESTINATION=/usr/local/bin/docker-compose
+sudo curl -L https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m) -o $DESTINATION
+sudo chmod 755 $DESTINATION
+```
+
+If you don't want the latest version, set the `VERSION` variable yourself instead of fetching it from the API.
+
 # How to keep containers updated
 
 ## [With Renovate](renovate.md)
diff --git a/docs/gardening.md b/docs/gardening.md
new file mode 100644
index 00000000000..08875dd8aae
--- /dev/null
+++ b/docs/gardening.md
@@ -0,0 +1,17 @@
+# [Fertilizing with manure](https://cuidatucactus.com/que-es-el-guano-abono/)
+
+Manure is one of the best organic fertilizers for plants. It's made from the accumulated excrement of bats, sea birds and seals, and it usually doesn't contain additives or synthetic chemical components.
+
+This fertilizer is rich in nitrogen, phosphorus and potassium, which are key minerals for the growth of plants. These components help regenerate the soil, enrich it with nutrients, and also act as a fungicide, preventing pests.
+
+Manure is a slow-absorption fertilizer, which means that its nutrients are released to the plants at an efficient, controlled and slow pace.
That way the plants take the nutrients when they need them.
+
+## When to fertilize with manure
+
+The best moment to use it is in spring. Depending on the type of plant, you should apply it every one and a half to three months. Its use in winter is not recommended, as it may burn the plant's roots.
+
+## How to fertilize with manure
+
+Manure can be obtained in powder or liquid form. The first is perfect to scatter directly over the earth, while the second is better suited to plant pots. You don't need to use much; in fact, a couple of spoonfuls per pot is enough. Apply it around the base of the plant, avoiding contact with the leaves, stem or exposed roots, as it may burn them. After you apply it, remember to water the plants often; keep in mind that it's like a heavy greasy sandwich for them, and they need water to digest it.
+
+For my indoor plants I'm going to apply a small dose (one spoonful per plant) at the start of autumn (first days of September), and two spoonfuls at the start of spring (first days of March).
diff --git a/docs/gitea.md b/docs/gitea.md
index 45ab379cbef..e3b98639208 100644
--- a/docs/gitea.md
+++ b/docs/gitea.md
@@ -536,8 +536,6 @@ gitea --config /etc/gitea/app.ini admin user change-password -u username -p pass
 
 - Until [#542](https://gitea.com/gitea/tea/issues/542) is fixed, manually create a token with all the permissions
 - Run `tea login add` to set your credentials.
-
-
 # References
 
 * [Home](https://gitea.io/en-us/)
diff --git a/docs/grafana.md b/docs/grafana.md
index fcd7e7f7415..d4b27680914 100644
--- a/docs/grafana.md
+++ b/docs/grafana.md
@@ -93,13 +93,14 @@ you can use the next docker-compose file.
 ```yaml
 ---
-version: "3.8"
+version: "3.3"
 services:
   grafana:
     image: grafana/grafana-oss:${GRAFANA_VERSION:-latest}
     container_name: grafana
     restart: unless-stopped
     volumes:
+      - config:/etc/grafana
       - data:/var/lib/grafana
     networks:
       - grafana
@@ -136,12 +137,18 @@ networks:
     name: swag
 
 volumes:
+  config:
+    driver: local
+    driver_opts:
+      type: none
+      o: bind
+      device: /data/grafana/app/config
   data:
     driver: local
    driver_opts:
       type: none
       o: bind
-      device: /data/grafana/app
+      device: /data/grafana/app/data
   db-data:
     driver: local
     driver_opts:
@@ -226,14 +233,27 @@ export GF_FEATURE_TOGGLES_ENABLE=newNavigation
 
 And in the docker compose you can edit the `.env` file. Mine looks similar to:
 
 ```bash
+# Check all configuration options at:
+# https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana
+
+# -----------------------------
+# --- General configuration ---
+# -----------------------------
+
 GRAFANA_VERSION=latest
 GF_DEFAULT_INSTANCE_NAME="production"
 GF_SERVER_ROOT_URL="https://your.domain.org"
 
+# Set this option to true to enable HTTP compression, this can improve transfer
+# speed and bandwidth utilization. It is recommended that most users set it to
+# true. By default it is set to false for compatibility reasons.
+GF_SERVER_ENABLE_GZIP="true"
+
 # ------------------------------
 # --- Database configuration ---
 # ------------------------------
 
 GF_DATABASE_TYPE=postgres
 DATABASE_VERSION=15
 GF_DATABASE_HOST=grafana-db:5432
@@ -257,8 +277,29 @@ GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/
 GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o//end-session/"
 # Optionally enable auto-login (bypasses Grafana login screen)
 GF_AUTH_OAUTH_AUTO_LOGIN="true"
+# Set to true to enable automatic sync of the Grafana server administrator
+# role.
If this option is set to true and the result of evaluating
+# role_attribute_path for a user is GrafanaAdmin, Grafana grants the user the
+# server administrator privileges and organization administrator role. If this
+# option is set to false and the result of evaluating role_attribute_path for a
+# user is GrafanaAdmin, Grafana grants the user only organization administrator
+# role.
+GF_AUTH_GENERIC_OAUTH_ALLOW_ASSIGN_GRAFANA_ADMIN="true"
 # Optionally map user groups to Grafana roles
 GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
+# Set to true to disable (hide) the login form, useful if you use OAuth. Default is false.
+GF_AUTH_DISABLE_LOGIN_FORM="true"
+
+# -------------------------
+# --- Log configuration ---
+# -------------------------
+
+# Options are "console", "file", and "syslog". Default is "console" and "file".
+# Use spaces to separate multiple modes, e.g. "console file".
+GF_LOG_MODE="console file"
+# Options are "debug", "info", "warn", "error", and "critical". Default is "info".
+GF_LOG_LEVEL="info"
 ```
 
 ### [Configure datasources](https://grafana.com/docs/grafana/latest/administration/provisioning/#data-sources)
@@ -281,6 +322,7 @@ datasources:
     jsonData:
       httpMethod: POST
       manageAlerts: true
+      timeInterval: 30s
       prometheusType: Prometheus
       prometheusVersion: 2.44.0
       cacheLevel: 'High'
@@ -289,6 +331,8 @@ datasources:
       exemplarTraceIdDestinations: []
 ```
 
+Be careful to set the `timeInterval` variable to the value of how often you scrape the data from the node exporter, to avoid [this issue](https://github.com/rfmoz/grafana-dashboards/issues/137).
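The `timeInterval` only lines up if it matches what Prometheus actually does. A sketch of the corresponding Prometheus scrape configuration (the job name and target here are assumptions, not taken from this repo):

```yaml
# prometheus.yml (sketch): scrape_interval must match the timeInterval
# of the Grafana datasource so rate() windows are computed correctly
scrape_configs:
  - job_name: node-exporter
    scrape_interval: 30s
    static_configs:
      - targets: ["node-exporter:9100"]
```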
+
 ### [Configure dashboards](https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards)
 
 You can manage dashboards in Grafana by adding one or more YAML config files in the `provisioning/dashboards` directory. Each config file can contain a list of dashboards providers that load dashboards into Grafana from the local filesystem.
diff --git a/docs/linux/zfs.md b/docs/linux/zfs.md
index 3f399ebd04e..9adf5752fd5 100644
--- a/docs/linux/zfs.md
+++ b/docs/linux/zfs.md
@@ -198,6 +198,87 @@ users/home/neil 18K 16.5G 18K /users/home/neil
 users/home/neil@2daysago 0 - 18K -
 ```
 
+## [Repair a DEGRADED pool](https://blog.cavelab.dev/2021/01/zfs-replace-disk-expand-pool/)
+
+First let’s offline the device we are going to replace:
+
+```bash
+zpool offline tank0 ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx
+```
+
+Now let’s have a look at the pool status:
+
+```bash
+zpool status
+
+NAME                                             STATE     READ WRITE CKSUM
+tank0                                            DEGRADED     0     0     0
+  raidz2-1                                       DEGRADED     0     0     0
+    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx            ONLINE       0     0     0
+    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx   OFFLINE      0     0     0
+    ata-ST4000VX007-2DT166_xxxxxxxx              ONLINE       0     0     0
+```
+
+Sweet, the device is offline (last time it didn't show as offline for me, but the offline command returned a status code of 0).
+
+Time to shut the server down and physically replace the disk.
+
+```bash
+shutdown -h now
+```
+
+When you start the server again, it’s time to instruct ZFS to replace the removed device with the disk we just installed.
+
+```bash
+zpool replace tank0 \
+    ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx \
+    /dev/disk/by-id/ata-TOSHIBA_HDWG180_xxxxxxxxxxxx
+```
+
+```bash
+zpool status tank0
+
+  pool: tank0
+ state: DEGRADED
+status: One or more devices is currently being resilvered.
The pool will
+        continue to function, possibly in a degraded state.
+action: Wait for the resilver to complete.
+  scan: resilver in progress since Fri Sep 22 12:40:28 2023
+        4.00T scanned at 6.85G/s, 222G issued at 380M/s, 24.3T total
+        54.7G resilvered, 0.89% done, 18:28:03 to go
+
+NAME                                             STATE     READ WRITE CKSUM
+tank0                                            DEGRADED     0     0     0
+  raidz2-1                                       DEGRADED     0     0     0
+    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-TOSHIBA_HDWN180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-WDC_WD80EFZX-68UW8N0_xxxxxxxx            ONLINE       0     0     0
+    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx             ONLINE       0     0     0
+    ata-TOSHIBA_HDWG180_xxxxxxxxxxxx             ONLINE       0     0     0
+    replacing-6                                  DEGRADED     0     0     0
+      ata-WDC_WD2003FZEX-00SRLA0_WD-xxxxxxxxxxxx OFFLINE      0     0     0
+      ata-TOSHIBA_HDWG180_xxxxxxxxxxxx           ONLINE       0     0     0  (resilvering)
+    ata-ST4000VX007-2DT166_xxxxxxxx              ONLINE       0     0     0
+```
+
+The disk is replaced and getting resilvered, which may take a long time to run (18 hours for an 8TB disk in my case).
+
+Once the resilvering is done, this is what the pool looks like:
+
+```bash
+zpool list
+
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+tank0  43.5T  33.0T  10.5T     14.5T     7%    75%  1.00x  ONLINE  -
+```
+
+If you want to read other blogs that have covered the same topic, check out [1](https://madaboutbrighton.net/articles/replace-disk-in-zfs-pool).
+
 # Installation
 
 ## Install the required programs
@@ -503,8 +584,28 @@ The following table summarizes the file or directory changes that are identified
 
 If you've used the `-o keyformat=raw -o keylocation=file:///etc/zfs/keys/home.key` arguments to encrypt your datasets, you can't use a `keyformat=passphrase` encryption on the cold storage device. You need to copy those keys on the disk. One way of doing it is to:
 
 - Create a 100M LUKS partition protected with a passphrase where you store the keys.
+  - The rest of the space is left for a partition for the zpool.
+WARNING: substitute `/dev/sde` for the device you need to work on in the next snippets.
+
+To do it:
+- Create the partitions:
+
+  ```bash
+  fdisk /dev/sde
+  n      # create the keys partition
+
+  +100M  # accept the defaults, give it a size of 100M
+  n      # create the zpool partition with the remaining space
+  w      # write the changes
+  ```
+
+- Create the zpool:
+
+  ```bash
+  zpool create cold-backup-01 /dev/sde2
+  ```
+
 # Troubleshooting
 
 ## [Clear a permanent ZFS error in a healthy pool](https://serverfault.com/questions/576898/clear-a-permanent-zfs-error-in-a-healthy-pool)
@@ -517,6 +618,12 @@ You can read [this long discussion](https://github.com/openzfs/zfs/discussions/9705)
 
 It takes a long time to run, so be patient.
 
+If you want [to stop a scrub](https://sotechdesign.com.au/zfs-stopping-a-scrub/) run:
+
+```bash
+zpool scrub -s my_pool
+```
+
 ## ZFS pool is in suspended mode
 
 Probably because you've unplugged a device without unmounting it.
diff --git a/docs/linux_snippets.md b/docs/linux_snippets.md
index 492ab800c6b..da2b542dfdf 100644
--- a/docs/linux_snippets.md
+++ b/docs/linux_snippets.md
@@ -4,6 +4,12 @@ date: 20200826
 author: Lyz
 ---
 
+# Limit the resources a Docker container is using
+
+You can either set limits in the Docker service itself, see [1](https://unix.stackexchange.com/questions/537645/how-to-limit-docker-total-resources) and [2](https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html).
+
+And/or you can set limits for each container, see [1](https://www.baeldung.com/ops/docker-memory-limit) and [2](https://docs.docker.com/config/containers/resource_constraints/).
+
 # [Get the current git branch](https://stackoverflow.com/questions/6245570/how-do-i-get-the-current-branch-name-in-git)
 
 ```bash
diff --git a/docs/newsletter/2022_09.md b/docs/newsletter/2022_09.md
index 88fc1ab4d5f..1fb7e38fd51 100644
--- a/docs/newsletter/2022_09.md
+++ b/docs/newsletter/2022_09.md
@@ -271,4 +271,4 @@
 
 * Correction: Correct argument to use pipes in terminals.
-  You don't use `check=True` but `shell=True`, thanks [pawamoy](https://github.com/pawamoy)
\ No newline at end of file
+  You don't use `check=True` but `shell=True`, thanks [pawamoy](https://github.com/pawamoy)
diff --git a/docs/orgmode.md b/docs/orgmode.md
index 653bc691dbb..02c3bb61da1 100644
--- a/docs/orgmode.md
+++ b/docs/orgmode.md
@@ -656,6 +656,10 @@ vim.api.nvim_create_autocmd('FileType', {
 
 If the auto command doesn't override the default `orgmode` one, bind it to other keys and never use it.
 
+Until [this issue](https://github.com/joaomsa/telescope-orgmode.nvim/issues/4) is solved, if you refile from the capture window your task will be refiled but the capture window won't be closed.
+
+Be careful: it only refiles the first task there is, so you need to close the capture window before refiling the next one.
+
 The plugin also allows you to use `telescope` to search through the headings of the different files with `search_headings`, with the configuration above you'd use `g`.
 
 ## Agenda
diff --git a/docs/python_gnupg.md b/docs/python_gnupg.md
index e73c5311a9c..28a08c3531d 100644
--- a/docs/python_gnupg.md
+++ b/docs/python_gnupg.md
@@ -64,7 +64,11 @@ Note: I've already created an adapter for gpg called `KeyStore` available in [`p
 
         return keys
     ```
 
-- [Get information of a key](
+- [Receive keys from a keyserver](https://gnupg.readthedocs.io/en/latest/index.html#importing-and-receiving-keys)
+
+  ```python
+  import_result = gpg.recv_keys('server-name', 'keyid1', 'keyid2', ...)
+  ```
 
 # References
 
 - [Docs](https://gnupg.readthedocs.io/en/latest/)
diff --git a/docs/questionary.md b/docs/questionary.md
index 782aadc60f6..01366ad6c74 100644
--- a/docs/questionary.md
+++ b/docs/questionary.md
@@ -99,6 +99,7 @@ as usual and the default value will be ignored.
 
 If you want the question to exit when it receives a `KeyboardInterrupt` event, use `unsafe_ask` instead of `ask`.
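The difference between the two can be sketched in plain Python. This is an emulation of the documented behaviour (`ask` swallows the interrupt and returns `None`, `unsafe_ask` lets it propagate), not questionary's actual implementation:

```python
def ask(prompt):
    # ask() swallows a KeyboardInterrupt raised while prompting
    # and returns None, so the program keeps running
    try:
        return prompt()
    except KeyboardInterrupt:
        print("Cancelled by user")
        return None


def unsafe_ask(prompt):
    # unsafe_ask() lets the KeyboardInterrupt propagate to the caller
    return prompt()


def interrupted_prompt():
    # stands in for the user pressing Ctrl-C at the prompt
    raise KeyboardInterrupt


print(ask(interrupted_prompt))  # prints "Cancelled by user", then None
try:
    unsafe_ask(interrupted_prompt)
except KeyboardInterrupt:
    print("the caller decides what to do")
```

With `unsafe_ask` you can wrap the whole questionnaire in a single `try`/`except KeyboardInterrupt` and exit cleanly.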
+
 ## [Question types](https://questionary.readthedocs.io/en/stable/pages/types.html)
 
 The different question types are meant to cover different use cases. The
@@ -147,6 +148,21 @@ question types:
 
 Check the [examples](https://github.com/tmbo/questionary/tree/master/examples) to see them in action and how to use them.
 
+### [Autocomplete answers](https://questionary.readthedocs.io/en/stable/pages/types.html#autocomplete)
+
+If you want autocompletion with fuzzy finding, use:
+
+```python
+import questionary
+from prompt_toolkit.completion import FuzzyWordCompleter
+
+# destination_directories is a list of candidate answers
+questionary.autocomplete(
+    "Save to (q to cancel): ",
+    choices=destination_directories,
+    completer=FuzzyWordCompleter(destination_directories),
+).ask()
+```
+
 # Styling
 
 ## [Don't highlight the selected option by default](https://github.com/tmbo/questionary/issues/195)
diff --git a/docs/vial.md b/docs/vial.md
new file mode 100644
index 00000000000..eab64b4d6f4
--- /dev/null
+++ b/docs/vial.md
@@ -0,0 +1,19 @@
+[Vial](https://get.vial.today/) is an open-source cross-platform (Windows, Linux and Mac) GUI and a QMK fork for configuring your keyboard in real time.
+
+# [Installation](https://get.vial.today/download/)
+
+Even though you can use a [web version](https://vial.rocks/), you can install it locally through an [AppImage](https://itsfoss.com/use-appimage-linux/):
+
+- Download [the latest version](https://get.vial.today/download/).
+- Give it execution permissions.
+- Add the file somewhere in your `$PATH`.
+
+On Linux you [need to configure a `udev` rule](https://get.vial.today/manual/linux-udev.html).
+
+For a universal access rule for any device with Vial firmware, run this in your shell while logged in as your user (this will only work with `sudo` installed):
+
+```bash
+export USER_GID=`id -g`; sudo --preserve-env=USER_GID sh -c 'echo "KERNEL==\"hidraw*\", SUBSYSTEM==\"hidraw\", ATTRS{serial}==\"*vial:f64c2b3c*\", MODE=\"0660\", GROUP=\"$USER_GID\", TAG+=\"uaccess\", TAG+=\"udev-acl\"" > /etc/udev/rules.d/99-vial.rules && udevadm control --reload && udevadm trigger'
+```
+
+This command will automatically create a `udev` rule and reload the `udev` system.
diff --git a/mkdocs.yml b/mkdocs.yml
index 2f1af7b25bc..29925ca70b9 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -478,8 +478,6 @@ nav:
       - cooking.md
       - Cooking Basics: cooking_basics.md
       - Cooking software: cooking_software.md
-      - Pilates: [pilates.md]
-      - Aerial Silk: aerial_silk.md
       - Dancing:
          - Rave Dances: dancing/rave_dances.md
          - Swing:
@@ -490,9 +488,12 @@
            - Kicks: dancing/shuffle_kicks.md
            - Spins: dancing/shuffle_spins.md
            - Cutting Shapes: dancing/cutting_shapes_basics.md
+      - Pilates: pilates.md
+      - Aerial Silk: aerial_silk.md
       - Meditation: meditation.md
       - Maker:
          - Redox: redox.md
+         - Vial: vial.md
       - Video Gaming:
          - King Arthur Gold: kag.md
          - The Battle for Wesnoth:
@@ -507,6 +508,7 @@
       - Drawing:
          - drawing/drawing.md
          - Exercise Pool: drawing/exercise_pool.md
+      - Gardening: gardening.md
       - Origami: origami.md
       - Fun: fun.md
       - Book Binding: book_binding.md