diff --git a/docs/aiocron.md b/docs/aiocron.md
new file mode 100644
index 00000000000..a84747c5faf
--- /dev/null
+++ b/docs/aiocron.md
@@ -0,0 +1,56 @@
+[`aiocron`](https://github.com/gawel/aiocron?tab=readme-ov-file) is a python library to run cron jobs asynchronously.
+
+# Usage
+
+You can run it using a decorator:
+
+```python
+>>> import aiocron
+>>> import asyncio
+>>>
+>>> @aiocron.crontab('*/30 * * * *')
+... async def attime():
+...     print('run')
+...
+>>> asyncio.get_event_loop().run_forever()
+```
+
+Or by calling the function yourself:
+
+```python
+>>> cron = aiocron.crontab('0 * * * *', func=yourcoroutine, start=False)
+```
+
+[Here's a simple example](https://stackoverflow.com/questions/65551736/python-3-9-scheduling-periodic-calls-of-async-function-with-different-paramete) of how to run it in a script:
+
+```python
+import asyncio
+from datetime import datetime
+
+import aiocron
+
+
+async def foo(param):
+    print(datetime.now().time(), param)
+
+
+async def main():
+    cron_min = aiocron.crontab('*/1 * * * *', func=foo, args=("At every minute",), start=True)
+    cron_hour = aiocron.crontab('0 */1 * * *', func=foo, args=("At minute 0 past every hour.",), start=True)
+    cron_day = aiocron.crontab('0 9 */1 * *', func=foo, args=("At 09:00 on every day-of-month",), start=True)
+    cron_week = aiocron.crontab('0 9 * * Mon', func=foo, args=("At 09:00 on every Monday",), start=True)
+
+    while True:
+        await asyncio.sleep(1)
+
+
+asyncio.run(main())
+```
+
+There are more complex examples [in the repo](https://github.com/gawel/aiocron/tree/master/examples).
+# Installation
+
+```bash
+pip install aiocron
+```
+
+# References
+- [Source](https://github.com/gawel/aiocron?tab=readme-ov-file)
diff --git a/docs/alot.md b/docs/alot.md
index 6a654cc044f..682a9e6e873 100644
--- a/docs/alot.md
+++ b/docs/alot.md
@@ -4,6 +4,8 @@ date: 20210820
 author: Lyz
 ---
 
+DEPRECATED: Use [himalaya](himalaya.md) instead.
+
 [alot](https://github.com/pazz/alot) is a terminal-based mail user agent based
 on the [notmuch mail indexer](notmuch.md). It is written in python using the
 urwid toolkit and features a modular and command prompt driven interface to
diff --git a/docs/email_automation.md b/docs/email_automation.md
index a48cdbc5364..9ef57914878 100644
--- a/docs/email_automation.md
+++ b/docs/email_automation.md
@@ -12,7 +12,7 @@ One of the ways to achieve that goals is to use a combination of tools to
 synchronize the mailboxes, tag them, and run scripts automatically based on the
 tags.
 
-# Installation
+# Fetch emails
 
 First you need a program that syncs your mailboxes, following [pazz's advice
 ](https://github.com/pazz/alot/wiki/pazz's-mail-setup#fetching-mail-mbsync),
@@ -24,7 +24,9 @@ example an account called `lyz` you should be able to sync all your emails
 with:
 
 ```bash
 mbsync -V lyz
 ```
-Now we need to install [`notmuch`](notmuch.md) a tool to index, search, read,
+# Tag and index emails
+
+If you want to use [`alot`](alot.md) (which I no longer do) you need to install [`notmuch`](notmuch.md), a tool to index, search, read,
 and tag large collections of email messages. Follow the steps under
 [installation](notmuch.md#installation) until you have created the database
 that indexes your emails.
@@ -33,6 +35,294 @@ Once we have that, we need a tool to tag the emails following our desired
 rules. [afew](afew.md) is one way to go. Follow the steps under
 [installation](afew.md#installation).
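+
+Once the three pieces are in place, a full fetch-index-tag pass is a single chain. A sketch, assuming an account named `lyz` and that `afew` already has tag rules configured (adjust the account name to your setup):
+
+```bash
+# Fetch new mail, index it with notmuch, then tag the new messages with afew
+mbsync -V lyz && notmuch new && afew --tag --new
+```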
+# Automatically sync emails
+
+## The new way
+
+I have many email accounts and I want to fetch each of them with a different frequency, in the background, and be notified if anything goes wrong.
+
+For that purpose I've created a python script, a systemd service and some loki rules to monitor it.
+
+### Script to sync emails and calendars with different frequencies
+
+The script iterates over the accounts configured in `accounts_config` and, based on some cron expressions, runs `mbsync` for the email accounts and `vdirsyncer` for the calendar accounts. It logs the output in `logfmt` format so that it's easily handled by [loki](loki.md).
+
+To run it you'll first need to create a virtualenv. I use `mkvirtualenv account_syncer`, which creates a virtualenv in `~/.local/share/virtualenvs/account_syncer`.
+
+Then install the dependencies:
+
+```bash
+pip install aiocron
+```
+
+Then place this script somewhere, for example at `~/.local/bin/account_syncer.py`:
+
+```python
+import asyncio
+import asyncio.subprocess
+import logging
+from datetime import datetime
+
+import aiocron
+
+# Dependencies:
+# pip install aiocron
+
+# Configuration for accounts (example)
+accounts_config = {
+    "emails": [
+        {
+            "account_name": "lyz",
+            "cron_expressions": ["*/15 9-23 * * *"],
+        },
+        {
+            "account_name": "work",
+            "cron_expressions": ["*/60 8-17 * * 1-5"],  # Monday-Friday
+        },
+        {
+            "account_name": "monitorization",
+            "cron_expressions": ["*/5 * * * *"],
+        },
+    ],
+    "calendars": [
+        {
+            "account_name": "lyz",
+            "cron_expressions": ["*/15 9-23 * * *"],
+        },
+        {
+            "account_name": "work",
+            "cron_expressions": ["*/60 8-17 * * 1-5"],  # Monday-Friday
+        },
+    ],
+}
+
+
+class LogfmtFormatter(logging.Formatter):
+    """Custom formatter to output logs in logfmt style."""
+
+    def format(self, record: logging.LogRecord) -> str:
+        log_message = (
+            f"level={record.levelname.lower()} "
+            f"logger={record.name} "
+            f'msg="{record.getMessage()}"'
+        )
+        return log_message
+
+
+def setup_logging(logging_name: str) -> logging.Logger:
+    """Configure logging to use logfmt format.
+
+    Args:
+        logging_name (str): The logger's name and identifier in the systemd journal.
+
+    Returns:
+        Logger: The configured logger.
+    """
+    console_handler = logging.StreamHandler()
+    logfmt_formatter = LogfmtFormatter()
+    console_handler.setFormatter(logfmt_formatter)
+    logger = logging.getLogger(logging_name)
+    logger.setLevel(logging.INFO)
+    logger.addHandler(console_handler)
+    return logger
+
+
+log = setup_logging("account_syncer")
+
+
+async def run_mbsync(account_name: str) -> None:
+    """Run mbsync command asynchronously for email accounts.
+
+    Args:
+        account_name (str): The name of the email account to sync.
+    """
+    command = f"mbsync {account_name}"
+    log.info(f"Syncing emails for {account_name}...")
+    process = await asyncio.create_subprocess_shell(
+        command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
+    )
+    stdout, stderr = await process.communicate()
+    if stdout:
+        log.info(f"Output for {account_name}: {stdout.decode()}")
+    if stderr:
+        log.error(f"Error for {account_name}: {stderr.decode()}")
+
+
+async def run_vdirsyncer(account_name: str) -> None:
+    """Run vdirsyncer command asynchronously for calendar accounts.
+
+    Args:
+        account_name (str): The name of the calendar account to sync.
+ """ + command = f"vdirsyncer sync {account_name}" + log.info(f"Syncing calendar for {account_name}...") + process = await asyncio.create_subprocess_shell( + command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE + ) + _, stderr = await process.communicate() + if stderr: + command_log = stderr.decode().strip() + if "error" in command_log or "critical" in command_log: + log.error(f"Output for {account_name}: {command_log}") + elif len(command_log.splitlines()) > 1: + log.info(f"Output for {account_name}: {command_log}") + + +def should_i_sync_today(cron_expr: str) -> bool: + """Check if the current time matches the cron expression day and hour constraints.""" + _, hour, _, _, day_of_week = cron_expr.split() + now = datetime.now() + if "*" in hour: + return True + elif not (int(hour.split("-")[0]) <= now.hour <= int(hour.split("-")[1])): + return False + if day_of_week != "*" and str(now.weekday()) not in day_of_week.split(","): + return False + return True + + +async def main(): + log.info("Starting account syncer for emails and calendars") + accounts_to_sync = {"emails": [], "calendars": []} + + # Schedule email accounts + for account in accounts_config["emails"]: + account_name = account["account_name"] + for cron_expression in account["cron_expressions"]: + if ( + should_i_sync_today(cron_expression) + and account_name not in accounts_to_sync["emails"] + ): + accounts_to_sync["emails"].append(account_name) + aiocron.crontab(cron_expression, func=run_mbsync, args=[account_name]) + log.info( + f"Scheduled mbsync for {account_name} with cron expression: {cron_expression}" + ) + + # Schedule calendar accounts + for account in accounts_config["calendars"]: + account_name = account["account_name"] + for cron_expression in account["cron_expressions"]: + if ( + should_i_sync_today(cron_expression) + and account_name not in accounts_to_sync["calendars"] + ): + accounts_to_sync["calendars"].append(account_name) + aiocron.crontab(cron_expression, func=run_vdirsyncer, args=[account_name]) + log.info( + f"Scheduled vdirsyncer for {account_name} with cron expression: {cron_expression}" + ) + + log.info("Running an initial fetch on today's accounts") + for account_name in accounts_to_sync["emails"]: + await run_mbsync(account_name) + for account_name in accounts_to_sync["calendars"]: + await run_vdirsyncer(account_name) + + log.info("Finished loading accounts") + while True: + await asyncio.sleep(60) + + +# Run the main async loop +if __name__ == "__main__": + asyncio.run(main()) +``` + +Where: + +- `accounts_config`: Holds your account configuration. Each account must contain an `account_name` which should be the name of the `mbsync` or `vdirsyncer` profile, and `cron_expressions` must be a list of cron valid expressions you want the email to be synced. + +### Create the systemd service + +We're using a non-root systemd service. 
+You can follow [these instructions](linux_snippets.md#create-a-systemd-service-for-a-non-root-user) to configure this service:
+
+```ini
+[Unit]
+Description=Account Sync Service for emails and calendars
+After=graphical-session.target
+
+[Service]
+Type=simple
+# Run the script using the virtual environment's Python interpreter
+ExecStart=/home/lyz/.local/share/virtualenvs/account_syncer/bin/python /home/lyz/.local/bin/account_syncer.py
+WorkingDirectory=/home/lyz/.local/bin
+Restart=on-failure
+StandardOutput=journal
+StandardError=journal
+SyslogIdentifier=account_syncer
+# Set the virtual environment's bin directory in the PATH
+Environment="PATH=/home/lyz/.local/share/virtualenvs/account_syncer/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+# Environment variables to use the current user's DISPLAY and DBUS session
+Environment="DISPLAY=:0"
+Environment="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus"
+
+[Install]
+WantedBy=graphical-session.target
+```
+
+Remember to tweak the service to match your use case and paths.
+
+As we'll probably need to enter our `pass` password, we need the service to start once we've logged into the graphical interface.
+
+### Monitor the automation
+
+It's always nice to know if the system is working as expected without adding mental load. To do that I'm creating the following [loki](loki.md) rules:
+
+```yaml
+groups:
+  - name: account_sync
+    rules:
+      - alert: AccountSyncIsNotRunningWarning
+        expr: |
+          (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"}[15m])) or sum by(hostname) (count_over_time({hostname="my_computer"} [15m])) * 0 ) == 0
+        for: 0m
+        labels:
+          severity: warning
+        annotations:
+          summary: "The account sync script is not running {{ $labels.hostname}}"
+      - alert: AccountSyncIsNotRunningError
+        expr: |
+          (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"}[3h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [3h])) * 0 ) == 0
+        for: 0m
+        labels:
+          severity: error
+        annotations:
+          summary: "The account sync script has been down for at least 3 hours {{ $labels.hostname}}"
+      - alert: AccountSyncError
+        expr: |
+          count(rate({job="systemd-journal", syslog_identifier="account_syncer"} |= `` | logfmt | level_extracted=`error` [5m])) > 0
+        for: 0m
+        labels:
+          severity: warning
+        annotations:
+          summary: "There are errors in the account sync log at {{ $labels.hostname}}"
+
+      - alert: EmailAccountIsOutOfSyncLyz
+        expr: |
+          (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"} | logfmt | msg=`Syncing emails for lyz...`[1h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [1h])) * 0 ) == 0
+        for: 0m
+        labels:
+          severity: error
+        annotations:
+          summary: "The email account lyz has been out of sync for 1h {{ $labels.hostname}}"
+
+      - alert: CalendarAccountIsOutOfSyncLyz
+        expr: |
+          (sum by(hostname) (count_over_time({job="systemd-journal", syslog_identifier="account_syncer"} | logfmt | msg=`Syncing calendar for lyz...`[3h])) or sum by(hostname) (count_over_time({hostname="my_computer"} [3h])) * 0 ) == 0
+        for: 0m
+        labels:
+          severity: error
+        annotations:
+          summary: "The calendar account lyz has been out of sync for 3h {{ $labels.hostname}}"
+```
+
+Where:
+
+- You need to change `my_computer` to the hostname of the device running the service.
+- Tweak the OutOfSync alerts to match your accounts (change the `lyz` part).
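+
+Before trusting the alerts, you can check what the `logfmt` stage actually extracts by querying Loki by hand. A hedged sketch with [`logcli`](https://grafana.com/docs/loki/latest/query/logcli/), assuming Loki listens on the default local address:
+
+```bash
+# Show the error-level lines the AccountSyncError rule would match on
+logcli --addr http://localhost:3100 query \
+  '{job="systemd-journal", syslog_identifier="account_syncer"} | logfmt | level_extracted=`error`'
+```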
+
+These rules will raise:
+
+- A warning if the sync has not shown any activity in the last 15 minutes.
+- An error if the sync has not shown any activity in the last 3 hours.
+- An error if there is an error in the logs of the automation.
+
+## The old way
 The remaining step to keep the inboxes synced and tagged is to run all the
 steps above in a cron. Particularize [pazz's
 script](https://github.com/pazz/alot/wiki/pazz's-mail-setup#automation) for your
diff --git a/docs/gancio.md b/docs/gancio.md
index de8de765258..0eacf18289f 100644
--- a/docs/gancio.md
+++ b/docs/gancio.md
@@ -6,6 +6,13 @@
 
 Telegram bridge to republish Gancio events
 
+## [Wordpress plugin](https://wordpress.org/plugins/wpgancio/)
+
+This plugin allows you to embed a list of events or a single event from your Gancio website using a shortcode.
+It also allows you to connect a Gancio instance to your WordPress website to automatically push events published on WordPress:
+for this to work an event manager plugin is required; Event Organiser and The Events Calendar are supported. Adding support for another plugin is an easy task, and there's a guide available in the repo that shows you how to do it.
+
+The source code of the plugin is in the [wp-plugin](https://framagit.org/les/gancio/-/tree/master/wp-plugin?ref_type=heads) directory of the official repo.
 
 # References
diff --git a/docs/himalaya.md b/docs/himalaya.md
new file mode 100644
index 00000000000..a65cf095120
--- /dev/null
+++ b/docs/himalaya.md
@@ -0,0 +1,213 @@
+[himalaya](https://github.com/pimalaya/himalaya) is a Rust CLI to manage emails.
+
+Features:
+
+- Multi-accounting
+- Interactive configuration via **wizard** (requires `wizard` feature)
+- Mailbox, envelope, message and flag management
+- Message composition based on `$EDITOR`
+- **IMAP** backend (requires `imap` feature)
+- **Maildir** backend (requires `maildir` feature)
+- **Notmuch** backend (requires `notmuch` feature)
+- **SMTP** backend (requires `smtp` feature)
+- **Sendmail** backend (requires `sendmail` feature)
+- Global system **keyring** for managing secrets (requires `keyring` feature)
+- **OAuth 2.0** authorization (requires `oauth2` feature)
+- **JSON** output via `--output json`
+- **PGP** encryption:
+  - via shell commands (requires `pgp-commands` feature)
+  - via [GPG](https://www.gnupg.org/) bindings (requires `pgp-gpg` feature)
+  - via native implementation (requires `pgp-native` feature)
+
+Cons:
+
+- Documentation is nonexistent; you have to dive into the `--help` to understand stuff.
+
+# [Installation](https://github.com/pimalaya/himalaya)
+
+*The `v1.0.0` is currently being tested on the `master` branch, and is the preferred version to use. Previous versions (including GitHub beta releases and versions published in repositories) are not recommended.*
+
+Himalaya CLI `v1.0.0` can be installed with a pre-built binary. Find the latest [`pre-release`](https://github.com/pimalaya/himalaya/actions/workflows/pre-release.yml) GitHub workflow and look for the *Artifacts* section. You should find a pre-built binary matching your OS.
+
+Himalaya CLI `v1.0.0` can also be installed with [cargo](https://doc.rust-lang.org/cargo/):
+
+```bash
+$ cargo install --git https://github.com/pimalaya/himalaya.git --force himalaya
+```
+# [Configuration](https://github.com/pimalaya/himalaya?tab=readme-ov-file#configuration)
+
+Just run `himalaya`; the wizard will help you configure your default account.
+
+You can also manually edit your own configuration, from scratch:
+
+- Copy the content of the documented [`./config.sample.toml`](https://github.com/pimalaya/himalaya/blob/master/config.sample.toml)
+- Paste it in a new file `~/.config/himalaya/config.toml`
+- Edit, then comment or uncomment the options you want
+
+## If using mbsync
+
+My generic configuration for an mbsync account is:
+
+```
+[accounts.account_name]
+
+email = "lyz@example.org"
+display-name = "lyz"
+envelope.list.table.unseen-char = "u"
+envelope.list.table.replied-char = "r"
+backend.type = "maildir"
+backend.root-dir = "/home/lyz/.local/share/mail/lyz-example"
+backend.maildirpp = false
+message.send.backend.type = "smtp"
+message.send.backend.host = "example.org"
+message.send.backend.port = 587
+message.send.backend.encryption = "start-tls"
+message.send.backend.login = "lyz"
+message.send.backend.auth.type = "password"
+message.send.backend.auth.command = "pass show mail/lyz.example"
+```
+
+Once you've set it up, you need to [fix the INBOX directory](#cannot-find-maildir-matching-name-inbox).
+
+Then you can check if it works by running `himalaya envelopes list -a lyz-example`.
+
+## Vim plugin installation
+
+Using lazy:
+
+```lua
+return {
+  {
+    "pimalaya/himalaya-vim",
+  },
+}
+```
+
+You can then run `:Himalaya account_name` and it will open himalaya in your editor.
+
+### Configure the account bindings
+
+To avoid typing `:Himalaya account_name` each time you want to check the email you can set some bindings:
+
+```lua
+return {
+  {
+    "pimalaya/himalaya-vim",
+    keys = {
+      { "<leader>ma", "<cmd>Himalaya account_name<cr>", desc = "Open account_name@example.org" },
+      { "<leader>ml", "<cmd>Himalaya lyz<cr>", desc = "Open lyz@example.org" },
+    },
+  },
+}
+```
+
+Setting the description is useful to see the configured accounts with which-key by typing `<leader>m` and waiting.
+
+### Configure extra bindings
+
+The default plugin doesn't yet have all the bindings I'd like, so I've added the following ones:
+
+- In the list of emails view:
+  - `dd` in normal mode or `d` in visual: Delete emails
+  - `q`: exit the program
+
+- In the email view:
+  - `d`: Delete email
+  - `q`: Return to the list of emails view
+
+If you want them too, add the following config:
+
+```lua
+return {
+  {
+    "pimalaya/himalaya-vim",
+    config = function()
+      vim.api.nvim_create_augroup("HimalayaCustomBindings", { clear = true })
+      vim.api.nvim_create_autocmd("FileType", {
+        group = "HimalayaCustomBindings",
+        pattern = "himalaya-email-listing",
+        callback = function()
+          -- Bindings to delete emails (<Plug> mappings need noremap = false)
+          vim.api.nvim_buf_set_keymap(0, "n", "dd", "<Plug>(himalaya-email-delete)", { noremap = false, silent = true })
+          vim.api.nvim_buf_set_keymap(0, "x", "d", "<Plug>(himalaya-email-delete)", { noremap = false, silent = true })
+          -- Bind `q` to close the window
+          vim.api.nvim_buf_set_keymap(0, "n", "q", ":bd<cr>", { noremap = true, silent = true })
+        end,
+      })
+
+      vim.api.nvim_create_augroup("HimalayaEmailCustomBindings", { clear = true })
+      vim.api.nvim_create_autocmd("FileType", {
+        group = "HimalayaEmailCustomBindings",
+        pattern = "mail",
+        callback = function()
+          -- Bind `q` to close the window
+          vim.api.nvim_buf_set_keymap(0, "n", "q", ":q<cr>", { noremap = true, silent = true })
+          -- Bind `d` to delete the email and close the window
+          vim.api.nvim_buf_set_keymap(
+            0,
+            "n",
+            "d",
+            "<Plug>(himalaya-email-delete):q<cr>",
+            { noremap = false, silent = true }
+          )
+        end,
+      })
+    end,
+  },
+}
+```
+
+### Configure email fetching from within vim
+
+[Fetching emails from within vim](https://github.com/pimalaya/himalaya-vim/issues/13) is not yet supported, so I'm manually refreshing by account:
+
+```lua
+return {
+  {
+    "pimalaya/himalaya-vim",
+    keys = {
+      -- Email refreshing bindings
+      { "<leader>rj", ':lua FetchEmails("lyz")<cr>', desc = "Fetch lyz@example.org" },
+    },
+    config = function()
+      function FetchEmails(account)
+        vim.notify("Fetching emails for " .. account .. ", please wait...", vim.log.levels.INFO)
+        vim.cmd("redraw")
+        vim.fn.jobstart("mbsync " .. account, {
+          on_exit = function(_, exit_code, _)
+            if exit_code == 0 then
+              vim.notify("Emails for " .. account .. " fetched successfully!", vim.log.levels.INFO)
+            else
+              vim.notify("Failed to fetch emails for " .. account .. ". Check the logs.", vim.log.levels.ERROR)
+            end
+          end,
+        })
+      end
+    end,
+  },
+}
+```
+
+You still need to run `:Himalaya account_name` again, as the plugin does not reload when there are new emails.
+
+## Show notifications when emails arrive
+
+You can set up [mirador](mirador.md) to get those notifications.
+# Not there yet
+
+- [With the vim plugin you can't switch accounts](https://github.com/pimalaya/himalaya-vim/issues/8)
+- [Let the user delete emails without confirmation](https://github.com/pimalaya/himalaya-vim/issues/12)
+- [Fetching emails from within vim](https://github.com/pimalaya/himalaya-vim/issues/13)
+
+# Troubleshooting
+
+## [Cannot find maildir matching name INBOX](https://github.com/pimalaya/himalaya/issues/490)
+
+`mbsync` uses `Inbox` instead of the default `INBOX`, so himalaya doesn't find it. In theory you can use `folder.alias.inbox = "Inbox"`, but it didn't work for me, so I finally ended up doing a symbolic link from `INBOX` to `Inbox`.
+
+## Cannot find maildir matching name Trash
+
+That's because the `Trash` directory does not follow the Maildir structure. I had to create the `cur`, `tmp` and `new` directories.
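+
+A minimal sketch of that fix, assuming the maildir root used in the configuration above (adjust the path to your `backend.root-dir`):
+
+```bash
+# Give Trash the three directories a Maildir needs
+mkdir -p ~/.local/share/mail/lyz-example/Trash/{cur,new,tmp}
+```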
+# References +- [Source](https://github.com/pimalaya/himalaya) +- [Vim plugin source](https://github.com/pimalaya/himalaya-vim) diff --git a/docs/life_planning.md b/docs/life_planning.md deleted file mode 100644 index 294d5275675..00000000000 --- a/docs/life_planning.md +++ /dev/null @@ -1,93 +0,0 @@ - -# The weekly planning - -I follows these steps: - -- Empty your inbox channels -- Check and adjust the tickler and the calendar of the next two weeks -- If you have it check the month plan guidelines -- Do the project refinement. - -It's difficult to do the weekly review until you get the correct process that suits you right now. I'm still struggling with it. - -# Month plan - -The objectives of the month plan are: - -- Define the month objectives according to the trimester plan and the insights gathered in the past month review. -- Make your backlog and todo list match the month objectives -- Define the topics to learn -- Define the habits to incorporate -- Define the checks you want to do at the end of the month. - -It's interesting to do the plannings on meaningful days such as the first one of the month. Usually we don't have enough flexibility in our life to do it exactly that day, so schedule it the closest you can to that date. It's a good idea to do both the review and the planning on the same day. - -We'll divide the planning process in these phases: - -- Prepare -- Clarify your state -- Decide the month objectives - -## Prepare - -It's important that you prepare your environment for the planning. You need to be present and fully focused on the process itself. To do so you can: - -- Make sure you don't get interrupted: - - Check your action manager tools to make sure that you don't have anything urgent to address in the next hour. - - Disable all notifications -- Set your analysis environment: - - Put on the music that helps you get *in the zone*. - - Get all the things you may need for the review: - - The checklist that defines the process of your planning (this document in my case). - - Somewhere to write down the insights. - - Your action manager system - - Your habit manager system - - Your *Objective list*. - - Your *Thinking list*. - - Your *Reading list*. - - Remove from your environment everything else that may distract you - -## Clarify your state - -To be able to make a good decision on your month's path you need to sort out which is your current state. To do so: - -- Clean your inbox: If it's feasible in a short period of time refile each item until it's empty otherwise quickly overview it to see if there is any thing that needs to be addressed this month. -- Clean your todo: Review each todo element by deciding if they should still be in the todo. If they do and they belong to a month objective, add it. If they don't need to be in the todo, refile it. -- Clean your agenda and get an feeling of the busyness of the month: - - Open the orgmode month view agenda and clean it - - Read the rest of your calendars - -## Decide the month objectives - -Create the month objectives in your roadmap file after addressing each element of: - -- Your last month review document. -- The trimester objectives of your roadmap. - - You can add notes on the trimester objectives - -Then reorder the objectives in order of priority. Try to have at least one objective that improves your life. - -## Decide the next steps - -- For each of your month and trimester objectives: - - Decide whether it makes sense to address it this month. 
If not, mark it as inactive
-  - Create a clear plan of action for this month on that objective.
-  - Reorder the projects as needed
-  - Mark as INACTIVE the ones that you don't feel need to be focused on this month.
-
-- Tweak your week distribution
-  - Check your calendar to see how many free days you have.
-  - Taking into account the month objectives select what do you want to focus on in each week day.
-  - Document the week distribution in your roadmap document and make it visible in your weekly planning process.
-
-- Refine the roadmap of each of the selected areas (change this to the trimestral planning)
-- Define the todo of each device (mobile, tablet, laptop)
-- Select at least one coding project in case you enter in programming mode
-- Clean your mobile browser tabs
-- Tweak your *things to think about list*.
-- Tweak your *investigations list*.
-- Tweak your *reading list*.
-- Tweak your learning list.
-- Tweak your *habit manager system*.
-
-[![](not-by-ai.svg){: .center}](https://notbyai.fyi)
diff --git a/docs/linux/zfs.md b/docs/linux/zfs.md
index a3444d3419e..3a373e1bd8e 100644
--- a/docs/linux/zfs.md
+++ b/docs/linux/zfs.md
@@ -943,7 +943,6 @@ The following table summarizes the file or directory changes that are identified
 
 If you've used the `-o keyformat=raw -o keylocation=file:///etc/zfs/keys/home.key` arguments to encrypt your datasets you can't use a `keyformat=passphrase` encryption on the cold storage device. You need to copy those keys on the disk. One way of doing it is to:
 
 - Create a 100M LUKS partition protected with a passphrase where you store the keys.
-
 - The rest of the space is left for a partition for the zpool.
 
 WARNING: substitute `/dev/sde` for the partition you need to work on in the next snippets
@@ -964,6 +963,56 @@ To do it:
 ```bash
 zpool create cold-backup-01 /dev/sde2
 ```
+
+### Sync an already created cold backup
+#### Mount the existing pool
+
+Imagine your pool is at `/dev/sdf2`:
+
+- Connect your device
+- Check for available ZFS pools: First, check if the system detects any ZFS pools that can be imported:
+
+  ```bash
+  sudo zpool import
+  ```
+
+  This command will list all pools that are available for import, including the one stored in `/dev/sdf2`. Look for the pool name you want to import.
+
+- Import the pool: If you see the pool listed and you know its name (let's say the pool name is `mypool`), you can import it with:
+
+  ```bash
+  sudo zpool import mypool
+  ```
+
+- Import the pool from a specific device: If the pool isn't showing up or you want to specify the device directly, you can use:
+
+  ```bash
+  sudo zpool import -d /dev/sdf2
+  ```
+
+  This tells ZFS to look specifically at `/dev/sdf2` for any pools, and is also the command to run if you don't know the name of the pool: it will list any pools found on the device. Once it shows a pool, import it by appending its name:
+
+  ```bash
+  sudo zpool import -d /dev/sdf2 mypool
+  ```
+
+- Mount the pool: Once the pool is imported, ZFS should automatically mount any datasets associated with the pool. You can check the status of the pool with:
+
+  ```bash
+  sudo zpool status
+  ```
+
+Additional options:
+
+- If the pool was exported cleanly, you can use `zpool import` without additional flags.
+- If the pool wasn’t properly exported or was interrupted, you might need to use `-f` (force) to import it: + + ```bash + sudo zpool import -f mypool + ``` + # Monitorization ## Monitor the ZFS events diff --git a/docs/linux_snippets.md b/docs/linux_snippets.md index 076fd00d224..89ed62b8eca 100644 --- a/docs/linux_snippets.md +++ b/docs/linux_snippets.md @@ -4,6 +4,286 @@ date: 20200826 author: Lyz --- +# Create a systemd service for a non-root user + +To set up a systemd service as a **non-root user**, you can create a user-specific service file under your home directory. User services are defined in `~/.config/systemd/user/` and can be managed without root privileges. + +1. Create the service file: + + Open a terminal and create a new service file in `~/.config/systemd/user/`. For example, if you want to create a service for a script named `my_script.py`, follow these steps: + + ```bash + mkdir -p ~/.config/systemd/user + nano ~/.config/systemd/user/my_script.service + ``` + +2. Edit the service file: + + In the `my_script.service` file, add the following configuration: + + ```ini + [Unit] + Description=My Python Script Service + After=network.target + + [Service] + Type=simple + ExecStart=/usr/bin/python3 /path/to/your/script/my_script.py + WorkingDirectory=/path/to/your/script/ + SyslogIdentifier=my_script + Restart=on-failure + StandardOutput=journal + StandardError=journal + + [Install] + WantedBy=default.target + ``` + + - **Description**: A short description of what the service does. + - **ExecStart**: The command to run your script. Replace `/path/to/your/script/my_script.py` with the full path to your Python script. If you want to run the script within a virtualenv you can use `/path/to/virtualenv/bin/python` instead of `/usr/bin/python3`. + + You'll need to add the virtualenv path to Path + ```ini + # Add virtualenv's bin directory to PATH + Environment="PATH=/path/to/virtualenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + ``` + - **WorkingDirectory**: Set the working directory to where your script is located (optional). + - **Restart**: Restart the service if it fails. + - **StandardOutput** and **StandardError**: This ensures that the output is captured in the systemd journal. + - **WantedBy**: Specifies the target to which this service belongs. `default.target` is commonly used for user services. + +3. Reload systemd to recognize the new service: + + Run the following command to reload systemd's user service files: + + ```bash + systemctl --user daemon-reload + ``` + +4. Enable and start the service: + + To start the service immediately and enable it to run on boot (for your user session), use the following commands: + + ```bash + systemctl --user start my_script.service + systemctl --user enable my_script.service + ``` + +5. Check the status and logs: + + - To check if the service is running: + + ```bash + systemctl --user status my_script.service + ``` + + - To view logs specific to your service: + + ```bash + journalctl --user -u my_script.service -f + ``` + +## If you need to use the graphical interface + +If your script requires user interaction (like entering a GPG passphrase), it’s crucial to ensure that the service is tied to your graphical user session, which ensures that prompts can be displayed and interacted with. 
+ +To handle this situation, you should make a few adjustments to your systemd service: + +### Ensure service is bound to graphical session + +Change the `WantedBy` target to `graphical-session.target` instead of `default.target`. This makes sure the service waits for the full graphical environment to be available. + +### Use `Type=forking` instead of `Type=simple` (optional) + +If you need the service to wait until the user is logged in and has a desktop session ready, you might need to tweak the service type. Usually, `Type=simple` is fine, but you can also experiment with `Type=forking` if you notice any issues with user prompts. + +### Updated Service File + +Here’s how you should modify your `mbsync_syncer.service` file: + +```ini +[Unit] +Description=My Python Script Service +After=graphical-session.target + +[Service] +Type=simple +ExecStart=/usr/bin/python3 /path/to/your/script/my_script.py +WorkingDirectory=/path/to/your/script/ +Restart=on-failure +StandardOutput=journal +StandardError=journal +SyslogIdentifier=my_script +# Environment variable to use the current user's DISPLAY and DBUS_SESSION +Environment="DISPLAY=:0" +Environment="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus" + +[Install] +WantedBy=graphical-session.target +``` + +After modifying the service, reload and restart it: + +```bash +systemctl --user daemon-reload +systemctl --user restart my_script.service +``` +# Debugging high IOwait + +High I/O wait (`iowait`) on the CPU, especially at 50%, typically indicates that your system is spending a large portion of its time waiting for I/O operations (such as disk access) to complete. This can be caused by a variety of factors, including disk bottlenecks, overloaded storage systems, or inefficient applications making disk-intensive operations. + +Here’s a structured approach to debug and analyze high I/O wait on your server: + +## Monitor disk I/O + First, verify if disk I/O is indeed the cause. Tools like `iostat`, `iotop`, and `dstat` can give you an overview of disk activity: + + - **`iostat`**: This tool reports CPU and I/O statistics. You can install it with `apt-get install sysstat`. Run the following command to check disk I/O stats: + + ```bash + iostat -x 1 + ``` + The `-x` flag provides extended statistics, and `1` means it will report every second. Look for high values in the `%util` and `await` columns, which represent: + - `%util`: Percentage of time the disk is busy (ideally should be below 90% for most systems). + - `await`: Average time for I/O requests to complete. + + If either of these values is unusually high, it indicates that the disk subsystem is likely overloaded. + + - **`iotop`**: If you want a more granular look at which processes are consuming disk I/O, use `iotop`: + + ```bash + sudo iotop -o + ``` + + This will show you the processes that are actively performing I/O operations. + + - **`dstat`**: Another useful tool for monitoring disk I/O in real-time: + + ```bash + dstat -cdl 1 + ``` + + This shows CPU, disk, and load stats, refreshing every second. Pay attention to the `dsk/await` value. + +### Check disk health + Disk issues such as bad sectors or failing drives can also lead to high I/O wait times. To check the health of your disks: + + - **Use `smartctl`**: This tool can give you a health check of your disks if they support S.M.A.R.T. + + ```bash + sudo smartctl -a /dev/sda + ``` + + Check for any errors or warnings in the output. Particularly look for things like reallocated sectors or increasing "pending sectors." 
+ + - **`dmesg` logs**: Look at the system logs for disk errors or warnings: + + ```bash + dmesg | grep -i "error" + ``` + + If there are frequent disk errors, it may be time to replace the disk or investigate hardware issues. + +### Look for disk saturation + If the disk is saturated, no matter how fast the CPU is, it will be stuck waiting for data to come back from the disk. To further investigate disk saturation: + + - **`df -h`**: Check if your disk partitions are full or close to full. + + ```bash + df -h + ``` + + - **`lsblk`**: Check how your disks are partitioned and how much data is written to each partition: + + ```bash + lsblk -o NAME,SIZE,TYPE,MOUNTPOINT + ``` + + - **`blktrace`**: For advanced debugging, you can use `blktrace`, which traces block layer events on your system. + + ```bash + sudo blktrace -d /dev/sda -o - | blkparse -i - + ``` + + This will give you very detailed insights into how the system is interacting with the block device. + +### Check for heavy disk-intensive processes + Identify processes that might be using excessive disk I/O. You can use tools like `iotop` (as mentioned earlier) or `pidstat` to look for processes with high disk usage: + + - **`pidstat`**: Track per-process disk activity: + + ```bash + pidstat -d 1 + ``` + + This command will give you I/O statistics per process every second. Look for processes with high `I/O` values (`r/s` and `w/s`). + + - **`top`** or **`htop`**: While `top` or `htop` can show CPU usage, they can also show process-level disk activity. Focus on processes consuming high CPU or memory, as they might also be performing heavy I/O operations. + +### check file system issues + Sometimes the file system itself can be the source of I/O bottlenecks. Check for any file system issues that might be causing high I/O wait. + + - **Check file system consistency**: If you suspect the file system is causing issues (e.g., due to corruption), run a file system check. For `ext4`: + + ```bash + sudo fsck /dev/sda1 + ``` + + Ensure you unmount the disk first or do this in single-user mode. + + - **Check disk scheduling**: Some disk schedulers (like `cfq` or `deadline`) might perform poorly depending on your workload. You can check the scheduler used by your disk with: + + ```bash + cat /sys/block/sda/queue/scheduler + ``` + + You can change the scheduler with: + + ```bash + echo deadline > /sys/block/sda/queue/scheduler + ``` + + This might improve disk performance, especially for certain workloads. + +### Examine system logs + The system logs (`/var/log/syslog` or `/var/log/messages`) may contain additional information about hardware issues, I/O bottlenecks, or kernel-related warnings: + + ```bash + sudo tail -f /var/log/syslog + ``` + + or + + ```bash + sudo tail -f /var/log/messages + ``` + + Look for I/O or disk-related warnings or errors. + +### Consider hardware upgrades or tuning + - **SSD vs HDD**: If you're using HDDs, consider upgrading to SSDs. HDDs can be much slower in terms of I/O, especially if you have a high number of random read/write operations. + - **RAID Configuration**: If you are using RAID, check the RAID configuration and ensure it's properly tuned for performance (e.g., using RAID-10 for a good balance of speed and redundancy). + - **Memory and CPU Tuning**: If the server is swapping due to insufficient RAM, it can result in increased I/O wait. You might need to add more RAM or optimize the system to avoid excessive swapping. 
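+
+If none of the hardware metrics stand out, it's worth confirming whether the wait is driven by swap rather than regular file I/O before spending money on upgrades. A quick hedged check with `vmstat`:
+
+```bash
+# Report once per second, five times: a high `wa` column together with
+# nonzero `si`/`so` (swap in/out) points at swap thrashing, not disk load
+vmstat 1 5
+```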
+
+### Check for swapping issues
+
+Excessive swapping can contribute to high I/O wait times. If your system is swapping (which happens when physical RAM is exhausted), I/O wait spikes as the system reads from and writes to swap space on disk.
+
+- **Check swap usage**:
+
+  ```bash
+  free -h
+  ```
+
+  If swap usage is high, you may need to add more physical RAM or optimize applications to reduce memory pressure.
+
+---
+
+# Create a file with random data
+
+Of 3.5 GB:
+
+```bash
+dd if=/dev/urandom of=random_file.bin bs=1M count=3584
+```
 # [Set the vim filetype syntax in a comment](https://unix.stackexchange.com/questions/19867/is-there-a-way-to-place-a-comment-in-a-file-which-vim-will-process-in-order-to-s)
 
 Add somewhere in your file:
diff --git a/docs/loki.md b/docs/loki.md
index a266c6e6091..78630c48789 100644
--- a/docs/loki.md
+++ b/docs/loki.md
@@ -450,6 +450,8 @@ limits_config:
 ```
 
 But probably you're doing something wrong.
+
+# [Upgrading loki](https://grafana.com/docs/loki/latest/setup/upgrade/)
 # Things that don't still work
 ## [Get a useful Source link in the alertmanager](https://github.com/grafana/loki/issues/4722)
 Currently for the ruler `external_url` if you use the URL of your Grafana installation: e.g. `external_url: "https://grafana.example.com"` it creates a Source link in alertmanager similar to https://grafana.example.com/graph?g0.expr=%28sum+by%28thing%29%28count_over_time%28%7Bnamespace%3D%22foo%22%7D+%7C+json+%7C+bar%3D%22maxRetries%22%5B5m%5D%29%29+%3E+0%29&g0.tab=1, which isn't valid.
diff --git a/docs/mailbox.md b/docs/mailbox.md
new file mode 100644
index 00000000000..9d603905f74
--- /dev/null
+++ b/docs/mailbox.md
@@ -0,0 +1,58 @@
+[`mailbox`](https://docs.python.org/3/library/mailbox.html) is a python library to work with Maildir and mbox local mailboxes.
+
+It's part of the core python libraries, so you don't need to install anything.
+
+# Usage
+
+The docs are not very pleasant to read, so I got most of the usage knowledge from these sources:
+
+- [pymotw docs](https://pymotw.com/2/mailbox/)
+- [Cleanup maildir directories](https://cr-net.be/posts/maildir_cleanup_with_python/)
+- [Parsing maildir directories](https://gist.github.com/tyndyll/6f6145f8b1e82d8b0ad8)
+
+One thing to keep in mind is that an account can have many mailboxes (INBOX, Sent, ...); there is no "root mailbox" that contains all of the others.
+
+## Initialise a mailbox
+
+```python
+mbox = mailbox.Maildir('path/to/your/mailbox')
+```
+
+Where `path/to/your/mailbox` is the directory that contains the `cur`, `new`, and `tmp` directories.
+
+## Working with mailboxes
+
+It's not very clear how to work with them. The `Maildir` mailbox is iterable (`[m for m in mbox]`) and acts kind of like a dictionary: you can get the keys of the emails with `[k for k in mbox.iterkeys()]` and then use `mbox[key]` to get an email. However, you cannot modify those emails (flags, subdir, ...) directly in the `mbox` object (for example `mbox[key].set_flags('P')` doesn't work). You need to `mail = mbox.pop(key)`, do the changes in the `mail` object and then `mbox.add(mail)` it again, with the downside that after you add it again the `key` has changed! The new key is the return value of the `add` method.
+
+If the program gets interrupted between the `pop` and the `add`, you'll lose the email.
The best way to work with it is therefore:
+
+- `mail = mbox.get(key)` to fetch the email
+- Do all the processing you need on the email
+- `mbox.pop(key)` and then `key = mbox.add(mail)`
+
+In theory `mbox` has an `update` method that does this, but I don't understand it and it doesn't work as expected :S.
+
+## Moving emails around
+
+You can't just move the files between directories as you'd do with plain python file operations, as each mailbox contains its own identifiers.
+
+### Moving a message between the maildir directories
+
+The `MaildirMessage` has a `set_subdir` method to move a message between the `new` and `cur` subdirectories.
+
+## [Creating folders](https://pymotw.com/2/mailbox/#maildir-folders)
+
+Even though you can create folders with `mailbox`, it creates them in a way that mbsync doesn't understand. It's easier to manually create the `cur`, `tmp`, and `new` directories. I'm using the following snippet:
+
+```python
+if not (mailbox_dir / "cur").exists():
+    for dir in ["cur", "tmp", "new"]:
+        (mailbox_dir / dir).mkdir(parents=True)
+    log.info(f"Initialized mailbox: {mailbox}")
+else:
+    log.debug(f"{mailbox} already exists")
+```
+# References
+- [Reference Docs](https://docs.python.org/3/library/mailbox.html)
+- [Non official useful docs](https://pymotw.com/2/mailbox/)
diff --git a/docs/maildir.md b/docs/maildir.md
new file mode 100644
index 00000000000..cd9cd84f9e0
--- /dev/null
+++ b/docs/maildir.md
@@ -0,0 +1,9 @@
+The [Maildir](https://en.wikipedia.org/wiki/Maildir) e-mail format is a common way of storing email messages on a file system, rather than in a database. Each message is assigned a file with a unique name, and each mail folder is a file system directory containing these files.
+
+A Maildir directory (often named Maildir) usually has three subdirectories named `tmp`, `new`, and `cur`.
+
+- The `tmp` subdirectory temporarily stores e-mail messages that are in the process of being delivered. This subdirectory may also store other kinds of temporary files.
+- The `new` subdirectory stores messages that have been delivered, but have not yet been seen by any mail application.
+- The `cur` subdirectory stores messages that have already been seen by mail applications.
+# References
+- [Wikipedia](https://en.wikipedia.org/wiki/Maildir)
diff --git a/docs/mbsync.md b/docs/mbsync.md
index 34f9686e5d9..129515596fa 100644
--- a/docs/mbsync.md
+++ b/docs/mbsync.md
@@ -34,7 +34,7 @@ you have your password stored in `pass` under `mail/example`.
 MaildirStore example-local
 Path ~/mail/example/
-Inbox ~/mail/example/Inbox
+Inbox ~/mail/example/INBOX
 
 Channel example
 Master :example-remote:
@@ -52,6 +52,32 @@ You need to manually create the directories where you store the emails.
 mkdir -p ~/mail/example
 ```
 
+# Troubleshooting
+
+## [My emails are not being deleted on the source IMAP server](https://isync-devel.narkive.com/lC9HJC40/how-do-i-get-mbsync-to-remove-mail-on-the-imap-server)
+
+That's the default behavior of `mbsync`; if you want it to actually delete the emails on the source you need to add:
+
+```
+Expunge Both
+```
+
+Under your channel (close to `Sync All`, `Create Both`).
+
+## [mbsync error: UID is beyond highest assigned UID](https://stackoverflow.com/questions/39513469/mbsync-error-uid-is-beyond-highest-assigned-uid)
+
+If during the sync you receive the following error:
+
+```
+mbsync error: UID is 3 beyond highest assigned UID 1
+```
+
+Go to the place where `mbsync` is storing the emails and find the file that is giving the error: you need to find the files that contain `U=3`; imagine it's something like `1568901502.26338_1.hostname,U=3:2,S`.
You can strip everything from the `,U=` onwards from that filename and resync, and it should be fine, e.g.:
+
+```bash
+mv '1568901502.26338_1.hostname,U=3:2,S' '1568901502.26338_1.hostname'
+```
+
 # References
 
 * [Homepage](https://isync.sourceforge.io/mbsync.html)
diff --git a/docs/mirador.md b/docs/mirador.md
new file mode 100644
index 00000000000..d5b1d17ef41
--- /dev/null
+++ b/docs/mirador.md
@@ -0,0 +1,48 @@
+
+DEPRECATED: As of 2024-11-15 the tool has many errors ([1](https://github.com/pimalaya/mirador/issues/4), [2](https://github.com/pimalaya/mirador/issues/3)), few stars (4) and few commits (8). Use [watchdog](watchdog_python.md) instead and build your own solution.
+
+[mirador](https://github.com/pimalaya/mirador) is a CLI to watch mailbox changes made by the maintainer of [himalaya](himalaya.md).
+
+Features:
+
+- Watches and executes actions on mailbox changes
+- Interactive configuration via **wizard** (requires `wizard` feature)
+- Supported events: **on message added**.
+- Supported actions: **send system notification**, **execute shell command**.
+- Supports **IMAP** mailboxes (requires `imap` feature)
+- Supports **Maildir** folders (requires `maildir` feature)
+- Supports global system **keyring** to manage secrets (requires `keyring` feature)
+- Supports **OAuth 2.0** (requires `oauth2` feature)
+
+*Mirador CLI is written in [Rust](https://www.rust-lang.org/), and relies on [cargo features](https://doc.rust-lang.org/cargo/reference/features.html) to enable or disable functionalities. Default features can be found in the `features` section of the [`Cargo.toml`](https://github.com/pimalaya/mirador/blob/master/Cargo.toml#L18).*
+
+# [Installation](https://github.com/pimalaya/mirador)
+
+*The `v1.0.0` is currently being tested on the `master` branch, and is the preferred version to use. Previous versions (including GitHub beta releases and versions published in repositories) are not recommended.*
+
+## Cargo (git)
+
+Mirador CLI `v1.0.0` can be installed with [cargo](https://doc.rust-lang.org/cargo/):
+
+```bash
+$ cargo install --frozen --force --git https://github.com/pimalaya/mirador.git
+```
+
+## Pre-built binary
+
+Mirador CLI `v1.0.0` can also be installed with a pre-built binary. Find the latest [`pre-release`](https://github.com/pimalaya/mirador/actions/workflows/pre-release.yml) GitHub workflow and look for the *Artifacts* section. You should find a pre-built binary matching your OS.
+
+# Configuration
+
+Just run `mirador`; the wizard will help you configure your default account.
+
+You can also manually edit your own configuration, from scratch:
+
+- Copy the content of the documented [`./config.sample.toml`](https://github.com/pimalaya/mirador/blob/master/config.sample.toml)
+- Paste it in a new file `~/.config/mirador/config.toml`
+- Edit, then comment or uncomment the options you want
+
+# References
+- [Source](https://github.com/pimalaya/mirador)
diff --git a/docs/orgmode.md b/docs/orgmode.md
index c07ef25e4f4..0cd97532063 100644
--- a/docs/orgmode.md
+++ b/docs/orgmode.md
@@ -1054,7 +1054,6 @@ vim.api.nvim_create_autocmd("FileType", {
   end,
 })
 ```
-
 ## [Capture](https://orgmode.org/manual/Capture.html)
 
 Capture lets you quickly store notes with little interruption of your work flow.
It works the next way:
diff --git a/docs/parkour.md b/docs/parkour.md
index f33d8955366..644a8d358b9 100644
--- a/docs/parkour.md
+++ b/docs/parkour.md
@@ -7,10 +7,78 @@
 
 - [Strengthen your knees](#strengthen-your-knees)
 - [Know the most frequent injuries](#avoid-frequent-injuries)
 
-## Warming up
+## [Warming up](https://www.youtube.com/watch?app=desktop&v=qGSc-EUlyrQ)
+
+Never do static stretches if you're cold; it's better to do dynamic stretches.
+
+### Take the joints through rotations
+
+- Head:
+  - Nod 10 times
+  - Say no 10 times
+  - Ear to shoulder 10 times
+  - Circles 10 times in each direction
+
+- Shoulders:
+  - Circles back 10 times
+  - Circles forward 10 times
+
+- Elbows:
+  - Circles 10 times in each direction
+
+- Wrists:
+  - Circles 10 times in each direction
+
+- Chest:
+  - Chest out/in 10 times
+  - Chest one side to the other 10 times
+  - Chest in circles
+
+- Hips:
+  - Circles 10 times in each direction
+  - Figure eight 10 times in each direction
+
+- Knees:
+  - Circular rotations, 10 in each direction, with feet and knees together
+  - 10 ups and downs with knees together
+  - Circular rotations, 10 in each direction, with feet waist-width apart
+
+- Ankles:
+  - Circular rotations, 10 in each direction
+
+### Light exercises
+
+- 10 steps forward walking on your toes, 10 back
+- 10 steps forward walking on your toes with feet rotated outwards, 10 back
+- 10 steps forward walking on your toes with feet rotated inwards, 10 back
+- 10 steps forward walking on your heels with feet rotated outwards, 10 back
+- 10 steps forward walking on your heels with feet rotated inwards, 10 back
+
+- 2 x 10 side steps: carry the leg up (from out to in) while you turn 180, and keep moving in that direction
+- 2 x 10 front steps: carry the leg up (from in to out) while you turn 45, then side step, and keep moving in that direction
+- 10 light skips on one leg: while walking forward lift your knee and arms and do a slight jump
+- 10 steps with high knees
+- 10 steps with heel to butt
+- 10 side shuffles (like basketball defense)
+
+- 5 lunges forward, 5 backwards
+
+- 10 roll ups and downs from a standing position
+- 5 push-ups
+- 10 rotations from the push-up position in each direction with straight arms
+- 5 push-ups
+- 10 rotations from the push-up position in each direction with shoulders at ankle level
+- 3 downward monkeys: from pyramid do a low push-up and go to cobra, then a push-up
+
+- 10 steps forward walking on all fours
+
+### Strengthen your knees
+
+Follow [these steps](#strengthen-your-knees).
+
+### Transit to the parkour place
+
+Go by bike, skate or jogging to the parkour place.
 
-- Do [this routine](https://www.youtube.com/watch?app=desktop&v=qGSc-EUlyrQ) before your parkour session
-- Go by bike, skate, jogging to the parkour place
 ## [Stretching](https://www.youtube.com/playlist?list=PLtb634PJ9wDYDQh2koMeLkUBmx1jTiMbs)
 
 Stretching is necessary to maintain your flexibility and movement to prevent injuries. Keep in mind though that [static stretches doesn't help when warming up it does work after your workout](https://www.youtube.com/watch?v=DTZvg4yy-e4)
@@ -37,7 +105,7 @@ The hip flexors are the iliacus, psoas major and psoas minor, they are attached
 
 Not stretching your hip flexors may result in: lower back pain, knee pain, it may pull your pelvis out of position and that can ricochet all the way up to your body.
-- From a lunges position (keep the knee just above the heel or behind it) +- From a lunge position (keep the knee just above the heel or behind it) - Spine up to the ceiling - Then tuck under your pelvis while keeping your back straight (bum goes slightly forward) - then lean slightly forward. With this movement you're pulling up the part above the hip, and pulling down the part below the hip. @@ -131,7 +199,7 @@ You can stretch by: ### Glutes stretch -- From lying down face up on tabletop +- From lying on the floor face up on tabletop - Cross one leg over the other - Grab the other leg with both hands and bring it towards you while you push the other with one of the elbows @@ -178,6 +246,7 @@ Another good stretch is the [butterfly](https://yewtu.be/watch?v=DWD8gY04JPo) - From sitting position join your feet - Bring the knees down - Bring your back straight forward + ## [Strengthen your knees](https://www.youtube.com/watch?v=Tja6i5MysT8) Make sure you have the strength to do 3 sets of 20 squats or drops at each level before moving to the next step: diff --git a/docs/python_inotify.md b/docs/python_inotify.md index 307766c8264..a671c445421 100644 --- a/docs/python_inotify.md +++ b/docs/python_inotify.md @@ -1,3 +1,5 @@ +DEPRECATED: As of 2024-11-15 it's been 4 years since the last commit. [watchdog](watchdog_python.md) has 6.6k stars and last commit was done 2 days ago. + [inotify](https://pypi.org/project/inotify/) is a python library that acts as a bridge to the `inotify` linux kernel which allows you to register one or more directories for watching, and to simply block and wait for notification events. This is obviously far more efficient than polling one or more directories to determine if anything has changed. # Installation @@ -67,4 +69,3 @@ The wait will be done in the `list(events)` line # References - [Source](https://github.com/dsoprea/PyInotify) -[![](not-by-ai.svg){: .center}](https://notbyai.fyi) diff --git a/docs/python_logging.md b/docs/python_logging.md index 88f316da29a..4b48cc16d9b 100644 --- a/docs/python_logging.md +++ b/docs/python_logging.md @@ -126,3 +126,75 @@ if __name__ == "__main__": logging.warning("This is a warning message") logging.error("This is an error message") ``` + +## Configure the logging module to log directly to systemd's journal + +To use `systemd.journal` in Python, you need to install the `systemd-python` package. This package provides bindings for systemd functionality. 
Install it using pip:
+
+```bash
+pip install systemd-python
+```
+
+Below is an example Python script that configures logging to send messages to the systemd journal:
+
+```python
+import logging
+
+from systemd.journal import JournalHandler
+
+# Set up logging to use the systemd journal
+logger = logging.getLogger('my_app')
+logger.setLevel(logging.DEBUG)  # Set the logging level
+
+# Create a handler for the systemd journal
+journal_handler = JournalHandler()
+journal_handler.setLevel(logging.DEBUG)  # Adjust logging level if needed
+# Add extra information to ensure the correct identifier is used in journalctl
+journal_handler.addFilter(
+    lambda record: setattr(record, "SYSLOG_IDENTIFIER", "my_app") or True
+)
+
+# Optional: Add a formatter to include additional info in the log entries
+formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+journal_handler.setFormatter(formatter)
+
+# Add the handler to the logger
+logger.addHandler(journal_handler)
+
+# Example usage
+logger.info("This is an info message.")
+logger.error("This is an error message.")
+logger.debug("Debugging information.")
+```
+
+When you run the script, the log messages will be sent to the systemd journal. You can view them using the `journalctl` command:
+
+```bash
+sudo journalctl -f
+```
+
+This command will show the latest log entries in real time. You can filter by your application name using:
+
+```bash
+sudo journalctl -f -t my_app
+```
+
+Replace `my_app` with the identifier you used.
+
+### Additional tips
+
+- **Tagging**: You can add a custom identifier for your logs by setting the `SYSLOG_IDENTIFIER` record attribute as shown above. This will allow you to filter logs using `journalctl -t your_tag`.
+- **Log levels**: You can control the verbosity of the logs by setting different levels (e.g., `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`).
+
+### Example output in the systemd journal
+
+You should see entries similar to the following in the systemd journal:
+
+```
+Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,123 - my_app - INFO - This is an info message.
+Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,124 - my_app - ERROR - This is an error message.
+Nov 15 12:45:30 my_hostname my_app[12345]: 2024-11-15 12:45:30,125 - my_app - DEBUG - Debugging information.
+```
+
+This approach ensures that your logs are accessible through standard systemd tools and are consistent with other system logs.
diff --git a/docs/roadmap_adjustment.md b/docs/roadmap_adjustment.md
index f3a88fa2c1c..038a27631ea 100644
--- a/docs/roadmap_adjustment.md
+++ b/docs/roadmap_adjustment.md
@@ -358,13 +358,14 @@ There are two tools that will help to follow the day plan:
 
 ### Control your inbox
 
-The [Inbox](roadmap_tools.md#inbox) is a nasty daemon that loves to get out of control. You need to develop your inbox cleaning skills and proceses up to the point that you're sure that the important stuff tracked where it should be tracked. So far aiming to have a element inbox is unrealistic though, at least for me.
+The [Inbox](roadmap_tools.md#inbox) is a nasty daemon that loves to get out of control. You need to develop your inbox cleaning skills and processes up to the point that you're sure that the important stuff is tracked where it should be tracked. So far aiming to have a 0 element inbox is unrealistic though, at least for me.
## Survive the week

At this level you're able to open your myopic eyes, so you start to guess what life throws at you. This may be enough to be able to gracefully handle some of the small stuff. The fast ones will still hit you though as you still don't have too much time or definition to react. This adjustment is whatever you need to do to get your head empty again and get oriented for the next 9 days. It's split in the next phases:

- [Week plan](#week-plan)
+
### Week plan

No matter how good our intentions or system may be, you're going to take in more opportunities than you can handle. The more efficient you become, the more ground you'll try to grasp. You're going to have to learn to say no faster, and to more things, in order to stay afloat and comfortable. Having some dedicated time in the week to at least get up to the project level of thinking goes a long way towards making that easier.
@@ -532,6 +533,76 @@ Finally pat yourself in the shoulder as you've finished the review ^^.

Objectives are:

- Identify deadlines.
+- Define the month objectives according to the trimester plan and the insights gathered in the past month review.
+- Make your backlog and todo list match the month objectives.
+- Define the topics to learn.
+- Define the habits to incorporate.
+- Define the checks you want to do at the end of the month.
+
+It's interesting to do the planning on meaningful days such as the first one of the month. Usually we don't have enough flexibility in our life to do it exactly that day, so schedule it as close as you can to that date. It's a good idea to do both the review and the planning on the same day.
+
+We'll divide the planning process into these phases:
+
+- Prepare
+- Clarify your state
+- Decide the month objectives
+
+#### Prepare
+
+It's important that you prepare your environment for the planning. You need to be present and fully focused on the process itself. To do so you can:
+
+- Make sure you don't get interrupted:
+    - Check your action manager tools to make sure that you don't have anything urgent to address in the next hour.
+    - Disable all notifications.
+- Set your analysis environment:
+    - Put on the music that helps you get *in the zone*.
+    - Get all the things you may need for the review:
+        - The checklist that defines the process of your planning (this document in my case).
+        - Somewhere to write down the insights.
+        - Your action manager system.
+        - Your habit manager system.
+        - Your *Objective list*.
+        - Your *Thinking list*.
+        - Your *Reading list*.
+    - Remove from your environment everything else that may distract you.
+
+#### Clarify your state
+
+To be able to make a good decision on your month's path you need to sort out your current state. To do so:
+
+- Clean your todo: review each todo element and decide whether it should still be there. If it should and it belongs to a month objective, add it there. If it doesn't need to be in the todo, refile it.
+- Clean your agenda and get a feeling of how busy the month is:
+    - Open the orgmode month view agenda and clean it.
+    - Read the rest of your calendars.
+
+#### Decide the month objectives
+
+Create the month objectives in your roadmap file after addressing each element of:
+
+- The trimester objectives of your roadmap.
+    - You can add notes on the trimester objectives.
+- The `planning_box.org` file.
+
+Then reorder the objectives in order of priority. Try to have at least one objective that improves your life.
+
+#### Decide the next steps
+
+- For each of your month and trimester objectives:
+    - Decide whether it makes sense to address it this month. If not, mark it as inactive.
+    - Create a clear plan of action for this month on that objective.
+    - Reorder the projects as needed.
+    - Mark as INACTIVE the ones that you don't feel need to be focused on this month.
+- Refine the roadmap of each of the selected areas (change this to the trimestral planning).
+- Define the todo of each device (mobile, tablet, laptop).
+- Select at least one coding project in case you enter programming mode.
+- Clean your mobile browser tabs.
+- Tweak your *things to think about list*.
+- Tweak your *investigations list*.
+- Tweak your *reading list*.
+- Tweak your *learning list*.
+- Tweak your *habit manager system*.
+
## Dream about the trimester

Now that we know how to read and react to the signals our inner self sends we are in a better position to align our roadmap with what we understand for a fulfilling life. We'll get into the philosophical ground of discovering life's meaning. I wanted to say answer the question, but I'm increasingly convinced that there is no answer and that the best we can aim to is to leave our thoughts guide us without any certainty.
diff --git a/docs/rocketchat.md b/docs/rocketchat.md
index 7cfd25f7ece..d39b1590243 100644
--- a/docs/rocketchat.md
+++ b/docs/rocketchat.md
@@ -144,5 +144,6 @@ If you want to do more complex things uncomment the part of the attachments.
# References

- [Code]()
+- [End of life for the versions](https://docs.rocket.chat/docs/version-durability)

[![](not-by-ai.svg){: .center}](https://notbyai.fyi)
diff --git a/docs/vdirsyncer.md b/docs/vdirsyncer.md
index 86ad5dbcd1b..a8a50ab9750 100644
--- a/docs/vdirsyncer.md
+++ b/docs/vdirsyncer.md
@@ -70,7 +70,7 @@ accounts.

### Syncing a calendar

-To sync to a nextcloud calendar:
+#### Sync to a nextcloud calendar

```ini
[pair my_calendars]
@@ -100,6 +100,25 @@ verify = true
Read the [SSL and certificate validation](#ssl-and-certificate-validation) section to see how to create the `verify_fingerprint`.

+#### Sync to a read-only ics
+
+```ini
+[pair calendar_name]
+a = "calendar_name_local"
+b = "calendar_name_remote"
+collections = null
+conflict_resolution = ["command", "vimdiff"]
+metadata = ["displayname", "color"]
+
+[storage calendar_name_local]
+type = "filesystem"
+path = "~/.calendars/calendar_name"
+fileext = ".ics"
+
+[storage calendar_name_remote]
+type = "http"
+url = "https://example.org/calendar.ics"
+```
+
### Syncing an address book

The following example synchronizes ownCloud’s addressbooks to `~/.contacts/`:
@@ -350,6 +369,10 @@ If you create a git repository where you have your calendars you can do a
`git diff` and see the files that have changed. If you do a commit after each
sync you can have all the history.

+## Automatically sync calendars
+
+You can use the script shown in the [automatically sync emails](email_automation.md#script-to-sync-emails-and-calendars-with-different-frequencies) section.
+
# Troubleshooting

## [Database is locked](https://github.com/pimutils/vdirsyncer/issues/720)
diff --git a/docs/velero.md b/docs/velero.md
index 241b67ca5bd..3557a5d2c3f 100644
--- a/docs/velero.md
+++ b/docs/velero.md
@@ -319,7 +319,6 @@ If you want to use an EBS snapshot that is not managed by `velero` you need to:

Keep in mind that if you are trying to restore a backup created by an EBS lifecycle hook you'll receive an error when restoring because these snapshots have a tag that starts with `aws:` which is reserved for AWS only.
The solution is to copy the snapshot into a new one, assign a tag, for example `Name`, and use that snapshot instead. If you don't define any tag you'll get another error :/.
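
A hypothetical AWS CLI sketch of that copy (the region, snapshot ID, and tag value below are placeholders, not values from the original setup):

```bash
# Copy the lifecycle-created snapshot into a new snapshot that only carries
# your own tags, so velero can restore from it.
aws ec2 copy-snapshot \
    --source-region eu-west-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "Copy for velero restore" \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Name,Value=velero-restore}]'
```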
-
# [Overview of Velero](https://velero.io/docs/main/how-velero-works/)

Each Velero operation – on-demand backup, scheduled backup, restore – is a custom resource, defined with a Kubernetes Custom Resource Definition (CRD) and stored in etcd. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations.
diff --git a/docs/watchdog_python.md b/docs/watchdog_python.md
new file mode 100644
index 00000000000..0cf43919b24
--- /dev/null
+++ b/docs/watchdog_python.md
@@ -0,0 +1,45 @@
+[watchdog](https://github.com/gorakhargosh/watchdog?tab=readme-ov-file) is a Python library and a set of shell utilities to monitor filesystem events.
+
+Cons:
+
+- The [docs](https://python-watchdog.readthedocs.io/en/stable/api.html) suck.
+
+# Installation
+
+```bash
+pip install watchdog
+```
+
+# Usage
+
+A simple program that watches the current directory recursively and prints the events generated:
+
+```python
+import time
+
+from watchdog.events import FileSystemEvent, FileSystemEventHandler
+from watchdog.observers import Observer
+
+
+class MyEventHandler(FileSystemEventHandler):
+    def on_any_event(self, event: FileSystemEvent) -> None:
+        # Print every filesystem event (created, modified, moved, deleted)
+        print(event)
+
+
+event_handler = MyEventHandler()
+observer = Observer()
+observer.schedule(event_handler, ".", recursive=True)
+observer.start()
+try:
+    while True:
+        time.sleep(1)
+finally:
+    observer.stop()
+    observer.join()
+```
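+
+If you only care about some files, `PatternMatchingEventHandler` filters events by glob patterns before dispatching them. A minimal sketch (the `*.md` pattern and the handler's behaviour are my own example, not taken from the project's docs):
+
+```python
+import time
+
+from watchdog.events import FileSystemEvent, PatternMatchingEventHandler
+from watchdog.observers import Observer
+
+
+class MarkdownEventHandler(PatternMatchingEventHandler):
+    """React only to markdown files, skipping directory events."""
+
+    def on_modified(self, event: FileSystemEvent) -> None:
+        print(f"Markdown file changed: {event.src_path}")
+
+
+event_handler = MarkdownEventHandler(patterns=["*.md"], ignore_directories=True)
+observer = Observer()
+observer.schedule(event_handler, ".", recursive=True)
+observer.start()
+try:
+    while True:
+        time.sleep(1)
+finally:
+    observer.stop()
+    observer.join()
+```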
+
+# References
+- [Source](https://github.com/gorakhargosh/watchdog?tab=readme-ov-file)
+- [Docs](https://python-watchdog.readthedocs.io)
diff --git a/docs/zfs_exporter.md b/docs/zfs_exporter.md
index 8160fbb6586..d8b701f979c 100644
--- a/docs/zfs_exporter.md
+++ b/docs/zfs_exporter.md
@@ -284,4 +284,3 @@ Sometimes you don't mind if the size of the data saved in the filesystems doesn'
# References

- [Source](https://github.com/pdf/zfs_exporter)
-[![](not-by-ai.svg){: .center}](https://notbyai.fyi)
diff --git a/mkdocs.yml b/mkdocs.yml
index 830d1557318..c5ecd0c2e6c 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -41,7 +41,6 @@ nav:
      - Action Management: action_management.md
      - Roadmap Adjustment:
          - roadmap_adjustment.md
-          - Life planning: life_planning.md
      - Life review: life_review.md
      - Strategy: strategy.md
      - Systems Thinking: systems_thinking.md
@@ -53,41 +52,55 @@ nav:
      - Orgzly: orgzly.md
      - OpenProject: openproject.md
      - Habit management: habit_management.md
-      - Interruption Management:
+      - Interruption management:
          - interruption_management.md
          - Interruption Management Analysis:
              - Work Interruption Analysis: work_interruption_analysis.md
              - Personal Interruption Analysis: personal_interruption_analysis.md
-      - Week Management: week_management.md
-      - Calendar Management:
+      - Week management: week_management.md
+      - Calendar management:
          - calendar_management.md
+          - Calendar automation:
+              - vdirsyncer: vdirsyncer.md
+          - Calendar clients:
+              - Khal: khal.md
          - Gancio: gancio.md
-          - vdirsyncer: vdirsyncer.md
-          - Khal: khal.md
      - Time management theories:
          - Getting Things Done: gtd.md
-      - Life Chores Management:
-          - Trip Management:
-              - Route Management: route_management.md
-              - Map Management: map_management.md
-          - Food Management: food_management.md
-          - Stock Management:
+      - Life chores management:
+          - Trip management:
+              - Route management: route_management.md
+              - Map management: map_management.md
+          - Food management: food_management.md
+          - Stock management:
              - Grocy: grocy_management.md
-          - Money Management:
+          - Money management:
              - money_management.md
              - beancount:
                  - beancount.md
                  - bean-sql: bean_sql.md
                  - Fava Dashboards: fava_dashboards.md
-          - Tools Management:
+          - Tools management:
              - tool_management.md
-          - Email Management:
+          - Email management:
              - email_management.md
-              - Email Automation: email_automation.md
-              - afew: afew.md
-              - alot: alot.md
-              - mbsync: mbsync.md
-              - notmuch: notmuch.md
+              - Email automation:
+                  - email_automation.md
+                  - Email automation tools:
+                      - mbsync: mbsync.md
+                      - mirador: mirador.md
+                      - afew: afew.md
+                      - notmuch: notmuch.md
+                  - Email automation libraries:
+                      - mailbox: mailbox.md
+                      - IMAP:
+                          - IMAP library comparison: python_imap.md
+                          - imap-tools: imap_tools.md
+              - Email clients:
+                  - himalaya: himalaya.md
+                  - alot: alot.md
+              - Email protocols:
+                  - Maildir: maildir.md
      - Instant Messages Management:
          - instant_messages_management.md
          - XMPP/Jabber:
@@ -201,6 +214,7 @@ nav:
        - Libraries:
            - Alembic: coding/python/alembic.md
            - asyncio: asyncio.md
+            - aiocron: aiocron.md
            - Apprise: apprise.md
            - aiohttp: aiohttp.md
            - BeautifulSoup: beautifulsoup.md
@@ -221,9 +235,7 @@ nav:
            - Goodconf: goodconf.md
            - ICS: ics.md
            - Inotify: python_inotify.md
-            - IMAP:
-                - IMAP library comparison: python_imap.md
-                - imap-tools: imap_tools.md
+            - watchdog: watchdog_python.md
            - Jinja2: python_jinja2.md
            - Maison: maison.md
            - mkdocstrings: coding/python/mkdocstrings.md