
Commit

format
ntk148v committed Dec 25, 2024
1 parent 8f44e4c commit 7cbe6f9
Showing 8 changed files with 51 additions and 51 deletions.
12 changes: 6 additions & 6 deletions content/posts/bgp-ecmp-load-balancing.md
@@ -31,7 +31,7 @@ References:

## Configure

### ISPRouter

Follow the [Fortigate documentation](https://docs.fortinet.com/document/fortigate/7.0.3) for the basic commands.

@@ -82,7 +82,7 @@ config firewall policy
end
```

### EdgeRouter

```bash
config system interface
@@ -147,7 +147,7 @@ config system settings
end
```

### Switch

- Set up access mode on the server-facing ports.

@@ -160,7 +160,7 @@ exit
copy running-config startup-config
```

### Servers

- Configure `10.13.13.1` on the local loopback interface.
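
A sketch of that step (assuming the `iproute2` tooling; the exact commands in the elided block may differ):

```bash
# Add the shared service address to the loopback interface (not persistent across reboots).
sudo ip addr add 10.13.13.1/32 dev lo
ip addr show dev lo   # verify
```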

@@ -341,7 +341,7 @@ lb1$ sudo service exabgp status

Let's make sure everything works as expected.

### Scenario 1: Both servers are OK

- Client check.

@@ -443,7 +443,7 @@ Origin codes: i - IGP, e - EGP, ? - incomplete
Total number of prefixes 1
```

### Scenario 2: One lb is down

- Stop nginx on lb2 (ExaBGP only; Bird may require a complete shutdown) or stop lb2 physically, for example:
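
A sketch of that step, in the same style as the other commands in this post (assuming the nginx service name):

```bash
lb2$ sudo service nginx stop
lb2$ sudo service nginx status
```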

10 changes: 5 additions & 5 deletions content/posts/docker-iptables.md
@@ -29,7 +29,7 @@ This is a basic packet flow from outside:

Alright, before we make it right, let's go over some common mistakes.

### Modify Docker generated rules manually

Docker generates iptables rules and adds them to the `DOCKER` chain. Some users manipulate this chain manually in order to block connections.

@@ -39,7 +39,7 @@ _Please don't do it_. Yes, you are able to do it, there is nothing prevent you t
Right: Do not manipulate Docker rules manually.
{{< /quote >}}

### Insert your rules in the wrong chain

iptables basics: iptables is divided into three levels: tables, chains and rules. We only use the filter table, which contains:

@@ -53,7 +53,7 @@ Commonly, to block connection from external, put reject rules in INPUT chain. Bu
Right: Add rules that load before Docker's rules by putting them in the DOCKER-USER chain.
{{< /quote >}}
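
As a hedged sketch, a rule like the following drops forwarded traffic from an external network to your containers while leaving Docker's own rules untouched; the interface name and source network are placeholders for your environment:

```bash
# Insert at the top of DOCKER-USER so it is evaluated before Docker's generated rules;
# rules placed here are not flushed or rewritten by the Docker daemon.
iptables -I DOCKER-USER -i eth0 -s 203.0.113.0/24 -j DROP
```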

### Modify and persist iptables rules the wrong way

You modify and persist iptables rules like this:

@@ -70,7 +70,7 @@ Right: Do not save, flush then restore all rules. Check the following solution.

## Do it right!

### Overview

I have created a repository for this, which is highly inspired by [systemd-service-iptables](https://github.com/boTux-fr/systemd-service-iptables): https://github.com/ntk148v/systemd-iptables

@@ -234,7 +234,7 @@ I have create a repository for this, which is highly inspired by [systemd-servic
COMMIT
```

### Getting started

- Of course, you need iptables and systemd installed.
- On Linux, run as root:
14 changes: 7 additions & 7 deletions content/posts/getting-started-tiling-wm-part-1-i3.md
@@ -17,7 +17,7 @@ I love customizing desktop. I make changes in my desktop everyday, make it look

First of all, you have to know the basic concepts.

### Desktop Environment vs. Window Manager

We'll begin by showing how the Linux graphical desktop is layered. There are basically 3 layers that can be included in the Linux desktop:

@@ -40,7 +40,7 @@ A[Desktop Environment] --> B[Window Manager];
B --> C[X Windows];
{{< /mermaid >}}

### Types of Window Manager

- **Stack window manager**:
  - A stack window manager renders windows one by one onto the screen at specific coordinates. If one window's area overlaps another, then the window "on top" overwrites part of the other's visible appearance. This results in the appearance familiar to many users in which windows act a little like pieces of paper on a desktop, which can be moved around and allowed to overlap.
@@ -65,7 +65,7 @@ B --> C[X Windows];

## Minimal I3 setup

### Operating System

- Ubuntu 20.04 (Desktop/Server): download the [installer](https://ubuntu.com/download/) and install Ubuntu by walking through the installer.
- If you choose Ubuntu Server, you'll need a display server so let's install X Window System ([Xorg](https://wiki.archlinux.org/index.php/Xorg)).
@@ -76,7 +76,7 @@ sudo apt install xinit
# You can override it by creating and modifying ~/.xinitrc
```
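
A minimal `~/.xinitrc` might look like this (a sketch assuming you want startx to launch i3):

```bash
#!/bin/sh
# The last command becomes the session process; when it exits, the X session ends.
exec i3
```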

### Install I3

- You can install i3 from the [Ubuntu repository](https://packages.ubuntu.com/search?keywords=i3). It includes the window manager, a screen locker and two programs which write a status line to i3bar through stdout.
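
For the repository route, the installation is roughly as follows (package name as published in the Ubuntu archive; the exact commands in the elided snippet below may differ):

```bash
sudo apt update
sudo apt install i3   # pulls in the window manager plus the tools mentioned above
```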

@@ -124,14 +124,14 @@ sudo ninja install

This post doesn't aim to cover everything about i3; see the [official documentation](https://i3wm.org/docs/userguide.html) for more information.

### Keybindings

- In i3, commands are invoked with a modifier key, referred to as `$mod`. This is `Alt (Mod1)` by default, with `Super (Mod4)` being a popular alternative. Super is the key usually represented on a keyboard as a Windows icon, or on an Apple keyboard as a Command key.
- See [i3 reference card](https://i3wm.org/docs/refcard.html) and [Using i3](https://i3wm.org/docs/userguide.html#_using_i3) for defaults.
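
As an illustrative sketch (not taken from this post), the relevant lines in `~/.config/i3/config` look like this when Super is chosen as the modifier:

```
# Use Super (Mod4) instead of the default Alt (Mod1).
set $mod Mod4
# Two of the default bindings, expressed with $mod:
bindsym $mod+Return exec i3-sensible-terminal
bindsym $mod+d exec dmenu_run
```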

{{< figure class="figure" src="/photos/getting-started-tiling-wm-part-1/i3-refcard.png" >}}

### Workspace, Container and Window

{{< mermaid >}}

@@ -165,7 +165,7 @@ style Container3 fill:#6fa8dc;
- A window, where an application runs, can be created in a container. It will automatically position itself and take focus, depending on the container's layout. You can move windows around or even change the layout of the container using keystrokes.
- There are two different sorts of windows: **fixed windows** (by default) and **floating windows**.

### Application launcher

- i3 uses [dmenu](https://wiki.archlinux.org/title/Dmenu) as an application launcher, which is bound by default to `$mod+d`.
- [rofi]({{< ref "/posts/getting-started-tiling-wm-part-2-rofi.md" >}}) is a popular dmenu replacement (and more) that can list desktop entries.
10 changes: 5 additions & 5 deletions content/posts/linux-swap-space-note.md
@@ -14,15 +14,15 @@ Swap file systems support virtual memory, data is written to a swap file system

## Swap partition size

### Old rule of thumb

```
swap: 2 * the-amount-of-RAM
```

So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the fact that RAM sizes were typically quite small at the time. Nowadays, RAM has become a `cheap` & `affordable` commodity, so the 2x rule is outdated.

### What is the right amount of swap space?

Choosing the correct swap size is important. Too much swap space can hide memory leaks and leaves storage allocated but idle, which can affect overall system performance.
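
Before picking a size, it helps to check what you currently have, for example:

```bash
free -h         # current RAM and swap usage
swapon --show   # swap devices/files in use and their sizes
```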

@@ -41,11 +41,11 @@ swap <= 10% * total-size-hard-drives && swap <= 128GB (if hibernation is allowed

## Common misconceptions & gotchas

### Increasing swap size would increase performance

- No, it wouldn't. Remember that the slowest part of memory is your hard disk - _swap_ just provides the ability to use more memory by swapping some pages out to the disk, which is **slow** compared to RAM operations. Swap can also [increase disk I/O & CPU load](https://askubuntu.com/questions/367881/does-swap-file-usage-increase-disk-i-o-and-cpu-load). This is a tradeoff: without swap, the OOM killer may get you, causing downtime; in a real-life scenario it is often better for the application to be a bit slow than to go down completely.

### Swappiness

- The Linux kernel tunable parameter `vm.swappiness` (/proc/sys/vm/swappiness) can be used to define how aggressively memory pages are swapped to disk.
- The default value is `60`. The lower the value, the less swapping is used & the more memory pages are kept in physical memory.
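
For example, to inspect and tune it (the value 10 below is only an illustration, not a recommendation from this post):

```bash
cat /proc/sys/vm/swappiness    # current value
sudo sysctl vm.swappiness=10   # change at runtime
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist across reboots
```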
@@ -70,7 +70,7 @@ swap <= 10% * total-size-hard-drives && swap <= 128GB (if hibernation is allowed

- On SSDs, swapping out anonymous pages and reclaiming file pages are essentially equivalent in terms of performance/latency. On older spinning disks, swap reads are slower due to random reads, so a lower vm.swappiness setting makes sense there.

### Using swap as emergency memory

- Swap is not generally about getting emergency memory; it's about making memory reclamation egalitarian and efficient. In fact, using it as "emergency memory" is generally actively harmful.

14 changes: 7 additions & 7 deletions content/posts/openstack-autoscaling-new-approach.md
@@ -14,7 +14,7 @@ This guide describes how to automatically scale out your Compute instances in re

Let's talk about the standard OpenStack Autoscaling approach before moving on to the new one.

### Main components

- Orchestration: The core component providing automatic scaling is Orchestration (heat). Orchestration allows you to define rules using human-readable YAML templates. These rules are applied to evaluate system load based on Telemetry data to find out whether more instances need to be added to the stack. Once the load has dropped, Orchestration can automatically remove the unused instances again.

@@ -23,11 +23,11 @@ Let's talk about the standard OpenStack Autoscaling approach before goes to the
- Gnocchi: provides time-series resource indexing and a metric storage service which enables users to capture OpenStack resources and the metrics associated with them.
- Aodh: enables the ability to trigger actions based on defined rules against sample or event data collected by Ceilometer.

### Autoscaling process

For more details, you can check the [IBM help documentation](https://ibm-blue-box-help.github.io/help-documentation/heat/autoscaling-with-heat/).

### Drawbacks

- Ceilometer and Aodh are lacking contributors. The Ceilometer API was [deprecated](https://review.opendev.org/#/c/512286/), and transforms and pipelines are in [the same state](https://review.opendev.org/#/c/560854/), which means cpu_util will be unusable soon. In the commit message, @sileht - a Ceilometer core reviewer - wrote that "Also backend like Gnocchi offers a better alternative to compute them". But Aodh still [relies on the deprecated Gnocchi aggregation API](https://github.com/openstack/aodh/blob/master/aodh/evaluator/gnocchi.py#L140), which doesn't support `rate:mean`. For more details, you can follow the [issue I opened before](https://github.com/gnocchixyz/gnocchi/issues/999). To be honest, I gave up on it - three tightly coupled projects, where one change might cause a cascade and break the whole stack; how can I handle that?
- Aodh has its own formula to define rules based on Ceilometer metrics (stored in Gnocchi), but it is sometimes incorrect, causing the wrong scaling action.
@@ -36,7 +36,7 @@ For more details, you could check [IBM help documentation](https://ibm-blue-box-

## The new approach with Faythe

### The idea

Actually, this isn't a completely new approach; it still leverages Orchestration (heat) to perform the scaling actions. The difference comes from the monitoring service.

@@ -57,7 +57,7 @@ The _another service_ is [Prometheus stack](https://prometheus.io/). The questio
- Flexible: Besides system factors like CPU/memory usage, I can evaluate any metrics I can collect, for example JVM metrics.
- // Take time to investigate about Prometheus and fill it here by yourself

### The implementation

**The ideal architecture**

@@ -143,7 +143,7 @@ We need a 3rd service to solve these problems - `Faythe does some magic`.
- Prometheus Alertmanager sends alerts via a pre-configured webhook URL - the Faythe endpoint.
- Faythe receives and processes the alerts (dedups, groups them, and generates a Heat signal URL) and creates a POST request to the scale endpoint.

### Guideline

The current approach requires some further setup and configuration of the Prometheus and Heat stacks. You will see that it's quite complicated.

@@ -304,7 +304,7 @@ server_config:

<center><iframe src="https://giphy.com/embed/cLlVn5zC5UOSmQZKJ7" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/RobertEBlackmon-bye-go-away-anna-wintour-cLlVn5zC5UOSmQZKJ7">via GIPHY</a></p></center>

### Drawbacks and TODO

**Drawbacks**

20 changes: 10 additions & 10 deletions content/posts/operate-etcd-cluster.md
@@ -15,21 +15,21 @@ Etcd Version `v3.4.0`.

## Requirements

### Number of nodes

- \>= 3 nodes. An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with **n** members, quorum is **(n/2)+1**; for example, a 3-node cluster has a quorum of 2 and tolerates one member failure.

### CPUs

- Etcd doesn't require a lot of CPU capacity.
- Typical clusters need **2-4 cores** to run smoothly.

### Memory

- Etcd performance depends on having enough memory (to cache key-value data, track watchers, etc.).
- Typically **8GB** is enough.

### Disk

- An etcd cluster is very sensitive to disk latencies. Since etcd must persist proposals to its log, disk activity from other processes may cause long `fsync` latencies. The upshot is etcd may miss heartbeats, causing request timeouts and temporary leader loss. An etcd server can sometimes stably run alongside these processes when given a high disk priority.
- Check whether a disk is fast enough for etcd using [fio](https://github.com/axboe/fio). If the 99th percentile of fdatasync is **<10ms**, your storage is ok.
@@ -41,15 +41,15 @@ $ fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data \

- **SSD** is recommended.

### Network

- Etcd cluster should be deployed in a fast and reliable network. Low latency ensures etcd members can communicate fast. High bandwidth can reduce the time to recover a failed etcd member.
- **1GbE** is sufficient for common etcd.
- Note that the network isn't the only source of latency. Each request and response may be impacted by slow disks on both the leader and followers.

## Tuning

### Time parameters

- `Heartbeat interval`.
- The frequency with which the leader will notify followers that it is still the leader.
@@ -70,7 +70,7 @@ $ etcd --heartbeat-interval=100 --election-timeout=500
$ ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=500 etcd
```

### Disk

- An etcd server can sometimes stably run alongside these processes when given a high disk priority using [ionice](https://linux.die.net/man/1/ionice).

@@ -79,7 +79,7 @@ $ ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=500 etcd
$ sudo ionice -c2 -n0 -p `pgrep etcd`
```

### Snapshot

- etcd appends all key changes to a log file -> huge log that grows forever :point_up:
- Solution: Make periodic snapshots (save the current state and remove old logs).
@@ -96,7 +96,7 @@ $ ETCD_SNAPSHOT_COUNT=5000 etcd

## Maintenance

### History compaction

- Etcd keeps an exact history of its keyspace; the history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion.
- Etcd can be set to automatically compact the keyspace with the `--auto-compaction-*` options with a period of hours.
@@ -110,7 +110,7 @@ $ etcd --auto-compaction-retention=1 --auto-compaction-mode=periodic
- Revision-based: `--auto-compaction-mode=revision --auto-compaction-retention=1000` automatically compacts at "latest revision" - 1000 every 5 minutes (when the latest revision is 30000, it compacts at revision 29000). Use this when you have a large keyspace.
- Periodic: `--auto-compaction-mode=periodic --auto-compaction-retention=72h` automatically compacts with a 72-hour retention window every hour. Use this when you have a huge number of revisions for a key-value pair.

### Defragmentation

- Compacting old revisions internally fragments etcd by leaving gaps in the backend database - `internal fragmentation`.
- Internal fragmentation space is available for use by etcd but unavailable to the host filesystem.
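
To release that space back to the filesystem, etcd provides a defragmentation command; a minimal sketch (the endpoint is a placeholder, and members should be defragmented one at a time because the operation blocks requests):

```bash
etcdctl --endpoints=https://127.0.0.1:2379 defrag
```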
10 changes: 5 additions & 5 deletions content/posts/shanghai-2019.md
@@ -10,9 +10,9 @@ draft: true
## Preparation

### Applications to install

### Currency

## Transportation

@@ -22,8 +22,8 @@ draft: true

## Side notes

### China and QR codes

### Language

### Going to the toilet...