Release v3.3.0 (patroni#3043)
* Make sure tests are not making external calls
and pass a URL with a scheme to urllib3 to avoid warnings

* Make sure unit tests do not rely on filesystem state

* Bump pyright and "solve" reported "issues"

Most of them are related to partially unknown types of values coming from an empty dict or list.
For the empty-dict case we solve it with the `EMPTY_DICT` object of the newly introduced
`_FrozenDict` class.

* Improve unit-test code coverage

* Add release notes for 3.3.0

* Bump version

* Fix pyinstaller spec file

* Python 3.6 compatibility

---------

Co-authored-by: Polina Bungina <[email protected]>
CyberDem0n and hughcapet authored Apr 4, 2024
1 parent 3fd7c98 commit 48fbf64
Showing 26 changed files with 201 additions and 57 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/tests.yaml
@@ -174,7 +174,7 @@ jobs:

- uses: jakebailey/pyright-action@v1
with:
version: 1.1.347
version: 1.1.356

docs:
runs-on: ubuntu-latest
70 changes: 70 additions & 0 deletions docs/releases.rst
@@ -3,6 +3,76 @@
Release notes
=============

Version 3.3.0
-------------

.. warning::
All older Patroni versions are not compatible with ``ydiff>=1.3``.

The following options are available to "fix" the problem:

1. upgrade Patroni to the latest version
2. install ``ydiff<1.3`` after installing Patroni
3. install the ``cdiff`` module


**New features**

- Add ability to pass ``auth_data`` to Zookeeper client (Aras Mumcuyan)

It allows specifying the authentication credentials to use for the connection (see the sketch after this list).

- Add a contrib script for ``Barman`` integration (Israel Barth Rubio)

Provide an application ``patroni_barman`` that allows performing ``Barman`` operations remotely and can be used as a custom bootstrap/custom replica method or as an ``on_role_change`` callback. Please check :ref:`here <tools_integration>` for more information.

- Support ``JSON`` log format (alisalemmi)

Apart from ``plain`` (the default), Patroni now also supports the ``json`` log format. It requires the ``python-json-logger>=2.0.2`` library to be installed (see the sketch after this list).

- Show ``pending_restart_reason`` information (Polina Bungina)

Provide extended information about the PostgreSQL parameters that caused the ``pending_restart`` flag to be set. Both ``patronictl list`` and the ``/patroni`` REST API endpoint now show the parameter names and their "diff" as ``pending_restart_reason``.

- Implement ``nostream`` tag (Grigory Smolkin)

If the ``nostream`` tag is set to ``true``, the node will not use the replication protocol to stream WAL, but will instead rely on archive recovery (if ``restore_command`` is configured). It also disables copying and synchronization of permanent logical replication slots on the node itself and on all of its cascading replicas.
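
For the ``auth_data`` item above, a minimal sketch of how authentication credentials are passed to the underlying ``kazoo`` Zookeeper client; the hosts and credentials are made-up examples, and the exact Patroni configuration layout is not shown here:

.. code-block:: python

    from kazoo.client import KazooClient

    # auth_data is a list of (scheme, credential) tuples understood by ZooKeeper,
    # e.g. the "digest" scheme with "user:password" credentials.
    client = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181',
                         auth_data=[('digest', 'patroni:secret')])
    client.start()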

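Similarly, for the ``json`` log format item, a minimal sketch of how JSON log formatting with the ``python-json-logger`` library works in general; it illustrates the library, not Patroni's internal logging setup:

.. code-block:: python

    import logging

    from pythonjsonlogger import jsonlogger

    handler = logging.StreamHandler()
    # Render each log record as a JSON object instead of a plain-text line.
    handler.setFormatter(jsonlogger.JsonFormatter('%(asctime)s %(levelname)s %(name)s %(message)s'))

    logger = logging.getLogger('patroni')
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.info('bootstrap in progress')
    # -> {"asctime": "...", "levelname": "INFO", "name": "patroni", "message": "bootstrap in progress"}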

**Improvements**

- Implement validation of the ``log`` section (Alexander Kukushkin)

Until now the validator was not checking the correctness of the provided logging configuration.

- Improve logging for PostgreSQL parameters change (Polina Bungina)

Convert old values to a human-readable format and log information about mismatches between ``pg_controldata`` output and the Patroni global configuration.


**Bugfixes**

- Properly filter out disallowed ``pg_basebackup`` options (Israel Barth Rubio)

Due to a bug, Patroni was not properly filtering out disallowed options configured for the ``basebackup`` replica bootstrap method when they were provided in the ``- setting: value`` format.

- Fix ``etcd3`` authentication error handling (Alexander Kukushkin)

Always retry once on an ``etcd3`` authentication error if authentication was not performed right before executing the request. Also, do not restart watchers on reauthentication.

- Improve logic of the validator files discovery (Waynerv)

Use the ``importlib`` library to discover the files with available configuration parameters when possible (on Python 3.9+). This implementation is more stable and doesn't break Patroni distributions based on ``zip`` archives (see the sketch at the end of this section).

- Use ``target_session_attrs`` only when multiple hosts are specified in the ``standby_cluster`` section (Alexander Kukushkin)

``target_session_attrs=read-write`` is now added to ``primary_conninfo`` on the standby leader node only when the ``standby_cluster.host`` setting contains multiple hosts separated by commas.

- Add compatibility code for ``ydiff`` library version 1.3+ (Alexander Kukushkin)

Patroni relies on some ``ydiff`` API that is not public, because ``ydiff`` is supposed to be just a terminal tool rather than a Python module. Unfortunately, an API change in version 1.3 broke old Patroni versions.
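
For the validator-files discovery item above, a rough sketch of the ``importlib``-based approach on Python 3.9+; it uses the ``patroni.postgresql.available_parameters`` package shipped with Patroni and is an illustration rather than the exact implementation:

.. code-block:: python

    from importlib.resources import files

    # Enumerate the bundled YAML files that describe the available configuration
    # parameters; this also works for zip-based distributions.
    pkg = files('patroni.postgresql.available_parameters')
    yaml_files = sorted(p.name for p in pkg.iterdir()
                        if p.name.lower().endswith(('.yml', '.yaml')))
    print(yaml_files)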


Version 3.2.2
-------------

2 changes: 2 additions & 0 deletions docs/tools_integration.rst
@@ -1,3 +1,5 @@
.. _tools_integration:

Integration with other tools
============================

12 changes: 8 additions & 4 deletions patroni.spec
@@ -13,13 +13,17 @@ def hiddenimports():
sys.path.pop(0)


def resources():
import os
res_dir = 'patroni/postgresql/available_parameters/'
exts = set(f.split('.')[-1] for f in os.listdir(res_dir))
return [(res_dir + '*.' + e, res_dir) for e in exts if e.lower() in {'yml', 'yaml'}]


a = Analysis(['patroni/__main__.py'],
pathex=[],
binaries=None,
datas=[
('patroni/postgresql/available_parameters/*.yml', 'patroni/postgresql/available_parameters'),
('patroni/postgresql/available_parameters/*.yaml', 'patroni/postgresql/available_parameters'),
],
datas=resources(),
hiddenimports=hiddenimports(),
hookspath=[],
runtime_hooks=[],
53 changes: 49 additions & 4 deletions patroni/collections.py
@@ -1,9 +1,10 @@
"""Patroni custom object types somewhat like :mod:`collections` module.
Provides a case insensitive :class:`dict` and :class:`set` object types.
Provides a case insensitive :class:`dict` and :class:`set` object types, and `EMPTY_DICT` frozen dictionary object.
"""
from collections import OrderedDict
from typing import Any, Collection, Dict, Iterator, KeysView, MutableMapping, MutableSet, Optional
from copy import deepcopy
from typing import Any, Collection, Dict, Iterator, KeysView, Mapping, MutableMapping, MutableSet, Optional


class CaseInsensitiveSet(MutableSet[str]):
@@ -48,7 +49,7 @@ def __str__(self) -> str:
"""
return str(set(self._values.values()))

def __contains__(self, value: str) -> bool:
def __contains__(self, value: object) -> bool:
"""Check if set contains *value*.
The check is performed case-insensitively.
@@ -57,7 +58,7 @@ def __contains__(self, value: str) -> bool:
:returns: ``True`` if *value* is already in the set, ``False`` otherwise.
"""
return value.lower() in self._values
return isinstance(value, str) and value.lower() in self._values

def __iter__(self) -> Iterator[str]:
"""Iterate over the values in this set.
@@ -207,3 +208,47 @@ def __repr__(self) -> str:
"<CaseInsensitiveDict{'A': 'B', 'c': 'd'} at ..."
"""
return '<{0}{1} at {2:x}>'.format(type(self).__name__, dict(self.items()), id(self))


class _FrozenDict(Mapping[str, Any]):
"""Frozen dictionary object."""

def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Create a new instance of :class:`_FrozenDict` with given data."""
self.__values: Dict[str, Any] = dict(*args, **kwargs)

def __iter__(self) -> Iterator[str]:
"""Iterate over keys of this dict.
:yields: each key present in the dict.
"""
return iter(self.__values)

def __len__(self) -> int:
"""Get the length of this dict.
:returns: number of keys in the dict.
:Example:
>>> len(_FrozenDict())
0
"""
return len(self.__values)

def __getitem__(self, key: str) -> Any:
"""Get the value corresponding to *key*.
:returns: value corresponding to *key*.
"""
return self.__values[key]

def copy(self) -> Dict[str, Any]:
"""Create a copy of this dict.
:return: a new dict object with the same keys and values of this dict.
"""
return deepcopy(self.__values)


EMPTY_DICT = _FrozenDict()
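
A brief usage sketch of the new `EMPTY_DICT` object (the `config` dict below is a made-up example; the pattern mirrors the call sites updated elsewhere in this commit):

    from patroni.collections import EMPTY_DICT

    config = {'postgresql': None}  # hypothetical, partially filled configuration

    # Chained lookups keep a fully known Mapping[str, Any] value even when an
    # intermediate value is None or missing, without allocating a new dict.
    bin_name = (config.get('postgresql') or EMPTY_DICT).get('bin_name', 'postgres')

    # EMPTY_DICT is read-only: item assignment raises TypeError, because
    # _FrozenDict implements Mapping rather than MutableMapping.
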
6 changes: 3 additions & 3 deletions patroni/config.py
@@ -12,7 +12,7 @@
from typing import Any, Callable, Collection, Dict, List, Optional, Union, TYPE_CHECKING

from . import PATRONI_ENV_PREFIX
from .collections import CaseInsensitiveDict
from .collections import CaseInsensitiveDict, EMPTY_DICT
from .dcs import ClusterConfig
from .exceptions import ConfigParseError
from .file_perm import pg_perm
@@ -445,14 +445,14 @@ def _safe_copy_dynamic_configuration(self, dynamic_configuration: Dict[str, Any]

for name, value in dynamic_configuration.items():
if name == 'postgresql':
for name, value in (value or {}).items():
for name, value in (value or EMPTY_DICT).items():
if name == 'parameters':
config['postgresql'][name].update(self._process_postgresql_parameters(value))
elif name not in ('connect_address', 'proxy_address', 'listen',
'config_dir', 'data_dir', 'pgpass', 'authentication'):
config['postgresql'][name] = deepcopy(value)
elif name == 'standby_cluster':
for name, value in (value or {}).items():
for name, value in (value or EMPTY_DICT).items():
if name in self.__DEFAULT_CONFIG['standby_cluster']:
config['standby_cluster'][name] = deepcopy(value)
elif name in config: # only variables present in __DEFAULT_CONFIG allowed to be overridden from DCS
10 changes: 7 additions & 3 deletions patroni/config_generator.py
@@ -15,6 +15,7 @@
from psycopg2 import cursor

from . import psycopg
from .collections import EMPTY_DICT
from .config import Config
from .exceptions import PatroniException
from .log import PatroniLogger
@@ -244,7 +245,8 @@ def _get_int_major_version(self) -> int:
See :func:`~patroni.postgresql.misc.postgres_major_version_to_int` and
:func:`~patroni.utils.get_major_version`.
"""
postgres_bin = ((self.config.get('postgresql') or {}).get('bin_name') or {}).get('postgres', 'postgres')
postgres_bin = ((self.config.get('postgresql')
or EMPTY_DICT).get('bin_name') or EMPTY_DICT).get('postgres', 'postgres')
return postgres_major_version_to_int(get_major_version(self.config['postgresql'].get('bin_dir'), postgres_bin))

def generate(self) -> None:
@@ -411,8 +413,10 @@ def _set_su_params(self) -> None:
val = self.parsed_dsn.get(conn_param, os.getenv(env_var))
if val:
su_params[conn_param] = val
patroni_env_su_username = ((self.config.get('authentication') or {}).get('superuser') or {}).get('username')
patroni_env_su_pwd = ((self.config.get('authentication') or {}).get('superuser') or {}).get('password')
patroni_env_su_username = ((self.config.get('authentication')
or EMPTY_DICT).get('superuser') or EMPTY_DICT).get('username')
patroni_env_su_pwd = ((self.config.get('authentication')
or EMPTY_DICT).get('superuser') or EMPTY_DICT).get('password')
# because we use "username" in the config for some reason
su_params['username'] = su_params.pop('user', patroni_env_su_username) or getuser()
su_params['password'] = su_params.get('password', patroni_env_su_pwd) or \
4 changes: 4 additions & 0 deletions patroni/dcs/__init__.py
@@ -85,6 +85,8 @@ def dcs_modules() -> List[str]:
:returns: list of known module names with absolute python module path namespace, e.g. ``patroni.dcs.etcd``.
"""
if TYPE_CHECKING: # pragma: no cover
assert isinstance(__package__, str)
return iter_modules(__package__)


@@ -101,6 +103,8 @@ def iter_dcs_classes(
:returns: an iterator of tuples, each containing the module ``name`` and the imported DCS class object.
"""
if TYPE_CHECKING: # pragma: no cover
assert isinstance(__package__, str)
return iter_classes(__package__, AbstractDCS, config)


3 changes: 2 additions & 1 deletion patroni/dcs/consul.py
@@ -444,8 +444,9 @@ def _mpp_cluster_loader(self, path: str) -> Dict[int, Cluster]:
:returns: all MPP groups as :class:`dict`, with group IDs as keys and :class:`Cluster` objects as values.
"""
results: Optional[List[Dict[str, Any]]]
_, results = self.retry(self._client.kv.get, path, recurse=True, consistency=self._consistency)
clusters: Dict[int, Dict[str, Cluster]] = defaultdict(dict)
clusters: Dict[int, Dict[str, Dict[str, Any]]] = defaultdict(dict)
for node in results or []:
key = node['Key'][len(path):].split('/', 1)
if len(key) == 2 and self._mpp.group_re.match(key[0]):
25 changes: 14 additions & 11 deletions patroni/dcs/kubernetes.py
@@ -20,6 +20,7 @@
from typing import Any, Callable, Collection, Dict, List, Optional, Tuple, Type, Union, TYPE_CHECKING

from . import AbstractDCS, Cluster, ClusterConfig, Failover, Leader, Member, Status, SyncState, TimelineHistory
from ..collections import EMPTY_DICT
from ..exceptions import DCSError
from ..postgresql.mpp import AbstractMPP
from ..utils import deep_compare, iter_response_objects, keepalive_socket_options, \
@@ -470,7 +471,7 @@ def wrapper(*args: Any, **kwargs: Any) -> Union[urllib3.HTTPResponse, K8sObject]
if len(args) == 3: # name, namespace, body
body = args[2]
elif action == 'create': # namespace, body
body = args[1]
body = args[1] # pyright: ignore [reportGeneralTypeIssues]
elif action == 'delete': # name, namespace
body = kwargs.pop('body', None)
else:
@@ -509,7 +510,7 @@ def __init__(self, orig: K8sClient.rest.ApiException) -> None:
@property
def sleeptime(self) -> Optional[int]:
try:
return int((self.headers or {}).get('retry-after', ''))
return int((self.headers or EMPTY_DICT).get('retry-after', ''))
except Exception:
return None

@@ -654,15 +655,15 @@ def _process_event(self, event: Dict[str, Union[Any, Dict[str, Union[Any, Dict[s
obj = K8sObject(obj)
success, old_value = self.set(name, obj)
if success:
new_value = (obj.metadata.annotations or {}).get(self._annotations_map.get(name))
new_value = (obj.metadata.annotations or EMPTY_DICT).get(self._annotations_map.get(name, ''))
elif ev_type == 'DELETED':
success, old_value = self.delete(name, obj['metadata']['resourceVersion'])
else:
return logger.warning('Unexpected event type: %s', ev_type)

if success and obj.get('kind') != 'Pod':
if old_value:
old_value = (old_value.metadata.annotations or {}).get(self._annotations_map.get(name))
old_value = (old_value.metadata.annotations or EMPTY_DICT).get(self._annotations_map.get(name, ''))

value_changed = old_value != new_value and \
(name != self._dcs.config_path or old_value is not None and new_value is not None)
@@ -844,7 +845,7 @@ def reload_config(self, config: Union['Config', Dict[str, Any]]) -> None:

@staticmethod
def member(pod: K8sObject) -> Member:
annotations = pod.metadata.annotations or {}
annotations = pod.metadata.annotations or EMPTY_DICT
member = Member.from_node(pod.metadata.resource_version, pod.metadata.name, None, annotations.get('status', ''))
member.data['pod_labels'] = pod.metadata.labels
return member
@@ -925,7 +926,7 @@ def _cluster_from_nodes(self, group: str, nodes: Dict[str, K8sObject], pods: Col
failover = nodes.get(path + self._FAILOVER)
metadata = failover and failover.metadata
failover = metadata and Failover.from_node(metadata.resource_version,
(metadata.annotations or {}).copy())
(metadata.annotations or EMPTY_DICT).copy())

# get synchronization state
sync = nodes.get(path + self._SYNC)
@@ -1047,16 +1048,18 @@ def subsets_changed(last_observed_subsets: List[K8sObject], ip: str, ports: List

def __target_ref(self, leader_ip: str, latest_subsets: List[K8sObject], pod: K8sObject) -> K8sObject:
# we want to re-use existing target_ref if possible
empty_addresses: List[K8sObject] = []
for subset in latest_subsets:
for address in subset.addresses or []:
for address in subset.addresses or empty_addresses:
if address.ip == leader_ip and address.target_ref and address.target_ref.name == self._name:
return address.target_ref
return k8s_client.V1ObjectReference(kind='Pod', uid=pod.metadata.uid, namespace=self._namespace,
name=self._name, resource_version=pod.metadata.resource_version)

def _map_subsets(self, endpoints: Dict[str, Any], ips: List[str]) -> None:
leader = self._kinds.get(self.leader_path)
latest_subsets = leader and leader.subsets or []
empty_addresses: List[K8sObject] = []
latest_subsets = leader and leader.subsets or empty_addresses
if not ips:
# We want to have subsets empty
if latest_subsets:
@@ -1212,7 +1215,7 @@ def _retry(*args: Any, **kwargs: Any) -> Any:
if not retry.ensure_deadline(0.5):
return False

kind_annotations = kind and kind.metadata.annotations or {}
kind_annotations = kind and kind.metadata.annotations or EMPTY_DICT
kind_resource_version = kind and kind.metadata.resource_version

# There is different leader or resource_version in cache didn't change
@@ -1225,7 +1228,7 @@ def _retry(*args: Any, **kwargs: Any) -> Any:
def update_leader(self, leader: Leader, last_lsn: Optional[int],
slots: Optional[Dict[str, int]] = None, failsafe: Optional[Dict[str, str]] = None) -> bool:
kind = self._kinds.get(self.leader_path)
kind_annotations = kind and kind.metadata.annotations or {}
kind_annotations = kind and kind.metadata.annotations or EMPTY_DICT

if kind and kind_annotations.get(self._LEADER) != self._name:
return False
@@ -1346,7 +1349,7 @@ def _delete_leader(self, leader: Leader) -> bool:
def delete_leader(self, leader: Optional[Leader], last_lsn: Optional[int] = None) -> bool:
ret = False
kind = self._kinds.get(self.leader_path)
if kind and (kind.metadata.annotations or {}).get(self._LEADER) == self._name:
if kind and (kind.metadata.annotations or EMPTY_DICT).get(self._LEADER) == self._name:
annotations: Dict[str, Optional[str]] = {self._LEADER: None}
if last_lsn:
annotations[self._OPTIME] = str(last_lsn)
(Diffs for the remaining changed files are not shown.)
