From 3775f1c23b11b96eafa35e0ce14c698038e8d8bf Mon Sep 17 00:00:00 2001
From: Kevin
Date: Tue, 5 Nov 2024 12:31:07 +0100
Subject: [PATCH] feat (sync service): multi tenancy (#1886)

_Closes #1868, closes #1844. I kept that PR open to consult its diff until all the issues had been resolved here._

This PR fixes #1591. It enables Electric to work with multiple databases (i.e. tenants).

### Architectural overview

![multi-tenant-electric](https://github.com/user-attachments/assets/9129d146-288d-43ea-8a9d-879656173443)

TODO:
- [x] rebase on top of main
- [x] review commented-out code and either remove it or make sure the place it was moved to has been updated for multi-tenancy
- [x] add a `database_id` query param to the health endpoint to request the health of a given tenant
- [x] extract `validate_tenant_id`, `load_tenant` and `assign_tenant`, which are currently duplicated in the create-shape and delete-shape plugs
- [x] Implement a two-tier ETS storage for column info and relation data. In this PR we replace two named tables with anonymous ones, necessitating a roundtrip to a single GenServer just to fetch the table handle. This GenServer becomes a bottleneck. We should either create another static ETS table in which to look up the individual ETS table handles, or store all column info and relation info in two static (named) tables, with a key design that ensures entries don't clobber each other.
- [x] unit tests for multi tenancy
- [x] unit tests for the add-DB plug
- [x] unit tests for the delete-DB plug
- [x] e2e tests for multi tenancy
- [x] debug flaky cache tests
- [x] add a diagram explaining the new architecture
- [x] update the OpenAPI spec (for the add-DB endpoint and remove-DB endpoint)
- [x] persist tenant information and restore it on startup
- [ ] e2e test for persistence of tenants
- [x] Generate a random database id (i.e.
tenant id) if you only provide a database_url env var --------- Co-authored-by: Oleksii Sholik Co-authored-by: msfstef Co-authored-by: Ilia Borovitinov --- .github/workflows/ts_test.yml | 4 +- integration-tests/tests/_macros.luxinc | 2 +- .../tests/invalidated-replication-slot.lux | 2 +- ...suming-replication-at-consistent-point.lux | 8 +- integration-tests/tests/rolling-deploy.lux | 8 +- .../react-hooks/test/support/global-setup.ts | 3 +- packages/sync-service/.env.dev | 3 + packages/sync-service/.env.test | 3 + packages/sync-service/config/runtime.exs | 44 +-- packages/sync-service/dev/docker-compose.yml | 18 ++ packages/sync-service/dev/postgres2.conf | 3 + .../sync-service/lib/electric/application.ex | 107 ++----- .../lib/electric/application/configuration.ex | 4 - .../lib/electric/connection/manager.ex | 81 +++-- .../lib/electric/connection/supervisor.ex | 25 +- .../lib/electric/plug/add_database_plug.ex | 193 ++++++++++++ .../lib/electric/plug/delete_shape_plug.ex | 3 + .../lib/electric/plug/health_check_plug.ex | 11 +- .../lib/electric/plug/remove_database_plug.ex | 25 ++ .../sync-service/lib/electric/plug/router.ex | 9 +- .../lib/electric/plug/serve_shape_plug.ex | 84 ++---- .../lib/electric/plug/tenant_utils.ex | 46 +++ .../sync-service/lib/electric/plug/utils.ex | 61 ++++ .../lib/electric/postgres/inspector.ex | 2 +- .../postgres/inspector/ets_inspector.ex | 122 +++++++- .../electric/postgres/replication_client.ex | 15 +- .../replication/shape_log_collector.ex | 9 +- .../lib/electric/service_status.ex | 8 +- .../sync-service/lib/electric/shape_cache.ex | 99 ++++-- .../shape_cache/crashing_file_storage.ex | 2 +- .../lib/electric/shape_cache/file_storage.ex | 30 +- .../electric/shape_cache/in_memory_storage.ex | 30 +- .../lib/electric/shape_cache/shape_status.ex | 6 +- .../lib/electric/shape_cache/storage.ex | 7 +- packages/sync-service/lib/electric/shapes.ex | 18 +- .../lib/electric/shapes/consumer.ex | 34 ++- .../electric/shapes/consumer/snapshotter.ex | 
19 +- .../electric/shapes/consumer/supervisor.ex | 30 +- .../electric/shapes/consumer_supervisor.ex | 15 +- .../lib/electric/shapes/supervisor.ex | 16 +- .../lib/electric/tenant/dynamic_supervisor.ex | 36 +++ .../lib/electric/tenant/persistence.ex | 94 ++++++ .../lib/electric/tenant/supervisor.ex | 99 ++++++ .../lib/electric/tenant_manager.ex | 283 ++++++++++++++++++ .../sync-service/lib/electric/timeline.ex | 35 ++- packages/sync-service/lib/electric/utils.ex | 6 + .../electric/plug/add_database_plug_test.exs | 117 ++++++++ .../electric/plug/delete_shape_plug_test.exs | 65 +++- .../electric/plug/health_check_plug_test.exs | 60 ++-- .../plug/remove_database_plug_test.exs | 87 ++++++ .../test/electric/plug/router_test.exs | 4 +- .../electric/plug/serve_shape_plug_test.exs | 179 ++++++----- .../postgres/inspector/ets_inspector_test.exs | 6 +- .../postgres/replication_client_test.exs | 10 +- .../replication/shape_log_collector_test.exs | 22 +- .../shape_cache/shape_status_test.exs | 7 +- .../storage_implementations_test.exs | 9 +- .../electric/shape_cache/storage_test.exs | 11 +- .../test/electric/shape_cache_test.exs | 64 ++-- .../test/electric/shapes/consumer_test.exs | 108 ++++--- .../test/electric/shapes/shape_test.exs | 4 +- .../test/electric/tenant/persistence_test.exs | 122 ++++++++ .../test/electric/tenant_manager_test.exs | 259 ++++++++++++++++ .../test/electric/timeline_test.exs | 72 +++-- .../test/support/component_setup.ex | 209 ++++++++++++- .../sync-service/test/support/db_setup.ex | 4 +- packages/sync-service/test/support/mocks.ex | 1 + .../sync-service/test/support/test_storage.ex | 6 +- packages/typescript-client/src/client.ts | 14 + packages/typescript-client/src/constants.ts | 1 + .../test/integration.test.ts | 181 ++++++++++- .../test/support/global-setup.ts | 29 +- .../test/support/test-context.ts | 135 ++++++++- packages/typescript-client/vitest.config.ts | 1 + website/electric-api.yaml | 102 ++++++- 75 files changed, 3020 insertions(+), 631 
deletions(-) create mode 100644 packages/sync-service/.env.test create mode 100644 packages/sync-service/dev/postgres2.conf create mode 100644 packages/sync-service/lib/electric/plug/add_database_plug.ex create mode 100644 packages/sync-service/lib/electric/plug/remove_database_plug.ex create mode 100644 packages/sync-service/lib/electric/plug/tenant_utils.ex create mode 100644 packages/sync-service/lib/electric/tenant/dynamic_supervisor.ex create mode 100644 packages/sync-service/lib/electric/tenant/persistence.ex create mode 100644 packages/sync-service/lib/electric/tenant/supervisor.ex create mode 100644 packages/sync-service/lib/electric/tenant_manager.ex create mode 100644 packages/sync-service/test/electric/plug/add_database_plug_test.exs create mode 100644 packages/sync-service/test/electric/plug/remove_database_plug_test.exs create mode 100644 packages/sync-service/test/electric/tenant/persistence_test.exs create mode 100644 packages/sync-service/test/electric/tenant_manager_test.exs diff --git a/.github/workflows/ts_test.yml b/.github/workflows/ts_test.yml index 926b914775..25d6108a4f 100644 --- a/.github/workflows/ts_test.yml +++ b/.github/workflows/ts_test.yml @@ -70,6 +70,8 @@ jobs: defaults: run: working-directory: ${{ matrix.package_dir }} + env: + DATABASE_ID: ci_test_tenant steps: - uses: actions/checkout@v4 - uses: erlef/setup-beam@v1 @@ -111,7 +113,7 @@ jobs: mix run --no-halt & wait-on: | - http-get://localhost:3000/v1/health + http-get://localhost:3000/v1/health?database_id=${{ env.DATABASE_ID }} tail: true log-output-resume: stderr diff --git a/integration-tests/tests/_macros.luxinc b/integration-tests/tests/_macros.luxinc index f1f76304fc..2de98043f1 100644 --- a/integration-tests/tests/_macros.luxinc +++ b/integration-tests/tests/_macros.luxinc @@ -91,7 +91,7 @@ [shell $shell_name] -$fail_pattern - !DATABASE_URL=$database_url PORT=$port $env ../scripts/electric_dev.sh + !DATABASE_ID=integration_test_tenant DATABASE_URL=$database_url 
PORT=$port $env ../scripts/electric_dev.sh [endmacro] [macro teardown] diff --git a/integration-tests/tests/invalidated-replication-slot.lux b/integration-tests/tests/invalidated-replication-slot.lux index d4ebbdbef5..34f980e453 100644 --- a/integration-tests/tests/invalidated-replication-slot.lux +++ b/integration-tests/tests/invalidated-replication-slot.lux @@ -6,7 +6,7 @@ [my invalidated_slot_error= """ - [error] GenServer Electric.Connection.Manager terminating + [error] :gen_statem {Electric.Registry.Processes, {Electric.Postgres.ReplicationClient, :default, "integration_test_tenant"}} terminating ** (Postgrex.Error) ERROR 55000 (object_not_in_prerequisite_state) cannot read from logical replication slot "electric_slot_integration" This slot has been invalidated because it exceeded the maximum reserved size. diff --git a/integration-tests/tests/resuming-replication-at-consistent-point.lux b/integration-tests/tests/resuming-replication-at-consistent-point.lux index 3ccc822044..f6e6b8256c 100644 --- a/integration-tests/tests/resuming-replication-at-consistent-point.lux +++ b/integration-tests/tests/resuming-replication-at-consistent-point.lux @@ -75,15 +75,15 @@ ?Txn received in Shapes.Consumer: %Electric.Replication.Changes.Transaction{xid: $xid # Both consumers hit their call limit and exit with simulated storage failures. 
- ?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "[0-9-]+"}} terminating + ?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "integration_test_tenant", "[0-9-]+"}} terminating ??Simulated storage failure - ?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "[0-9-]+"}} terminating + ?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "integration_test_tenant", "[0-9-]+"}} terminating ??Simulated storage failure # The log collector process and the replication client both exit, as their lifetimes are tied # together by the supervision tree design. - ??[error] GenServer {Electric.Registry.Processes, {Electric.Replication.ShapeLogCollector, :default}} terminating - ??[error] :gen_statem {Electric.Registry.Processes, {Electric.Postgres.ReplicationClient, :default}} terminating + ??[error] GenServer {Electric.Registry.Processes, {Electric.Replication.ShapeLogCollector, :default, "integration_test_tenant"}} terminating + ??[error] :gen_statem {Electric.Registry.Processes, {Electric.Postgres.ReplicationClient, :default, "integration_test_tenant"}} terminating # Observe that both shape consumers and the replication client have restarted. ??[debug] Found existing replication slot diff --git a/integration-tests/tests/rolling-deploy.lux b/integration-tests/tests/rolling-deploy.lux index df7d6fa922..66af55ab69 100644 --- a/integration-tests/tests/rolling-deploy.lux +++ b/integration-tests/tests/rolling-deploy.lux @@ -19,7 +19,7 @@ # First service should be health and active [shell orchestator] - !curl -X GET http://localhost:3000/v1/health + !curl -X GET http://localhost:3000/v1/health?database_id=integration_test_tenant ??{"status":"active"} ## Start the second sync service. 
@@ -35,9 +35,9 @@ # Second service should be in waiting state, ready to take over [shell orchestator] - !curl -X GET http://localhost:3000/v1/health + !curl -X GET http://localhost:3000/v1/health?database_id=integration_test_tenant ??{"status":"active"} - !curl -X GET http://localhost:3001/v1/health + !curl -X GET http://localhost:3001/v1/health?database_id=integration_test_tenant ??{"status":"waiting"} ## Terminate first electric @@ -55,7 +55,7 @@ # Second service is now healthy and active [shell orchestator] - !curl -X GET http://localhost:3001/v1/health + !curl -X GET http://localhost:3001/v1/health?database_id=integration_test_tenant ??{"status":"active"} [cleanup] diff --git a/packages/react-hooks/test/support/global-setup.ts b/packages/react-hooks/test/support/global-setup.ts index a5aba0c2ce..5f494e74ea 100644 --- a/packages/react-hooks/test/support/global-setup.ts +++ b/packages/react-hooks/test/support/global-setup.ts @@ -3,6 +3,7 @@ import { makePgClient } from './test-helpers' const url = process.env.ELECTRIC_URL ?? `http://localhost:3000` const proxyUrl = process.env.ELECTRIC_PROXY_CACHE_URL ?? `http://localhost:3002` +const databaseId = process.env.DATABASE_ID ?? 
`test_tenant` // name of proxy cache container to execute commands against, // see docker-compose.yml that spins it up for details @@ -29,7 +30,7 @@ function waitForElectric(url: string): Promise { ) const tryHealth = async () => - fetch(`${url}/v1/health`) + fetch(`${url}/v1/health?database_id=${databaseId}`) .then(async (res): Promise => { if (!res.ok) return tryHealth() const { status } = (await res.json()) as { status: string } diff --git a/packages/sync-service/.env.dev b/packages/sync-service/.env.dev index 20c0c48cea..d1d6928c5b 100644 --- a/packages/sync-service/.env.dev +++ b/packages/sync-service/.env.dev @@ -5,3 +5,6 @@ CACHE_MAX_AGE=1 CACHE_STALE_AGE=3 # using a small chunk size of 10kB for dev to speed up tests LOG_CHUNK_BYTES_THRESHOLD=10000 +DATABASE_ID=test_tenant +# configuring a second database for multi-tenancy integration testing +OTHER_DATABASE_URL=postgresql://postgres:password@localhost:54322/electric?sslmode=disable diff --git a/packages/sync-service/.env.test b/packages/sync-service/.env.test new file mode 100644 index 0000000000..528bac35cf --- /dev/null +++ b/packages/sync-service/.env.test @@ -0,0 +1,3 @@ +LOG_LEVEL=info +DATABASE_URL=postgresql://postgres:password@localhost:54321/postgres?sslmode=disable +DATABASE_ID=test_tenant diff --git a/packages/sync-service/config/runtime.exs b/packages/sync-service/config/runtime.exs index ebc39be50c..aeffdb2593 100644 --- a/packages/sync-service/config/runtime.exs +++ b/packages/sync-service/config/runtime.exs @@ -29,8 +29,7 @@ config :logger, handle_sasl_reports: sasl? 
if config_env() == :test do - config(:logger, level: :info) - config(:electric, pg_version_for_tests: env!("POSTGRES_VERSION", :integer, 150_001)) + config :electric, pg_version_for_tests: env!("POSTGRES_VERSION", :integer, 150_001) end electric_instance_id = :default @@ -85,28 +84,32 @@ otel_simple_processor = config :opentelemetry, processors: [otel_batch_processor, otel_simple_processor] |> Enum.reject(&is_nil/1) -connection_opts = - if Config.config_env() == :test do - [ - hostname: "localhost", - port: 54321, - username: "postgres", - password: "password", - database: "postgres", - sslmode: :disable - ] - else - {:ok, database_url_config} = - env!("DATABASE_URL", :string) - |> Electric.ConfigParser.parse_postgresql_uri() +database_url = env!("DATABASE_URL", :string, nil) +default_tenant = env!("DATABASE_ID", :string, nil) + +case {database_url, default_tenant} do + {nil, nil} -> + # No default tenant provided + :ok + + {nil, _} -> + raise "DATABASE_URL must be provided when DATABASE_ID is set" + + {_, _} -> + # A default tenant is provided + {:ok, database_url_config} = Electric.ConfigParser.parse_postgresql_uri(database_url) database_ipv6_config = env!("DATABASE_USE_IPV6", :boolean, false) - database_url_config ++ [ipv6: database_ipv6_config] - end + connection_opts = database_url_config ++ [ipv6: database_ipv6_config] + + config :electric, default_connection_opts: Electric.Utils.obfuscate_password(connection_opts) -config :electric, connection_opts: Electric.Utils.obfuscate_password(connection_opts) + # if `default_tenant` is nil, generate a random UUID for it + tenant_id = default_tenant || Electric.Utils.uuid4() + config :electric, default_tenant: tenant_id +end enable_integration_testing = env!("ENABLE_INTEGRATION_TESTING", :boolean, false) cache_max_age = env!("CACHE_MAX_AGE", :integer, 60) @@ -205,4 +208,5 @@ config :electric, prometheus_port: prometheus_port, storage: storage, persistent_kv: persistent_kv, - listen_on_ipv6?: env!("LISTEN_ON_IPV6", 
:boolean, false) + listen_on_ipv6?: env!("LISTEN_ON_IPV6", :boolean, false), + tenant_tables_name: :tenant_tables diff --git a/packages/sync-service/dev/docker-compose.yml b/packages/sync-service/dev/docker-compose.yml index ebf10ce0ea..3fe51b382f 100644 --- a/packages/sync-service/dev/docker-compose.yml +++ b/packages/sync-service/dev/docker-compose.yml @@ -20,6 +20,24 @@ services: - docker-entrypoint.sh - -c - config_file=/etc/postgresql.conf + postgres2: + image: postgres:16-alpine + environment: + POSTGRES_DB: electric + POSTGRES_USER: postgres + POSTGRES_PASSWORD: password + ports: + - "54322:5433" + volumes: + - ./postgres2.conf:/etc/postgresql.conf:ro + - ./init.sql:/docker-entrypoint-initdb.d/00_shared_init.sql:ro + tmpfs: + - /var/lib/postgresql/data + - /tmp + entrypoint: + - docker-entrypoint.sh + - -c + - config_file=/etc/postgresql.conf nginx: image: nginx:latest ports: diff --git a/packages/sync-service/dev/postgres2.conf b/packages/sync-service/dev/postgres2.conf new file mode 100644 index 0000000000..58fbe8e138 --- /dev/null +++ b/packages/sync-service/dev/postgres2.conf @@ -0,0 +1,3 @@ +listen_addresses = '*' +wal_level = logical # minimal, replica, or logical +port = 5433 \ No newline at end of file diff --git a/packages/sync-service/lib/electric/application.ex b/packages/sync-service/lib/electric/application.ex index fe16ec77c9..038a5c5af5 100644 --- a/packages/sync-service/lib/electric/application.ex +++ b/packages/sync-service/lib/electric/application.ex @@ -1,17 +1,18 @@ defmodule Electric.Application do use Application + require Config @process_registry_name Electric.Registry.Processes def process_registry, do: @process_registry_name - @spec process_name(atom(), atom()) :: {:via, atom(), atom()} - def process_name(electric_instance_id, module) when is_atom(module) do - {:via, Registry, {@process_registry_name, {module, electric_instance_id}}} + @spec process_name(atom(), String.t(), atom()) :: {:via, atom(), {atom(), term()}} + def 
process_name(electric_instance_id, tenant_id, module) when is_atom(module) do + {:via, Registry, {@process_registry_name, {module, electric_instance_id, tenant_id}}} end - @spec process_name(atom(), atom(), term()) :: {:via, atom(), {atom(), term()}} - def process_name(electric_instance_id, module, id) when is_atom(module) do - {:via, Registry, {@process_registry_name, {module, electric_instance_id, id}}} + @spec process_name(atom(), String.t(), atom(), term()) :: {:via, atom(), {atom(), term()}} + def process_name(electric_instance_id, tenant_id, module, id) when is_atom(module) do + {:via, Registry, {@process_registry_name, {module, electric_instance_id, tenant_id, id}}} end @impl true @@ -20,27 +21,14 @@ defmodule Electric.Application do config = configure() - shape_log_collector = Electric.Replication.ShapeLogCollector.name(config.electric_instance_id) + tenant_id = Application.get_env(:electric, :default_tenant) + tenant_opts = [electric_instance_id: config.electric_instance_id] - connection_manager_opts = [ + router_opts = [ electric_instance_id: config.electric_instance_id, - connection_opts: config.connection_opts, - replication_opts: [ - publication_name: config.replication_opts.publication_name, - try_creating_publication?: true, - slot_name: config.replication_opts.slot_name, - slot_temporary?: config.replication_opts.slot_temporary?, - transaction_received: - {Electric.Replication.ShapeLogCollector, :store_transaction, [shape_log_collector]}, - relation_received: - {Electric.Replication.ShapeLogCollector, :handle_relation_msg, [shape_log_collector]} - ], - pool_opts: [ - name: Electric.DbPool, - pool_size: config.pool_opts.size, - types: PgInterop.Postgrex.Types - ], - persistent_kv: config.persistent_kv + tenant_manager: Electric.TenantManager.name(tenant_opts), + allow_shape_deletion: Application.get_env(:electric, :allow_shape_deletion, false), + registry: Registry.ShapeChanges ] # The root application supervisor starts the core global processes, 
including the HTTP @@ -61,41 +49,38 @@ defmodule Electric.Application do name: @process_registry_name, keys: :unique, partitions: System.schedulers_online()}, {Registry, name: Registry.ShapeChanges, keys: :duplicate, partitions: System.schedulers_online()}, - {Electric.Postgres.Inspector.EtsInspector, pool: Electric.DbPool}, + Electric.TenantSupervisor, + {Electric.TenantManager, router_opts}, {Bandit, - plug: - {Electric.Plug.Router, - storage: config.storage, - registry: Registry.ShapeChanges, - shape_cache: {Electric.ShapeCache, config.shape_cache_opts}, - get_service_status: &Electric.ServiceStatus.check/0, - inspector: config.inspector, - long_poll_timeout: 20_000, - max_age: Application.fetch_env!(:electric, :cache_max_age), - stale_age: Application.fetch_env!(:electric, :cache_stale_age), - allow_shape_deletion: Application.get_env(:electric, :allow_shape_deletion, false)}, + plug: {Electric.Plug.Router, router_opts}, port: Application.fetch_env!(:electric, :service_port), thousand_island_options: http_listener_options()} ], - prometheus_endpoint(Application.fetch_env!(:electric, :prometheus_port)), - [{Electric.Connection.Supervisor, connection_manager_opts}] + prometheus_endpoint(Application.fetch_env!(:electric, :prometheus_port)) ]) - Supervisor.start_link(children, - strategy: :one_for_one, - name: Electric.Supervisor - ) + {:ok, sup_pid} = + Supervisor.start_link(children, + strategy: :one_for_one, + name: Electric.Supervisor + ) + + if tenant_id do + connection_opts = Application.fetch_env!(:electric, :default_connection_opts) + Electric.TenantManager.create_tenant(tenant_id, connection_opts, tenant_opts) + end + + {:ok, sup_pid} end # This function is called once in the application's start() callback. It reads configuration # from the OTP application env, runs some pre-processing functions and stores the processed # configuration as a single map using `:persistent_term`. 
defp configure do - electric_instance_id = Application.fetch_env!(:electric, :electric_instance_id) + tenant_tables_name = Application.fetch_env!(:electric, :tenant_tables_name) + :ets.new(tenant_tables_name, [:public, :named_table, :set, {:read_concurrency, true}]) - {storage_module, storage_in_opts} = Application.fetch_env!(:electric, :storage) - storage_opts = storage_module.shared_opts(storage_in_opts) - storage = {storage_module, storage_opts} + electric_instance_id = Application.fetch_env!(:electric, :electric_instance_id) {kv_module, kv_fun, kv_params} = Application.fetch_env!(:electric, :persistent_kv) persistent_kv = apply(kv_module, kv_fun, [kv_params]) @@ -105,33 +90,9 @@ defmodule Electric.Application do slot_name = "electric_slot_#{replication_stream_id}" slot_temporary? = Application.get_env(:electric, :replication_slot_temporary?, false) - get_pg_version_fn = fn -> - Electric.Connection.Manager.get_pg_version(Electric.Connection.Manager) - end - - prepare_tables_mfa = - {Electric.Postgres.Configuration, :configure_tables_for_replication!, - [get_pg_version_fn, publication_name]} - - inspector = - {Electric.Postgres.Inspector.EtsInspector, server: Electric.Postgres.Inspector.EtsInspector} - - shape_cache_opts = [ - electric_instance_id: electric_instance_id, - storage: storage, - inspector: inspector, - prepare_tables_fn: prepare_tables_mfa, - chunk_bytes_threshold: Application.fetch_env!(:electric, :chunk_bytes_threshold), - log_producer: Electric.Replication.ShapeLogCollector.name(electric_instance_id), - consumer_supervisor: Electric.Shapes.ConsumerSupervisor.name(electric_instance_id), - registry: Registry.ShapeChanges - ] - config = %Electric.Application.Configuration{ electric_instance_id: electric_instance_id, - storage: storage, persistent_kv: persistent_kv, - connection_opts: Application.fetch_env!(:electric, :connection_opts), replication_opts: %{ stream_id: replication_stream_id, publication_name: publication_name, @@ -140,9 +101,7 @@ 
defmodule Electric.Application do }, pool_opts: %{ size: Application.fetch_env!(:electric, :db_pool_size) - }, - inspector: inspector, - shape_cache_opts: shape_cache_opts + } } Electric.Application.Configuration.save(config) diff --git a/packages/sync-service/lib/electric/application/configuration.ex b/packages/sync-service/lib/electric/application/configuration.ex index 36cbd45c64..709def2d45 100644 --- a/packages/sync-service/lib/electric/application/configuration.ex +++ b/packages/sync-service/lib/electric/application/configuration.ex @@ -6,13 +6,9 @@ defmodule Electric.Application.Configuration do defstruct ~w[ electric_instance_id - storage persistent_kv - connection_opts replication_opts pool_opts - inspector - shape_cache_opts ]a @type t :: %__MODULE__{} diff --git a/packages/sync-service/lib/electric/connection/manager.ex b/packages/sync-service/lib/electric/connection/manager.ex index 5fa08004cc..b9d1f59632 100644 --- a/packages/sync-service/lib/electric/connection/manager.ex +++ b/packages/sync-service/lib/electric/connection/manager.ex @@ -20,7 +20,8 @@ defmodule Electric.Connection.Manager do connection_opts: [...], replication_opts: [...], pool_opts: [...], - persistent_kv: ...} + timeline_opts: [...], + shape_cache_opts: [...]} ] Supervisor.start_link(children, strategy: :one_for_one) @@ -34,8 +35,10 @@ defmodule Electric.Connection.Manager do :replication_opts, # Database connection pool options :pool_opts, - # Application's persistent key-value storage reference - :persistent_kv, + # Options specific to `Electric.Timeline` + :timeline_opts, + # Options passed to the Shapes.Supervisor's start_link() function + :shape_cache_opts, # PID of the replication client :replication_client_pid, # PID of the Postgres connection lock @@ -55,7 +58,8 @@ defmodule Electric.Connection.Manager do # PostgreSQL system identifier :pg_system_identifier, # PostgreSQL timeline ID - :pg_timeline_id + :pg_timeline_id, + :tenant_id ] end @@ -70,17 +74,26 @@ defmodule 
Electric.Connection.Manager do | {:connection_opts, Keyword.t()} | {:replication_opts, Keyword.t()} | {:pool_opts, Keyword.t()} - | {:persistent_kv, map()} + | {:timeline_opts, Keyword.t()} + | {:shape_cache_opts, Keyword.t()} @type options :: [option] - @name __MODULE__ - @lock_status_logging_interval 10_000 @spec start_link(options) :: GenServer.on_start() def start_link(opts) do - GenServer.start_link(__MODULE__, opts, name: @name) + GenServer.start_link(__MODULE__, opts, name: name(opts)) + end + + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) + end + + def name(opts) do + electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + tenant_id = Keyword.fetch!(opts, :tenant_id) + name(electric_instance_id, tenant_id) end @doc """ @@ -127,18 +140,20 @@ defmodule Electric.Connection.Manager do |> Keyword.put(:connection_manager, self()) pool_opts = Keyword.fetch!(opts, :pool_opts) - - persistent_kv = Keyword.fetch!(opts, :persistent_kv) + timeline_opts = Keyword.fetch!(opts, :timeline_opts) + shape_cache_opts = Keyword.fetch!(opts, :shape_cache_opts) state = %State{ connection_opts: connection_opts, replication_opts: replication_opts, pool_opts: pool_opts, - persistent_kv: persistent_kv, + timeline_opts: timeline_opts, + shape_cache_opts: shape_cache_opts, pg_lock_acquired: false, backoff: {:backoff.init(1000, 10_000), nil}, - electric_instance_id: Keyword.fetch!(opts, :electric_instance_id) + electric_instance_id: Keyword.fetch!(opts, :electric_instance_id), + tenant_id: Keyword.fetch!(opts, :tenant_id) } # Try to acquire the connection lock on the replication slot @@ -188,11 +203,12 @@ defmodule Electric.Connection.Manager do end def handle_continue(:start_replication_client, %State{replication_client_pid: nil} = state) do - case start_replication_client( - state.electric_instance_id, - state.connection_opts, - state.replication_opts - ) do + opts = + state + |> 
Map.take([:electric_instance_id, :tenant_id, :replication_opts, :connection_opts]) + |> Map.to_list() + + case start_replication_client(opts) do {:ok, pid, connection_opts} -> state = %{state | replication_client_pid: pid, connection_opts: connection_opts} @@ -224,12 +240,18 @@ defmodule Electric.Connection.Manager do check_result = Electric.Timeline.check( {state.pg_system_identifier, state.pg_timeline_id}, - state.persistent_kv + state.timeline_opts ) + shape_cache_opts = + state.shape_cache_opts + |> Keyword.put(:purge_all_shapes?, check_result == :timeline_changed) + {:ok, shapes_sup_pid} = Electric.Connection.Supervisor.start_shapes_supervisor( - purge_all_shapes?: check_result == :timeline_changed + electric_instance_id: state.electric_instance_id, + tenant_id: state.tenant_id, + shape_cache_opts: shape_cache_opts ) # Everything is ready to start accepting and processing logical messages from Postgres. @@ -327,20 +349,18 @@ defmodule Electric.Connection.Manager do }} end - defp start_replication_client(electric_instance_id, connection_opts, replication_opts) do - case Electric.Postgres.ReplicationClient.start_link( - electric_instance_id, - connection_opts, - replication_opts - ) do + defp start_replication_client(opts) do + case Electric.Postgres.ReplicationClient.start_link(opts) do {:ok, pid} -> - {:ok, pid, connection_opts} + {:ok, pid, Keyword.fetch!(opts, :connection_opts)} {:error, %Postgrex.Error{message: "ssl not available"}} = error -> - if connection_opts[:sslmode] == :require do + sslmode = get_in(opts, [:connection_opts, :sslmode]) + + if sslmode == :require do error else - if connection_opts[:sslmode] do + if not is_nil(sslmode) do # Only log a warning when there's an explicit sslmode parameter in the database # config, meaning the user has requested a certain sslmode. 
Logger.warning( @@ -348,8 +368,9 @@ defmodule Electric.Connection.Manager do ) end - connection_opts = Keyword.put(connection_opts, :ssl, false) - start_replication_client(electric_instance_id, connection_opts, replication_opts) + opts + |> Keyword.update!(:connection_opts, &Keyword.put(&1, :ssl, false)) + |> start_replication_client() end error -> diff --git a/packages/sync-service/lib/electric/connection/supervisor.ex b/packages/sync-service/lib/electric/connection/supervisor.ex index 1e55638ab2..908757f242 100644 --- a/packages/sync-service/lib/electric/connection/supervisor.ex +++ b/packages/sync-service/lib/electric/connection/supervisor.ex @@ -20,10 +20,18 @@ defmodule Electric.Connection.Supervisor do use Supervisor - @name __MODULE__ + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) + end + + def name(opts) do + electric_instance_id = Access.fetch!(opts, :electric_instance_id) + tenant_id = Access.fetch!(opts, :tenant_id) + name(electric_instance_id, tenant_id) + end def start_link(opts) do - Supervisor.start_link(__MODULE__, opts, name: @name) + Supervisor.start_link(__MODULE__, opts, name: name(opts)) end def init(opts) do @@ -31,26 +39,29 @@ defmodule Electric.Connection.Supervisor do end def start_shapes_supervisor(opts) do - app_config = Electric.Application.Configuration.get() + electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + tenant_id = Keyword.fetch!(opts, :tenant_id) + shape_cache_opts = Keyword.fetch!(opts, :shape_cache_opts) + inspector = Keyword.fetch!(shape_cache_opts, :inspector) - shape_cache_opts = app_config.shape_cache_opts ++ Keyword.take(opts, [:purge_all_shapes?]) shape_cache_spec = {Electric.ShapeCache, shape_cache_opts} shape_log_collector_spec = {Electric.Replication.ShapeLogCollector, - electric_instance_id: app_config.electric_instance_id, inspector: app_config.inspector} + electric_instance_id: electric_instance_id, tenant_id: 
tenant_id, inspector: inspector} child_spec = Supervisor.child_spec( { Electric.Shapes.Supervisor, - electric_instance_id: app_config.electric_instance_id, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, shape_cache: shape_cache_spec, log_collector: shape_log_collector_spec }, restart: :temporary ) - Supervisor.start_child(@name, child_spec) + Supervisor.start_child(name(opts), child_spec) end end diff --git a/packages/sync-service/lib/electric/plug/add_database_plug.ex b/packages/sync-service/lib/electric/plug/add_database_plug.ex new file mode 100644 index 0000000000..89b4353d3d --- /dev/null +++ b/packages/sync-service/lib/electric/plug/add_database_plug.ex @@ -0,0 +1,193 @@ +defmodule Electric.Plug.AddDatabasePlug do + use Plug.Builder + use Plug.ErrorHandler + + # The halt/1 function is redefined further down below + import Plug.Conn, except: [halt: 1] + + alias OpenTelemetry.SemanticConventions, as: SC + + alias Electric.Telemetry.OpenTelemetry + alias Plug.Conn + + alias Electric.TenantManager + + require Logger + require SC.Trace + + defmodule Params do + alias Ecto.Changeset + use Ecto.Schema + import Ecto.Changeset + + @primary_key false + embedded_schema do + field(:database_url, :string) + field(:connection_params, :any, virtual: true) + field(:database_use_ipv6, :boolean, default: false) + field(:database_id, :string, autogenerate: {Electric.Utils, :uuid4, []}) + end + + def validate(params) do + %__MODULE__{} + |> cast(params, __schema__(:fields), message: fn _, _ -> "must be %{type}" end) + |> validate_required([:database_url, :database_id]) + |> validate_database_url() + |> apply_action(:validate) + |> case do + {:ok, params} -> + result = Map.from_struct(params) + + result = + if result.database_use_ipv6, + do: Map.update!(result, :connection_params, &Keyword.put(&1, :ipv6, true)), + else: result + + {:ok, result} + + {:error, changeset} -> + {:error, + Ecto.Changeset.traverse_errors(changeset, fn {msg, opts} -> + 
Regex.replace(~r"%{(\w+)}", msg, fn _, key -> + opts |> Keyword.get(String.to_existing_atom(key), key) |> to_string() + end) + end)} + end + end + + defp validate_database_url(changeset) do + case Changeset.fetch_change(changeset, :database_url) do + :error -> + changeset + + {:ok, value} -> + case Electric.ConfigParser.parse_postgresql_uri(value) do + {:ok, parsed} -> Changeset.put_change(changeset, :connection_params, parsed) + {:error, reason} -> Changeset.add_error(changeset, :database_url, reason) + end + end + end + end + + plug Plug.Parsers, + parsers: [:json], + json_decoder: Jason + + plug :put_resp_content_type, "application/json" + + # start_telemetry_span needs to always be the first plug after fetching query params. + plug :start_telemetry_span + + plug :validate_body + plug :create_tenant + + # end_telemetry_span needs to always be the last plug here. + plug :end_telemetry_span + + defp validate_body(%Conn{body_params: params} = conn, _) do + case Params.validate(params) do + {:ok, params} -> + %{conn | assigns: Map.merge(conn.assigns, params)} + + {:error, error_map} -> + conn + |> send_resp(400, Jason.encode_to_iodata!(error_map)) + |> halt() + end + end + + defp create_tenant(%Conn{assigns: %{database_id: tenant_id} = assigns} = conn, _) do + connection_opts = Electric.Utils.obfuscate_password(assigns.connection_params) + + OpenTelemetry.with_span("add_db.plug.create_tenant", [], fn -> + case TenantManager.create_tenant(tenant_id, connection_opts, conn.assigns.config) do + :ok -> + conn + |> send_resp(200, Jason.encode_to_iodata!(tenant_id)) + |> halt() + + {:error, {:tenant_already_exists, tenant_id}} -> + conn + |> send_resp(400, Jason.encode_to_iodata!("Database #{tenant_id} already exists.")) + |> halt() + + {:error, {:db_already_in_use, pg_id}} -> + conn + |> send_resp( + 400, + Jason.encode_to_iodata!("The database #{pg_id} is already in use by another tenant.") + ) + |> halt() + + {:error, error} -> + conn + |> send_resp(500, 
Jason.encode_to_iodata!(error)) + |> halt() + end + end) + end + + defp open_telemetry_attrs(%Conn{assigns: assigns} = conn) do + Electric.Plug.Utils.common_open_telemetry_attrs(conn) + |> Map.merge(%{ + "tenant.id" => assigns[:database_id], + "tenant.database_url" => assigns[:database_url] + }) + end + + # + ### Telemetry + # + + # Below, OpentelemetryTelemetry does the heavy lifting of setting up the span context in the + # current Elixir process to correctly attribute subsequent calls to OpenTelemetry.with_span() + # in this module as descendants of the root span, as they are all invoked in the same process + # unless a new process is spawned explicitly. + + # Start the root span for the shape request, serving as an ancestor for any subsequent + # sub-span. + defp start_telemetry_span(conn, _) do + OpentelemetryTelemetry.start_telemetry_span(OpenTelemetry, "plug_add_database", %{}, %{}) + add_span_attrs_from_conn(conn) + conn + end + + # Assign root span attributes based on the latest state of Plug.Conn and end the root span. + # + # We want to have all the relevant HTTP and shape request attributes on the root span. This + # is the place to assign them because we keep this plug last in the "plug pipeline" defined + # in this module. + defp end_telemetry_span(conn, _ \\ nil) do + add_span_attrs_from_conn(conn) + OpentelemetryTelemetry.end_telemetry_span(OpenTelemetry, %{}) + conn + end + + defp add_span_attrs_from_conn(conn) do + conn + |> open_telemetry_attrs() + |> OpenTelemetry.add_span_attributes() + end + + # This overrides Plug.Conn.halt/1 (which is deliberately "unimported" at the top of this + # module) so that we can record the response status in the OpenTelemetry span for this + # request. 
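The span-bracketing convention used throughout this plug (root span started by the first plug, ended by the last, with `halt/1` overridden so early exits still close the span) can be shown in isolation. This is a simplified sketch, not code from this PR; the module and span names are illustrative:

```elixir
# Sketch: bracket a plug pipeline with a root telemetry span.
# OpentelemetryTelemetry binds the span to the current process, so later
# OpenTelemetry.with_span/3 calls in the same process nest under it.
defmodule MyApp.TracedPlug do
  use Plug.Builder

  plug :start_span
  plug :do_work
  plug :end_span

  defp start_span(conn, _) do
    OpentelemetryTelemetry.start_telemetry_span(OpenTelemetry, "my_request", %{}, %{})
    conn
  end

  defp do_work(conn, _), do: assign(conn, :handled, true)

  defp end_span(conn, _) do
    OpentelemetryTelemetry.end_telemetry_span(OpenTelemetry, %{})
    conn
  end
end
```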
+ defp halt(conn) do + conn + |> end_telemetry_span() + |> Plug.Conn.halt() + end + + @impl Plug.ErrorHandler + def handle_errors(conn, error) do + OpenTelemetry.record_exception(error.reason, error.stack) + + error_str = Exception.format(error.kind, error.reason) + + conn + |> assign(:error_str, error_str) + |> end_telemetry_span() + + conn + end +end diff --git a/packages/sync-service/lib/electric/plug/delete_shape_plug.ex b/packages/sync-service/lib/electric/plug/delete_shape_plug.ex index 64ee3ca293..f8ecfee487 100644 --- a/packages/sync-service/lib/electric/plug/delete_shape_plug.ex +++ b/packages/sync-service/lib/electric/plug/delete_shape_plug.ex @@ -5,10 +5,13 @@ defmodule Electric.Plug.DeleteShapePlug do alias Electric.Shapes alias Electric.Plug.ServeShapePlug.Params + import Electric.Plug.TenantUtils + plug :fetch_query_params plug :put_resp_content_type, "application/json" plug :allow_shape_deletion + plug :load_tenant plug :validate_query_params plug :truncate_or_delete_shape diff --git a/packages/sync-service/lib/electric/plug/health_check_plug.ex b/packages/sync-service/lib/electric/plug/health_check_plug.ex index 0834b69693..c960d4508a 100644 --- a/packages/sync-service/lib/electric/plug/health_check_plug.ex +++ b/packages/sync-service/lib/electric/plug/health_check_plug.ex @@ -2,7 +2,10 @@ defmodule Electric.Plug.HealthCheckPlug do alias Plug.Conn require Logger use Plug.Builder + import Electric.Plug.TenantUtils + plug :fetch_query_params + plug :load_tenant plug :check_service_status plug :put_relevant_headers plug :send_response @@ -10,13 +13,13 @@ defmodule Electric.Plug.HealthCheckPlug do # Match service status to a status code and status message, # keeping the message name decoupled from the internal representation # of the status to ensure the API is stable - defp check_service_status(conn, _) do - get_service_status = Access.fetch!(conn.assigns.config, :get_service_status) + defp check_service_status(%Conn{assigns: %{config: tenant_config}} 
= conn, _) do + get_service_status = Access.fetch!(tenant_config, :get_service_status) {status_code, status_text} = case get_service_status.() do - :waiting -> {200, "waiting"} - :starting -> {200, "starting"} + :waiting -> {503, "waiting"} + :starting -> {503, "starting"} :active -> {200, "active"} :stopping -> {503, "stopping"} end diff --git a/packages/sync-service/lib/electric/plug/remove_database_plug.ex b/packages/sync-service/lib/electric/plug/remove_database_plug.ex new file mode 100644 index 0000000000..cd8adec1ce --- /dev/null +++ b/packages/sync-service/lib/electric/plug/remove_database_plug.ex @@ -0,0 +1,25 @@ +defmodule Electric.Plug.RemoveDatabasePlug do + use Plug.Builder + + alias Plug.Conn + alias Electric.TenantManager + + require Logger + + plug :put_resp_content_type, "application/json" + plug :delete_tenant + + defp delete_tenant(%Conn{path_params: %{"database_id" => tenant_id}} = conn, _) do + case TenantManager.delete_tenant(tenant_id, conn.assigns.config) do + :ok -> + conn + |> send_resp(200, Jason.encode_to_iodata!(tenant_id)) + |> halt() + + :not_found -> + conn + |> send_resp(404, Jason.encode_to_iodata!("Database #{tenant_id} not found.")) + |> halt() + end + end +end diff --git a/packages/sync-service/lib/electric/plug/router.ex b/packages/sync-service/lib/electric/plug/router.ex index fe0c664106..426632d1a5 100644 --- a/packages/sync-service/lib/electric/plug/router.ex +++ b/packages/sync-service/lib/electric/plug/router.ex @@ -21,8 +21,10 @@ defmodule Electric.Plug.Router do get "/v1/health", to: Electric.Plug.HealthCheckPlug - match _, - do: send_resp(conn, 404, "Not found") + post "/v1/admin/database", to: Electric.Plug.AddDatabasePlug + delete "/v1/admin/database/:database_id", to: Electric.Plug.RemoveDatabasePlug + + match _, do: send_resp(conn, 404, "Not found") def server_header(conn, version), do: conn |> Plug.Conn.put_resp_header("server", "ElectricSQL/#{version}") @@ -30,6 +32,9 @@ defmodule Electric.Plug.Router do def 
put_cors_headers(%Plug.Conn{path_info: ["v1", "shape", _ | _]} = conn, _opts), do: CORSHeaderPlug.call(conn, %{methods: ["GET", "HEAD", "DELETE", "OPTIONS"]}) + def put_cors_headers(%Plug.Conn{path_info: ["v1", "admin", _ | _]} = conn, _opts), + do: CORSHeaderPlug.call(conn, %{methods: ["GET", "POST", "DELETE", "OPTIONS"]}) + def put_cors_headers(conn, _opts), do: CORSHeaderPlug.call(conn, %{methods: ["GET", "HEAD"]}) end diff --git a/packages/sync-service/lib/electric/plug/serve_shape_plug.ex b/packages/sync-service/lib/electric/plug/serve_shape_plug.ex index 51d39f6442..58ee9395d5 100644 --- a/packages/sync-service/lib/electric/plug/serve_shape_plug.ex +++ b/packages/sync-service/lib/electric/plug/serve_shape_plug.ex @@ -4,8 +4,7 @@ defmodule Electric.Plug.ServeShapePlug do # The halt/1 function is redefined further down below import Plug.Conn, except: [halt: 1] - - alias OpenTelemetry.SemConv, as: SC + import Electric.Plug.TenantUtils alias Electric.Shapes alias Electric.Schema @@ -178,6 +177,7 @@ defmodule Electric.Plug.ServeShapePlug do # start_telemetry_span needs to always be the first plug after fetching query params. 
plug :start_telemetry_span plug :put_resp_content_type, "application/json" + plug :load_tenant plug :validate_query_params plug :load_shape_info plug :put_schema_header @@ -313,10 +313,10 @@ defmodule Electric.Plug.ServeShapePlug do # If chunk offsets are available, use those instead of the latest available offset # to optimize for cache hits and response sizes defp determine_log_chunk_offset(%Conn{assigns: assigns} = conn, _) do - %{config: config, active_shape_id: shape_id, offset: offset} = assigns + %{config: config, active_shape_id: shape_id, offset: offset, tenant_id: tenant_id} = assigns chunk_end_offset = - Shapes.get_chunk_end_log_offset(config, shape_id, offset) || assigns.last_offset + Shapes.get_chunk_end_log_offset(config, shape_id, offset, tenant_id) || assigns.last_offset conn |> assign(:chunk_end_offset, chunk_end_offset) @@ -432,14 +432,15 @@ defmodule Electric.Plug.ServeShapePlug do assigns: %{ chunk_end_offset: chunk_end_offset, active_shape_id: shape_id, + tenant_id: tenant_id, up_to_date: maybe_up_to_date } } = conn ) do - case Shapes.get_snapshot(conn.assigns.config, shape_id) do + case Shapes.get_snapshot(conn.assigns.config, shape_id, tenant_id) do {:ok, {offset, snapshot}} -> log = - Shapes.get_log_stream(conn.assigns.config, shape_id, + Shapes.get_log_stream(conn.assigns.config, shape_id, tenant_id, since: offset, up_to: chunk_end_offset ) @@ -475,12 +476,13 @@ defmodule Electric.Plug.ServeShapePlug do offset: offset, chunk_end_offset: chunk_end_offset, active_shape_id: shape_id, + tenant_id: tenant_id, up_to_date: maybe_up_to_date } } = conn ) do log = - Shapes.get_log_stream(conn.assigns.config, shape_id, + Shapes.get_log_stream(conn.assigns.config, shape_id, tenant_id, since: offset, up_to: chunk_end_offset ) @@ -547,8 +549,12 @@ defmodule Electric.Plug.ServeShapePlug do ref = make_ref() registry = conn.assigns.config[:registry] - Registry.register(registry, shape_id, ref) - Logger.debug("Client #{inspect(self())} is registered for 
changes to #{shape_id}") + tenant = conn.assigns.tenant_id + Registry.register(registry, {tenant, shape_id}, ref) + + Logger.debug( + "[Tenant #{tenant}]: Client #{inspect(self())} is registered for changes to #{shape_id}" + ) assign(conn, :new_changes_ref, ref) else @@ -597,16 +603,10 @@ defmodule Electric.Plug.ServeShapePlug do conn.query_params["shape_id"] || assigns[:active_shape_id] || assigns[:shape_id] end - query_params_map = - if is_struct(conn.query_params, Plug.Conn.Unfetched) do - %{} - else - Map.new(conn.query_params, fn {k, v} -> {"http.query_param.#{k}", v} end) - end - maybe_up_to_date = if up_to_date = assigns[:up_to_date], do: up_to_date != [] - %{ + Electric.Plug.Utils.common_open_telemetry_attrs(conn) + |> Map.merge(%{ "shape.id" => shape_id, "shape.where" => assigns[:where], "shape.root_table" => assigns[:root_table], @@ -619,54 +619,8 @@ defmodule Electric.Plug.ServeShapePlug do "shape_req.is_immediate_response" => assigns[:ot_is_immediate_response] || true, "shape_req.is_cached" => if(conn.status, do: conn.status == 304), "shape_req.is_error" => if(conn.status, do: conn.status >= 400), - "shape_req.is_up_to_date" => maybe_up_to_date, - "error.type" => assigns[:error_str], - "http.request_id" => assigns[:plug_request_id], - "http.query_string" => conn.query_string, - SC.ClientAttributes.client_address() => client_ip(conn), - SC.ServerAttributes.server_address() => conn.host, - SC.ServerAttributes.server_port() => conn.port, - SC.HTTPAttributes.http_request_method() => conn.method, - SC.HTTPAttributes.http_response_status_code() => conn.status, - SC.Incubating.HTTPAttributes.http_response_size() => assigns[:streaming_bytes_sent], - SC.NetworkAttributes.network_transport() => :tcp, - SC.NetworkAttributes.network_local_port() => conn.port, - SC.UserAgentAttributes.user_agent_original() => user_agent(conn), - SC.Incubating.URLAttributes.url_path() => conn.request_path, - SC.URLAttributes.url_scheme() => conn.scheme, - SC.URLAttributes.url_full() 
=> - %URI{ - scheme: to_string(conn.scheme), - host: conn.host, - port: conn.port, - path: conn.request_path, - query: conn.query_string - } - |> to_string() - } - |> Map.filter(fn {_k, v} -> not is_nil(v) end) - |> Map.merge(query_params_map) - |> Map.merge(Map.new(conn.req_headers, fn {k, v} -> {"http.request.header.#{k}", v} end)) - |> Map.merge(Map.new(conn.resp_headers, fn {k, v} -> {"http.response.header.#{k}", v} end)) - end - - defp client_ip(%Conn{remote_ip: remote_ip} = conn) do - case get_req_header(conn, "x-forwarded-for") do - [] -> - remote_ip - |> :inet_parse.ntoa() - |> to_string() - - [ip_address | _] -> - ip_address - end - end - - defp user_agent(%Conn{} = conn) do - case get_req_header(conn, "user-agent") do - [] -> "" - [head | _] -> head - end + "shape_req.is_up_to_date" => maybe_up_to_date + }) end # diff --git a/packages/sync-service/lib/electric/plug/tenant_utils.ex b/packages/sync-service/lib/electric/plug/tenant_utils.ex new file mode 100644 index 0000000000..9b49c74a79 --- /dev/null +++ b/packages/sync-service/lib/electric/plug/tenant_utils.ex @@ -0,0 +1,46 @@ +defmodule Electric.Plug.TenantUtils do + @moduledoc """ + Shared tenant-related plug functions used across Electric plugs. + """ + + use Plug.Builder + + alias Plug.Conn + alias Electric.TenantManager + + @doc """ + Load an appropriate tenant configuration into assigns based on the `database_id` query parameter. + """ + def load_tenant(%Conn{} = conn, _) do + # This is a no-op if they are already fetched. 
+ conn = Conn.fetch_query_params(conn) + + Map.get(conn.query_params, "database_id", :not_provided) + |> maybe_get_tenant(conn.assigns.config) + |> case do + {:ok, tenant_config} -> + conn + |> assign(:config, tenant_config) + |> assign(:tenant_id, tenant_config[:tenant_id]) + + {:error, :not_found} -> + conn + |> send_resp(404, Jason.encode_to_iodata!(~s|Database not found|)) + |> halt() + + {:error, :several_tenants} -> + conn + |> send_resp( + 400, + Jason.encode_to_iodata!( + "Database ID was not provided and there are multiple databases. Please specify a database ID using the `database_id` query parameter." + ) + ) + |> halt() + end + end + + defp maybe_get_tenant(:not_provided, config), do: TenantManager.get_only_tenant(config) + defp maybe_get_tenant(id, config) when is_binary(id), do: TenantManager.get_tenant(id, config) + defp maybe_get_tenant(_, _), do: {:error, :not_found} +end diff --git a/packages/sync-service/lib/electric/plug/utils.ex b/packages/sync-service/lib/electric/plug/utils.ex index 6f7942020e..bb401eaabd 100644 --- a/packages/sync-service/lib/electric/plug/utils.ex +++ b/packages/sync-service/lib/electric/plug/utils.ex @@ -47,6 +47,67 @@ defmodule Electric.Plug.Utils do end) end + alias OpenTelemetry.SemConv, as: SC + + def common_open_telemetry_attrs(%Plug.Conn{assigns: assigns} = conn) do + query_params_map = + if is_struct(conn.query_params, Plug.Conn.Unfetched) do + %{} + else + Map.new(conn.query_params, fn {k, v} -> {"http.query_param.#{k}", v} end) + end + + %{ + "tenant.id" => assigns[:tenant_id], + "error.type" => assigns[:error_str], + "http.request_id" => assigns[:plug_request_id], + "http.query_string" => conn.query_string, + SC.ClientAttributes.client_address() => client_ip(conn), + SC.ServerAttributes.server_address() => conn.host, + SC.ServerAttributes.server_port() => conn.port, + SC.HTTPAttributes.http_request_method() => conn.method, + SC.HTTPAttributes.http_response_status_code() => conn.status, + 
SC.Incubating.HTTPAttributes.http_response_size() => assigns[:streaming_bytes_sent], + SC.NetworkAttributes.network_transport() => :tcp, + SC.NetworkAttributes.network_local_port() => conn.port, + SC.UserAgentAttributes.user_agent_original() => user_agent(conn), + SC.Incubating.URLAttributes.url_path() => conn.request_path, + SC.URLAttributes.url_scheme() => conn.scheme, + SC.URLAttributes.url_full() => + %URI{ + scheme: to_string(conn.scheme), + host: conn.host, + port: conn.port, + path: conn.request_path, + query: conn.query_string + } + |> to_string() + } + |> Map.filter(fn {_k, v} -> not is_nil(v) end) + |> Map.merge(query_params_map) + |> Map.merge(Map.new(conn.req_headers, fn {k, v} -> {"http.request.header.#{k}", v} end)) + |> Map.merge(Map.new(conn.resp_headers, fn {k, v} -> {"http.response.header.#{k}", v} end)) + end + + defp client_ip(%Plug.Conn{remote_ip: remote_ip} = conn) do + case Plug.Conn.get_req_header(conn, "x-forwarded-for") do + [] -> + remote_ip + |> :inet_parse.ntoa() + |> to_string() + + [ip_address | _] -> + ip_address + end + end + + defp user_agent(%Plug.Conn{} = conn) do + case Plug.Conn.get_req_header(conn, "user-agent") do + [] -> "" + [head | _] -> head + end + end + defmodule CORSHeaderPlug do @behaviour Plug import Plug.Conn diff --git a/packages/sync-service/lib/electric/postgres/inspector.ex b/packages/sync-service/lib/electric/postgres/inspector.ex index f590c7e4f4..b2924579fa 100644 --- a/packages/sync-service/lib/electric/postgres/inspector.ex +++ b/packages/sync-service/lib/electric/postgres/inspector.ex @@ -56,7 +56,7 @@ defmodule Electric.Postgres.Inspector do Clean up all information about a given relation using a provided inspector. """ @spec clean(relation(), inspector()) :: true - def clean(relation, {module, opts}), do: module.clean_column_info(relation, opts) + def clean(relation, {module, opts}), do: module.clean(relation, opts) @doc """ Get columns that should be considered a PK for table. 
If the table diff --git a/packages/sync-service/lib/electric/postgres/inspector/ets_inspector.ex b/packages/sync-service/lib/electric/postgres/inspector/ets_inspector.ex index 80c187e38f..cfa93119bc 100644 --- a/packages/sync-service/lib/electric/postgres/inspector/ets_inspector.ex +++ b/packages/sync-service/lib/electric/postgres/inspector/ets_inspector.ex @@ -8,16 +8,38 @@ defmodule Electric.Postgres.Inspector.EtsInspector do ## Public API - def start_link(opts), - do: + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) + end + + def name(opts) do + case Keyword.fetch(opts, :name) do + {:ok, name} -> + name + + :error -> + electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + tenant_id = Keyword.fetch!(opts, :tenant_id) + name(electric_instance_id, tenant_id) + end + end + + def start_link(opts) do + {:ok, pid} = GenServer.start_link( __MODULE__, Map.new(opts) |> Map.put_new(:pg_info_table, @default_pg_info_table) - |> Map.put_new(:pg_relation_table, @default_pg_relation_table), - name: Access.get(opts, :name, __MODULE__) + |> Map.put_new(:pg_relation_table, @default_pg_relation_table) + |> Map.put_new_lazy(:tenant_tables_name, fn -> + Application.fetch_env!(:electric, :tenant_tables_name) + end), + name: name(opts) ) + {:ok, pid} + end + @impl Electric.Postgres.Inspector def load_relation(table, opts) do case relation_from_ets(table, opts) do @@ -30,10 +52,8 @@ defmodule Electric.Postgres.Inspector.EtsInspector do end defp clean_relation(rel, opts_or_state) do - pg_relation_ets_table = - Access.get(opts_or_state, :pg_relation_table, @default_pg_relation_table) - - pg_info_ets_table = Access.get(opts_or_state, :pg_info_table, @default_pg_info_table) + pg_relation_ets_table = get_relation_table(opts_or_state) + pg_info_ets_table = get_column_info_table(opts_or_state) # Delete all tables that are associated with the relation tables_from_ets(rel, opts_or_state) @@ -58,7 +78,7 
@@ defmodule Electric.Postgres.Inspector.EtsInspector do end defp clean_column_info(table, opts_or_state) do - ets_table = Access.get(opts_or_state, :pg_info_table, @default_pg_info_table) + ets_table = get_column_info_table(opts_or_state) :ets.delete(ets_table, {table, :columns}) end @@ -69,14 +89,45 @@ defmodule Electric.Postgres.Inspector.EtsInspector do clean_relation(relation, opts_or_state) end + # Removes the references to the tenant's ETS tables from the global ETS table + defp clean_tenant_info(opts) do + tenant_id = Access.fetch!(opts, :tenant_id) + tenant_tables_name = fetch_tenant_tables_name(opts) + + case :ets.whereis(tenant_tables_name) do + :undefined -> + true + + _ -> + :ets.delete(tenant_tables_name, {tenant_id, :pg_info_table}) + :ets.delete(tenant_tables_name, {tenant_id, :pg_relation_table}) + end + end + ## Internal API @impl GenServer def init(opts) do - pg_info_table = :ets.new(opts.pg_info_table, [:named_table, :public, :set]) - pg_relation_table = :ets.new(opts.pg_relation_table, [:named_table, :public, :bag]) + # Trap exits such that `terminate/2` is called + # when the parent process sends an exit signal + Process.flag(:trap_exit, true) + + # Each tenant creates its own ETS table. + # Name needs to be an atom but we don't want to dynamically create atoms. 
+ # Instead, we will use the reference to the table that is returned by `:ets.new` + pg_info_table = :ets.new(opts.pg_info_table, [:public, :set]) + pg_relation_table = :ets.new(opts.pg_relation_table, [:public, :bag]) + + # Store both references in a global ETS table so that we can retrieve them later + tenant_id = Access.fetch!(opts, :tenant_id) + tenant_tables_name = Access.fetch!(opts, :tenant_tables_name) + + :ets.insert(tenant_tables_name, {{tenant_id, :pg_info_table}, pg_info_table}) + :ets.insert(tenant_tables_name, {{tenant_id, :pg_relation_table}, pg_relation_table}) state = %{ + tenant_id: tenant_id, + tenant_tables_name: tenant_tables_name, pg_info_table: pg_info_table, pg_relation_table: pg_relation_table, pg_pool: opts.pool @@ -141,16 +192,21 @@ defmodule Electric.Postgres.Inspector.EtsInspector do e -> {:reply, {:error, e, __STACKTRACE__}, state} end + @impl GenServer + def terminate(_reason, state) do + clean_tenant_info(state) + end + @pg_rel_position 2 defp relation_from_ets(table, opts_or_state) do - ets_table = Access.get(opts_or_state, :pg_info_table, @default_pg_info_table) + ets_table = get_column_info_table(opts_or_state) :ets.lookup_element(ets_table, {table, :table_to_relation}, @pg_rel_position, :not_found) end @pg_table_idx 1 defp tables_from_ets(relation, opts_or_state) do - ets_table = Access.get(opts_or_state, :pg_relation_table, @default_pg_relation_table) + ets_table = get_relation_table(opts_or_state) :ets.lookup(ets_table, {relation, :relation_to_table}) |> Enum.map(&elem(&1, @pg_table_idx)) @@ -158,8 +214,46 @@ defmodule Electric.Postgres.Inspector.EtsInspector do @column_info_position 2 defp column_info_from_ets(table, opts_or_state) do - ets_table = Access.get(opts_or_state, :pg_info_table, @default_pg_info_table) + ets_table = get_column_info_table(opts_or_state) :ets.lookup_element(ets_table, {table, :columns}, @column_info_position, :not_found) end + + # When called from within the GenServer it is passed the state + # which 
contains the reference to the ETS table. + # When called from outside the GenServer it is passed the opts keyword list + @pg_info_table_ref_position 2 + def get_column_info_table(%{pg_info_table: ets_table}), do: ets_table + + def get_column_info_table(opts) do + tenant_id = Access.fetch!(opts, :tenant_id) + tenant_tables_name = fetch_tenant_tables_name(opts) + + :ets.lookup_element( + tenant_tables_name, + {tenant_id, :pg_info_table}, + @pg_info_table_ref_position + ) + end + + @pg_relation_table_ref_position 2 + def get_relation_table(%{pg_relation_table: ets_table}), do: ets_table + + def get_relation_table(opts) do + tenant_id = Access.fetch!(opts, :tenant_id) + tenant_tables_name = fetch_tenant_tables_name(opts) + + :ets.lookup_element( + tenant_tables_name, + {tenant_id, :pg_relation_table}, + @pg_relation_table_ref_position + ) + end + + def fetch_tenant_tables_name(opts) do + case Access.fetch(opts, :tenant_tables_name) do + :error -> Application.fetch_env!(:electric, :tenant_tables_name) + {:ok, tenant_tables_name} -> tenant_tables_name + end + end end diff --git a/packages/sync-service/lib/electric/postgres/replication_client.ex b/packages/sync-service/lib/electric/postgres/replication_client.ex index 2f3af43ff3..6df1d9231d 100644 --- a/packages/sync-service/lib/electric/postgres/replication_client.ex +++ b/packages/sync-service/lib/electric/postgres/replication_client.ex @@ -91,21 +91,24 @@ defmodule Electric.Postgres.ReplicationClient do @repl_msg_primary_keepalive ?k @repl_msg_standby_status_update ?r - def start_link(electric_instance_id, connection_opts, replication_opts) do + @spec start_link(Keyword.t()) :: :gen_statem.start_ret() + def start_link(opts) do + config = Map.new(opts) + # Disable the reconnection logic in Postgex.ReplicationConnection to force it to exit with # the connection error. Without this, we may observe undesirable restarts in tests between # one test process exiting and the next one starting. 
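The `get_column_info_table/1` and `get_relation_table/1` clauses above implement the two-tier lookup described in the PR checklist: a single named table maps `{tenant_id, kind}` keys to anonymous table references, so readers avoid both dynamic atom creation and a GenServer round-trip. A minimal sketch of the same pattern (table names and data here are illustrative, not from the PR):

```elixir
# One named table holds references to per-tenant anonymous ETS tables.
tenant_tables = :ets.new(:tenant_tables_example, [:named_table, :public, :set])

# Tenant startup: create an anonymous table and register its reference
# under a {tenant_id, kind} key.
pg_info_table = :ets.new(:pg_info_table, [:public, :set])
:ets.insert(tenant_tables, {{"tenant-a", :pg_info_table}, pg_info_table})

# Read path: resolve the reference (element 2 of the stored tuple), then
# hit the tenant's own table directly -- no GenServer call involved.
ref = :ets.lookup_element(tenant_tables, {"tenant-a", :pg_info_table}, 2)
true = ref == pg_info_table
```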
start_opts = [ - name: name(electric_instance_id), + name: name(config.electric_instance_id, config.tenant_id), auto_reconnect: false - ] ++ Electric.Utils.deobfuscate_password(connection_opts) + ] ++ Electric.Utils.deobfuscate_password(config.connection_opts) - Postgrex.ReplicationConnection.start_link(__MODULE__, replication_opts, start_opts) + Postgrex.ReplicationConnection.start_link(__MODULE__, config.replication_opts, start_opts) end - def name(electric_instance_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__) + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) end def start_streaming(client) do diff --git a/packages/sync-service/lib/electric/replication/shape_log_collector.ex b/packages/sync-service/lib/electric/replication/shape_log_collector.ex index acd2fbb06c..f281612b7e 100644 --- a/packages/sync-service/lib/electric/replication/shape_log_collector.ex +++ b/packages/sync-service/lib/electric/replication/shape_log_collector.ex @@ -14,6 +14,7 @@ defmodule Electric.Replication.ShapeLogCollector do @schema NimbleOptions.new!( electric_instance_id: [type: :atom, required: true], + tenant_id: [type: :string, required: true], inspector: [type: :mod_arg, required: true], # see https://hexdocs.pm/gen_stage/GenStage.html#c:init/1-options demand: [type: {:in, [:forward, :accumulate]}, default: :accumulate], @@ -23,12 +24,14 @@ defmodule Electric.Replication.ShapeLogCollector do def start_link(opts) do with {:ok, opts} <- NimbleOptions.validate(opts, @schema) do - GenStage.start_link(__MODULE__, Map.new(opts), name: name(opts[:electric_instance_id])) + GenStage.start_link(__MODULE__, Map.new(opts), + name: name(opts[:electric_instance_id], opts[:tenant_id]) + ) end end - def name(electric_instance_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__) + def name(electric_instance_id, tenant_id) do + 
Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) end # use `GenStage.call/2` here to make the event processing synchronous. diff --git a/packages/sync-service/lib/electric/service_status.ex b/packages/sync-service/lib/electric/service_status.ex index 4168508a29..43f858f633 100644 --- a/packages/sync-service/lib/electric/service_status.ex +++ b/packages/sync-service/lib/electric/service_status.ex @@ -1,12 +1,14 @@ defmodule Electric.ServiceStatus do @type status() :: :waiting | :starting | :active | :stopping - @spec check() :: status() - def check() do + @spec check(atom | String.t(), String.t()) :: status() + def check(electric_instance_id, tenant_id) do # Match the connection status ot a service status - currently # they are one and the same but keeping this decoupled for future # additions to conditions that determine service status - case Electric.Connection.Manager.get_status(Electric.Connection.Manager) do + conn_mgr = Electric.Connection.Manager.name(electric_instance_id, tenant_id) + + case Electric.Connection.Manager.get_status(conn_mgr) do :waiting -> :waiting :starting -> :starting :active -> :active diff --git a/packages/sync-service/lib/electric/shape_cache.ex b/packages/sync-service/lib/electric/shape_cache.ex index f4d846fe90..6edc944a7e 100644 --- a/packages/sync-service/lib/electric/shape_cache.ex +++ b/packages/sync-service/lib/electric/shape_cache.ex @@ -38,19 +38,15 @@ defmodule Electric.ShapeCache do @type shape_id :: Electric.ShapeCacheBehaviour.shape_id() - @default_shape_meta_table :shape_meta_table - - @genserver_name_schema {:or, [:atom, {:tuple, [:atom, :atom, :any]}]} + @name_schema_tuple {:tuple, [:atom, :atom, :any]} + @genserver_name_schema {:or, [:atom, @name_schema_tuple]} @schema NimbleOptions.new!( name: [ type: @genserver_name_schema, - default: __MODULE__ + required: false ], electric_instance_id: [type: :atom, required: true], - shape_meta_table: [ - type: :atom, - default: 
@default_shape_meta_table - ], + tenant_id: [type: :string, required: true], log_producer: [type: @genserver_name_schema, required: true], consumer_supervisor: [type: @genserver_name_schema, required: true], storage: [type: :mod_arg, required: true], @@ -58,7 +54,7 @@ defmodule Electric.ShapeCache do inspector: [type: :mod_arg, required: true], shape_status: [type: :atom, default: Electric.ShapeCache.ShapeStatus], registry: [type: {:or, [:atom, :pid]}, required: true], - db_pool: [type: {:or, [:atom, :pid]}, default: Electric.DbPool], + db_pool: [type: {:or, [:atom, :pid, @name_schema_tuple]}], run_with_conn_fn: [ type: {:fun, 2}, default: &Shapes.Consumer.Snapshotter.run_with_conn/2 @@ -71,15 +67,44 @@ defmodule Electric.ShapeCache do purge_all_shapes?: [type: :boolean, required: false] ) + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) + end + + def name(opts) do + electric_instance_id = Access.fetch!(opts, :electric_instance_id) + tenant_id = Access.fetch!(opts, :tenant_id) + name(electric_instance_id, tenant_id) + end + def start_link(opts) do with {:ok, opts} <- NimbleOptions.validate(opts, @schema) do - GenServer.start_link(__MODULE__, Map.new(opts), name: opts[:name]) + electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + tenant_id = Keyword.fetch!(opts, :tenant_id) + name = Keyword.get(opts, :name, name(electric_instance_id, tenant_id)) + + db_pool = + Keyword.get( + opts, + :db_pool, + Electric.Application.process_name( + Keyword.fetch!(opts, :electric_instance_id), + Keyword.fetch!(opts, :tenant_id), + Electric.DbPool + ) + ) + + GenServer.start_link( + __MODULE__, + Map.new(opts) |> Map.put(:db_pool, db_pool) |> Map.put(:name, name), + name: name + ) end end @impl Electric.ShapeCacheBehaviour def get_shape(shape, opts \\ []) do - table = Access.get(opts, :shape_meta_table, @default_shape_meta_table) + table = get_shape_meta_table(opts) shape_status = 
Access.get(opts, :shape_status, ShapeStatus) shape_status.get_existing_shape(table, shape) end @@ -90,7 +115,7 @@ defmodule Electric.ShapeCache do if shape_state = get_shape(shape, opts) do shape_state else - server = Access.get(opts, :server, __MODULE__) + server = Access.get(opts, :server, name(opts)) GenServer.call(server, {:create_or_wait_shape_id, shape}) end end @@ -99,7 +124,7 @@ defmodule Electric.ShapeCache do @spec update_shape_latest_offset(shape_id(), LogOffset.t(), opts :: keyword()) :: :ok | {:error, term()} def update_shape_latest_offset(shape_id, latest_offset, opts) do - meta_table = Access.get(opts, :shape_meta_table, @default_shape_meta_table) + meta_table = get_shape_meta_table(opts) shape_status = Access.get(opts, :shape_status, ShapeStatus) if shape_status.set_latest_offset(meta_table, shape_id, latest_offset) do @@ -120,30 +145,31 @@ defmodule Electric.ShapeCache do @impl Electric.ShapeCacheBehaviour @spec clean_shape(shape_id(), keyword()) :: :ok def clean_shape(shape_id, opts) do - server = Access.get(opts, :server, __MODULE__) + server = Access.get(opts, :server, name(opts)) GenServer.call(server, {:clean, shape_id}) end @impl Electric.ShapeCacheBehaviour @spec clean_all_shapes(keyword()) :: :ok def clean_all_shapes(opts) do - server = Access.get(opts, :server, __MODULE__) + server = Access.get(opts, :server, name(opts)) GenServer.call(server, {:clean_all}) end @impl Electric.ShapeCacheBehaviour @spec handle_truncate(shape_id(), keyword()) :: :ok def handle_truncate(shape_id, opts \\ []) do - server = Access.get(opts, :server, __MODULE__) + server = Access.get(opts, :server, name(opts)) GenServer.call(server, {:truncate, shape_id}) end @impl Electric.ShapeCacheBehaviour @spec await_snapshot_start(shape_id(), keyword()) :: :started | {:error, term()} def await_snapshot_start(shape_id, opts \\ []) when is_binary(shape_id) do - table = Access.get(opts, :shape_meta_table, @default_shape_meta_table) + table = get_shape_meta_table(opts) 
shape_status = Access.get(opts, :shape_status, ShapeStatus) electric_instance_id = Access.fetch!(opts, :electric_instance_id) + tenant_id = Access.fetch!(opts, :tenant_id) cond do shape_status.snapshot_started?(table, shape_id) -> @@ -153,39 +179,46 @@ defmodule Electric.ShapeCache do {:error, :unknown} true -> - server = Electric.Shapes.Consumer.name(electric_instance_id, shape_id) + server = Electric.Shapes.Consumer.name(electric_instance_id, tenant_id, shape_id) GenServer.call(server, :await_snapshot_start) end end @impl Electric.ShapeCacheBehaviour def has_shape?(shape_id, opts \\ []) do - table = Access.get(opts, :shape_meta_table, @default_shape_meta_table) + table = get_shape_meta_table(opts) shape_status = Access.get(opts, :shape_status, ShapeStatus) if shape_status.get_existing_shape(table, shape_id) do true else - server = Access.get(opts, :server, __MODULE__) + server = Access.get(opts, :server, name(opts)) GenServer.call(server, {:wait_shape_id, shape_id}) end end @impl GenServer def init(opts) do + # Each tenant creates its own ETS table for storing shape meta data. + # We don't use a named table to avoid creating atoms dynamically for each tenant. + # Instead, we use the reference to the table that is returned by `:ets.new`. + # This requires storing the reference in the GenServer and exposing it through a `get_shape_meta_table` function.
+ meta_table = :ets.new(:shape_meta_table, [:public, :ordered_set]) + {:ok, shape_status_state} = opts.shape_status.initialise( - shape_meta_table: opts.shape_meta_table, + shape_meta_table: meta_table, storage: opts.storage ) state = %{ name: opts.name, electric_instance_id: opts.electric_instance_id, + tenant_id: opts.tenant_id, storage: opts.storage, chunk_bytes_threshold: opts.chunk_bytes_threshold, inspector: opts.inspector, - shape_meta_table: opts.shape_meta_table, + shape_meta_table: meta_table, shape_status: opts.shape_status, db_pool: opts.db_pool, shape_status_state: shape_status_state, @@ -261,10 +294,16 @@ defmodule Electric.ShapeCache do {:reply, :ok, state} end + # Returns a reference to the ETS table that stores shape meta data for this tenant + def handle_call(:get_shape_meta_table, _from, %{shape_meta_table: table} = state) do + {:reply, table, state} + end + defp clean_up_shape(state, shape_id) do Electric.Shapes.ConsumerSupervisor.stop_shape_consumer( state.consumer_supervisor, state.electric_instance_id, + state.tenant_id, shape_id ) @@ -294,6 +333,7 @@ defmodule Electric.ShapeCache do state.consumer_supervisor, electric_instance_id: state.electric_instance_id, inspector: state.inspector, + tenant_id: state.tenant_id, shape_id: shape_id, shape: shape, shape_status: {state.shape_status, state.shape_status_state}, @@ -301,18 +341,29 @@ defmodule Electric.ShapeCache do chunk_bytes_threshold: state.chunk_bytes_threshold, log_producer: state.log_producer, shape_cache: - {__MODULE__, %{server: state.name, shape_meta_table: state.shape_meta_table}}, + {__MODULE__, + %{ + server: state.name, + shape_meta_table: state.shape_meta_table, + electric_instance_id: state.electric_instance_id, + tenant_id: state.tenant_id + }}, registry: state.registry, db_pool: state.db_pool, run_with_conn_fn: state.run_with_conn_fn, prepare_tables_fn: state.prepare_tables_fn, create_snapshot_fn: state.create_snapshot_fn ) do - consumer = 
Shapes.Consumer.name(state.electric_instance_id, shape_id) + consumer = Shapes.Consumer.name(state.electric_instance_id, state.tenant_id, shape_id) {:ok, snapshot_xmin, latest_offset} = Shapes.Consumer.initial_state(consumer) {:ok, pid, snapshot_xmin, latest_offset} end end + + defp get_shape_meta_table(opts) do + server = Access.get(opts, :server, name(opts)) + GenServer.call(server, :get_shape_meta_table) + end end diff --git a/packages/sync-service/lib/electric/shape_cache/crashing_file_storage.ex b/packages/sync-service/lib/electric/shape_cache/crashing_file_storage.ex index 25e7fb9422..ff9107fbd1 100644 --- a/packages/sync-service/lib/electric/shape_cache/crashing_file_storage.ex +++ b/packages/sync-service/lib/electric/shape_cache/crashing_file_storage.ex @@ -9,7 +9,7 @@ defmodule Electric.ShapeCache.CrashingFileStorage do @num_calls_until_crash_key :num_calls_until_crash - defdelegate for_shape(shape_id, opts), to: FileStorage + defdelegate for_shape(shape_id, tenant_id, opts), to: FileStorage defdelegate start_link(opts), to: FileStorage defdelegate set_shape_definition(shape, opts), to: FileStorage defdelegate get_all_stored_shapes(opts), to: FileStorage diff --git a/packages/sync-service/lib/electric/shape_cache/file_storage.ex b/packages/sync-service/lib/electric/shape_cache/file_storage.ex index 65c1d1e71e..52357fd614 100644 --- a/packages/sync-service/lib/electric/shape_cache/file_storage.ex +++ b/packages/sync-service/lib/electric/shape_cache/file_storage.ex @@ -26,6 +26,7 @@ defmodule Electric.ShapeCache.FileStorage do :shape_definition_dir, :snapshot_dir, :electric_instance_id, + :tenant_id, :extra_opts, version: @version ] @@ -34,33 +35,36 @@ defmodule Electric.ShapeCache.FileStorage do def shared_opts(opts) do storage_dir = Keyword.get(opts, :storage_dir, "./shapes") electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + tenant_id = Keyword.fetch!(opts, :tenant_id) - %{base_path: storage_dir, electric_instance_id:
electric_instance_id} + %{base_path: storage_dir, electric_instance_id: electric_instance_id, tenant_id: tenant_id} end @impl Electric.ShapeCache.Storage - def for_shape(shape_id, %FS{shape_id: shape_id} = opts) do + def for_shape(shape_id, _tenant_id, %FS{shape_id: shape_id} = opts) do opts end def for_shape( shape_id, + tenant_id, %{base_path: base_path, electric_instance_id: electric_instance_id} = opts ) do %FS{ base_path: base_path, shape_id: shape_id, - db: name(electric_instance_id, shape_id), - cubdb_dir: Path.join([base_path, shape_id, "cubdb"]), - snapshot_dir: Path.join([base_path, shape_id, "snapshots"]), - shape_definition_dir: Path.join([base_path, shape_id]), + db: name(electric_instance_id, tenant_id, shape_id), + cubdb_dir: Path.join([base_path, tenant_id, shape_id, "cubdb"]), + snapshot_dir: Path.join([base_path, tenant_id, shape_id, "snapshots"]), + shape_definition_dir: Path.join([base_path, tenant_id, shape_id]), electric_instance_id: electric_instance_id, + tenant_id: tenant_id, extra_opts: Map.get(opts, :extra_opts, %{}) } end - def name(electric_instance_id, shape_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__, shape_id) + defp name(electric_instance_id, tenant_id, shape_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__, shape_id) end def child_spec(%FS{} = opts) do @@ -120,12 +124,16 @@ defmodule Electric.ShapeCache.FileStorage do end @impl Electric.ShapeCache.Storage - def get_all_stored_shapes(%{base_path: base_path}) do - case File.ls(base_path) do + def get_all_stored_shapes(opts) do + shapes_dir = Path.join([opts.base_path, opts.tenant_id]) + + case File.ls(shapes_dir) do {:ok, shape_ids} -> Enum.reduce(shape_ids, %{}, fn shape_id, acc -> shape_def_path = - shape_definition_path(%{shape_definition_dir: Path.join(base_path, shape_id)}) + shape_definition_path(%{ + shape_definition_dir: Path.join([opts.base_path, opts.tenant_id, shape_id]) + }) with {:ok, 
shape_def_encoded} <- File.read(shape_def_path), {:ok, shape_def_json} <- Jason.decode(shape_def_encoded), diff --git a/packages/sync-service/lib/electric/shape_cache/in_memory_storage.ex b/packages/sync-service/lib/electric/shape_cache/in_memory_storage.ex index c2100f8928..491ae51f54 100644 --- a/packages/sync-service/lib/electric/shape_cache/in_memory_storage.ex +++ b/packages/sync-service/lib/electric/shape_cache/in_memory_storage.ex @@ -19,33 +19,39 @@ defmodule Electric.ShapeCache.InMemoryStorage do :log_table, :chunk_checkpoint_table, :shape_id, - :electric_instance_id + :electric_instance_id, + :tenant_id ] @impl Electric.ShapeCache.Storage def shared_opts(opts) do table_base_name = Access.get(opts, :table_base_name, __MODULE__) electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + tenant_id = Keyword.fetch!(opts, :tenant_id) - %{table_base_name: table_base_name, electric_instance_id: electric_instance_id} + %{ + table_base_name: table_base_name, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id + } end - def name(electric_instance_id, shape_id) when is_binary(shape_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__, shape_id) + def name(electric_instance_id, tenant_id, shape_id) when is_binary(shape_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__, shape_id) end @impl Electric.ShapeCache.Storage - def for_shape(shape_id, %{shape_id: shape_id} = opts) do + def for_shape(shape_id, _tenant_id, %{shape_id: shape_id} = opts) do opts end - def for_shape(shape_id, %{ + def for_shape(shape_id, tenant_id, %{ table_base_name: table_base_name, electric_instance_id: electric_instance_id }) do - snapshot_table_name = :"#{table_base_name}.Snapshot_#{shape_id}" - log_table_name = :"#{table_base_name}.Log_#{shape_id}" - chunk_checkpoint_table_name = :"#{table_base_name}.ChunkCheckpoint_#{shape_id}" + snapshot_table_name = :"#{table_base_name}.#{tenant_id}.Snapshot_#{shape_id}" 
+ log_table_name = :"#{table_base_name}.#{tenant_id}.Log_#{shape_id}" + chunk_checkpoint_table_name = :"#{table_base_name}.#{tenant_id}.ChunkCheckpoint_#{shape_id}" %__MODULE__{ table_base_name: table_base_name, @@ -53,7 +59,8 @@ defmodule Electric.ShapeCache.InMemoryStorage do snapshot_table: snapshot_table_name, log_table: log_table_name, chunk_checkpoint_table: chunk_checkpoint_table_name, - electric_instance_id: electric_instance_id + electric_instance_id: electric_instance_id, + tenant_id: tenant_id } end @@ -61,6 +68,7 @@ defmodule Electric.ShapeCache.InMemoryStorage do def start_link(%MS{} = opts) do if is_nil(opts.shape_id), do: raise("cannot start an un-attached storage instance") if is_nil(opts.electric_instance_id), do: raise("electric_instance_id cannot be nil") + if is_nil(opts.tenant_id), do: raise("tenant_id cannot be nil") Agent.start_link( fn -> @@ -70,7 +78,7 @@ defmodule Electric.ShapeCache.InMemoryStorage do chunk_checkpoint_table: storage_table(opts.chunk_checkpoint_table) } end, - name: name(opts.electric_instance_id, opts.shape_id) + name: name(opts.electric_instance_id, opts.tenant_id, opts.shape_id) ) end diff --git a/packages/sync-service/lib/electric/shape_cache/shape_status.ex b/packages/sync-service/lib/electric/shape_cache/shape_status.ex index 0984755de3..35b30d5d04 100644 --- a/packages/sync-service/lib/electric/shape_cache/shape_status.ex +++ b/packages/sync-service/lib/electric/shape_cache/shape_status.ex @@ -74,10 +74,8 @@ defmodule Electric.ShapeCache.ShapeStatus do @spec initialise(options()) :: {:ok, t()} | {:error, term()} def initialise(opts) do with {:ok, config} <- NimbleOptions.validate(opts, @schema), - {:ok, table_name} = Access.fetch(config, :shape_meta_table), + {:ok, meta_table} = Access.fetch(config, :shape_meta_table), {:ok, storage} = Access.fetch(config, :storage) do - meta_table = :ets.new(table_name, [:named_table, :public, :ordered_set]) - state = struct( __MODULE__, @@ -232,7 +230,7 @@ defmodule 
Electric.ShapeCache.ShapeStatus do snapshot_xmin(table, shape_id) end - def snapshot_xmin(meta_table, shape_id) when is_atom(meta_table) do + def snapshot_xmin(meta_table, shape_id) when is_reference(meta_table) or is_atom(meta_table) do turn_raise_into_error(fn -> :ets.lookup_element( meta_table, diff --git a/packages/sync-service/lib/electric/shape_cache/storage.ex b/packages/sync-service/lib/electric/shape_cache/storage.ex index 7c6fb1565a..81e29fa1b6 100644 --- a/packages/sync-service/lib/electric/shape_cache/storage.ex +++ b/packages/sync-service/lib/electric/shape_cache/storage.ex @@ -5,6 +5,7 @@ defmodule Electric.ShapeCache.Storage do alias Electric.Shapes.Querying alias Electric.Replication.LogOffset + @type tenant_id :: String.t() @type shape_id :: Electric.ShapeCacheBehaviour.shape_id() @type xmin :: Electric.ShapeCacheBehaviour.xmin() @type offset :: LogOffset.t() @@ -25,7 +26,7 @@ defmodule Electric.ShapeCache.Storage do @callback shared_opts(Keyword.t()) :: compiled_opts() @doc "Initialise shape-specific opts from the shared, global, configuration" - @callback for_shape(shape_id(), compiled_opts()) :: shape_opts() + @callback for_shape(shape_id(), tenant_id(), compiled_opts()) :: shape_opts() @doc "Start any processes required to run the storage backend" @callback start_link(shape_opts()) :: GenServer.on_start() @@ -113,8 +114,8 @@ defmodule Electric.ShapeCache.Storage do end @impl __MODULE__ - def for_shape(shape_id, {mod, opts}) do - {mod, mod.for_shape(shape_id, opts)} + def for_shape(shape_id, tenant_id, {mod, opts}) do + {mod, mod.for_shape(shape_id, tenant_id, opts)} end @impl __MODULE__ diff --git a/packages/sync-service/lib/electric/shapes.ex b/packages/sync-service/lib/electric/shapes.ex index 71675b73bd..7aedb3a949 100644 --- a/packages/sync-service/lib/electric/shapes.ex +++ b/packages/sync-service/lib/electric/shapes.ex @@ -10,9 +10,9 @@ defmodule Electric.Shapes do @doc """ Get snapshot for the shape ID """ - def get_snapshot(config, 
shape_id) do + def get_snapshot(config, shape_id, tenant_id) do {shape_cache, opts} = Access.get(config, :shape_cache, {ShapeCache, []}) - storage = shape_storage(config, shape_id) + storage = shape_storage(config, shape_id, tenant_id) if shape_cache.has_shape?(shape_id, opts) do with :started <- shape_cache.await_snapshot_start(shape_id, opts) do @@ -26,11 +26,11 @@ defmodule Electric.Shapes do @doc """ Get stream of the log since a given offset """ - def get_log_stream(config, shape_id, opts) do + def get_log_stream(config, shape_id, tenant_id, opts) do {shape_cache, shape_cache_opts} = Access.get(config, :shape_cache, {ShapeCache, []}) offset = Access.get(opts, :since, LogOffset.before_all()) max_offset = Access.get(opts, :up_to, LogOffset.last()) - storage = shape_storage(config, shape_id) + storage = shape_storage(config, shape_id, tenant_id) if shape_cache.has_shape?(shape_id, shape_cache_opts) do Storage.get_log_stream(offset, max_offset, storage) @@ -64,10 +64,10 @@ defmodule Electric.Shapes do If `nil` is returned, chunk is not complete and the shape's latest offset should be used """ - @spec get_chunk_end_log_offset(keyword(), shape_id(), LogOffset.t()) :: + @spec get_chunk_end_log_offset(keyword(), shape_id(), LogOffset.t(), String.t()) :: LogOffset.t() | nil - def get_chunk_end_log_offset(config, shape_id, offset) do - storage = shape_storage(config, shape_id) + def get_chunk_end_log_offset(config, shape_id, offset, tenant_id) do + storage = shape_storage(config, shape_id, tenant_id) Storage.get_chunk_end_log_offset(offset, storage) end @@ -102,7 +102,7 @@ defmodule Electric.Shapes do :ok end - defp shape_storage(config, shape_id) do - Storage.for_shape(shape_id, Access.fetch!(config, :storage)) + defp shape_storage(config, shape_id, tenant_id) do + Storage.for_shape(shape_id, tenant_id, Access.fetch!(config, :storage)) end end diff --git a/packages/sync-service/lib/electric/shapes/consumer.ex b/packages/sync-service/lib/electric/shapes/consumer.ex 
index b83f10e4fa..398b416730 100644 --- a/packages/sync-service/lib/electric/shapes/consumer.ex +++ b/packages/sync-service/lib/electric/shapes/consumer.ex @@ -16,12 +16,15 @@ defmodule Electric.Shapes.Consumer do @initial_log_state %{current_chunk_byte_size: 0} - def name(%{electric_instance_id: electric_instance_id, shape_id: shape_id} = _config) do - name(electric_instance_id, shape_id) + def name( + %{electric_instance_id: electric_instance_id, tenant_id: tenant_id, shape_id: shape_id} = + _config + ) do + name(electric_instance_id, tenant_id, shape_id) end - def name(electric_instance_id, shape_id) when is_binary(shape_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__, shape_id) + def name(electric_instance_id, tenant_id, shape_id) when is_binary(shape_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__, shape_id) end def initial_state(consumer) do @@ -33,16 +36,14 @@ defmodule Electric.Shapes.Consumer do # when the `shape_id` consumer has processed every transaction. 
# Transactions that we skip because of xmin logic do not generate # a notification - @spec monitor(atom(), ShapeCache.shape_id(), pid()) :: reference() - def monitor(electric_instance_id, shape_id, pid \\ self()) do - GenStage.call(name(electric_instance_id, shape_id), {:monitor, pid}) + @spec monitor(atom(), String.t(), ShapeCache.shape_id(), pid()) :: reference() + def monitor(electric_instance_id, tenant_id, shape_id, pid \\ self()) do + GenStage.call(name(electric_instance_id, tenant_id, shape_id), {:monitor, pid}) end - @spec whereis(atom(), ShapeCache.shape_id()) :: pid() | nil - def whereis(electric_instance_id, shape_id) do - electric_instance_id - |> name(shape_id) - |> GenServer.whereis() + @spec whereis(atom(), String.t(), ShapeCache.shape_id()) :: pid() | nil + def whereis(electric_instance_id, tenant_id, shape_id) do + GenServer.whereis(name(electric_instance_id, tenant_id, shape_id)) end def start_link(config) when is_map(config) do @@ -232,6 +233,7 @@ defmodule Electric.Shapes.Consumer do %{ shape: shape, shape_id: shape_id, + tenant_id: tenant_id, log_state: log_state, chunk_bytes_threshold: chunk_bytes_threshold, shape_cache: {shape_cache, shape_cache_opts}, @@ -270,7 +272,7 @@ defmodule Electric.Shapes.Consumer do shape_cache.update_shape_latest_offset(shape_id, last_log_offset, shape_cache_opts) - notify_listeners(registry, :new_changes, shape_id, last_log_offset) + notify_listeners(registry, :new_changes, tenant_id, shape_id, last_log_offset) {:cont, notify(txn, %{state | log_state: new_log_state})} @@ -283,10 +285,10 @@ defmodule Electric.Shapes.Consumer do end end - defp notify_listeners(registry, :new_changes, shape_id, latest_log_offset) do - Registry.dispatch(registry, shape_id, fn registered -> + defp notify_listeners(registry, :new_changes, tenant_id, shape_id, latest_log_offset) do + Registry.dispatch(registry, {tenant_id, shape_id}, fn registered -> Logger.debug(fn -> - "Notifying ~#{length(registered)} clients about new changes to 
#{shape_id}" + "[Tenant #{tenant_id}]: Notifying ~#{length(registered)} clients about new changes to #{shape_id}" end) for {pid, ref} <- registered, diff --git a/packages/sync-service/lib/electric/shapes/consumer/snapshotter.ex b/packages/sync-service/lib/electric/shapes/consumer/snapshotter.ex index efb423ac17..a9a68c147d 100644 --- a/packages/sync-service/lib/electric/shapes/consumer/snapshotter.ex +++ b/packages/sync-service/lib/electric/shapes/consumer/snapshotter.ex @@ -10,12 +10,12 @@ defmodule Electric.Shapes.Consumer.Snapshotter do require Logger - def name(%{electric_instance_id: electric_instance_id, shape_id: shape_id}) do - name(electric_instance_id, shape_id) + def name(%{electric_instance_id: electric_instance_id, tenant_id: tenant_id, shape_id: shape_id}) do + name(electric_instance_id, tenant_id, shape_id) end - def name(electric_instance_id, shape_id) when is_binary(shape_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__, shape_id) + def name(electric_instance_id, tenant_id, shape_id) when is_binary(shape_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__, shape_id) end def start_link(config) do @@ -27,9 +27,14 @@ defmodule Electric.Shapes.Consumer.Snapshotter do end def handle_continue(:start_snapshot, state) do - %{shape_id: shape_id, shape: shape, electric_instance_id: electric_instance_id} = state - - case Shapes.Consumer.whereis(electric_instance_id, shape_id) do + %{ + shape_id: shape_id, + shape: shape, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id + } = state + + case Shapes.Consumer.whereis(electric_instance_id, tenant_id, shape_id) do consumer when is_pid(consumer) -> if not Storage.snapshot_started?(state.storage) do %{ diff --git a/packages/sync-service/lib/electric/shapes/consumer/supervisor.ex b/packages/sync-service/lib/electric/shapes/consumer/supervisor.ex index 2ea43c6b1b..8826f0c73a 100644 --- 
a/packages/sync-service/lib/electric/shapes/consumer/supervisor.ex +++ b/packages/sync-service/lib/electric/shapes/consumer/supervisor.ex @@ -3,21 +3,23 @@ defmodule Electric.Shapes.Consumer.Supervisor do require Logger - @genserver_name_schema {:or, [:atom, {:tuple, [:atom, :atom, :any]}]} + @name_schema_tuple {:tuple, [:atom, :atom, :any]} + @genserver_name_schema {:or, [:atom, @name_schema_tuple]} # TODO: unify these with ShapeCache @schema NimbleOptions.new!( shape_id: [type: :string, required: true], shape: [type: {:struct, Electric.Shapes.Shape}, required: true], electric_instance_id: [type: :atom, required: true], inspector: [type: :mod_arg, required: true], + tenant_id: [type: :string, required: true], log_producer: [type: @genserver_name_schema, required: true], shape_cache: [type: :mod_arg, required: true], registry: [type: :atom, required: true], shape_status: [type: :mod_arg, required: true], storage: [type: :mod_arg, required: true], chunk_bytes_threshold: [type: :non_neg_integer, required: true], - db_pool: [type: {:or, [:atom, :pid]}, default: Electric.DbPool], run_with_conn_fn: [type: {:fun, 2}, default: &DBConnection.run/2], + db_pool: [type: {:or, [:atom, :pid, @name_schema_tuple]}, required: true], prepare_tables_fn: [type: {:or, [:mfa, {:fun, 2}]}, required: true], create_snapshot_fn: [ type: {:fun, 5}, @@ -25,12 +27,12 @@ defmodule Electric.Shapes.Consumer.Supervisor do ] ) - def name(electric_instance_id, shape_id) when is_binary(shape_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__, shape_id) + def name(electric_instance_id, tenant_id, shape_id) when is_binary(shape_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__, shape_id) end - def name(%{electric_instance_id: electric_instance_id, shape_id: shape_id}) do - name(electric_instance_id, shape_id) + def name(%{electric_instance_id: electric_instance_id, tenant_id: tenant_id, shape_id: shape_id}) do + name(electric_instance_id, 
tenant_id, shape_id) end def start_link(opts) do @@ -40,19 +42,25 @@ defmodule Electric.Shapes.Consumer.Supervisor do end end - def clean_and_stop(%{electric_instance_id: electric_instance_id, shape_id: shape_id}) do + def clean_and_stop(%{ + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + shape_id: shape_id + }) do # if consumer is present, terminate it gracefully, otherwise terminate supervisor - case GenServer.whereis(Electric.Shapes.Consumer.name(electric_instance_id, shape_id)) do - nil -> Supervisor.stop(name(electric_instance_id, shape_id)) + consumer = Electric.Shapes.Consumer.name(electric_instance_id, tenant_id, shape_id) + + case GenServer.whereis(consumer) do + nil -> Supervisor.stop(name(electric_instance_id, tenant_id, shape_id)) consumer_pid when is_pid(consumer_pid) -> GenServer.call(consumer_pid, :clean_and_stop) end end def init(config) when is_map(config) do - %{shape_id: shape_id, storage: {_, _} = storage} = + %{shape_id: shape_id, tenant_id: tenant_id, storage: {_, _} = storage} = config - shape_storage = Electric.ShapeCache.Storage.for_shape(shape_id, storage) + shape_storage = Electric.ShapeCache.Storage.for_shape(shape_id, tenant_id, storage) shape_config = %{config | storage: shape_storage} diff --git a/packages/sync-service/lib/electric/shapes/consumer_supervisor.ex b/packages/sync-service/lib/electric/shapes/consumer_supervisor.ex index 996f5b123a..cba5bf6689 100644 --- a/packages/sync-service/lib/electric/shapes/consumer_supervisor.ex +++ b/packages/sync-service/lib/electric/shapes/consumer_supervisor.ex @@ -8,16 +8,18 @@ defmodule Electric.Shapes.ConsumerSupervisor do require Logger - def name(electric_instance_id) do - Electric.Application.process_name(electric_instance_id, __MODULE__) + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) end def start_link(opts) do electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + 
tenant_id = Keyword.fetch!(opts, :tenant_id) DynamicSupervisor.start_link(__MODULE__, [], - name: Keyword.get(opts, :name, name(electric_instance_id)), - electric_instance_id: electric_instance_id + name: Keyword.get(opts, :name, name(electric_instance_id, tenant_id)), + electric_instance_id: electric_instance_id, + tenant_id: tenant_id ) end @@ -27,14 +29,15 @@ defmodule Electric.Shapes.ConsumerSupervisor do DynamicSupervisor.start_child(name, {Consumer.Supervisor, config}) end - def stop_shape_consumer(_name, electric_instance_id, shape_id) do - case GenServer.whereis(Consumer.Supervisor.name(electric_instance_id, shape_id)) do + def stop_shape_consumer(_name, electric_instance_id, tenant_id, shape_id) do + case GenServer.whereis(Consumer.Supervisor.name(electric_instance_id, tenant_id, shape_id)) do nil -> {:error, "no consumer for shape id #{inspect(shape_id)}"} pid when is_pid(pid) -> Consumer.Supervisor.clean_and_stop(%{ electric_instance_id: electric_instance_id, + tenant_id: tenant_id, shape_id: shape_id }) diff --git a/packages/sync-service/lib/electric/shapes/supervisor.ex b/packages/sync-service/lib/electric/shapes/supervisor.ex index 82250021ed..b7ab61a111 100644 --- a/packages/sync-service/lib/electric/shapes/supervisor.ex +++ b/packages/sync-service/lib/electric/shapes/supervisor.ex @@ -3,8 +3,18 @@ defmodule Electric.Shapes.Supervisor do require Logger + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) + end + + def name(opts) do + electric_instance_id = Access.fetch!(opts, :electric_instance_id) + tenant_id = Access.fetch!(opts, :tenant_id) + name(electric_instance_id, tenant_id) + end + def start_link(opts) do - name = Access.get(opts, :name, __MODULE__) + name = Access.get(opts, :name, name(opts)) Supervisor.start_link(__MODULE__, opts, name: name) end @@ -16,12 +26,14 @@ defmodule Electric.Shapes.Supervisor do shape_cache = Keyword.fetch!(opts, :shape_cache) 
log_collector = Keyword.fetch!(opts, :log_collector) electric_instance_id = Keyword.fetch!(opts, :electric_instance_id) + tenant_id = Keyword.fetch!(opts, :tenant_id) consumer_supervisor = Keyword.get( opts, :consumer_supervisor, - {Electric.Shapes.ConsumerSupervisor, [electric_instance_id: electric_instance_id]} + {Electric.Shapes.ConsumerSupervisor, + [electric_instance_id: electric_instance_id, tenant_id: tenant_id]} ) children = [consumer_supervisor, log_collector, shape_cache] diff --git a/packages/sync-service/lib/electric/tenant/dynamic_supervisor.ex b/packages/sync-service/lib/electric/tenant/dynamic_supervisor.ex new file mode 100644 index 0000000000..4fdc8d9cdd --- /dev/null +++ b/packages/sync-service/lib/electric/tenant/dynamic_supervisor.ex @@ -0,0 +1,36 @@ +defmodule Electric.TenantSupervisor do + @moduledoc """ + Responsible for managing tenant processes + """ + use DynamicSupervisor + + alias Electric.Tenant + + require Logger + + @name Electric.DynamicTenantSupervisor + + def start_link(_opts) do + DynamicSupervisor.start_link(__MODULE__, [], name: @name) + end + + def start_tenant(opts) do + Logger.debug(fn -> "Starting tenant for #{Access.fetch!(opts, :tenant_id)}" end) + DynamicSupervisor.start_child(@name, {Tenant.Supervisor, opts}) + end + + @doc """ + Stops all processes belonging to the given tenant.
+ """ + @spec stop_tenant(Keyword.t()) :: :ok + def stop_tenant(opts) do + sup = Tenant.Supervisor.name(opts) + :ok = Supervisor.stop(sup) + end + + @impl true + def init(_opts) do + Logger.debug(fn -> "Starting #{__MODULE__}" end) + DynamicSupervisor.init(strategy: :one_for_one) + end +end diff --git a/packages/sync-service/lib/electric/tenant/persistence.ex b/packages/sync-service/lib/electric/tenant/persistence.ex new file mode 100644 index 0000000000..a9799133f0 --- /dev/null +++ b/packages/sync-service/lib/electric/tenant/persistence.ex @@ -0,0 +1,94 @@ +defmodule Electric.Tenant.Persistence do + @moduledoc """ + Helper module to persist information about tenants. + """ + + alias Electric.Utils + + @doc """ + Persists a tenant configuration. + """ + @spec persist_tenant!(String.t(), Keyword.t(), Keyword.t()) :: :ok + def persist_tenant!(tenant_id, conn_opts, opts) do + load_tenants!(opts) + |> Map.put(tenant_id, conn_opts) + |> store_tenants(opts) + end + + @doc """ + Loads all tenants. + Returns a map of tenant ID to connection options. + """ + @spec load_tenants!(Keyword.t()) :: map() + def load_tenants!(opts) do + %{persistent_kv: kv} = + Keyword.get_lazy(opts, :app_config, fn -> Electric.Application.Configuration.get() end) + + case Electric.PersistentKV.get(kv, key(opts)) do + {:ok, tenants} -> + deserialise_tenants(tenants) + + {:error, :not_found} -> + %{} + + error -> + raise error + end + end + + @doc """ + Deletes a tenant from storage. 
+ """ + @spec delete_tenant!(String.t(), Keyword.t()) :: :ok + def delete_tenant!(tenant_id, opts) do + load_tenants!(opts) + |> Map.delete(tenant_id) + |> store_tenants(opts) + end + + defp store_tenants(tenants, opts) do + %{persistent_kv: kv} = + Keyword.get_lazy(opts, :app_config, fn -> Electric.Application.Configuration.get() end) + + serialised_tenants = serialise_tenants(tenants) + Electric.PersistentKV.set(kv, key(opts), serialised_tenants) + end + + defp serialise_tenants(tenants) do + tenants + |> Utils.map_values(&tenant_config_keyword_to_map/1) + |> Jason.encode!() + end + + defp deserialise_tenants(tenants) do + tenants + |> Jason.decode!() + |> Utils.map_values(&tenant_config_map_to_keyword/1) + end + + defp tenant_config_keyword_to_map(conn_opts) do + conn_opts + |> Electric.Utils.deobfuscate_password() + |> Enum.into(%{}) + end + + defp tenant_config_map_to_keyword(config_map) do + config_map + |> Enum.map(fn {k, v} -> + val = + if k == "sslmode" do + String.to_existing_atom(v) + else + v + end + + {String.to_existing_atom(k), val} + end) + |> Electric.Utils.obfuscate_password() + end + + defp key(opts) do + electric_instance_id = Access.fetch!(opts, :electric_instance_id) + "tenants_#{electric_instance_id}" + end +end diff --git a/packages/sync-service/lib/electric/tenant/supervisor.ex b/packages/sync-service/lib/electric/tenant/supervisor.ex new file mode 100644 index 0000000000..3ac1e596e4 --- /dev/null +++ b/packages/sync-service/lib/electric/tenant/supervisor.ex @@ -0,0 +1,99 @@ +defmodule Electric.Tenant.Supervisor do + use Supervisor, restart: :transient + + require Logger + + def name(electric_instance_id, tenant_id) do + Electric.Application.process_name(electric_instance_id, tenant_id, __MODULE__) + end + + def name(opts) do + electric_instance_id = Access.fetch!(opts, :electric_instance_id) + tenant_id = Access.fetch!(opts, :tenant_id) + name(electric_instance_id, tenant_id) + end + + def start_link(opts) do + config = Map.new(opts) + 
Supervisor.start_link(__MODULE__, config, name: name(config)) + end + + @impl true + def init(%{ + app_config: app_config, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + connection_opts: connection_opts, + inspector: inspector, + storage: storage + }) do + get_pg_version_fn = fn -> + server = Electric.Connection.Manager.name(electric_instance_id, tenant_id) + Electric.Connection.Manager.get_pg_version(server) + end + + prepare_tables_mfa = + {Electric.Postgres.Configuration, :configure_tables_for_replication!, + [get_pg_version_fn, app_config.replication_opts.publication_name]} + + shape_log_collector = + Electric.Replication.ShapeLogCollector.name(electric_instance_id, tenant_id) + + db_pool = + Electric.Application.process_name(electric_instance_id, tenant_id, Electric.DbPool) + + shape_cache_opts = [ + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + storage: storage, + inspector: inspector, + prepare_tables_fn: prepare_tables_mfa, + # TODO: move this to config + chunk_bytes_threshold: Application.fetch_env!(:electric, :chunk_bytes_threshold), + log_producer: shape_log_collector, + consumer_supervisor: + Electric.Shapes.ConsumerSupervisor.name(electric_instance_id, tenant_id), + registry: Registry.ShapeChanges + ] + + connection_manager_opts = [ + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + connection_opts: connection_opts, + replication_opts: [ + publication_name: app_config.replication_opts.publication_name, + try_creating_publication?: true, + slot_name: app_config.replication_opts.slot_name, + slot_temporary?: app_config.replication_opts.slot_temporary?, + transaction_received: + {Electric.Replication.ShapeLogCollector, :store_transaction, [shape_log_collector]}, + relation_received: + {Electric.Replication.ShapeLogCollector, :handle_relation_msg, [shape_log_collector]} + ], + pool_opts: [ + name: db_pool, + pool_size: app_config.pool_opts.size, + types: PgInterop.Postgrex.Types + ], + 
timeline_opts: [ + tenant_id: tenant_id, + persistent_kv: app_config.persistent_kv + ], + shape_cache_opts: shape_cache_opts + ] + + {_, opts} = inspector + tenant_tables_name = Access.fetch!(opts, :tenant_tables_name) + + children = [ + {Electric.Postgres.Inspector.EtsInspector, + pool: db_pool, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + tenant_tables_name: tenant_tables_name}, + {Electric.Connection.Supervisor, connection_manager_opts} + ] + + Supervisor.init(children, strategy: :one_for_one, auto_shutdown: :any_significant) + end +end diff --git a/packages/sync-service/lib/electric/tenant_manager.ex b/packages/sync-service/lib/electric/tenant_manager.ex new file mode 100644 index 0000000000..2ff9073f6c --- /dev/null +++ b/packages/sync-service/lib/electric/tenant_manager.ex @@ -0,0 +1,283 @@ +defmodule Electric.TenantManager do + use GenServer + require Logger + + alias Electric.Tenant.Persistence + + # Public API + + def name(electric_instance_id) + when is_binary(electric_instance_id) or is_atom(electric_instance_id) do + Electric.Application.process_name(electric_instance_id, "no tenant", __MODULE__) + end + + def name([]) do + __MODULE__ + end + + def name(opts) do + Access.get(opts, :electric_instance_id, []) + |> name() + end + + def start_link(opts) do + {:ok, pid} = + GenServer.start_link(__MODULE__, opts, + name: Keyword.get_lazy(opts, :name, fn -> name(opts) end) + ) + + recreate_tenants_from_disk!(opts) + + {:ok, pid} + end + + @doc """ + Retrieves the only tenant in the system. + If there are no tenants, it returns `{:error, :not_found}`. + If there are several tenants, it returns `{:error, :several_tenants}` + and we should use `get_tenant` instead. 
+ """ + @spec get_only_tenant(Keyword.t()) :: + {:ok, Keyword.t()} | {:error, :not_found} | {:error, :several_tenants} + def get_only_tenant(opts \\ []) do + server = Keyword.get(opts, :tenant_manager, name(opts)) + GenServer.call(server, :get_only_tenant) + end + + @doc """ + Retrieves a tenant by its ID. + """ + @spec get_tenant(String.t(), Keyword.t()) :: {:ok, Keyword.t()} | {:error, :not_found} + def get_tenant(tenant_id, opts \\ []) do + server = Keyword.get(opts, :tenant_manager, name(opts)) + GenServer.call(server, {:get_tenant, tenant_id}) + end + + @doc """ + Creates a new tenant for the provided database URL. + """ + @spec create_tenant(String.t(), Keyword.t(), Keyword.t()) :: + :ok | {:error, atom()} + def create_tenant(tenant_id, connection_opts, opts \\ []) do + app_config = + %{electric_instance_id: electric_instance_id, persistent_kv: persistent_kv} = + Keyword.get_lazy(opts, :app_config, fn -> Electric.Application.Configuration.get() end) + + inspector = + Access.get( + opts, + :inspector, + {Electric.Postgres.Inspector.EtsInspector, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + server: + Electric.Postgres.Inspector.EtsInspector.name( + electric_instance_id, + tenant_id + ), + tenant_tables_name: + Electric.Postgres.Inspector.EtsInspector.fetch_tenant_tables_name(opts)} + ) + + registry = Access.get(opts, :registry, Registry.ShapeChanges) + + get_storage = fn -> + {storage_module, storage_in_opts} = Application.fetch_env!(:electric, :storage) + + storage_opts = + storage_module.shared_opts(storage_in_opts |> Keyword.put(:tenant_id, tenant_id)) + + {storage_module, storage_opts} + end + + storage = Access.get(opts, :storage, get_storage.()) + + # Can't load pg_id here because the connection manager may still be busy + # connecting to the DB so it might not be known yet + # {pg_id, _} = Electric.Timeline.load_timeline(persistent_kv: persistent_kv) + get_pg_id = fn -> + hostname = Access.fetch!(connection_opts, :hostname) + 
port = Access.fetch!(connection_opts, :port) + database = Access.fetch!(connection_opts, :database) + hostname <> ":" <> to_string(port) <> "/" <> database + end + + pg_id = Access.get(opts, :pg_id, get_pg_id.()) + + shape_cache = + Access.get( + opts, + :shape_cache, + {Electric.ShapeCache, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + server: Electric.ShapeCache.name(electric_instance_id, tenant_id)} + ) + + get_service_status = + Access.get(opts, :get_service_status, fn -> + Electric.ServiceStatus.check(electric_instance_id, tenant_id) + end) + + long_poll_timeout = Access.get(opts, :long_poll_timeout, 20_000) + max_age = Access.get(opts, :max_age, Application.fetch_env!(:electric, :cache_max_age)) + stale_age = Access.get(opts, :stale_age, Application.fetch_env!(:electric, :cache_stale_age)) + + allow_shape_deletion = + Access.get( + opts, + :allow_shape_deletion, + Application.get_env(:electric, :allow_shape_deletion, false) + ) + + tenant = [ + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + pg_id: pg_id, + registry: registry, + storage: storage, + shape_cache: shape_cache, + get_service_status: get_service_status, + inspector: inspector, + long_poll_timeout: long_poll_timeout, + max_age: max_age, + stale_age: stale_age, + allow_shape_deletion: allow_shape_deletion + ] + + # Store the tenant in the tenant manager + store_tenant_opts = + opts ++ + [ + electric_instance_id: electric_instance_id, + persistent_kv: persistent_kv, + connection_opts: connection_opts + ] + + start_tenant_opts = [ + app_config: app_config, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + connection_opts: connection_opts, + inspector: inspector, + storage: storage + ] + + with :ok <- store_tenant(tenant, store_tenant_opts), + {:ok, _} <- Electric.TenantSupervisor.start_tenant(start_tenant_opts) do + :ok + end + end + + @doc """ + Stores the provided tenant in the tenant manager. 
+ """ + @spec store_tenant(Keyword.t(), Keyword.t()) :: :ok | {:error, atom()} + def store_tenant(tenant, opts) do + server = Keyword.get(opts, :tenant_manager, name(opts)) + + case GenServer.call(server, {:store_tenant, tenant}) do + {:tenant_already_exists, tenant_id} -> + {:error, {:tenant_already_exists, tenant_id}} + + {:db_already_in_use, pg_id} -> + {:error, {:db_already_in_use, pg_id}} + + :ok -> + Electric.Tenant.Persistence.persist_tenant!( + Keyword.fetch!(tenant, :tenant_id), + Keyword.fetch!(opts, :connection_opts), + opts + ) + end + end + + @doc """ + Deletes a tenant by its ID. + """ + @spec delete_tenant(String.t(), Keyword.t()) :: :ok | :not_found + def delete_tenant(tenant_id, opts \\ []) do + server = Keyword.get(opts, :tenant_manager, name(opts)) + + case GenServer.call(server, {:get_tenant, tenant_id}) do + {:ok, tenant} -> + pg_id = Access.fetch!(tenant, :pg_id) + :ok = GenServer.call(server, {:delete_tenant, tenant_id, pg_id}) + :ok = Electric.TenantSupervisor.stop_tenant(opts ++ [tenant_id: tenant_id]) + :ok = Electric.Tenant.Persistence.delete_tenant!(tenant_id, opts) + + {:error, :not_found} -> + :not_found + end + end + + ## Internal API + + @impl GenServer + def init(_opts) do + # state contains an index `tenants` of tenant_id -> tenant + # and a set `dbs` of PG identifiers used by tenants + {:ok, %{tenants: Map.new(), dbs: MapSet.new()}} + end + + @impl GenServer + def handle_call( + {:store_tenant, tenant}, + _from, + %{tenants: tenants, dbs: dbs} = state + ) do + tenant_id = tenant[:tenant_id] + pg_id = tenant[:pg_id] + + if Map.has_key?(tenants, tenant_id) do + {:reply, {:tenant_already_exists, tenant_id}, state} + else + if MapSet.member?(dbs, pg_id) do + {:reply, {:db_already_in_use, pg_id}, state} + else + {:reply, :ok, + %{tenants: Map.put(tenants, tenant_id, tenant), dbs: MapSet.put(dbs, pg_id)}} + end + end + end + + @impl GenServer + def handle_call(:get_only_tenant, _from, %{tenants: tenants} = state) do + case 
map_size(tenants) do + 1 -> + tenant = tenants |> Map.values() |> Enum.at(0) + {:reply, {:ok, tenant}, state} + + 0 -> + {:reply, {:error, :not_found}, state} + + _ -> + {:reply, {:error, :several_tenants}, state} + end + end + + @impl GenServer + def handle_call({:get_tenant, tenant_id}, _from, %{tenants: tenants} = state) do + if Map.has_key?(tenants, tenant_id) do + {:reply, {:ok, Map.get(tenants, tenant_id)}, state} + else + {:reply, {:error, :not_found}, state} + end + end + + @impl GenServer + def handle_call({:delete_tenant, tenant_id, pg_id}, _from, %{tenants: tenants, dbs: dbs}) do + {:reply, :ok, %{tenants: Map.delete(tenants, tenant_id), dbs: MapSet.delete(dbs, pg_id)}} + end + + defp recreate_tenants_from_disk!(opts) do + # Load the tenants from the persistent KV store + tenants = Persistence.load_tenants!(opts) + + # Recreate all tenants + Enum.each(tenants, fn {tenant_id, conn_opts} -> + Logger.info("Reloading tenant #{tenant_id} from storage") + :ok = create_tenant(tenant_id, conn_opts, opts) + end) + end +end diff --git a/packages/sync-service/lib/electric/timeline.ex b/packages/sync-service/lib/electric/timeline.ex index c66265a6cf..a882347039 100644 --- a/packages/sync-service/lib/electric/timeline.ex +++ b/packages/sync-service/lib/electric/timeline.ex @@ -12,8 +12,6 @@ defmodule Electric.Timeline do @type check_result :: :ok | :timeline_changed - @timeline_key "timeline_id" - @doc """ Checks that we're connected to the same Postgres DB as before and on the same timeline. TO this end, it checks the provided `pg_id` against the persisted PG ID. @@ -22,14 +20,14 @@ defmodule Electric.Timeline do If the timelines differ, that indicates that a Point In Time Recovery (PITR) has occurred and all shapes must be cleaned. If we fail to fetch timeline information, we also clean all shapes for safety as we can't be sure that Postgres and Electric are on the same timeline. 
""" - @spec check(timeline(), map()) :: check_result() - def check(pg_timeline, persistent_kv) do - electric_timeline = load_timeline(persistent_kv) + @spec check(timeline(), keyword()) :: check_result() + def check(pg_timeline, opts) do + electric_timeline = load_timeline(opts) # In any situation where the newly fetched timeline is different from the one we had # stored previously, overwrite the old one with the new one in our persistent KV store. if pg_timeline != electric_timeline do - :ok = store_timeline(pg_timeline, persistent_kv) + :ok = store_timeline(pg_timeline, opts) end # Now check for specific differences between the two timelines. @@ -62,11 +60,11 @@ defmodule Electric.Timeline do end # Loads the PG ID and timeline ID from persistent storage - @spec load_timeline(map()) :: timeline() - def load_timeline(persistent_kv) do - kv = make_serialized_kv(persistent_kv) + @spec load_timeline(Keyword.t()) :: timeline() + def load_timeline(opts) do + kv = make_serialized_kv(opts) - case PersistentKV.get(kv, @timeline_key) do + case PersistentKV.get(kv, timeline_key(opts)) do {:ok, [pg_id, timeline_id]} -> {pg_id, timeline_id} @@ -79,13 +77,20 @@ defmodule Electric.Timeline do end end - def store_timeline({pg_id, timeline_id}, persistent_kv) do - kv = make_serialized_kv(persistent_kv) - :ok = PersistentKV.set(kv, @timeline_key, [pg_id, timeline_id]) + @spec store_timeline(timeline(), Keyword.t()) :: :ok + def store_timeline({pg_id, timeline_id}, opts) do + kv = make_serialized_kv(opts) + :ok = PersistentKV.set(kv, timeline_key(opts), [pg_id, timeline_id]) end - defp make_serialized_kv(persistent_kv) do + defp make_serialized_kv(opts) do + kv_backend = Keyword.fetch!(opts, :persistent_kv) # defaults to using Jason encoder and decoder - PersistentKV.Serialized.new!(backend: persistent_kv) + PersistentKV.Serialized.new!(backend: kv_backend) + end + + defp timeline_key(opts) do + tenant_id = Access.fetch!(opts, :tenant_id) + "timeline_id_#{tenant_id}" end end diff 
--git a/packages/sync-service/lib/electric/utils.ex b/packages/sync-service/lib/electric/utils.ex index f26756ecb4..a14407b251 100644 --- a/packages/sync-service/lib/electric/utils.ex +++ b/packages/sync-service/lib/electric/utils.ex @@ -301,5 +301,11 @@ defmodule Electric.Utils do Keyword.update!(connection_opts, :password, fn passw -> passw.() end) end + @doc """ + Apply a function to each value of a map. + """ + @spec map_values(map(), (term() -> term())) :: map() + def map_values(map, fun), do: Map.new(map, fn {k, v} -> {k, fun.(v)} end) + defp wrap_in_fun(val), do: fn -> val end end diff --git a/packages/sync-service/test/electric/plug/add_database_plug_test.exs b/packages/sync-service/test/electric/plug/add_database_plug_test.exs new file mode 100644 index 0000000000..6fa2278db1 --- /dev/null +++ b/packages/sync-service/test/electric/plug/add_database_plug_test.exs @@ -0,0 +1,117 @@ +defmodule Electric.Plug.AddDatabasePlugTest do + use ExUnit.Case, async: false + import Plug.Conn + + alias Electric.Plug.AddDatabasePlug + + import Support.ComponentSetup + import Support.DbSetup + + alias Support.Mock + + import Mox + + setup :verify_on_exit! 
+  @moduletag :capture_log
+  @moduletag :tmp_dir
+
+  @conn_url "postgresql://postgres:password@foo:5432/electric"
+  @other_conn_url "postgresql://postgres:password@bar:5432/electric"
+
+  # setup do
+  #   start_link_supervised!({Registry, keys: :duplicate, name: @registry})
+  #   :ok
+  # end
+
+  def conn(ctx, method, body_params \\ nil) do
+    # Pass mock dependencies to the plug
+    config = [
+      storage: {Mock.Storage, []},
+      tenant_manager: Access.fetch!(ctx, :tenant_manager),
+      app_config: ctx.app_config,
+      tenant_tables_name: ctx.tenant_tables_name
+    ]
+
+    conn =
+      if body_params do
+        Plug.Test.conn(method, "/", body_params)
+      else
+        Plug.Test.conn(method, "/")
+      end
+
+    conn
+    |> assign(:config, config)
+  end
+
+  describe "AddDatabasePlug" do
+    setup :with_unique_db
+    setup :with_publication
+
+    setup :with_complete_stack_but_no_tenant
+    setup :with_app_config
+
+    test "returns 400 for invalid params", ctx do
+      conn =
+        ctx
+        |> conn("POST")
+        |> AddDatabasePlug.call([])
+
+      assert conn.status == 400
+
+      assert Jason.decode!(conn.resp_body) == %{
+               "database_url" => ["can't be blank"],
+               "database_id" => ["can't be blank"]
+             }
+    end
+
+    test "returns 200 when successfully adding a tenant", ctx do
+      conn =
+        ctx
+        |> conn("POST", %{database_id: ctx.tenant_id, database_url: @conn_url})
+        |> AddDatabasePlug.call([])
+
+      assert conn.status == 200
+      assert Jason.decode!(conn.resp_body) == ctx.tenant_id
+    end
+
+    test "returns 400 when tenant already exists", ctx do
+      conn =
+        ctx
+        |> conn("POST", %{database_id: ctx.tenant_id, database_url: @conn_url})
+        |> AddDatabasePlug.call([])
+
+      assert conn.status == 200
+      assert Jason.decode!(conn.resp_body) == ctx.tenant_id
+
+      # Now try creating another tenant with the same ID
+      conn =
+        ctx
+        |> conn("POST", %{database_id: ctx.tenant_id, database_url: @other_conn_url})
+        |> AddDatabasePlug.call([])
+
+      assert conn.status == 400
+      assert Jason.decode!(conn.resp_body) == "Database #{ctx.tenant_id} already exists."
+ end + + test "returns 400 when database is already in use", ctx do + conn = + ctx + |> conn("POST", %{database_id: ctx.tenant_id, database_url: @conn_url}) + |> AddDatabasePlug.call([]) + + assert conn.status == 200 + assert Jason.decode!(conn.resp_body) == ctx.tenant_id + + # Now try creating another tenant with the same database + conn = + ctx + |> conn("POST", %{database_id: "other_tenant", database_url: @conn_url}) + |> AddDatabasePlug.call([]) + + assert conn.status == 400 + + assert Jason.decode!(conn.resp_body) == + "The database foo:5432/electric is already in use by another tenant." + end + end +end diff --git a/packages/sync-service/test/electric/plug/delete_shape_plug_test.exs b/packages/sync-service/test/electric/plug/delete_shape_plug_test.exs index 6e698eed13..441b47b120 100644 --- a/packages/sync-service/test/electric/plug/delete_shape_plug_test.exs +++ b/packages/sync-service/test/electric/plug/delete_shape_plug_test.exs @@ -5,6 +5,9 @@ defmodule Electric.Plug.DeleteShapePlugTest do alias Electric.Plug.DeleteShapePlug alias Electric.Shapes.Shape + import Support.ComponentSetup + import Support.TestUtils, only: [with_electric_instance_id: 1] + alias Support.Mock import Mox @@ -25,6 +28,7 @@ defmodule Electric.Plug.DeleteShapePlugTest do } } @test_shape_id "test-shape-id" + @test_pg_id "12345" def load_column_info({"public", "users"}, _), do: {:ok, @test_shape.table_info[{"public", "users"}][:columns]} @@ -37,26 +41,46 @@ defmodule Electric.Plug.DeleteShapePlugTest do :ok end - def conn(method, "?" <> _ = query_string, allow \\ true) do + def conn(ctx, method, "?" 
<> _ = query_string, allow \\ true) do # Pass mock dependencies to the plug - config = %{ + tenant = [ + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, + pg_id: @test_pg_id, shape_cache: {Mock.ShapeCache, []}, + storage: {Mock.Storage, []}, inspector: {__MODULE__, []}, registry: @registry, - long_poll_timeout: 20_000, - max_age: 60, - stale_age: 300, + long_poll_timeout: Access.get(ctx, :long_poll_timeout, 20_000), + max_age: Access.get(ctx, :max_age, 60), + stale_age: Access.get(ctx, :stale_age, 300) + ] + + store_tenant(tenant, ctx) + + config = [ + storage: {Mock.Storage, []}, + tenant_manager: ctx.tenant_manager, allow_shape_deletion: allow - } + ] Plug.Test.conn(method, "/" <> query_string) |> assign(:config, config) end describe "DeleteShapePlug" do - test "returns 404 if shape deletion is not allowed" do + setup [ + :with_electric_instance_id, + :with_persistent_kv, + :with_minimal_app_config, + :with_tenant_manager, + :with_tenant_id + ] + + test "returns 404 if shape deletion is not allowed", ctx do conn = - conn("DELETE", "?root_table=.invalid_shape", false) + ctx + |> conn("DELETE", "?root_table=.invalid_shape", false) |> DeleteShapePlug.call([]) assert conn.status == 404 @@ -66,9 +90,10 @@ defmodule Electric.Plug.DeleteShapePlugTest do } end - test "returns 400 for invalid params" do + test "returns 400 for invalid params", ctx do conn = - conn("DELETE", "?root_table=.invalid_shape") + ctx + |> conn("DELETE", "?root_table=.invalid_shape") |> DeleteShapePlug.call([]) assert conn.status == 400 @@ -80,24 +105,36 @@ defmodule Electric.Plug.DeleteShapePlugTest do } end - test "should clean shape based on shape definition" do + test "returns 404 when database is not found", ctx do + conn = + ctx + |> conn(:delete, "?root_table=public.users&database_id=unknown") + |> DeleteShapePlug.call([]) + + assert conn.status == 404 + assert Jason.decode!(conn.resp_body) == ~s|Database not found| + end + + test "should clean shape based on 
shape definition", ctx do Mock.ShapeCache |> expect(:get_or_create_shape_id, fn @test_shape, _opts -> {@test_shape_id, 0} end) |> expect(:clean_shape, fn @test_shape_id, _ -> :ok end) conn = - conn(:delete, "?root_table=public.users") + ctx + |> conn(:delete, "?root_table=public.users") |> DeleteShapePlug.call([]) assert conn.status == 202 end - test "should clean shape based on shape_id" do + test "should clean shape based on shape_id", ctx do Mock.ShapeCache |> expect(:clean_shape, fn @test_shape_id, _ -> :ok end) conn = - conn(:delete, "?root_table=public.users&shape_id=#{@test_shape_id}") + ctx + |> conn(:delete, "?root_table=public.users&shape_id=#{@test_shape_id}") |> DeleteShapePlug.call([]) assert conn.status == 202 diff --git a/packages/sync-service/test/electric/plug/health_check_plug_test.exs b/packages/sync-service/test/electric/plug/health_check_plug_test.exs index d30c257c87..6abc3c8769 100644 --- a/packages/sync-service/test/electric/plug/health_check_plug_test.exs +++ b/packages/sync-service/test/electric/plug/health_check_plug_test.exs @@ -1,6 +1,8 @@ defmodule Electric.Plug.HealthCheckPlugTest do use ExUnit.Case, async: true import Plug.Conn + import Support.ComponentSetup + import Support.TestUtils alias Plug.Conn alias Electric.Plug.HealthCheckPlug @@ -14,20 +16,40 @@ defmodule Electric.Plug.HealthCheckPlugTest do :ok end - def conn(%{connection_status: connection_status} = _config) do + setup :with_electric_instance_id + setup :with_tenant_id + setup :with_persistent_kv + setup :with_minimal_app_config + setup :with_tenant_manager + + setup ctx do + tenant = [ + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, + pg_id: "foo", + registry: @registry, + get_service_status: fn -> ctx.connection_status end + ] + + store_tenant(tenant, ctx) + %{} + end + + def conn(ctx) do # Pass mock dependencies to the plug - config = %{ - get_service_status: fn -> connection_status end - } + config = [ + tenant_manager: 
ctx.tenant_manager + ] - Plug.Test.conn("GET", "/") + Plug.Test.conn("GET", "/?database_id=#{ctx.tenant_id}") |> assign(:config, config) end describe "HealthCheckPlug" do - test "has appropriate content and cache headers" do + @tag connection_status: :waiting + test "has appropriate content and cache headers", ctx do conn = - conn(%{connection_status: :waiting}) + conn(ctx) |> HealthCheckPlug.call([]) assert Conn.get_resp_header(conn, "content-type") == ["application/json"] @@ -37,36 +59,40 @@ defmodule Electric.Plug.HealthCheckPlugTest do ] end - test "returns 200 when in waiting mode" do + @tag connection_status: :waiting + test "returns 503 when in waiting mode", ctx do conn = - conn(%{connection_status: :waiting}) + conn(ctx) |> HealthCheckPlug.call([]) - assert conn.status == 200 + assert conn.status == 503 assert Jason.decode!(conn.resp_body) == %{"status" => "waiting"} end - test "returns 200 when in starting mode" do + @tag connection_status: :starting + test "returns 503 when in starting mode", ctx do conn = - conn(%{connection_status: :starting}) + conn(ctx) |> HealthCheckPlug.call([]) - assert conn.status == 200 + assert conn.status == 503 assert Jason.decode!(conn.resp_body) == %{"status" => "starting"} end - test "returns 200 when in active mode" do + @tag connection_status: :active + test "returns 200 when in active mode", ctx do conn = - conn(%{connection_status: :active}) + conn(ctx) |> HealthCheckPlug.call([]) assert conn.status == 200 assert Jason.decode!(conn.resp_body) == %{"status" => "active"} end - test "returns 503 when stopping" do + @tag connection_status: :stopping + test "returns 503 when stopping", ctx do conn = - conn(%{connection_status: :stopping}) + conn(ctx) |> HealthCheckPlug.call([]) assert conn.status == 503 diff --git a/packages/sync-service/test/electric/plug/remove_database_plug_test.exs b/packages/sync-service/test/electric/plug/remove_database_plug_test.exs new file mode 100644 index 0000000000..9d29d734d9 --- /dev/null +++ 
b/packages/sync-service/test/electric/plug/remove_database_plug_test.exs @@ -0,0 +1,87 @@ +defmodule Electric.Plug.RemoveDatabasePlugTest do + use ExUnit.Case, async: true + import Plug.Conn + + alias Electric.Plug.RemoveDatabasePlug + + import Support.ComponentSetup + import Support.DbSetup + + alias Support.Mock + + import Mox + + setup :verify_on_exit! + @moduletag :capture_log + @moduletag :tmp_dir + + def conn(ctx, method, database_id \\ nil) do + # Pass mock dependencies to the plug + config = [ + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, + storage: {Mock.Storage, []}, + tenant_manager: Access.fetch!(ctx, :tenant_manager), + app_config: ctx.app_config + ] + + conn = + if is_nil(database_id) do + Plug.Test.conn(method, "/") + else + Plug.Test.conn(method, "/#{database_id}") + |> Map.update!(:path_params, &Map.put(&1, "database_id", database_id)) + end + + conn + |> assign(:config, config) + end + + describe "RemoveDatabasePlug" do + setup :with_unique_db + setup :with_publication + + setup do + %{ + slot_name: "electric_remove_db_test_slot", + stream_id: "default" + } + end + + setup :with_complete_stack + setup :with_app_config + + test "returns 200 when successfully deleting a tenant", ctx do + # The tenant manager will try to shut down the tenant supervisor + # but we did not start a tenant supervisor in this test + # so we create one here + supervisor_name = Electric.Tenant.Supervisor.name(ctx.electric_instance_id, ctx.tenant_id) + Supervisor.start_link([], name: supervisor_name, strategy: :one_for_one) + + conn = + ctx + |> conn("DELETE", ctx.tenant_id) + |> RemoveDatabasePlug.call([]) + + assert conn.status == 200 + assert Jason.decode!(conn.resp_body) == ctx.tenant_id + + assert Electric.Tenant.Persistence.load_tenants!( + app_config: ctx.app_config, + electric_instance_id: ctx.electric_instance_id + ) == %{} + end + + test "returns 404 when tenant is not found", ctx do + tenant = "non-existing tenant" + + conn = + ctx 
+ |> conn("DELETE", tenant) + |> RemoveDatabasePlug.call([]) + + assert conn.status == 404 + assert Jason.decode!(conn.resp_body) == "Database #{tenant} not found." + end + end +end diff --git a/packages/sync-service/test/electric/plug/router_test.exs b/packages/sync-service/test/electric/plug/router_test.exs index a4e1963ab1..7b5dd3dbca 100644 --- a/packages/sync-service/test/electric/plug/router_test.exs +++ b/packages/sync-service/test/electric/plug/router_test.exs @@ -46,9 +46,9 @@ defmodule Electric.Plug.RouterTest do do: %{opts: Router.init(build_router_opts(ctx, get_service_status: fn -> :active end))} ) - test "GET returns health status of service", %{opts: opts} do + test "GET returns health status of service", %{opts: opts, tenant_id: tenant_id} do conn = - conn("GET", "/v1/health") + conn("GET", "/v1/health?database_id=#{tenant_id}") |> Router.call(opts) assert %{status: 200} = conn diff --git a/packages/sync-service/test/electric/plug/serve_shape_plug_test.exs b/packages/sync-service/test/electric/plug/serve_shape_plug_test.exs index b42fcbcb36..819ab2fffc 100644 --- a/packages/sync-service/test/electric/plug/serve_shape_plug_test.exs +++ b/packages/sync-service/test/electric/plug/serve_shape_plug_test.exs @@ -7,6 +7,9 @@ defmodule Electric.Plug.ServeShapePlugTest do alias Electric.Plug.ServeShapePlug alias Electric.Shapes.Shape + import Support.ComponentSetup + import Support.TestUtils, only: [with_electric_instance_id: 1] + alias Support.Mock import Mox @@ -36,6 +39,7 @@ defmodule Electric.Plug.ServeShapePlugTest do @first_offset LogOffset.first() @test_offset LogOffset.new(Lsn.from_integer(100), 0) @start_offset_50 LogOffset.new(Lsn.from_integer(50), 0) + @test_pg_id "12345" def load_column_info({"public", "users"}, _), do: {:ok, @test_shape.table_info[{"public", "users"}][:columns]} @@ -51,17 +55,27 @@ defmodule Electric.Plug.ServeShapePlugTest do :ok end - def conn(method, params, "?" <> _ = query_string) do + def conn(ctx, method, params, "?" 
<> _ = query_string) do # Pass mock dependencies to the plug - config = %{ + tenant = [ + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, + pg_id: @test_pg_id, shape_cache: {Mock.ShapeCache, []}, storage: {Mock.Storage, []}, inspector: {__MODULE__, []}, registry: @registry, - long_poll_timeout: 20_000, - max_age: 60, - stale_age: 300 - } + long_poll_timeout: Access.get(ctx, :long_poll_timeout, 20_000), + max_age: Access.get(ctx, :max_age, 60), + stale_age: Access.get(ctx, :stale_age, 300) + ] + + store_tenant(tenant, ctx) + + config = [ + storage: {Mock.Storage, []}, + tenant_manager: ctx.tenant_manager + ] Plug.Test.conn(method, "/" <> query_string, params) |> assign(:config, config) @@ -126,10 +140,21 @@ defmodule Electric.Plug.ServeShapePlugTest do assert Electric.Plug.ServeShapePlug.TimeUtils.seconds_since_oct9th_2024_next_interval(conn) != expected_interval end + end + + describe "serving shape" do + setup [ + :with_electric_instance_id, + :with_persistent_kv, + :with_minimal_app_config, + :with_tenant_manager, + :with_tenant_id + ] - test "returns 400 for invalid params" do + test "returns 400 for invalid params", ctx do conn = - conn(:get, %{"root_table" => ".invalid_shape"}, "?offset=invalid") + ctx + |> conn(:get, %{"root_table" => ".invalid_shape"}, "?offset=invalid") |> ServeShapePlug.call([]) assert conn.status == 400 @@ -142,11 +167,12 @@ defmodule Electric.Plug.ServeShapePlugTest do } end - test "returns 400 when table does not exist" do + test "returns 400 when table does not exist", ctx do # this will pass table name validation # but will fail to find the table conn = - conn(:get, %{"root_table" => "_val1d_schëmaΦ$.Φtàble"}, "?offset=-1") + ctx + |> conn(:get, %{"root_table" => "_val1d_schëmaΦ$.Φtàble"}, "?offset=-1") |> ServeShapePlug.call([]) assert conn.status == 400 @@ -156,9 +182,10 @@ defmodule Electric.Plug.ServeShapePlugTest do } end - test "returns 400 for missing shape_id when offset != -1" do + test "returns 
400 for missing shape_id when offset != -1", ctx do conn = - conn(:get, %{"root_table" => "public.users"}, "?offset=#{LogOffset.first()}") + ctx + |> conn(:get, %{"root_table" => "public.users"}, "?offset=#{LogOffset.first()}") |> ServeShapePlug.call([]) assert conn.status == 400 @@ -168,9 +195,10 @@ defmodule Electric.Plug.ServeShapePlugTest do } end - test "returns 400 for live request when offset == -1" do + test "returns 400 for live request when offset == -1", ctx do conn = - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{LogOffset.before_all()}&live=true" @@ -184,7 +212,17 @@ defmodule Electric.Plug.ServeShapePlugTest do } end - test "returns snapshot when offset is -1" do + test "returns 404 when database is not found", ctx do + conn = + ctx + |> conn(:get, %{"root_table" => "public.users"}, "?offset=-1&database_id=unknown") + |> ServeShapePlug.call([]) + + assert conn.status == 404 + assert Jason.decode!(conn.resp_body) == ~s|Database not found| + end + + test "returns snapshot when offset is -1", %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_or_create_shape_id, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -195,7 +233,7 @@ defmodule Electric.Plug.ServeShapePlugTest do next_offset = LogOffset.increment(@first_offset) Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @before_all_offset, _ -> next_offset end) @@ -207,7 +245,8 @@ defmodule Electric.Plug.ServeShapePlugTest do end) conn = - conn(:get, %{"root_table" => "public.users"}, "?offset=-1") + ctx + |> conn(:get, %{"root_table" => "public.users"}, "?offset=-1") |> ServeShapePlug.call([]) assert conn.status == 200 @@ -229,7 +268,7 @@ defmodule Electric.Plug.ServeShapePlugTest do assert Plug.Conn.get_resp_header(conn, "electric-shape-id") == [@test_shape_id] end - test "snapshot has correct cache control 
headers" do + test "snapshot has correct cache control headers", %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_or_create_shape_id, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -240,7 +279,7 @@ defmodule Electric.Plug.ServeShapePlugTest do next_offset = LogOffset.increment(@first_offset) Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @before_all_offset, _ -> next_offset end) @@ -255,9 +294,10 @@ defmodule Electric.Plug.ServeShapePlugTest do stale_age = 312 conn = - conn(:get, %{"root_table" => "public.users"}, "?offset=-1") - |> put_in_config(:max_age, max_age) - |> put_in_config(:stale_age, stale_age) + ctx + |> Map.put(:max_age, max_age) + |> Map.put(:stale_age, stale_age) + |> conn(:get, %{"root_table" => "public.users"}, "?offset=-1") |> ServeShapePlug.call([]) assert conn.status == 200 @@ -267,7 +307,7 @@ defmodule Electric.Plug.ServeShapePlugTest do ] end - test "response has correct schema header" do + test "response has correct schema header", %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_or_create_shape_id, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -278,7 +318,7 @@ defmodule Electric.Plug.ServeShapePlugTest do next_offset = LogOffset.increment(@first_offset) Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @before_all_offset, _ -> next_offset end) @@ -290,7 +330,8 @@ defmodule Electric.Plug.ServeShapePlugTest do end) conn = - conn(:get, %{"root_table" => "public.users"}, "?offset=-1") + ctx + |> conn(:get, %{"root_table" => "public.users"}, "?offset=-1") |> ServeShapePlug.call([]) assert Plug.Conn.get_resp_header(conn, "electric-schema") == [ @@ -298,7 +339,7 @@ defmodule 
Electric.Plug.ServeShapePlugTest do ] end - test "returns log when offset is >= 0" do + test "returns log when offset is >= 0", %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_shape, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -309,7 +350,7 @@ defmodule Electric.Plug.ServeShapePlugTest do next_next_offset = LogOffset.increment(next_offset) Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @start_offset_50, _ -> next_next_offset end) @@ -321,7 +362,8 @@ defmodule Electric.Plug.ServeShapePlugTest do end) conn = - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{@start_offset_50}&shape_id=#{@test_shape_id}" @@ -358,7 +400,8 @@ defmodule Electric.Plug.ServeShapePlugTest do assert Plug.Conn.get_resp_header(conn, "electric-chunk-up-to-date") == [] end - test "returns 304 Not Modified when If-None-Match matches ETag" do + test "returns 304 Not Modified when If-None-Match matches ETag", + %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_shape, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -366,13 +409,14 @@ defmodule Electric.Plug.ServeShapePlugTest do |> stub(:has_shape?, fn @test_shape_id, _opts -> true end) Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @start_offset_50, _ -> @test_offset end) conn = - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{@start_offset_50}&shape_id=#{@test_shape_id}" @@ -387,7 +431,7 @@ defmodule Electric.Plug.ServeShapePlugTest do assert conn.resp_body == "" end - test "handles live updates" do + test "handles live updates", %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_shape, fn @test_shape, _opts -> {@test_shape_id, 
@test_offset} @@ -399,7 +443,7 @@ defmodule Electric.Plug.ServeShapePlugTest do next_offset_str = "#{next_offset}" Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @test_offset, _ -> nil end) @@ -413,7 +457,8 @@ defmodule Electric.Plug.ServeShapePlugTest do task = Task.async(fn -> - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{@test_offset}&shape_id=#{@test_shape_id}&live=true" @@ -426,7 +471,7 @@ defmodule Electric.Plug.ServeShapePlugTest do Process.sleep(50) # Simulate new changes arriving - Registry.dispatch(@registry, @test_shape_id, fn [{pid, ref}] -> + Registry.dispatch(@registry, {ctx.tenant_id, @test_shape_id}, fn [{pid, ref}] -> send(pid, {ref, :new_changes, next_offset}) end) @@ -449,7 +494,7 @@ defmodule Electric.Plug.ServeShapePlugTest do assert Plug.Conn.get_resp_header(conn, "electric-schema") == [] end - test "handles shape rotation" do + test "handles shape rotation", %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_shape, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -459,7 +504,7 @@ defmodule Electric.Plug.ServeShapePlugTest do test_pid = self() Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @test_offset, _ -> nil end) @@ -470,7 +515,8 @@ defmodule Electric.Plug.ServeShapePlugTest do task = Task.async(fn -> - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{@test_offset}&shape_id=#{@test_shape_id}&live=true" @@ -483,7 +529,7 @@ defmodule Electric.Plug.ServeShapePlugTest do Process.sleep(50) # Simulate shape rotation - Registry.dispatch(@registry, @test_shape_id, fn [{pid, ref}] -> + Registry.dispatch(@registry, {ctx.tenant_id, @test_shape_id}, fn [{pid, ref}] -> send(pid, 
{ref, :shape_rotation}) end) @@ -497,7 +543,8 @@ defmodule Electric.Plug.ServeShapePlugTest do assert Plug.Conn.get_resp_header(conn, "electric-chunk-up-to-date") == [""] end - test "sends an up-to-date response after a timeout if no changes are observed" do + test "sends an up-to-date response after a timeout if no changes are observed", + %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_shape, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -505,7 +552,7 @@ defmodule Electric.Plug.ServeShapePlugTest do |> stub(:has_shape?, fn @test_shape_id, _opts -> true end) Mock.Storage - |> stub(:for_shape, fn @test_shape_id, _opts -> @test_opts end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, _opts -> @test_opts end) |> expect(:get_chunk_end_log_offset, fn @test_offset, _ -> nil end) @@ -514,12 +561,13 @@ defmodule Electric.Plug.ServeShapePlugTest do end) conn = - conn( + ctx + |> Map.put(:long_poll_timeout, 100) + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{@test_offset}&shape_id=#{@test_shape_id}&live=true" ) - |> put_in_config(:long_poll_timeout, 100) |> ServeShapePlug.call([]) assert conn.status == 204 @@ -533,7 +581,8 @@ defmodule Electric.Plug.ServeShapePlugTest do assert Plug.Conn.get_resp_header(conn, "electric-chunk-up-to-date") == [""] end - test "sends 409 with a redirect to existing shape when requested shape ID does not exist" do + test "sends 409 with a redirect to existing shape when requested shape ID does not exist", + %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_shape, fn @test_shape, _opts -> {@test_shape_id, @test_offset} @@ -541,10 +590,11 @@ defmodule Electric.Plug.ServeShapePlugTest do |> stub(:has_shape?, fn "foo", _opts -> false end) Mock.Storage - |> stub(:for_shape, fn "foo", opts -> {"foo", opts} end) + |> stub(:for_shape, fn "foo", ^tenant_id, opts -> {"foo", opts} end) conn = - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{"50_12"}&shape_id=foo" 
@@ -558,7 +608,8 @@ defmodule Electric.Plug.ServeShapePlugTest do assert get_resp_header(conn, "location") == ["/?shape_id=#{@test_shape_id}&offset=-1"] end - test "creates a new shape when shape ID does not exist and sends a 409 redirecting to the newly created shape" do + test "creates a new shape when shape ID does not exist and sends a 409 redirecting to the newly created shape", + %{tenant_id: tenant_id} = ctx do new_shape_id = "new-shape-id" Mock.ShapeCache @@ -569,10 +620,11 @@ defmodule Electric.Plug.ServeShapePlugTest do end) Mock.Storage - |> stub(:for_shape, fn new_shape_id, opts -> {new_shape_id, opts} end) + |> stub(:for_shape, fn new_shape_id, ^tenant_id, opts -> {new_shape_id, opts} end) conn = - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{"50_12"}&shape_id=#{@test_shape_id}" @@ -586,16 +638,18 @@ defmodule Electric.Plug.ServeShapePlugTest do assert get_resp_header(conn, "location") == ["/?shape_id=#{new_shape_id}&offset=-1"] end - test "sends 400 when shape ID does not match shape definition" do + test "sends 400 when shape ID does not match shape definition", + %{tenant_id: tenant_id} = ctx do Mock.ShapeCache |> expect(:get_shape, fn @test_shape, _opts -> nil end) |> stub(:has_shape?, fn @test_shape_id, _opts -> true end) Mock.Storage - |> stub(:for_shape, fn @test_shape_id, opts -> {@test_shape_id, opts} end) + |> stub(:for_shape, fn @test_shape_id, ^tenant_id, opts -> {@test_shape_id, opts} end) conn = - conn( + ctx + |> conn( :get, %{"root_table" => "public.users"}, "?offset=#{"50_12"}&shape_id=#{@test_shape_id}" @@ -611,13 +665,10 @@ defmodule Electric.Plug.ServeShapePlugTest do } end - test "sends 400 when omitting primary key columns in selection" do + test "sends 400 when omitting primary key columns in selection", ctx do conn = - conn( - :get, - %{"root_table" => "public.users", "columns" => "value"}, - "?offset=-1" - ) + ctx + |> conn(:get, %{"root_table" => "public.users", "columns" => "value"}, 
"?offset=-1") |> ServeShapePlug.call([]) assert conn.status == 400 @@ -627,13 +678,10 @@ defmodule Electric.Plug.ServeShapePlugTest do } end - test "sends 400 when selecting invalid columns" do + test "sends 400 when selecting invalid columns", ctx do conn = - conn( - :get, - %{"root_table" => "public.users", "columns" => "id,invalid"}, - "?offset=-1" - ) + ctx + |> conn(:get, %{"root_table" => "public.users", "columns" => "id,invalid"}, "?offset=-1") |> ServeShapePlug.call([]) assert conn.status == 400 @@ -643,7 +691,4 @@ defmodule Electric.Plug.ServeShapePlugTest do } end end - - defp put_in_config(%Plug.Conn{assigns: assigns} = conn, key, value), - do: %{conn | assigns: put_in(assigns, [:config, key], value)} end diff --git a/packages/sync-service/test/electric/postgres/inspector/ets_inspector_test.exs b/packages/sync-service/test/electric/postgres/inspector/ets_inspector_test.exs index 3c25b0cbad..b8e107a5fe 100644 --- a/packages/sync-service/test/electric/postgres/inspector/ets_inspector_test.exs +++ b/packages/sync-service/test/electric/postgres/inspector/ets_inspector_test.exs @@ -5,7 +5,7 @@ defmodule Electric.Postgres.Inspector.EtsInspectorTest do alias Electric.Postgres.Inspector.EtsInspector describe "load_relation/2" do - setup [:with_inspector, :with_basic_tables, :with_sql_execute] + setup [:with_tenant_id, :with_inspector, :with_basic_tables, :with_sql_execute] setup %{inspector: {EtsInspector, opts}} do {:ok, %{opts: opts, table: {"public", "items"}}} @@ -54,7 +54,7 @@ defmodule Electric.Postgres.Inspector.EtsInspectorTest do end describe "clean/2" do - setup [:with_inspector, :with_basic_tables, :with_sql_execute] + setup [:with_tenant_id, :with_inspector, :with_basic_tables, :with_sql_execute] setup %{ inspector: {EtsInspector, opts}, @@ -124,7 +124,7 @@ defmodule Electric.Postgres.Inspector.EtsInspectorTest do end describe "load_column_info/2" do - setup [:with_inspector, :with_basic_tables] + setup [:with_tenant_id, :with_inspector, 
:with_basic_tables] setup %{inspector: {EtsInspector, opts}} do {:ok, %{opts: opts, table: {"public", "items"}}} diff --git a/packages/sync-service/test/electric/postgres/replication_client_test.exs b/packages/sync-service/test/electric/postgres/replication_client_test.exs index 0bca05e6f9..925a2481f9 100644 --- a/packages/sync-service/test/electric/postgres/replication_client_test.exs +++ b/packages/sync-service/test/electric/postgres/replication_client_test.exs @@ -1,6 +1,7 @@ defmodule Electric.Postgres.ReplicationClientTest do use ExUnit.Case, async: true + import Support.ComponentSetup, only: [with_tenant_id: 1] import Support.DbSetup, except: [with_publication: 1] import Support.DbStructureSetup import Support.TestUtils, only: [with_electric_instance_id: 1] @@ -32,7 +33,7 @@ defmodule Electric.Postgres.ReplicationClientTest do %{dummy_pid: pid} end - setup :with_electric_instance_id + setup [:with_electric_instance_id, :with_tenant_id] describe "ReplicationClient init" do setup [:with_unique_db, :with_basic_tables] @@ -430,6 +431,11 @@ defmodule Electric.Postgres.ReplicationClientTest do ctx = Enum.into(overrides, ctx) {:ok, _pid} = - ReplicationClient.start_link(ctx.electric_instance_id, ctx.db_config, ctx.replication_opts) + ReplicationClient.start_link( + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, + connection_opts: ctx.db_config, + replication_opts: ctx.replication_opts + ) end end diff --git a/packages/sync-service/test/electric/replication/shape_log_collector_test.exs b/packages/sync-service/test/electric/replication/shape_log_collector_test.exs index fc21e5e4dd..34a999c7b2 100644 --- a/packages/sync-service/test/electric/replication/shape_log_collector_test.exs +++ b/packages/sync-service/test/electric/replication/shape_log_collector_test.exs @@ -8,15 +8,15 @@ defmodule Electric.Replication.ShapeLogCollectorTest do alias Electric.Replication.LogOffset alias Support.Mock - import Support.ComponentSetup, only: 
[with_in_memory_storage: 1] - import Support.TestUtils, only: [with_electric_instance_id: 1, full_test_name: 1] + import Support.ComponentSetup, only: [with_in_memory_storage: 1, with_tenant_id: 1] + import Support.TestUtils, only: [with_electric_instance_id: 1] import Mox @moduletag :capture_log setup :verify_on_exit! - setup [:with_electric_instance_id, :with_in_memory_storage] + setup [:with_electric_instance_id, :with_tenant_id, :with_in_memory_storage] setup(ctx) do # Start a test Registry @@ -26,6 +26,7 @@ defmodule Electric.Replication.ShapeLogCollectorTest do # Start the ShapeLogCollector process opts = [ electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, inspector: {Mock.Inspector, []}, demand: :forward ] @@ -36,11 +37,9 @@ defmodule Electric.Replication.ShapeLogCollectorTest do |> expect(:initialise, 1, fn _opts -> {:ok, %{}} end) |> expect(:list_shapes, 1, fn _ -> [] end) # allow the ShapeCache to call this mock - |> allow(self(), fn -> GenServer.whereis(Electric.ShapeCache) end) - - # We need a ShapeCache process because it is a GenStage consumer - # that handles the Relation events produced by ShapeLogCollector - shape_meta_table = :"shape_meta_#{full_test_name(ctx)}" + |> allow(self(), fn -> + GenServer.whereis(Electric.ShapeCache.name(ctx.electric_instance_id, ctx.tenant_id)) + end) shape_cache_opts = [ @@ -48,11 +47,12 @@ defmodule Electric.Replication.ShapeLogCollectorTest do chunk_bytes_threshold: Electric.ShapeCache.LogChunker.default_chunk_size_threshold(), inspector: {Mock.Inspector, []}, shape_status: Mock.ShapeStatus, - shape_meta_table: shape_meta_table, prepare_tables_fn: fn _, _ -> {:ok, [:ok]} end, - log_producer: ShapeLogCollector.name(ctx.electric_instance_id), + log_producer: ShapeLogCollector.name(ctx.electric_instance_id, ctx.tenant_id), electric_instance_id: ctx.electric_instance_id, - consumer_supervisor: Electric.Shapes.ConsumerSupervisor.name(ctx.electric_instance_id), + tenant_id: ctx.tenant_id, + 
consumer_supervisor: + Electric.Shapes.ConsumerSupervisor.name(ctx.electric_instance_id, ctx.tenant_id), registry: registry_name ] diff --git a/packages/sync-service/test/electric/shape_cache/shape_status_test.exs b/packages/sync-service/test/electric/shape_cache/shape_status_test.exs index f5e98b459d..546e77bdcd 100644 --- a/packages/sync-service/test/electric/shape_cache/shape_status_test.exs +++ b/packages/sync-service/test/electric/shape_cache/shape_status_test.exs @@ -25,7 +25,12 @@ defmodule Electric.ShapeCache.ShapeStatusTest do shape end - defp table_name, do: :"#{__MODULE__}-#{System.unique_integer([:positive, :monotonic])}" + defp table_name, + do: + :ets.new(:"#{__MODULE__}-#{System.unique_integer([:positive, :monotonic])}", [ + :public, + :ordered_set + ]) defp new_state(_ctx, opts \\ []) do table = Keyword.get(opts, :table, table_name()) diff --git a/packages/sync-service/test/electric/shape_cache/storage_implementations_test.exs b/packages/sync-service/test/electric/shape_cache/storage_implementations_test.exs index 8bf444b478..5e1d44b1a6 100644 --- a/packages/sync-service/test/electric/shape_cache/storage_implementations_test.exs +++ b/packages/sync-service/test/electric/shape_cache/storage_implementations_test.exs @@ -24,6 +24,7 @@ defmodule Electric.ShapeCache.StorageImplimentationsTest do } } } + @tenant_id "test_tenant" @snapshot_offset LogOffset.first() @snapshot_offset_encoded to_string(@snapshot_offset) @@ -534,7 +535,7 @@ defmodule Electric.ShapeCache.StorageImplimentationsTest do defp start_storage(%{module: module} = context) do opts = module |> opts(context) |> module.shared_opts() - shape_opts = module.for_shape(@shape_id, opts) + shape_opts = module.for_shape(@shape_id, @tenant_id, opts) {:ok, _} = module.start_link(shape_opts) {:ok, %{module: module, opts: shape_opts}} end @@ -544,7 +545,8 @@ defmodule Electric.ShapeCache.StorageImplimentationsTest do snapshot_ets_table: String.to_atom("snapshot_ets_table_#{Utils.uuid4()}"), 
log_ets_table: String.to_atom("log_ets_table_#{Utils.uuid4()}"), chunk_checkpoint_ets_table: String.to_atom("chunk_checkpoint_ets_table_#{Utils.uuid4()}"), - electric_instance_id: electric_instance_id + electric_instance_id: electric_instance_id, + tenant_id: @tenant_id ] end @@ -552,7 +554,8 @@ defmodule Electric.ShapeCache.StorageImplimentationsTest do [ db: String.to_atom("shape_mixed_disk_#{Utils.uuid4()}"), storage_dir: tmp_dir, - electric_instance_id: electric_instance_id + electric_instance_id: electric_instance_id, + tenant_id: @tenant_id ] end end diff --git a/packages/sync-service/test/electric/shape_cache/storage_test.exs b/packages/sync-service/test/electric/shape_cache/storage_test.exs index 501785bb7e..acd5084b0e 100644 --- a/packages/sync-service/test/electric/shape_cache/storage_test.exs +++ b/packages/sync-service/test/electric/shape_cache/storage_test.exs @@ -12,16 +12,17 @@ defmodule Electric.ShapeCache.StorageTest do test "should pass through the calls to the storage module" do storage = {Mock.Storage, :opts} shape_id = "test" + tenant_id = "test_tenant" Mock.Storage - |> Mox.stub(:for_shape, fn ^shape_id, :opts -> {shape_id, :opts} end) + |> Mox.stub(:for_shape, fn ^shape_id, ^tenant_id, :opts -> {shape_id, :opts} end) |> Mox.expect(:make_new_snapshot!, fn _, {^shape_id, :opts} -> :ok end) |> Mox.expect(:snapshot_started?, fn {^shape_id, :opts} -> true end) |> Mox.expect(:get_snapshot, fn {^shape_id, :opts} -> {1, []} end) |> Mox.expect(:append_to_log!, fn _, {^shape_id, :opts} -> :ok end) |> Mox.expect(:get_log_stream, fn _, _, {^shape_id, :opts} -> [] end) - shape_storage = Storage.for_shape(shape_id, storage) + shape_storage = Storage.for_shape(shape_id, tenant_id, storage) Storage.make_new_snapshot!([], shape_storage) Storage.snapshot_started?(shape_storage) @@ -32,15 +33,17 @@ defmodule Electric.ShapeCache.StorageTest do test "get_log_stream/4 correctly guards offset ordering" do storage = {Mock.Storage, :opts} + shape_id = "test" + 
tenant_id = "test_tenant" Mock.Storage - |> Mox.stub(:for_shape, fn shape_id, :opts -> {shape_id, :opts} end) + |> Mox.stub(:for_shape, fn shape_id, _, :opts -> {shape_id, :opts} end) |> Mox.expect(:get_log_stream, fn _, _, {_shape_id, :opts} -> [] end) l1 = LogOffset.new(26_877_408, 10) l2 = LogOffset.new(26_877_648, 0) - shape_storage = Storage.for_shape("test", storage) + shape_storage = Storage.for_shape(shape_id, tenant_id, storage) Storage.get_log_stream(l1, l2, shape_storage) diff --git a/packages/sync-service/test/electric/shape_cache_test.exs b/packages/sync-service/test/electric/shape_cache_test.exs index 32111b0342..ed3422370a 100644 --- a/packages/sync-service/test/electric/shape_cache_test.exs +++ b/packages/sync-service/test/electric/shape_cache_test.exs @@ -65,6 +65,7 @@ defmodule Electric.ShapeCacheTest do describe "get_or_create_shape_id/2" do setup [ :with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_no_pool, @@ -95,6 +96,7 @@ defmodule Electric.ShapeCacheTest do describe "get_or_create_shape_id/2 shape initialization" do setup [ :with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_registry, @@ -117,7 +119,7 @@ defmodule Electric.ShapeCacheTest do assert offset == @zero_offset assert :started = ShapeCache.await_snapshot_start(shape_id, opts) Process.sleep(100) - shape_storage = Storage.for_shape(shape_id, storage) + shape_storage = Storage.for_shape(shape_id, ctx.tenant_id, storage) assert Storage.snapshot_started?(shape_storage) end @@ -202,6 +204,7 @@ defmodule Electric.ShapeCacheTest do describe "get_or_create_shape_id/2 against real db" do setup [ :with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_registry, @@ -224,10 +227,14 @@ defmodule Electric.ShapeCacheTest do :ok end - test "creates initial snapshot from DB data", %{storage: storage, shape_cache_opts: opts} do + test "creates initial snapshot from DB 
data", %{ + storage: storage, + shape_cache_opts: opts, + tenant_id: tenant_id + } do {shape_id, _} = ShapeCache.get_or_create_shape_id(@shape, opts) assert :started = ShapeCache.await_snapshot_start(shape_id, opts) - storage = Storage.for_shape(shape_id, storage) + storage = Storage.for_shape(shape_id, tenant_id, storage) assert {@zero_offset, stream} = Storage.get_snapshot(storage) assert [%{"value" => %{"value" => "test1"}}, %{"value" => %{"value" => "test2"}}] = @@ -247,7 +254,8 @@ defmodule Electric.ShapeCacheTest do test "uses correct display settings when querying initial data", %{ pool: pool, storage: storage, - shape_cache_opts: opts + shape_cache_opts: opts, + tenant_id: tenant_id } do shape = update_in( @@ -290,7 +298,7 @@ defmodule Electric.ShapeCacheTest do {shape_id, _} = ShapeCache.get_or_create_shape_id(shape, opts) assert :started = ShapeCache.await_snapshot_start(shape_id, opts) - storage = Storage.for_shape(shape_id, storage) + storage = Storage.for_shape(shape_id, tenant_id, storage) assert {@zero_offset, stream} = Storage.get_snapshot(storage) assert [ @@ -309,7 +317,11 @@ defmodule Electric.ShapeCacheTest do } = map end - test "updates latest offset correctly", %{shape_cache_opts: opts, storage: storage} do + test "updates latest offset correctly", %{ + shape_cache_opts: opts, + tenant_id: tenant_id, + storage: storage + } do {shape_id, initial_offset} = ShapeCache.get_or_create_shape_id(@shape, opts) assert :started = ShapeCache.await_snapshot_start(shape_id, opts) @@ -329,7 +341,7 @@ defmodule Electric.ShapeCacheTest do assert offset_after_log_entry == expected_offset_after_log_entry # Stop snapshot process gracefully to prevent errors being logged in the test - storage = Storage.for_shape(shape_id, storage) + storage = Storage.for_shape(shape_id, tenant_id, storage) {_, stream} = Storage.get_snapshot(storage) Stream.run(stream) end @@ -367,6 +379,7 @@ defmodule Electric.ShapeCacheTest do describe "list_shapes/1" do setup [ 
:with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_registry, @@ -380,7 +393,7 @@ defmodule Electric.ShapeCacheTest do prepare_tables_fn: @prepare_tables_noop ) - meta_table = Keyword.fetch!(opts, :shape_meta_table) + meta_table = Access.fetch!(opts, :shape_meta_table) assert ShapeCache.list_shapes(%{shape_meta_table: meta_table}) == [] end @@ -399,7 +412,7 @@ defmodule Electric.ShapeCacheTest do {shape_id, _} = ShapeCache.get_or_create_shape_id(@shape, opts) assert :started = ShapeCache.await_snapshot_start(shape_id, opts) - meta_table = Keyword.fetch!(opts, :shape_meta_table) + meta_table = Access.fetch!(opts, :shape_meta_table) assert [{^shape_id, @shape}] = ShapeCache.list_shapes(%{shape_meta_table: meta_table}) assert {:ok, 10} = ShapeStatus.snapshot_xmin(meta_table, shape_id) end @@ -426,7 +439,7 @@ defmodule Electric.ShapeCacheTest do # Wait until we get to the waiting point in the snapshot assert_receive {:waiting_point, ref, pid} - meta_table = Keyword.fetch!(opts, :shape_meta_table) + meta_table = Access.fetch!(opts, :shape_meta_table) assert [{^shape_id, @shape}] = ShapeCache.list_shapes(%{shape_meta_table: meta_table}) send(pid, {:continue, ref}) @@ -439,6 +452,7 @@ defmodule Electric.ShapeCacheTest do describe "has_shape?/2" do setup [ :with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_registry, @@ -481,6 +495,7 @@ defmodule Electric.ShapeCacheTest do describe "await_snapshot_start/4" do setup [ :with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_registry, @@ -506,7 +521,7 @@ defmodule Electric.ShapeCacheTest do test "returns an error if waiting is for an unknown shape id", ctx do shape_id = "orphaned_id" - storage = Storage.for_shape(shape_id, ctx.storage) + storage = Storage.for_shape(shape_id, ctx.tenant_id, ctx.storage) %{shape_cache_opts: opts} = with_shape_cache(Map.merge(ctx, %{pool: nil, inspector: 
@stub_inspector}), @@ -547,7 +562,7 @@ defmodule Electric.ShapeCacheTest do {shape_id, _} = ShapeCache.get_or_create_shape_id(@shape, opts) - storage = Storage.for_shape(shape_id, ctx.storage) + storage = Storage.for_shape(shape_id, ctx.tenant_id, ctx.storage) tasks = for _id <- 1..10 do @@ -590,7 +605,7 @@ defmodule Electric.ShapeCacheTest do {shape_id, _} = ShapeCache.get_or_create_shape_id(@shape, opts) - storage = Storage.for_shape(shape_id, ctx.storage) + storage = Storage.for_shape(shape_id, ctx.tenant_id, ctx.storage) tasks = for _ <- 1..10 do @@ -644,6 +659,7 @@ defmodule Electric.ShapeCacheTest do describe "handle_truncate/2" do setup [ :with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_registry, @@ -666,7 +682,7 @@ defmodule Electric.ShapeCacheTest do Process.sleep(50) assert :started = ShapeCache.await_snapshot_start(shape_id, opts) - storage = Storage.for_shape(shape_id, ctx.storage) + storage = Storage.for_shape(shape_id, ctx.tenant_id, ctx.storage) Storage.append_to_log!( changes_to_log_items([ @@ -682,7 +698,9 @@ defmodule Electric.ShapeCacheTest do assert Storage.snapshot_started?(storage) assert Enum.count(Storage.get_log_stream(@zero_offset, storage)) == 1 - ref = ctx.electric_instance_id |> Shapes.Consumer.whereis(shape_id) |> Process.monitor() + ref = + Shapes.Consumer.whereis(ctx.electric_instance_id, ctx.tenant_id, shape_id) + |> Process.monitor() log = capture_log(fn -> ShapeCache.handle_truncate(shape_id, opts) end) assert log =~ "Truncating and rotating shape id" @@ -697,6 +715,7 @@ defmodule Electric.ShapeCacheTest do describe "clean_shape/2" do setup [ :with_electric_instance_id, + :with_tenant_id, :with_in_memory_storage, :with_log_chunking, :with_registry, @@ -719,7 +738,7 @@ defmodule Electric.ShapeCacheTest do Process.sleep(50) assert :started = ShapeCache.await_snapshot_start(shape_id, opts) - storage = Storage.for_shape(shape_id, ctx.storage) + storage = Storage.for_shape(shape_id, 
ctx.tenant_id, ctx.storage) Storage.append_to_log!( changes_to_log_items([ @@ -738,7 +757,10 @@ defmodule Electric.ShapeCacheTest do {module, _} = storage ref = - Process.monitor(module.name(ctx.electric_instance_id, shape_id) |> GenServer.whereis()) + Process.monitor( + module.name(ctx.electric_instance_id, ctx.tenant_id, shape_id) + |> GenServer.whereis() + ) log = capture_log(fn -> :ok = ShapeCache.clean_shape(shape_id, opts) end) assert log =~ "Cleaning up shape" @@ -792,6 +814,7 @@ defmodule Electric.ShapeCacheTest do setup [ :with_electric_instance_id, + :with_tenant_id, :with_cub_db_storage, :with_log_chunking, :with_registry, @@ -827,9 +850,10 @@ defmodule Electric.ShapeCacheTest do [{^shape_id, @shape}] = ShapeCache.list_shapes(%{shape_meta_table: meta_table}) {:ok, @snapshot_xmin} = ShapeStatus.snapshot_xmin(meta_table, shape_id) - restart_shape_cache(context) + %{shape_cache_opts: opts} = restart_shape_cache(context) :started = ShapeCache.await_snapshot_start(shape_id, opts) + meta_table = Keyword.fetch!(opts, :shape_meta_table) assert [{^shape_id, @shape}] = ShapeCache.list_shapes(%{shape_meta_table: meta_table}) {:ok, @snapshot_xmin} = ShapeStatus.snapshot_xmin(meta_table, shape_id) end @@ -839,7 +863,7 @@ defmodule Electric.ShapeCacheTest do {shape_id, _} = ShapeCache.get_or_create_shape_id(@shape, opts) :started = ShapeCache.await_snapshot_start(shape_id, opts) - ref = Shapes.Consumer.monitor(context.electric_instance_id, shape_id) + ref = Shapes.Consumer.monitor(context.electric_instance_id, context.tenant_id, shape_id) ShapeLogCollector.store_transaction( %Changes.Transaction{ @@ -889,7 +913,7 @@ defmodule Electric.ShapeCacheTest do consumers = for {shape_id, _} <- shape_cache.list_shapes(Map.new(shape_cache_opts)) do - pid = Shapes.Consumer.whereis(ctx.electric_instance_id, shape_id) + pid = Shapes.Consumer.whereis(ctx.electric_instance_id, ctx.tenant_id, shape_id) {pid, Process.monitor(pid)} end diff --git 
a/packages/sync-service/test/electric/shapes/consumer_test.exs b/packages/sync-service/test/electric/shapes/consumer_test.exs index d859d52f21..b7959c9552 100644 --- a/packages/sync-service/test/electric/shapes/consumer_test.exs +++ b/packages/sync-service/test/electric/shapes/consumer_test.exs @@ -52,6 +52,7 @@ defmodule Electric.Shapes.ConsumerTest do end) setup :with_electric_instance_id + setup :with_tenant_id setup :set_mox_from_context setup :verify_on_exit! @@ -109,6 +110,7 @@ defmodule Electric.Shapes.ConsumerTest do {:ok, producer} = ShapeLogCollector.start_link( electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, demand: :forward, inspector: Support.StubInspector.new([ @@ -123,12 +125,12 @@ defmodule Electric.Shapes.ConsumerTest do |> expect(:set_snapshot_xmin, 1, fn _, ^shape_id, _ -> :ok end) |> expect(:mark_snapshot_started, 1, fn _, ^shape_id -> :ok end) |> allow(self(), fn -> - Shapes.Consumer.whereis(ctx.electric_instance_id, shape_id) + Shapes.Consumer.whereis(ctx.electric_instance_id, ctx.tenant_id, shape_id) end) Mock.ShapeCache |> allow(self(), fn -> - Shapes.Consumer.whereis(ctx.electric_instance_id, shape_id) + Shapes.Consumer.whereis(ctx.electric_instance_id, ctx.tenant_id, shape_id) end) {:ok, consumer} = @@ -138,7 +140,14 @@ defmodule Electric.Shapes.ConsumerTest do shape: shape, electric_instance_id: ctx.electric_instance_id, inspector: {Mock.Inspector, []}, - log_producer: ShapeLogCollector.name(ctx.electric_instance_id), + log_producer: ShapeLogCollector.name(ctx.electric_instance_id, ctx.tenant_id), + tenant_id: ctx.tenant_id, + db_pool: + Electric.Application.process_name( + ctx.electric_instance_id, + ctx.tenant_id, + Electric.DbPool + ), registry: registry_name, shape_cache: {Mock.ShapeCache, []}, shape_status: {Mock.ShapeStatus, []}, @@ -170,11 +179,12 @@ defmodule Electric.Shapes.ConsumerTest do Mock.ShapeCache |> expect(:update_shape_latest_offset, 2, fn @shape_id1, ^last_log_offset, _ -> :ok end) - |> 
allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) ref = make_ref() - Registry.register(ctx.registry, @shape_id1, ref) + tenant_id = Access.fetch!(ctx, :tenant_id) + Registry.register(ctx.registry, {tenant_id, @shape_id1}, ref) txn = %Transaction{xid: xmin, lsn: lsn, last_log_offset: last_log_offset} @@ -208,14 +218,15 @@ defmodule Electric.Shapes.ConsumerTest do @shape_id1, ^last_log_offset, _ -> :ok @shape_id2, ^last_log_offset, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id2)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) ref1 = make_ref() ref2 = make_ref() - Registry.register(ctx.registry, @shape_id1, ref1) - Registry.register(ctx.registry, @shape_id2, ref2) + tenant_id = Access.fetch!(ctx, :tenant_id) + Registry.register(ctx.registry, {tenant_id, @shape_id1}, ref1) + Registry.register(ctx.registry, {tenant_id, @shape_id2}, ref2) txn = %Transaction{xid: xid, lsn: lsn, last_log_offset: last_log_offset} @@ -262,12 +273,12 @@ defmodule Electric.Shapes.ConsumerTest do lsn = Lsn.from_string("0/10") last_log_offset = LogOffset.new(lsn, 0) - ref1 = Shapes.Consumer.monitor(ctx.electric_instance_id, @shape_id1) - ref2 = Shapes.Consumer.monitor(ctx.electric_instance_id, @shape_id2) + ref1 = Shapes.Consumer.monitor(ctx.electric_instance_id, ctx.tenant_id, @shape_id1) + ref2 = Shapes.Consumer.monitor(ctx.electric_instance_id, ctx.tenant_id, @shape_id2) Mock.ShapeCache |> expect(:update_shape_latest_offset, fn @shape_id2, _offset, _ -> :ok end) - |> allow(self(), Shapes.Consumer.name(ctx.electric_instance_id, @shape_id2)) + |> allow(self(), Shapes.Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) txn = %Transaction{xid: 
xid, lsn: lsn, last_log_offset: last_log_offset} @@ -293,7 +304,7 @@ defmodule Electric.Shapes.ConsumerTest do Mock.ShapeCache |> expect(:handle_truncate, fn @shape_id1, _ -> :ok end) - |> allow(self(), Shapes.Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Shapes.Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) txn = %Transaction{xid: xid, lsn: lsn, last_log_offset: last_log_offset} @@ -301,7 +312,7 @@ defmodule Electric.Shapes.ConsumerTest do relation: {"public", "test_table"} }) - assert_consumer_shutdown(ctx.electric_instance_id, @shape_id1, fn -> + assert_consumer_shutdown(ctx.electric_instance_id, ctx.tenant_id, @shape_id1, fn -> assert :ok = ShapeLogCollector.store_transaction(txn, ctx.producer) end) @@ -309,12 +320,12 @@ defmodule Electric.Shapes.ConsumerTest do refute_receive {Support.TestStorage, :cleanup!, @shape_id2} end - defp assert_consumer_shutdown(electric_instance_id, shape_id, fun) do + defp assert_consumer_shutdown(electric_instance_id, tenant_id, shape_id, fun) do monitors = for name <- [ - Shapes.Consumer.Supervisor.name(electric_instance_id, shape_id), - Shapes.Consumer.name(electric_instance_id, shape_id), - Shapes.Consumer.Snapshotter.name(electric_instance_id, shape_id) + Shapes.Consumer.Supervisor.name(electric_instance_id, tenant_id, shape_id), + Shapes.Consumer.name(electric_instance_id, tenant_id, shape_id), + Shapes.Consumer.Snapshotter.name(electric_instance_id, tenant_id, shape_id) ], pid = GenServer.whereis(name) do ref = Process.monitor(pid) @@ -343,7 +354,7 @@ defmodule Electric.Shapes.ConsumerTest do Mock.ShapeCache |> expect(:handle_truncate, fn @shape_id1, _ -> :ok end) - |> allow(self(), Shapes.Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Shapes.Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) txn = %Transaction{xid: xid, lsn: lsn, last_log_offset: last_log_offset} @@ -351,7 +362,7 @@ defmodule Electric.Shapes.ConsumerTest do relation: 
{"public", "test_table"} }) - assert_consumer_shutdown(ctx.electric_instance_id, @shape_id1, fn -> + assert_consumer_shutdown(ctx.electric_instance_id, ctx.tenant_id, @shape_id1, fn -> assert :ok = ShapeLogCollector.store_transaction(txn, ctx.producer) end) @@ -367,10 +378,11 @@ defmodule Electric.Shapes.ConsumerTest do Mock.ShapeCache |> expect(:update_shape_latest_offset, fn @shape_id1, ^last_log_offset, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) ref = make_ref() - Registry.register(ctx.registry, @shape_id1, ref) + tenant_id = Access.fetch!(ctx, :tenant_id) + Registry.register(ctx.registry, {tenant_id, @shape_id1}, ref) txn = %Transaction{xid: xid, lsn: lsn, last_log_offset: last_log_offset} @@ -395,16 +407,20 @@ defmodule Electric.Shapes.ConsumerTest do } ref1 = - Process.monitor(GenServer.whereis(Consumer.name(ctx.electric_instance_id, @shape_id1))) + Process.monitor( + GenServer.whereis(Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) + ) ref2 = - Process.monitor(GenServer.whereis(Consumer.name(ctx.electric_instance_id, @shape_id2))) + Process.monitor( + GenServer.whereis(Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) + ) Mock.ShapeStatus |> expect(:remove_shape, 0, fn _, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) |> expect(:remove_shape, 0, fn _, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id2)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) assert :ok = ShapeLogCollector.handle_relation_msg(rel, ctx.producer) @@ -423,23 +439,27 @@ defmodule Electric.Shapes.ConsumerTest do } ref1 = - Process.monitor(GenServer.whereis(Consumer.name(ctx.electric_instance_id, @shape_id1))) + Process.monitor( + 
GenServer.whereis(Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) + ) ref2 = - Process.monitor(GenServer.whereis(Consumer.name(ctx.electric_instance_id, @shape_id2))) + Process.monitor( + GenServer.whereis(Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) + ) # also cleans up inspector cache and shape status cache Mock.Inspector |> expect(:clean, 1, fn _, _ -> true end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) |> expect(:clean, 0, fn _, _ -> true end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id2)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) Mock.ShapeStatus |> expect(:remove_shape, 1, fn _, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) |> expect(:remove_shape, 0, fn _, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id2)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) assert :ok = ShapeLogCollector.handle_relation_msg(rel, ctx.producer) @@ -461,23 +481,27 @@ defmodule Electric.Shapes.ConsumerTest do } ref1 = - Process.monitor(GenServer.whereis(Consumer.name(ctx.electric_instance_id, @shape_id1))) + Process.monitor( + GenServer.whereis(Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) + ) ref2 = - Process.monitor(GenServer.whereis(Consumer.name(ctx.electric_instance_id, @shape_id2))) + Process.monitor( + GenServer.whereis(Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) + ) # also cleans up inspector cache and shape status cache Mock.Inspector |> expect(:clean, 1, fn _, _ -> true end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, 
ctx.tenant_id, @shape_id1)) |> expect(:clean, 0, fn _, _ -> true end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id2)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) Mock.ShapeStatus |> expect(:remove_shape, 1, fn _, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id1)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id1)) |> expect(:remove_shape, 0, fn _, _ -> :ok end) - |> allow(self(), Consumer.name(ctx.electric_instance_id, @shape_id2)) + |> allow(self(), Consumer.name(ctx.electric_instance_id, ctx.tenant_id, @shape_id2)) assert :ok = ShapeLogCollector.handle_relation_msg(rel, ctx.producer) @@ -540,7 +564,7 @@ defmodule Electric.Shapes.ConsumerTest do lsn = Lsn.from_integer(10) - ref = Shapes.Consumer.monitor(ctx.electric_instance_id, shape_id) + ref = Shapes.Consumer.monitor(ctx.electric_instance_id, ctx.tenant_id, shape_id) txn = %Transaction{xid: 11, lsn: lsn, last_log_offset: LogOffset.new(lsn, 2)} @@ -559,7 +583,7 @@ defmodule Electric.Shapes.ConsumerTest do assert_receive {Shapes.Consumer, ^ref, 11} - shape_storage = Storage.for_shape(shape_id, storage) + shape_storage = Storage.for_shape(shape_id, ctx.tenant_id, storage) assert [op1, op2] = Storage.get_log_stream(LogOffset.before_all(), shape_storage) @@ -587,7 +611,7 @@ defmodule Electric.Shapes.ConsumerTest do lsn1 = Lsn.from_integer(9) lsn2 = Lsn.from_integer(10) - ref = Shapes.Consumer.monitor(ctx.electric_instance_id, shape_id) + ref = Shapes.Consumer.monitor(ctx.electric_instance_id, ctx.tenant_id, shape_id) txn1 = %Transaction{xid: 9, lsn: lsn1, last_log_offset: LogOffset.new(lsn1, 2)} @@ -622,7 +646,7 @@ defmodule Electric.Shapes.ConsumerTest do assert_receive {Shapes.Consumer, ^ref, 10} - shape_storage = Storage.for_shape(shape_id, storage) + shape_storage = Storage.for_shape(shape_id, ctx.tenant_id, storage) assert [_op1, _op2] = 
Storage.get_log_stream(LogOffset.before_all(), shape_storage) diff --git a/packages/sync-service/test/electric/shapes/shape_test.exs b/packages/sync-service/test/electric/shapes/shape_test.exs index 2db3e5f01f..6773e095db 100644 --- a/packages/sync-service/test/electric/shapes/shape_test.exs +++ b/packages/sync-service/test/electric/shapes/shape_test.exs @@ -214,7 +214,7 @@ defmodule Electric.Shapes.ShapeTest do import Support.DbStructureSetup import Support.ComponentSetup - setup [:with_shared_db, :with_inspector, :with_sql_execute] + setup [:with_shared_db, :with_tenant_id, :with_inspector, :with_sql_execute] @tag with_sql: [ "CREATE SCHEMA IF NOT EXISTS test", @@ -354,7 +354,7 @@ defmodule Electric.Shapes.ShapeTest do import Support.DbStructureSetup import Support.ComponentSetup - setup [:with_shared_db, :with_inspector, :with_sql_execute] + setup [:with_shared_db, :with_tenant_id, :with_inspector, :with_sql_execute] @tag with_sql: [ "CREATE SCHEMA IF NOT EXISTS test", diff --git a/packages/sync-service/test/electric/tenant/persistence_test.exs b/packages/sync-service/test/electric/tenant/persistence_test.exs new file mode 100644 index 0000000000..81c8533323 --- /dev/null +++ b/packages/sync-service/test/electric/tenant/persistence_test.exs @@ -0,0 +1,122 @@ +defmodule Electric.Tenant.PersistenceTest do + use ExUnit.Case, async: false + + alias Electric.Utils + + import Support.ComponentSetup + import Support.TestUtils + + setup :with_persistent_kv + setup :with_electric_instance_id + + @tenant1 "test_tenant1" + @tenant2 "test_tenant2" + + @conn_opts [ + database: "electric", + hostname: "localhost", + ipv6: false, + password: "password", + port: 54321, + sslmode: :disable, + username: "postgres" + ] + |> Enum.sort_by(fn {key, _value} -> key end) + + test "should load persisted tenant", opts do + Electric.Tenant.Persistence.persist_tenant!( + @tenant1, + Electric.Utils.obfuscate_password(@conn_opts), + app_config(opts) + ) + + tenants = + 
Electric.Tenant.Persistence.load_tenants!(app_config(opts)) + |> Utils.map_values(&Utils.deobfuscate_password/1) + + assert tenants == %{ + @tenant1 => @conn_opts + } + end + + test "should load all added tenants", opts do + Electric.Tenant.Persistence.persist_tenant!( + @tenant1, + Electric.Utils.obfuscate_password(@conn_opts), + app_config(opts) + ) + + tenant2_db = "electric_test" + + tenant2_conn_opts = + Keyword.merge(@conn_opts, database: tenant2_db) + |> Enum.sort_by(fn {key, _value} -> key end) + + Electric.Tenant.Persistence.persist_tenant!( + @tenant2, + Electric.Utils.obfuscate_password(tenant2_conn_opts), + app_config(opts) + ) + + tenants = + Electric.Tenant.Persistence.load_tenants!(app_config(opts)) + |> Utils.map_values(&Utils.deobfuscate_password/1) + + assert tenants == %{ + @tenant1 => @conn_opts, + @tenant2 => tenant2_conn_opts + } + end + + test "should delete tenant", opts do + # Create two tenants + Electric.Tenant.Persistence.persist_tenant!( + @tenant1, + Electric.Utils.obfuscate_password(@conn_opts), + app_config(opts) + ) + + tenant2_db = "electric_test" + + tenant2_conn_opts = + Keyword.merge(@conn_opts, database: tenant2_db) + |> Enum.sort_by(fn {key, _value} -> key end) + + Electric.Tenant.Persistence.persist_tenant!( + @tenant2, + Electric.Utils.obfuscate_password(tenant2_conn_opts), + app_config(opts) + ) + + # Check that both tenants are persisted + tenants = + Electric.Tenant.Persistence.load_tenants!(app_config(opts)) + |> Utils.map_values(&Utils.deobfuscate_password/1) + + assert tenants == %{ + @tenant1 => @conn_opts, + @tenant2 => tenant2_conn_opts + } + + # Delete a tenant + Electric.Tenant.Persistence.delete_tenant!(@tenant1, app_config(opts)) + + # Check that the other tenant still exists + tenants = + Electric.Tenant.Persistence.load_tenants!(app_config(opts)) + |> Utils.map_values(&Utils.deobfuscate_password/1) + + assert tenants == %{ + @tenant2 => tenant2_conn_opts + } + end + + defp app_config(ctx) do + [ + app_config:
%{ + persistent_kv: ctx.persistent_kv + }, + electric_instance_id: ctx.electric_instance_id + ] + end +end diff --git a/packages/sync-service/test/electric/tenant_manager_test.exs b/packages/sync-service/test/electric/tenant_manager_test.exs new file mode 100644 index 0000000000..63046c7491 --- /dev/null +++ b/packages/sync-service/test/electric/tenant_manager_test.exs @@ -0,0 +1,259 @@ +defmodule Electric.TenantManagerTest do + use ExUnit.Case, async: false + + alias Electric.TenantManager + alias Electric.Tenant.Persistence + + import Support.ComponentSetup + import Support.DbSetup + + @moduletag :tmp_dir + + describe "start_link/1" do + @tenant_id "persisted_tenant" + + setup :with_unique_db + setup :with_publication + + setup ctx do + # Persist a tenant + with_manager = fn ctx -> + opts = [ + app_config: ctx.app_config, + persistent_kv: ctx.persistent_kv, + electric_instance_id: ctx.electric_instance_id + ] + + # Persist a tenant + Persistence.persist_tenant!(@tenant_id, ctx.db_config, opts) + + # Now create the tenant manager + with_tenant_manager(ctx) + end + + with_complete_stack_but_no_tenant(ctx, tenant_manager: with_manager) + end + + test "loads tenants from storage", ctx do + # Check that it recreated the tenant + {:ok, tenant} = + TenantManager.get_tenant(@tenant_id, + tenant_manager: ctx.tenant_manager, + tenant_tables_name: ctx.tenant_tables_name + ) + + assert tenant[:tenant_id] == @tenant_id + end + end + + describe "create_tenant/1" do + setup :with_unique_db + setup :with_publication + + setup :with_complete_stack_but_no_tenant + setup :with_app_config + + setup ctx do + Map.put(ctx, :connection_opts, Map.fetch!(ctx, :db_config)) + end + + test "creates a new tenant", %{ + tenant_manager: tenant_manager, + tenant_id: tenant_id, + connection_opts: connection_opts, + inspector: inspector, + app_config: app_config, + tenant_tables_name: tenant_tables_name + } do + :ok = + TenantManager.create_tenant(tenant_id, connection_opts, + inspector: 
inspector, + tenant_manager: tenant_manager, + app_config: app_config, + tenant_tables_name: tenant_tables_name + ) + end + + test "complains if tenant already exists", %{ + tenant_manager: tenant_manager, + tenant_id: tenant_id, + connection_opts: connection_opts, + inspector: inspector, + app_config: app_config, + tenant_tables_name: tenant_tables_name + } do + assert :ok = + TenantManager.create_tenant(tenant_id, connection_opts, + inspector: inspector, + tenant_manager: tenant_manager, + app_config: app_config, + tenant_tables_name: tenant_tables_name + ) + + assert {:error, {:tenant_already_exists, ^tenant_id}} = + TenantManager.create_tenant( + tenant_id, + Keyword.put(connection_opts, :port, "654"), + inspector: inspector, + tenant_manager: tenant_manager, + app_config: app_config, + tenant_tables_name: tenant_tables_name + ) + end + + test "complains if database is already in use by a tenant", %{ + tenant_manager: tenant_manager, + tenant_id: tenant_id, + connection_opts: connection_opts, + inspector: inspector, + app_config: app_config, + tenant_tables_name: tenant_tables_name + } do + assert :ok = + TenantManager.create_tenant(tenant_id, connection_opts, + inspector: inspector, + tenant_manager: tenant_manager, + app_config: app_config, + tenant_tables_name: tenant_tables_name + ) + + pg_id = + connection_opts[:hostname] <> + ":" <> to_string(connection_opts[:port]) <> "/" <> connection_opts[:database] + + assert {:error, {:db_already_in_use, ^pg_id}} = + TenantManager.create_tenant("another_tenant", connection_opts, + inspector: inspector, + tenant_manager: tenant_manager, + app_config: app_config, + tenant_tables_name: tenant_tables_name + ) + end + end + + describe "fetching tenants when there are none" do + setup :with_unique_db + + setup do + %{publication_name: "electric_test_publication"} + end + + setup :with_complete_stack_but_no_tenant + + test "get_only_tenant/1 complains if there are no tenants", ctx do + assert {:error, :not_found} = + 
TenantManager.get_only_tenant(tenant_manager: ctx.tenant_manager) + end + + test "get_tenant/2 complains if the tenant does not exist", ctx do + assert {:error, :not_found} = + TenantManager.get_tenant("non-existing tenant", tenant_manager: ctx.tenant_manager) + end + end + + describe "fetching the only tenant" do + setup :with_unique_db + + setup do + %{publication_name: "electric_test_publication", slot_name: "electric_test_slot"} + end + + setup :with_complete_stack + + test "get_only_tenant/1 returns the only tenant", ctx do + {:ok, tenant_config} = + TenantManager.get_only_tenant(tenant_manager: ctx.tenant_manager) + + assert tenant_config[:tenant_id] == ctx.tenant_id + end + + test "get_tenant/2 returns the requested tenant", ctx do + {:ok, tenant_config} = + TenantManager.get_tenant(ctx.tenant_id, tenant_manager: ctx.tenant_manager) + + assert tenant_config[:tenant_id] == ctx.tenant_id + end + end + + describe "fetching a tenant when there are two tenants" do + setup :with_unique_db + + setup do + %{publication_name: "electric_test_publication", slot_name: "electric_test_slot"} + end + + setup :with_complete_stack + + setup ctx do + with_tenant( + ctx + |> Map.put(:tenant_id, "another_tenant") + |> Map.put(:pg_id, "678") + ) + end + + test "get_only_tenant/1 complains if there are several tenants", ctx do + assert {:error, :several_tenants} = + TenantManager.get_only_tenant(tenant_manager: ctx.tenant_manager) + end + + test "get_tenant/2 returns the requested tenant", ctx do + {:ok, tenant_config} = + TenantManager.get_tenant("another_tenant", tenant_manager: ctx.tenant_manager) + + assert tenant_config[:tenant_id] == "another_tenant" + end + end + + describe "delete_tenant/2" do + setup :with_unique_db + + setup do + %{ + publication_name: "electric_test_publication" + } + end + + setup ctx do + ctx + |> Map.put(:connection_opts, Map.fetch!(ctx, :db_config)) + |> with_complete_stack(tenant: &with_supervised_tenant/1) + end + + test "deletes the tenant", %{ 
+ electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + tenant_manager: tenant_manager, + tenant_tables_name: tenant_tables_name, + tenant_supervisor_pid: tenant_supervisor_pid, + app_config: app_config + } do + # Check that the tenant supervisor is running + # and that the tenant's ETS tables are registered in the global ETS table + assert Process.alive?(tenant_supervisor_pid) + assert :ets.member(tenant_tables_name, {tenant_id, :pg_info_table}) + assert :ets.member(tenant_tables_name, {tenant_id, :pg_relation_table}) + + # Delete the tenant + assert :ok = + TenantManager.delete_tenant(tenant_id, + electric_instance_id: electric_instance_id, + tenant_id: tenant_id, + tenant_manager: tenant_manager, + tenant_tables_name: tenant_tables_name, + app_config: app_config + ) + + # Check that the tenant is now unknown to the tenant manager + # and that it is fully shut down and removed from the ETS table + assert {:error, :not_found} = + TenantManager.get_tenant(tenant_id, tenant_manager: tenant_manager) + + # Verify process was terminated + refute Process.alive?(tenant_supervisor_pid) + + refute :ets.member(tenant_tables_name, {tenant_id, :pg_info_table}) + refute :ets.member(tenant_tables_name, {tenant_id, :pg_relation_table}) + end + end +end diff --git a/packages/sync-service/test/electric/timeline_test.exs b/packages/sync-service/test/electric/timeline_test.exs index 18010e908d..2f1e31b8d7 100644 --- a/packages/sync-service/test/electric/timeline_test.exs +++ b/packages/sync-service/test/electric/timeline_test.exs @@ -3,91 +3,97 @@ defmodule Electric.TimelineTest do alias Electric.Timeline - describe "load_timeline/1" do - @moduletag :tmp_dir + @moduletag :tmp_dir + @tenant_id "test_tenant" + describe "load_timeline/1" do setup context do - %{kv: Electric.PersistentKV.Filesystem.new!(root: context.tmp_dir)} + %{ + opts: [ + persistent_kv: Electric.PersistentKV.Filesystem.new!(root: context.tmp_dir), + tenant_id: @tenant_id + ] + } end - test "returns 
nil when no timeline is available", %{kv: kv} do - assert Timeline.load_timeline(kv) == nil + test "returns nil when no timeline is available", %{opts: opts} do + assert Timeline.load_timeline(opts) == nil end end describe "store_timeline/2" do - @moduletag :tmp_dir - setup context do - %{persistent_kv: Electric.PersistentKV.Filesystem.new!(root: context.tmp_dir)} + %{ + opts: [ + persistent_kv: Electric.PersistentKV.Filesystem.new!(root: context.tmp_dir), + tenant_id: @tenant_id + ] + } end - test "stores the timeline", %{persistent_kv: persistent_kv} do + test "stores the timeline", %{opts: opts} do timeline = {1, 2} - Timeline.store_timeline(timeline, persistent_kv) - assert ^timeline = Timeline.load_timeline(persistent_kv) + Timeline.store_timeline(timeline, opts) + assert ^timeline = Timeline.load_timeline(opts) end end describe "check/2" do - @moduletag :tmp_dir - setup context do timeline = context[:electric_timeline] kv = Electric.PersistentKV.Filesystem.new!(root: context.tmp_dir) + opts = [persistent_kv: kv, shape_cache: {ShapeCache, []}, tenant_id: @tenant_id] if timeline != nil do - Timeline.store_timeline(timeline, kv) + Timeline.store_timeline(timeline, opts) end - {:ok, [timeline: timeline, persistent_kv: kv]} + {:ok, [timeline: timeline, opts: opts]} end @tag electric_timeline: nil - test "stores the timeline if Electric has no timeline yet", %{persistent_kv: kv} do - assert Timeline.load_timeline(kv) == nil + test "stores the timeline if Electric has no timeline yet", %{opts: opts} do + assert Timeline.load_timeline(opts) == nil timeline = {2, 5} - assert :ok = Timeline.check(timeline, kv) - assert ^timeline = Timeline.load_timeline(kv) + assert :ok = Timeline.check(timeline, opts) + assert ^timeline = Timeline.load_timeline(opts) end @tag electric_timeline: {1, 2} test "proceeds without changes if Postgres' timeline matches Electric's timeline", %{ timeline: timeline, - persistent_kv: kv + opts: opts } do - assert ^timeline = 
Timeline.load_timeline(kv) - assert :ok = Timeline.check(timeline, kv) - assert ^timeline = Timeline.load_timeline(kv) + assert ^timeline = Timeline.load_timeline(opts) + assert :ok = Timeline.check(timeline, opts) + assert ^timeline = Timeline.load_timeline(opts) end @tag electric_timeline: {1, 3} test "returns :timeline_changed on Point In Time Recovery (PITR)", %{ timeline: timeline, - persistent_kv: kv + opts: opts } do - assert ^timeline = Timeline.load_timeline(kv) + assert ^timeline = Timeline.load_timeline(opts) pg_timeline = {1, 2} - assert :timeline_changed = Timeline.check(pg_timeline, kv) + assert :timeline_changed = Timeline.check(pg_timeline, opts) - assert ^pg_timeline = Timeline.load_timeline(kv) + assert ^pg_timeline = Timeline.load_timeline(opts) end - # TODO: add log output checks - @tag electric_timeline: {1, 3} test "returns :timeline_changed when Postgres DB changed", %{ timeline: timeline, - persistent_kv: kv + opts: opts } do - assert ^timeline = Timeline.load_timeline(kv) + assert ^timeline = Timeline.load_timeline(opts) pg_timeline = {2, 3} - assert :timeline_changed = Timeline.check(pg_timeline, kv) - assert ^pg_timeline = Timeline.load_timeline(kv) + assert :timeline_changed = Timeline.check(pg_timeline, opts) + assert ^pg_timeline = Timeline.load_timeline(opts) end end end diff --git a/packages/sync-service/test/support/component_setup.ex b/packages/sync-service/test/support/component_setup.ex index 097c4052ea..fbd8b63165 100644 --- a/packages/sync-service/test/support/component_setup.ex +++ b/packages/sync-service/test/support/component_setup.ex @@ -9,6 +9,99 @@ defmodule Support.ComponentSetup do alias Electric.ShapeCache.InMemoryStorage alias Electric.Postgres.Inspector.EtsInspector + def with_tenant_id(_ctx) do + %{tenant_id: "test_tenant"} + end + + def with_tenant_manager(ctx) do + Electric.TenantSupervisor.start_link([]) + + opts = [ + app_config: ctx.app_config, + electric_instance_id: ctx.electric_instance_id, + 
tenant_tables_name: Access.get(ctx, :tenant_tables_name, nil) + ] + + Electric.TenantManager.start_link(opts) + + %{tenant_manager: Electric.TenantManager.name(opts)} + end + + defp tenant_config(ctx) do + [ + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, + pg_id: Map.get(ctx, :pg_id, "12345"), + shape_cache: ctx.shape_cache, + storage: ctx.storage, + inspector: ctx.inspector, + registry: ctx.registry, + long_poll_timeout: Access.get(ctx, :long_poll_timeout, 20_000), + max_age: Access.get(ctx, :max_age, 60), + stale_age: Access.get(ctx, :stale_age, 300), + get_service_status: fn -> :active end + ] + end + + def store_tenant(tenant, ctx) do + :ok = + Electric.TenantManager.store_tenant(tenant, + electric_instance_id: ctx.electric_instance_id, + tenant_manager: ctx.tenant_manager, + app_config: ctx.app_config, + # not important for this test + connection_opts: + Access.get(ctx, :connection_opts, Electric.Utils.obfuscate_password(password: "foo")) + ) + end + + def with_tenant(ctx) do + tenant = Map.get_lazy(ctx, :tenant_config, fn -> tenant_config(ctx) end) + + tenant_opts = [ + electric_instance_id: ctx.electric_instance_id, + persistent_kv: ctx.persistent_kv, + connection_opts: ctx.db_config, + tenant_manager: ctx.tenant_manager, + app_config: ctx.app_config + ] + + :ok = Electric.TenantManager.store_tenant(tenant, tenant_opts) + Electric.TenantSupervisor.start_tenant(ctx) + + %{tenant: tenant} + end + + def with_supervised_tenant(ctx) do + tenant = Access.get(ctx, :tenant_config, tenant_config(ctx)) + + :ok = + Electric.TenantManager.create_tenant(ctx.tenant_id, ctx.db_config, + pg_id: tenant[:pg_id], + shape_cache: tenant[:shape_cache], + storage: tenant[:storage], + inspector: tenant[:inspector], + registry: tenant[:registry], + long_poll_timeout: tenant[:long_poll_timeout], + max_age: tenant[:max_age], + stale_age: tenant[:stale_age], + get_service_status: tenant[:get_service_status], + tenant_manager: ctx.tenant_manager, + 
app_config: ctx.app_config, + tenant_tables_name: ctx.tenant_tables_name + ) + + {:via, _, {registry_name, registry_key}} = + Electric.Tenant.Supervisor.name( + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id + ) + + [{tenant_supervisor_pid, _}] = Registry.lookup(registry_name, registry_key) + + %{tenant: tenant, tenant_supervisor_pid: tenant_supervisor_pid} + end + def with_registry(ctx) do registry_name = Module.concat(Registry, ctx.electric_instance_id) start_link_supervised!({Registry, keys: :duplicate, name: registry_name}) @@ -20,7 +113,8 @@ defmodule Support.ComponentSetup do storage_opts = InMemoryStorage.shared_opts( table_base_name: :"in_memory_storage_#{full_test_name(ctx)}", - electric_instance_id: ctx.electric_instance_id + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id ) %{storage: {InMemoryStorage, storage_opts}} @@ -34,7 +128,8 @@ defmodule Support.ComponentSetup do storage_opts = FileStorage.shared_opts( storage_dir: ctx.tmp_dir, - electric_instance_id: ctx.electric_instance_id + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id ) %{storage: {FileStorage, storage_opts}} @@ -50,7 +145,6 @@ defmodule Support.ComponentSetup do end def with_shape_cache(ctx, additional_opts \\ []) do - shape_meta_table = :"shape_meta_#{full_test_name(ctx)}" server = :"shape_cache_#{full_test_name(ctx)}" consumer_supervisor = :"consumer_supervisor_#{full_test_name(ctx)}" get_pg_version = fn -> Application.fetch_env!(:electric, :pg_version_for_tests) end @@ -59,7 +153,7 @@ defmodule Support.ComponentSetup do [ name: server, electric_instance_id: ctx.electric_instance_id, - shape_meta_table: shape_meta_table, + tenant_id: ctx.tenant_id, inspector: ctx.inspector, storage: ctx.storage, chunk_bytes_threshold: ctx.chunk_bytes_threshold, @@ -80,23 +174,28 @@ defmodule Support.ComponentSetup do {:ok, _pid} = Electric.Shapes.ConsumerSupervisor.start_link( name: consumer_supervisor, - 
electric_instance_id: ctx.electric_instance_id + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id ) {:ok, _pid} = ShapeCache.start_link(start_opts) + shape_meta_table = GenServer.call(server, :get_shape_meta_table) + shape_cache_opts = [ - server: server, electric_instance_id: ctx.electric_instance_id, - shape_meta_table: shape_meta_table, - storage: ctx.storage + tenant_id: ctx.tenant_id, + server: server, + storage: ctx.storage, + shape_meta_table: shape_meta_table ] %{ shape_cache_opts: shape_cache_opts, shape_cache: {ShapeCache, shape_cache_opts}, shape_cache_server: server, - consumer_supervisor: consumer_supervisor + consumer_supervisor: consumer_supervisor, + shape_meta_table: shape_meta_table } end @@ -104,11 +203,20 @@ defmodule Support.ComponentSetup do {:ok, _} = ShapeLogCollector.start_link( electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, inspector: ctx.inspector, link_consumers: Map.get(ctx, :link_log_collector, true) ) - %{shape_log_collector: ShapeLogCollector.name(ctx.electric_instance_id)} + %{shape_log_collector: ShapeLogCollector.name(ctx.electric_instance_id, ctx.tenant_id)} + end + + def with_slot_name_and_stream_id(_ctx) do + # Use a random slot name to avoid conflicts + %{ + slot_name: "electric_test_slot_#{:rand.uniform(10_000)}", + stream_id: "default" + } end def with_replication_client(ctx) do @@ -124,7 +232,12 @@ defmodule Support.ComponentSetup do ] {:ok, pid} = - ReplicationClient.start_link(ctx.electric_instance_id, ctx.db_config, replication_opts) + ReplicationClient.start_link( + electric_instance_id: ctx.electric_instance_id, + tenant_id: ctx.tenant_id, + connection_opts: ctx.db_config, + replication_opts: replication_opts + ) %{replication_client: pid} end @@ -134,26 +247,86 @@ defmodule Support.ComponentSetup do pg_info_table = :"pg_info_table #{full_test_name(ctx)}" pg_relation_table = :"pg_relation_table #{full_test_name(ctx)}" + tenant_tables_name = :"tenant_tables_name 
#{full_test_name(ctx)}" + :ets.new(tenant_tables_name, [:public, :named_table, :set]) + {:ok, _} = EtsInspector.start_link( + tenant_id: ctx.tenant_id, pg_info_table: pg_info_table, pg_relation_table: pg_relation_table, pool: ctx.db_conn, - name: server + name: server, + tenant_tables_name: tenant_tables_name ) + opts = [tenant_id: ctx.tenant_id, tenant_tables_name: tenant_tables_name] + %{ inspector: {EtsInspector, - pg_info_table: pg_info_table, pg_relation_table: pg_relation_table, server: server}, - pg_info_table: pg_info_table, - pg_relation_table: pg_relation_table + tenant_id: ctx.tenant_id, + tenant_tables_name: tenant_tables_name, + pg_info_table: EtsInspector.get_column_info_table(opts), + pg_relation_table: EtsInspector.get_relation_table(opts), + server: server}, + pg_info_table: EtsInspector.get_column_info_table(opts), + pg_relation_table: EtsInspector.get_relation_table(opts), + tenant_tables_name: tenant_tables_name + } + end + + def with_app_config(ctx) do + %{ + app_config: %Electric.Application.Configuration{ + electric_instance_id: ctx.electric_instance_id, + persistent_kv: ctx.persistent_kv, + replication_opts: %{ + stream_id: ctx.stream_id, + publication_name: ctx.publication_name, + slot_name: ctx.slot_name, + slot_temporary?: false + }, + pool_opts: %{ + size: 20 + } + } + } + end + + # This is a reduced version of the app config that the tenant manager can use to restore persisted tenants + def with_minimal_app_config(ctx) do + %{ + app_config: %Electric.Application.Configuration{ + persistent_kv: ctx.persistent_kv + } } end def with_complete_stack(ctx, opts \\ []) do [ Keyword.get(opts, :electric_instance_id, &Support.TestUtils.with_electric_instance_id/1), + Keyword.get(opts, :tenant_id, &with_tenant_id/1), + Keyword.get(opts, :registry, &with_registry/1), + Keyword.get(opts, :inspector, &with_inspector/1), + Keyword.get(opts, :persistent_kv, &with_persistent_kv/1), + Keyword.get(opts, :log_chunking, &with_log_chunking/1), + 
Keyword.get(opts, :storage, &with_cub_db_storage/1), + Keyword.get(opts, :log_collector, &with_shape_log_collector/1), + Keyword.get(opts, :shape_cache, &with_shape_cache/1), + Keyword.get(opts, :slot_name_and_stream_id, &with_slot_name_and_stream_id/1), + Keyword.get(opts, :replication_client, &with_replication_client/1), + Keyword.get(opts, :app_config, &with_app_config/1), + Keyword.get(opts, :tenant_manager, &with_tenant_manager/1), + Keyword.get(opts, :tenant, &with_tenant/1) + ] + |> Enum.reduce(ctx, &Map.merge(&2, apply(&1, [&2]))) + end + + def with_complete_stack_but_no_tenant(ctx, opts \\ []) do + [ + Keyword.get(opts, :electric_instance_id, &Support.TestUtils.with_electric_instance_id/1), + Keyword.get(opts, :tenant_id, &with_tenant_id/1), Keyword.get(opts, :registry, &with_registry/1), Keyword.get(opts, :inspector, &with_inspector/1), Keyword.get(opts, :persistent_kv, &with_persistent_kv/1), @@ -161,13 +334,17 @@ defmodule Support.ComponentSetup do Keyword.get(opts, :storage, &with_cub_db_storage/1), Keyword.get(opts, :log_collector, &with_shape_log_collector/1), Keyword.get(opts, :shape_cache, &with_shape_cache/1), - Keyword.get(opts, :replication_client, &with_replication_client/1) + Keyword.get(opts, :slot_name_and_stream_id, &with_slot_name_and_stream_id/1), + Keyword.get(opts, :replication_client, &with_replication_client/1), + Keyword.get(opts, :app_config, &with_app_config/1), + Keyword.get(opts, :tenant_manager, &with_tenant_manager/1) ] |> Enum.reduce(ctx, &Map.merge(&2, apply(&1, [&2]))) end def build_router_opts(ctx, overrides \\ []) do [ + tenant_manager: ctx.tenant_manager, storage: ctx.storage, registry: ctx.registry, shape_cache: ctx.shape_cache, diff --git a/packages/sync-service/test/support/db_setup.ex b/packages/sync-service/test/support/db_setup.ex index 5bee120ce4..cc60666a8c 100644 --- a/packages/sync-service/test/support/db_setup.ex +++ b/packages/sync-service/test/support/db_setup.ex @@ -9,7 +9,7 @@ defmodule Support.DbSetup do ] 
def with_unique_db(ctx) do - base_config = Application.fetch_env!(:electric, :connection_opts) + base_config = Application.fetch_env!(:electric, :default_connection_opts) {:ok, utility_pool} = start_db_pool(base_config) Process.unlink(utility_pool) @@ -56,7 +56,7 @@ defmodule Support.DbSetup do end def with_shared_db(_ctx) do - config = Application.fetch_env!(:electric, :connection_opts) + config = Application.fetch_env!(:electric, :default_connection_opts) {:ok, pool} = start_db_pool(config) {:ok, %{pool: pool, db_config: config, db_conn: pool}} end diff --git a/packages/sync-service/test/support/mocks.ex b/packages/sync-service/test/support/mocks.ex index 213be0d307..8899b38ec3 100644 --- a/packages/sync-service/test/support/mocks.ex +++ b/packages/sync-service/test/support/mocks.ex @@ -3,4 +3,5 @@ defmodule Support.Mock do Mox.defmock(Support.Mock.ShapeCache, for: Electric.ShapeCacheBehaviour) Mox.defmock(Support.Mock.Inspector, for: Electric.Postgres.Inspector) Mox.defmock(Support.Mock.ShapeStatus, for: Electric.ShapeCache.ShapeStatusBehaviour) + Mox.defmock(Support.Mock.PersistentKV, for: Electric.PersistentKV) end diff --git a/packages/sync-service/test/support/test_storage.ex b/packages/sync-service/test/support/test_storage.ex index ccb253c1b2..af09010312 100644 --- a/packages/sync-service/test/support/test_storage.ex +++ b/packages/sync-service/test/support/test_storage.ex @@ -38,10 +38,10 @@ defmodule Support.TestStorage do end @impl Electric.ShapeCache.Storage - def for_shape(shape_id, {parent, init, storage}) do - send(parent, {__MODULE__, :for_shape, shape_id}) + def for_shape(shape_id, tenant_id, {parent, init, storage}) do + send(parent, {__MODULE__, :for_shape, shape_id, tenant_id}) shape_init = Map.get(init, shape_id, []) - {parent, shape_id, shape_init, Storage.for_shape(shape_id, storage)} + {parent, shape_id, shape_init, Storage.for_shape(shape_id, tenant_id, storage)} end @impl Electric.ShapeCache.Storage diff --git 
a/packages/typescript-client/src/client.ts b/packages/typescript-client/src/client.ts index 1aa3f3cc13..b5df0d0866 100644 --- a/packages/typescript-client/src/client.ts +++ b/packages/typescript-client/src/client.ts @@ -26,6 +26,7 @@ import { SHAPE_ID_QUERY_PARAM, SHAPE_SCHEMA_HEADER, WHERE_QUERY_PARAM, + DATABASE_ID_QUERY_PARAM, } from './constants' /** @@ -37,6 +38,13 @@ export interface ShapeStreamOptions { * directly or a proxy. E.g. for a local Electric instance, you might set `http://localhost:3000/v1/shape/foo` */ url: string + + /** + * Which database to use. + * This is optional unless Electric is used with multiple databases. + */ + databaseId?: string + /** * The where clauses for the shape. */ @@ -158,6 +166,7 @@ export class ShapeStream = Row> #isUpToDate: boolean = false #connected: boolean = false #shapeId?: string + #databaseId?: string #schema?: Schema #error?: unknown @@ -167,6 +176,7 @@ export class ShapeStream = Row> this.#lastOffset = this.options.offset ?? `-1` this.#liveCacheBuster = `` this.#shapeId = this.options.shapeId + this.#databaseId = this.options.databaseId this.#messageParser = new MessageParser(options.parser) const baseFetchClient = @@ -227,6 +237,10 @@ export class ShapeStream = Row> fetchUrl.searchParams.set(SHAPE_ID_QUERY_PARAM, this.#shapeId!) } + if (this.#databaseId) { + fetchUrl.searchParams.set(DATABASE_ID_QUERY_PARAM, this.#databaseId!) 
+ } + let response!: Response try { response = await this.#fetchClient(fetchUrl.toString(), { diff --git a/packages/typescript-client/src/constants.ts b/packages/typescript-client/src/constants.ts index c2ba435eab..ab465a9d73 100644 --- a/packages/typescript-client/src/constants.ts +++ b/packages/typescript-client/src/constants.ts @@ -5,6 +5,7 @@ export const CHUNK_LAST_OFFSET_HEADER = `electric-chunk-last-offset` export const CHUNK_UP_TO_DATE_HEADER = `electric-chunk-up-to-date` export const SHAPE_SCHEMA_HEADER = `electric-schema` export const SHAPE_ID_QUERY_PARAM = `shape_id` +export const DATABASE_ID_QUERY_PARAM = `database_id` export const OFFSET_QUERY_PARAM = `offset` export const WHERE_QUERY_PARAM = `where` export const COLUMNS_QUERY_PARAM = `columns` diff --git a/packages/typescript-client/test/integration.test.ts b/packages/typescript-client/test/integration.test.ts index a097035172..19e40910f9 100644 --- a/packages/typescript-client/test/integration.test.ts +++ b/packages/typescript-client/test/integration.test.ts @@ -9,11 +9,14 @@ import { IssueRow, testWithIssuesTable as it, testWithMultitypeTable as mit, + testWithMultiTenantIssuesTable as multiTenantIt, } from './support/test-context' import * as h from './support/test-helpers' const BASE_URL = inject(`baseUrl`) - +const OTHER_DATABASE_URL = inject(`otherDatabaseUrl`) +const databaseId = inject(`databaseId`) +const otherDatabaseId = inject(`otherDatabaseId`) it(`sanity check`, async ({ dbClient, issuesTableSql }) => { const result = await dbClient.query(`SELECT * FROM ${issuesTableSql}`) @@ -659,7 +662,7 @@ describe(`HTTP Sync`, () => { } }) - await clearShape(issuesTableUrl, issueStream.shapeId!) + await clearShape(issuesTableUrl, { shapeId: issueStream.shapeId! 
}) expect(shapeData).toEqual( new Map([[`${issuesTableKey}/"${id1}"`, { id: id1, title: `foo1` }]]) @@ -947,3 +950,177 @@ describe(`HTTP Sync`, () => { }) }) }) + +describe.sequential(`Multi tenancy sync`, () => { + it(`should allow new databases to be added`, async () => { + const url = new URL(`${BASE_URL}/v1/admin/database`) + + // Add the database + const res = await fetch(url.toString(), { + method: `POST`, + headers: { + Accept: `application/json`, + 'Content-Type': `application/json`, + }, + body: JSON.stringify({ + database_id: otherDatabaseId, + database_url: OTHER_DATABASE_URL, + }), + }) + + expect(res.status).toBe(200) + const body = await res.json() + expect(body).toBe(otherDatabaseId) + }) + + it(`should serve original database`, async ({ + issuesTableUrl, + aborter, + insertIssues, + }) => { + const id = await insertIssues({ title: `test issue` }) + + const shapeData = new Map() + const issueStream = new ShapeStream({ + url: `${BASE_URL}/v1/shape/${issuesTableUrl}`, + databaseId, + subscribe: false, + signal: aborter.signal, + }) + + await new Promise((resolve, reject) => { + issueStream.subscribe((messages) => { + messages.forEach((message) => { + if (isChangeMessage(message)) { + shapeData.set(message.key, message.value) + } + if (isUpToDateMessage(message)) { + aborter.abort() + return resolve() + } + }) + }, reject) + }) + + const values = [...shapeData.values()] + expect(values).toHaveLength(1) + expect(values[0]).toMatchObject({ + id: id[0], + title: `test issue`, + }) + }) + + multiTenantIt( + `should serve new database`, + async ({ issuesTableUrl, aborter, insertIssuesToOtherDb }) => { + const id = await insertIssuesToOtherDb({ title: `test issue in new db` }) + + const shapeData = new Map() + const issueStream = new ShapeStream({ + url: `${BASE_URL}/v1/shape/${issuesTableUrl}`, + databaseId: otherDatabaseId, + subscribe: false, + signal: aborter.signal, + }) + + await new Promise((resolve, reject) => { + issueStream.subscribe((messages) => { 
+ messages.forEach((message) => { + if (isChangeMessage(message)) { + shapeData.set(message.key, message.value) + } + if (isUpToDateMessage(message)) { + aborter.abort() + return resolve() + } + }) + }, reject) + }) + + const values = [...shapeData.values()] + expect(values).toHaveLength(1) + expect(values[0]).toMatchObject({ + id: id[0], + title: `test issue in new db`, + }) + } + ) + + multiTenantIt( + `should serve both databases in live mode`, + async ({ + issuesTableUrl, + aborter, + otherAborter, + insertIssues, + insertIssuesToOtherDb, + }) => { + // Set up streams for both databases + const defaultStream = new ShapeStream({ + url: `${BASE_URL}/v1/shape/${issuesTableUrl}`, + databaseId, + subscribe: true, + signal: aborter.signal, + }) + + const otherStream = new ShapeStream({ + url: `${BASE_URL}/v1/shape/${issuesTableUrl}`, + databaseId: otherDatabaseId, + subscribe: true, + signal: otherAborter.signal, + }) + + const defaultData = new Map() + const otherData = new Map() + + // Set up subscriptions + defaultStream.subscribe((messages) => { + messages.forEach((message) => { + if (isChangeMessage(message)) { + defaultData.set(message.key, message.value) + } + }) + }) + + otherStream.subscribe((messages) => { + messages.forEach((message) => { + if (isChangeMessage(message)) { + otherData.set(message.key, message.value) + } + }) + }) + + // Insert data into both databases + const defaultId = await insertIssues({ title: `default db issue` }) + const otherId = await insertIssuesToOtherDb({ title: `other db issue` }) + + // Give time for updates to propagate + await sleep(1000) + + // Verify data from default database + expect([...defaultData.values()]).toHaveLength(1) + expect([...defaultData.values()][0]).toMatchObject({ + id: defaultId[0], + title: `default db issue`, + }) + + // Verify data from other database + expect([...otherData.values()]).toHaveLength(1) + expect([...otherData.values()][0]).toMatchObject({ + id: otherId[0], + title: `other db issue`, + }) 
+ } + ) + + it(`should allow databases to be deleted`, async () => { + const url = new URL(`${BASE_URL}/v1/admin/database/${otherDatabaseId}`) + + // Delete the database + const res = await fetch(url.toString(), { method: `DELETE` }) + + expect(res.status).toBe(200) + const body = await res.json() + expect(body).toBe(otherDatabaseId) + }) +}) diff --git a/packages/typescript-client/test/support/global-setup.ts b/packages/typescript-client/test/support/global-setup.ts index 039b4e0ce0..8eb55f6d4a 100644 --- a/packages/typescript-client/test/support/global-setup.ts +++ b/packages/typescript-client/test/support/global-setup.ts @@ -1,8 +1,14 @@ import type { GlobalSetupContext } from 'vitest/node' import { makePgClient } from './test-helpers' +import { Client } from 'pg' const url = process.env.ELECTRIC_URL ?? `http://localhost:3000` const proxyUrl = process.env.ELECTRIC_PROXY_CACHE_URL ?? `http://localhost:3002` +const databaseId = process.env.DATABASE_ID ?? `test_tenant` +const otherDatabaseId = `other_test_tenant` +const otherDatabaseUrl = + process.env.OTHER_DATABASE_URL ??
+ `postgresql://postgres:password@localhost:54322/electric?sslmode=disable` // name of proxy cache container to execute commands against, // see docker-compose.yml that spins it up for details @@ -18,6 +24,9 @@ declare module 'vitest' { testPgSchema: string proxyCacheContainerName: string proxyCachePath: string + databaseId: string + otherDatabaseId: string + otherDatabaseUrl: string } } @@ -29,7 +38,7 @@ function waitForElectric(url: string): Promise { ) const tryHealth = async () => - fetch(`${url}/v1/health`) + fetch(`${url}/v1/health?database_id=${databaseId}`) .then(async (res): Promise => { if (!res.ok) return tryHealth() const { status } = (await res.json()) as { status: string } @@ -54,17 +63,27 @@ export default async function ({ provide }: GlobalSetupContext) { await waitForElectric(url) const client = makePgClient() - await client.connect() - await client.query(`CREATE SCHEMA IF NOT EXISTS electric_test`) + const otherClient = new Client(otherDatabaseUrl) + const clients = [client, otherClient] + + for (const c of clients) { + await c.connect() + await c.query(`CREATE SCHEMA IF NOT EXISTS electric_test`) + } provide(`baseUrl`, url) provide(`testPgSchema`, `electric_test`) provide(`proxyCacheBaseUrl`, proxyUrl) provide(`proxyCacheContainerName`, proxyCacheContainerName) provide(`proxyCachePath`, proxyCachePath) + provide(`databaseId`, databaseId) + provide(`otherDatabaseId`, otherDatabaseId) + provide(`otherDatabaseUrl`, otherDatabaseUrl) return async () => { - await client.query(`DROP SCHEMA electric_test CASCADE`) - await client.end() + for (const c of clients) { + await c.query(`DROP SCHEMA electric_test CASCADE`) + await c.end() + } } } diff --git a/packages/typescript-client/test/support/test-context.ts b/packages/typescript-client/test/support/test-context.ts index 3651211354..4736ab50ab 100644 --- a/packages/typescript-client/test/support/test-context.ts +++ b/packages/typescript-client/test/support/test-context.ts @@ -11,7 +11,10 @@ export type 
UpdateIssueFn = (row: IssueRow) => Promise> export type DeleteIssueFn = (row: IssueRow) => Promise> export type InsertIssuesFn = (...rows: GeneratedIssueRow[]) => Promise export type ClearIssuesShapeFn = (shapeId?: string) => Promise -export type ClearShapeFn = (table: string, shapeId?: string) => Promise +export type ClearShapeFn = ( + table: string, + options?: { shapeId?: string; databaseId?: string } +) => Promise export const testWithDbClient = test.extend<{ dbClient: Client @@ -35,24 +38,58 @@ export const testWithDbClient = test.extend<{ baseUrl: async ({}, use) => use(inject(`baseUrl`)), pgSchema: async ({}, use) => use(inject(`testPgSchema`)), clearShape: async ({}, use) => { - await use(async (table: string, shapeId?: string) => { - const baseUrl = inject(`baseUrl`) - const resp = await fetch( - `${baseUrl}/v1/shape/${table}${shapeId ? `?shape_id=${shapeId}` : ``}`, - { - method: `DELETE`, + await use( + async ( + table: string, + options: { + databaseId?: string + shapeId?: string + } = {} + ) => { + const baseUrl = inject(`baseUrl`) + const url = new URL(`${baseUrl}/v1/shape/${table}`) + + if (!options.databaseId) { + options.databaseId = inject(`databaseId`) } - ) - if (!resp.ok) { - console.error( - await FetchError.fromResponse( - resp, - `DELETE ${baseUrl}/v1/shape/${table}` + + url.searchParams.set(`database_id`, options.databaseId) + + if (options.shapeId) { + url.searchParams.set(`shape_id`, options.shapeId) + } + + const resp = await fetch(url.toString(), { method: `DELETE` }) + if (!resp.ok) { + console.error( + await FetchError.fromResponse(resp, `DELETE ${url.toString()}`) + ) + throw new Error( + `Could not delete shape ${table} with ID ${options.shapeId}` ) - ) - throw new Error(`Could not delete shape ${table} with ID ${shapeId}`) + } } + ) + }, +}) + +export const testWithDbClients = testWithDbClient.extend<{ + otherDbClient: Client + otherAborter: AbortController +}>({ + otherDbClient: async ({}, use) => { + const client = new Client({ + 
connectionString: inject(`otherDatabaseUrl`), + options: `-csearch_path=${inject(`testPgSchema`)}`, }) + await client.connect() + await use(client) + await client.end() + }, + otherAborter: async ({}, use) => { + const controller = new AbortController() + await use(controller) + controller.abort(`Test complete`) }, }) @@ -115,8 +152,74 @@ export const testWithIssuesTable = testWithDbClient.extend<{ }), clearIssuesShape: async ({ clearShape, issuesTableUrl }, use) => { - use((shapeId?: string) => clearShape(issuesTableUrl, shapeId)) + use((shapeId?: string) => clearShape(issuesTableUrl, { shapeId })) + }, +}) + +export const testWithMultiTenantIssuesTable = testWithDbClients.extend<{ + issuesTableSql: string + issuesTableUrl: string + insertIssues: InsertIssuesFn + insertIssuesToOtherDb: InsertIssuesFn +}>({ + issuesTableSql: async ({ dbClient, otherDbClient, task }, use) => { + const tableName = `"issues for ${task.id}_${Math.random().toString(16)}"` + const clients = [dbClient, otherDbClient] + const queryProms = clients.map((client) => + client.query(` + DROP TABLE IF EXISTS ${tableName}; + CREATE TABLE ${tableName} ( + id UUID PRIMARY KEY, + title TEXT NOT NULL, + priority INTEGER NOT NULL + ); + COMMENT ON TABLE ${tableName} IS 'Created for ${task.file?.name.replace(/'/g, `\``) ?? 
`unknown`} - ${task.name.replace(`'`, `\``)}'; + `) + ) + + await Promise.all(queryProms) + + await use(tableName) + + const cleanupProms = clients.map((client) => + client.query(`DROP TABLE ${tableName}`) + ) + await Promise.all(cleanupProms) }, + issuesTableUrl: async ({ issuesTableSql, pgSchema, clearShape }, use) => { + const urlAppropriateTable = pgSchema + `.` + issuesTableSql + await use(urlAppropriateTable) + // ignore errors - clearShape has its own logging + // we don't want to interrupt cleanup + await Promise.allSettled([ + clearShape(urlAppropriateTable), + clearShape(urlAppropriateTable, { + databaseId: inject(`otherDatabaseId`), + }), + ]) + }, + insertIssues: ({ issuesTableSql, dbClient }, use) => + use(async (...rows) => { + const placeholders = rows.map( + (_, i) => `($${i * 3 + 1}, $${i * 3 + 2}, $${i * 3 + 3})` + ) + const { rows: result } = await dbClient.query( + `INSERT INTO ${issuesTableSql} (id, title, priority) VALUES ${placeholders} RETURNING id`, + rows.flatMap((x) => [x.id ?? uuidv4(), x.title, 10]) + ) + return result.map((x) => x.id) + }), + insertIssuesToOtherDb: ({ issuesTableSql, otherDbClient }, use) => + use(async (...rows) => { + const placeholders = rows.map( + (_, i) => `($${i * 3 + 1}, $${i * 3 + 2}, $${i * 3 + 3})` + ) + const { rows: result } = await otherDbClient.query( + `INSERT INTO ${issuesTableSql} (id, title, priority) VALUES ${placeholders} RETURNING id`, + rows.flatMap((x) => [x.id ?? 
uuidv4(), x.title, 10]) + ) + return result.map((x) => x.id) + }), }) export const testWithMultitypeTable = testWithDbClient.extend<{ diff --git a/packages/typescript-client/vitest.config.ts b/packages/typescript-client/vitest.config.ts index 6f1bf248c0..9ca6c004fe 100644 --- a/packages/typescript-client/vitest.config.ts +++ b/packages/typescript-client/vitest.config.ts @@ -4,5 +4,6 @@ export default defineConfig({ test: { globalSetup: `test/support/global-setup.ts`, typecheck: { enabled: true }, + fileParallelism: false, }, }) diff --git a/website/electric-api.yaml b/website/electric-api.yaml index 755bed46e0..90a7fa88da 100644 --- a/website/electric-api.yaml +++ b/website/electric-api.yaml @@ -47,6 +47,13 @@ paths: using a `.` delimiter, such as `foo.issues`. If you don't provide a schema prefix, then the table is assumed to be in the `public.` schema. # Query parameters + - name: database_id + in: query + schema: + type: string + description: |- + The ID of the database to sync from. + This is required only if Electric manages several databases. - name: offset in: query schema: @@ -293,6 +300,8 @@ paths: any new content to process. "400": description: Bad request. + "404": + description: Database not found. "409": description: The requested offset for the given shape no longer exists. @@ -363,6 +372,13 @@ paths: Can be qualified by the schema name. # Query parameters + - name: database_id + in: query + schema: + type: string + description: |- + The ID of the database from which to delete the shape. + This is required only if Electric manages several databases. - name: shape_id in: query schema: @@ -379,4 +395,88 @@ paths: "400": description: Bad request. "404": - description: Not found (or shape deletion is not enabled). + description: Database or shape not found (or shape deletion is not enabled). + /v1/admin/database: + post: + summary: Add Database + description: |- + Adds a database to Electric. 
+ requestBody: + required: true + content: + application/json: + schema: + type: object + required: + - database_url + - database_id + properties: + database_url: + type: string + description: PostgreSQL connection URL for the database + database_use_ipv6: + type: boolean + default: false + description: Whether to use IPv6 for database connections + database_id: + type: string + description: Unique identifier for the database + responses: + "200": + description: Database successfully added + content: + application/json: + schema: + type: string + description: The database ID of the added database + "400": + description: Bad request + content: + application/json: + schema: + type: string + description: Error message + examples: + already_exists: + value: "Database already exists." + db_in_use: + value: "The database localhost:54321/db is already in use by another tenant." + /v1/admin/database/{database_id}: + delete: + summary: Remove Database + description: |- + Removes a database from Electric. + parameters: + - name: database_id + in: path + required: true + schema: + type: string + description: The ID of the database to remove + responses: + "200": + description: Database successfully removed + content: + application/json: + schema: + type: string + description: The ID of the removed database + "400": + description: Bad request + content: + application/json: + schema: + type: object + properties: + database_id: + type: array + items: + type: string + description: Validation error messages + "404": + description: Database not found + content: + application/json: + schema: + type: string + example: "Database {id} not found."
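
For reviewers trying out the multi-tenant routing, the query-param wiring this PR adds to `client.ts` can be sketched standalone. `buildShapeUrl` below is a hypothetical helper, not part of this diff; it only mirrors how the client appends `DATABASE_ID_QUERY_PARAM` and `SHAPE_ID_QUERY_PARAM` to the shape URL:

```typescript
// Hypothetical helper (not in this PR) mirroring how ShapeStream appends
// the new `database_id` query param alongside the existing `shape_id` one.
const DATABASE_ID_QUERY_PARAM = `database_id`
const SHAPE_ID_QUERY_PARAM = `shape_id`

function buildShapeUrl(
  baseUrl: string,
  table: string,
  opts: { databaseId?: string; shapeId?: string } = {}
): string {
  const url = new URL(`${baseUrl}/v1/shape/${table}`)
  // Only set the param when a databaseId is given, matching the optional
  // semantics of `ShapeStreamOptions.databaseId` (single-tenant setups omit it).
  if (opts.databaseId) {
    url.searchParams.set(DATABASE_ID_QUERY_PARAM, opts.databaseId)
  }
  if (opts.shapeId) {
    url.searchParams.set(SHAPE_ID_QUERY_PARAM, opts.shapeId)
  }
  return url.toString()
}

console.log(
  buildShapeUrl(`http://localhost:3000`, `public.issues`, {
    databaseId: `other_test_tenant`,
  })
)
// → http://localhost:3000/v1/shape/public.issues?database_id=other_test_tenant
```

In the PR itself this logic lives inside `ShapeStream`'s fetch loop; when `databaseId` is omitted the request is still valid as long as Electric manages a single database.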