chore: e2e test for multi tenancy #1937

Closed · 52 commits
33f86ac
Add TenantManager that spawns the necessary processes per tenant.
kevin-dp Oct 3, 2024
8b3a2b1
Initialize separate storage backend for each tenant
alco Oct 17, 2024
9fb6801
Fix tests
alco Oct 17, 2024
68ae922
mix format
alco Oct 17, 2024
48959b1
Fix missing :storage and :tenant_id options where needed
alco Oct 17, 2024
0c47719
Support configuring a default tenant when running in any environment,…
alco Oct 17, 2024
b03deea
Use default tenant for TS integration tests on CI
alco Oct 17, 2024
582bc3c
Adjust the health check endpoint for CI
alco Oct 17, 2024
621b5b9
fixup! Support configuring a default tenant when running in any envir…
alco Oct 17, 2024
65bba0a
(tmp) Load requested or default tenant in DeleteShapePlug
alco Oct 17, 2024
511034b
Add tenant id to .env.dev
msfstef Oct 21, 2024
7789bc2
Fix typo in options passed to plug.
kevin-dp Oct 21, 2024
5717a7d
Remove commented tests because they are either obsolete or included i…
kevin-dp Oct 22, 2024
d504498
Fix unit tests.
kevin-dp Oct 23, 2024
7950e79
Make default tenant optional
kevin-dp Oct 24, 2024
680cb11
rename TENANT_ID env var to DATABASE_ID
kevin-dp Oct 24, 2024
8f8d805
Modify health check endpoint to take a database_id
kevin-dp Oct 24, 2024
77349d4
Restore allow_shape_deletion config option needed for delete shape plug.
kevin-dp Oct 24, 2024
0eb7da0
Fix react hooks test
kevin-dp Oct 24, 2024
7daa40b
Fix rolling deploy integration test
kevin-dp Oct 24, 2024
8d33523
Unit tests for add database plug.
kevin-dp Oct 28, 2024
d368e1f
Add optional databaseId parameter to the ShapeStream for selecting a DB …
kevin-dp Oct 28, 2024
f67d3b0
Integration test for multi tenancy
kevin-dp Oct 28, 2024
fb82a84
Fix error message in delete shape plug
kevin-dp Oct 28, 2024
c4f1d5f
Plug for deleting a tenant
kevin-dp Oct 28, 2024
30b25ab
Fix clearShape in test setup
kevin-dp Oct 28, 2024
10a752c
Do not take the tenant ID of the 2nd tenant for the integration test …
kevin-dp Oct 28, 2024
cfa4476
Rename delete DB plug to remove DB plug.
kevin-dp Oct 28, 2024
2073698
Unit tests for remove DB plug
kevin-dp Oct 28, 2024
cc57b05
Remove obsolete comment
kevin-dp Oct 28, 2024
99fb471
Extract duplicated functions for loading tenant to utility module.
kevin-dp Oct 29, 2024
83db0e6
Return 404 if tenant is not found
kevin-dp Oct 29, 2024
11f1c37
Update OpenAPI spec
kevin-dp Oct 29, 2024
9cd866f
Rename id parameter to database_id in add DB plug
kevin-dp Oct 29, 2024
b55d81b
Disable file parallelism in vitest to avoid flaky tests due to some u…
kevin-dp Oct 29, 2024
d10aa7d
Handle failure to parse connection string
kevin-dp Oct 29, 2024
566e809
Store references to per-tenant ETS tables in a global ETS table.
kevin-dp Oct 30, 2024
c08277f
WIP shutting down
kevin-dp Oct 30, 2024
c97133d
Reverse error messages
msfstef Oct 30, 2024
907608e
Shutdown tenant processes and clean up ETS table when tenant is deleted.
kevin-dp Oct 30, 2024
36e9a15
Introduce a with_supervised_tenant in component_setup.
kevin-dp Oct 31, 2024
9a622f2
Rebase on top of main
kevin-dp Oct 31, 2024
1e4f0c5
Pass tenant_id option to stop_tenant
kevin-dp Oct 31, 2024
fff28af
Persist tenants
kevin-dp Oct 31, 2024
75c24bb
Unit test tenant persistence module
kevin-dp Nov 4, 2024
be59a66
Remove obsolete tenant deletion in component setup
kevin-dp Nov 4, 2024
1a5b44e
Remove tenant from disk when the tenant is deleted.
kevin-dp Nov 4, 2024
8e830eb
Unit test that tenant manager loads tenants from storage
kevin-dp Nov 4, 2024
476cf3d
Formatting
kevin-dp Nov 5, 2024
0c26de3
Generate unique tenant ID if DB URL is provided but no tenant ID is p…
kevin-dp Nov 5, 2024
a7bb981
Log message when tenant is reloaded from storage
kevin-dp Nov 5, 2024
e0ff4ef
e2e test for multi tenancy
kevin-dp Nov 6, 2024
4 changes: 3 additions & 1 deletion .github/workflows/ts_test.yml
Original file line number Diff line number Diff line change
Expand Up @@ -70,6 +70,8 @@ jobs:
defaults:
run:
working-directory: ${{ matrix.package_dir }}
env:
DATABASE_ID: ci_test_tenant
steps:
- uses: actions/checkout@v4
- uses: erlef/setup-beam@v1
Expand Down Expand Up @@ -111,7 +113,7 @@ jobs:
mix run --no-halt &

wait-on: |
http-get://localhost:3000/v1/health
http-get://localhost:3000/v1/health?database_id=${{ env.DATABASE_ID }}

tail: true
log-output-resume: stderr
Expand Down
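The workflow change above points the CI readiness probe at the per-tenant health endpoint. A minimal sketch of how that URL is composed from the `DATABASE_ID` env var (host and port taken from the diff; this only builds the string, it contacts no server):

```shell
# Compose the per-tenant health-check URL used by the wait-on step.
# DATABASE_ID mirrors the env var added in the workflow above.
DATABASE_ID=ci_test_tenant
HEALTH_URL="http-get://localhost:3000/v1/health?database_id=${DATABASE_ID}"
echo "$HEALTH_URL"
```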
55 changes: 50 additions & 5 deletions integration-tests/tests/_macros.luxinc
Expand Up @@ -5,8 +5,8 @@
[global pg_host_port=54331]
[global database_url=postgresql://postgres:password@localhost:$pg_host_port/postgres?sslmode=disable]

[macro setup_pg initdb_args config_opts]
[shell pg]
[macro setup_pg_with_name shell_name initdb_args config_opts]
[shell $shell_name]
-$fail_pattern

!docker run \
Expand All @@ -27,6 +27,10 @@
-
[endmacro]

[macro setup_pg initdb_args config_opts]
[invoke setup_pg_with_name "pg" $initdb_args $config_opts]
[endmacro]

[macro stop_pg]
[shell pg_lifecycle]
# This timeout is needed until https://github.com/electric-sql/electric/issues/1632 is fixed.
Expand All @@ -48,11 +52,15 @@
??database system is ready to accept connections
[endmacro]

[macro start_psql]
[shell psql]
[macro start_psql_shell shell_name pg_container_name]
[shell $shell_name]
!docker exec -u postgres -it $pg_container_name psql
[endmacro]

[macro start_psql]
[invoke start_psql_shell psql $pg_container_name]
[endmacro]

[macro seed_pg]
[shell psql]
!docker exec -u postgres -it $pg_container_name psql
Expand Down Expand Up @@ -80,6 +88,10 @@
[endmacro]

[macro setup_electric]
[invoke setup_electric_with_env "DATABASE_ID=integration_test_tenant DATABASE_URL=$database_url"]
[endmacro]

[macro setup_multi_tenant_electric]
[invoke setup_electric_with_env ""]
[endmacro]

Expand All @@ -91,7 +103,22 @@
[shell $shell_name]
-$fail_pattern

!DATABASE_URL=$database_url PORT=$port $env ../scripts/electric_dev.sh
!PORT=$port $env ../scripts/electric_dev.sh
[endmacro]

[macro add_tenant tenant_id electric_port]
[shell $tenant_id]
!curl -X POST http://localhost:$electric_port/v1/admin/database \
-H "Content-Type: application/json" \
-d "{\"database_id\":\"$tenant_id\",\"DATABASE_URL\":\"$database_url\"}"
??"$tenant_id"
[endmacro]

[macro check_tenant_status tenant_id expected_status electric_port]
[shell $tenant_id]
[invoke wait-for "curl -X GET http://localhost:$electric_port/v1/health?database_id=$tenant_id" "\{\"status\":\"$expected_status\"\}" 10 $PS1]
#!curl -X GET http://localhost:$electric_port/v1/health?database_id=$tenant_id
#??{"status":"$expected_status"}
[endmacro]

[macro teardown]
Expand All @@ -101,3 +128,21 @@
!../scripts/clean_up.sh
?$PS1
[endmacro]

[macro wait-for command match max_time prompt]
[loop i 1..$max_time]
@$match
!$command
??$command
?$prompt
[sleep 1]
[endloop]
# The last prompt won't match since the loop pattern will
# match before it, so match it here instead.
?$prompt

# Sync up after the loop.
!$command
??$command
?$prompt
[endmacro]
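The `wait-for` macro above retries a command until its output matches a pattern, syncing the shell prompt afterwards. For readers unfamiliar with Lux, a rough plain-shell equivalent (the `wait_for` function name and argument order are my own, not part of the PR):

```shell
# Retry a command until its output contains a pattern, up to max tries.
# Mirrors the intent of the Lux wait-for macro in plain POSIX shell.
wait_for() {
  cmd=$1
  pattern=$2
  max=$3
  i=1
  while [ "$i" -le "$max" ]; do
    out=$($cmd 2>/dev/null)
    case $out in
      *"$pattern"*) echo "matched"; return 0 ;;
    esac
    i=$((i + 1))
    sleep 1
  done
  echo "timed out"
  return 1
}

# Example: succeeds on the first try because echo prints the pattern.
wait_for "echo active" "active" 3
```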
2 changes: 1 addition & 1 deletion integration-tests/tests/invalidated-replication-slot.lux
Expand Up @@ -6,7 +6,7 @@

[my invalidated_slot_error=
"""
[error] GenServer Electric.Connection.Manager terminating
[error] :gen_statem {Electric.Registry.Processes, {Electric.Postgres.ReplicationClient, :default, "integration_test_tenant"}} terminating
** (Postgrex.Error) ERROR 55000 (object_not_in_prerequisite_state) cannot read from logical replication slot "electric_slot_integration"

This slot has been invalidated because it exceeded the maximum reserved size.
Expand Down
221 changes: 221 additions & 0 deletions integration-tests/tests/multi-tenancy.lux
@@ -0,0 +1,221 @@
[doc Verify support for multi tenancy]

[include _macros.luxinc]

[global tenant1_pg_container_name=multi-tenancy-tenant1__pg]
[global tenant1_pg_host_port=54331]
[global tenant1_database_url=postgresql://postgres:password@localhost:$tenant1_pg_host_port/postgres?sslmode=disable]

[global tenant2_pg_container_name=multi-tenancy-tenant2__pg]
[global tenant2_pg_host_port=54332]
[global tenant2_database_url=postgresql://postgres:password@localhost:$tenant2_pg_host_port/postgres?sslmode=disable]

###

## Start a new Postgres DB
[global pg_container_name=$tenant1_pg_container_name]
[global pg_host_port=$tenant1_pg_host_port]
[global database_url=$tenant1_database_url]
[invoke setup_pg_with_name "tenant1_pg" "" ""]

## Start Electric in multi tenancy mode
[invoke setup_multi_tenant_electric]

[shell electric]
??[info] Running Electric.Plug.Router with Bandit 1.5.5 at 0.0.0.0:3000 (http)

## Create tenant 1
[invoke add_tenant "tenant1" 3000]
[invoke check_tenant_status "tenant1" "active" 3000]

## Setup a second Postgres DB
[global pg_container_name=$tenant2_pg_container_name]
[global pg_host_port=$tenant2_pg_host_port]
[global database_url=$tenant2_database_url]
[invoke setup_pg_with_name "tenant2_pg" "" ""]

## Create tenant 2
[invoke add_tenant "tenant2" 3000]
[invoke check_tenant_status "tenant2" "active" 3000]

## Insert some data in both DBs
[invoke start_psql_shell "tenant1_psql" $tenant1_pg_container_name]
[invoke start_psql_shell "tenant2_psql" $tenant2_pg_container_name]

[shell tenant1_psql]
!CREATE TABLE items (id INT PRIMARY KEY, val TEXT);
??CREATE TABLE
!INSERT INTO items (id, val) VALUES (1, 'tenant1');
??INSERT 0 1

[shell tenant2_psql]
!CREATE TABLE items (id INT PRIMARY KEY, val TEXT);
??CREATE TABLE
!INSERT INTO items (id, val) VALUES (1, 'tenant2');
??INSERT 0 1

## Check that both tenants can query their data
[shell tenant1]
# Check tenant 1 data
!curl -i -X GET "http://localhost:3000/v1/shape/items?offset=-1&database_id=tenant1"
?\e\[1melectric-shape-id\e\[0m: ([\d-]+)
[local shape_id=$1]
?\e\[1melectric-chunk-last-offset\e\[0m: ([\d_]+)
[local offset=$1]
"""??
[{"key":"\"public\".\"items\"/\"1\"","value":{"id":"1","val":"tenant1"},"headers":{"operation":"insert","relation":["public","items"]},"offset":"$offset"}
]
"""

# Check tenant 2 data
[shell tenant2]
!curl -i -X GET "http://localhost:3000/v1/shape/items?offset=-1&database_id=tenant2"
?\e\[1melectric-shape-id\e\[0m: ([\d-]+)
[local shape_id=$1]
?\e\[1melectric-chunk-last-offset\e\[0m: ([\d_]+)
[local offset=$1]
"""??
[{"key":"\"public\".\"items\"/\"1\"","value":{"id":"1","val":"tenant2"},"headers":{"operation":"insert","relation":["public","items"]},"offset":"$offset"}
]
"""

## Now do a live query on tenant 1
[shell tenant1]
??$PS1
!curl -i -X GET "localhost:3000/v1/shape/items?offset=$offset&shape_id="$shape_id"&database_id=tenant1&live"

## And a live query on tenant 2
[shell tenant2]
??$PS1
!curl -i -X GET "localhost:3000/v1/shape/items?offset=$offset&shape_id="$shape_id"&database_id=tenant2&live"

## Insert some data in tenant 1
[shell tenant1_psql]
!INSERT INTO items (id, val) VALUES (2, 'tenant1');
??INSERT 0 1

## Insert some data in tenant 2
[shell tenant2_psql]
!INSERT INTO items (id, val) VALUES (2, 'tenant2');
??INSERT 0 1

## Check that tenant 1 sees the new data
[shell tenant1]
# give some time for the data to sync
[sleep 1]
?\e\[1melectric-chunk-last-offset\e\[0m: ([\d_]+)
[local offset=$1]
??[{"offset":"$offset","value":{"id":"2","val":"tenant1"},"key":"\"public\".\"items\"/\"2\"","headers":{"relation":["public","items"],"operation":"insert","txid":
?[\d+]
??}},{"headers":{"control":"up-to-date"}}]$PS1

## Check that tenant 2 sees the new data
[shell tenant2]
[sleep 1]
?\e\[1melectric-chunk-last-offset\e\[0m: ([\d_]+)
[local offset=$1]
??[{"offset":"$offset","value":{"id":"2","val":"tenant2"},"key":"\"public\".\"items\"/\"2\"","headers":{"relation":["public","items"],"operation":"insert","txid":
?[\d+]
??}},{"headers":{"control":"up-to-date"}}]$PS1

# Disable fail pattern for Electric as we are going to kill it
[shell electric]
-

## kill Electric
[shell orchestrator]
!kill $(lsof -ti:3000)
??$PS1

## restart Electric
[shell electric]
??$PS1
# Re-enable fail pattern for Electric
-$fail_pattern
[invoke setup_multi_tenant_electric]
?? Reloading tenant tenant1 from storage
?? Reloading tenant tenant2 from storage

## Make a query to check that they still see their data
[shell tenant1]
# wait for Electric to start
[invoke wait-for "lsof -Pi :3000 -sTCP:LISTEN" "TCP \*:3000" 10 $PS1]
# Query the shape
!curl -i -X GET "http://localhost:3000/v1/shape/items?offset=${offset}&shape_id=${shape_id}&database_id=tenant1"
???[{"headers":{"control":"up-to-date"}}]
??$PS1

[shell tenant2]
# wait for Electric to start
[invoke wait-for "lsof -Pi :3000 -sTCP:LISTEN" "TCP \*:3000" 10 $PS1]
# Query the shape
!curl -i -X GET "http://localhost:3000/v1/shape/items?offset=${offset}&shape_id=${shape_id}&database_id=tenant2"
???[{"headers":{"control":"up-to-date"}}]
??$PS1

## Make a live query on both and check that it still works
[shell tenant1]
!curl -i -X GET "localhost:3000/v1/shape/items?offset=$offset&shape_id="$shape_id"&database_id=tenant1&live"

[shell tenant2]
!curl -i -X GET "localhost:3000/v1/shape/items?offset=$offset&shape_id="$shape_id"&database_id=tenant2&live"

## Insert some data in tenant 1
[shell tenant1_psql]
!INSERT INTO items (id, val) VALUES (3, 'tenant 1');
??INSERT 0 1

## Insert some data in tenant 2
[shell tenant2_psql]
!INSERT INTO items (id, val) VALUES (3, 'tenant 2');
??INSERT 0 1

## Check that tenant 1 sees the new data
[shell tenant1]
# give some time for the data to sync
[sleep 1]
?\e\[1melectric-chunk-last-offset\e\[0m: ([\d_]+)
[local offset=$1]
??[{"offset":"$offset","value":{"id":"3","val":"tenant 1"},"key":"\"public\".\"items\"/\"3\"","headers":{"relation":["public","items"],"operation":"insert","txid":
?[\d+]
??}},{"headers":{"control":"up-to-date"}}]$PS1

## Check that tenant 2 sees the new data
[shell tenant2]
[sleep 1]
?\e\[1melectric-chunk-last-offset\e\[0m: ([\d_]+)
[local offset=$1]
??[{"offset":"$offset","value":{"id":"3","val":"tenant 2"},"key":"\"public\".\"items\"/\"3\"","headers":{"relation":["public","items"],"operation":"insert","txid":
?[\d+]
??}},{"headers":{"control":"up-to-date"}}]$PS1

[shell electric]
# disable fail pattern because deleting a tenant will stop the tenant processes
# which will output some error messages because of the shutdown
-

## delete one of the tenants
[shell orchestrator]
!curl -X DELETE "http://localhost:3000/v1/admin/database?database_id=tenant2"
???"tenant2"
??$PS1
# Verify that tenant 2 is deleted
!curl -X GET http://localhost:3000/v1/health?database_id=tenant2
???Database not found.
??$PS1
# Verify that tenant 1 still exists
[invoke check_tenant_status "tenant1" "active" 3000]

## kill Electric
[shell orchestrator]
!kill $(lsof -ti:3000)
??$PS1

## restart Electric and check that only tenant 1 is reloaded and not tenant 2
[shell electric]
??$PS1
# Set fail pattern to fail if tenant 2 is reloaded
-Reloading tenant tenant2 from storage
!PORT=3000 ../scripts/electric_dev.sh
?? Reloading tenant tenant1 from storage
??[info] Running Electric.Plug.Router with Bandit 1.5.5 at 0.0.0.0:3000 (http)
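The test above drives the whole tenant lifecycle through an admin endpoint plus the health endpoint. A sketch of the three requests it issues, with paths and payload shape copied from the macros (this only composes the requests and prints them; it does not contact a server):

```shell
# Compose (but do not send) the tenant lifecycle requests used above.
TENANT=tenant2
DB_URL='postgresql://postgres:password@localhost:54332/postgres?sslmode=disable'
BODY=$(printf '{"database_id":"%s","DATABASE_URL":"%s"}' "$TENANT" "$DB_URL")

echo "POST   /v1/admin/database  $BODY"              # add_tenant macro
echo "GET    /v1/health?database_id=$TENANT"         # check_tenant_status macro
echo "DELETE /v1/admin/database?database_id=$TENANT" # tenant deletion
```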
Original file line number Diff line number Diff line change
Expand Up @@ -75,15 +75,15 @@
?Txn received in Shapes.Consumer: %Electric.Replication.Changes.Transaction{xid: $xid

# Both consumers hit their call limit and exit with simulated storage failures.
?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "[0-9-]+"}} terminating
?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "integration_test_tenant", "[0-9-]+"}} terminating
??Simulated storage failure
?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "[0-9-]+"}} terminating
?\[error\] GenServer {Electric\.Registry\.Processes, {Electric\.Shapes\.Consumer, :default, "integration_test_tenant", "[0-9-]+"}} terminating
??Simulated storage failure

# The log collector process and the replication client both exit, as their lifetimes are tied
# together by the supervision tree design.
??[error] GenServer {Electric.Registry.Processes, {Electric.Replication.ShapeLogCollector, :default}} terminating
??[error] :gen_statem {Electric.Registry.Processes, {Electric.Postgres.ReplicationClient, :default}} terminating
??[error] GenServer {Electric.Registry.Processes, {Electric.Replication.ShapeLogCollector, :default, "integration_test_tenant"}} terminating
??[error] :gen_statem {Electric.Registry.Processes, {Electric.Postgres.ReplicationClient, :default, "integration_test_tenant"}} terminating

# Observe that both shape consumers and the replication client have restarted.
??[debug] Found existing replication slot
Expand Down