diff --git a/examples/gatekeeper-auth/.env.caddy b/examples/gatekeeper-auth/.env.caddy new file mode 100644 index 0000000000..3009186960 --- /dev/null +++ b/examples/gatekeeper-auth/.env.caddy @@ -0,0 +1 @@ +ELECTRIC_PROXY_URL=http://localhost:8080 \ No newline at end of file diff --git a/examples/gatekeeper-auth/.env.edge b/examples/gatekeeper-auth/.env.edge new file mode 100644 index 0000000000..0f1e51362f --- /dev/null +++ b/examples/gatekeeper-auth/.env.edge @@ -0,0 +1 @@ +ELECTRIC_PROXY_URL=http://localhost:8000 \ No newline at end of file diff --git a/examples/gatekeeper-auth/README.md b/examples/gatekeeper-auth/README.md new file mode 100644 index 0000000000..2b94c4c5b3 --- /dev/null +++ b/examples/gatekeeper-auth/README.md @@ -0,0 +1,335 @@ + +# Electric Gatekeeper Auth Example + +This example demonstrates a number of ways of implementing the [Gatekeeper auth pattern](https://electric-sql.com/docs/guides/auth#gatekeeper) for securing access to the [Electric sync service](https://electric-sql.com/product/sync). + +It includes: + +- an [`./api`](./api) service for generating auth tokens +- three options for validating those auth tokens when proxying requests to Electric: + - [`./api`](./api) the API itself + - [`./caddy`](./caddy) a Caddy web server as a reverse proxy + - [`./edge`](./edge) an edge function that you can run in front of a CDN + + +## How it works + +There are two steps to the gatekeeper pattern: + +1. first a client posts authentication credentials to a gatekeeper endpoint to generate an auth token +2. the client then makes requests to Electric via an authorising proxy that validates the auth token against the shape request + +The auth token can be *shape-scoped* (i.e.: can include a claim containing the shape definition). This allows the proxy to authorise a shape request by comparing the shape claim signed into the token with the [shape defined in the request parameters](https://electric-sql.com/docs/quickstart#http-api). 
This allows you to: + +- keep your main authorisation logic in your API (in the gatekeeper endpoint) where it's natural to do things like query the database and call external authorisation services; and to +- run your authorisation logic *once* when generating a token, rather than on the "hot path" of every shape request in your authorising proxy + +### Implementation + +The core of this example is an [Elixir/Phoenix](https://www.phoenixframework.org) web application in [`./api`](./api). This exposes (in [`api_web/router.ex`](./api/lib/api_web/router.ex)): + +1. a gatekeeper endpoint at `POST /gatekeeper/:table` +2. a proxy endpoint at `GET /proxy/v1/shape` + + + + + + Gatekeeper flow diagramme + + + +#### Gatekeeper endpoint + +1. the user makes a `POST` request to `POST /gatekeeper/:table` with some authentication credentials and a shape definition in the request parameters; the gatekeeper is then responsible for authorising the user's access to the shape +2. if access is granted, the gatekeeper generates a shape-scoped auth token and returns it to the client +3. the client can then use the auth token when connecting to the Electric HTTP API, via the proxy endpoint + +#### Proxy endpoint + +4. the proxy validates the JWT auth token and verifies that the shape definition in the token matches the shape being requested; if so it reverse-proxies the request onto Electric +5. Electric then handles the request as normal +6. sending a response back *through the proxy* to the client +7. the client can then process the data and make additional requests using the same auth token (step 3); if the auth token expires or is rejected, the client starts again (step 1). + + +## How to run + +There are three ways to run this example: + +1. with the [API as both gatekeeper and proxy](#1-api-as-gatekeeper-and-proxy) +2. with the [API as gatekeeper and Caddy as the proxy](#2-caddy-as-proxy) +3. 
with the [API as gatekeeper and an edge function as the proxy](#3-edge-function-as-proxy)
+
+It makes sense to run through these in order.
+
+### Pre-reqs
+
+You need [Docker Compose](https://docs.docker.com/compose/) and [curl](https://curl.se). We also (optionally) use [`psql`](https://www.postgresql.org/docs/current/app-psql.html) and pipe into [`jq`](https://jqlang.github.io/jq/) for JSON formatting.
+
+The instructions below all use the same [`./docker-compose.yaml`](./docker-compose.yaml) file in this folder, with a different set of services and environment variables for each configuration.
+
+> [!TIP]
+> All of the configurations are based on running Postgres and Electric. This is handled for you by the `./docker-compose.yaml`. However, if you're unfamiliar with how Electric works, it may be useful to go through the [Quickstart](https://electric-sql.com/docs/quickstart) and [Installation](https://electric-sql.com/docs/guides/installation) guides.
+
+### 1. API as gatekeeper and proxy
+
+Build the local API image:
+
+```shell
+docker compose build api
+```
+
+Run the `postgres`, `electric` and `api` services:
+
+```console
+$ docker compose up postgres electric api
+...
+gatekeeper-api-1  | 10:22:20.951 [info] == Migrated 20241108150947 in 0.0s
+gatekeeper-api-1  | 10:22:21.453 [info] Running ApiWeb.Endpoint with Bandit 1.5.7 at :::4000 (http)
+gatekeeper-api-1  | 10:22:21.455 [info] Access ApiWeb.Endpoint at http://localhost:4000
+```
+
+In a new terminal, make a `POST` request to the gatekeeper endpoint:
+
+```console
+$ curl -sX POST "http://localhost:4000/gatekeeper/items" | jq
+{
+  "headers": {
+    "Authorization": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJKb2tlbiIsImV4cCI6MTczMTUwMjM2OSwiaWF0IjoxNzMxNDk1MTY5LCJpc3MiOiJKb2tlbiIsImp0aSI6IjMwM28zYmx0czN2aHYydXNiazAwMDJrMiIsIm5iZiI6MTczMTQ5NTE2OSwic2hhcGUiOnsibmFtZXNwYWNlIjpudWxsLCJ0YWJsZSI6Iml0ZW1zIiwid2hlcmUiOm51bGwsImNvbHVtbnMiOm51bGx9fQ.8UZehIWk1EDQ3dJ4ggCBNkx9vGudfrD9appqs8r6zRI"
+  },
+  "url": "http://localhost:4000/proxy/v1/shape",
+  "table": "items"
+}
+```
+
+You'll see that the response contains:
+
+- the proxy `url` to make shape requests to (`http://localhost:4000/proxy/v1/shape`)
+- the request parameters for the shape we're requesting, in this case `"table": "items"`
+- an `Authorization` header, containing a `Bearer` token
+
+Copy the auth token and set it to an env var:
+
+```shell
+export AUTH_TOKEN=""
+```
+
+First let's make a `GET` request to the proxy endpoint *without* the auth token. It will be rejected with a `401` status:
+
+```console
+$ curl -sv "http://localhost:4000/proxy/v1/shape?table=items&offset=-1"
+...
+< HTTP/1.1 401 Unauthorized
+...
+```
+
+Now let's add the authorization header. The request will be successfully proxied through to Electric:
+
+```console
+$ curl -sv --header "Authorization: Bearer ${AUTH_TOKEN}" \
+  "http://localhost:4000/proxy/v1/shape?table=items&offset=-1"
+...
+< HTTP/1.1 200 OK
+...
+```
+
+However, if we try to request a different shape (i.e.: using different request parameters), the request will not match the shape signed into the auth token claims and will be rejected:
+
+```console
+$ curl -sv --header "Authorization: Bearer ${AUTH_TOKEN}" \
+  "http://localhost:4000/proxy/v1/shape?table=items&offset=-1&where=true"
+...
+< HTTP/1.1 403 Forbidden
+...
+```
+
+Note that we got an empty response when successfully proxied through to Electric above because there are no `items` in the database. If you like, you can create some, e.g. using `psql`:
+
+```console
+$ psql "postgresql://postgres:password@localhost:54321/electric?sslmode=disable"
+psql (16.4)
+Type "help" for help.
+
+electric=# \d
+         List of relations
+ Schema |       Name        | Type  |  Owner
+--------+-------------------+-------+----------
+ public | items             | table | postgres
+ public | schema_migrations | table | postgres
+(2 rows)
+
+electric=# select * from items;
+ id | value | inserted_at | updated_at
+----+-------+-------------+------------
+(0 rows)
+
+electric=# insert into items (id) values (gen_random_uuid());
+INSERT 0 1
+electric=# \q
+```
+
+Now re-run the successful request and you'll get data:
+
+```console
+$ curl -s --header "Authorization: Bearer ${AUTH_TOKEN}" \
+  "http://localhost:4000/proxy/v1/shape?table=items&offset=-1" | jq
+[
+  {
+    "key": "\"public\".\"items\"/\"b702e58e-9364-4d54-9360-8dda20cb4405\"",
+    "value": {
+      "id": "b702e58e-9364-4d54-9360-8dda20cb4405",
+      "value": null,
+      "inserted_at": "2024-11-13 10:45:33",
+      "updated_at": "2024-11-13 10:45:33"
+    },
+    "headers": {
+      "operation": "insert",
+      "relation": [
+        "public",
+        "items"
+      ]
+    },
+    "offset": "0_0"
+  }
+]
+```
+
+So far we've shown things working with Electric's lower-level [HTTP API](https://electric-sql.com/docs/api/http). You can also set up the [higher-level clients](https://electric-sql.com/docs/api/clients/typescript) to use an auth token.
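For example, the JSON returned by the gatekeeper endpoint maps directly onto the options for a client-side shape request. The sketch below is illustrative only (the `GatekeeperResponse` type and the helper names are ours, not part of any Electric library): it turns the gatekeeper response into the `url`, `params` and `headers` needed to fetch the shape.

```typescript
// Shape of the JSON returned by `POST /gatekeeper/:table` (see above).
interface GatekeeperResponse {
  url: string
  table: string
  headers: Record<string, string>
}

interface ShapeRequestOptions {
  url: string
  params: { table: string }
  headers: Record<string, string>
}

// Hypothetical helper: map the gatekeeper response onto shape request
// options, preserving the signed Authorization header.
function shapeRequestOptions(res: GatekeeperResponse): ShapeRequestOptions {
  return {
    url: res.url,
    params: { table: res.table },
    headers: res.headers,
  }
}

// Build the URL for the initial sync request (`offset=-1`), as used in
// the curl examples above.
function initialSyncUrl(opts: ShapeRequestOptions): string {
  const url = new URL(opts.url)
  url.searchParams.set('table', opts.params.table)
  url.searchParams.set('offset', '-1')
  return url.toString()
}
```

With the higher-level TypeScript client you would pass the `url` and `headers` through to its shape stream options rather than building URLs by hand; check the client documentation linked above for the exact API.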
See the [auth guide](https://electric-sql.com/docs/guides/auth) for more details. + +### 2. Caddy as proxy + +Build the local docker images: + +```shell +docker compose build api caddy +``` + +Run `postgres`, `electric`, `api` and `caddy` services with the `.env.caddy` env file: + +```shell +docker compose --env-file .env.caddy up postgres electric api caddy +``` + +As above, use the gatekeeper endpoint to generate an auth token. Note that the `url` in the response data has changed to point to Caddy: + +```console +$ curl -sX POST "http://localhost:4000/gatekeeper/items" | jq +{ + "headers": { + "Authorization": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJKb2tlbiIsImV4cCI6MTczMTUwNDUxNCwiaWF0IjoxNzMxNDk3MzE0LCJpc3MiOiJKb2tlbiIsImp0aSI6IjMwM283OGd1cWIxZ240ODhmazAwMDJnNCIsIm5iZiI6MTczMTQ5NzMxNCwic2hhcGUiOnsibmFtZXNwYWNlIjpudWxsLCJ0YWJsZSI6Iml0ZW1zIiwid2hlcmUiOm51bGwsImNvbHVtbnMiOm51bGx9fQ.EkSj-ro9-3chGyuxlAglOjo0Ln8t4HLVLQ4vCCNjMCY" + }, + "url": "http://localhost:8080/v1/shape", + "table": "items" +} +``` + +Copy the auth token and set it to an env var: + +```shell +export AUTH_TOKEN="" +``` + +An unauthorised request to Caddy will get a 401: + +```console +$ curl -sv "http://localhost:8080/v1/shape?table=items&offset=-1" +... +< HTTP/1.1 401 Unauthorized +< Server: Caddy +... +``` + +An authorised request for the correct shape will succeed: + +```console +$ curl -sv --header "Authorization: Bearer ${AUTH_TOKEN}" \ + "http://localhost:8080/v1/shape?table=items&offset=-1" +... +< HTTP/1.1 200 OK +... +``` + +Caddy validates the shape request against the shape definition signed into the auth token. So an authorised request *for the wrong shape* will fail: + +```console +$ curl -sv --header "Authorization: Bearer ${AUTH_TOKEN}" \ + "http://localhost:8080/v1/shape?table=items&offset=-1&where=true" +... +< HTTP/1.1 403 Forbidden +... +``` + +Take a look at the [`./caddy/Caddyfile`](./caddy/Caddyfile) for more details. + +### 3. 
Edge function as proxy
+
+Electric is [designed to run behind a CDN](https://electric-sql.com/docs/api/http#caching). This makes sync faster and more scalable. However, it means that if you want to authorise access to the Electric API using a proxy, you need to run that proxy in front of the CDN.
+
+You can do this with a centralised cloud proxy, such as an API endpoint deployed as part of a backend web service, or with a reverse proxy like Caddy deployed next to your Electric service. However, running these in front of a CDN from a central location reduces the benefit of the CDN, adding latency and introducing a bottleneck.
+
+It's often better (faster, more scalable and a more natural topology) to run your authorising proxy at the edge, between your CDN and your user. The gatekeeper pattern works well for this because it minimises both the logic that your edge proxy needs to perform and the network access and credentials that it needs to be granted.
+
+The example in the [`./edge`](./edge) folder contains a small [Deno HTTP server](https://docs.deno.com/runtime/fundamentals/http_server/) in the [`index.ts`](./edge/index.ts) file that's designed to work as a [Supabase Edge Function](https://supabase.com/docs/guides/functions/quickstart). See the README in that folder for more information about deploying to Supabase.
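The core check the edge proxy performs is small: compare the shape claim signed into the token against the shape defined by the request's query string. The sketch below is a simplified illustration under our own names and types (the real implementation lives in [`./edge/index.ts`](./edge/index.ts)); it omits the JWT signature verification and, for brevity, the `namespace` comparison.

```typescript
// A shape claim as signed into the token by the gatekeeper.
// `null` fields mean "not constrained", matching the claims shown above.
interface ShapeClaim {
  table: string
  namespace: string | null
  where: string | null
  columns: string | null
}

// Compare the claim against the shape in the request's query string.
// `URLSearchParams.get` returns null for absent params, so a null claim
// field only matches a request that omits that param.
function shapeMatches(claim: ShapeClaim, params: URLSearchParams): boolean {
  return (
    params.get('table') === claim.table &&
    params.get('where') === claim.where &&
    params.get('columns') === claim.columns
  )
}

// Illustrative status decision: 401 when there is no valid token,
// 403 when the token is valid but signed for a different shape.
function authorise(claim: ShapeClaim | null, requestUrl: string): number {
  if (claim === null) return 401 // missing or invalid token
  const params = new URL(requestUrl).searchParams
  return shapeMatches(claim, params) ? 200 : 403
}
```

This mirrors the status codes you'll see in the curl examples below: `401 Unauthorized` without a token, `200 OK` for the matching shape, `403 Forbidden` when the request parameters diverge from the claim.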
+ +Here, we'll run it locally using Docker in order to demonstrate it working with the other services: + +```shell +docker compose --env-file .env.edge up postgres electric api edge +``` + +Hit the gatekeeper endpoint to get an auth token: + +```console +$ curl -sX POST "http://localhost:4000/gatekeeper/items" | jq +{ + "headers": { + "Authorization": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJKb2tlbiIsImV4cCI6MTczMTUyNDQ1OSwiaWF0IjoxNzMxNTE3MjU5LCJpc3MiOiJKb2tlbiIsImp0aSI6IjMwM3BiaGdob2phcW5pYnE4YzAwMDAwMiIsIm5iZiI6MTczMTUxNzI1OSwic2hhcGUiOnsibmFtZXNwYWNlIjpudWxsLCJ0YWJsZSI6Iml0ZW1zIiwid2hlcmUiOm51bGwsImNvbHVtbnMiOm51bGx9fQ.dNAhTVEUtWGjAoX7IbwX1ccpwZP5sUYTIiTaJnSmaTU" + }, + "url": "http://localhost:8000/v1/shape", + "table": "items" +} +``` + +Copy the auth token and set it to an env var: + +```shell +export AUTH_TOKEN="" +``` + +An unauthorised request to the edge-function proxy will get a 401: + +```console +$ curl -sv "http://localhost:8000/v1/shape?table=items&offset=-1" +... +< HTTP/1.1 401 Unauthorized +... +``` + +An authorised request for the correct shape will succeed: + +```console +$ curl -sv --header "Authorization: Bearer ${AUTH_TOKEN}" \ + "http://localhost:8000/v1/shape?table=items&offset=-1" +... +< HTTP/1.1 200 OK +... +``` + +An authorised request for the wrong shape will fail: + +```console +$ curl -sv --header "Authorization: Bearer ${AUTH_TOKEN}" \ + "http://localhost:8000/v1/shape?table=items&offset=-1&where=true" +... +< HTTP/1.1 403 Forbidden +... +``` + +## More information + +See the [Auth guide](https://electric-sql.com/docs/guides/auth). + +If you have any questions about this example please feel free to [ask on Discord](https://discord.electric-sql.com). 
diff --git a/examples/gatekeeper-auth/api/.dockerignore b/examples/gatekeeper-auth/api/.dockerignore new file mode 100644 index 0000000000..61a73933c8 --- /dev/null +++ b/examples/gatekeeper-auth/api/.dockerignore @@ -0,0 +1,45 @@ +# This file excludes paths from the Docker build context. +# +# By default, Docker's build context includes all files (and folders) in the +# current directory. Even if a file isn't copied into the container it is still sent to +# the Docker daemon. +# +# There are multiple reasons to exclude files from the build context: +# +# 1. Prevent nested folders from being copied into the container (ex: exclude +# /assets/node_modules when copying /assets) +# 2. Reduce the size of the build context and improve build time (ex. /build, /deps, /doc) +# 3. Avoid sending files containing sensitive information +# +# More information on using .dockerignore is available here: +# https://docs.docker.com/engine/reference/builder/#dockerignore-file + +.dockerignore + +# Ignore git, but keep git HEAD and refs to access current commit hash if needed: +# +# $ cat .git/HEAD | awk '{print ".git/"$2}' | xargs cat +# d0b8727759e1e0e7aa3d41707d12376e373d5ecc +.git +!.git/HEAD +!.git/refs + +# Common development/test artifacts +/cover/ +/doc/ +/test/ +/tmp/ +.elixir_ls + +# Mix artifacts +/_build/ +/deps/ +*.ez + +# Generated on crash by the VM +erl_crash.dump + +# Static artifacts - These should be fetched and built inside the Docker image +/assets/node_modules/ +/priv/static/assets/ +/priv/static/cache_manifest.json diff --git a/examples/gatekeeper-auth/api/.formatter.exs b/examples/gatekeeper-auth/api/.formatter.exs new file mode 100644 index 0000000000..5971023f6b --- /dev/null +++ b/examples/gatekeeper-auth/api/.formatter.exs @@ -0,0 +1,5 @@ +[ + import_deps: [:ecto, :ecto_sql, :phoenix], + subdirectories: ["priv/*/migrations"], + inputs: ["*.{ex,exs}", "{config,lib,test}/**/*.{ex,exs}", "priv/*/seeds.exs"] +] diff --git 
a/examples/gatekeeper-auth/api/.gitignore b/examples/gatekeeper-auth/api/.gitignore new file mode 100644 index 0000000000..74fcb6e6db --- /dev/null +++ b/examples/gatekeeper-auth/api/.gitignore @@ -0,0 +1,27 @@ +# The directory Mix will write compiled artifacts to. +/_build/ + +# If you run "mix test --cover", coverage assets end up here. +/cover/ + +# The directory Mix downloads your dependencies sources to. +/deps/ + +# Where 3rd-party dependencies like ExDoc output generated docs. +/doc/ + +# Ignore .fetch files in case you like to edit your project deps locally. +/.fetch + +# If the VM crashes, it generates a dump, let's ignore it too. +erl_crash.dump + +# Also ignore archive artifacts (built via "mix archive.build"). +*.ez + +# Temporary files, for example, from tests. +/tmp/ + +# Ignore package tarball (built via "mix hex.build"). +api-*.tar + diff --git a/examples/gatekeeper-auth/api/Dockerfile b/examples/gatekeeper-auth/api/Dockerfile new file mode 100644 index 0000000000..4579074c44 --- /dev/null +++ b/examples/gatekeeper-auth/api/Dockerfile @@ -0,0 +1,92 @@ +# Find eligible builder and runner images on Docker Hub. We use Ubuntu/Debian +# instead of Alpine to avoid DNS resolution issues in production. 
+# +# https://hub.docker.com/r/hexpm/elixir/tags?page=1&name=ubuntu +# https://hub.docker.com/_/ubuntu?tab=tags +# +# This file is based on these images: +# +# - https://hub.docker.com/r/hexpm/elixir/tags - for the build image +# - https://hub.docker.com/_/debian?tab=tags&page=1&name=bullseye-20240904-slim - for the release image +# - https://pkgs.org/ - resource for finding needed packages +# - Ex: hexpm/elixir:1.17.2-erlang-27.0.1-debian-bullseye-20240904-slim +# +ARG ELIXIR_VERSION=1.17.2 +ARG OTP_VERSION=27.0.1 +ARG DEBIAN_VERSION=bullseye-20240904-slim + +ARG BUILDER_IMAGE="hexpm/elixir:${ELIXIR_VERSION}-erlang-${OTP_VERSION}-debian-${DEBIAN_VERSION}" +ARG RUNNER_IMAGE="debian:${DEBIAN_VERSION}" + +FROM ${BUILDER_IMAGE} as builder + +# install build dependencies +RUN apt-get update -y && apt-get install -y build-essential git \ + && apt-get clean && rm -f /var/lib/apt/lists/*_* + +# prepare build dir +WORKDIR /app + +# install hex + rebar +RUN mix local.hex --force && \ + mix local.rebar --force + +# set build ENV +ENV MIX_ENV="prod" + +# install mix dependencies +COPY mix.exs mix.lock ./ +RUN mix deps.get --only $MIX_ENV +RUN mkdir config + +# copy compile-time config files before we compile dependencies +# to ensure any relevant config change will trigger the dependencies +# to be re-compiled. 
+COPY config/config.exs config/${MIX_ENV}.exs config/ +RUN mix deps.compile + +COPY priv priv + +COPY lib lib + +# Compile the release +RUN mix compile + +# Changes to config/runtime.exs don't require recompiling the code +COPY config/runtime.exs config/ + +COPY rel rel +RUN mix release + +# start a new build stage so that the final image will only contain +# the compiled release and other runtime necessities +FROM ${RUNNER_IMAGE} + +RUN apt-get update -y && \ + apt-get install -y libstdc++6 openssl libncurses5 locales ca-certificates \ + && apt-get clean && rm -f /var/lib/apt/lists/*_* + +# Set the locale +RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen + +ENV LANG en_US.UTF-8 +ENV LANGUAGE en_US:en +ENV LC_ALL en_US.UTF-8 + +WORKDIR "/app" +RUN chown nobody /app + +# set runner ENV +ENV MIX_ENV="prod" + +# Only copy the final release from the build stage +COPY --from=builder --chown=nobody:root /app/_build/${MIX_ENV}/rel/api ./ + +USER nobody + +# If using an environment that doesn't automatically reap zombie processes, it is +# advised to add an init process such as tini via `apt-get install` +# above and adding an entrypoint. See https://github.com/krallin/tini for details +# ENTRYPOINT ["/tini", "--"] + +CMD ["sh", "-c", "/app/bin/migrate && /app/bin/server"] diff --git a/examples/gatekeeper-auth/api/README.md b/examples/gatekeeper-auth/api/README.md new file mode 100644 index 0000000000..0c40a1b5d8 --- /dev/null +++ b/examples/gatekeeper-auth/api/README.md @@ -0,0 +1,34 @@ + +# API gatekeeper (and proxy) application + +This is a [Phoenix](https://www.phoenixframework.org) web application written in [Elixir](https://elixir-lang.org). + +See the [Implementation](../README.md#implementation) and [How to run](../README.md#how-to-run) sections of the README in the root folder of this example for more context about the application and instructions on how to run it using Docker Compose. 
+ +## Understanding the code + +Take a look at [`./lib/api_web/router.ex`](./lib/api_web/router.ex) to see what's exposed and read through the [`./lib/api_web/plugs`](./lib/api_web/plugs) and [`./lib/api_web/authenticator.ex`](./lib/api_web/authenticator.ex) to see how auth is implemented and could be extended. + +The gatekeeper endpoint is based on an [`Electric.Phoenix.Gateway.Plug`](https://hexdocs.pm/electric_phoenix/Electric.Phoenix.Gateway.Plug.html). + +## Run/develop locally without Docker + +See the [Phoenix Installation](https://hexdocs.pm/phoenix/installation.html) page for pre-reqs. + +Install and setup the dependencies: + +```shell +mix setup +``` + +Run the tests: + +```shell +mix test +``` + +Run locally: + +```shell +mix phx.server +``` diff --git a/examples/gatekeeper-auth/api/config/config.exs b/examples/gatekeeper-auth/api/config/config.exs new file mode 100644 index 0000000000..549f98014e --- /dev/null +++ b/examples/gatekeeper-auth/api/config/config.exs @@ -0,0 +1,23 @@ +import Config + +config :api, + ecto_repos: [Api.Repo], + generators: [timestamp_type: :utc_datetime, binary_id: true] + +config :api, ApiWeb.Endpoint, + url: [host: "localhost"], + adapter: Bandit.PhoenixAdapter, + render_errors: [ + formats: [json: ApiWeb.ErrorJSON], + layout: false + ] + +config :logger, :console, + format: "$time $metadata[$level] $message\n", + metadata: [:request_id] + +config :phoenix, :json_library, Jason + +# Import environment specific config. This must remain at the bottom +# of this file so it overrides the configuration defined above. 
+import_config "#{config_env()}.exs" diff --git a/examples/gatekeeper-auth/api/config/dev.exs b/examples/gatekeeper-auth/api/config/dev.exs new file mode 100644 index 0000000000..782898a6fd --- /dev/null +++ b/examples/gatekeeper-auth/api/config/dev.exs @@ -0,0 +1,42 @@ +import Config + +config :api, + auth_secret: "NFL5*0Bc#9U6E@tnmC&E7SUN6GwHfLmY", + # Configure the proxy endpoint to route shape requests to the external Electric + # sync service, which we assume in development is running on `localhost:3000`. + electric_url: "http://localhost:3000" + +# Configure your database +config :api, Api.Repo, + username: "postgres", + password: "password", + hostname: "localhost", + port: 54321, + database: "electric", + stacktrace: true, + show_sensitive_data_on_connection_error: true, + pool_size: 10 + +port = 4000 + +config :api, ApiWeb.Endpoint, + # Binding to loopback ipv4 address prevents access from other machines. + # Change to `ip: {0, 0, 0, 0}` to allow access from other machines. + http: [ip: {127, 0, 0, 1}, port: port], + check_origin: false, + debug_errors: true, + secret_key_base: "pVvBh/U565dk0DteMtnoCjwLcoZnMDU9QeQNVr0gvVtYUrF8KqoJeyn5YJ0EQudX" + +# Configure the Electric.Phoenix.Gateway.Plug to route electric client requests +# via this application's `GET /proxy/v1/shape` endpoint. +config :electric_phoenix, electric_url: "http://localhost:#{port}/proxy" + +# Do not include metadata nor timestamps in development logs +config :logger, :console, format: "[$level] $message\n" + +# Set a higher stacktrace during development. Avoid configuring such +# in production as building large stacktraces may be expensive. 
+config :phoenix, :stacktrace_depth, 20 + +# Initialize plugs at runtime for faster development compilation +config :phoenix, :plug_init_mode, :runtime diff --git a/examples/gatekeeper-auth/api/config/prod.exs b/examples/gatekeeper-auth/api/config/prod.exs new file mode 100644 index 0000000000..1fe2d9e854 --- /dev/null +++ b/examples/gatekeeper-auth/api/config/prod.exs @@ -0,0 +1,7 @@ +import Config + +# Do not print debug messages in production +config :logger, level: :info + +# Runtime production configuration, including reading +# of environment variables, is done on config/runtime.exs. diff --git a/examples/gatekeeper-auth/api/config/runtime.exs b/examples/gatekeeper-auth/api/config/runtime.exs new file mode 100644 index 0000000000..b62a145588 --- /dev/null +++ b/examples/gatekeeper-auth/api/config/runtime.exs @@ -0,0 +1,73 @@ +import Config + +if System.get_env("PHX_SERVER") do + config :api, ApiWeb.Endpoint, server: true +end + +if config_env() == :prod do + auth_secret = + System.get_env("AUTH_SECRET") || + raise """ + environment variable AUTH_SECRET is missing. + It should be a long random string. + """ + + electric_url = + System.get_env("ELECTRIC_URL") || + raise """ + environment variable ELECTRIC_URL is missing. + For example: https://my-electric.example.com + """ + + # Configure the proxy endpoint to route shape requests to the external + # Electric sync service. + config :api, + auth_secret: auth_secret, + electric_url: electric_url + + database_url = + System.get_env("DATABASE_URL") || + raise """ + environment variable DATABASE_URL is missing. 
For example: ecto://USER:PASS@HOST/DATABASE
+      """
+
+  maybe_ipv6 = if System.get_env("ECTO_IPV6") in ~w(true 1), do: [:inet6], else: []
+
+  config :api, Api.Repo,
+    # ssl: true,
+    url: database_url,
+    pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
+    socket_options: maybe_ipv6
+
+  secret_key_base =
+    System.get_env("SECRET_KEY_BASE") ||
+      raise """
+      environment variable SECRET_KEY_BASE is missing.
+      You can generate one by calling: mix phx.gen.secret
+      """
+
+  host = System.get_env("PHX_HOST") || "example.com"
+  port = System.get_env("PHX_PORT") || 443
+  scheme = System.get_env("PHX_SCHEME") || "https"
+
+  config :api, ApiWeb.Endpoint,
+    url: [host: host, port: port, scheme: scheme],
+    http: [
+      # Enable IPv6 and bind on all interfaces.
+      # Set it to {0, 0, 0, 0, 0, 0, 0, 1} for local network only access.
+      # See the documentation on https://hexdocs.pm/bandit/Bandit.html#t:options/0
+      # for details about using IPv6 vs IPv4 and loopback vs public addresses.
+      ip: {0, 0, 0, 0, 0, 0, 0, 0},
+      port: port
+    ],
+    secret_key_base: secret_key_base
+
+  # Configure the URL that the Electric.Phoenix.Gateway.Plug uses when returning
+  # shape config to the client. Defaults to this API, specifically the `/proxy`
+  # endpoint configured in `../lib/api_web/router.ex`.
+  default_proxy_url = URI.parse("https://#{host}:#{port}/proxy") |> URI.to_string()
+  proxy_url = System.get_env("ELECTRIC_PROXY_URL") || default_proxy_url
+
+  config :electric_phoenix, electric_url: proxy_url
+end
diff --git a/examples/gatekeeper-auth/api/config/test.exs b/examples/gatekeeper-auth/api/config/test.exs
new file mode 100644
index 0000000000..4b538ac2eb
--- /dev/null
+++ b/examples/gatekeeper-auth/api/config/test.exs
@@ -0,0 +1,36 @@
+import Config
+
+# Configure the proxy endpoint to route shape requests to the external Electric
+# sync service, which we assume in test is running on `localhost:3000`.
+config :api, + auth_secret: "NFL5*0Bc#9U6E@tnmC&E7SUN6GwHfLmY", + electric_url: "http://localhost:3000" + +# Configure your database +config :api, Api.Repo, + username: "postgres", + password: "password", + hostname: "localhost", + port: 54321, + database: "electric", + pool: Ecto.Adapters.SQL.Sandbox, + pool_size: System.schedulers_online() * 2 + +port = 4002 + +# We don't run a server during test. If one is required, +# you can enable the server option below. +config :api, ApiWeb.Endpoint, + http: [ip: {127, 0, 0, 1}, port: port], + secret_key_base: "FdsTo+z4sPEhsQNsUtBq26K9qn42nkn1OCH2cLURBZkPCvgJ4F3WiVNFo1NVjojw", + server: false + +# Configure the Electric.Phoenix.Gateway.Plug to route electric client requests +# via this application's `GET /proxy/v1/shape` endpoint. +config :electric_phoenix, electric_url: "http://localhost:#{port}/proxy" + +# Print only warnings and errors during test +config :logger, level: :warning + +# Initialize plugs at runtime for faster test compilation +config :phoenix, :plug_init_mode, :runtime diff --git a/examples/gatekeeper-auth/api/lib/api.ex b/examples/gatekeeper-auth/api/lib/api.ex new file mode 100644 index 0000000000..64618a0e65 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api.ex @@ -0,0 +1,5 @@ +defmodule Api do + @moduledoc """ + Api keeps the contexts that define your domain and business logic. 
+ """ +end diff --git a/examples/gatekeeper-auth/api/lib/api/application.ex b/examples/gatekeeper-auth/api/lib/api/application.ex new file mode 100644 index 0000000000..4fb2dfa194 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api/application.ex @@ -0,0 +1,14 @@ +defmodule Api.Application do + use Application + + @impl true + def start(_type, _args) do + children = [ + Api.Repo, + ApiWeb.Endpoint + ] + + opts = [strategy: :one_for_one, name: Api.Supervisor] + Supervisor.start_link(children, opts) + end +end diff --git a/examples/gatekeeper-auth/api/lib/api/item.ex b/examples/gatekeeper-auth/api/lib/api/item.ex new file mode 100644 index 0000000000..4539e1569e --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api/item.ex @@ -0,0 +1,16 @@ +defmodule Api.Item do + use Ecto.Schema + import Ecto.Changeset + + @primary_key {:id, :binary_id, autogenerate: true} + @foreign_key_type :binary_id + schema "items" do + field :value, :string + end + + @doc false + def changeset(item, attrs) do + item + |> cast(attrs, [:value]) + end +end diff --git a/examples/gatekeeper-auth/api/lib/api/release.ex b/examples/gatekeeper-auth/api/lib/api/release.ex new file mode 100644 index 0000000000..2da6c27e8a --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api/release.ex @@ -0,0 +1,28 @@ +defmodule Api.Release do + @moduledoc """ + Used for executing DB release tasks when run in production without Mix + installed. 
+ """ + @app :api + + def migrate do + load_app() + + for repo <- repos() do + {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true)) + end + end + + def rollback(repo, version) do + load_app() + {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version)) + end + + defp repos do + Application.fetch_env!(@app, :ecto_repos) + end + + defp load_app do + Application.load(@app) + end +end diff --git a/examples/gatekeeper-auth/api/lib/api/repo.ex b/examples/gatekeeper-auth/api/lib/api/repo.ex new file mode 100644 index 0000000000..87ced4ffe7 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api/repo.ex @@ -0,0 +1,5 @@ +defmodule Api.Repo do + use Ecto.Repo, + otp_app: :api, + adapter: Ecto.Adapters.Postgres +end diff --git a/examples/gatekeeper-auth/api/lib/api/shape.ex b/examples/gatekeeper-auth/api/lib/api/shape.ex new file mode 100644 index 0000000000..414fe6d87b --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api/shape.ex @@ -0,0 +1,38 @@ +defmodule Api.Shape do + require Protocol + + alias Electric.Client.ShapeDefinition + + @public_fields [:namespace, :table, :where, :columns] + + Protocol.derive(Jason.Encoder, ShapeDefinition, only: @public_fields) + + # Compare the `shape` derived from the request params with the shape params + # signed in the auth token. Does the auth token grant access to this shape? + def matches(%ShapeDefinition{} = request_shape, %ShapeDefinition{} = token_shape) do + with ^token_shape <- request_shape do + true + else + _alt -> + false + end + end + + # Generate a `%ShapeDefinition{}` from a string keyed Map of `params`. 
+ def from(params) do + with {table, other} when not is_nil(table) <- Map.pop(params, "table"), + options <- Enum.reduce(other, [], &put/2) do + ShapeDefinition.new(table, options) + end + end + + defp put({k, v}, opts) when is_binary(k) do + put({String.to_existing_atom(k), v}, opts) + end + + defp put({k, v}, opts) when k in @public_fields do + Keyword.put(opts, k, v) + end + + defp put(_, opts), do: opts +end diff --git a/examples/gatekeeper-auth/api/lib/api/token.ex b/examples/gatekeeper-auth/api/lib/api/token.ex new file mode 100644 index 0000000000..c0d636f56a --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api/token.ex @@ -0,0 +1,47 @@ +defmodule Api.Token do + @moduledoc """ + Generate and validate JWT Tokens. + """ + alias Api.Shape + alias Electric.Client.ShapeDefinition + + defmodule JWT do + use Joken.Config + + def signer do + secret = Application.fetch_env!(:api, :auth_secret) + + Joken.Signer.create("HS256", secret) + end + end + + def generate(%ShapeDefinition{} = shape) do + {:ok, token, _claims} = JWT.generate_and_sign(%{"shape" => shape}, JWT.signer()) + + token + end + + def verify(%ShapeDefinition{} = request_shape, token) do + with {:ok, shape_claim} <- validate(token) do + matches(request_shape, shape_claim) + end + end + + defp validate(token) do + with {:ok, %{"shape" => shape_claim}} <- JWT.verify_and_validate(token, JWT.signer()) do + {:ok, shape_claim} + else + _alt -> + {:error, :invalid} + end + end + + defp matches(%ShapeDefinition{} = request_shape, %{} = shape_claim) do + with {:ok, token_shape} <- Shape.from(shape_claim) do + Shape.matches(request_shape, token_shape) + else + _alt -> + false + end + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web.ex b/examples/gatekeeper-auth/api/lib/api_web.ex new file mode 100644 index 0000000000..5ceafe1421 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web.ex @@ -0,0 +1,45 @@ +defmodule ApiWeb do + @moduledoc """ + The entrypoint for defining your web interface, 
such + as controllers, components, channels, and so on. + + This can be used in your application as: + + use ApiWeb, :controller + """ + + def controller do + quote do + use Phoenix.Controller, formats: [:json] + + import Plug.Conn + end + end + + def plug do + quote do + import Plug.Conn + end + end + + def router do + quote do + use Phoenix.Router, helpers: false + + import Plug.Conn + import Phoenix.Controller + + alias ApiWeb.Authenticator + + alias ApiWeb.Plugs.AssignShape + alias ApiWeb.Plugs.Auth + end + end + + @doc """ + When used, dispatch to the appropriate controller/live_view/etc. + """ + defmacro __using__(which) when is_atom(which) do + apply(__MODULE__, which, []) + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/authenticator.ex b/examples/gatekeeper-auth/api/lib/api_web/authenticator.ex new file mode 100644 index 0000000000..a0dacc1ecd --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/authenticator.ex @@ -0,0 +1,54 @@ +defmodule ApiWeb.Authenticator do + @moduledoc """ + `Electric.Client.Authenticator` implementation that generates + and validates tokens. + """ + alias Api.Token + alias Electric.Client + + @behaviour Client.Authenticator + @header_name "Authorization" + + def authenticate_shape(shape, _config) do + %{@header_name => "Bearer #{Token.generate(shape)}"} + end + + def authenticate_request(request, _config) do + request + end + + def authorise(shape, request_headers) do + header_map = Enum.into(request_headers, %{}) + header_key = String.downcase(@header_name) + + with {:ok, "Bearer " <> token} <- Map.fetch(header_map, header_key) do + Token.verify(shape, token) + else + _alt -> + {:error, :missing} + end + end + + # Provides an `Electric.Client` that uses our `Authenticator` + # implementation to generate signed auth tokens. 
+ # + # This is configured in `./router.ex` to work with the + # `Electric.Phoenix.Gateway.Plug`: + # + # post "/:table", Electric.Phoenix.Gateway.Plug, client: &Authenticator.client/0 + # + # Because `client/0` returns a client that's configured to use our + # `ApiWeb.Authenticator`, then `ApiWeb.Authenticator.authenticate_shape/2` + # will be called to generate an auth header that's included in the + # response data that the Gateway.Plug returns to the client. + # + # I.e.: we basically tie into the `Gateway.Plug` machinery to use our + # `Authenticator` to generate and return a signed token to the client. + def client do + base_url = Application.fetch_env!(:electric_phoenix, :electric_url) + + {:ok, client} = Client.new(base_url: base_url, authenticator: {__MODULE__, []}) + + client + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/controllers/error_json.ex b/examples/gatekeeper-auth/api/lib/api_web/controllers/error_json.ex new file mode 100644 index 0000000000..dcd97a7a5b --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/controllers/error_json.ex @@ -0,0 +1,9 @@ +defmodule ApiWeb.ErrorJSON do + @moduledoc """ + This module is invoked by your endpoint in case of errors on JSON requests. + """ + + def render(template, _assigns) do + %{errors: %{detail: Phoenix.Controller.status_message_from_template(template)}} + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/controllers/proxy_controller.ex b/examples/gatekeeper-auth/api/lib/api_web/controllers/proxy_controller.ex new file mode 100644 index 0000000000..b6ddfc2c5c --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/controllers/proxy_controller.ex @@ -0,0 +1,38 @@ +defmodule ApiWeb.ProxyController do + use ApiWeb, :controller + + # Handles `GET /proxy/v1/shape` by proxying the request to Electric. + # Uses the `Req` HTTP client to stream the body through. 
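+  #
+  # Note that passing `into: :self` to `Req.get!/2` below streams the
+  # response to this process instead of buffering the whole body, so we
+  # can chunk it through to the client as it arrives.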
+ def show(conn, _params) do + %{status: status, headers: headers, body: stream} = proxy_request(conn) + + conn + |> clone_headers(headers) + |> stream_response(status, stream) + end + + defp proxy_request(%{req_headers: headers} = conn) do + conn + |> build_url() + |> Req.get!(headers: headers, into: :self) + end + + defp build_url(%{path_info: [_prefix | segments], query_string: query}) do + electric_url = Application.fetch_env!(:api, :electric_url) + + "#{electric_url}/#{Path.join(segments)}?#{query}" + end + + defp clone_headers(conn, headers) do + headers + |> Enum.reduce(conn, fn {k, [v]}, acc -> put_resp_header(acc, k, v) end) + end + + defp stream_response(conn, status, body) do + conn = send_chunked(conn, status) + + Enum.reduce(body, conn, fn chunk, conn -> + with {:ok, conn} <- chunk(conn, chunk), do: conn + end) + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/endpoint.ex b/examples/gatekeeper-auth/api/lib/api_web/endpoint.ex new file mode 100644 index 0000000000..557926a2ff --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/endpoint.ex @@ -0,0 +1,10 @@ +defmodule ApiWeb.Endpoint do + use Phoenix.Endpoint, otp_app: :api + + plug Plug.Parsers, + parsers: [:json], + pass: ["*/*"], + json_decoder: Phoenix.json_library() + + plug ApiWeb.Router +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/plugs/assign_shape.ex b/examples/gatekeeper-auth/api/lib/api_web/plugs/assign_shape.ex new file mode 100644 index 0000000000..dface8fc45 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/plugs/assign_shape.ex @@ -0,0 +1,40 @@ +defmodule ApiWeb.Plugs.AssignShape do + @moduledoc """ + This plug builds a shape definition from the request parameters and + assigns it to the conn. + """ + use ApiWeb, :plug + + alias Api.Shape + + def init(opts), do: opts + + # If you pass `table_from_path: true` as an option, then it reads the + # tablename from the path. 
This is useful for using hardcoded paths to + # specific shapes with `Gateway.Plug`, e.g.; + # + # post "/items", Gateway.Plug, shape: Electric.Client.shape!("items") + # + def call(%{params: params} = conn, [{:table_from_path, true} | opts]) do + table_name = Enum.at(conn.path_info, -1) + + params = Map.put(params, "table", table_name) + + conn + |> Map.put(:params, params) + |> call(opts) + end + + def call(%{params: params} = conn, _opts) do + case Shape.from(params) do + {:ok, shape} -> + conn + |> assign(:shape, shape) + + _alt -> + conn + |> send_resp(400, "Invalid") + |> halt() + end + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/authenticate_user.ex b/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/authenticate_user.ex new file mode 100644 index 0000000000..6b3be189a9 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/authenticate_user.ex @@ -0,0 +1,16 @@ +defmodule ApiWeb.Plugs.Auth.AuthenticateUser do + @moduledoc """ + This plug is a no-op that just assigns a dummy user. + + In a real application, you would use this step to authenticate the user + based on some credentials and assign the real user to the conn. + """ + use ApiWeb, :plug + + def init(opts), do: opts + + def call(conn, _opts) do + conn + |> assign(:current_user, :dummy) + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/authorise_shape_access.ex b/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/authorise_shape_access.ex new file mode 100644 index 0000000000..c42e0d441c --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/authorise_shape_access.ex @@ -0,0 +1,30 @@ +defmodule ApiWeb.Plugs.Auth.AuthoriseShapeAccess do + @moduledoc """ + This plug allows the dummy user to access any shape. + + In a real application, you would use this step to validate that the + `user` has the right to access the `shape`. 
For example, you could + perform a database lookup or call out to an external auth service. + """ + use ApiWeb, :plug + + def init(opts), do: opts + + def call(%{assigns: %{current_user: user, shape: shape}} = conn, _opts) do + case is_authorised(user, shape) do + true -> + conn + + false -> + conn + |> send_resp(403, "Forbidden") + |> halt() + end + end + + defp is_authorised(:dummy, _) do + true + end + + defp is_authorised(_, _), do: false +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/verify_token.ex b/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/verify_token.ex new file mode 100644 index 0000000000..2760efc005 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/plugs/auth/verify_token.ex @@ -0,0 +1,33 @@ +defmodule ApiWeb.Plugs.Auth.VerifyToken do + @moduledoc """ + Verify that the auth token in the Authorization header matches the shape. + + We do this by comparing the shape defined in the request query params with + the shape signed into the auth token claims. + + So you can't proxy a shape request without having a signed token for + that exact shape definition. 
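+
+  For example, a token whose shape claim was signed for
+  `%{"table" => "items"}` cannot be used to request a shape on a
+  different table, or with a different where clause, because the two
+  shape definitions are compared for exact equality.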
+ """ + use ApiWeb, :plug + + alias ApiWeb.Authenticator + + def init(opts), do: opts + + def call(%{assigns: %{shape: shape}, req_headers: headers} = conn, _opts) do + case Authenticator.authorise(shape, headers) do + {:error, message} when message in [:invalid, :missing] -> + conn + |> send_resp(401, "Unauthorized") + |> halt() + + false -> + conn + |> send_resp(403, "Forbidden") + |> halt() + + true -> + conn + end + end +end diff --git a/examples/gatekeeper-auth/api/lib/api_web/router.ex b/examples/gatekeeper-auth/api/lib/api_web/router.ex new file mode 100644 index 0000000000..7aecf1a3f0 --- /dev/null +++ b/examples/gatekeeper-auth/api/lib/api_web/router.ex @@ -0,0 +1,45 @@ +defmodule ApiWeb.Router do + use ApiWeb, :router + + pipeline :api do + plug :accepts, ["json"] + end + + pipeline :gatekeeper do + plug AssignShape + + plug Auth.AuthenticateUser + plug Auth.AuthoriseShapeAccess + end + + pipeline :proxy do + plug AssignShape + + plug Auth.VerifyToken + end + + scope "/" do + pipe_through :api + + # The gatekeeper endpoint at `POST /gatekeeper/:table` authenticates the user, + # authorises the shape access, generates a shape-scoped auth token and returns + # this along with other config that an Electric client can use to stream the + # shape directly from Electric. + scope "/gatekeeper" do + pipe_through :gatekeeper + + post "/:table", Electric.Phoenix.Gateway.Plug, client: &Authenticator.client/0 + end + + # The proxy endpoint at `GET /proxy/v1/shape` proxies the request to an + # upstream Electric service. + # + # Access to this endpoint is protected by the `:proxy` pipeline, which verifies + # a shape signed token, generated by the gatekeeper endpoint above. 
+ scope "/proxy", ApiWeb do + pipe_through :proxy + + get "/v1/shape", ProxyController, :show + end + end +end diff --git a/examples/gatekeeper-auth/api/mix.exs b/examples/gatekeeper-auth/api/mix.exs new file mode 100644 index 0000000000..3ad6a18e8e --- /dev/null +++ b/examples/gatekeeper-auth/api/mix.exs @@ -0,0 +1,47 @@ +defmodule Api.MixProject do + use Mix.Project + + def project do + [ + app: :api, + version: "0.1.0", + elixir: "~> 1.14", + elixirc_paths: elixirc_paths(Mix.env()), + start_permanent: Mix.env() == :prod, + aliases: aliases(), + deps: deps() + ] + end + + def application do + [ + mod: {Api.Application, []}, + extra_applications: [:logger, :runtime_tools] + ] + end + + defp elixirc_paths(:test), do: ["lib", "test/support"] + defp elixirc_paths(_), do: ["lib"] + + defp deps do + [ + {:bandit, "~> 1.5"}, + {:electric_phoenix, "~> 0.1.2"}, + {:ecto_sql, "~> 3.10"}, + {:jason, "~> 1.4"}, + {:joken, "~> 2.6"}, + {:phoenix, "~> 1.7.14"}, + {:phoenix_ecto, "~> 4.5"}, + {:postgrex, ">= 0.0.0"} + ] + end + + defp aliases do + [ + setup: ["deps.get", "ecto.setup"], + "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"], + "ecto.reset": ["ecto.drop", "ecto.setup"], + test: ["ecto.create --quiet", "ecto.migrate --quiet", "test"] + ] + end +end diff --git a/examples/gatekeeper-auth/api/mix.lock b/examples/gatekeeper-auth/api/mix.lock new file mode 100644 index 0000000000..9cc68e5760 --- /dev/null +++ b/examples/gatekeeper-auth/api/mix.lock @@ -0,0 +1,36 @@ +%{ + "bandit": {:hex, :bandit, "1.5.7", "6856b1e1df4f2b0cb3df1377eab7891bec2da6a7fd69dc78594ad3e152363a50", [:mix], [{:hpax, "~> 1.0.0", [hex: :hpax, repo: "hexpm", optional: false]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}, {:thousand_island, "~> 1.0", [hex: :thousand_island, repo: "hexpm", optional: false]}, {:websock, "~> 0.5", [hex: :websock, repo: "hexpm", optional: 
false]}], "hexpm", "f2dd92ae87d2cbea2fa9aa1652db157b6cba6c405cb44d4f6dd87abba41371cd"}, + "castore": {:hex, :castore, "1.0.9", "5cc77474afadf02c7c017823f460a17daa7908e991b0cc917febc90e466a375c", [:mix], [], "hexpm", "5ea956504f1ba6f2b4eb707061d8e17870de2bee95fb59d512872c2ef06925e7"}, + "db_connection": {:hex, :db_connection, "2.7.0", "b99faa9291bb09892c7da373bb82cba59aefa9b36300f6145c5f201c7adf48ec", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "dcf08f31b2701f857dfc787fbad78223d61a32204f217f15e881dd93e4bdd3ff"}, + "decimal": {:hex, :decimal, "2.1.1", "5611dca5d4b2c3dd497dec8f68751f1f1a54755e8ed2a966c2633cf885973ad6", [:mix], [], "hexpm", "53cfe5f497ed0e7771ae1a475575603d77425099ba5faef9394932b35020ffcc"}, + "dns_cluster": {:hex, :dns_cluster, "0.1.3", "0bc20a2c88ed6cc494f2964075c359f8c2d00e1bf25518a6a6c7fd277c9b0c66", [:mix], [], "hexpm", "46cb7c4a1b3e52c7ad4cbe33ca5079fbde4840dedeafca2baf77996c2da1bc33"}, + "ecto": {:hex, :ecto, "3.12.4", "267c94d9f2969e6acc4dd5e3e3af5b05cdae89a4d549925f3008b2b7eb0b93c3", [:mix], [{:decimal, "~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "ef04e4101688a67d061e1b10d7bc1fbf00d1d13c17eef08b71d070ff9188f747"}, + "ecto_sql": {:hex, :ecto_sql, "3.12.1", "c0d0d60e85d9ff4631f12bafa454bc392ce8b9ec83531a412c12a0d415a3a4d0", [:mix], [{:db_connection, "~> 2.4.1 or ~> 2.5", [hex: :db_connection, repo: "hexpm", optional: false]}, {:ecto, "~> 3.12", [hex: :ecto, repo: "hexpm", optional: false]}, {:myxql, "~> 0.7", [hex: :myxql, repo: "hexpm", optional: true]}, {:postgrex, "~> 0.19 or ~> 1.0", [hex: :postgrex, repo: "hexpm", optional: true]}, {:tds, "~> 2.1.1 or ~> 2.2", [hex: :tds, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", 
"aff5b958a899762c5f09028c847569f7dfb9cc9d63bdb8133bff8a5546de6bf5"}, + "electric_client": {:hex, :electric_client, "0.1.2", "1b4b2c3f53a44adaf98a648da21569325338a123ec8f00b7d26c6e3c3583ac94", [:mix], [{:ecto_sql, "~> 3.12", [hex: :ecto_sql, repo: "hexpm", optional: true]}, {:gen_stage, "~> 1.2", [hex: :gen_stage, repo: "hexpm", optional: true]}, {:jason, "~> 1.4", [hex: :jason, repo: "hexpm", optional: false]}, {:nimble_options, "~> 1.1", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:req, "~> 0.5", [hex: :req, repo: "hexpm", optional: false]}], "hexpm", "fde191b8ce7c70c44ef12a821210699222ceba1951f73c3c4e7cff8c1d5c0294"}, + "electric_phoenix": {:hex, :electric_phoenix, "0.1.2", "a6228e95e6fa03591307dc34514ba9baf365878efbbfbd78d0312b3b6898bb06", [:mix], [{:ecto_sql, "~> 3.10", [hex: :ecto_sql, repo: "hexpm", optional: true]}, {:electric_client, "~> 0.1.2", [hex: :electric_client, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:nimble_options, "~> 1.1", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:phoenix_live_view, "~> 0.20", [hex: :phoenix_live_view, repo: "hexpm", optional: false]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "5d88ae053e8035335e6c6d779cc29c3228b71c02d4268e7e959cffff63debd04"}, + "finch": {:hex, :finch, "0.19.0", "c644641491ea854fc5c1bbaef36bfc764e3f08e7185e1f084e35e0672241b76d", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.6.2 or ~> 1.7", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 1.1", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "fc5324ce209125d1e2fa0fcd2634601c52a787aff1cd33ee833664a5af4ea2b6"}, + "hpax": {:hex, :hpax, "1.0.0", 
"28dcf54509fe2152a3d040e4e3df5b265dcb6cb532029ecbacf4ce52caea3fd2", [:mix], [], "hexpm", "7f1314731d711e2ca5fdc7fd361296593fc2542570b3105595bb0bc6d0fad601"}, + "jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"}, + "joken": {:hex, :joken, "2.6.2", "5daaf82259ca603af4f0b065475099ada1b2b849ff140ccd37f4b6828ca6892a", [:mix], [{:jose, "~> 1.11.10", [hex: :jose, repo: "hexpm", optional: false]}], "hexpm", "5134b5b0a6e37494e46dbf9e4dad53808e5e787904b7c73972651b51cce3d72b"}, + "jose": {:hex, :jose, "1.11.10", "a903f5227417bd2a08c8a00a0cbcc458118be84480955e8d251297a425723f83", [:mix, :rebar3], [], "hexpm", "0d6cd36ff8ba174db29148fc112b5842186b68a90ce9fc2b3ec3afe76593e614"}, + "mime": {:hex, :mime, "2.0.6", "8f18486773d9b15f95f4f4f1e39b710045fa1de891fada4516559967276e4dc2", [:mix], [], "hexpm", "c9945363a6b26d747389aac3643f8e0e09d30499a138ad64fe8fd1d13d9b153e"}, + "mint": {:hex, :mint, "1.6.2", "af6d97a4051eee4f05b5500671d47c3a67dac7386045d87a904126fd4bbcea2e", [:mix], [{:castore, "~> 0.1.0 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:hpax, "~> 0.1.1 or ~> 0.2.0 or ~> 1.0", [hex: :hpax, repo: "hexpm", optional: false]}], "hexpm", "5ee441dffc1892f1ae59127f74afe8fd82fda6587794278d924e4d90ea3d63f9"}, + "nimble_options": {:hex, :nimble_options, "1.1.1", "e3a492d54d85fc3fd7c5baf411d9d2852922f66e69476317787a7b2bb000a61b", [:mix], [], "hexpm", "821b2470ca9442c4b6984882fe9bb0389371b8ddec4d45a9504f00a66f650b44"}, + "nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"}, + "phoenix": {:hex, :phoenix, "1.7.14", "a7d0b3f1bc95987044ddada111e77bd7f75646a08518942c72a8440278ae7825", [:mix], [{:castore, ">= 0.0.0", 
[hex: :castore, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:phoenix_pubsub, "~> 2.1", [hex: :phoenix_pubsub, repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}, {:plug_cowboy, "~> 2.7", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:plug_crypto, "~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}, {:websock_adapter, "~> 0.5.3", [hex: :websock_adapter, repo: "hexpm", optional: false]}], "hexpm", "c7859bc56cc5dfef19ecfc240775dae358cbaa530231118a9e014df392ace61a"}, + "phoenix_ecto": {:hex, :phoenix_ecto, "4.6.3", "f686701b0499a07f2e3b122d84d52ff8a31f5def386e03706c916f6feddf69ef", [:mix], [{:ecto, "~> 3.5", [hex: :ecto, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.1", [hex: :phoenix_html, repo: "hexpm", optional: true]}, {:plug, "~> 1.9", [hex: :plug, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.16 or ~> 1.0", [hex: :postgrex, repo: "hexpm", optional: true]}], "hexpm", "909502956916a657a197f94cc1206d9a65247538de8a5e186f7537c895d95764"}, + "phoenix_html": {:hex, :phoenix_html, "4.1.1", "4c064fd3873d12ebb1388425a8f2a19348cef56e7289e1998e2d2fa758aa982e", [:mix], [], "hexpm", "f2f2df5a72bc9a2f510b21497fd7d2b86d932ec0598f0210fed4114adc546c6f"}, + "phoenix_live_view": {:hex, :phoenix_live_view, "0.20.17", "f396bbdaf4ba227b82251eb75ac0afa6b3da5e509bc0d030206374237dfc9450", [:mix], [{:floki, "~> 0.36", [hex: :floki, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:phoenix, "~> 1.6.15 or ~> 1.7.0", [hex: :phoenix, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 3.3 or ~> 4.0", [hex: :phoenix_html, 
repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:plug, "~> 1.15", [hex: :plug, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.2 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "a61d741ffb78c85fdbca0de084da6a48f8ceb5261a79165b5a0b59e5f65ce98b"}, + "phoenix_pubsub": {:hex, :phoenix_pubsub, "2.1.3", "3168d78ba41835aecad272d5e8cd51aa87a7ac9eb836eabc42f6e57538e3731d", [:mix], [], "hexpm", "bba06bc1dcfd8cb086759f0edc94a8ba2bc8896d5331a1e2c2902bf8e36ee502"}, + "phoenix_template": {:hex, :phoenix_template, "1.0.4", "e2092c132f3b5e5b2d49c96695342eb36d0ed514c5b252a77048d5969330d639", [:mix], [{:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: true]}], "hexpm", "2c0c81f0e5c6753faf5cca2f229c9709919aba34fab866d3bc05060c9c444206"}, + "plug": {:hex, :plug, "1.16.1", "40c74619c12f82736d2214557dedec2e9762029b2438d6d175c5074c933edc9d", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "a13ff6b9006b03d7e33874945b2755253841b238c34071ed85b0e86057f8cddc"}, + "plug_crypto": {:hex, :plug_crypto, "2.1.0", "f44309c2b06d249c27c8d3f65cfe08158ade08418cf540fd4f72d4d6863abb7b", [:mix], [], "hexpm", "131216a4b030b8f8ce0f26038bc4421ae60e4bb95c5cf5395e1421437824c4fa"}, + "postgrex": {:hex, :postgrex, "0.19.2", "34d6884a332c7bf1e367fc8b9a849d23b43f7da5c6e263def92784d03f9da468", [:mix], [{:db_connection, "~> 2.1", [hex: :db_connection, repo: "hexpm", optional: false]}, {:decimal, "~> 1.5 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:table, "~> 0.1.0", [hex: 
:table, repo: "hexpm", optional: true]}], "hexpm", "618988886ab7ae8561ebed9a3c7469034bf6a88b8995785a3378746a4b9835ec"}, + "req": {:hex, :req, "0.5.7", "b722680e03d531a2947282adff474362a48a02aa54b131196fbf7acaff5e4cee", [:mix], [{:brotli, "~> 0.3.1", [hex: :brotli, repo: "hexpm", optional: true]}, {:ezstd, "~> 1.0", [hex: :ezstd, repo: "hexpm", optional: true]}, {:finch, "~> 0.17", [hex: :finch, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:mime, "~> 2.0.6 or ~> 2.1", [hex: :mime, repo: "hexpm", optional: false]}, {:nimble_csv, "~> 1.0", [hex: :nimble_csv, repo: "hexpm", optional: true]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: true]}], "hexpm", "c6035374615120a8923e8089d0c21a3496cf9eda2d287b806081b8f323ceee29"}, + "telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"}, + "telemetry_metrics": {:hex, :telemetry_metrics, "1.0.0", "29f5f84991ca98b8eb02fc208b2e6de7c95f8bb2294ef244a176675adc7775df", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "f23713b3847286a534e005126d4c959ebcca68ae9582118ce436b521d1d47d5d"}, + "telemetry_poller": {:hex, :telemetry_poller, "1.1.0", "58fa7c216257291caaf8d05678c8d01bd45f4bdbc1286838a28c4bb62ef32999", [:rebar3], [{:telemetry, "~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "9eb9d9cbfd81cbd7cdd24682f8711b6e2b691289a0de6826e58452f28c103c8f"}, + "thousand_island": {:hex, :thousand_island, "1.3.5", "6022b6338f1635b3d32406ff98d68b843ba73b3aa95cfc27154223244f3a6ca5", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "2be6954916fdfe4756af3239fb6b6d75d0b8063b5df03ba76fd8a4c87849e180"}, + "websock": {:hex, :websock, "0.5.3", "2f69a6ebe810328555b6fe5c831a851f485e303a7c8ce6c5f675abeb20ebdadc", 
[:mix], [], "hexpm", "6105453d7fac22c712ad66fab1d45abdf049868f253cf719b625151460b8b453"}, + "websock_adapter": {:hex, :websock_adapter, "0.5.7", "65fa74042530064ef0570b75b43f5c49bb8b235d6515671b3d250022cb8a1f9e", [:mix], [{:bandit, ">= 0.6.0", [hex: :bandit, repo: "hexpm", optional: true]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}, {:plug_cowboy, "~> 2.6", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:websock, "~> 0.5", [hex: :websock, repo: "hexpm", optional: false]}], "hexpm", "d0f478ee64deddfec64b800673fd6e0c8888b079d9f3444dd96d2a98383bdbd1"}, +} diff --git a/examples/gatekeeper-auth/api/priv/repo/migrations/.formatter.exs b/examples/gatekeeper-auth/api/priv/repo/migrations/.formatter.exs new file mode 100644 index 0000000000..49f9151ed2 --- /dev/null +++ b/examples/gatekeeper-auth/api/priv/repo/migrations/.formatter.exs @@ -0,0 +1,4 @@ +[ + import_deps: [:ecto_sql], + inputs: ["*.exs"] +] diff --git a/examples/gatekeeper-auth/api/priv/repo/migrations/20241108150947_create_items.exs b/examples/gatekeeper-auth/api/priv/repo/migrations/20241108150947_create_items.exs new file mode 100644 index 0000000000..68c31a585d --- /dev/null +++ b/examples/gatekeeper-auth/api/priv/repo/migrations/20241108150947_create_items.exs @@ -0,0 +1,10 @@ +defmodule Api.Repo.Migrations.CreateItems do + use Ecto.Migration + + def change do + create table(:items, primary_key: false) do + add :id, :binary_id, primary_key: true + add :value, :string, null: true + end + end +end diff --git a/examples/gatekeeper-auth/api/priv/repo/seeds.exs b/examples/gatekeeper-auth/api/priv/repo/seeds.exs new file mode 100644 index 0000000000..8e275e9319 --- /dev/null +++ b/examples/gatekeeper-auth/api/priv/repo/seeds.exs @@ -0,0 +1,11 @@ +# Script for populating the database. 
You can run it as: +# +# mix run priv/repo/seeds.exs +# +# Inside the script, you can read and write to any of your +# repositories directly: +# +# Api.Repo.insert!(%Api.SomeSchema{}) +# +# We recommend using the bang functions (`insert!`, `update!` +# and so on) as they will fail if something goes wrong. diff --git a/examples/gatekeeper-auth/api/rel/overlays/bin/migrate b/examples/gatekeeper-auth/api/rel/overlays/bin/migrate new file mode 100755 index 0000000000..f093404c52 --- /dev/null +++ b/examples/gatekeeper-auth/api/rel/overlays/bin/migrate @@ -0,0 +1,5 @@ +#!/bin/sh +set -eu + +cd -P -- "$(dirname -- "$0")" +exec ./api eval Api.Release.migrate diff --git a/examples/gatekeeper-auth/api/rel/overlays/bin/migrate.bat b/examples/gatekeeper-auth/api/rel/overlays/bin/migrate.bat new file mode 100755 index 0000000000..d8003e83d8 --- /dev/null +++ b/examples/gatekeeper-auth/api/rel/overlays/bin/migrate.bat @@ -0,0 +1 @@ +call "%~dp0\api" eval Api.Release.migrate diff --git a/examples/gatekeeper-auth/api/rel/overlays/bin/server b/examples/gatekeeper-auth/api/rel/overlays/bin/server new file mode 100755 index 0000000000..de9c7d9ca7 --- /dev/null +++ b/examples/gatekeeper-auth/api/rel/overlays/bin/server @@ -0,0 +1,5 @@ +#!/bin/sh +set -eu + +cd -P -- "$(dirname -- "$0")" +PHX_SERVER=true exec ./api start diff --git a/examples/gatekeeper-auth/api/rel/overlays/bin/server.bat b/examples/gatekeeper-auth/api/rel/overlays/bin/server.bat new file mode 100755 index 0000000000..4cf5d1e061 --- /dev/null +++ b/examples/gatekeeper-auth/api/rel/overlays/bin/server.bat @@ -0,0 +1,2 @@ +set PHX_SERVER=true +call "%~dp0\api" start diff --git a/examples/gatekeeper-auth/api/test/api_web/authenticator_test.exs b/examples/gatekeeper-auth/api/test/api_web/authenticator_test.exs new file mode 100644 index 0000000000..4f19a2a3dd --- /dev/null +++ b/examples/gatekeeper-auth/api/test/api_web/authenticator_test.exs @@ -0,0 +1,33 @@ +defmodule ApiWeb.AuthenticatorTest do + use Api.DataCase + + 
  alias Api.Shape
+  alias ApiWeb.Authenticator
+
+  describe "authenticator" do
+    test "generate token" do
+      {:ok, shape} = Shape.from(%{"table" => "foo"})
+
+      assert %{"Authorization" => "Bearer " <> _token} =
+               Authenticator.authenticate_shape(shape, nil)
+    end
+
+    test "validate token" do
+      {:ok, shape} = Shape.from(%{"table" => "foo"})
+
+      headers = Authenticator.authenticate_shape(shape, nil)
+      assert Authenticator.authorise(shape, headers)
+    end
+
+    test "validate token with params" do
+      {:ok, shape} =
+        Shape.from(%{
+          "table" => "foo",
+          "where" => "value IS NOT NULL"
+        })
+
+      headers = Authenticator.authenticate_shape(shape, nil)
+      assert Authenticator.authorise(shape, headers)
+    end
+  end
+end
diff --git a/examples/gatekeeper-auth/api/test/api_web/controllers/proxy_controller_test.exs b/examples/gatekeeper-auth/api/test/api_web/controllers/proxy_controller_test.exs
new file mode 100644
index 0000000000..6bc2190f56
--- /dev/null
+++ b/examples/gatekeeper-auth/api/test/api_web/controllers/proxy_controller_test.exs
@@ -0,0 +1,49 @@
+defmodule ApiWeb.ProxyControllerTest do
+  use ApiWeb.ConnCase
+
+  alias Api.Token
+  alias Electric.Client.ShapeDefinition
+
+  setup %{conn: conn} do
+    {:ok, conn: put_req_header(conn, "accept", "application/json")}
+  end
+
+  describe "show" do
+    test "requires shape params", %{conn: conn} do
+      assert conn
+             |> get("/proxy/v1/shape")
+             |> response(400)
+    end
+
+    test "requires auth", %{conn: conn} do
+      assert conn
+             |> get("/proxy/v1/shape", table: "items")
+             |> response(401)
+    end
+
+    test "requires valid auth header", %{conn: conn} do
+      assert conn
+             |> put_req_header("authorization", "Bearer invalid-token")
+             |> get("/proxy/v1/shape", table: "items")
+             |> response(401)
+    end
+
+    test "requires matching shape definition", %{conn: conn} do
+      {:ok, shape} = ShapeDefinition.new("wrong", [])
+
+      assert conn
+             |> put_req_header("authorization", "Bearer #{Token.generate(shape)}")
+             |> get("/proxy/v1/shape", table: "items", offset: -1)
+             |> response(403)
+    end
+
+    test "proxies request", %{conn: conn} do
+      {:ok, shape} = ShapeDefinition.new("items", [])
+
+      assert conn
+             |> put_req_header("authorization", "Bearer #{Token.generate(shape)}")
+             |> get("/proxy/v1/shape", table: "items", offset: -1)
+             |> json_response(200)
+    end
+  end
+end
diff --git a/examples/gatekeeper-auth/api/test/api_web/gatekeeper_test.exs b/examples/gatekeeper-auth/api/test/api_web/gatekeeper_test.exs
new file mode 100644
index 0000000000..67df0fdfe0
--- /dev/null
+++ b/examples/gatekeeper-auth/api/test/api_web/gatekeeper_test.exs
@@ -0,0 +1,60 @@
+defmodule ApiWeb.GatekeeperTest do
+  use ApiWeb.ConnCase
+
+  alias Api.Shape
+  alias ApiWeb.Authenticator
+
+  setup %{conn: conn} do
+    {:ok, conn: put_req_header(conn, "accept", "application/json")}
+  end
+
+  describe "gatekeeper plug" do
+    test "does not support GET requests", %{conn: conn} do
+      assert conn
+             |> get("/gatekeeper/items")
+             |> response(404)
+    end
+
+    test "generates valid config", %{conn: conn} do
+      data =
+        conn
+        |> post("/gatekeeper/items")
+        |> json_response(200)
+
+      assert %{"table" => "items"} = data
+    end
+
+    test "generates valid config with params", %{conn: conn} do
+      clause = "value IS NOT NULL"
+
+      data =
+        conn
+        |> post("/gatekeeper/items", where: clause)
+        |> json_response(200)
+
+      assert %{"table" => "items", "where" => ^clause} = data
+    end
+  end
+
+  describe "gatekeeper auth" do
+    test "generates an auth header", %{conn: conn} do
+      data =
+        conn
+        |> post("/gatekeeper/items")
+        |> json_response(200)
+
+      assert %{"headers" => %{"Authorization" => "Bearer " <> _token}} = data
+    end
+
+    test "generates a valid auth header", %{conn: conn} do
+      assert %{"headers" => headers, "table" => table} =
+               conn
+               |> post("/gatekeeper/items")
+               |> json_response(200)
+
+      {:ok, shape} = Shape.from(%{"table" => table})
+
+      assert Authenticator.authorise(shape, headers)
+    end
+  end
+end
diff --git a/examples/gatekeeper-auth/api/test/api_web/integration_test.exs b/examples/gatekeeper-auth/api/test/api_web/integration_test.exs
new file mode 100644
index 0000000000..59fa7237e0
--- /dev/null
+++ b/examples/gatekeeper-auth/api/test/api_web/integration_test.exs
@@ -0,0 +1,51 @@
+defmodule ApiWeb.IntegrationTest do
+  use ApiWeb.ConnCase
+
+  setup %{conn: conn} do
+    {:ok, conn: put_req_header(conn, "accept", "application/json")}
+  end
+
+  describe "integration" do
+    test "gatekeeper auth token works with the proxy", %{conn: conn} do
+      # Define a shape. Any shape.
+      table = "items"
+      where = "value IS NOT NULL"
+
+      # Fetch the client config from the gatekeeper endpoint.
+      assert %{"headers" => %{"Authorization" => auth_header}} =
+               conn
+               |> post("/gatekeeper/#{table}", where: where)
+               |> json_response(200)
+
+      # Make an authorised shape request.
+      assert [] =
+               conn
+               |> put_req_header("authorization", auth_header)
+               |> get("/proxy/v1/shape", offset: -1, table: table, where: where)
+               |> json_response(200)
+    end
+
+    test "using the gatekeeper config", %{conn: conn} do
+      # As above, but this time dynamically construct the proxy request
+      # from the config returned by the gatekeeper endpoint.
+      assert data =
+               conn
+               |> post("/gatekeeper/items", where: "value IS NOT NULL")
+               |> json_response(200)
+
+      assert {headers, data} = Map.pop(data, "headers")
+      assert {url, shape_params} = Map.pop(data, "url")
+
+      # We use the path rather than the full URL because we're testing
+      # against the local endpoint.
+      path = URI.parse(url).path
+      params = Map.put(shape_params, "offset", -1)
+
+      conn =
+        headers
+        |> Enum.reduce(conn, fn {k, v}, acc -> put_req_header(acc, String.downcase(k), v) end)
+        |> get(path, params)
+
+      assert [] = json_response(conn, 200)
+    end
+  end
+end
diff --git a/examples/gatekeeper-auth/api/test/support/conn_case.ex b/examples/gatekeeper-auth/api/test/support/conn_case.ex
new file mode 100644
index 0000000000..01fd8e0d35
--- /dev/null
+++ b/examples/gatekeeper-auth/api/test/support/conn_case.ex
@@ -0,0 +1,37 @@
+defmodule ApiWeb.ConnCase do
+  @moduledoc """
+  This module defines the test case to be used by
+  tests that require setting up a connection.
+
+  Such tests rely on `Phoenix.ConnTest` and also
+  import other functionality to make it easier
+  to build common data structures and query the data layer.
+
+  Finally, if the test case interacts with the database,
+  we enable the SQL sandbox, so changes done to the database
+  are reverted at the end of every test. If you are using
+  PostgreSQL, you can even run database tests asynchronously
+  by setting `use ApiWeb.ConnCase, async: true`, although
+  this option is not recommended for other databases.
+  """
+
+  use ExUnit.CaseTemplate
+
+  using do
+    quote do
+      # The default endpoint for testing
+      @endpoint ApiWeb.Endpoint
+
+      # Import conveniences for testing with connections
+      import Plug.Conn
+      import Phoenix.ConnTest
+      import ApiWeb.ConnCase
+    end
+  end
+
+  setup tags do
+    Api.DataCase.setup_sandbox(tags)
+
+    {:ok, conn: Phoenix.ConnTest.build_conn()}
+  end
+end
diff --git a/examples/gatekeeper-auth/api/test/support/data_case.ex b/examples/gatekeeper-auth/api/test/support/data_case.ex
new file mode 100644
index 0000000000..7a39af55f7
--- /dev/null
+++ b/examples/gatekeeper-auth/api/test/support/data_case.ex
@@ -0,0 +1,59 @@
+defmodule Api.DataCase do
+  @moduledoc """
+  This module defines the setup for tests requiring
+  access to the application's data layer.
+
+  You may define functions here to be used as helpers in
+  your tests.
+
+  Finally, if the test case interacts with the database,
+  we enable the SQL sandbox, so changes done to the database
+  are reverted at the end of every test. If you are using
+  PostgreSQL, you can even run database tests asynchronously
+  by setting `use Api.DataCase, async: true`, although
+  this option is not recommended for other databases.
+  """
+
+  use ExUnit.CaseTemplate
+
+  using do
+    quote do
+      alias Api.Repo
+
+      import Ecto
+      import Ecto.Changeset
+      import Ecto.Query
+      import Api.DataCase
+    end
+  end
+
+  setup tags do
+    Api.DataCase.setup_sandbox(tags)
+
+    :ok
+  end
+
+  @doc """
+  Sets up the sandbox based on the test tags.
+  """
+  def setup_sandbox(tags) do
+    pid = Ecto.Adapters.SQL.Sandbox.start_owner!(Api.Repo, shared: not tags[:async])
+    on_exit(fn -> Ecto.Adapters.SQL.Sandbox.stop_owner(pid) end)
+  end
+
+  @doc """
+  A helper that transforms changeset errors into a map of messages.
+
+      assert {:error, changeset} = Accounts.create_user(%{password: "short"})
+      assert "password is too short" in errors_on(changeset).password
+      assert %{password: ["password is too short"]} = errors_on(changeset)
+
+  """
+  def errors_on(changeset) do
+    Ecto.Changeset.traverse_errors(changeset, fn {message, opts} ->
+      Regex.replace(~r"%{(\w+)}", message, fn _, key ->
+        opts |> Keyword.get(String.to_existing_atom(key), key) |> to_string()
+      end)
+    end)
+  end
+end
diff --git a/examples/gatekeeper-auth/api/test/test_helper.exs b/examples/gatekeeper-auth/api/test/test_helper.exs
new file mode 100644
index 0000000000..bba7843009
--- /dev/null
+++ b/examples/gatekeeper-auth/api/test/test_helper.exs
@@ -0,0 +1,2 @@
+ExUnit.start()
+Ecto.Adapters.SQL.Sandbox.mode(Api.Repo, :manual)
diff --git a/examples/gatekeeper-auth/caddy/Caddyfile b/examples/gatekeeper-auth/caddy/Caddyfile
new file mode 100644
index 0000000000..c3aa1f46bc
--- /dev/null
+++ b/examples/gatekeeper-auth/caddy/Caddyfile
@@ -0,0 +1,78 @@
+{
+	order jwtauth before basicauth
+}
+
+:8080 {
+	jwtauth {
+		# You can sign and validate JWT tokens however you prefer. Here we
+		# expect tokens to have been signed with the `HS256` algorithm and
+		# a shared symmetric signing key, to match the configuration in
+		# `../api/lib/api/token.ex`, so that this example config validates
+		# tokens generated by the example Api service.
+		#
+		# Note that the signing key should be base64 encoded:
+		#
+		#     sign_key ""
+		#
+		# See https://caddyserver.com/docs/modules/http.authentication.providers.jwt
+		sign_key {$AUTH_SECRET:"TkZMNSowQmMjOVU2RUB0bm1DJkU3U1VONkd3SGZMbVk="}
+		sign_alg HS256
+
+		# The jwtauth module requires a user claim, but we don't actually use
+		# it here, so we just set it to the token issuer.
+		user_claims iss
+
+		# Extract the shape definition from the JWT `shape` claim and write
+		# it into {http.auth.user.*} variables, so that e.g. `shape.table`
+		# becomes {http.auth.user.table} and is used below to match against
+		# the request parameters.
+		meta_claims \
+			"shape.namespace -> namespace" \
+			"shape.table -> table" \
+			"shape.where -> where" \
+			"shape.columns -> columns"
+	}
+
+	# Match `GET /v1/shape` requests.
+	@get_shape {
+		method GET
+
+		path /v1/shape
+	}
+
+	# Match requests whose JWT shape definition matches the shape definition
+	# in the request parameters.
+	#
+	# So, for example, a claim of `{"shape": {"table": "items"}}` will match
+	# a query parameter of `?table=items`.
+	#
+	# Note that the first part of the expression matches the request table
+	# param against either the shape `table` or `namespace.table`, depending
+	# on whether the shape `namespace` is empty or not.
+	@definition_matches {
+		expression < {
+  const url = new URL(req.url)
+  if (!isGetShapeRequest(req.method, url.pathname)) {
+    return new Response("Not found", {status: 404})
+  }
+
+  const [isValidJWT, claims] = verifyAuthHeader(req.headers)
+  if (!isValidJWT) {
+    return new Response("Unauthorized", {status: 401})
+  }
+
+  if (!matchesDefinition(claims.shape, url.searchParams)) {
+    return new Response("Forbidden", {status: 403})
+  }
+
+  // Reverse-proxy the request on to the Electric sync service.
+  return fetch(`${ELECTRIC_URL}/v1/shape${url.search}`, {headers: req.headers})
+})
diff --git a/examples/gatekeeper-auth/package.json b/examples/gatekeeper-auth/package.json
new file mode 100644
index 0000000000..74811f78f6
--- /dev/null
+++ b/examples/gatekeeper-auth/package.json
@@ -0,0 +1,5 @@
+{
+  "name": "@electric-examples/gatekeeper-auth-example",
+  "private": true,
+  "version": "0.0.1"
+}
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml
index 17f610ab50..7c8f7d00c4 100644
--- a/pnpm-lock.yaml
+++ b/pnpm-lock.yaml
@@ -112,6 +112,8 @@ importers:
         specifier: ^5.3.4
         version: 5.3.4(@types/node@20.14.11)
 
+  examples/gatekeeper-auth: {}
+
   examples/linearlite:
     dependencies:
       '@electric-sql/client':
@@ -644,6 +646,8 @@ importers:
         specifier: ^0.0.9
         version: 0.0.9
 
+  packages/elixir-client: {}
+
   packages/react-hooks:
     dependencies:
      '@electric-sql/client':
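The edge function above delegates the shape comparison to a `matchesDefinition(claims.shape, url.searchParams)` helper whose definition falls outside this diff hunk. As a rough, hypothetical sketch only (the `ShapeClaim` type, the field names, and the namespace-qualification rule are assumptions mirroring the Caddyfile comments, not the example's actual code), such a matcher could compare each signed claim field against the corresponding query parameter while ignoring unrelated params like `offset`:

```typescript
// Hypothetical reconstruction of the shape-claim matcher; the real helper
// in the example may differ. A request is authorised only if the table
// (or namespace-qualified table), where clause and columns in the signed
// JWT claim all match the corresponding request query parameters.
type ShapeClaim = {
  namespace?: string
  table?: string
  where?: string
  columns?: string
}

function matchesDefinition(
  shape: ShapeClaim | undefined,
  params: URLSearchParams
): boolean {
  if (shape === undefined || shape.table === undefined) {
    return false
  }

  // Match the request `table` param against either `table` or
  // `namespace.table`, depending on whether the claim sets a namespace.
  const table =
    shape.namespace !== undefined && shape.namespace !== ""
      ? `${shape.namespace}.${shape.table}`
      : shape.table

  // Absent params and absent claim fields (both undefined) also match.
  return (
    params.get("table") === table &&
    (params.get("where") ?? undefined) === shape.where &&
    (params.get("columns") ?? undefined) === shape.columns
  )
}
```

Under this sketch, a claim of `{"shape": {"table": "items"}}` matches a request for `?table=items&offset=-1` (the `offset` param is not part of the shape definition), while `?table=wrong` falls through to the handler's 403 response.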