Wrap up R&D proto #31

Merged 27 commits on Mar 23, 2020.

Commits:
* 494e9f4 initial stack scaffolding. (c0c0n3, Feb 21, 2020)
* a807318 bundle together yamster lib & exe. (c0c0n3, Feb 26, 2020)
* 85d0b2c generate content of httpbin_service.yaml & orion_adapter_service.yaml (c0c0n3, Feb 26, 2020)
* 382ee4e introduce shake to generate httpbin_service.yaml & orion_adapter_serv… (c0c0n3, Feb 27, 2020)
* d37958e stash away orion & mongodb k8s config from early prototyping (c0c0n3, Feb 27, 2020)
* 8e5fdb3 generate mongodb_service.yaml (c0c0n3, Feb 27, 2020)
* 23312f4 generate orion_service.yaml (c0c0n3, Feb 28, 2020)
* d7235ac generate mock_daps_service.yaml (c0c0n3, Feb 28, 2020)
* 6894303 generate ingress_routing.yaml (c0c0n3, Mar 2, 2020)
* fa4d3e8 fix orion startup command. (c0c0n3, Mar 2, 2020)
* 2b817fa fix orion/mongodb connection. (c0c0n3, Mar 2, 2020)
* 91d73e8 fix orion routing/auth. (c0c0n3, Mar 4, 2020)
* 9c82b92 generate egress_filter.yaml (c0c0n3, Mar 5, 2020)
* b34dfa5 document orion deployment. (c0c0n3, Mar 5, 2020)
* fb7430a rename orion egress filter. (c0c0n3, Mar 6, 2020)
* 54195ad generate sample_operator_cfg.yaml (c0c0n3, Mar 9, 2020)
* 1108fa4 write notes-to-self in yamster readme. (c0c0n3, Mar 9, 2020)
* 0a9906f Merge pull request #30 from orchestracities/edsl (c0c0n3, Mar 10, 2020)
* dec2e09 reimplement a more decent authz client. (c0c0n3, Mar 10, 2020)
* ab45a2d implement adapter authz config. (c0c0n3, Mar 11, 2020)
* 16183c6 extract authz data from ids token. (c0c0n3, Mar 11, 2020)
* 03f9aa7 make token validation return jwt payload. (c0c0n3, Mar 11, 2020)
* d3ff5cb make adapter use authz when configured. (c0c0n3, Mar 11, 2020)
* df09439 generate authz mesh config. (c0c0n3, Mar 13, 2020)
* ff22e5b better logging of adapter request data. (c0c0n3, Mar 13, 2020)
* b5f67ea fix scopes extraction from jwt. (c0c0n3, Mar 13, 2020)
* cd1b5b9 document authz workflow in readme. (c0c0n3, Mar 13, 2020)

README.md (130 additions, 2 deletions)

The changed sections now read:

[...] it (i.e. set `disablePolicyChecks` to `false`), but it doesn't,
nor does specifying that option at installation time work, which is
why you'll have to manually edit the K8s config after applying the
Istio `demo` profile.

**Tip**. *Istio Dashboard*. If you're looking for an easy way to see
what's going on in your mesh (services, logs, config, etc.), why not
use the Kiali dashboard installed with the demo profile? Try

$ istioctl dashboard kiali

Log in with user `admin` and password `admin`.

##### Adapter and mock DAPS images

Let's "Dockerise" our adapter so we can run it on the freshly minted Istio
[...]

See if we can still get away with an invalid token...

You should get back a fat 403 with a message along the lines of:

PERMISSION_DENIED:
orionadapter-handler.handler.istio-system:unauthorized: invalid JWT data

Like I said earlier, the adapter verifies that the JWT you send as
part of the IDSA-Header is valid---see
`deployment/sample_operator_cfg.yaml`. What happens if we send a valid
token then? Here's a valid JWT signed with the private key in the
config (`idsa_private_key` field).

$ export MY_FAT_JWT=eyJhbGciOiJSUzI1NiJ9.e30.QHOtHczHK_bJrgqhXeZdE4xnCGh9zZhp67MHfRzHlUUe98eCup_uAEKh-2A8lCyg8sr1Q9dV2tSbB8vPecWPaB43BWKU00I7cf1jRo9Yy0nypQb3LhFMiXIMhX6ETOyOtMQu1dS694ecdPxMF1yw4rgqTtp_Sz-JfrasMLcxpBtT7USocnJHE_EkcQKVXeJ857JtkCKAzO4rkMli2sFnKckvoJMBoyrObZ_VFCVR5NGnOvSnLMqKrYaLxNHLDL_0Mxy_b8iKTiRAqyNce4tg8Evhqb3rPQcx9kMdwyv_1ggEVKQyiPWa3MkSBvBArgPghbJMcSJVMhtUO8M9BmNMyw
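
If you're curious what's inside that beast, the two segments before
the signature are just base64-encoded JSON. For this token the header
decodes to `{"alg":"RS256"}` and the payload to `{}` (no claims at
all). A quick way to peek with plain `base64`---note JWT segments are
unpadded, so some need a trailing `=` re-added by hand:

$ echo "${MY_FAT_JWT}" | cut -d '.' -f 1 | base64 -d
# {"alg":"RS256"}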

[...]

which case `tokenValue` should be a real DAPS identity token ;-)

Happy days!

##### Deploying Orion

Well, how about we do this with Orion instead of `httpbin`? Why the
heck not. Start by deploying MongoDB:

$ kubectl apply -f deployment/mongodb_service.yaml
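
If you want to block until MongoDB is ready rather than eyeball it,
something along these lines should work (a sketch: the `app: mongodb`
pod label is an assumption about what `deployment/mongodb_service.yaml`
sets):

$ kubectl wait --for=condition=ready pod -l app=mongodb --timeout=120s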

This is a simple MongoDB service with no replication and ephemeral
storage---i.e. your DB won't survive a pod restart---but it will do
for testing. You should wait until MongoDB is up and running before
deploying Orion (the `kubectl wait` above does the trick)---in a prod
scenario you'd want to automate this with e.g. `init` containers, but
hey, we're just testing here :-) Instead of waiting around just
twiddling your thumbs, edit your load balancer config to add an
external port for Orion:

$ EDITOR=emacs kubectl -n istio-system edit svc istio-ingressgateway
# ^ replace with your favourite editor, or leave the variable unset to use the default

Then add the below port to the `ports` section:

ports:
  ...
  - name: orion
    nodePort: 31026
    port: 1026
    protocol: TCP
    targetPort: 1026
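
By the way, if interactive editing isn't your thing, a JSON patch
should add the same port in one go (a sketch of the equivalent
one-liner):

$ kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p='[{"op": "add", "path": "/spec/ports/-",
        "value": {"name": "orion", "nodePort": 31026, "port": 1026,
                  "protocol": "TCP", "targetPort": 1026}}]'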

This makes mesh gateway port `1026` reachable from outside the cluster
through port `31026`. Next, deploy Orion

$ kubectl apply -f deployment/orion_service.yaml

and you're ready to play around! Here's how to get your feet wet:

$ curl -v "$(minikube ip):31026/v2"
# you should get back a 403/permission denied.

$ curl -v "$(minikube ip):31026/v2" -H "header:${HEADER_VALUE}"
# set HEADER_VALUE as we did earlier; you should get back some
# JSON with Orion's API entry points.

You can try adding entities and subscriptions, and triggering
notifications---see the sketch below for an example. It should all go
without a hitch, but there's a snag: because of
[#28](https://github.com/orchestracities/boost/issues/28), at the
moment no IDS header gets added to Orion notification messages. But
a fix should become available soon, stay tuned!
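
For example, creating an entity could look like this (a sketch;
`Room1` is just an illustrative NGSI v2 entity):

$ curl -v "$(minikube ip):31026/v2/entities" \
       -H "header:${HEADER_VALUE}" \
       -H "Content-Type: application/json" \
       -d '{"id": "Room1", "type": "Room",
            "temperature": {"value": 23, "type": "Number"}}'
# expect a 201 with a Location header pointing at the new entity.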

##### Access-control with AuthZ

Time to up the ante in the access-control war. We're going to require
CIA clearance now before you can access HTTP resources---in case that
wasn't obvious to you too, CIA stands for Control of Internet Access,
of course, what else?! We have an AuthZ test server at

* http://authzforceingress.appstorecontainerns.46.17.108.63.xip.io/authzforce-ce/domains/CYYY_V2IEeqMJKbegCuurA/pdp

configured with an XACML policy that only lets users in roles `role0`
through `role3` `GET` Orion resources through an application identified
by a resource ID of `b3a4a7d2-ce61-471f-b05d-fb82452ae686`, i.e. our
Mr Adapter the Constable. Also, the policy only gives the green light
if the resource the user is trying to access belongs to the `service`
tenant.

With default config, the adapter won't ask AuthZ to authorize calls:
if the incoming token is valid, the request gets forwarded to Orion.
But you can change that in a flash. Edit `sample_operator_cfg.yaml`
to set the `authz/enable` flag to `true`, then

$ kubectl apply -f deployment/sample_operator_cfg.yaml

Now whenever a request comes in, after okaying the client token in the
`header` header, the adapter will submit an authorization request to
AuthZ with the below data:

* *Resource ID*. Taken from `authz` config section.
* *Resource Path*. The incoming request path, e.g. `/v2/entities?id=1`.
* *Action*. Request verb, e.g. `GET`.
* *Tenant*. Content of the request's `Fiware-Service` header if any.
* *Roles*. User roles extracted from the `scopes` claim, if any, in
the incoming JWT token payload.
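
To make that concrete, here's roughly what the adapter would hand
over to AuthZ for the authenticated request we'll build shortly
(values pieced together from this walkthrough, for illustration only):

Resource ID:    b3a4a7d2-ce61-471f-b05d-fb82452ae686   # authz config section
Resource Path:  /v2                                    # request path
Action:         GET                                    # request verb
Tenant:         service                                # Fiware-Service header
Roles:          role0, role1, role2, role3             # JWT scopes claim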

Let's see it in action. Try resubmitting the request we made earlier
to get Orion's API entry points

$ curl -v "$(minikube ip):31026/v2" -H "header:${HEADER_VALUE}"

and, surprise, surprise, the adapter should show you the door this
time (the boy ain't got no manners!) with a `403` and a message like

PERMISSION_DENIED:
orionadapter-handler.handler.istio-system:unauthorized:
AuthZ denied authorization

Let's see if we can get through. Since AuthZ expects the user to be
in `role0` through `role3` and the request to target the `service`
tenant, we'll need to change our request so it holds that data too.
So we're going to use a JWT with a `scopes` claim set to `[role0, role1,
role2, role3]` and add a `Fiware-Service` header. Here's the JWT,
signed with the private key in the adapter config---look for the
`idsa_private_key` field in `sample_operator_cfg.yaml`.

$ export MY_FAT_JWT=eyJhbGciOiJSUzI1NiJ9.eyJzY29wZXMiOlsicm9sZTAiLCJyb2xlMSIsInJvbGUyIiwicm9sZTMiXX0.JN66SWLPqNg7pqTFRcryo-3lX4V4BNKG5bZD3SDne4B3qV5kS-5NNW5wFkty870NFjuXP_nCxg3ayOCe8YZab3kRieaCeygVJwc2i1iUEHmYqKz6jx2EecfM2VbechaapDOFc9k01S5ea1t7fSHFsJsDWpVPpCJZBAv1ikPZrv88-7PLOacdGum--0-0gI6LGaXIFiTIAzbdeJ5V-ikIK7CgLJFaR3Ib5MwGRjrGTaPqQGE62SVpATphRhSJIfXm18ViF2fG7KTGPBYGY3rxAdy6l3klpKuxA0ATQRZJ39mpjrgbf-WVlvH_9nSFAn9BvLiSJohpSMmoJTX7ToWA0g
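
You can double-check the payload actually carries those roles; for
this particular token the segment needs one `=` of padding before
`base64` will decode it:

$ echo "${MY_FAT_JWT}" | cut -d '.' -f 2 | sed -e 's/$/=/' | base64 -d
# {"scopes":["role0","role1","role2","role3"]}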

Just like we did earlier, we use the same convenience script to get
the base64-encoded IDSA header

$ export HEADER_VALUE=$(sh scripts/idsa-header-value.sh "${MY_FAT_JWT}")

and we're ready to try our luck

$ curl -v "$(minikube ip):31026/v2" \
-H "header:${HEADER_VALUE}" \
-H "Fiware-Service:service"

If everything went according to plan, you're looking at a `200`
response on your terminal with the JSON body returned by Orion :-)
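
As a counter-check, dropping the `Fiware-Service` header (or sending
a different tenant) should land you back in `403` territory, since
the policy insists on the `service` tenant:

$ curl -v "$(minikube ip):31026/v2" -H "header:${HEADER_VALUE}"
# expect PERMISSION_DENIED: ... AuthZ denied authorization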

##### Cleaning up

**TODO**

deployment/egress_filter.yaml (15 additions, 28 deletions)

The file now reads (the first patch adds the Lua filter to the HTTP
connection manager; the second adds the cluster the Lua code calls):

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: "orion-egress-filter"
  namespace: default
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: envoy.router
        portNumber: 80
    patch:
      operation: INSERT_BEFORE
      value:
        config:
          inlineCode: "\n function envoy_on_request(request_handle)\n local headers, body = request_handle:httpCall(\n \"lua_cluster\",\n { [\":method\"] = \"GET\",\n [\":path\"] = \"/\",\n [\":authority\"] = \"lua_cluster\"\n },\n \"\",\n 5000)\n request_handle:headers():add(\"header\", body)\n end\n"
        name: envoy.lua
  - applyTo: CLUSTER
    # match:
    #   context: SIDECAR_OUTBOUND
    patch:
      operation: ADD
      value:
        connect_timeout: 5.5s
        hosts:
        - socket_address:
            address: "orionadapterservice.istio-system"
            port_value: 54321
            protocol: TCP
        lb_policy: ROUND_ROBIN
        name: lua_cluster
        type: STRICT_DNS
  workloadSelector:
    labels:
      app: httpbin

For the record, the escaped `inlineCode` string is the same Lua that
was previously inlined verbatim:

function envoy_on_request(request_handle)
  local headers, body = request_handle:httpCall(
    "lua_cluster", {
      [":method"] = "GET",
      [":path"] = "/",
      [":authority"] = "lua_cluster"}, "", 5000)
  request_handle:headers():add("header", body)
end
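
To try the filter out, applying it and reading it back should work
along these lines (a sketch):

$ kubectl apply -f deployment/egress_filter.yaml
$ kubectl get envoyfilter orion-egress-filter -o yaml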

deployment/httpbin_service.yaml (8 additions, 19 deletions)

The Apache license header and banner comments were dropped; the file
now reads:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: httpbin
  name: httpbin
spec:
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: httpbin
  type: NodePort

---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: httpbin
  name: httpbin
spec:
  replicas: 1
  [...]
        name: httpbin
        ports:
        - containerPort: 80
          name: http

deployment/ingress_routing.yaml (39 additions, 7 deletions)

The file now reads:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: "boost-gateway"
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"
    port:
      name: "httpbin:80"
      number: 80
      protocol: HTTP
  - hosts:
    - "*"
    port:
      name: "orion:1026"
      number: 1026
      protocol: HTTP

---

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  gateways:
  - "boost-gateway"
  hosts:
  - "*"
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
      weight: 100

---

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orion
spec:
  gateways:
  - "boost-gateway"
  hosts:
  - "*"
  http:
  - match:
    - port: 1026
    route:
    - destination:
        host: orion
        port:
          number: 1026
      weight: 100
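
After applying this, a quick sanity check is to confirm the ingress
gateway service exposes both server ports (a sketch; exact output
depends on your setup):

$ kubectl apply -f deployment/ingress_routing.yaml
$ kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[*].port}'
# should include 80 and, after the load balancer edit from the README, 1026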

deployment/mock_daps_service.yaml (12 additions, 8 deletions)

The file now reads:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: mockdaps
  name: mockdaps
spec:
  ports:
  - name: https
    port: 44300
    protocol: TCP
    targetPort: 44300
  selector:
    app: mockdaps
  type: ClusterIP

---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mockdaps
  name: mockdaps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mockdaps
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        app: mockdaps
    spec:
      containers:
      - image: "boost/mockdaps:latest"
        imagePullPolicy: Never
        name: mockdaps
        ports:
        - containerPort: 44300
          name: https
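
Since the pod template sets `sidecar.istio.io/inject: "false"`, Istio
won't add an Envoy proxy to the DAPS pod; you can verify the pod runs
just the one container (a sketch):

$ kubectl get pod -l app=mockdaps -o jsonpath='{.items[0].spec.containers[*].name}'
# expect: mockdaps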