Css 4587 backport generic fixes (#965)
* Added documentation and setup files

* Improve docker compose

* Merged backport ci/cd

* Fix compose for two exposed services

* Added improved setup instructions

* Further improvements

* Eliminated unnecessary sections
kian99 authored Jun 26, 2023
1 parent 65d37d4 commit 5a2af0d
Showing 12 changed files with 163 additions and 53 deletions.
1 change: 1 addition & 0 deletions .air.toml
@@ -29,6 +29,7 @@ tmp_dir = "tmp"

[log]
time = false
main_only = true

[misc]
clean_on_exit = false
26 changes: 25 additions & 1 deletion README.md
@@ -31,11 +31,35 @@ them in `$GOPATH/bin`. This is the list of the installed commands:
- jemd: start the JIMM server;
- jaas-admin: perform admin commands on JIMM;

### Docker-compose:
### Docker compose:
See [here](./local/README.md) on how to get started.

## Testing

### Pre-requisite
As the Juju controller internal suites start their own mongod instances, it is required to have juju-db (mongod) installed.
This can be installed via: `sudo snap install juju-db`.
The latest JIMM has an upgraded dependency on Juju which in turn requires juju-db from channel `4.4/stable`;
this can be installed with `sudo snap install juju-db --channel=4.4/stable`.
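
To double-check which channel the snap is tracking, a quick sketch (assuming `4.4/stable` is the track required by your Juju version):

```bash
# Confirm juju-db is installed and which channel it tracks.
snap list juju-db
# Switch tracks if needed (assumes 4.4/stable is the required channel).
sudo snap refresh juju-db --channel=4.4/stable
```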

The rest of the suite relies on PostgreSQL, OpenFGA and Hashicorp Vault, which are dockerised,
so you may simply run `docker compose up` to be ready for integration testing.
The above command won't start a dockerised instance of JIMM, as tests are normally run locally. Instead, to start a
dockerised JIMM that will auto-reload on code changes, simply run `docker compose --profile dev up`.

### Manual commands
The tests utilise [gocheck](http://labix.org/gocheck) for suites and you may run tests individually like so:
```bash
$ go test -check.f dialSuite.TestDialWithCredentialsStoredInVault
$ go test -check.f MyTestSuite
$ go test -check.f "Test.*Works"
$ go test -check.f "MyTestSuite.Test.*Works"
```

For more verbose output, use `-check.v` and `-check.vv`.
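
The filter and verbosity flags can be combined and scoped to a single package; a sketch (the package path and filter here are illustrative, not tied to any particular suite in the repo):

```bash
# Run only the matching gocheck tests in one package, with maximum verbosity.
# Substitute the package path and filter for the suite you are working on.
go test ./internal/jimm/... -check.f "dialSuite.*Vault.*" -check.vv
```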


### Make
Run `make check` to test the application.
Run `make help` to display help about all the available make targets.

17 changes: 15 additions & 2 deletions docker-compose.yaml
@@ -33,6 +33,7 @@ services:
ports:
- 17070:80
environment:
JIMM_LOG_LEVEL: "debug"
JIMM_UUID: "3217dbc9-8ea9-4381-9e97-01eab0b3f6bb"
JIMM_DSN: "postgresql://jimm:jimm@db/jimm"
CANDID_URL: "http://0.0.0.0:8081" # For external client redirects (in the case of compose and running outside)
@@ -63,6 +64,13 @@ services:
depends_on:
db:
condition: service_healthy
traefik:
condition: service_healthy
labels:
traefik.enable: true
traefik.http.routers.jimm.rule: Host(`jimm.localhost`)
traefik.http.routers.jimm.entrypoints: websecure
traefik.http.routers.jimm.tls: true

db:
image: postgres
@@ -116,8 +124,6 @@ services:
image: candid:latest
container_name: candid
entrypoint: "/candid.sh"
command: ""
# command: "/etc/candid/config.yaml && echo 'hi' && ls"
expose:
- 8081
ports:
@@ -128,8 +134,15 @@
depends_on:
db:
condition: service_healthy
traefik:
condition: service_healthy
healthcheck:
test: [ "CMD", "curl", "http://localhost:8081/debug/status" ]
interval: 5s
timeout: 5s
retries: 5
labels:
traefik.enable: true
traefik.http.routers.candid.rule: Host(`candid.localhost`)
traefik.http.routers.candid.entrypoints: websecure
traefik.http.routers.candid.tls: true
73 changes: 42 additions & 31 deletions local/README.md
@@ -1,43 +1,38 @@
# Local Development
# Local Development & Testing

## Starting the environment
1. Ensure you have docker above v18, confirm this with docker --version
This doc is intended to help those new to JIMM get up and running
with the local Q/A environment. This environment is additionally
used for integration testing within the JIMM test suite.

# Starting the environment
1. Ensure you have docker above v18, confirm this with `docker --version`
2. Ensure you are in the root JIMM directory.
3. Run make pull/candid to get a local image of candid (this is subject to change!)
4. Run cd local/traefik/certs; ./certs.sh; cd -, this will setup some self signed certs and add them to your cert pool.
5. Run touch ./local/vault/approle.yaml
6. Run make version/commit.txt to populate the repo with the git commit info.
7. Run make version/version.txt to populate the repo with the git version info.
8. docker compose up
3. Run `make pull/candid` to get a local image of candid (this is subject to change!)
4. Run `cd local/traefik/certs; ./certs.sh; cd -`; this will set up some self-signed certs and add them to your cert pool.
5. Run `touch ./local/vault/approle.json && touch ./local/vault/roleid.txt`
6. Run `make version/commit.txt && make version/version.txt` to populate the repo with the git commit and version info.
7. Run `go mod vendor` to vendor JIMM's dependencies and reduce repeated setup time.
8. Run `docker compose --profile dev up`. If you encounter an error like "Error response from daemon: network ... not found", the command `docker compose --profile dev up --force-recreate` should help.

After this initial setup, subsequent use of the compose can be done with `docker compose --profile dev up --force-recreate` (see the consolidated sketch below).
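
For convenience, the one-time setup above can be strung together; this is a sketch only, assuming it is run from the JIMM repository root and that each step succeeds:

```bash
# One-shot local setup, run from the JIMM repository root (sketch).
set -e
make pull/candid                                # local candid image
(cd local/traefik/certs && ./certs.sh)          # self-signed certs for traefik
touch ./local/vault/approle.json ./local/vault/roleid.txt
make version/commit.txt version/version.txt     # git commit/version info
go mod vendor                                   # vendor dependencies
docker compose --profile dev up
```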

The services included are:
- JIMM
- JIMM (only started in the dev profile)
- Vault
- Postgres
- OpenFGA
- Traefik

> Any changes made inside the repo will automatically restart the JIMM server via a volume mount, so there's no need
to re-run the compose continuously. Note, however, that if you do bring the compose down, you should remove the volumes, otherwise
Vault will not behave correctly; this can be done via `docker compose down -v`.

If all was successful, you should see output similar to:
```
NAME COMMAND SERVICE STATUS PORTS
candid "/candid.sh" candid running (healthy) 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp
jimmy "/go/bin/air" jimm running (healthy) 0.0.0.0:17070->8080/tcp, :::17070->8080/tcp
migrateopenfga "/openfga migrate --…" migrateopenfga exited (0)
openfga "/openfga run --data…" openfga running (unhealthy) 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp
postgres "docker-entrypoint.s…" db running (healthy) 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
vault "docker-entrypoint.s…" vault running (unhealthy) 0.0.0.0:8200->8200/tcp, :::8200->8200/tcp
```

Now please check out the [Authentication Steps](#authentication-steps) to authenticate Postman for local testing & Q/A.
## Authentication Steps

# Q/A Using Postman
#### Setup
1. Run `make get-local-auth`
2. Head to Postman and follow the instructions given by `get-local-auth`.

## Facades in Postman

#### Facades in Postman
You will see JIMM's controller WS API broken up into separate WS requests.
This is intentional.
Inside of each WS request will be a set of `saved messages` (on the right-hand side); these are the calls to facades for the given API under that request.
@@ -46,10 +41,26 @@ The `request name` represents the literal WS endpoint, i.e., `API = /api`.

> Remember to run the `Login` message when spinning up a new WS connection, otherwise you will not be able to send subsequent calls to this WS.
## Adding Controllers
TODO.

### Helpful tidbits!
# Q/A Using jimmctl

## Prerequisites

// TODO(): IPv6 networks on the Juju container don't work with JIMM. Figure out how to disable these at the container level so that the controller.yaml file doesn't present IPv6 addresses at all. For now, one can remove them by hand.

Note that you can export an environment variable `CONTROLLER_NAME` and re-run steps 3 and 4 below to create multiple Juju
controllers that will be controlled by JIMM (see the sketch after the steps below).

1. `juju unregister jimm-dev` - Unregister any other local JIMM you have.
2. `juju login jimm.localhost -c jimm-dev` - Login to local JIMM. (If you name the controller jimm-dev, the script will pick it up!)
3. `./local/jimm/setup-controller.sh` - Performs controller setup.
4. `./local/jimm/add-controller.sh` - A local script to do many of the manual steps for us. See script for more details.
5. `juju add-model test` - Adds a model to qa-controller via JIMM.
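
A minimal sketch of registering an additional controller by overriding `CONTROLLER_NAME` (the name `qa-controller-2` is just an example):

```bash
# Bootstrap and register a second Juju controller under a different name (sketch).
export CONTROLLER_NAME=qa-controller-2   # example name; pick anything unused
./local/jimm/setup-controller.sh         # step 3: bootstrap + proxy/CA setup
./local/jimm/add-controller.sh           # step 4: register it with JIMM
```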

# Helpful tidbits!
> Note: For any secure step to work, ensure you've run the local traefik certs script!
- To access vault UI, the URL is: `http://localhost:8200/ui` and the root key is `token`.
- The WS API for JIMM Controller is under: `ws://localhost:17070`.
- You can verify local deployment with: `curl http://localhost:17070/debug/status`
- The WS API for JIMM Controller is under: `ws://localhost:17070` (http direct) and `wss://jimm.localhost` for secure.
- You can verify local deployment with: `curl http://localhost:17070/debug/status` and `curl https://jimm.localhost/debug/status`
- Traefik is available on `http://localhost:8089`.
1 change: 1 addition & 0 deletions local/candid/config.yaml
@@ -17,6 +17,7 @@ identity-providers:
- type: static
name: static
description: Default identity provider
domain: candid.localhost
users:
jimm:
name: JIMM User
13 changes: 2 additions & 11 deletions local/candid/entry.sh
@@ -3,15 +3,6 @@

echo "Entrypoint being overriden for local environment."

# Grab curl quickly.
apt update
apt install curl -y
/root/candidsrv /etc/candid/config.yaml &

# Pseudo readiness probe such that we can continue local dev setup.
until eval curl --output /dev/null --silent --fail http://localhost:8081/debug/status; do
printf '.'
sleep 1
done
echo "Server appears to have started."
# If any further configuration to the IdP is required, it can now be done via this script.
wait
exec /root/candidsrv /etc/candid/config.yaml
46 changes: 46 additions & 0 deletions local/jimm/add-controller.sh
@@ -0,0 +1,46 @@
#!/bin/bash

# RUN THIS SCRIPT FROM PROJECT ROOT!
#
# This script adds a local controller to your compose JIMM instance.
# Due to TLS SANs we need to modify JIMM's /etc/hosts to map to the SANs a controller certificate has.
#
# For completeness' sake, the SANs are: DNS:anything, DNS:localhost, DNS:juju-apiserver, DNS:juju-mongodb
# "juju-apiserver" feels most appropriate, so we use this.
#
# Requirements to run this script:
# - yq (snap)
set -eux

JIMM_CONTROLLER_NAME="${JIMM_CONTROLLER_NAME:-jimm-dev}"
CONTROLLER_NAME="${CONTROLLER_NAME:-qa-controller}"
CONTROLLER_YAML_PATH="${CONTROLLER_NAME}".yaml
CLIENT_CREDENTIAL_NAME="${CLIENT_CREDENTIAL_NAME:-localhost}"

echo
echo "JIMM controller name is: $JIMM_CONTROLLER_NAME"
echo "Target controller name is: $CONTROLLER_NAME"
echo "Target controller path is: $CONTROLLER_YAML_PATH"
echo
echo "Building jimmctl..."
# Build jimmctl so we may add a controller.
go build ./cmd/jimmctl
echo "Built."
echo
echo "Switching juju controller to $JIMM_CONTROLLER_NAME"
juju switch "$JIMM_CONTROLLER_NAME"
echo
echo "Retrieving controller info for $CONTROLLER_NAME"
./jimmctl controller-info "$CONTROLLER_NAME" "$CONTROLLER_YAML_PATH"
if [[ -f "$CONTROLLER_YAML_PATH" ]]; then
echo "Controller info retrieved."
else
echo "Controller info couldn't be created, exiting..."
exit 1
fi
echo
echo "Adding controller from path: $CONTROLLER_YAML_PATH"
./jimmctl add-controller "$CONTROLLER_YAML_PATH"
echo
echo "Updating cloud credentials for: $JIMM_CONTROLLER_NAME, from client credential: $CLIENT_CREDENTIAL_NAME"
juju update-credentials "$CLIENT_CREDENTIAL_NAME" --controller "$JIMM_CONTROLLER_NAME"
21 changes: 21 additions & 0 deletions local/jimm/setup-controller.sh
@@ -0,0 +1,21 @@
#!/bin/bash

# RUN THIS SCRIPT FROM PROJECT ROOT!
# It will bootstrap a Juju controller and apply the configuration needed for the controller
# to communicate with the docker compose services.

set -ux

CONTROLLER_NAME="${CONTROLLER_NAME:-qa-controller}"

echo "Bootstrapping controller"
juju bootstrap localhost "${CONTROLLER_NAME}" --config allow-model-access=true --config identity-url=https://candid.localhost
CONTROLLER_ID=$(juju show-controller --format json | jq --arg name "${CONTROLLER_NAME}" '.[$name]."controller-machines"."0"."instance-id"' | tr -d '"')
echo "Adding proxy to LXC instance ${CONTROLLER_ID}"
lxc config device add "${CONTROLLER_ID}" myproxy proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443 bind=instance
echo "Pushing local CA"
lxc file push local/traefik/certs/ca.crt "${CONTROLLER_ID}"/usr/local/share/ca-certificates/
lxc exec "${CONTROLLER_ID}" -- update-ca-certificates
echo "Restarting controller"
lxc stop "${CONTROLLER_ID}"
lxc start "${CONTROLLER_ID}"
2 changes: 1 addition & 1 deletion local/traefik/certs/san.conf
@@ -2,5 +2,5 @@
[v3_req]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:false
subjectAltName = DNS:jimm.localhost,IP:127.0.0.1
subjectAltName = DNS:jimm.localhost,DNS:candid.localhost,IP:127.0.0.1
keyUsage = critical,digitalSignature,keyEncipherment
3 changes: 2 additions & 1 deletion scripts/lxd-snap-build.sh
@@ -5,7 +5,7 @@ set -eu

snap_name=${snap_name:-jimm}
image=${image:-ubuntu:20.04}
container=${container:-${snap_name}-snap-`uuidgen`}
container=${container:-${snap_name}-snap}

lxd_exec() {
lxc exec \
@@ -34,6 +34,7 @@ lxd_exec sh -c 'while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 0'

lxd_exec apt-get update -q -y
lxd_exec apt-get upgrade -q -y
lxd_exec apt-get install build-essential -q -y
if [ -n "${http_proxy:-}" ]; then
lxd_exec snap set system proxy.http=${http_proxy:-}
lxd_exec snap set system proxy.https=${https_proxy:-${http_proxy:-}}
2 changes: 1 addition & 1 deletion service.go
@@ -352,7 +352,7 @@ func newVaultStore(ctx context.Context, p Params) (VaultStore, error) {
}
defer f.Close()
s, err := vaultapi.ParseSecret(f)
if err != nil {
if err != nil || s == nil {
zapctx.Error(ctx, "failed to parse vault secret from file")
return nil, err
}
11 changes: 6 additions & 5 deletions snaps/jimm/snapcraft.yaml
@@ -10,22 +10,23 @@ apps:
jimm:
command: bin/jimmsrv
plugs:
- network
- network-bind
- network
- network-bind

parts:
jimmsrv:
plugin: go
source: ./
build-packages:
- git
- gcc
- git
- gcc
override-pull: |-
set -e
snapcraftctl pull
mkdir -p $SNAPCRAFT_PART_SRC/version
git -C $SNAPCRAFT_PART_SRC rev-parse --verify HEAD | tee $SNAPCRAFT_PART_SRC/version/commit.txt
git -C $SNAPCRAFT_PART_SRC describe --dirty --abbrev=0 | tee $SNAPCRAFT_PART_SRC/version/version.txt
snapcraftctl set-version `cat $SNAPCRAFT_PART_SRC/version/version.txt`
override-build: |-
set -e
go install -mod readonly -p 16 -ldflags '-linkmode=external' -tags version github.com/CanonicalLtd/jimm/cmd/jimmsrv
go install -mod readonly -ldflags '-linkmode=external' -tags version github.com/CanonicalLtd/jimm/cmd/jimmsrv
