Collect Tekton pipeline logs to S3 via Fluentd running as a DaemonSet.
Advantages:
- Lightweight: only a 64.7 MB Docker image is used.
- Server-side enabled: access is controlled by RBAC on the server side.
- Easy to use: provide your S3 information and apply the YAML file to your cluster, and everything is set up (see the Fluentd configuration sketch below).
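The DaemonSet runs Fluentd on every node and ships the container logs to S3, typically through the fluent-plugin-s3 output. The snippet below is only an illustrative sketch of what such an output section looks like, not the project's actual fluent.conf; in particular, the match tag and the credential environment variable names are assumptions:

# Illustrative fluent-plugin-s3 output section (not the actual shipped config)
<match kubernetes.**>
  @type s3
  aws_key_id "#{ENV['ACCESS_KEY_ID']}"        # assumed to come from the secret key "accesskey" created below
  aws_sec_key "#{ENV['SECRET_ACCESS_KEY']}"   # assumed to come from the secret key "secretkey" created below
  s3_bucket "#{ENV['S3_BUCKET']}"
  s3_region "#{ENV['S3_REGION']}"
  s3_endpoint "#{ENV['S3_ENDPOINT']}"
  force_path_style "#{ENV['FORCE_PATH_STYLE']}"
  path pipeline-logs/
  <buffer time>
    @type file
    path /var/log/fluentd-buffers/s3.buffer
    timekey 3600        # upload one object per hour
    timekey_wait 10m
  </buffer>
</match>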
Create a Kubernetes secret with your S3 credentials (replace the example values with your own):
ACCESS_KEY_ID=admin123
SECRET_ACCESS_KEY=admin123
kubectl -n kube-system create secret generic pipeline-logs-s3-secret --from-literal "accesskey=$ACCESS_KEY_ID" --from-literal "secretkey=$SECRET_ACCESS_KEY"
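You can confirm that the secret was created and contains both keys (the values themselves are not printed):

kubectl -n kube-system describe secret pipeline-logs-s3-secret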
Create a ConfigMap with your S3 connection details:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipeline-logs-s3-config
  namespace: kube-system
data:
  S3_BUCKET: mlpipeline
  S3_REGION: test_region
  FORCE_PATH_STYLE: "true"
  S3_ENDPOINT: 'http://9.21.53.162:31846'
EOF
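For reference, a DaemonSet container can pick these values up as environment variables roughly as shown below. This is a sketch of the general Kubernetes pattern, not an excerpt from pipeline-logs-fluentd-s3.yaml, and the environment variable names are assumptions:

# Hypothetical container spec fragment of the Fluentd DaemonSet
containers:
  - name: fluentd
    image: fenglixa/pipeline-logs-s3
    envFrom:
      - configMapRef:
          name: pipeline-logs-s3-config   # provides S3_BUCKET, S3_REGION, S3_ENDPOINT, FORCE_PATH_STYLE
    env:
      - name: ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: pipeline-logs-s3-secret
            key: accesskey
      - name: SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: pipeline-logs-s3-secret
            key: secretkey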
Notes:
- S3_BUCKET: the S3 bucket name, e.g. mlpipeline
- S3_REGION: the S3 region, e.g. us-east-1; an arbitrary value such as test_region works for MinIO
- FORCE_PATH_STYLE: forces path-style addressing so the AWS SDK does not rewrite the endpoint URL; set it to "true" if you are using MinIO
- S3_ENDPOINT: the S3 endpoint URL, needed if you are using MinIO, e.g. http://9.21.53.162:31846
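For example, when targeting AWS S3 instead of MinIO, the ConfigMap data would typically look like the following (this assumes the manifest tolerates an unset S3_ENDPOINT, since the endpoint is only needed for MinIO):

data:
  S3_BUCKET: my-pipeline-logs      # an existing bucket in your AWS account
  S3_REGION: us-east-1
  FORCE_PATH_STYLE: "false"
  # S3_ENDPOINT omitted: the AWS SDK derives the endpoint from the region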
Deploy the Fluentd DaemonSet to your cluster:
make deploy
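Then confirm that the DaemonSet was created and a Fluentd pod is running on each node (the exact resource names are defined in pipeline-logs-fluentd-s3.yaml):

kubectl -n kube-system get daemonset
kubectl -n kube-system get pods -o wide | grep fluentd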
The container logs will be archived to S3, as shown in the picture below.
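To check from the command line instead of the S3/MinIO console, you can list the archived objects with the AWS CLI, reusing the example credentials and MinIO endpoint from above:

AWS_ACCESS_KEY_ID=admin123 AWS_SECRET_ACCESS_KEY=admin123 AWS_DEFAULT_REGION=test_region \
  aws s3 ls s3://mlpipeline --recursive --endpoint-url http://9.21.53.162:31846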
To build your own Docker image, change the image name in the Makefile and run:
make build
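If you prefer to run Docker directly instead of using the Makefile target, the equivalent is roughly the following (the image name below is a placeholder; use the name you set in the Makefile):

docker build -t <your-registry>/pipeline-logs-s3:latest .
docker push <your-registry>/pipeline-logs-s3:latest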
To clean up, delete the deployed resources:
kubectl delete -f https://raw.githubusercontent.com/kubeflow/kfp-tekton/master/samples/logging_s3/pipeline_log_to_s3_by_fluentd_recommend/pipeline-logs-fluentd-s3.yaml
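To also remove the credentials and configuration created earlier:

kubectl -n kube-system delete secret pipeline-logs-s3-secret
kubectl -n kube-system delete configmap pipeline-logs-s3-config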
Refer to fenglixa/pipeline-logs-s3 for the prebuilt image.