Merge pull request #89 from autometrics-dev/otlp-exporter
Add otlp exporters
actualwitch authored Oct 3, 2023
2 parents 7e42201 + c0f6ad2 commit 5579b02
Showing 31 changed files with 1,673 additions and 631 deletions.
8 changes: 6 additions & 2 deletions .github/workflows/main.yml
@@ -23,8 +23,12 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
cache: poetry
- name: Install dependencies
run: poetry install --no-interaction --no-root --with dev,examples
- name: Install dependencies (cpython)
if: ${{ matrix.python-version != 'pypy3.10' }}
run: poetry install --no-interaction --no-root --with dev,examples --all-extras
- name: Install dependencies (pypy)
if: ${{ matrix.python-version == 'pypy3.10' }}
run: poetry install --no-interaction --no-root --with dev,examples --extras=exporter-otlp-proto-http
- name: Check code formatting
run: poetry run black .
- name: Lint lib code
6 changes: 4 additions & 2 deletions CHANGELOG.md
@@ -13,10 +13,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
### Added

- Added support for `record_error_if` and `record_success_if`
- Added OTLP exporters for OpenTelemetry tracker (#89)

### Changed

- [💥 Breaking change] The `init` function must now be called before using autometrics (#89)
- Prometheus exporters are now configured via the `init` function (#89)

### Deprecated

@@ -32,7 +34,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

### Security

- Updated FastAPI and Pydantic dependencies in the examples group (#89)

## [0.9](https://github.com/autometrics-dev/autometrics-py/releases/tag/0.8) - 2023-07-24

161 changes: 113 additions & 48 deletions README.md
@@ -29,65 +29,76 @@ See [Why Autometrics?](https://github.com/autometrics-dev#why-autometrics) for m
## Quickstart

1. Add `autometrics` to your project's dependencies:

```shell
pip install autometrics
```

2. Instrument your functions with the `@autometrics` decorator

```python
from autometrics import autometrics

@autometrics
def my_function():
# ...
```

3. Configure autometrics by calling the `init` function:

```python
from autometrics import init

init(tracker="prometheus", service_name="my-service")
```

4. Export the metrics for Prometheus

```python
# This example uses FastAPI, but you can use any web framework
from fastapi import FastAPI, Response
from prometheus_client import generate_latest

app = FastAPI()

# Set up a metrics endpoint for Prometheus to scrape
# `generate_latest` returns metrics data in the Prometheus text format
@app.get("/metrics")
def metrics():
    return Response(generate_latest())
```

5. Run Prometheus locally with the [Autometrics CLI](https://docs.autometrics.dev/local-development#getting-started-with-am) or [configure it manually](https://github.com/autometrics-dev#5-configuring-prometheus) to scrape your metrics endpoint

```sh
# Replace `8080` with the port that your app runs on
am start :8080
```

6. (Optional) If you have Grafana, import the [Autometrics dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) for an overview and detailed view of all the function metrics you've collected

## Using `autometrics-py`

- You can import the library in your code and use the decorator for any function:

```python
from autometrics import autometrics

@autometrics
def sayHello():
    return "hello"
```

- To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing [the VSCode extension](https://marketplace.visualstudio.com/items?itemName=Fiberplane.autometrics).

> **Note**: We cannot support tooltips without a VSCode extension due to behavior of the [static analyzer](https://github.com/davidhalter/jedi/issues/1921) used in VSCode.

- You can also track the number of concurrent calls to a function by using the `track_concurrency` argument: `@autometrics(track_concurrency=True)`.

> **Note**: Concurrency tracking is only supported when you set the environment variable `AUTOMETRICS_TRACKER=prometheus`.

- To access the PromQL queries for your decorated functions, run `help(yourfunction)` or `print(yourfunction.__doc__)`.

> For these queries to work, include a `.env` file in your project with your Prometheus endpoint (`PROMETHEUS_URL=<your endpoint>`). If this is not defined, the default endpoint is `http://localhost:9090/`.

## Dashboards

@@ -119,15 +130,15 @@ The library uses the concept of Service-Level Objectives (SLOs) to define the ac
In order to receive alerts, **you need to add a special set of rules to your Prometheus setup**. These are configured automatically when you use the [Autometrics CLI](https://docs.autometrics.dev/local-development#getting-started-with-am) to run Prometheus.

> Already running Prometheus yourself? [Read about how to load the autometrics alerting rules into Prometheus here](https://github.com/autometrics-dev/autometrics-shared#prometheus-recording--alerting-rules).

Once the alerting rules are in Prometheus, you're ready to go.

To use autometrics SLOs and alerts, create one or multiple `Objective`s based on the function(s) success rate and/or latency, as shown above.

The `Objective` can be passed as an argument to the `autometrics` decorator, which will include the given function in that objective.

The example above used a success rate objective. (I.e., we wanted to be alerted when the error rate started to increase.)

You can also create an objective for the latency of your functions like so:

@@ -191,8 +202,7 @@ Autometrics makes it easy to identify if a specific version or commit introduced
>
> autometrics-py will track support for build_info using the OpenTelemetry tracker via [this issue](https://github.com/autometrics-dev/autometrics-py/issues/38)

The library uses a separate metric (`build_info`) to track the version and, optionally, the git commit of your service.

It then writes queries that group metrics by the `version`, `commit` and `branch` labels so you can spot correlations between code changes and potential issues.

@@ -230,7 +240,62 @@ exemplar collection by setting `AUTOMETRICS_EXEMPLARS=true`. You also need to en

## Exporting metrics

There are multiple ways to export metrics from your application, depending on your setup. You can see examples of how to do this in the [examples/export_metrics](https://github.com/autometrics-dev/autometrics-py/tree/main/examples/export_metrics) directory.

If you want to export metrics to Prometheus, you have two options, available with both the `opentelemetry` and `prometheus` trackers:

1. Create a route inside your app and respond with `generate_latest()`

```python
# This example uses FastAPI, but you can use any web framework
from fastapi import FastAPI, Response
from prometheus_client import generate_latest

app = FastAPI()

# Set up a metrics endpoint for Prometheus to scrape
@app.get("/metrics")
def metrics():
    return Response(generate_latest())
```

2. Specify `prometheus` as the exporter type, and a separate server will be started to expose metrics from your app:

```python
exporter = {
"type": "prometheus",
"address": "localhost",
"port": 9464
}
init(tracker="prometheus", service_name="my-service", exporter=exporter)
```

For the OpenTelemetry tracker, you have more options, including a custom metric reader. You can specify the exporter type to be `otlp-proto-http` or `otlp-proto-grpc`, and metrics will be exported to a remote OpenTelemetry collector via the specified protocol. You will need to install the respective extra dependency in order for this to work, which you can do when you install autometrics:

```sh
pip install autometrics[exporter-otlp-proto-http]
pip install autometrics[exporter-otlp-proto-grpc]
```

After installing it, you can configure the exporter as follows:

```python
exporter = {
"type": "otlp-proto-grpc",
"address": "http://localhost:4317",
"insecure": True
}
init(tracker="opentelemetry", service_name="my-service", exporter=exporter)
```

To use a custom metric reader, specify the exporter type as `otel-custom` and provide the reader:

```python
from opentelemetry.exporter.prometheus import PrometheusMetricReader

my_custom_metric_reader = PrometheusMetricReader("")
exporter = {
"type": "otel-custom",
"reader": my_custom_metric_reader
}
init(tracker="opentelemetry", service_name="my-service", exporter=exporter)
```

## Development of the package

@@ -255,7 +320,7 @@ Code in this repository is:
To run these tools locally, install them using poetry:

```sh
poetry install --with dev --all-extras
```

After that, you can run the tools individually
1 change: 1 addition & 0 deletions Tiltfile
@@ -0,0 +1 @@
docker_compose('docker-compose.yaml')
23 changes: 23 additions & 0 deletions configs/otel-collector-config.yaml
@@ -0,0 +1,23 @@
receivers:
otlp:
protocols:
grpc:
http:

exporters:
logging:
loglevel: debug
prometheus:
endpoint: "0.0.0.0:9464" # This is where Prometheus will scrape the metrics from.
# namespace: <namespace> # Replace with your namespace.


processors:
batch:

service:
pipelines:
metrics:
receivers: [otlp]
processors: []
exporters: [logging, prometheus]
35 changes: 35 additions & 0 deletions docker-compose.yaml
@@ -0,0 +1,35 @@
version: "3.9"

volumes:
app-logs:

services:
am:
image: autometrics/am:latest
extra_hosts:
- host.docker.internal:host-gateway
ports:
- "6789:6789"
- "9090:9090"
container_name: am
command: "start http://otel-collector:9464/metrics host.docker.internal:9464"
environment:
- LISTEN_ADDRESS=0.0.0.0:6789
restart: unless-stopped
volumes:
- app-logs:/var/log
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./configs/otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317:4317"
- "4318:4318"
- "8888:8888" # expose container metrics in prometheus format
- "55680:55680"
- "55679:55679"
restart: unless-stopped
push-gateway:
image: ghcr.io/zapier/prom-aggregation-gateway:latest
27 changes: 27 additions & 0 deletions examples/export_metrics/otel-prometheus.py
@@ -0,0 +1,27 @@
import time
from autometrics import autometrics, init

# Autometrics supports exporting metrics to Prometheus via OpenTelemetry.
# This example uses the Prometheus exporter; the available settings are the same
# as in the Prometheus Python client. By default, the Prometheus exporter will
# expose metrics on port 9464. If you don't have a Prometheus server running,
# you can run Tilt or Docker Compose from the root of this repo to start one up.

init(
tracker="opentelemetry",
exporter={
"type": "prometheus",
"port": 9464,
},
service_name="my-service",
)


@autometrics
def my_function():
pass


while True:
my_function()
time.sleep(1)
26 changes: 26 additions & 0 deletions examples/export_metrics/otlp.py
@@ -0,0 +1,26 @@
import time
from autometrics import autometrics, init

# Autometrics supports exporting metrics to OTLP collectors via gRPC and HTTP transports.
# This example uses the gRPC transport; the available settings are similar to those in
# the OpenTelemetry Python SDK. By default, the OTLP exporter will send metrics to
# localhost:4317. If you don't have an OTLP collector running, you can run Tilt or
# Docker Compose to start one up.

init(
exporter={
"type": "otlp-proto-grpc",
"push_interval": 1000,
},
service_name="my-service",
)


@autometrics
def my_function():
pass


while True:
my_function()
time.sleep(1)