Metadata-Version: 2.1
Name: aioprometheus-summary
Version: 0.1.0
Summary: Aioprometheus summary with quantiles over configurable sliding time window
Home-page: https://github.com/RefaceAI/aioprometheus-summary
Author: RefaceAI
Author-email: github-support@reface.ai
License: Apache License 2.0
Platform: Platform Independent
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Scientific/Engineering :: Mathematics
Description-Content-Type: text/markdown
License-File: LICENSE

# aioprometheus-summary
Aioprometheus summary with quantiles over configurable sliding time window

## Installation
```
pip install aioprometheus-summary==0.1.0
```

This package can be found on [PyPI](https://pypi.org/project/aioprometheus-summary/).

## Collecting

### Basic usage

```python
from aioprometheus_summary import Summary

s = Summary("request_latency_seconds", "Description of summary")
s.observe({}, 4.7)
```

### With labels

```python
from aioprometheus_summary import Summary

s = Summary("request_latency_seconds", "Description of summary")
s.observe({"method": "GET", "endpoint": "/profile"}, 1.2)
s.observe({"method": "POST", "endpoint": "/login"}, 3.4)
```

### With custom quantiles and precisions

By default, metrics are observed with the quantile-precision pairs
`((0.50, 0.05), (0.90, 0.01), (0.99, 0.001))`,
but you can provide your own values when creating the metric.

```python
from aioprometheus_summary import Summary

s = Summary(
    "request_latency_seconds", "Description of summary",
    invariants=((0.50, 0.05), (0.75, 0.02), (0.90, 0.01), (0.95, 0.005), (0.99, 0.001)),
)
s.observe({}, 4.7)
```
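Each pair is `(quantile, allowed_error)`: the value reported for quantile φ may come from anywhere between the φ−ε and φ+ε quantiles of the observed data. As a rough illustration of what that tolerance means in terms of sorted-sample ranks (the helper below is illustrative only, not part of this package):

```python
# Illustration only: which sorted-sample ranks would satisfy a
# (quantile, allowed_error) invariant. Not part of aioprometheus-summary.

def acceptable_rank_range(n, quantile, error):
    """For n sorted samples, return the (low, high) ranks whose values
    are acceptable estimates under the (quantile, error) invariant."""
    low = max(0, round((quantile - error) * n))
    high = min(n - 1, round((quantile + error) * n))
    return low, high

# With 1000 samples and the default invariant (0.50, 0.05), any value
# ranked between 450 and 550 is an acceptable median estimate.
print(acceptable_rank_range(1000, 0.50, 0.05))  # (450, 550)
```

Tighter errors cost more memory and CPU per observation, which is why the default precision loosens for lower quantiles.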

### With custom time window settings

Typically, you don't want a summary covering the entire runtime of the application;
you want to look at a reasonable time interval instead. These summary metrics implement a configurable sliding time window.

The default is a 10-minute time window with 5 age buckets, i.e. the window is 10 minutes wide and
slides forward every 2 minutes, but you can configure these values for your own purposes.

```python
from aioprometheus_summary import Summary

s = Summary(
    "request_latency_seconds", "Description of summary",
    # time window 5 minutes wide with 10 age buckets (sliding every 30 seconds)
    max_age_seconds=5 * 60,
    age_buckets=10,
)
s.observe({}, 4.7)
```
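The slide interval is simply the window width divided by the number of buckets, so the settings above rotate a bucket every 30 seconds. A quick check of that arithmetic (plain Python, nothing package-specific):

```python
# The sliding window is split into equal-width age buckets; the oldest
# bucket is dropped (and a fresh one started) every max_age / buckets seconds.

def slide_interval(max_age_seconds, age_buckets):
    return max_age_seconds / age_buckets

print(slide_interval(10 * 60, 5))   # defaults: 120.0 s (2 minutes)
print(slide_interval(5 * 60, 10))   # settings above: 30.0 s
```

More buckets give a smoother window at the cost of keeping more quantile estimators in memory.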

## Querying

Suppose we have a metric:

```python
from aioprometheus_summary import Summary

s = Summary("request_latency_seconds", "Description of summary")
s.observe({"method": "GET", "endpoint": "/profile"}, 1.2)
```

To show request latency by `method`, `endpoint` and `quantile`, use the following query:
```
max by (method, endpoint, quantile) (request_latency_seconds)
```

To show only the 99th quantile:
```
max by (method, endpoint) (request_latency_seconds{quantile="0.99"})
```
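In addition to the per-quantile series, a Prometheus summary conventionally exposes `_sum` and `_count` series. Assuming this package follows that convention, the average latency over the last 5 minutes can be computed as:

```
sum by (method, endpoint) (rate(request_latency_seconds_sum[5m]))
  / sum by (method, endpoint) (rate(request_latency_seconds_count[5m]))
```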
