Examples Overview¶
The examples/ directory contains production-ready SLO definitions organized by monitoring framework and use case.
Summary¶
| Example Set | SLOs | Framework | Layer | Description |
|---|---|---|---|---|
| RED Metrics | 5 | RED | Service | Request-driven service monitoring |
| USE Metrics | 11 | USE | Infrastructure | Resource monitoring |
| AI Agents | 20 | Custom | Business | AI platform metrics |
| SaaS CRM | 25 | Custom | Business | User journey metrics |
| Budgeting Methods | 3 | Custom | Service | Budget method comparison |
Total: 64 SLOs
Example Features¶
Each example includes:
- ✅ Complete, working Go code
- 📝 Detailed descriptions of what is being measured
- 🏷️ OpenSLO-compliant metadata with ontology labels
- 📊 Prometheus/BigQuery query examples
- 🧪 Automated validation tests
- 📖 README with methodology explanations
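To give a feel for the shape of these definitions, here is a minimal Go sketch; the `SLO` struct and its fields are illustrative assumptions for this page, not the actual types used by the examples:

```go
package main

import "fmt"

// SLO is a hypothetical, simplified shape for an SLO definition;
// the real examples use OpenSLO-compliant metadata and richer types.
type SLO struct {
	Name      string  // unique identifier
	Framework string  // "RED", "USE", or "Custom"
	Target    float64 // objective as a ratio, e.g. 0.999 for 99.9%
	Window    string  // rolling evaluation window
	Query     string  // backing Prometheus/BigQuery query
}

func main() {
	slo := SLO{
		Name:      "checkout-availability",
		Framework: "RED",
		Target:    0.999,
		Window:    "30d",
		Query:     `sum(rate(http_requests_total{code!~"5.."}[5m])) / sum(rate(http_requests_total[5m]))`,
	}
	fmt.Printf("%s: %g over %s\n", slo.Name, slo.Target, slo.Window)
}
```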
By Monitoring Framework¶
RED (Rate, Errors, Duration)¶
Request-driven monitoring for services and APIs:
- RED Metrics - Core RED implementation
- AI Agents - AI response time, quality, errors
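The Rate and Errors signals reduce to simple ratios over scraped counters (Duration typically comes from histogram quantiles instead). A hedged Go sketch; the helper names are assumptions for illustration, not code from the examples:

```go
package main

import "fmt"

// Hypothetical helpers for two of the three RED signals, assuming
// request counters observed over a fixed interval; the real examples
// derive these from queries such as rate(http_requests_total[5m]).

// requestRate returns requests per second over the interval.
func requestRate(requests, intervalSeconds float64) float64 {
	return requests / intervalSeconds
}

// errorRatio returns the fraction of requests that failed.
func errorRatio(total, errors float64) float64 {
	if total == 0 {
		return 0
	}
	return errors / total
}

func main() {
	fmt.Printf("rate: %.1f req/s, errors: %.2f%%\n",
		requestRate(3000, 300), errorRatio(3000, 9)*100)
}
```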
USE (Utilization, Saturation, Errors)¶
Infrastructure and resource monitoring:
- USE Metrics - CPU, memory, disk, network
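In USE terms, utilization is the fraction of capacity in use and saturation is demand queued beyond capacity. A small Go sketch under those definitions; the helper names are hypothetical, and the real examples back these signals with Prometheus resource metrics:

```go
package main

import "fmt"

// utilization is the fraction of a resource's capacity in use (0..1).
func utilization(used, capacity float64) float64 {
	if capacity == 0 {
		return 0
	}
	return used / capacity
}

// saturated reports whether work is queued beyond what the resource
// can serve, i.e. demand exceeding capacity.
func saturated(queueDepth float64) bool {
	return queueDepth > 0
}

func main() {
	// prints: cpu util: 75%, saturated: true
	fmt.Printf("cpu util: %.0f%%, saturated: %v\n",
		utilization(6, 8)*100, saturated(3))
}
```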
Custom / Business¶
Business metrics and custom implementations:
- SaaS CRM - User journey and engagement
- AI Agents - Cost efficiency, task completion
- Budgeting Methods - SLO budgeting strategies
By Audience¶
SRE / Platform¶
- RED Metrics - Service reliability
- USE Metrics - Infrastructure health
Product¶
- SaaS CRM - User journey and engagement
Executive¶
By Use Case¶
Service Monitoring¶
Monitor API and microservice health:
- RED Metrics - Core RED implementation
Infrastructure Monitoring¶
Monitor underlying resources:
- USE Metrics - CPU, memory, disk, network
User Engagement¶
Track user activity and stickiness:
- SaaS CRM - User journey and engagement
AI/ML Platforms¶
Monitor AI agent performance:
- AI Agents - AI response time, quality, errors
Label Distribution¶
See the Metrics Report for detailed analysis of label distribution across all 64 SLOs.