# Logging & Monitoring

Configure logging and monitoring for EZ-Console.
## Overview

EZ-Console provides logging and monitoring out of the box. Logs can be configured with different levels, formats, and outputs, and monitoring integrates with Prometheus, Grafana, and other observability tools.
## Logging Configuration

### Basic Logging

```yaml
log:
  level: "info"   # debug, info, warn, error
  format: "json"  # json, logfmt
  path: "logs"    # Log directory
```
### Log Levels

- `debug`: Detailed information for debugging
- `info`: General informational messages
- `warn`: Warning messages
- `error`: Error messages
### Log Formats

- `json`: JSON format (default; recommended for production)
- `logfmt`: logfmt format (human-readable)
### Configuration Options

```yaml
log:
  level: "info"     # Log level
  format: "json"    # Log format
  path: "logs"      # Log directory
  output: "stdout"  # Output: stdout, file, both
  file: "app.log"   # Log file name (when output=file)
  max_size: 100     # Max file size in MB
  max_backups: 10   # Number of backup files
  max_age: 30       # Max age in days
  compress: true    # Compress old log files
```
### Command-Line Configuration

```bash
./server \
  --log.level=debug \
  --log.format=json \
  --log.path=logs
```
### Environment Variables

```bash
export LOG_LEVEL=info
export LOG_FORMAT=json
export LOG_PATH=logs
```
## Log Output

### Console Output

```yaml
log:
  output: "stdout"
  format: "logfmt" # Human-readable for console
```
### File Output

```yaml
log:
  output: "file"
  path: "logs"
  file: "app.log"
  max_size: 100
  max_backups: 10
  max_age: 30
  compress: true
```
### Both Console and File

```yaml
log:
  output: "both"
  path: "logs"
  format: "json"
```
## Structured Logging

### JSON Format Example

```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "msg": "User logged in",
  "user_id": "550e8400-e29b-41d4-a716-446655440000",
  "username": "john.doe",
  "ip": "192.168.1.100"
}
```
### Using in Code

```go
import (
	"github.com/sven-victor/ez-console/pkg/util"
)

func (c *ProductController) CreateProduct(ctx *gin.Context) {
	util.Logger.Info("Creating product",
		"user_id", userID,
		"product_name", req.Name,
	)
	// ...
}
```
## Log Rotation

Log files are rotated automatically when they reach the maximum size:

```yaml
log:
  max_size: 100    # Rotate at 100 MB
  max_backups: 10  # Keep 10 backup files
  max_age: 30      # Delete logs older than 30 days
  compress: true   # Compress rotated log files
```
## Monitoring

### Health Check Endpoint

```
GET /api/health
```

Response:

```json
{
  "code": "0",
  "data": {
    "status": "ok",
    "timestamp": "2024-01-15T10:30:00Z"
  }
}
```
### Metrics Endpoint

EZ-Console can expose Prometheus metrics:

```go
import "github.com/prometheus/client_golang/prometheus/promhttp"

router.GET("/metrics", gin.WrapH(promhttp.Handler()))
```
### Prometheus Integration

1. Expose metrics:

   ```go
   router.GET("/metrics", gin.WrapH(promhttp.Handler()))
   ```

2. Configure Prometheus:

   ```yaml
   # prometheus.yml
   scrape_configs:
     - job_name: 'ez-console'
       static_configs:
         - targets: ['localhost:8080']
   ```
### Grafana Dashboard

Create Grafana dashboards to visualize:

- Request rates
- Response times
- Error rates
- Database query performance
- System resources
## Tracing

EZ-Console supports distributed tracing through OpenTelemetry, allowing you to follow requests across services. Configure a tracing exporter to send trace data to the backend of your choice.
### Configuration Structure

```yaml
tracing:
  service_name: "ez-console" # Service name for trace identification
  http:                      # HTTP exporter for OpenTelemetry Collector
  grpc:                      # gRPC exporter for OpenTelemetry Collector
  zipkin:                    # Zipkin exporter
  file:                      # File exporter (for local debugging)
```
### Common Options

- `service_name`: Service identifier in traces (required)
### OpenTelemetry HTTP Integration

Send traces via HTTP to an OpenTelemetry Collector or compatible backend (the standard OTLP/HTTP port is 4318):

```yaml
tracing:
  service_name: "ez-console"
  http:
    endpoint: "localhost:4318" # Collector OTLP/HTTP endpoint (host:port)
    url_path: "/v1/traces"     # URL path for the traces endpoint
    timeout: 30s               # Request timeout
    insecure: true             # Skip TLS verification (dev only)
    compression: "gzip"        # "gzip" to compress; false/0 for none
    retry:
      enabled: true            # Enable retry (default: true)
      initial_interval: 5s     # Initial retry interval (default: 5s)
      max_interval: 30s        # Maximum retry interval (default: 30s)
      max_elapsed_time: 1m     # Maximum total retry time (default: 1m)
    header:                    # Custom HTTP headers
      Authorization: "Bearer token123"
    tls_config:                # TLS configuration
      cert_file: "/path/to/cert.pem"
      key_file: "/path/to/key.pem"
      ca_file: "/path/to/ca.pem"
      server_name: "collector.example.com"
      insecure_skip_verify: false
```
**Configuration Options:**

- `endpoint`: Collector endpoint address (`host:port`)
- `url_path`: HTTP path for traces (default: `/v1/traces`)
- `timeout`: Request timeout duration
- `insecure`: Skip TLS certificate verification (use only in development)
- `compression`: Compression setting. `true`, `"true"`, `1`, `"1"`, or `"gzip"` enable gzip compression; `false`, `"false"`, `0`, or `"0"` disable it (default: no compression)
- `retry`: Retry configuration for failed requests (default: enabled)
  - `enabled`: Enable/disable retry (the string `"false"` also disables it)
  - `initial_interval`: Initial retry interval (default: `5s`)
  - `max_interval`: Maximum retry interval (default: `30s`)
  - `max_elapsed_time`: Maximum total time for all retries (default: `1m`)
- `header`: Custom HTTP headers (e.g., authentication tokens)
- `tls_config`: TLS configuration for secure connections
  - `cert_file`: Client certificate file path
  - `key_file`: Client private key file path
  - `ca_file`: CA certificate file path
  - `server_name`: Server name for certificate validation
  - `insecure_skip_verify`: Skip TLS verification
### OpenTelemetry gRPC Integration

Send traces via gRPC to an OpenTelemetry Collector (recommended for production; the standard OTLP/gRPC port is 4317):

```yaml
tracing:
  service_name: "ez-console"
  grpc:
    endpoint: "localhost:4317" # Collector OTLP/gRPC endpoint (host:port)
    timeout: 30s               # Request timeout
    insecure: false            # Skip TLS verification (dev only)
    compression: "gzip"        # "gzip" to compress; false/0 for none
    reconnection_period: 5s    # Reconnection interval after failures
    retry:
      enabled: true            # Enable retry (default: true)
      initial_interval: 5s     # Initial retry interval (default: 5s)
      max_interval: 30s        # Maximum retry interval (default: 30s)
      max_elapsed_time: 1m     # Maximum total retry time (default: 1m)
    header:                    # Custom gRPC metadata
      api-key: "secret-key"
    tls_config:                # TLS configuration
      cert_file: "/path/to/cert.pem"
      key_file: "/path/to/key.pem"
      ca_file: "/path/to/ca.pem"
      server_name: "collector.example.com"
      insecure_skip_verify: false
    service_config: ""         # gRPC service config JSON string
```
**Configuration Options:**

- `endpoint`: Collector gRPC endpoint address (`host:port`)
- `timeout`: Request timeout duration
- `insecure`: Skip TLS certificate verification (use only in development)
- `compression`: Compression setting. `true`, `"true"`, `1`, `"1"`, or `"gzip"` enable gzip compression; `false`, `"false"`, `0`, or `"0"` disable it (default: no compression)
- `reconnection_period`: Time to wait before reconnecting after a connection loss
- `retry`: Retry configuration for failed requests (same structure as the HTTP retry options)
- `header`: Custom gRPC metadata headers
- `tls_config`: TLS configuration (same structure as HTTP)
- `service_config`: Optional gRPC service configuration JSON
**gRPC vs HTTP:**

- gRPC: Lower latency and better performance; recommended for production
- HTTP: Easier firewall configuration; works with HTTP load balancers
### Zipkin Integration

Send traces directly to a Zipkin backend:

```yaml
tracing:
  service_name: "ez-console"
  zipkin:
    endpoint: "http://localhost:9411/api/v2/spans" # Zipkin API endpoint
```

**Configuration Options:**

- `endpoint`: Full URL of the Zipkin spans API endpoint (typically ending in `/api/v2/spans`)
### File Integration

Export traces to a local file for debugging and testing:

```yaml
tracing:
  service_name: "ez-console"
  file:
    path: "traces.json" # Output file path
```

**Configuration Options:**

- `path`: File path where traces will be written (JSON format)

**Use Cases:**

- Local development and debugging
- Offline trace collection
- Testing trace generation without a collector
### Example Configurations

**Development (HTTP, insecure):**

```yaml
tracing:
  service_name: "ez-console"
  http:
    endpoint: "localhost:4318"
    insecure: true
    compression: false # Disable compression (or use "false", 0)
```

**Production (gRPC, secure):**

```yaml
tracing:
  service_name: "ez-console"
  grpc:
    endpoint: "collector.example.com:4317"
    insecure: false
    compression: "gzip"
    timeout: 30s
    tls_config:
      ca_file: "/etc/ssl/ca.pem"
      server_name: "collector.example.com"
```

**Local Debugging (File):**

```yaml
tracing:
  service_name: "ez-console"
  file:
    path: "./traces.json"
```
## Log Aggregation

### ELK Stack

Send logs to Elasticsearch:

```yaml
log:
  output: "file"
  format: "json"
  path: "logs"
```

Then use Filebeat to ship the log files to Elasticsearch.
### Loki

Send logs to Grafana Loki:

```yaml
log:
  output: "stdout"
  format: "json"
```

Use Promtail to collect the output and send it to Loki.
## Alerting

### Prometheus Alerts

```yaml
# alerts.yml
groups:
  - name: ez-console
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
        for: 5m
        annotations:
          summary: "High error rate detected"
```
### Log-Based Alerts

Monitor logs for critical errors:

```bash
# Quick manual check; use a log monitoring tool for real alerting
tail -f logs/app.log | grep ERROR
```
## Best Practices

### 1. Use Structured Logging

```go
// ✅ Good: structured key/value logging
util.Logger.Info("User created",
	"user_id", userID,
	"username", username,
)

// ❌ Bad: string formatting loses structure
util.Logger.Info(fmt.Sprintf("User %s created", username))
```
### 2. Use Appropriate Log Levels

- `debug`: Development and troubleshooting
- `info`: Normal operations
- `warn`: Recoverable issues
- `error`: Errors requiring attention
### 3. Don't Log Sensitive Data

```go
// ❌ Bad: logs the password
util.Logger.Info("User login", "password", password)

// ✅ Good: log the user ID only
util.Logger.Info("User login", "user_id", userID)
```