# Performance Tuning
Optimize application performance for production workloads.
## Overview
This guide covers performance optimization techniques for EZ-Console applications, including database optimization, query optimization, caching strategies, and application-level tuning.
## Database Performance

### Indexing

Add indexes on frequently queried fields:
```go
type Product struct {
	ID       string `gorm:"primaryKey"`
	Name     string `gorm:"size:100;index"`         // Index for search
	Category string `gorm:"size:50;index"`          // Index for filtering
	Status   string `gorm:"size:50;index"`          // Index for status queries
	UserID   string `gorm:"type:varchar(36);index"` // Index for user queries
}

// Composite index for queries that filter on both columns
func init() {
	db.Exec("CREATE INDEX idx_product_user_status ON t_product(user_id, status)")
}
```
### Query Optimization

#### Limit Fields

```go
// ✅ Good: Select only needed fields
db.Select("id", "name", "price").Find(&products)

// ❌ Bad: Select all fields
db.Find(&products)
```
#### Use Preloading

```go
// ✅ Good: Preload relations in a fixed number of queries
db.Preload("Category").Preload("User").Find(&products)

// ❌ Bad: N+1 queries — one extra query per product per relation
// (assumes CategoryID/UserID foreign-key fields on Product)
var products []Product
db.Find(&products)
for i := range products {
	db.First(&products[i].Category, "id = ?", products[i].CategoryID)
	db.First(&products[i].User, "id = ?", products[i].UserID)
}
```
#### Pagination

```go
// ✅ Good: Use pagination
db.Offset(offset).Limit(limit).Find(&products)

// ❌ Bad: Load all records into memory
db.Find(&products)
```
### Connection Pooling

Optimize connection pool settings:

```yaml
database:
  max_open_connections: 200       # Adjust based on load
  max_idle_connections: 20        # Keep connections ready
  max_connection_life_time: "5m"  # Recycle connections
```
## Caching Strategies

### In-Memory Caching

```go
package cache

import (
	"sync"
	"time"
)

type Cache struct {
	items map[string]cacheItem
	mutex sync.RWMutex
}

type cacheItem struct {
	value      interface{}
	expiration time.Time
}

// NewCache initializes the backing map; a zero-value Cache would panic on Set.
func NewCache() *Cache {
	return &Cache{items: make(map[string]cacheItem)}
}

func (c *Cache) Get(key string) (interface{}, bool) {
	c.mutex.RLock()
	item, exists := c.items[key]
	c.mutex.RUnlock()
	if !exists {
		return nil, false
	}
	if time.Now().After(item.expiration) {
		// Deleting requires the write lock; taking it only on expiry
		// keeps the common read path cheap.
		c.mutex.Lock()
		delete(c.items, key)
		c.mutex.Unlock()
		return nil, false
	}
	return item.value, true
}

func (c *Cache) Set(key string, value interface{}, ttl time.Duration) {
	c.mutex.Lock()
	defer c.mutex.Unlock()
	c.items[key] = cacheItem{
		value:      value,
		expiration: time.Now().Add(ttl),
	}
}
```
### Redis Caching

```go
import "github.com/go-redis/redis/v8"

func (s *ProductService) GetProduct(ctx context.Context, id string) (*Product, error) {
	cacheKey := "product:" + id

	// Try cache first
	cached, err := s.redis.Get(ctx, cacheKey).Result()
	if err == nil {
		var product Product
		if err := json.Unmarshal([]byte(cached), &product); err == nil {
			return &product, nil
		}
		// Corrupt cache entry: fall through to the database
	}

	// Cache miss: query database
	product, err := s.getProductFromDB(ctx, id)
	if err != nil {
		return nil, err
	}

	// Cache the result for five minutes
	if data, err := json.Marshal(product); err == nil {
		s.redis.Set(ctx, cacheKey, data, 5*time.Minute)
	}
	return product, nil
}
```
### Cache Invalidation

```go
func (s *ProductService) UpdateProduct(ctx context.Context, id string, product *Product) error {
	// Update the database first; only invalidate on success
	if err := s.db.WithContext(ctx).Where("id = ?", id).Updates(product).Error; err != nil {
		return err
	}

	// Invalidate both the item cache and any cached listings
	s.redis.Del(ctx, "product:"+id)
	s.redis.Del(ctx, "products:list")
	return nil
}
```
## Application Performance

### Garbage Collection Tuning

```bash
# GOGC defaults to 100; lower values trade CPU for a smaller heap
GOGC=50 ./server

# Or export it for the whole session
export GOGC=50
```
### Go Profiling

Enable profiling:

```go
import _ "net/http/pprof"

// Expose the pprof handlers registered on http.DefaultServeMux
router.GET("/debug/pprof/*any", gin.WrapH(http.DefaultServeMux))
```
Profile analysis:

```bash
# CPU profile (samples for 30s by default)
go tool pprof http://localhost:8080/debug/pprof/profile

# Memory profile
go tool pprof http://localhost:8080/debug/pprof/heap

# Goroutine profile
go tool pprof http://localhost:8080/debug/pprof/goroutine
```
### Concurrent Requests

Use goroutines for parallel operations:

```go
func (s *Service) ProcessBulk(ctx context.Context, items []Item) error {
	var wg sync.WaitGroup
	errChan := make(chan error, len(items))

	for _, item := range items {
		wg.Add(1)
		go func(i Item) {
			defer wg.Done()
			if err := s.processItem(ctx, i); err != nil {
				errChan <- err
			}
		}(item)
	}

	wg.Wait()
	close(errChan)

	// Only failures are sent on the channel; return the first one, if any
	for err := range errChan {
		return err
	}
	return nil
}
```
## HTTP Performance

### Response Compression

Enable gzip compression:

```go
import "github.com/gin-contrib/gzip"

router.Use(gzip.Gzip(gzip.DefaultCompression))
```
### HTTP/2

Configure HTTP/2 support in the reverse proxy (Nginx). Note that browsers only negotiate HTTP/2 over TLS, so the proxy must terminate HTTPS.
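A minimal Nginx server block for this might look as follows; the certificate paths and backend port are placeholders, not EZ-Console defaults (Nginx 1.25.1+ also accepts a separate `http2 on;` directive instead of the `listen` flag):

```nginx
server {
    listen 443 ssl http2;   # enable HTTP/2 alongside TLS
    ssl_certificate     /etc/nginx/certs/server.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/certs/server.key;   # placeholder path

    location / {
        proxy_pass http://127.0.0.1:8080;  # backend address assumed
    }
}
```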
### Static File Caching

```nginx
location /static {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```
## Database Query Optimization

### Slow Query Logging

Enable slow query logging:

```yaml
database:
  slow_threshold: "1s"  # Log queries slower than 1s
```
### Query Analysis

Analyze slow queries:

```sql
-- MySQL: Explain the query plan
EXPLAIN SELECT * FROM t_product WHERE category = 'electronics';

-- PostgreSQL: Explain and execute to get actual timings
EXPLAIN ANALYZE SELECT * FROM t_product WHERE category = 'electronics';
```
### Batch Operations

Use batch operations for bulk inserts:

```go
// ✅ Good: Batch insert, 100 rows per INSERT statement
db.CreateInBatches(products, 100)

// ❌ Bad: One round trip per row
for _, product := range products {
	db.Create(&product)
}
```
## Monitoring Performance

### Metrics

Monitor key metrics:

- Response Time: P50, P95, P99 latency
- Throughput: Requests per second
- Error Rate: Percentage of failed requests
- Database Query Time: Average query duration
- Memory Usage: Peak memory consumption
- CPU Usage: Average CPU utilization
### Prometheus Metrics

```go
import "github.com/prometheus/client_golang/prometheus"

var (
	httpRequestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total HTTP requests",
		},
		[]string{"method", "endpoint", "status"},
	)

	httpRequestDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "http_request_duration_seconds",
			Help: "HTTP request duration",
		},
		[]string{"method", "endpoint"},
	)
)

func init() {
	prometheus.MustRegister(httpRequestsTotal)
	prometheus.MustRegister(httpRequestDuration)
}
```
## Best Practices

### 1. Profile First

Always profile before optimizing:

```bash
go tool pprof http://localhost:8080/debug/pprof/profile
```
### 2. Measure Impact

Measure performance improvements with:

- Before-and-after benchmarks
- Load testing
- Production monitoring
### 3. Database First

Optimize database queries before application code:

- Add indexes
- Optimize queries
- Use connection pooling
### 4. Cache Strategically

Cache expensive operations:

- Database queries
- Computed results
- External API calls
## Related Topics

- Scaling - Scaling strategies
- Frontend Performance - Frontend optimization

Need help? Ask in GitHub Discussions.