claude-code-proxy/proxy/internal/service/storage_aggregation_test.go


Local fork: hardening + ops improvements (timeout knob, demotion, /livez, drain)

This commit captures both the prior accumulated work-in-progress (framework migration web/ → svelte/, postgres storage, conversation viewer, dashboard auth, OpenAPI spec, integration tests) AND today's operational improvements layered on top. History wasn't checkpointed incrementally; happy to split it via interactive rebase if a reviewer wants smaller commits.

Today's changes (in addition to the older WIP):

1. Configurable upstream response-header timeout (sketch below)
   - ANTHROPIC_RESPONSE_HEADER_TIMEOUT env (default 300s)
   - Replaces the hardcoded 300s in provider/anthropic.go that was firing on opus + 1M-context + extended-thinking non-streaming requests
   - Files: internal/config/config.go, internal/provider/anthropic.go

2. Structured forward-error diagnostic logging (sketch below)
   - When a forward to Anthropic fails, log a single key=value line with request_id, model, stream, body_bytes, has_thinking, anthropic_beta, query, elapsed, ctx_err, alongside the existing human-readable error line for back-compat
   - Files: internal/handler/handlers.go (logForwardFailure)

3. Full SSE protocol passthrough + Flusher fix (sketch below)
   - handler/handlers.go: forward all SSE lines verbatim (event:, id:, retry:, ":" comments, blank-line terminators), not only data:. The previous code produced malformed SSE for strict parsers.
   - middleware/logging.go: explicit Flush() method on responseWriter. Embedding http.ResponseWriter (an interface) does not auto-promote Flush(), so every w.(http.Flusher) check in the streaming handler returned ok=false and SSE writes buffered in net/http until the body closed.

4. Non-streaming → streaming demotion (feature-flagged; sketch below)
   - ANTHROPIC_DEMOTE_NONSTREAMING env (default false)
   - When enabled and the routed provider is anthropic, force stream=true upstream for clients that asked for stream=false. Receive SSE, accumulate via accumulateSSEToMessage (handles text, tool_use with partial_json reassembly, thinking, signature, citations_delta, usage merge), and synthesize a single non-streaming JSON response.
   - Eliminates the ResponseHeaderTimeout class of failure entirely.
   - Body rewrite uses json.Decoder + UseNumber() to preserve integer precision in unknown nested fields (tool inputs from prior turns).
   - Files: internal/config/config.go, internal/handler/handlers.go, cmd/proxy/main.go, cmd/proxy/main_test.go

5. Live operational state: /livez gauge + graceful drain (sketch below)
   - New internal/runtime package: atomic in-flight counter + draining flag
   - New middleware/inflight.go: increments the runtime gauge; applied to the /v1/* subrouter so Messages, ChatCompletions, and ProxyPassthrough are all counted
   - /v1/* moved to a gorilla/mux subrouter so the InFlight middleware applies surgically; /health, /livez, /openapi.* remain on the parent router (unauthenticated, uncounted)
   - Health handler returns 503 draining when runtime.IsDraining() is true, so Traefik stops routing to a slot before drain begins
   - New /livez handler returns {status, in_flight, draining, timestamp}
   - SIGTERM handler in main.go: SetDraining(true), poll for in_flight==0 with a 32-min ceiling and 1s tick (logs every 10s), then srv.Shutdown
   - Auth bypass list extended with /livez
   - Files: internal/runtime/runtime.go (new), internal/middleware/inflight.go (new), internal/middleware/auth.go, internal/handler/handlers.go (Health, Livez, runtime import), cmd/proxy/main.go (subrouter, drain loop)

6. OpenAPI spec updates
   - Document the Health 503 response and the new DrainingResponse schema
   - Add the /livez path with a LivezResponse schema
   - Files: internal/handler/openapi.go

Verified: go build ./... clean, go test ./... all pass, go vet clean. Three rounds of codex peer review across changes 1-5; all feedback addressed (citations_delta, json.Number precision, drain-loop logging via a lastLog timestamp, PathPrefix tightened to "/v1/").
2026-05-02 15:15:58 -06:00
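
Sketch for change 1. The knob amounts to wiring the env var into http.Transport's ResponseHeaderTimeout. This is a minimal standalone illustration, not the proxy's actual config plumbing; the helper names here are made up:

package main

import (
	"log"
	"net/http"
	"os"
	"strconv"
	"time"
)

// responseHeaderTimeout reads ANTHROPIC_RESPONSE_HEADER_TIMEOUT (seconds)
// and falls back to the 300s default named in the commit message.
func responseHeaderTimeout() time.Duration {
	if v := os.Getenv("ANTHROPIC_RESPONSE_HEADER_TIMEOUT"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return time.Duration(n) * time.Second
		}
	}
	return 300 * time.Second
}

func newUpstreamClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			// Bounds the wait for upstream response *headers* only; a long
			// SSE body is not subject to this timeout once headers arrive.
			ResponseHeaderTimeout: responseHeaderTimeout(),
		},
	}
}

func main() {
	_ = newUpstreamClient()
	log.Printf("upstream response-header timeout: %s", responseHeaderTimeout())
}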
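
Sketch for change 2. One printf-style call is enough for the key=value diagnostic line. The signature of logForwardFailure here is guessed from the field list above, and the values in main are placeholders:

package main

import (
	"context"
	"log"
	"time"
)

// logForwardFailure emits one machine-parseable key=value line next to the
// existing human-readable error line, so prior log scraping keeps working.
func logForwardFailure(requestID, model string, stream bool, bodyBytes int,
	hasThinking bool, anthropicBeta, query string, elapsed time.Duration, ctxErr error) {
	log.Printf("forward_failure request_id=%s model=%s stream=%t body_bytes=%d "+
		"has_thinking=%t anthropic_beta=%q query=%q elapsed=%s ctx_err=%v",
		requestID, model, stream, bodyBytes, hasThinking, anthropicBeta, query, elapsed, ctxErr)
}

func main() {
	// Placeholder values, not real request data.
	logForwardFailure("req-123", "claude-opus", false, 48213, true,
		"example-beta", "", 301*time.Second, context.DeadlineExceeded)
}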
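
Sketch for change 3. The Flusher detail is the subtle part: a wrapper that embeds the http.ResponseWriter interface does not become an http.Flusher, so type assertions on the wrapper fail even when the underlying writer can flush. Illustrative types, not the proxy's actual middleware:

package main

import (
	"bufio"
	"io"
	"net/http"
)

// responseWriter mirrors the shape of a logging middleware's wrapper.
// Embedding the http.ResponseWriter interface does NOT auto-promote Flush.
type responseWriter struct {
	http.ResponseWriter
	status int
}

// Flush forwards to the underlying writer, restoring w.(http.Flusher)
// assertions in downstream streaming handlers.
func (w *responseWriter) Flush() {
	if f, ok := w.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
}

// copySSE forwards every SSE line verbatim: event:, id:, retry:, ":" comment
// lines, data:, and the blank lines that terminate each event. Filtering to
// data: alone breaks strict SSE parsers.
func copySSE(dst http.ResponseWriter, src io.Reader) error {
	flusher, _ := dst.(http.Flusher) // succeeds only if dst promotes Flush
	sc := bufio.NewScanner(src)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long data: lines
	for sc.Scan() {
		line := sc.Text()
		if _, err := io.WriteString(dst, line+"\n"); err != nil {
			return err
		}
		// Flush on event boundaries so clients see each completed event
		// immediately instead of when the body closes.
		if line == "" && flusher != nil {
			flusher.Flush()
		}
	}
	return sc.Err()
}

func main() { _ = copySSE }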
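
Sketch for change 4's body rewrite. Decoding into map[string]any with plain json.Unmarshal turns every number into float64, which corrupts large integers on re-encode; json.Decoder with UseNumber() keeps them as json.Number. The function name is hypothetical:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// forceStreaming flips stream to true while leaving unknown nested fields
// intact: UseNumber keeps integers as json.Number rather than float64, so
// large IDs in tool inputs from prior turns survive the round trip.
func forceStreaming(body []byte) ([]byte, error) {
	dec := json.NewDecoder(bytes.NewReader(body))
	dec.UseNumber() // preserve integer precision in unknown nested fields

	var req map[string]any
	if err := dec.Decode(&req); err != nil {
		return nil, err
	}
	req["stream"] = true
	return json.Marshal(req)
}

func main() {
	in := []byte(`{"model":"claude","stream":false,"tool_input_id":9007199254740993}`)
	out, err := forceStreaming(in)
	if err != nil {
		panic(err)
	}
	// Prints tool_input_id as 9007199254740993; a float64 round trip would
	// have emitted 9007199254740992.
	fmt.Println(string(out))
}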
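
Sketch for change 5. A reduced version of the described runtime package and SIGTERM drain loop (atomic gauge, draining flag, 1s tick, periodic logging via a lastLog timestamp); names and the demo main are illustrative:

package main

import (
	"log"
	"sync/atomic"
	"time"
)

// Stand-ins for the internal/runtime package: an in-flight gauge plus a
// draining flag, both safe for concurrent use.
var (
	inFlight atomic.Int64
	draining atomic.Bool
)

func Add(delta int64) int64 { return inFlight.Add(delta) }
func InFlight() int64       { return inFlight.Load() }
func SetDraining(v bool)    { draining.Store(v) }
func IsDraining() bool      { return draining.Load() }

// drain mirrors the SIGTERM path: flip the flag first (so the health check
// starts returning 503 and the load balancer stops routing here), then poll
// until in-flight hits zero or the ceiling expires, logging every logEvery.
func drain(ceiling, tick, logEvery time.Duration) {
	SetDraining(true)
	deadline := time.Now().Add(ceiling)
	var lastLog time.Time
	for InFlight() > 0 && time.Now().Before(deadline) {
		if time.Since(lastLog) >= logEvery {
			log.Printf("draining: in_flight=%d", InFlight())
			lastLog = time.Now()
		}
		time.Sleep(tick)
	}
}

func main() {
	Add(1) // simulate one in-flight request finishing after 3s
	go func() { time.Sleep(3 * time.Second); Add(-1) }()
	drain(32*time.Minute, time.Second, 10*time.Second)
	log.Printf("drain complete: in_flight=%d draining=%t", InFlight(), IsDraining())
}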
package service

import (
	"testing"

	"github.com/seifghazi/claude-code-monitor/internal/model"
)

func TestAggregationHelpers(t *testing.T) {
	t.Parallel()

	t.Run("addDailyTokens accumulates by date and model", func(t *testing.T) {
		t.Parallel()

		dailyMap := make(map[string]*model.DailyTokens)
		addDailyTokens(dailyMap, "2026-03-20", "claude", 10)
		addDailyTokens(dailyMap, "2026-03-20", "claude", 5)
		addDailyTokens(dailyMap, "2026-03-20", "gpt", 7)

		day := dailyMap["2026-03-20"]
		if day == nil || day.Tokens != 22 || day.Requests != 3 {
			t.Fatalf("unexpected daily aggregate: %#v", day)
		}
		if day.Models["claude"].Tokens != 15 || day.Models["claude"].Requests != 2 {
			t.Fatalf("unexpected claude daily model aggregate: %#v", day.Models["claude"])
		}
		if day.Models["gpt"].Tokens != 7 || day.Models["gpt"].Requests != 1 {
			t.Fatalf("unexpected gpt daily model aggregate: %#v", day.Models["gpt"])
		}
	})

	t.Run("addHourlyTokens accumulates by bucket and model", func(t *testing.T) {
		t.Parallel()

		bucketMap := make(map[string]*model.HourlyTokens)
		addHourlyTokens(bucketMap, "09", "09:00", "claude", 4)
		addHourlyTokens(bucketMap, "09", "09:00", "claude", 6)
		addHourlyTokens(bucketMap, "09", "09:00", "gpt", 2)

		bucket := bucketMap["09"]
		if bucket == nil || bucket.Tokens != 12 || bucket.Requests != 3 || bucket.Label != "09:00" {
			t.Fatalf("unexpected hourly aggregate: %#v", bucket)
		}
		if bucket.Models["claude"].Tokens != 10 || bucket.Models["claude"].Requests != 2 {
			t.Fatalf("unexpected claude hourly model aggregate: %#v", bucket.Models["claude"])
		}
		if bucket.Models["gpt"].Tokens != 2 || bucket.Models["gpt"].Requests != 1 {
			t.Fatalf("unexpected gpt hourly model aggregate: %#v", bucket.Models["gpt"])
		}
	})

	t.Run("addModelTokens accumulates by model", func(t *testing.T) {
		t.Parallel()

		modelMap := make(map[string]*model.ModelTokens)
		addModelTokens(modelMap, "claude", 8)
		addModelTokens(modelMap, "claude", 12)

		got := modelMap["claude"]
		if got == nil || got.Tokens != 20 || got.Requests != 2 {
			t.Fatalf("unexpected model aggregate: %#v", got)
		}
	})
}
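
The aggregation helpers under test are not shown on this page, but the assertions pin down their contract. A hypothetical reconstruction of addDailyTokens with the model types inlined as the test implies them (the real definitions in internal/model may differ, e.g. wider integer types or extra fields such as a date):

package main

import "fmt"

// Inlined stand-ins for the internal/model types, shaped by the assertions
// in the test above; not the actual storage code.
type ModelTokens struct {
	Tokens   int
	Requests int
}

type DailyTokens struct {
	Tokens   int
	Requests int
	Models   map[string]*ModelTokens
}

// addDailyTokens adds tokens to the day and to the per-model bucket, and
// counts exactly one request at both levels, matching the test's arithmetic
// (three calls -> Requests 3, Tokens 22; claude 15/2, gpt 7/1).
func addDailyTokens(daily map[string]*DailyTokens, date, model string, tokens int) {
	day := daily[date]
	if day == nil {
		day = &DailyTokens{Models: make(map[string]*ModelTokens)}
		daily[date] = day
	}
	day.Tokens += tokens
	day.Requests++

	m := day.Models[model]
	if m == nil {
		m = &ModelTokens{}
		day.Models[model] = m
	}
	m.Tokens += tokens
	m.Requests++
}

func main() {
	daily := map[string]*DailyTokens{}
	addDailyTokens(daily, "2026-03-20", "claude", 10)
	addDailyTokens(daily, "2026-03-20", "claude", 5)
	fmt.Printf("%+v\n", *daily["2026-03-20"].Models["claude"]) // {Tokens:15 Requests:2}
}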