Session infrastructure in LiveTemplate handles state storage, connection management, and WebSocket configuration. This guide covers state safety guarantees, session stores, connection management, and performance tuning for production deployments.
For pushing updates from server-side code, see Server Actions Reference.
Connections that share a groupID share the same state instance. State persistence is controlled at the field level using the lvt:"persist" struct tag. Fields tagged with lvt:"persist" are serialized to the SessionStore after every successful action (WebSocket, HTTP POST, dispatched, server-initiated). Fields without the tag are ephemeral -- they start at their zero value on each page load and are populated by Mount() from the database, URL params, or other sources.
The SessionStore is only consulted when at least one field in the state struct has the lvt:"persist" tag. If no fields are tagged, the state is fully ephemeral -- equivalent to the old WithEphemeralState() behavior -- and the SessionStore is never read from or written to.
type TodoState struct {
    // Persisted -- survives page refresh
    Filter string `json:"filter" lvt:"persist"`
    Page   int    `json:"page" lvt:"persist"`

    // Ephemeral -- loaded fresh by Mount()
    Items      []Todo `json:"items"`
    TotalCount int    `json:"total_count"`
}
In this example, Filter and Page survive page refreshes because they are persisted to the SessionStore. Items and TotalCount are ephemeral -- they start as zero values and are loaded fresh by Mount() on every request, ensuring data is always current from the database.
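The tag-driven split can be seen with a few lines of reflection. This is a standalone sketch, not framework code -- persistFields is a hypothetical helper that simply lists the fields the lvt:"persist" tag would mark for the SessionStore:

```go
package main

import (
	"fmt"
	"reflect"
)

type Todo struct{ ID int }

// TodoState mirrors the example above: two persisted fields, two ephemeral ones.
type TodoState struct {
	Filter     string `json:"filter" lvt:"persist"`
	Page       int    `json:"page" lvt:"persist"`
	Items      []Todo `json:"items"`
	TotalCount int    `json:"total_count"`
}

// persistFields returns the names of fields tagged lvt:"persist".
// Illustrative only; the framework's internal tag handling may differ.
func persistFields(s interface{}) []string {
	t := reflect.TypeOf(s)
	var out []string
	for i := 0; i < t.NumField(); i++ {
		if t.Field(i).Tag.Get("lvt") == "persist" {
			out = append(out, t.Field(i).Name)
		}
	}
	return out
}

func main() {
	fmt.Println(persistFields(TodoState{})) // [Filter Page]
}
```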
Why selective persistence? LiveTemplate supports both WebSocket and HTTP-only modes. In WebSocket mode, state naturally lives in memory for the lifetime of the connection. In HTTP-only mode (progressive enhancement), each request is stateless -- without persistence, form submissions via POST-Redirect-GET would lose UI state like filters and pagination. Selective persistence lets you persist only the lightweight UI state that needs to survive page refreshes, while keeping database-sourced data ephemeral and always fresh from Mount().
Mount() lifecycle: Mount() is called on every HTTP request (GET and POST) and every WebSocket connect (new and reconnect). It receives the current state (with persisted fields restored, ephemeral fields at zero value) and returns refreshed state. This ensures data is always fresh from the database. Keep Mount cheap -- it runs on every request.
Mount on POST: Since Mount runs before actions on HTTP POST, it must populate ephemeral fields (e.g., state.Items = db.GetItems(state.Filter)). Persisted fields like Filter and Page are already restored from the SessionStore before Mount receives the state. Guard side effects with ctx.Action() == "" to restrict them to page loads.
Fields tagged with lvt:"persist" follow this persistence schedule. Untagged fields are always ephemeral (zero value on load, populated by Mount()). If no fields have the lvt:"persist" tag, the SessionStore is never consulted.
| Operation | lvt:"persist" fields | Untagged fields |
|---|---|---|
| Mount() (new session) | Persisted | Zero value, loaded by Mount() |
| Mount() (HTTP GET) | Restored from store, then persisted | Zero value, loaded by Mount() |
| Mount() (HTTP POST) | Restored from store, then persisted | Zero value, loaded by Mount() |
| Mount() (WS reconnect) | Restored from store, then persisted | Zero value, loaded by Mount() |
| OnConnect() (WS connect) | Not persisted | Not persisted |
| HTTP POST action | Persisted on success | In-memory only, discarded after response |
| WebSocket action | Persisted on success | In-memory for connection lifetime |
| Dispatched action | Persisted on success | In-memory for connection lifetime |
| Server action | Persisted (once per group) | In-memory for connection lifetime |
| Page refresh | Restored from store | Zero value, loaded by Mount() |
When the controller implements a Sync() method, the framework automatically dispatches it to peer connections in the same session group after every action. This is the recommended way to keep multiple tabs in sync:
func (c *TodoController) Sync(state TodoState, ctx *livetemplate.Context) (TodoState, error) {
    state.Items = c.DB.GetItems(ctx.UserID())
    return state, nil // Peer connections reload from database
}
How it works:
1. The browser's session group is identified (the livetemplate-id cookie for anonymous users, or the authenticated identity's groupID).
2. After an action completes, Sync is dispatched to all other connections in the group.
3. Each peer runs Sync with its own state, reloading from the database.
4. If the controller does not implement Sync(), no cross-tab dispatch occurs.

LiveTemplate ensures safe state management through two dimensions: purity enforcement (preventing dependency types in state) and session isolation (preventing cross-user data leakage). For the full Controller+State pattern, see the Controller+State Pattern Reference.
State purity is enforced through four layers. Each layer catches different classes of mistakes.
Controllers and state are separate types. Action methods receive state by value and return a modified copy:
func (c *Controller) Action(state State, ctx *livetemplate.Context) (State, error)
The controller is a singleton holding shared dependencies (DB, Logger). State is pure data cloned per session. Because state is passed by value, mutations in the action body don't affect the original — the framework only applies the returned copy.
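Go's value semantics are what make this safe. A minimal standalone sketch (increment is an illustrative stand-in for an action method, not a LiveTemplate API):

```go
package main

import "fmt"

type State struct{ Count int }

// increment mimics an action method: it receives state by value and
// returns a modified copy; the caller's value is untouched.
func increment(s State) State {
	s.Count++
	return s
}

func main() {
	orig := State{Count: 1}
	updated := increment(orig)
	fmt.Println(orig.Count, updated.Count) // 1 2
}
```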
AsState[T]() validates the state type at registration time using validatePureState[T](), which performs a recursive descent through the state type's fields, including nested and embedded structs.
If a dependency type is found, AsState panics immediately with an actionable message:
livetemplate.AsState: field DB appears to be a dependency (*sql.DB) - move to controller
Detected Dependency Patterns:
| Pattern | Category |
|---|---|
| *sql.DB | Database |
| *sql.Tx | Database |
| *sql.Conn | Database |
| *slog.Logger | Logging |
| *log.Logger | Logging |
| *http.Client | Network |
| *redis.Client | Cache |
| io.Writer | I/O |
| io.Reader | I/O |
Detection is heuristic — it matches these 9 known dependency patterns by comparing the reflected type string (for example, *sql.DB). Struct wrappers that embed these types (e.g., type AppDB struct{ *sql.DB }) are still detected because validation descends into nested struct fields. Defined pointer types (e.g., type AppDB *sql.DB) and other third-party types (e.g., *pgxpool.Pool) are not caught by this layer. Layer 4 (the test helper) runs the same validation logic but reports failures as test errors instead of panics.
Each new session gets a deep copy of state via JSON marshal/unmarshal. This catches non-serializable fields that pass Layer 2:
If state contains functions, channels, or circular references, the JSON round-trip fails at runtime when the first session is created. Unexported fields are skipped by encoding/json, so data in these fields is silently omitted from the cloned and persisted state.
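The round-trip behavior is easy to verify with a standalone sketch (clone is an illustrative helper, not the framework's internal clone function):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type State struct {
	Filter string
	secret string // unexported: silently dropped by the JSON round-trip
}

// clone sketches the deep-copy-via-JSON approach described above.
func clone(s State) (State, error) {
	b, err := json.Marshal(s)
	if err != nil {
		return State{}, err // funcs/chans/cycles would fail here
	}
	var out State
	err = json.Unmarshal(b, &out)
	return out, err
}

func main() {
	c, _ := clone(State{Filter: "active", secret: "x"})
	fmt.Printf("%q %q\n", c.Filter, c.secret) // "active" ""
}
```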
AssertPureState[T](t) runs the same validation as Layer 2 but fails the test instead of panicking:
func TestState(t *testing.T) {
    livetemplate.AssertPureState[TodoState](t)
}
Add this to every state type's test file. It catches dependency leakage in CI before the code reaches production.
| Violation | When Detected | Outcome |
|---|---|---|
| Dependency type in state struct | AsState[T]() at handler registration | Panic with field name and type |
| Non-serializable field (func, chan) | First session clone at runtime | JSON marshal error |
| Dependency in test | AssertPureState[T](t) in test suite | Test failure with field name and type |
Session isolation ensures that user A's state is never visible to user B.
Every request — HTTP and WebSocket — goes through the Authenticator to compute a groupID:
Authenticator.Identify(r) → userID
Authenticator.GetSessionGroup(r, userID) → groupID
The groupID is the authorization boundary for all state access. Users cannot specify a groupID directly in the URL or headers — the Authenticator computes it from the request's identity (cookies, auth headers).
The built-in AnonymousAuthenticator generates a random 256-bit groupID per browser (base64-encoded, stored in a livetemplate-id cookie). The BasicAuthenticator maps groupID = userID for authenticated users.
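Generating such an identifier is straightforward with the standard library. A sketch under stated assumptions -- newGroupID is hypothetical, and the framework may use a different base64 variant than RawURLEncoding:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// newGroupID sketches how a 256-bit random group ID could be generated,
// as the built-in AnonymousAuthenticator is described as doing.
func newGroupID() (string, error) {
	b := make([]byte, 32) // 256 bits of entropy
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

func main() {
	id, err := newGroupID()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(id)) // 43 (32 bytes, base64 without padding)
}
```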
Each session group gets an independent state clone via JSON serialization/deserialization. No shared pointers exist across sessions. Each WebSocket connection also gets its own template clone (Template.Clone()) for independent diff state, preventing one tab's updates from corrupting another tab's tree comparison.
State persistence is keyed by groupID. No API exists to access another group's state:
- MemorySessionStore indexes entries by groupID.
- RedisSessionStore uses keys of the form livetemplate:session:{groupID}.

State is deserialized fresh on each Get(), preventing reference sharing across requests.
Both Sync() auto-dispatch and explicit BroadcastAction() are scoped to the sender's groupID. The ConnectionRegistry filters recipients via GetByGroup(groupID) — messages only reach connections in the same group. Different groups are never informed of each other's updates.
In the HTTP path (non-WebSocket), each session group has a per-group httpTemplateCacheEntry with its own mutex. Each POST request clones state, processes the action, and persists the result back to the group's SessionStore entry. The mutex serializes concurrent requests for the same group, preventing data races.
| Component | Isolation Key | Cross-User Leakage |
|---|---|---|
| SessionStore | groupID | Not possible (keyed by groupID) |
| ConnectionRegistry | groupID (dual-indexed) | Not possible (lookup filtered) |
| Template | Per-connection clone | Not possible (independent instances) |
| State | Per-group clone via JSON | Not possible (deep copy) |
| HTTP cache | Per-group entry + mutex | Not possible (separate entries) |
| Broadcast | groupID in registry | Not possible (filtered by group) |
The session store interface manages session groups:
type SessionStore interface {
    // Get retrieves the state for a session group.
    // Returns nil if the group doesn't exist.
    Get(ctx context.Context, groupID string) interface{}

    // Set stores state for a session group.
    // Creates a new group if it doesn't exist, updates if it does.
    Set(ctx context.Context, groupID string, state interface{})

    // Delete removes a session group and all its state.
    Delete(ctx context.Context, groupID string)

    // List returns all active session group IDs.
    List(ctx context.Context) []string
}
An optimization interface for updating a single named store within a session group without replacing the entire state. Both MemorySessionStore and RedisSessionStore implement it. Note: MemorySessionStore.SetStore() is a no-op, since in-memory references are already updated in place; the optimization primarily benefits RedisSessionStore, where it avoids re-serializing all stores on every action:
type SingleStoreSetter interface {
    SetStore(ctx context.Context, groupID string, storeName string, store interface{})
}
In-memory session store for single-instance deployments.
Features:
Configuration:
store := livetemplate.NewMemorySessionStore(
    livetemplate.WithCleanupTTL(12*time.Hour),        // Default: 24 hours
    livetemplate.WithCleanupInterval(30*time.Minute), // Default: 1 hour
)
defer store.Close() // Stop cleanup goroutine on shutdown

tmpl := livetemplate.New("app",
    livetemplate.WithSessionStore(store),
)
| Option | Default | Description |
|---|---|---|
| WithCleanupTTL(ttl) | 24 hours | Time-to-live for inactive groups |
| WithCleanupInterval(interval) | 1 hour | How often cleanup runs |
Redis-backed session store for distributed/multi-instance deployments.
Features:
Redis Key Schema:
livetemplate:session:{groupID}        -> Gob-encoded Stores
livetemplate:session:{groupID}:access -> Last access timestamp

Configuration:
client := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})

store := livetemplate.NewRedisSessionStore(client,
    livetemplate.WithSessionTTL(24*time.Hour),         // Default: 24 hours
    livetemplate.WithMaxRetries(5),                    // Default: 3
    livetemplate.WithRetryDelay(200*time.Millisecond), // Default: 100ms
)

tmpl := livetemplate.New("app",
    livetemplate.WithSessionStore(store),
)
| Option | Default | Description |
|---|---|---|
| WithSessionTTL(ttl) | 24 hours | TTL for sessions in Redis |
| WithMaxRetries(n) | 3 | Retry attempts for Redis operations |
| WithRetryDelay(delay) | 100ms | Base delay for exponential backoff |
Important: Register Custom Types
Custom state types MUST be registered with gob.Register() before use. Register ALL types that will be serialized, including nested structs and slice element types:
type User struct {
    ID   string
    Name string
}

type MyState struct {
    Value    int
    Users    []User         // Nested type - must also register
    Metadata map[string]any // Maps with interface values need care
}

func init() {
    // Register the state type AND all nested types
    gob.Register(&MyState{})
    gob.Register(&User{})          // Required for []User slice
    gob.Register(map[string]any{}) // If using interface{} maps
}
Common Pitfall: Forgetting to register nested types causes silent serialization failures. If state doesn't persist across Redis, check gob registration.
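A quick standalone round-trip makes the registration requirement concrete. Note that the interface value stored in Metadata is a User value in this sketch, so it registers User{} rather than a pointer; roundTrip is an illustrative helper, not a framework function:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

type User struct {
	ID   string
	Name string
}

type MyState struct {
	Value    int
	Users    []User
	Metadata map[string]any
}

func init() {
	// Without this registration, encoding the interface value in
	// Metadata fails with a "type not registered" error.
	gob.Register(User{})
}

// roundTrip gob-encodes and decodes the state, as a Redis store would.
func roundTrip(in MyState) (MyState, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		return MyState{}, err
	}
	var out MyState
	err := gob.NewDecoder(&buf).Decode(&out)
	return out, err
}

func main() {
	in := MyState{
		Value:    1,
		Users:    []User{{ID: "u1", Name: "Ada"}},
		Metadata: map[string]any{"owner": User{ID: "u1", Name: "Ada"}},
	}
	out, err := roundTrip(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Users[0].Name, out.Metadata["owner"].(User).ID) // Ada u1
}
```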
Health Checks:
// Check Redis connectivity
if err := store.Ping(); err != nil {
    log.Printf("Redis unhealthy: %v", err)
}

// With context timeout
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
if err := store.PingContext(ctx); err != nil {
    log.Printf("Redis unhealthy: %v", err)
}
Represents a single WebSocket connection with metadata:
type Connection struct {
    Conn     *websocket.Conn // The WebSocket connection
    GroupID  string          // Session group ID (shared state boundary)
    UserID   string          // User identity ("" for anonymous)
    Template interface{}     // Per-connection template for tree diffing
    State    interface{}     // Reference to shared state from session group
    Uploads  interface{}     // Per-connection upload registry
}
Key Methods:
- Send(messageType, data) error - Thread-safe async send (non-blocking)
- Close() error - Thread-safe graceful shutdown

Async Send Architecture:
Send(msgType, data)  [called from handler]
        |
        |  [non-blocking] queue to sendChan (buffered channel)
        v
writePump goroutine (one per connection)
        |
        |  dequeue from sendChan
        v
conn.WriteMessage (protected by mutex)
        |
        v
WebSocket
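The non-blocking queueing step can be sketched with a buffered channel and a select/default. This is a simplified standalone illustration (sender and errClientTooSlow are local stand-ins, not the framework's Connection and ErrClientTooSlow):

```go
package main

import (
	"errors"
	"fmt"
)

var errClientTooSlow = errors.New("client too slow")

// sender sketches the non-blocking send path described above: messages are
// queued to a buffered channel; a full buffer signals a slow client.
type sender struct {
	sendChan chan []byte
}

func (s *sender) Send(data []byte) error {
	select {
	case s.sendChan <- data: // non-blocking queue to writePump
		return nil
	default:
		return errClientTooSlow // buffer full
	}
}

func main() {
	s := &sender{sendChan: make(chan []byte, 2)}
	fmt.Println(s.Send([]byte("a"))) // <nil>
	fmt.Println(s.Send([]byte("b"))) // <nil>
	fmt.Println(s.Send([]byte("c"))) // client too slow
}
```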
Efficient lookup for broadcasting with dual indexing:
type ConnectionRegistry struct {
    byGroup map[string][]*Connection // groupID -> connections
    byUser  map[string][]*Connection // userID -> connections
}
| Method | Description |
|---|---|
| Register(conn, bufferSize) | Add connection and start writePump |
| Unregister(conn) | Remove connection, trigger graceful shutdown |
| GetByGroup(groupID) | All connections in a session group (multi-tab) |
| GetByUser(userID) | All connections for a user (multi-device) |
| GetAll() | All active connections |
| Count() | Total connection count |
| GroupCount() | Number of session groups |
| UserCount() | Number of unique users |
Resource protection with two-level limits:
tmpl := livetemplate.New("app",
    livetemplate.WithMaxConnections(10000),      // Global limit
    livetemplate.WithMaxConnectionsPerGroup(10), // Per-group limit
)
| Option | Default | Description |
|---|---|---|
| WithMaxConnections(max) | 0 (unlimited) | Global connection limit |
| WithMaxConnectionsPerGroup(max) | 0 (unlimited) | Per-group limit (prevents single-user DOS) |
Configure the message buffer per connection:
tmpl := livetemplate.New("app",
    livetemplate.WithWebSocketBufferSize(100), // Default: 50
)
Or via environment variable:
export LVT_WS_BUFFER_SIZE=100
Buffer Size Recommendations:
| Traffic Level | Buffer Size | Notes |
|---|---|---|
| Low/memory-constrained | 10-25 | Minimal memory footprint |
| Normal | 50 (default) | Good balance |
| High/burst-heavy | 100-200 | Handles traffic spikes |
Memory Warning: Buffer size directly affects memory usage per connection. Each buffered message slot reserves memory even when empty. Plan for connections × buffer_size × avg_message_size.
For memory-constrained environments, prefer smaller buffers (10-25) and rely on backpressure to handle slow clients.
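As a worked example of the connections × buffer_size × avg_message_size formula, here is a sketch with illustrative numbers (10,000 connections, the default 50-slot buffer, 1 KiB average message -- assumptions, not benchmarks):

```go
package main

import "fmt"

// bufferedBytes computes the worst-case buffered-message memory:
// connections × buffer_size × avg_message_size.
func bufferedBytes(connections, bufferSize, avgMessageBytes int) int {
	return connections * bufferSize * avgMessageBytes
}

func main() {
	total := bufferedBytes(10_000, 50, 1024)
	fmt.Printf("%.1f MiB worst-case buffered-message memory\n",
		float64(total)/(1024*1024)) // 488.3 MiB worst-case buffered-message memory
}
```

Dropping the buffer to 10 slots cuts that worst case by a factor of five for the same connection count.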
Disable WebSocket for HTTP-only operation:
tmpl := livetemplate.New("app",
livetemplate.WithWebSocketDisabled(),
)
Limit message rate per connection using a token bucket algorithm:
tmpl := livetemplate.New("app",
    livetemplate.WithMessageRateLimit(10, 20), // 10 msg/sec, burst of 20
)
| Parameter | Description | Default |
|---|---|---|
| messagesPerSecond (float64) | Sustained message rate | 10 |
| burstCapacity (int) | Maximum burst size above sustained rate | 20 |
Set messagesPerSecond = 0 to disable rate limiting (not recommended for production).
var (
    // Returned when Send() is called on a closed connection
    ErrConnectionClosed = errors.New("connection closed")

    // Returned when the buffer is full (client not consuming fast enough);
    // the connection will be closed automatically
    ErrClientTooSlow = errors.New("client too slow")
)
Handling slow clients:
- Monitor the wsBufferFull and wsSlowClientCloses metrics

| Metric | Value |
|---|---|
| Concurrent sends/sec | 165M (lock-free) |
| Queued sends/sec | 54.7M (through buffer) |
| Memory per connection | ~980 bytes base |
Known limitations:

- Dependency detection is heuristic: it matches a fixed list of nine known patterns (*sql.DB and related database types, plus common third-party types like *redis.Client). AssertPureState[T](t) uses the same validatePureState heuristics as AsState -- it catches the same issues earlier, in CI/tests, without panicking, but it does not broaden detection. Unlisted custom wrappers or third-party types will not be flagged unless you extend the framework's pattern list or add custom validation.
- A custom Authenticator that returns the same groupID for different users would break isolation. Use the built-in authenticators or ensure GetSessionGroup maps distinct users to distinct groups.

See Current Limitations for the full limitations reference.