Broadcasting, deeper

Counter, deeper showed ctx.BroadcastAction("Increment", nil) keeping every tab in a single browser in sync — a counter clicked in one tab ticks in the others. The scope of "every tab" was the session group: the browser's cookie pins all its tabs to one group, and BroadcastAction fans the named action out to every connection in that group.

Broadcasting goes further within the same scope. Counter shared one integer; this pattern shares a multi-author message log. Same BroadcastAction primitive, two design choices that change everything — which fields are per-connection vs persisted, and where the source of truth lives.

Broadcasting

ctx.BroadcastAction("NewMessage", nil) fans the named action out to every other connection in the session group. Peers receive it as a regular action invocation; their handler reads the shared message log under a mutex and refreshes local state. The broadcast is queued during the action and executes after it returns successfully.

Try: Open this page in a second tab and Join with a different name. Sending in either tab broadcasts to both. The shared log lives on the controller; each tab's Username is per-connection (not persisted) so two tabs in the same browser stay independent — see Reconnection Recovery for the persist case.

Limitation: The shared message log is in-memory and uncapped — production apps would ring-buffer, paginate, or persist to a TTL store. Kept simple here to focus on the broadcast mechanism itself.

(For a setup where every visitor — across browsers, across machines — sees the same broadcasts, you'd swap AnonymousAuthenticator for one that returns a constant group ID. That's an authentication choice, not a BroadcastAction choice.)
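
A minimal sketch of that swap, with the interface shape assumed for illustration (the real livetemplate authenticator API may differ); the only decision that matters is the group ID returned:

// Illustrative only: the exact authenticator interface in livetemplate is not
// shown here. The point is that the returned group ID sets the broadcast scope,
// so a constant pins every visitor into one session group.
type GlobalGroupAuthenticator struct{}

func (GlobalGroupAuthenticator) Authenticate(r *http.Request) (string, error) {
	// Every browser on every machine lands in the same group,
	// so BroadcastAction reaches all of them.
	return "global", nil
}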

Anatomy of the state

type BroadcastingState struct {
	Title    string
	Category string
	// Username is intentionally NOT lvt:"persist" — persist storage is keyed
	// by session group (state.go:1421 SessionStore.Set(ctx, groupID, ...)),
	// so persisting it would force every tab in the same browser to share a
	// single Username. The whole point of the demo is letting two tabs join
	// as different users; per-connection state is what makes that work.
	// Reconnect Recovery (#29) covers the persist scenario instead.
	Username string
	Messages []BroadcastMessage
}

state_realtime.go:16-28

Note what's not persisted. Username looks like a candidate for lvt:"persist" — it's user identity, surely you want it to survive a reconnect? But persist storage is keyed by session group, so persisting Username would force every tab in the same browser to share one identity, defeating the demo where two tabs join as different users.

The pattern that does persist state across reconnects is ReconnectionState (also in this file): different recipe, same package. Same broadcast scope (session group), but its persisted fields survive a WebSocket drop and are shared by every connection in the group, because they are lvt:"persist"-tagged.
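
For contrast, a field meant to survive drops carries the tag; a minimal illustrative sketch (not the actual ReconnectionState definition):

type PersistedState struct {
	// Illustrative field: persist storage is keyed by session group, so this
	// value is shared by every tab in the same browser and restored after a
	// WebSocket drop.
	Draft string `lvt:"persist"`
}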

Where the messages live

type BroadcastingController struct {
	mu       sync.RWMutex
	nextID   int
	messages []BroadcastMessage
}

// snapshotLocked returns a copy of c.messages. The Locked suffix signals
// that the caller MUST hold c.mu (read or write) — without that, slices.Clone
// reads c.messages concurrently with Send's append and races.
func (c *BroadcastingController) snapshotLocked() []BroadcastMessage {
	return slices.Clone(c.messages)
}

func (c *BroadcastingController) Mount(state BroadcastingState, ctx *livetemplate.Context) (BroadcastingState, error) {
	c.mu.RLock()
	state.Messages = c.snapshotLocked()
	c.mu.RUnlock()
	return state, nil
}

handlers_realtime.go:61-80

The message log is on the controller, not in state. State is per-connection; the controller is the singleton dependency layer the Controller+State pattern puts in front of every connection routed to this handler. c.messages is the source of truth — every tab reads from it under the same RWMutex.

The Mount method runs on every initial render — without it, a tab that opens after others have sent messages would render with Messages: nil until the next broadcast arrives. Mount snapshots the current log into per-connection state so each tab starts coherent.
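
The log entries themselves are plain values. A sketch of BroadcastMessage, reconstructed from the fields the Send handler below populates (the real definition may carry more):

type BroadcastMessage struct {
	ID   int    // assigned from the controller's nextID counter
	User string // the sender's per-connection Username
	Text string // trimmed message body
}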

The broadcast

func (c *BroadcastingController) Send(state BroadcastingState, ctx *livetemplate.Context) (BroadcastingState, error) {
	if state.Username == "" {
		return state, nil
	}
	text := strings.TrimSpace(ctx.GetString("text"))
	if text == "" {
		return state, nil
	}
	c.mu.Lock()
	c.nextID++
	// No cap on c.messages: deliberately omitted to keep the demo focused
	// on the BroadcastAction mechanism. Production apps would ring-buffer,
	// paginate, or persist to a store with TTL.
	c.messages = append(c.messages, BroadcastMessage{ID: c.nextID, User: state.Username, Text: text})
	state.Messages = c.snapshotLocked()
	c.mu.Unlock()
	// BroadcastAction comes only after the lock release: peers receive
	// "NewMessage" and their handlers take c.mu to refresh their local copy,
	// so queuing the broadcast while still holding c.mu risks deadlocking
	// against that dispatch.
	ctx.BroadcastAction("NewMessage", nil)
	return state, nil
}

handlers_realtime.go:93-116

Two non-obvious mutex rules in this method:

  1. BroadcastAction after the lock release. Peer dispatches of "NewMessage" take the same c.mu this method holds, so queuing the broadcast while still holding it risks a deadlock. The pattern: mutate-and-snapshot under your lock, release, then broadcast.

  2. snapshotLocked() requires the caller hold the lock. A naked slices.Clone(c.messages) reads concurrently with Send's append and races. The Locked suffix is documentation: violate it and you get a data race the test suite will catch under -race.

A third caveat is not a mutex rule at all: c.messages is uncapped here. Production apps would ring-buffer, paginate, or persist to a TTL store. This demo skips that to keep the focus on BroadcastAction itself.
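
If you do want a cap, the smallest change is a bounded append inside the same critical section. A sketch, with maxMessages as an illustrative constant:

const maxMessages = 200 // illustrative cap

// Inside Send, still under c.mu.Lock():
c.messages = append(c.messages, BroadcastMessage{ID: c.nextID, User: state.Username, Text: text})
if len(c.messages) > maxMessages {
	// Copy rather than re-slice so dropped entries can be garbage-collected.
	c.messages = append([]BroadcastMessage(nil), c.messages[len(c.messages)-maxMessages:]...)
}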

What peers do

func (c *BroadcastingController) NewMessage(state BroadcastingState, ctx *livetemplate.Context) (BroadcastingState, error) {
	c.mu.RLock()
	state.Messages = c.snapshotLocked()
	c.mu.RUnlock()
	return state, nil
}

handlers_realtime.go:120-126

NewMessage runs on every peer when the broadcast fires. It reads the shared log under RLock and copies into per-connection state. The template re-renders; the diff goes over the wire as patches, not full HTML.

This is why wire traffic doesn't grow with the size of the log: each peer's wire bytes equal the diff between its local state before and after NewMessage, which is roughly "one new message appended to the messages list."

When this scales

Single process, single replica: works as-shown. The mutex serializes appends; the broadcast is in-process Pub/Sub.

Multi-replica: swap in-process broadcast for Redis Pub/Sub via WithPubSubBroadcaster. The handler shape stays identical — the Send and NewMessage methods don't change. What changes is where c.messages lives (a shared store instead of a Go slice) and how BroadcastAction propagates (Redis publish, replica subscribers fire NewMessage on their connections).
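
One way to picture that move: the slice becomes a store dependency behind an interface, so Send and NewMessage keep calling the same two operations. The interface below is illustrative, not part of livetemplate:

// Illustrative only: a Redis- or SQL-backed implementation satisfies this in a
// multi-replica deployment; the single-process version just wraps the slice
// and mutex shown above.
type MessageStore interface {
	Append(user, text string) (BroadcastMessage, error) // assigns the ID
	Snapshot() ([]BroadcastMessage, error)
}

type BroadcastingController struct {
	store MessageStore
}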

What's next

The reconnection-recovery pattern (live demo at /patterns/realtime/reconnection) is the persist-state companion. Same BroadcastAction shape, but the demo state survives a WebSocket drop because the fields are lvt:"persist"-tagged. A future recipe will go deep on it; for now the live widget plus its source in the same _app/ is the reference.