Routing Analysis · March 2026

From 495x to 3.5x amplification

Campfire IDs are opaque keys—no prefixes, no hierarchy, no aggregation. Unlike a BGP router, a Campfire node can't pick "the one best path." We found what works instead.


The problem

In BGP, a router receiving a packet for 10.3.0.0/16 checks one prefix, picks one next-hop, done. The hierarchical address space means forwarding is O(1) per packet. You never flood because the prefix tells you which direction to go.

Campfire IDs are 32-byte Ed25519 public keys. There's no hierarchy, no aggregation, no "this next-hop covers these campfires." When a node builds its forwarding set, it collects all unique NextHops from its routing table—and on a meshed network with 5 routes from 5 different directions, that's 5 forwards instead of 1.

The question: how bad is it, and what fixes it?


Test topology

We built a BGP-realistic network model: 200 nodes arranged as a binary spanning tree (backbone) with two ring augmentations (stride=40 and stride=3) for cross-links. Average degree 6.0. Members scattered across the network, not placed along clean chains.

Nodes: 200
Avg degree: 6.0
Campfires: 20
Members each: 3
Member slots: 60
Placement: scattered

This topology is the same for all scenarios—flood, path-vector, and spanning tree all run on the identical graph. Apples to apples.


Results

Mode                 Sends    Amplification   vs Flood
Flood v0.4.2         19,820   495.50x         (baseline)
Path-vector v0.5.0    1,895    47.38x         90% fewer sends
Bloom SPT v0.7.0        139     3.48x         99.3% fewer sends

At a glance: 99.3% of waste eliminated, 3.48x amplification, 0 false negatives, 32B of bloom state per peer.

Why bloom filters

A bloom filter is a probabilistic set-membership structure with two properties: zero false negatives (if a campfire is reachable through this peer, the filter always says yes) and a bounded false positive rate (it occasionally says yes for a campfire that isn't actually downstream, at a rate controlled by filter size and load).

Each node maintains a 256-bit bloom filter per direct peer, encoding which campfire IDs have members reachable through that peer. When forwarding a message for campfire C, the node checks each peer's bloom: only peers whose filter matches C are included in the forwarding set.

This approximates a per-campfire shortest-path tree (SPT) without any explicit tree construction, tree maintenance, or additional protocol messages. The bloom filters build naturally as beacons propagate.

Why not actual spanning trees? Explicit SPT requires tree construction messages, failure detection, reconvergence. Bloom filters achieve the same forwarding decisions with zero coordination overhead—they're populated as a side effect of beacon processing that already happens.

Filter sizing

With k=3 hash functions on a 256-bit filter, the standard approximation (1 − e^(−kn/m))^k puts the false positive rate for 20 campfires at roughly 0.9%. Scaling up, the same 3-hash scheme on a 1,024-bit (128-byte) filter stays below 1% for up to about 80 campfires per peer; holding 1% at 1,000 campfires per peer takes roughly 9.6 bits per entry (about 1.2 KB) with k≈7. The filter is per-peer, so total overhead is peers × filter_size; with 256-bit filters and average degree 6, that is under 200 bytes per node.


The spanning tree insight

The core realization: campfire forwarding is multicast, not unicast. A message needs to reach N members in different directions. BGP solves unicast with prefix aggregation; multicast is solved with spanning trees.

For each campfire, the optimal forwarding structure is a Steiner tree: the minimum subgraph connecting the origin to all members. The bloom filter encodes which peers are on this tree for which campfires, without building the tree explicitly.

How it works at each node

  1. A beacon for campfire C arrives from peer P.
  2. The node adds C to peer P's bloom filter.
  3. When forwarding a message for campfire C, the node checks each peer's bloom filter.
  4. Only peers whose bloom contains C are forwarded to.
  5. Because beacons propagate via BFS (shortest path first), the first peer to deliver a beacon for C is on the shortest path toward a member—exactly the SPT edge.

Implementation

Three changes to the campfire routing stack, all in pkg/transport/http/:

1. BloomFilter type

// 256 bits, k=3, FNV-1a hash. Zero-alloc bit array.
type BloomFilter struct {
    bits [4]uint64
}
func (bf *BloomFilter) Add(item string)
func (bf *BloomFilter) MayContain(item string) bool

2. RoutingTable.peerBlooms

// Populated on every beacon receipt. No new protocol messages.
peerBlooms map[string]*BloomFilter  // peer_id -> downstream campfires

3. forwardMessage() bloom check

// Before: forward to all NextHops
// After: forward to NextHops whose bloom matches
if h.transport.routingTable.PeerBloomCheck(route.NextHop, campfireID) {
    forwardingSet[route.NextHop] = true
}

Backwards compatible. If no bloom data exists for a peer (pre-v0.7.0 node), PeerBloomCheck returns true—conservative fallback to path-vector behavior. The bloom filter is purely additive.


Run the proof

All tests are in the campfire repo.

$ cd campfire
$ go test -v -run TestAmplificationBGP ./pkg/transport/http/
$ go test -v -run TestAmplificationBloom ./pkg/transport/http/
$ go test -v -run TestAmplification ./pkg/transport/http/