
Switchboard Refactor Notes


runner/bridge: new data path (something which can tell me who to route to, and which must be fast) -> should we route at all? (listens for the routing bool: ROUTING_ON/ROUTING_OFF) -> if NOT, exit -> if YES, who should we route to? (listens for NEW_CURRENTLY_ACTIVE_NODE) -> if ANY, make a bridge to that backend and pass the bridge off to something
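
A minimal Go sketch of this decision path, assuming the bridge runner holds a routing flag and the active backend's address and hands finished bridges off on a channel (all names below are hypothetical):

package bridge

import "net"

// Runner holds the state the new-connection path consults. In practice this
// state would be owned by a single goroutine and updated via channels; the
// field and type names here are hypothetical.
type Runner struct {
	routingEnabled bool   // updated by ROUTING_ON / ROUTING_OFF
	activeBackend  string // updated by NEW_CURRENTLY_ACTIVE_NODE, "" if none
	bridges        chan<- *Bridge
}

// Bridge pairs a client connection with a backend connection.
type Bridge struct {
	Client, Backend net.Conn
}

// HandleNewConnection is the fast path: should we route at all, who should
// we route to, then make the bridge and pass it off to something.
func (r *Runner) HandleNewConnection(client net.Conn) {
	if !r.routingEnabled { // if NOT, exit
		client.Close()
		return
	}
	if r.activeBackend == "" { // nobody to route to
		client.Close()
		return
	}
	backend, err := net.Dial("tcp", r.activeBackend)
	if err != nil {
		client.Close()
		return
	}
	r.bridges <- &Bridge{Client: client, Backend: backend} // pass the bridge off
}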

existing data path (should be severable)

  • bridge from some backend to some client (see the sketch after this list)
    • if one side of the connection dies, obliterate the bridge
  • listens for: ROUTING_OFF
    • severs all connections
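
One possible shape for such a severable bridge, assuming it is just two byte copies between a client and a backend connection (names are illustrative, not the actual implementation):

package bridge

import (
	"io"
	"net"
	"sync"
)

// Pipe copies bytes in both directions. If either side of the connection
// dies, both sides are closed, which obliterates the bridge. Calling the
// returned sever function has the same effect (e.g. on ROUTING_OFF).
func Pipe(client, backend net.Conn) (sever func()) {
	var once sync.Once
	sever = func() {
		once.Do(func() {
			client.Close()
			backend.Close()
		})
	}
	go func() {
		io.Copy(backend, client) // returns when the client side dies
		sever()
	}()
	go func() {
		io.Copy(client, backend) // returns when the backend side dies
		sever()
	}()
	return sever
}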

runner/api -> handler exists in api/ -> human control path (via the API)

  • broadcast changes in whether to route at all, i.e. the routing bool (impacts the data path); see the sketch after this list

    • broadcasts ROUTING_OFF
    • broadcasts ROUTING_ON
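
A sketch of that control path, assuming an HTTP handler in runner/api that only writes a bool onto a channel the data path consumes (routes and names are hypothetical):

package api

import (
	"net/http"
	"strings"
)

// Handler is the human control path. It does not touch connections itself;
// it only broadcasts the routing bool.
type Handler struct {
	Toggles chan<- bool // true = ROUTING_ON, false = ROUTING_OFF
}

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	switch {
	case strings.HasSuffix(r.URL.Path, "/routing/enable"):
		h.Toggles <- true // ROUTING_ON
	case strings.HasSuffix(r.URL.Path, "/routing/disable"):
		h.Toggles <- false // ROUTING_OFF
	default:
		http.NotFound(w, r)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}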

runner/monitor: timed monitor (broadcast changes in who to route to) -> go through all backends -> determine which is the currently active one -> broadcast NEW_CURRENTLY_ACTIVE_NODE
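
A sketch of that timed loop, assuming some isActive check against each backend and broadcasting only when the answer changes (the PREFERRED variant discussed below):

package monitor

import "time"

// Backend is whatever the monitor can health-check; an address is enough
// for this sketch.
type Backend struct {
	Address string
}

// Run polls every backend on a timer, determines which one is currently
// active, and broadcasts NEW_CURRENTLY_ACTIVE_NODE when that changes.
// isActive is a stand-in for however the real monitor asks a backend
// whether it is the primary.
func Run(backends []Backend, every time.Duration,
	isActive func(Backend) bool, out chan<- Backend) {
	var current Backend
	for range time.Tick(every) {
		for _, b := range backends {
			if isActive(b) {
				if b != current {
					current = b
					out <- current // NEW_CURRENTLY_ACTIVE_NODE
				}
				break
			}
		}
	}
}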


API about current state of the system

  • consumed by the dashboard

  • operator

  • BackendsIndex

  • Cluster


where does this live?

  • when new active backend is different from existing active backend, sever connections

ENABLE/DISABLE TRAFFIC MOVES TO BRIDGE

  • In runner/bridge, keep a reference to whether or not we are routing traffic at all

    • if we are routing traffic, find active node + create bridge
    • if not, return
  • In runner/bridge, default to routing traffic, but listen on a channel for changes, and update our internal state if we receive a message

    • if we go from ON -> OFF, sever existing connections
  • Rip out backend.go's idea of "trafficEnabled", and remove methods that turn it on/off

  • In runner/api, send ENABLE/DISABLE messages on the same channel as is consumed by runner/bridge (see the sketch after this list)
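
A minimal sketch of how runner/bridge might own that flag, assuming ENABLE/DISABLE arrives as a bool on a channel and severAll stands in for tearing down the existing bridges (names are hypothetical):

package bridge

// toggleWatcher owns the "are we routing at all" state.
type toggleWatcher struct {
	routing  bool
	severAll func() // drops all existing bridges
}

// watch defaults to routing traffic, updates the flag on every message,
// and severs existing connections on an ON -> OFF transition.
func (t *toggleWatcher) watch(toggles <-chan bool) {
	t.routing = true // default to routing traffic
	for enabled := range toggles {
		if t.routing && !enabled {
			t.severAll() // ON -> OFF: sever existing connections
		}
		t.routing = enabled
	}
}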


ACTIVE BACKEND CHANGED MOVES TO MONITOR

  • In runner/bridge, keep a reference to the currently active backend

    • always route traffic to it
  • In runner/bridge, listen for new active backends, and change the active backend for new routes (see the sketch after this list)

    • sever connections to the existing active backend? (we think yes)
  • In runner/monitor, periodically poll the cluster and figure out which backend is the active one

    • EITHER broadcast the active backend every single time, OR broadcast it only when it is a new active backend (PREFERRED)
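
A matching sketch for the bridge side of an active-backend change, assuming the new address arrives as a string on a channel (names are hypothetical):

package bridge

// activeWatcher keeps the bridge runner's reference to the currently active
// backend. When a different backend is announced, connections to the old
// one are severed (the "we think yes" above).
type activeWatcher struct {
	active   string
	severAll func() // tears down bridges to the previous active backend
}

func (w *activeWatcher) watch(updates <-chan string) {
	for addr := range updates { // NEW_CURRENTLY_ACTIVE_NODE
		if w.active != "" && addr != w.active {
			w.severAll()
		}
		w.active = addr // new routes go to the new backend
	}
}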

Read API consumes messages

Emitted by the monitor

&Cluster{
	Active: Backend{
		Host: 
		Port:
	},
	Backends: []Backend{
		Backend{
			Host: 
			Port:
			Healthy:
		},
		Backend{
			Host: 
			Port:
			Healthy:
		},
		Backend{
			Host: 
			Port:
			Healthy:
		},
	},
}
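
The field values above are left blank; one possible set of type declarations for that message, with field types assumed (the notes only name the fields):

package message

// Cluster is the message emitted by the monitor.
type Cluster struct {
	Active   Backend
	Backends []Backend
}

// Backend describes one node in the cluster.
type Backend struct {
	Host    string
	Port    int
	Healthy bool
}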

Emitted by the bridge - re-emitted every time a bridge is generated

CurrentSessionCount++ (bridge generated)
CurrentSessionCount: 0 (all severed)
CurrentSessionCount-- (a bridge has been closed)

Emitted by the PATCH API

TrafficEnabled:

Consumed by the Read API: it listens for all 3 message streams and merges them into whatever kind of JSON it wants (sketched below)
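
A sketch of that merge, assuming the three streams arrive on channels; the JSON shape and all names here are hypothetical, just one way the Read API could do it:

package api

import (
	"encoding/json"
	"net/http"
	"sync"
)

// ClusterStatus stands in for the monitor's Cluster message.
type ClusterStatus struct {
	Active   string   `json:"active"`
	Backends []string `json:"backends"`
}

// ReadAPI merges the three streams: cluster status from the monitor, the
// session count from the bridge, and TrafficEnabled from the PATCH API.
type ReadAPI struct {
	mu             sync.Mutex
	Cluster        ClusterStatus `json:"cluster"`
	SessionCount   int           `json:"current_session_count"`
	TrafficEnabled bool          `json:"traffic_enabled"`
}

// Consume keeps the merged state up to date as messages arrive.
func (a *ReadAPI) Consume(clusters <-chan ClusterStatus, sessions <-chan int, traffic <-chan bool) {
	for {
		select {
		case c := <-clusters:
			a.mu.Lock()
			a.Cluster = c
			a.mu.Unlock()
		case n := <-sessions:
			a.mu.Lock()
			a.SessionCount = n
			a.mu.Unlock()
		case t := <-traffic:
			a.mu.Lock()
			a.TrafficEnabled = t
			a.mu.Unlock()
		}
	}
}

// ServeHTTP renders the merged state as JSON for the dashboard.
func (a *ReadAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	a.mu.Lock()
	defer a.mu.Unlock()
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(a)
}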