A distributed key-value store built with Raft.
Most of the code is adapted from etcd; this project exists for reading and learning its source code. To fetch and build raftkvd:
go get github.com/ejunjsh/raftkv/cmd/raftkvd
cd $GOPATH/src/github.com/ejunjsh/raftkv/
sh ci.sh
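If you prefer to build the binary by hand instead of running the CI script (a sketch assuming a standard GOPATH layout and that cmd/raftkvd holds the main package):
go build -o raftkvd github.com/ejunjsh/raftkv/cmd/raftkvd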
First start a single-member cluster of raftkvd:
raftkvd --id 1 --cluster http://127.0.0.1:12379 --port 12380
Each raftkvd process maintains a single raft instance and a key-value server. The process's comma-separated list of peers (--cluster), its raft ID, which is also its index into the peer list (--id), and the HTTP key-value server port (--port) are passed on the command line.
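Annotated, the single-node command above reads:
raftkvd --id 1 --cluster http://127.0.0.1:12379 --port 12380
# --id 1                            this node's raft ID, also its index into the --cluster list
# --cluster http://127.0.0.1:12379  comma-separated peer URLs used for raft communication
# --port 12380                      port the HTTP key-value API listens on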
Next, store a value ("hello") to a key ("my-key"):
curl -L http://127.0.0.1:12380/my-key -XPUT -d hello
Finally, retrieve the stored key:
curl -L http://127.0.0.1:12380/my-key
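Assuming the PUT above was committed by the raft instance, the response body should be the stored value:
hello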
Next, start a local three-node cluster by running each of the following commands in its own terminal:
raftkvd --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftkvd --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
This will bring up three raftkvd instances.
Now it's possible to write a key-value pair to any member of the cluster and likewise retrieve it from any member.
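For example (assuming the three instances above are still running on key-value ports 12380, 22380, and 32380), a value written through node 2 can be read back through node 3:
curl -L http://127.0.0.1:22380/my-key -XPUT -d hello
curl -L http://127.0.0.1:32380/my-key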
To test cluster recovery, first start a cluster and write a value "foo":
# start all three nodes as shown earlier
raftkvd --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftkvd --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
curl -L http://127.0.0.1:12380/my-key -XPUT -d foo
Next, remove a node and replace the value with "bar" to check cluster availability:
# kill node 2
kill ...
curl -L http://127.0.0.1:12380/my-key -XPUT -d bar
curl -L http://127.0.0.1:32380/my-key
Finally, bring the node back up and verify it recovers with the updated value "bar":
# restart node 2
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
curl -L http://127.0.0.1:22380/my-key
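Assuming node 2 has caught up with the raft log after restarting, the response body should be the updated value:
bar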
Nodes can be added to or removed from a running cluster using requests to the REST API.
For example, suppose we have a 3-node cluster that was started with the commands:
raftkvd --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftkvd --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
A fourth node with ID 4 can be added by issuing a POST:
curl -L http://127.0.0.1:12380/4 -XPOST -d http://127.0.0.1:42379
Then the new node can be started as the others were, using the --join option:
raftkvd --id 4 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379,http://127.0.0.1:42379 --port 42380 --join
The new node should join the cluster and be able to service key/value requests.
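For example, a write and read through the new node's key-value port (42380, from the --port flag above) should behave the same as on the original members; the key name here is arbitrary:
curl -L http://127.0.0.1:42380/new-key -XPUT -d hello
curl -L http://127.0.0.1:42380/new-key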
We can remove a node using a DELETE request:
curl -L http://127.0.0.1:12380/3 -XDELETE
Node 3 should shut itself down once the cluster has processed this request.
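The remaining members should keep serving key-value requests, for example through node 1:
curl -L http://127.0.0.1:12380/my-key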