
Commit

update deps
tellnobody1 committed Aug 19, 2023
1 parent 92e80fd commit 385d237
Showing 32 changed files with 98 additions and 97 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/test.yml
@@ -12,5 +12,5 @@ jobs:
submodules: 'recursive'
- uses: actions/[email protected]
with:
java-version: 18
java-version: 20
- run: sbt test
2 changes: 1 addition & 1 deletion .sdkmanrc
@@ -1,3 +1,3 @@
# Enable auto-env through the sdkman_auto_env config
# Add key=value pairs of SDKs to use below
java=18-zulu
java=20.0.2-zulu
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
DHARMA License

Copyright (c) 2015–2022 zero-deps
Copyright (c) 2015–2023 zero-deps

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
26 changes: 11 additions & 15 deletions README.md
@@ -1,31 +1,27 @@
# Abstract scala type database
# Scala Abstract Type Database

[![CI](https://img.shields.io/github/workflow/status/zero-deps/kvs/ci)](https://github.com/zero-deps/kvs/actions/workflows/test.yml)
[![License](https://img.shields.io/badge/license-DHARMA-green)](LICENSE)
[![Documentation](https://img.shields.io/badge/documentation-pdf-yellow)](kvs.pdf)
[![Paper](https://img.shields.io/badge/paper-pdf-lightgrey)](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf)
[![LoC](https://img.shields.io/tokei/lines/github/zero-deps/kvs)](#)

Abstract Scala storage framework with high-level API for handling linked lists of polymorphic data (feeds).
This open-source project provides an abstract storage framework in Scala with a high-level API for managing linked lists of polymorphic data, referred to as 'feeds'. The system, KVS (Key-Value Storage), is highly available, distributed (AP), strongly eventually consistent (SEC), and sequentially consistent via cluster sharding. Its primary use is data from sports and gaming events, but it can also serve as a distributed network file system or as a general-purpose storage layer for applications.

KVS is highly available distributed (AP) strong eventual consistent (SEC) and sequentially consistent (via cluster sharding) storage. It is used for data from sport and games events. In some configurations used as distributed network file system. Also can be a generic storage for application.
KVS is designed to support multiple backends and to run in a pure JVM environment. The implementation is a port of KAI (an Erlang implementation of Amazon's Dynamo), adapted to use the pekko-cluster infrastructure.

Designed with various backends in mind and to work in pure JVM environment. Implementation based on top of KAI (implementation of Amazon DynamoDB in Erlang) port with modification to use akka-cluster infrastructure.

Currently main backend is RocksDB to support embedded setup alongside application. Feed API (add/entries/remove) is built on top of Key-Value API (put/get/delete).
The primary backend is currently RocksDB, which lets KVS run embedded alongside the application. The Feed API (add / entries / remove) is built on top of the Key-Value API (put / get / delete).

## Usage

Add project as a git submodule.

## Structure
## Project Structure

* `./feed` -- Feed over Ring
* `./search` -- Seach over Ring
* `./sort` -- Sorted Set on Ring
* `./ring` -- Ring on Akka Cluster
* `./sharding` -- Sequential Consistency & Cluster Sharding
* `./src` -- Example apps and tests
* `./feed`: Introduces the Feed over Ring concept
* `./search`: Offers Search over Ring functionality
* `./sort`: Implements a Sorted Set on Ring
* `./ring`: Establishes a Ring structure using Pekko Cluster
* `./sharding`: Addresses Sequential Consistency & Cluster Sharding aspects
* `./src`: Contains illustrative sample applications and comprehensive tests

## Test & Demo

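The Feed-over-Key-Value layering described in the updated README (add / entries / remove built on put / get / delete) can be pictured with a minimal sketch; the trait and method names below are assumptions for illustration, not the actual kvs API:

```scala
import zio.*, stream.*

type Err = Throwable // placeholder error type for the sketch

// Hypothetical low-level Key-Value API (names assumed, not the real kvs.rng.Dba).
trait Kv:
  def put(key: Array[Byte], value: Array[Byte]): IO[Err, Unit]
  def get(key: Array[Byte]): IO[Err, Option[Array[Byte]]]
  def delete(key: Array[Byte]): IO[Err, Unit]

// Hypothetical Feed API layered on top: a feed is a linked list of entries
// whose head pointer and list nodes are stored as ordinary key-value records.
trait Feed:
  def add(fid: String, data: Array[Byte]): IO[Err, Unit]
  def entries(fid: String): ZStream[Any, Err, Array[Byte]]
  def remove(fid: String, key: Array[Byte]): IO[Err, Boolean]
```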
16 changes: 8 additions & 8 deletions build.sbt
@@ -1,15 +1,15 @@
val scalav = "3.2.0"
val zio = "2.0.0"
val akka = "2.6.19"
val rocks = "7.3.1"
val protoj = "3.21.1"
val scalav = "3.3.0"
val zio = "2.0.15" // 16
val pekko = "1.0.1"
val rocks = "8.3.2"
val protoj = "3.24.0"
val lucene = "8.11.2"

lazy val kvsroot = project.in(file(".")).settings(
scalaVersion := scalav
, libraryDependencies ++= Seq(
"dev.zio" %% "zio-test-sbt" % zio % Test
, "com.typesafe.akka" %% "akka-cluster-sharding" % akka
, "org.apache.pekko" %% "pekko-cluster-sharding" % pekko
)
, testFrameworks += new TestFramework("zio.test.sbt.ZTestFramework")
, scalacOptions ++= scalacOptionsCommon
@@ -22,7 +22,7 @@ lazy val ring = project.in(file("ring")).settings(
scalaVersion := scalav
, Compile / scalaSource := baseDirectory.value / "src"
, libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-cluster" % akka
"org.apache.pekko" %% "pekko-cluster" % pekko
, "org.rocksdb" % "rocksdbjni" % rocks
, "dev.zio" %% "zio" % zio
)
@@ -33,7 +33,7 @@ lazy val sharding = project.in(file("sharding")).settings(
scalaVersion := scalav
, Compile / scalaSource := baseDirectory.value / "src"
, libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-cluster-sharding" % akka
"org.apache.pekko" %% "pekko-cluster-sharding" % pekko
)
, scalacOptions ++= scalacOptionsCommon
).dependsOn(ring)
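For the "add the project as a git submodule" usage mentioned in the README, a consuming build might reference these modules roughly as below; the submodule path and the project ids are assumptions, not taken from this diff:

```scala
// build.sbt of a consuming project: a sketch, assuming the repository
// https://github.com/zero-deps/kvs is checked out as a submodule under ./kvs
// and that subproject ids match the directory names.
lazy val kvsFeed     = ProjectRef(file("kvs"), "feed")
lazy val kvsSharding = ProjectRef(file("kvs"), "sharding")

lazy val app = project
  .in(file("."))
  .settings(scalaVersion := "3.3.0")
  .dependsOn(kvsFeed, kvsSharding)
```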
2 changes: 1 addition & 1 deletion feed/src/ops.scala
@@ -1,6 +1,6 @@
package kvs.feed

import akka.actor.Actor
import org.apache.pekko.actor.Actor
import kvs.rng.{Dba, Key, AckQuorumFailed, AckTimeoutFailed}
import proto.*
import zio.*, stream.*
2 changes: 1 addition & 1 deletion project/build.properties
@@ -1 +1 @@
sbt.version=1.7.1
sbt.version=1.9.3
2 changes: 1 addition & 1 deletion ring/src/Data.scala
@@ -3,7 +3,7 @@ package data

import proto.*

import akka.cluster.given
import org.apache.pekko.cluster.given

sealed trait StoreKey

4 changes: 2 additions & 2 deletions ring/src/GatherDel.scala
@@ -1,7 +1,7 @@
package kvs.rng

import akka.actor.*
import akka.cluster.Cluster
import org.apache.pekko.actor.*
import org.apache.pekko.cluster.Cluster
import scala.concurrent.duration.*

class GatherDel(client: ActorRef, t: FiniteDuration, prefList: Set[Node], k: Array[Byte], conf: Conf) extends FSM[FsmState, Set[Node]] with ActorLogging {
2 changes: 1 addition & 1 deletion ring/src/GatherGet.scala
@@ -1,7 +1,7 @@
package kvs.rng

import annotation.unused
import akka.actor.*
import org.apache.pekko.actor.*
import scala.concurrent.duration.*
import scala.collection.immutable.{HashSet}

2 changes: 1 addition & 1 deletion ring/src/GatherPut.scala
@@ -1,6 +1,6 @@
package kvs.rng

import akka.actor.{ActorLogging, ActorRef, FSM, Props, RootActorPath}
import org.apache.pekko.actor.{ActorLogging, ActorRef, FSM, Props, RootActorPath}
import scala.concurrent.duration.*

import data.Data, model.{StoreGetAck, StorePut}
6 changes: 3 additions & 3 deletions ring/src/Hash.scala
@@ -1,8 +1,8 @@
package kvs.rng

import akka.actor.*
import akka.cluster.ClusterEvent.*
import akka.cluster.{Member, Cluster}
import org.apache.pekko.actor.*
import org.apache.pekko.cluster.ClusterEvent.*
import org.apache.pekko.cluster.{Member, Cluster}
import scala.collection.immutable.{SortedMap, SortedSet}
import proto.*

2 changes: 1 addition & 1 deletion ring/src/Hashing.scala
@@ -1,6 +1,6 @@
package kvs.rng

import akka.actor.*
import org.apache.pekko.actor.*
import java.security.MessageDigest
import scala.annotation.tailrec
import scala.collection.SortedMap
4 changes: 2 additions & 2 deletions ring/src/Replication.scala
@@ -1,8 +1,8 @@
package kvs.rng

import scala.collection.immutable.SortedMap
import akka.actor.{ActorLogging, Props, FSM}
import akka.cluster.Cluster
import org.apache.pekko.actor.{ActorLogging, Props, FSM}
import org.apache.pekko.cluster.Cluster

import model.{ReplBucketPut, ReplBucketsVc, ReplGetBucketIfNew, ReplBucketUpToDate, ReplNewerBucketData, KeyBucketData}
import ReplicationSupervisor.{State, ReplGetBucketsVc}
2 changes: 1 addition & 1 deletion ring/src/SelectionMemorize.scala
@@ -1,6 +1,6 @@
package kvs.rng

import akka.actor.*
import org.apache.pekko.actor.*

case class Watch(a: ActorRef)
case class Select(node: Node, path: String)
4 changes: 2 additions & 2 deletions ring/src/StoreReadonly.scala
@@ -1,8 +1,8 @@
package kvs.rng
package store

import akka.actor.{Actor, ActorLogging, Props}
import akka.cluster.{VectorClock}
import org.apache.pekko.actor.{Actor, ActorLogging, Props}
import org.apache.pekko.cluster.{VectorClock}
import org.rocksdb.*
import proto.{encode, decode}

4 changes: 2 additions & 2 deletions ring/src/StoreWrite.scala
@@ -1,8 +1,8 @@
package kvs.rng
package store

import akka.actor.{Actor, ActorLogging, Props}
import akka.cluster.{Cluster, VectorClock}
import org.apache.pekko.actor.{Actor, ActorLogging, Props}
import org.apache.pekko.cluster.{Cluster, VectorClock}
import org.rocksdb.*
import proto.{encode, decode}

4 changes: 2 additions & 2 deletions ring/src/cluster.scala
@@ -1,9 +1,9 @@
package akka
package org.apache.pekko

import proto.*

package object cluster {
implicit val vcodec: MessageCodec[(String,Long)] = caseCodecNums[(String,Long)]("_1"->1, "_2"->2)
implicit val vccodec: MessageCodec[akka.cluster.VectorClock] = caseCodecNums[akka.cluster.VectorClock]("versions"->1)
implicit val vccodec: MessageCodec[org.apache.pekko.cluster.VectorClock] = caseCodecNums[org.apache.pekko.cluster.VectorClock]("versions"->1)
val emptyVC = VectorClock()
}
6 changes: 3 additions & 3 deletions ring/src/conf.scala
@@ -21,8 +21,8 @@ case class Conf(
, dir: String = "data_rng"
)

def akkaConf(name: String, host: String, port: Int): String = s"""
akka {
def pekkoConf(name: String, host: String, port: Int): String = s"""
pekko {
actor {
provider = cluster
deployment {
@@ -59,6 +59,6 @@ def akkaConf(name: String, host: String, port: Int): String = s"""
hostname = $host
port = $port
}
cluster.seed-nodes = [ "akka://$name@$host:$port" ]
cluster.seed-nodes = [ "pekko://$name@$host:$port" ]
}
"""
6 changes: 3 additions & 3 deletions ring/src/dba.scala
@@ -1,8 +1,8 @@
package kvs.rng

import akka.actor.{Actor, ActorLogging, Props, Deploy}
import akka.event.Logging
import akka.routing.FromConfig
import org.apache.pekko.actor.{Actor, ActorLogging, Props, Deploy}
import org.apache.pekko.event.Logging
import org.apache.pekko.routing.FromConfig
import kvs.rng.store.{ReadonlyStore, WriteStore}
import org.rocksdb.{util as _, *}
import proto.*
33 changes: 19 additions & 14 deletions ring/src/merge.scala
@@ -29,7 +29,8 @@ object MergeOps {
@tailrec def loop(xs: Vector[KeyBucketData], acc: HashMap[Bytes, KeyBucketData]): Vector[KeyBucketData] = {
xs match
case Vector() => acc.values.toVector
case received +: t =>
case Vector(received, s*) =>
val t = s.toVector
val k = Bytes.unsafeWrap(received.key)
acc.get(k) match
case None =>
@@ -49,7 +50,8 @@ def loop(xs: Vector[KeyBucketData], acc: HashMap[Bytes, KeyBucketData]): Vector[KeyBucketData] =
def loop(xs: Vector[KeyBucketData], acc: HashMap[Bytes, KeyBucketData]): Vector[KeyBucketData] =
xs match
case Vector() => acc.values.toVector
case received +: t =>
case Vector(received, s*) =>
val t = s.toVector
val k = Bytes.unsafeWrap(received.key)
acc.get(k) match
case None =>
@@ -68,20 +70,23 @@ def loop(xs: Vector[Option[Data]], newest: Option[Data]): Option[Data] =
def loop(xs: Vector[Option[Data]], newest: Option[Data]): Option[Data] =
xs match
case Vector() => newest
case None +: t => loop(t, newest)
case (r@Some(received)) +: t =>
newest match
case None => loop(t, r)
case s@Some(saved) =>
saved < received match
case OkLess(true) => loop(t, r)
case OkLess(false) => loop(t, s)
case ConflictLess(true, _) => loop(t, r)
case ConflictLess(false, _) => loop(t, s)
case Vector(x, s*) =>
x match
case None => loop(s.toVector, newest)
case r@Some(received) =>
val t = s.toVector
newest match
case None => loop(t, r)
case s@Some(saved) =>
saved < received match
case OkLess(true) => loop(t, r)
case OkLess(false) => loop(t, s)
case ConflictLess(true, _) => loop(t, r)
case ConflictLess(false, _) => loop(t, s)
xs match
case Vector() => None -> HashSet.empty
case h +: t =>
val correct = loop(t.map(_._1), h._1)
case Vector(h, t*) =>
val correct = loop(t.view.map(_._1).toVector, h._1)
def makevc1(x: Option[Data]): VectorClock = x.map(_.vc).getOrElse(emptyVC)
val correct_vc = makevc1(correct)
correct -> xs.view.filterNot(x => makevc1(x._1) == correct_vc).map(_._2).to(HashSet)
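The rewrites in this file replace the `+:` extractor with a vararg `Vector(head, tail*)` pattern; both destructure a non-empty Vector, as this standalone comparison (not code from the repository) shows:

```scala
// Two equivalent ways to split a non-empty Vector into head and tail in Scala 3.
def sum1(xs: Vector[Int]): Int = xs match
  case Vector()      => 0
  case received +: t => received + sum1(t) // +: extractor, t stays a Vector

def sum2(xs: Vector[Int]): Int = xs match
  case Vector()             => 0
  case Vector(received, s*) => received + sum2(s.toVector) // s is bound as a Seq
```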
6 changes: 3 additions & 3 deletions ring/src/package.scala
@@ -1,18 +1,18 @@
package kvs.rng

import akka.actor.{Address, ActorRef}
import org.apache.pekko.actor.{Address, ActorRef}
import proto.*

type Bucket = Int
type VNode = Int
type Node = Address
type Key = Array[Byte]
type Value = Array[Byte]
type VectorClock = akka.cluster.VectorClock
type VectorClock = org.apache.pekko.cluster.VectorClock
type Age = (VectorClock, Long)
type PreferenceList = Set[Node]

val emptyVC = akka.cluster.emptyVC
val emptyVC = org.apache.pekko.cluster.emptyVC

extension (value: String)
def blue: String = s"\u001B[34m${value}\u001B[0m"
6 changes: 3 additions & 3 deletions ring/src/pickle.scala
@@ -1,11 +1,11 @@
package kvs.rng

import akka.actor.{ExtendedActorSystem}
import akka.serialization.{BaseSerializer}
import org.apache.pekko.actor.{ExtendedActorSystem}
import org.apache.pekko.serialization.{BaseSerializer}
import proto.*

import kvs.rng.model.*, kvs.rng.data.codec.*
import akka.cluster.given
import org.apache.pekko.cluster.given

class Serializer(val system: ExtendedActorSystem) extends BaseSerializer {
implicit val msgCodec: MessageCodec[Msg] = {
4 changes: 2 additions & 2 deletions ring/src/system.scala
@@ -3,7 +3,7 @@ package kvs.rng
import com.typesafe.config.{ConfigFactory, Config}
import zio.*

type ActorSystem = akka.actor.ActorSystem
type ActorSystem = org.apache.pekko.actor.ActorSystem

object ActorSystem:
case class Conf(name: String, config: Config)
@@ -16,7 +16,7 @@ object ActorSystem:
ZIO.acquireRelease(
for
conf <- ZIO.service[Conf]
system <- ZIO.attempt(akka.actor.ActorSystem(conf.name, conf.config))
system <- ZIO.attempt(org.apache.pekko.actor.ActorSystem(conf.name, conf.config))
yield system
)(
system => ZIO.fromFuture(_ => system.terminate()).either
2 changes: 1 addition & 1 deletion search/src/dba.scala
@@ -15,7 +15,7 @@ class DbaEff(dba: Dba):
def delete(key: K): R[Unit] = run(dba.delete(stob(key)))

private def run[A](eff: IO[Err, A]): R[A] =
Unsafe.unsafe(Runtime.default.unsafe.run(eff.either).toEither).flatten
Unsafe.unsafely(Runtime.default.unsafe.run(eff.either).toEither).flatten

private inline def stob(s: String): Array[Byte] =
s.getBytes("utf8").nn
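This helper, like the one in `sharding/src/consistency.scala` below, switches from `Unsafe.unsafe` to `Unsafe.unsafely` for running effects synchronously; a minimal sketch of the same pattern, assuming the ZIO 2 API used above:

```scala
import zio.*

// Run an effect synchronously on the default runtime; Unsafe.unsafely
// provides the Unsafe capability required by the unsafe runtime API.
def runSync[E, A](eff: IO[E, A]): Either[E, A] =
  Unsafe.unsafely {
    Runtime.default.unsafe.run(eff.either).getOrThrowFiberFailure()
  }
```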
4 changes: 2 additions & 2 deletions sharding/src/consistency.scala
@@ -1,6 +1,6 @@
package kvs.sharding

import akka.actor.{Actor, Props}
import org.apache.pekko.actor.{Actor, Props}
import kvs.rng.DbaErr
import zio.*

@@ -20,7 +20,7 @@ object SeqConsistency:
cfg.name
, Props(new Actor:
def receive: Receive =
a => sender() ! Unsafe.unsafe(Runtime.default.unsafe.run(cfg.f(a)))
a => sender() ! Unsafe.unsafely(Runtime.default.unsafe.run(cfg.f(a)))
)
, cfg.id)
yield