
NNV (No-Named.V)

logo

NNV (No-Named.V) is a database built from scratch with the goal of reaching production. NNV can be deployed in edge environments and used in small-scale production settings. Through the architectural approach described below, it is being developed to run reliably in large-scale production environments as well.

🎉 Release Update - 2024.11.14

For the full update history, see UPDATE HISTORY.


🔹 NNV-Edge

  • No updates in this release.

🔹 NNV

See the detailed comparison with ChromaDB for search results.

Enhancements

  • Restored HNSW: The previously used HNSW algorithm has been reintroduced.
  • New Product Quantization (PQ): HNSW Product Quantization has been added for improved efficiency.
  • Pure Go Implementation: All CGO dependencies have been removed, making the implementation entirely in Go.
  • Optimized Search Speed:
    • 50,000-item dataset: < 14ms
    • 10,000-item dataset or fewer: < 3ms

🚀 Update Preview

⚠️ The expected release date is TBD. Development is ongoing, and updates will be added as we progress. (It's slow because I work on this in my spare time outside of work.) 😭


🔸 Planned Features and Improvements

NNV-Edge

  • Enhanced Logging: Detailed logging will be added for better traceability and debugging.
  • Edge-based Project Integration: Ongoing work with Edge-based projects will continue, with improvements based on progress and feedback.

NNV

  • Cosine Similarity Compatibility: PQ (Product Quantization) operates primarily on Euclidean distance, so supporting cosine similarity requires vector-normalization logic. (For normalized vectors, Euclidean distance yields the same ranking as cosine similarity; see the sketch after this list.)
  • RPC Setup for HNSW: RPC functionality for HNSW is planned to facilitate remote usage.
  • Storage Enhancements: Fast in-memory storage and reliable disk-based storage will be introduced.
  • System Idle-State Backup: An automatic backup process will be added to periodically save data during idle states.
  • Automatic Recovery: A feature for automatic recovery will be implemented.
  • Advanced Filtering: Support for expressions and various range searches will be included in the filter functionality.
  • Performance Benchmarking: Comprehensive benchmarking will be conducted once the system stabilizes.
  • Load Balancer: A load balancer will be developed post-stabilization to manage system load effectively.
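
As a small illustration of the normalization point above (plain Go, not NNV code): once vectors are scaled to unit length, squared Euclidean distance and cosine similarity rank neighbors identically, since ||a-b||^2 = 2 - 2*cos(a,b) for unit vectors.

```go
package main

import (
	"fmt"
	"math"
)

// normalize scales v to unit length so that Euclidean distance
// ranks neighbors the same way cosine similarity does.
func normalize(v []float32) []float32 {
	var sum float64
	for _, x := range v {
		sum += float64(x) * float64(x)
	}
	norm := math.Sqrt(sum)
	if norm == 0 {
		return v
	}
	out := make([]float32, len(v))
	for i, x := range v {
		out[i] = float32(float64(x) / norm)
	}
	return out
}

func main() {
	a := normalize([]float32{3, 4})
	b := normalize([]float32{4, 3})
	var d, dot float64
	for i := range a {
		diff := float64(a[i] - b[i])
		d += diff * diff
		dot += float64(a[i]) * float64(b[i])
	}
	// For unit vectors the two quantities coincide: 0.0800 == 0.0800.
	fmt.Printf("squared L2 = %.4f, 2 - 2*cos = %.4f\n", d, 2-2*dot)
}
```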

⚠️ Important Notice

Performance may be temporarily reduced due to ongoing development. Thank you for your patience!


Run from the source code.

Windows & Linux
git clone https://github.com/sjy-dv/nnv
cd nnv
go run cmd/root/main.go

MacOS
**The CPU acceleration (SSE, AVX2, AVX-512) code currently does not work on macOS, and fixing it is not a priority at this time.**

git clone https://github.com/sjy-dv/nnv
cd nnv
source .env
deploy
make simple-docker


Features

When planning this project, I gave it a lot of thought.

When setting up the cluster environment, it's natural for most developers to choose the RAFT algorithm, as I had always done before, because it's a proven approach used by successful projects.

However, I began to wonder: isn't it a bit complex? RAFT increases read availability but decreases write availability. So, how would I solve this if multi-write becomes necessary in the long run?

Given the nature of vector databases, I assumed that most services would be structured around batch jobs rather than real-time writing. But does that mean I can simply skip the issue? I didn't think so. Yet building a multi-leader setup on top of RAFT using something like gossip felt extremely complex and difficult.

Therefore, as of today (2024-10-20), I am considering two architectural approaches.

ARCHITECTURE

The architecture is divided into two approaches.

LoadBalancer & Database Integration

First, a load balancer is placed in front, supporting both sharding and replication of the data. The databases behind it remain in a pure, unmodified state.

Diagrams: Replica LB and Shard LB.

The replication load balancer waits for all databases to successfully complete writes before committing or rolling back, while the shard load balancer distributes the load evenly across the shard databases to ensure similar storage capacities.
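
To illustrate the replication balancer's commit-or-rollback write path, here is a minimal Go sketch; the Replica interface and its Prepare/Commit/Rollback methods are hypothetical stand-ins, not NNV's actual API:

```go
package lb

import "fmt"

// Replica is a hypothetical interface for one backing database node.
type Replica interface {
	Prepare(key string, value []byte) error // stage the write
	Commit(key string) error                // make it durable
	Rollback(key string) error              // discard the staged write
}

// replicatedWrite stages the write on every replica and commits only
// if all of them succeed; otherwise every staged replica rolls back.
func replicatedWrite(replicas []Replica, key string, value []byte) error {
	for i, r := range replicas {
		if err := r.Prepare(key, value); err != nil {
			// Undo the replicas that already staged this write.
			for _, done := range replicas[:i] {
				done.Rollback(key)
			}
			return fmt.Errorf("prepare failed, rolled back: %w", err)
		}
	}
	for _, r := range replicas {
		if err := r.Commit(key); err != nil {
			return fmt.Errorf("commit failed on a replica: %w", err)
		}
	}
	return nil
}
```

A shard balancer's write path would be simpler by comparison: it routes each write to exactly one shard instead of fanning out to all nodes.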

The key difference is that replication can slow down write operations but provides faster read performance in the medium to long term compared to the shard load balancer. On the other hand, the shard approach offers faster write speeds because it only commits to a specific shard, but reading requires gathering data from all shards, which is slower initially but could become faster than replication as the dataset grows.

Therefore, for managing large volumes of data, the shard balancer is slightly more recommended. However, the main point of both architectures is their simplicity in setup and management, making them as easy to handle as a typical backend server.

JetStream(Nats) Multi-Leader


The second approach utilizes JetStream for the configuration.

While this is architecturally simpler than the previous approach, from the user's perspective, the setup is not significantly different from RAFT.

However, the key difference is that, unlike RAFT, it supports multi-write and multi-read configurations, rather than single-write and multi-read.

In this approach, the database is configured in a replication format, and JetStream is used to enable multi-leader configurations.

Each database contains its own JetStream, and these JetStreams join the same group of topics and clusters. Whenever any node attempts to publish changes to a row, those changes pass through the same JetStream. If two nodes attempt to modify the same data in parallel, they compete to publish their changes. Simply suppressing one node's changes could lead to data loss, and under JetStream's RAFT quorum constraint only one writer can publish a change at a time, so we designed the system to let the last writer win. This is not an issue for vector databases because, compared to traditional databases, the data structure is simpler (this doesn't imply that the system itself is simple, but rather that there are fewer complex transactions and procedures, such as transaction serialization). This also avoids global locks and performance bottlenecks.
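
As a rough sketch of this wiring using the nats.go client (assuming a locally running NATS server with JetStream enabled; the stream name, subjects, and payloads below are invented for illustration):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the local NATS server (JetStream must be enabled).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// All nodes share one stream of row changes; the stream's total
	// order is what makes "last writer wins" well defined.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:     "CHANGES",
		Subjects: []string{"changes.>"},
	}); err != nil {
		log.Fatal(err)
	}

	// Every node applies changes in stream order, so a later publish
	// for the same row simply overwrites the earlier one.
	if _, err := js.Subscribe("changes.>", func(m *nats.Msg) {
		fmt.Printf("apply %s -> %s\n", m.Subject, string(m.Data))
		m.Ack()
	}); err != nil {
		log.Fatal(err)
	}

	// Two "nodes" racing to update the same row: whichever publish
	// lands later in the stream wins.
	js.Publish("changes.row42", []byte("vector-v1"))
	js.Publish("changes.row42", []byte("vector-v2"))

	time.Sleep(time.Second) // let the demo subscriber drain
}
```

Because every node consumes the same totally ordered stream, applying messages in order is what makes "last writer wins" deterministic.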


Summary:

  1. RAFT and Quorum Constraints
    RAFT is an algorithm that dictates which server writes data first. In RAFT, the concept of a quorum refers to the minimum number of servers required to confirm data before it's written. This ensures that even if two servers try to write data simultaneously, RAFT allows only one server to write first.
  2. Last Writer Wins
    Even if one server writes data first, the server that writes last ultimately "wins": the data from the last server to write overwrites the previous server's data (see the sketch after this summary).
  3. Transaction Serialization Concerns
    Transaction serialization refers to ensuring that consistent actions occur across multiple tables. In NNV, to improve performance, global locking (locking all servers before writing data) is avoided. Instead, when multiple servers modify data simultaneously, the last one to modify it will win. This approach is feasible because vector databases are simpler than traditional databases—they don’t require complex transaction serialization across multiple tables or collections.
  4. Why This Design?
    The primary reason is performance. Locking all servers before processing data is safe but slow. Instead, allowing each server to freely modify data and accepting the last modification as the final result is faster and more efficient.
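
Reduced to code, "last writer wins" is just a comparison on the stream sequence number attached to each change; here is a minimal Go sketch with a hypothetical Change type, not NNV's actual code:

```go
package main

import "fmt"

// Change is a hypothetical row change carrying the JetStream
// sequence number under which it was published.
type Change struct {
	Seq   uint64 // position in the shared stream's total order
	RowID string
	Data  string
}

// store keeps, per row, only the change with the highest sequence:
// the last writer in stream order wins.
type store struct {
	rows map[string]Change
}

func (s *store) apply(c Change) {
	cur, ok := s.rows[c.RowID]
	if !ok || c.Seq > cur.Seq {
		s.rows[c.RowID] = c // newer write overwrites the older one
	}
	// Older (or duplicate) changes are simply ignored.
}

func main() {
	s := &store{rows: map[string]Change{}}
	// Two nodes raced on row42; seq 7 was published after seq 6.
	s.apply(Change{Seq: 6, RowID: "row42", Data: "vector-from-node-A"})
	s.apply(Change{Seq: 7, RowID: "row42", Data: "vector-from-node-B"})
	fmt.Println(s.rows["row42"].Data) // vector-from-node-B wins
}
```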

I will explain the internal data storage flow.


First, HNSW operates in memory internally, and its data is stored as cached files. However, this poses a risk of data corruption in the event of an abnormal shutdown.

To address this, nnlogdb (no-named-tsdb) is internally deployed to track insert, update, and delete events. Since only metadata and vectors are needed without node links, this is not a significant issue.

The observer continuously compares the tracked log values with the latest node, and if a problem arises, HNSW recovery is initiated.
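
To make the recovery flow concrete, here is a minimal Go sketch of replaying such a log; the LogEntry type and Index interface are hypothetical stand-ins, not nnlogdb's actual format:

```go
package recovery

// Op marks the kind of change recorded in the write log.
type Op int

const (
	OpInsert Op = iota
	OpUpdate
	OpDelete
)

// LogEntry is a hypothetical nnlogdb record: only the vector and its
// metadata are logged, not the HNSW node links, which are rebuilt.
type LogEntry struct {
	Op     Op
	ID     uint64
	Vector []float32
	Meta   map[string]string
}

// Index is a minimal stand-in for the in-memory HNSW index.
type Index interface {
	Insert(id uint64, vec []float32, meta map[string]string)
	Delete(id uint64)
}

// recoverIndex replays all log entries newer than the last known-good
// sync point, restoring the index to its pre-crash state.
func recoverIndex(idx Index, entries []LogEntry) {
	for _, e := range entries {
		switch e.Op {
		case OpInsert, OpUpdate:
			idx.Insert(e.ID, e.Vector, e.Meta) // update = re-insert
		case OpDelete:
			idx.Delete(e.ID)
		}
	}
}
```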

Disk files can sometimes become corrupted and fail to open, leading to significant issues. Is cached data safe?

Cache data files support fast saving and loading through DEFLATE compression (and INFLATE decompression). However, cache files are inherently much less stable than disk files.

To address this, we keep "old" versions. These versions are not user-specified; they are managed internally. During idle periods, data changes are saved as a new cache file, while the previous stable, successfully opened version is kept as the "old" version. When this happens, the last update time of the "old" version is aligned with the sync time in nnlogdb.

To manage disk usage efficiently, all previous partitions up to the reliably synced period are deleted.

This approach ensures stable data management.
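
For reference, the save/load path for a DEFLATE-compressed cache file can be sketched with Go's standard compress/flate package (a minimal illustration, not NNV's actual cache format):

```go
package cache

import (
	"bytes"
	"compress/flate"
	"io"
	"os"
)

// saveCache writes data to path, DEFLATE-compressed for fast,
// compact cache files.
func saveCache(path string, data []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	zw, err := flate.NewWriter(f, flate.BestSpeed)
	if err != nil {
		return err
	}
	if _, err := zw.Write(data); err != nil {
		return err
	}
	return zw.Close() // flushes the remaining compressed bytes
}

// loadCache reads a DEFLATE-compressed cache file back into memory
// (the INFLATE side of the algorithm).
func loadCache(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	zr := flate.NewReader(f)
	defer zr.Close()

	var buf bytes.Buffer
	if _, err := io.Copy(&buf, zr); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```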

Does disk storage also have structural flexibility?

Disk storage will not initially have structural flexibility. However, in the long term, we aim to either introduce flexibility for the disk structure or, unfortunately, impose some restrictions on the memory side. While no final decision has been made, we believe memory storage should maintain flexibility, so we’re likely to design disk storage with some degree of structural flexibility in the future. SQLite will be supported as the disk storage option.

What is NNV-Edge?

Edge refers to the ability to transmit and receive data on nearby devices without communication with a central server. However, in practice, "Edge" in software may sometimes differ from this concept, as it is often deployed in lighter, resource-constrained environments compared to a central server.

NNV-Edge is designed to operate quickly on smaller-scale vector datasets (up to 1 million vectors) in a lightweight manner, transferring automated tasks from the original NNV back to the user for greater control.

Advanced algorithms like HNSW, Faiss, and Annoy are excellent, but don't you think they may be a bit heavy for smaller-scale hardware? And setting algorithms aside, while projects like Milvus, Weaviate, and Qdrant are built by brilliant minds, aren't they somewhat too resource-intensive to run alongside other software on small, portable devices? That's where NNV-Edge comes in.

What if you distribute data across multiple edges? By using NNV-Edge with the previously mentioned load balancer, you can create an advanced setup that shards data across multiple edge nodes and aggregates it seamlessly!
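
As a hint of what such a setup could look like, here is a minimal Go sketch of hash-based routing across several edge nodes; the addresses and the single-shard write rule are assumptions for illustration:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// edges lists hypothetical NNV-Edge nodes behind the balancer.
var edges = []string{
	"http://edge-0:8080",
	"http://edge-1:8080",
	"http://edge-2:8080",
}

// shardFor picks the edge that owns a key, so writes for the same
// key always land on the same node.
func shardFor(key string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return edges[int(h.Sum32())%len(edges)]
}

func main() {
	// Writes are routed to one shard; a search would instead be
	// fanned out to every edge and the results merged by distance.
	for _, key := range []string{"doc-1", "doc-2", "doc-3"} {
		fmt.Printf("%s -> %s\n", key, shardFor(key))
	}
}
```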
