test(integration): add pg kvbackend #5331

Closed
wants to merge 1 commit
19 changes: 19 additions & 0 deletions .github/workflows/develop.yml
@@ -703,6 +703,25 @@ jobs:
GT_KAFKA_ENDPOINTS: 127.0.0.1:9092
GT_KAFKA_SASL_ENDPOINTS: 127.0.0.1:9093
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Run integration test with pg kvbackend
run: cargo test integration
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ vars.AWS_CI_TEST_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
GT_S3_REGION: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
GT_MINIO_BUCKET: greptime
GT_MINIO_ACCESS_KEY_ID: superpower_ci_user
GT_MINIO_ACCESS_KEY: superpower_password
GT_MINIO_REGION: us-west-2
GT_MINIO_ENDPOINT_URL: http://127.0.0.1:9000
GT_POSTGRES_ENDPOINTS: postgres://greptimedb:[email protected]:5432/postgres
GT_KAFKA_ENDPOINTS: 127.0.0.1:9092
GT_KAFKA_SASL_ENDPOINTS: 127.0.0.1:9093
UNITTEST_LOG_DIR: "__unittest_logs"
Member:

This will double our runner time and CI cost. We need to figure out a way to run this effectively.

@CookiePieWw (Collaborator, Author), Jan 10, 2025:

Not double actually, but still quite slow.
[image]

Besides, I doubt whether the pg kvbackend is actually running; I didn't see the info logs that tell me which backend it's using...

If we'd like to run the integration tests over the pg kvbackend, the cost seems inevitable. I wonder if there's a way to run it only when something related to the pg_kvbackend feature has changed.
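One way to do "run only when pg_kvbackend-related code changes" could be a path-gated job. A minimal sketch, assuming the dorny/paths-filter action and hypothetical path globs (the actual files that touch the pg kvbackend would need to be confirmed; none of this is part of the PR):

```yaml
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      pg_kvbackend: ${{ steps.filter.outputs.pg_kvbackend }}
    steps:
      - uses: actions/checkout@v4
      # Sets outputs.pg_kvbackend to 'true' when any matching file changed.
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            pg_kvbackend:
              - 'src/common/meta/src/kv_backend/postgres.rs'
              - 'tests-integration/**'

  pg-kvbackend-test:
    needs: changes
    # Skip the expensive integration run unless pg-related code changed.
    if: needs.changes.outputs.pg_kvbackend == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run integration test with pg kvbackend
        run: cargo test integration
```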

Member:

Can we move this to a separate job that uses a free runner (ubuntu-latest) to see if it's capable of running this?
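A minimal sketch of what such a separate ubuntu-latest job might look like, assuming a Postgres service container and reusing the connection string from this PR's env block; the job name and image version are illustrative, not part of the PR:

```yaml
pg-kvbackend-integration-test:
  runs-on: ubuntu-latest
  services:
    # Postgres instance for the pg kvbackend, exposed on localhost:5432.
    postgres:
      image: postgres:16
      env:
        POSTGRES_USER: greptimedb
        POSTGRES_PASSWORD: admin
        POSTGRES_DB: postgres
      ports:
        - 5432:5432
  steps:
    - uses: actions/checkout@v4
    - name: Run integration test with pg kvbackend
      run: cargo test integration
      env:
        RUST_BACKTRACE: 1
        GT_POSTGRES_ENDPOINTS: postgres://greptimedb:[email protected]:5432/postgres
```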

@CookiePieWw (Collaborator, Author):

Sounds reasonable to me. I'll give it a try.

Member:

Also remember to change the command from `cargo test integration` to `cd tests-integration && cargo nextest run`. This will reduce the number of test binaries that need to be compiled and linked.
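A sketch of the revised step per the suggestion above, assuming cargo-nextest is already available on the runner (e.g., installed in an earlier step):

```yaml
- name: Run integration test with pg kvbackend
  # Running nextest from tests-integration limits compilation and linking
  # to that crate's test binaries.
  run: cd tests-integration && cargo nextest run
```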

- name: Codecov upload
uses: codecov/codecov-action@v4
with:
2 changes: 2 additions & 0 deletions tests-integration/Cargo.toml
@@ -5,6 +5,8 @@ edition.workspace = true
license.workspace = true

[features]
pg_kvbackend = ["common-meta/pg_kvbackend", "meta-srv/pg_kvbackend"]
default = ["pg_kvbackend"]
dashboard = []

[lints]
47 changes: 32 additions & 15 deletions tests-integration/src/cluster.rs
@@ -36,11 +36,14 @@ use common_meta::heartbeat::handler::HandlerGroupExecutor;
use common_meta::kv_backend::chroot::ChrootKvBackend;
use common_meta::kv_backend::etcd::EtcdStore;
use common_meta::kv_backend::memory::MemoryKvBackend;
#[cfg(feature = "pg_kvbackend")]
use common_meta::kv_backend::postgres::PgStore;
use common_meta::kv_backend::KvBackendRef;
use common_meta::peer::Peer;
use common_meta::DatanodeId;
use common_runtime::runtime::BuilderBuild;
use common_runtime::Builder as RuntimeBuilder;
use common_telemetry::info;
use common_test_util::temp_dir::create_temp_dir;
use common_wal::config::{DatanodeWalConfig, MetasrvWalConfig};
use datanode::config::{DatanodeOptions, ObjectStoreConfig};
@@ -94,21 +97,35 @@ pub struct GreptimeDbClusterBuilder {

impl GreptimeDbClusterBuilder {
pub async fn new(cluster_name: &str) -> Self {
let endpoints = env::var("GT_ETCD_ENDPOINTS").unwrap_or_default();

let kv_backend: KvBackendRef = if endpoints.is_empty() {
Arc::new(MemoryKvBackend::new())
} else {
let endpoints = endpoints
.split(',')
.map(|s| s.to_string())
.collect::<Vec<String>>();
let backend = EtcdStore::with_endpoints(endpoints, 128)
.await
.expect("malformed endpoints");
// Each retry requires a new isolation namespace.
let chroot = format!("{}{}", cluster_name, Uuid::new_v4());
Arc::new(ChrootKvBackend::new(chroot.into(), backend))
let etcd_endpoints = env::var("GT_ETCD_ENDPOINTS").unwrap_or_default();
let pg_endpoint = env::var("GT_PG_ENDPOINTS").unwrap_or_default();

let kv_backend: KvBackendRef = match (etcd_endpoints.is_empty(), pg_endpoint.is_empty()) {
(true, true) => {
info!("Using memory kv backend");
Arc::new(MemoryKvBackend::new())
}
(false, _) => {
info!("Using etcd endpoints: {}", etcd_endpoints);
let endpoints = etcd_endpoints
.split(',')
.map(|s| s.to_string())
.collect::<Vec<String>>();
let backend = EtcdStore::with_endpoints(endpoints, 128)
.await
.expect("malformed endpoints");
// Each retry requires a new isolation namespace.
let chroot = format!("{}{}", cluster_name, Uuid::new_v4());
Arc::new(ChrootKvBackend::new(chroot.into(), backend))
}
(true, false) => {
info!("Using pg endpoint: {}", pg_endpoint);
let backend = PgStore::with_url(&pg_endpoint, "greptime_metakv", 128)
.await
.expect("malformed pg endpoint");
let chroot = format!("{}{}", cluster_name, Uuid::new_v4());
Arc::new(ChrootKvBackend::new(chroot.into(), backend))
}
};

Self {