Powerful & Easy way for big data discovery
Full-stack data engineering tools and infrastructure setup
Terraform module to deploy Apache Druid in Kubernetes
Comparison of batch and real time OLAP databases
An end-to-end distributed deep learning engine built on Apache Flink that works with both streaming and batch data
Druid Exporter acts as a receiver for metric events emitted by Druid clusters over HTTP, and its primary function is to export those metrics to Prometheus, enabling meaningful graphs and visualizations.
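To illustrate the pattern described above (not the project's actual implementation), here is a minimal Python sketch of the same idea: an HTTP endpoint that accepts Druid metric events and re-exposes them to Prometheus. The endpoint path, port numbers, and metric naming are assumptions; in a real setup Druid would be pointed at the receiver via its HTTP emitter (druid.emitter=http and druid.emitter.http.recipientBaseUrl).

```python
# A minimal sketch of the "HTTP receiver -> Prometheus exporter" pattern.
# Endpoint path, ports, and metric naming are illustrative assumptions, not the project's API.
from flask import Flask, request
from prometheus_client import Gauge, start_http_server

app = Flask(__name__)

# One gauge keyed by the Druid metric name and the emitting service.
druid_metric = Gauge(
    "druid_metric_value",
    "Latest value of a metric event received from Druid",
    ["metric", "service"],
)

@app.route("/druid", methods=["POST"])
def receive_druid_events():
    # Druid's HTTP emitter posts a JSON array of metric event objects.
    events = request.get_json(force=True) or []
    for event in events:
        if "metric" in event and "value" in event:
            druid_metric.labels(
                metric=event["metric"],
                service=event.get("service", "unknown"),
            ).set(float(event["value"]))
    return "", 200

if __name__ == "__main__":
    start_http_server(9091)             # Prometheus scrapes http://<host>:9091/metrics
    app.run(host="0.0.0.0", port=8080)  # Druid's HTTP emitter posts events here
```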
A tutorial for the lookup feature in Apache Druid
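As a small taste of what a lookup tutorial covers, the sketch below calls Druid's LOOKUP function through the SQL HTTP API. The router URL, datasource, column, and lookup name are placeholder assumptions; the lookup itself would need to be registered with the cluster beforehand.

```python
# A small sketch of using Druid's LOOKUP function through the SQL HTTP API.
# The router URL, datasource, column, and lookup name below are placeholders.
import requests

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

query = """
SELECT
  LOOKUP(country_code, 'country_names') AS country_name,
  COUNT(*) AS page_views
FROM pageviews
GROUP BY 1
ORDER BY page_views DESC
"""

response = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
response.raise_for_status()
for row in response.json():
    print(row)
```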
Toy data platform for a company that provides web analytics
A collection of jar files that make up a DataGrip driver able to query Apache Druid
Move data from Amazon Redshift to other destinations such as Amazon S3, Apache Druid, and more
Data streaming project with Apache Druid and Grafana: real-time data processing, alerting, and AWS integration. It uses a combination of technologies and services, including Confluent Kafka, Apache Druid, AWS SNS, EC2, Athena, S3, Glue, EventBridge, and Step Functions. Contributions to this solution are welcome!
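For the Kafka leg of a pipeline like this one, the producer side can be as small as the following Python sketch; the broker address, topic name, and event fields are illustrative assumptions, and Druid would consume the topic via its Kafka ingestion.

```python
# A minimal sketch of producing JSON events to a Kafka topic that Druid could ingest.
# Broker address, topic name, and event fields are illustrative assumptions.
import json
import time

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")

event = {
    "timestamp": int(time.time() * 1000),
    "user_id": "u-123",
    "event_type": "page_view",
    "url": "/products/42",
}

producer.produce("clickstream", value=json.dumps(event), callback=delivery_report)
producer.flush()
```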
A proof of concept using HE data: preparation, ingestion into Druid, visualization in Superset, and orchestration in Airflow
NoSQL time-series databases
This project is a data engineering pipeline designed to collect real-time data from an e-commerce platform and process it for visualization using various technologies.
🚀 Learn Apache Druid: a high-performance real-time analytics database.
Hadoop infrastructure for Druid batch ingestion using docker-compose