Confluent Control Center disk space requirements - why so large?

821 views · Asked by John

The system requirements for Confluent Control Center state that 500 GB of disk space is required. Control Center appears to store some data in Kafka itself, so what is this storage actually required for? i.e., what is preventing Control Center from being deployed as a container in Kubernetes without persistent volume storage?

1 answer
Yes, Control Center does store some data back in Kafka for its rolling aggregates, but it computes those aggregates with Kafka Streams, which is backed by RocksDB and keeps that state on local disk.
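As a sketch of where that state lands on disk: Control Center exposes a data-directory setting for its Kafka Streams/RocksDB state (the property name below is from the Confluent docs, but the path is an illustrative assumption - check the reference for your version):

```properties
# Local directory where Control Center keeps its Kafka Streams / RocksDB state.
# This is the directory that needs the large disk allocation; pointing it at a
# persistent volume is what lets a restarted pod skip rebuilding state from Kafka.
confluent.controlcenter.data.dir=/var/lib/confluent/control-center
```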
The latest documentation says less than 500GB, though.
If you have used Control Center, you will know that you can access months of historical Kafka metrics.
300 GB isn't really that much when you look at the average Oracle or SQL Server database at most companies.
Other than losing the ability to retain historical data, and a slow start-up period while a rebooted pod restores its state from Kafka, probably not much.
Dedicating 8 cores and 32 GB of RAM to a container, though, might make for one heavy pod.
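If you do run it on Kubernetes, the points above roughly translate into a StatefulSet with a persistent volume for the state directory. A minimal sketch, assuming illustrative names, image tag, and storage size - only the CPU/RAM figures come from the sizing discussed above:

```yaml
# Illustrative StatefulSet fragment for Control Center; not an official manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: control-center
spec:
  serviceName: control-center
  replicas: 1
  selector:
    matchLabels:
      app: control-center
  template:
    metadata:
      labels:
        app: control-center
    spec:
      containers:
        - name: control-center
          image: confluentinc/cp-enterprise-control-center:7.6.0  # version is an assumption
          resources:
            requests:
              cpu: "8"        # the sizing mentioned above
              memory: 32Gi
          volumeMounts:
            - name: data
              mountPath: /var/lib/confluent/control-center  # assumed data.dir
  volumeClaimTemplates:
    - name: data              # persistent volume so RocksDB state survives restarts
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 300Gi    # illustrative, per the disk figure discussed above
```

Without the `volumeClaimTemplates` section the pod would still run, but every restart would rebuild the full aggregate state from Kafka, which is the slow start-up the answer mentions.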