Companies looking to store massive amounts of data in Kafka can now do so at lower cost through the "Infinite Storage" option. The feature, launched by Confluent, is aimed at making long-term event storage economically feasible: because storage and compute are now separated, a company's event data can scale automatically. Apache Kafka is one of the leading platforms for storing event data, including the unstructured and semi-structured data typically produced by applications and people. Kafka provides a mechanism for storing data streams and later flowing them into data warehouses and data lakes.
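As a concrete sketch of how long-term retention is expressed in Kafka itself, retention is controlled per topic; the commands below are a minimal example using Kafka's standard CLI tools, assuming a broker at localhost:9092 and a hypothetical topic named `events`.

```shell
# Create a topic whose log is never deleted by the retention policy:
# retention.ms=-1 disables time-based deletion and retention.bytes=-1
# disables size-based deletion. (Broker address, topic name, and
# partition count are assumptions for illustration.)
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic events \
  --partitions 6 \
  --config retention.ms=-1 \
  --config retention.bytes=-1

# Or relax retention on an existing topic:
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name events \
  --add-config retention.ms=-1,retention.bytes=-1
```

In a self-managed cluster, unlimited retention like this consumes broker disk directly, which is exactly the cost pressure that separating storage from compute is meant to relieve.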
Companies create giant Kafka clusters to store event data for historical analysis. Previously, this was unaffordable because of technical barriers: Kafka customers had to purchase compute at scale even if they were never going to use that computational horsepower fully. The breakthrough unveiled by Confluent is that storage and compute can now be separated. Dan Rosanova, group product manager at Confluent, called it a milestone because Kafka's compute and storage layers have traditionally been tightly knit.