More than two hundred and thirty years after Benjamin Franklin wrote “In this world, nothing is certain except death and taxes,” it’s abundantly clear that “data growth” belongs on any modern list of certainties. Data is the fuel that drives insight and innovation, but it can become unmanageable and unusable if it is not handled effectively.
Understanding the lifecycle of data is the key to making it a benefit to the organization rather than a burden. The typical data lifecycle involves three stages: creation or acquisition, active use, and preservation. Each of these stages places unique requirements on storage.
During the first stage, data comes into being: captured by a camera, a sensor, or an IoT device, or generated by a transaction or a calculation. Here, storage ingest performance is key, so that streaming data is not lost.
The second stage involves active use of the data by humans and machines alike, whether researchers or AI algorithms. Again, storage performance is key, and data must frequently be moved and copied to support different steps in its processing. These steps often span local data centers and the cloud.
After active use, the focus moves to preservation. Historically, retained data required little additional processing. Increasingly, however, and across many industries, data must be kept accessible, with the expectation that it will be needed again in the future to be re-processed, re-analyzed, or monetized in new ways. It is this stage that many organizations struggle with the most: how can they keep massive, ever-growing quantities of valuable data protected and available over the long term without breaking the bank?
These are the challenges that Quantum ActiveScale was born to solve. This paper provides a detailed review of the ActiveScale architecture, specifically calling out the features and capabilities that drive its unique combination of simplicity, scalability, performance, availability, data durability, security, and low TCO.