September 8, 2015
Some innovators are abandoning long-held database principles. Why?
The website for Room Key, a joint venture of six hotel chains to help travelers find and book lodging, collects data from as many as 17 million pages per month, records an average of 1.5 million events every 24 hours, and handles peak loads of 30 events per second. To process that onslaught of complex information, its database records each event without waiting for some other part of the system to do something first.
The Room Key data store doesn’t much resemble the relational database typically used for transaction processing. One important difference is the way databases record new entries. In a relational database, files are mutable, which means a given cell can be overwritten when there are changes to the data relevant to that cell. Room Key has an accumulate-only file system that overwrites nothing. Each file is immutable, and any changes are recorded as separate timestamped files. The method lends itself not only to faster and more capable stream processing, but also to various kinds of historical time-series analysis—that is, both ends of the data lifecycle continuum.
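The mechanics of an accumulate-only store can be sketched in a few lines. The following minimal Python sketch (the `AppendOnlyStore` class and its room-rate records are invented for illustration, not Datomic's actual API) shows how recording every change as a separate timestamped fact serves both ends of the lifecycle: the current value and the full time series.

```python
import time

class AppendOnlyStore:
    """Minimal accumulate-only store: every write is a new
    timestamped record; nothing is ever overwritten."""

    def __init__(self):
        self._records = []  # grows forever; records are never mutated

    def record(self, entity, attribute, value, ts=None):
        # Each change is a separate immutable fact.
        self._records.append(
            (ts if ts is not None else time.time(), entity, attribute, value)
        )

    def history(self, entity, attribute):
        # Full-fidelity time series for historical analysis.
        return [(ts, v) for ts, e, a, v in self._records
                if e == entity and a == attribute]

    def current(self, entity, attribute):
        # The most recently recorded fact wins; older facts remain.
        hist = self.history(entity, attribute)
        return hist[-1][1] if hist else None

store = AppendOnlyStore()
store.record("room-42", "rate", 119, ts=1)
store.record("room-42", "rate", 139, ts=2)
print(store.history("room-42", "rate"))  # [(1, 119), (2, 139)]
print(store.current("room-42", "rate"))  # 139
```

Nothing here waits on a lock or destroys an old value, which is the property the Room Key architecture exploits.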
The traditional method—overwriting mutable files—originated during the era of smaller data when storage was expensive and all systems were transactional. Now disk storage is inexpensive, and enterprises have lots of data they never had before. They want to minimize the latency caused by overwriting data and to keep all data they write in a full-fidelity form so they can analyze and learn from it. In the future, mutable could become the exception rather than the rule. But as of 2015, the use of immutable files for recording data changes is by far the exception.
The more expansive use of files in a database context is one of the many innovations creating a sea change in database technology. This article examines the evolution of immutable databases and what it means for enterprise databases outside the tech sector.
A growing collection of immutable facts
Whereas transactional databases are systems of record, most new data stores are systems of engagement. These new data stores are designed for analysis, whether they house Internet of Things (IoT) data, social-media comments, or other structured and unstructured data that have current or anticipated analytical value. The new data stores are built on cloud-native architectures, and immutable files are more consistent with the cloud mentality. At this early stage in data analytics, any web-scale architecture is a candidate for a data store that has immutable files.
This evolving technology makes sense for many niches within mainstream enterprises. Those niches are the emerging applications that enterprises might not be using much now, but will be in the future. Forward-leaning enterprises should plant one foot in the future. They need to understand the immutable option and learn to work with it.
Consider this IoT example: The Bosch Group is launching sensor technology that connects railroad cars to the Internet and gathers information while the train is moving. The data will provide insight into temperature, noise, vibrations, and other conditions useful for understanding what is happening to the train and its freight in transit. Data is transmitted wirelessly to servers, evaluated by control logistics processes, and presented in a data portal, integrated with the customer’s business processes.
Such a system would transmit a near-steady stream of data. The users would want the entire data set intact as recorded to study, analyze, learn from, and study again; they would not want anything overwritten by changes. Immutable files are ideal for this use case.
Martin Kleppmann, a serial entrepreneur, developer, and author on sabbatical from LinkedIn, thinks the only reason databases still have mutable state is inertia. Mutable state, he says, is the enemy, something software engineers have tried to root out of every part of the system except databases. Now storage is inexpensive, which makes immutable data storage feasible at scale. Given the economics, Kleppmann says a database should be “an always-growing collection of immutable facts,” rather than a technology that can overwrite any given cell.
Besides the expense, overwriting also harkens back to a time when data was more predictable. Kleppmann, by his own definition a database rebel, questions the conventional wisdom of sticking with traditional data technologies that overwrite, instead of leaving data written the way it is the first time and documenting changes in separate files. He points out that conventional database replication mechanisms already rely on streams of immutable events.
Kleppmann, Pat Helland of Salesforce, and Jay Kreps, formerly of LinkedIn and now CEO of Confluent, are three of the most recent and vocal advocates for the concept of immutability in database technology. They share these views for transactional systems, as well as analytics systems.
At present, Hadoop data lakes are immutable, but there are few other examples. Datomic (Room Key’s partner) and the Microsoft Tango object database are among the other data stores currently available or being developed that claim the ability to support consistent, high-volume transactions and write guarantees without a mutable state. Combinations of Apache Kafka (a messaging broker described later in this article) and Samza may be headed in that direction as well.
What is an immutable or log-centric database, anyway?
Conventional database wisdom says you need to overwrite a cell in a table, or a collection of related cells in multiple tables, and lock what’s affected until the write takes hold. This action must be taken to guarantee integrity within and across related data tables. Write locking by definition builds in contention or dependency among locked data entries, and thus the potential for delay in overwriting the data. One part of the database system needs to wait for cells to be unlocked before writing to them. The need to ensure consistency in transactions involving overwrites, part of the ACID guarantees1 that relational databases are known for, means that write locking is necessary. Some database vendors (Couchbase, for example) claim an ability to perform nonblocking writes, but that’s within the context of eventual rather than immediate consistency.
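The contrast between locked overwrites and lock-free appends can be shown with a toy sketch. In the hypothetical Python below (the account names and functions are invented for illustration), the overwrite path must acquire a lock and discards the old value, while the append path simply adds a record and keeps the history.

```python
import threading

# Mutable, overwrite-in-place: writers contend for the same cell.
balance = {"acct-1": 100}
balance_lock = threading.Lock()

def overwrite(acct, new_value):
    with balance_lock:             # block until the cell is free
        balance[acct] = new_value  # the old value is gone

# Immutable, append-only: each writer adds its own record, so
# writes never wait on one another and nothing is destroyed.
events = []

def append(acct, new_value, seq):
    # list.append is effectively atomic under CPython's GIL
    events.append((seq, acct, new_value))

overwrite("acct-1", 80)
append("acct-1", 80, seq=1)
append("acct-1", 60, seq=2)
print(balance)  # {'acct-1': 80} -- history lost
print(events)   # [(1, 'acct-1', 80), (2, 'acct-1', 60)] -- history kept
```

The locked path builds in exactly the contention described above; the append path trades it for ever-growing storage.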
Analytics have been the latest focus of the immutable file, or log-centric, databases. LinkedIn uses Apache Samza for various resource metrics related to site performance. The monitoring takes the form of a data-flow graph.
As described earlier, traditional databases store data in tables, and cells in those tables are overwritten whenever changes need to be made. But not all data persistence requirements fit this pattern. A different pattern is a long, time-ordered list of facts, such as a system of log files in which each individual fact never changes; the business use of the data lies in understanding that history of facts. The open question with a time-based log is how to answer queries about the current state of the business, since the log records history rather than a single current value.
Immutable transaction logs have existed for decades.
For example, a rental owner would record a history of tenants, but also be interested in who the current tenants are. This scenario sounds like a transaction requiring an overwrite of the field “current tenant.” Instead, when there is a change, the system records that change as a separate record in a separate file. In other words, the files taken together reflect all the changes, whereas in the tabular database, each table must reflect all changes related to it.
Kreps says the immutable equivalent to a table is a log. Kreps observes that the “log is the simplest storage abstraction,” and points out that “a log is not that different from a file or a table. A file is an array of bytes, a table is an array of records, and a log is really just a kind of table or file where records are sorted by time.” So why not store all your records in log form, as Kreps suggests? When a record changes, just store the change, and that becomes a separate timestamped record. Storage is inexpensive enough that this option is possible.
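Kreps's suggestion can be sketched directly. The hypothetical log below revisits the rental example: each tenancy change is a new timestamped record, and the "current tenant" view is derived by replaying the log in time order (the unit names and record layout are invented for illustration).

```python
# A log is "a table where records are sorted by time": the rental
# example as an append-only log of immutable facts.
log = [
    {"ts": 1, "unit": "4B", "tenant": "Alice"},
    {"ts": 2, "unit": "7A", "tenant": "Bob"},
    {"ts": 3, "unit": "4B", "tenant": "Carol"},  # a change is a new record
]

def current_tenants(log):
    """Replay the log in time order; the latest record per unit wins."""
    state = {}
    for rec in sorted(log, key=lambda r: r["ts"]):
        state[rec["unit"]] = rec["tenant"]
    return state

def tenant_history(log, unit):
    """The full history remains available for any unit."""
    return [r["tenant"] for r in sorted(log, key=lambda r: r["ts"])
            if r["unit"] == unit]

print(current_tenants(log))       # {'4B': 'Carol', '7A': 'Bob'}
print(tenant_history(log, "4B"))  # ['Alice', 'Carol']
```

The current-state table becomes a derived view of the log rather than the primary store, which is the inversion Kreps is arguing for.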
Why is immutability in big data stores significant?
Immutable databases promise these advantages:
- Fewer dependencies: Immutable files reduce dependencies or resource contention, which means one part of the system doesn’t need to wait for another to do its thing. That’s a big deal for large, distributed systems that need to scale and evolve quickly. Web companies are highly focused on reducing dependencies. Helland of Salesforce says, “We can now afford to keep immutable copies of lots of data, and one payoff is reduced coordination challenge.”2
- Higher-volume data handling and improved site-response capabilities: Room Key, with the help of Datomic, can present additional options to site visitors who leave before booking a room. Room Key can also handle more than 160 million urgent alerts per month to shoppers, making sure they know when room availability is diminishing and giving them a chance to book rooms quickly during periods of high demand.
- More flexible reads and faster writes: Michael Hausenblas of Mesosphere observes that writing the data without structuring it beforehand means that “you can have both fast reads and writes,” as well as more flexibility in how you view the data.
- Compatibility with microservices architecture, log-based messaging protocols, and Hadoop: LinkedIn’s Apache Samza and Apache Kafka, a simplified, high-volume messaging queue also designed at LinkedIn, are symbiotic and compatible with the Hadoop Distributed File System (HDFS), a popular method of distributed storage for less-structured data.
- Suitability for auditability and forensics, especially in data-driven, fully instrumented online environments: Log-centric databases and the transactional logs of many traditional databases share a common design approach that stresses consistency and durability (the C and D in ACID). But only the fully immutable shared log systems preserve the history that is most helpful for audit trails and forensics.
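The audit-trail advantage in particular follows from a simple property: because facts are only ever appended, the state "as of" any past moment can be reconstructed by replaying the log up to that point. A minimal sketch, with an invented order-status log:

```python
# Each fact: (timestamp, entity, attribute, value). Facts are never
# overwritten, so every past state is recoverable.
log = [
    (1, "order-9", "status", "placed"),
    (2, "order-9", "status", "shipped"),
    (5, "order-9", "status", "delivered"),
]

def as_of(log, ts):
    """State at time ts: replay only the facts recorded up to ts."""
    state = {}
    for t, entity, attr, value in log:
        if t <= ts:
            state[(entity, attr)] = value
    return state

print(as_of(log, 2))  # {('order-9', 'status'): 'shipped'}
print(as_of(log, 9))  # {('order-9', 'status'): 'delivered'}
```

A mutable store that overwrote the status field could answer only the last question; the shared log answers both, which is what makes it suitable for forensics.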
Most of the technologies that make such log systems possible are not new. Systems such as the transaction logs mentioned earlier, which hold timestamped log and other files that contain immutable data, have existed for decades.
Data storage as an immutable series of append-only files has been the norm for more than 15 years in some truly distributed, big data analytics computing environments. Examples include the Google File System (GFS), introduced in the early 2000s, and HDFS, a GFS-inspired clone that followed in the late 2000s and the 2010s. Earlier append-only log-file systems include the Zebra striped network file system from the mid-1990s. Among NoSQL databases, CouchDB (a document store) also stores database files in append-only mode.
Functional programming languages such as Erlang, Haskell, and LISP have seen a resurgence of interest because of the growth of parallel or cluster computing, and all of these embrace the principle of immutability to simplify how state is handled. Rich Hickey, creator of Clojure (a LISP dialect that runs on the Java Virtual Machine) and the founder of Datomic, boils the notion of state down to a value associated with an identity at a given point in time. “Immutable stores aren’t new,” Dave Duggal, founder of full-stack integration provider EnterpriseWeb, points out. “They are common for high-level programming—that’s the reason why immutability proponents are often data-driven application folks.”
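Hickey's notion of state, a value associated with an identity at a point in time, can be sketched in Python using a frozen dataclass to make each value immutable. The `Identity` class and sensor readings here are invented for illustration, not Clojure's or Datomic's actual machinery.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen -> instances cannot be mutated
class Snapshot:
    ts: int
    value: object

class Identity:
    """An identity is a succession of immutable values over time."""

    def __init__(self, name):
        self.name = name
        self._snapshots = []  # the only thing that grows

    def become(self, ts, value):
        # "Change" means adding a new value, never editing an old one.
        self._snapshots.append(Snapshot(ts, value))

    def value_at(self, ts):
        past = [s for s in self._snapshots if s.ts <= ts]
        return past[-1].value if past else None

temp = Identity("sensor-7")
temp.become(1, 20.5)
temp.become(2, 21.0)
print(temp.value_at(1))  # 20.5 -- the old value still exists
print(temp.value_at(2))  # 21.0
```

Because no snapshot is ever mutated, readers at different points in time never conflict with writers, which is the simplification the functional languages exploit on clusters.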
What is new is that Kleppmann and others advocate the immutable, append-only approach as the norm for online analytical processing (OLAP) and online transaction processing (OLTP) environments. On the OLAP side, LinkedIn’s Kreps advocates a unified data architecture that supports stream and batch processing. On the OLTP side, the designers of the Microsoft Tango metadata object store claim fast transactions across partitions. However, the intended purpose of that data store is metadata services, not transactional services, according to the research paper.
Conclusion: Renewed immutability debates and other considerations
It’s interesting to note that the proponents of these immutable, log-centric databases generally don’t have database backgrounds. Instead, they tend to be data-driven application and large-scale systems engineers. It’s also interesting that more traditional database designers can be immutability’s most vocal opponents.
Take, for example, Baron Schwartz, one of the contributors to MySQL, an open-source relational database. Schwartz wrote a cogent critique of databases such as Datomic, RethinkDB, and CouchDB in 2013. Among Schwartz’s arguments:
- Maintaining access to old facts comes at a high price, like the cost of infinitely growing storage.
- Even in a solid-state environment, entities are spread out across physical storage, slowing things down.
- Disks eventually get full, which means you must save the old database and start a new one in the case of CouchDB, and reserve enough space to do so. Running out of space can make a database such as Datomic totally unavailable, he says.
- Relational databases do have short-term immutability that takes the form of old rows maintained in a history list. In this way and others, designs have long steered clear of the shortfalls that log-centric databases seem to be facing now.
Schwartz made some valid points, but these points didn’t seem to factor in the broader data storage landscape and its more recent impact. The fact is that an immutable approach has been in place for a number of years now; in Hadoop clusters with HDFS, for example. That approach continues to evolve. NoSQL database immutability will presumably improve along similar lines.
Schwartz’s points about ACID-compliant databases and how they preserve consistency and availability are well taken, and those databases continue to be well suited to traditional core transactional systems. Those systems aren’t going to change much anytime soon.
“At one point, if you weren’t using MySQL as your back end, VCs [venture capitalists] would take their money away from you.” —Roberto Zicari
Schwartz also doesn’t seem to give much weight to metadata (the data about the data), or to how big data analytics systems are designed to look at the same data sets, or aggregations of data sets, from different vantage points. Immutable databases take snapshots of points in time, and analysts can mine that data in many different ways for different purposes. Conventional transactional systems are rather single-minded and singular in purpose by comparison.
To pursue innovation, companies engaged in transformation efforts must do different things with the data they’ve collected. Demand for immutability in databases will be strong in certain industries—for applications such as patient records that value the long-term audit trail and a full history. Avera’s use of CouchDB for its longitudinal studies is an example.
The potential impact of web-company innovations can’t be discounted. The database landscape has seen the intrusion of these engineers before, in the case of NoSQL, Hadoop, and related big data analytics technologies. These technologies had a considerable disruptive impact, and that impact continues. Roberto Zicari, a professor at J. W. Goethe University Frankfurt, Germany, and editor of ODBMS.org, a site that tracks database evolution, described the birth of NoSQL in an interview with PwC:
MySQL came out. And at one point, if you weren’t using MySQL as your back end, VCs [venture capitalists] would take their money away from you. It became standard operating procedure. But they didn’t question the fundamental value of the relational model at that time. It was like violating a law.
Then the next big challenge was that the scale of these web companies reached the point where they faced problems staying within the relational model. Driven more by necessity than insight, they started creating solutions to match the scale of the problems they faced. These solutions were produced through open-source processes and became the solve-this-problem databases: key-value, column, and document stores.
Since then, the broader storage environment has become heterogeneous, multitiered, and aligned with cloud application and infrastructure technologies. Taken together, Apache Kafka and Samza aren’t just another sort of database; they’re components in an entirely redesigned ecosystem that LinkedIn uses. They fit with the architectural principles of microservices. Samza works with the input from Apache Kafka, which is already a popular, high-volume, high-speed messaging queue in native cloud architectures. And equivalents exist in Apache Spark and Storm when used in conjunction with HDFS. The Apache Spark stack, or Berkeley Data Analytics Stack, incorporates its own storage layer called Tachyon. This memory-centric system is designed to complement distributed file systems or object stores such as HDFS, GlusterFS, or S3 and to act as a checkpointing repository for data sets held in memory.
Hundreds of companies are developing and using applications based on Spark and Storm. At a more tactical level, there’s the database tier that’s used on its own. For more strategic and ad hoc views across silos by using analytics technologies, there’s the data lake tier—the locus of much of the meaningful big data–related innovation happening in open source.
Mutable data, particularly in core transactional systems, will continue to have a place in database management. Sometimes users will want to take old records offline to ensure superseded products are not accidentally made available in online operational systems. Recalls of automotive parts, for example, would require a new part number in ordering systems to replace the old part. This kind of critical identifier is handled most reliably via a well-established ACID-compliant transactional system.
With big data analytics, a new approach demands new structures and methods for collecting, recording, and analyzing enterprise data. Machine learning, for example, thrives on more data, so smart machines can learn more and faster. Immutable files and their more loosely coupled nature will help humans and machines to wrestle all the data they acquire into usable form.
1. ACID stands for atomicity, consistency, isolation, and durability, which are principles that database designers have historically adhered to for mission-critical transactional systems.
2. Pat Helland, “Immutability Changes Everything,” Conference on Innovative Data Systems Research (CIDR), January 2015.