What Is New About NewSQL?

Most programmers are familiar with SQL and relational database management systems, or RDBMSs, like MySQL or PostgreSQL. The basic principles behind such architectures have been around for decades. Around the 2000s, NoSQL solutions like MongoDB and Cassandra emerged, developed for distributed, scalable data needs.

But, for the past few years, there has been a new kid on the block: NewSQL.

NewSQL is a new approach to relational databases that aims to combine the transactional ACID (atomicity, consistency, isolation, durability) guarantees of good ol’ RDBMSs with the horizontal scalability of NoSQL. It sounds like a perfect solution, the best of both worlds. So why did it take so long to arrive?

Databases were born out of a need to separate code from data in the mid-1960s. These first databases were designed with several considerations:

  1. The number of users querying the database is limited.
  2. The types of queries are unlimited – the developer can use any query they want.
  3. Hardware is quite expensive.

In those days, when developers entering interactive queries at a terminal were the only users with access to the database, these considerations were relevant and valuable. Correctness and consistency were the two important metrics, rather than today’s metrics of performance and availability. Vertical scaling was the solution to growing data needs, and the downtime needed to move data during a database migration or recovery was bearable.

Fast-forward a couple of decades, and the requirements for databases in the Internet and cloud era are very different. The scale of data is enormous, and commodity hardware is much cheaper than its 20th-century counterparts.

As the scale of data grew and real-time interactions through the Internet became widespread, the basic needs from databases divided into two main categories: OLAP and OLTP – Online Analytical Processing and Online Transaction Processing, respectively.

OLAP databases are commonly known as data warehouses. They store a historical footprint for statistical analysis in business intelligence operations. OLAP databases are thus focused on read-only workloads with ad-hoc queries for batch processing. The number of users querying the database is considerably low, since usually only the employees of a company have access to the historical information.

OLTP databases correspond to highly concurrent, transactional data processing, characterized by short-lived, pre-defined queries enacted by real-time users. The searches a regular user runs on an e-commerce website and the purchases they make are basic examples of transactional processing. While each user accesses a smaller subset of the data compared with OLAP users, the number of users is considerably higher, and the queries can include both read and write operations. The important considerations in OLTP databases are thus high availability, concurrency, and performance.

For most websites, at any given time, there are hundreds or thousands of users querying the database concurrently. At this scale, the system needs to be highly available, as every minute of downtime can cost the bigger companies thousands or even millions of dollars.

On websites, the queries made by users are pre-defined; users do not have access to a database terminal where they could execute any query they’d like. The queries are buried in the application logic. This allows for optimizations towards high performance.
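For illustration, here is a minimal sketch in Python, using the standard library’s sqlite3 module; the products table and the function name are hypothetical. The SQL is fixed at development time and only the parameters vary at runtime, which is what lets the database plan and optimize it ahead of time:

    import sqlite3

    # A hypothetical product search as it might be buried in an e-commerce
    # application: the query text never changes, only its parameters do.
    def search_products(conn, keyword, max_price):
        query = """
            SELECT id, name, price
            FROM products
            WHERE name LIKE ? AND price <= ?
            ORDER BY price
            LIMIT 20
        """
        return conn.execute(query, (f"%{keyword}%", max_price)).fetchall()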

In the new database ecosystem, where scalability is an important metric and high availability is essential for making profits, NoSQL databases were offered as a solution for achieving easier scalability and better performance, opting for an AP design under the CAP theorem. However, this meant giving up strong consistency and the transactional ACID properties offered by RDBMSs in favor of the eventual consistency found in most NoSQL designs.

NoSQL databases use models other than the relational one, such as key-value, document, wide-column, or graph. With these models, NoSQL databases are not normalized and are inherently schemaless by design. Most NoSQL databases also support auto-sharding, allowing for easy horizontal scaling without developer intervention.
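As a rough illustration (with made-up field names), the same order that a normalized relational schema spreads across several tables can live in a document store as one self-contained, schemaless aggregate:

    # Normalized relational form: one order spans three tables,
    # stitched together with foreign keys at query time.
    customer   = {"customer_id": 17, "name": "Alice"}
    order      = {"order_id": 42, "customer_id": 17}
    order_line = {"order_id": 42, "sku": "A-1", "qty": 2}

    # Denormalized document form, as a store like MongoDB might hold it:
    # a single aggregate with no schema declared up front, which is also
    # a natural unit for auto-sharding.
    order_document = {
        "order_id": 42,
        "customer": {"name": "Alice"},
        "lines": [{"sku": "A-1", "qty": 2}],
    }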

NoSQL can be useful for applications such as social media, where eventual consistency is acceptable: users do not notice if they see an inconsistent view of the database, and since the data involves status updates, tweets, etc., strong consistency is not essential. However, NoSQL databases are a poor fit for systems where consistency is critical, such as e-commerce platforms.

NewSQL systems are born out of the desire to combine the scalability and high availability of NoSQL with the relational model, transaction support, and SQL of traditional RDBMSs. The era of one-size-fits-all solutions is at an end, and specialized databases for different workloads such as OLTP have started to rise. Most NewSQL databases are born out of a complete redesign focused heavily on OLTP or hybrid workloads.

Traditional RDBMS architecture was not designed with distributed systems in mind. Rather, when the need arose, support for distributed designs was built as an afterthought on top of the original design. Due to their normalized structure, as opposed to the aggregated form of NoSQL, RDBMSs had to introduce complicated concepts to scale out while preserving their consistency requirements. Manual sharding and master-slave architectures were developed to allow horizontal scaling.

However, an RDBMS loses much of its performance when scaling out: joins become more costly as data moves between different nodes for aggregation, and the maintenance overhead becomes time-consuming. To preserve performance, complex systems and products were developed – but even today, traditional RDBMSs are not regarded as inherently scalable.

NewSQL databases are built for the cloud era, with a distributed architecture in mind from the start.

What are the different characteristics observed in NewSQL solutions?

Consistency:

Favoring consistency over availability (CP in CAP terms), most NewSQL databases offer strong consistency by sacrificing some availability. Using consensus protocols such as Paxos or Raft, at either a global system level or a local partition level, these databases are able to achieve consistency. Some solutions, such as MemSQL, also allow tuning the trade-off between consistency and availability, enabling different configurations for different use cases.
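The core idea behind these consensus protocols can be sketched as quorum replication: a write commits only once a majority of replicas has accepted it, so any two majorities overlap and a consistent value survives individual node failures. The Python below is a toy sketch of that single idea, not of Raft or Paxos themselves, which add leader election, terms, and log repair on top:

    # Toy sketch of quorum-based replication: commit only with a majority.
    class Replica:
        def __init__(self):
            self.log = []

        def accept(self, entry):
            self.log.append(entry)
            return True  # a real replica could reject or be unreachable

    def replicated_write(replicas, entry):
        acks = sum(1 for r in replicas if r.accept(entry))
        quorum = len(replicas) // 2 + 1  # majority of the cluster
        return acks >= quorum            # acknowledged == committed

    cluster = [Replica() for _ in range(5)]
    assert replicated_write(cluster, {"key": "x", "value": 1})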

Main Memory:

Traditional RDBMSs rely on secondary storage (disk, most commonly SSDs or HDDs) as the medium for storing data. Since OLTP workloads do not require as much data – historical data can be archived in data warehouses, and only the more current information is needed – several NewSQL solutions use main memory (RAM) as storage. Memory access is significantly faster than disk access: almost 100 times faster than SSD, and 10,000 times faster than HDD.
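The arithmetic behind those ratios, using typical order-of-magnitude latencies (real figures vary by hardware):

    # Rough, order-of-magnitude access latencies:
    ram_ns = 100          # ~100 ns for a main-memory access
    ssd_ns = 10_000       # ~10 us for an SSD random read
    hdd_ns = 1_000_000    # ~1 ms for an HDD seek
    print(ssd_ns / ram_ns)  # ~100x slower than RAM
    print(hdd_ns / ram_ns)  # ~10,000x slower than RAM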

In-memory solutions offer the added performance boost of eliminating or simplifying heavy concurrency-control systems and, especially, buffer managers.

Since all the data (or most of it) is already in main memory, buffer managers become obsolete. As for concurrency control, different implementations take different approaches, e.g. serializing all transactions.

What about persistence? RAM is, by nature, volatile: when power is lost, data that needs to persist can be lost with it. In-memory databases alleviate this in different ways, usually through combinations of periodic backups (snapshots) to disk, logging to preserve state for recoverability, or non-volatile RAM for critical data.
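A minimal sketch of one such scheme in Python: a write-ahead log on disk combined with an in-memory table. Every write is forced to the log before it is applied in memory, and recovery simply replays the log; real systems add periodic snapshots so the log can be truncated. The file name and class here are, of course, hypothetical:

    import json, os

    class DurableKV:
        """Toy in-memory store that survives restarts via a write-ahead log."""

        def __init__(self, log_path="wal.log"):
            self.log_path = log_path
            self.data = {}       # the actual store lives in RAM
            self._recover()

        def _recover(self):
            if os.path.exists(self.log_path):
                with open(self.log_path) as f:
                    for line in f:  # replay the log to rebuild state
                        entry = json.loads(line)
                        self.data[entry["k"]] = entry["v"]

        def put(self, key, value):
            with open(self.log_path, "a") as f:
                f.write(json.dumps({"k": key, "v": value}) + "\n")
                f.flush()
                os.fsync(f.fileno())  # force the entry to disk first...
            self.data[key] = value    # ...and only then apply in memory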

The two main examples of in-memory NewSQL solutions are VoltDB and MemSQL.

 

VoltDB

VoltDB is an in-memory, ACID-compliant relational database. VoltDB’s architecture is based on H-Store, an in-memory database designed for OLTP workloads by Michael Stonebraker et al.

VoltDB is focused on fast data and is built to serve the specific applications where large streams of data must be processed quickly, such as trading applications, online gaming, IoT sensors, and more. Fitting with the OLTP principles, VoltDB is designed from scratch to be performant.

Through the conscious decision to support only stored procedures and to move them closer to the data, VoltDB can execute serialized transactions. The procedures are broken up into atomic transactions, and these transactions, in turn, are serialized and executed from a queue. This serialized transaction scheme gets rid of the overhead of managing concurrency, improving performance. While VoltDB also supports ad-hoc queries, it is the stored procedures that benefit from performance optimizations. This fits well with OLTP workloads, since the end-user cannot execute ad-hoc queries.
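A toy sketch of the serialization idea (not VoltDB’s actual implementation, which also partitions data so each partition gets its own queue): transactions go into a queue, and a single thread executes them one at a time, so no locking is needed for correctness:

    import queue, threading

    balance = {"alice": 100, "bob": 50}
    transactions = queue.Queue()

    def transfer(src, dst, amount):  # a "stored procedure"
        if balance[src] >= amount:
            balance[src] -= amount
            balance[dst] += amount

    def executor():                  # the single execution thread
        while True:
            proc, args = transactions.get()
            proc(*args)              # runs strictly one after another
            transactions.task_done()

    threading.Thread(target=executor, daemon=True).start()
    transactions.put((transfer, ("alice", "bob", 30)))
    transactions.join()              # wait until the queue drains
    print(balance)                   # {'alice': 70, 'bob': 80}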

For in-memory databases, durability is an important question, and one of the requirements of the ACID principles. VoltDB achieves durability through various techniques, including snapshots, command logging, K-safety, and database replication. With these approaches, VoltDB ensures redundancy and keeps data durable.

If you want more information on VoltDB and its architecture, you can check our past shows with John Hugg and with Ryan Betts.

 

HTAP

As I pointed out before, most NewSQL databases are designed from scratch. With the possibilities such an endeavor brings, some projects set out to build a unified database where both transactional and analytical workloads can be handled. The term Hybrid Transactional/Analytical Processing, or HTAP, was coined by Gartner. HTAP capabilities enable advanced real-time analytics, which can lead to real-time business decisions and intelligent transactional processing. While VoltDB also offers HTAP capabilities, it focuses more on transactional workloads. Other notable HTAP databases include TiDB and Google’s Spanner.

 

TiDB

An open-source solution to come out of China, TiDB is a strongly consistent, distributed, scalable, MySQL-compatible HTAP database. TiDB has a layered architecture: the TiDB server sits on top as a stateless computing layer, while the underlying storage model comes to life in TiKV, a transactional key-value database inspired by Google’s Spanner.

The TiDB layer listens for SQL queries, parses them, and creates an execution plan. Where beneficial, the query is split into parts that are sent to the corresponding TiKV stores. Since it is stateless, the TiDB layer is easy to scale.
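Because TiDB speaks the MySQL wire protocol, existing MySQL clients and drivers work unchanged. Here is a hedged sketch using the third-party PyMySQL driver; the host, port (4000 is TiDB’s default), and credentials are placeholders for a real deployment:

    import pymysql

    # Any MySQL driver can talk to TiDB; no TiDB-specific client needed.
    conn = pymysql.connect(host="127.0.0.1", port=4000,
                           user="root", password="", database="test")
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")  # reports a MySQL-compatible version
        print(cur.fetchone())
    conn.close()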

TiKV is the underlying storage layer: a key-value database that uses RocksDB for physical storage. TiKV organizes data into regions, which are stored and replicated. To achieve durability and high availability with this replication scheme, TiKV utilizes the Raft consensus algorithm for strong consistency. The distributed nature of TiKV allows for distributed queries.

What enables TiDB to be powerful in both OLTP and OLAP situations is its decoupled architecture: the computation layer is separate from the storage layer. While TiDB can handle both OLTP and simple OLAP workloads, TiSpark is an OLAP solution that runs Spark SQL directly on TiKV and can be added easily to the TiDB/TiKV architecture. TiDB on its own, through its cost-based optimizer and distributed executor, can handle 80% of ad-hoc OLAP queries.

TiSpark is optimized for complex OLAP queries. Just like TiDB, TiSpark is a stateless compute layer that communicates with TiKV; however, it is designed for complex OLAP queries and communicates using Spark SQL.

Deploying both TiDB and TiSpark thus eliminates ETL costs and provides a unified solution for both analytical and transactional needs.

Check out our recent episode on TiDB with Kevin Xu for more information about TiDB and its architecture; our episode on RocksDB with Dhruba Borthakur and Igor Canadi for more information about RocksDB, the physical data store that powers TiKV and TiDB; and our article on Chinese open-source projects for more information about TiKV.

 

Cosmos DB

Azure Cosmos DB from Microsoft is a highly flexible solution; through numerous tunable features that can be tweaked to fit various use cases, it can be considered a NewSQL database.

Cosmos DB is a globally distributed, multi-model database service. As a multi-model service, it supports key-value, column-family, document, and graph models as the underlying storage. The data can be exposed through both SQL and NoSQL APIs.

With global distribution, Cosmos DB holds replicas of the data in several data centers around the world, ensuring reliability and high availability. The developer can create replicas and horizontally scale their data with a few simple API calls.

Cosmos DB is designed to alleviate the costs of database management. Developers don’t need to deal with index or schema management, as Cosmos DB handles indexing automatically to ensure performance.

Through several consistency levels, Cosmos DB lets developers decide on the trade-offs they want to make, with appropriate SLAs. Instead of the two extremes of strong consistency and eventual consistency, there are five well-defined consistency levels along the spectrum. Each consistency level comes with a separate SLA, ensuring certain levels of availability and performance.
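As a sketch of how that choice surfaces to the developer, the azure-cosmos Python SDK lets a client opt into one of the five levels (Strong, BoundedStaleness, Session, ConsistentPrefix, Eventual) at creation time; the account URL and key below are placeholders:

    from azure.cosmos import CosmosClient

    # Pick a consistency level when creating the client; placeholders
    # stand in for a real Cosmos DB account URL and key.
    client = CosmosClient(
        url="https://<your-account>.documents.azure.com:443/",
        credential="<your-key>",
        consistency_level="Session",  # mid-spectrum: stronger than Eventual,
    )                                 # cheaper than Strong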

Being the product of a tech and cloud giant, Cosmos DB is simple for developers to use, and gives comprehensive guarantees for performance, availability, and consistency.

 

Augmenting RDBMS

NewSQL can also come in the form of augmenting existing RDBMSs to give them the ability to scale out. Rather than completely redesigned databases, these solutions are implemented on top of already battle-tested SQL databases to enhance their capabilities. This approach is useful for large enterprises that have an established system and are not willing to migrate to a new database solution.

 

Citus

A successful example that builds upon PostgreSQL is Citus.

Citus Data, recently acquired by Microsoft, develops and maintains Citus: an open-source PostgreSQL extension that allows for a distributed PostgreSQL by transparently distributing tables and queries to support horizontal scaling.

In a cluster managed by Citus, tables are distributed: they are horizontally partitioned across different worker nodes, yet appear as normal SQL tables to the application. The coordinator node, which holds table metadata and oversees the worker PostgreSQL nodes, handles query processing and parallelizes queries across the appropriate table partitions.
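In practice, distributing a table is a single call to Citus’s create_distributed_table() function on the coordinator. A sketch using the psycopg2 driver, with hypothetical connection settings and table:

    import psycopg2

    # Run against the Citus coordinator node.
    conn = psycopg2.connect("host=coordinator dbname=app user=postgres")
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS events (
                repo_id bigint,
                payload jsonb
            )
        """)
        # Hash-partition the table across the worker nodes on repo_id;
        # to the application it still looks like an ordinary SQL table.
        cur.execute("SELECT create_distributed_table('events', 'repo_id')")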

By adding features such as query routing, distributed tables, distributed transactions, and stored procedures, Citus takes care of numerous low-level details to present a horizontally scalable, performant PostgreSQL.

Check out our episodes on Scaling PostgreSQL with Ozgun Erdogan and Postgres Sharding with Marco Slot for more information about Citus.

 

Vitess

While Citus builds upon PostgreSQL, Vitess is built to enhance MySQL and make it fit the requirements of the cloud age.

Vitess was first built at YouTube in 2011 for their scaling needs. With a growing user base and growing data, horizontal scaling and sharding became necessary, and Vitess was created to handle this scaling transparently. It has since been open-sourced and is now hosted under the CNCF. Having received that stamp of approval as a cloud-native technology, Vitess provides several improvements to MySQL.

The first improvement is the introduction of various sharding schemes. Users can define their own sharding schemes, and Vitess is responsible for organizing the shards and the data accordingly. Vitess allows for automatic sharding without requiring manual sharding logic in application code, and it enables live (re)sharding with minimal read-only downtime.

Sharding is done through Vindexes and keyspaces. A Primary Vindex is similar to a primary index in a database’s indexing scheme. Users specify the attribute they want as the Primary Vindex, which determines how the data is split across shards. After the database is sharded, queries are directed to the appropriate shards based on their keyspaces.
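Concretely, this mapping is declared in a keyspace’s VSchema. A sketch of one, built here as a Python dict and serialized to the JSON that Vitess expects; the table and column names are illustrative:

    import json

    vschema = {
        "sharded": True,
        "vindexes": {
            "hash": {"type": "hash"},  # Vitess's built-in hash vindex
        },
        "tables": {
            "users": {
                "column_vindexes": [
                    # user_id is the Primary Vindex: it decides which
                    # shard each row lands on.
                    {"column": "user_id", "name": "hash"},
                ],
            },
        },
    }
    print(json.dumps(vschema, indent=2))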

Vitess’s architecture provides load balancing and query routing through vtgates. Since these gates are stateless layers, they can easily be scaled up and down. The vtgates route queries to vttablets, proxies that sit in front of the shards, which return aggregated results to the vtgates.

Vitess retains all of its benefits when deployed via a cluster orchestration tool like Kubernetes. Since the vtgates act as stateless proxies, they are suitable for deployment on a container cluster. A lock server such as etcd acts as the metadata store and handles administrative work such as schema definitions.

Implemented in Go, Vitess can handle thousands of connections thanks to Go’s concurrency support.

Listen to our episode on Vitess with Sugu Sougoumarane for deeper discussions on Vitess’ history, architecture, and use cases.

The NewSQL ecosystem is constantly growing and evolving. While it is almost impossible to give a general definition or come up with characteristics that encapsulate all NewSQL databases, the distinctive database designs that emerge under the NewSQL umbrella add to the range of options developers can choose from for specific use cases. One-size-fits-all architectures are no longer desirable, and NewSQL is a movement towards innovation and specialized database designs.

Gokhan Simsek

Eindhoven, The Netherlands

Gokhan is a computer science graduate, currently pursuing an MSc degree in Data Science at Eindhoven University of Technology. He’s interested in big data, NLP, and machine learning.
