Database Performance and Optimization with Andrew Davidson
When a database gets large, it can start to perform poorly, often manifesting as slow queries. You can speed up a query by defining an index: a data structure that allows faster access to the data being indexed. The cost is that whenever you write to the database, you must now also update the index with that new piece of data.
The more indexes you define, the faster your reads become, but each index carries a write penalty: to keep the data consistent, every index must be updated on each new entry. This illustrates one simple tradeoff that a developer can make within a database deployment.
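This read/write tradeoff can be seen in any database. As a minimal sketch, here is the effect of an index in SQLite (the table and column names are illustrative, not from the episode):

```python
import sqlite3

# In-memory database with a simple users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)

# Without an index, this lookup must scan every row.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("user123@example.com",),
).fetchall()
print(plan_before)  # query plan shows a full table scan

# Adding an index turns the lookup into a tree search instead of a scan,
# but every future INSERT must now also update this index structure.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("user123@example.com",),
).fetchall()
print(plan_after)  # query plan now uses idx_users_email
```

The same idea applies whether the index is a B-tree in a SQL database or a secondary index in a document store: reads get cheaper, writes get more expensive.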
Why are there so many different databases in the world? Why do we need SQL databases like Postgres, document databases like MongoDB, wide-column systems like Cassandra, and search systems like Elasticsearch? Because each of these systems optimizes for a different set of tradeoffs. Tradeoffs can affect the speed of a read, the speed of a write, the user experience, the consistency of data, and the cost of running the database.
Andrew Davidson is the lead product manager of MongoDB Atlas. Andrew joins the show to talk about how database performance can degrade when a database gets large, and how to measure and optimize the performance of a critical database.
Andrew explores the range of distributed systems cases, from a single-node database to a multi-geographic distribution of nodes around the world, and describes how the configuration of a database in the cloud can help or hurt the application that the database is serving.
Full disclosure: MongoDB is a sponsor of Software Engineering Daily.
Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.