Relational Database Service (RDS)

Table of contents
  1. RDS Simplified
    1. RDS Key Details
    2. RDS Multi-AZ
    3. RDS Read Replicas
    4. RDS Backups
    5. RDS Security
    6. RDS Enhanced Monitoring
  2. Aurora
    1. Aurora Simplified
    2. Aurora Key Details
    3. Aurora Serverless
    4. Aurora Cluster Endpoints
    5. Aurora Reader Endpoints
  3. DynamoDB
    1. DynamoDB Simplified
    2. DynamoDB Key Details
    3. DynamoDB Accelerator (DAX)
    4. DynamoDB Streams
    5. DynamoDB Global Tables

RDS Simplified

RDS is a managed service that makes it easy to set up, operate, and scale a relational database in AWS. It provides cost-efficient and resizable capacity while automating or outsourcing time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.

RDS Key Details

  • RDS comes in six different flavors:
    • SQL Server
    • Oracle
    • MySQL
    • PostgreSQL
    • MariaDB
    • Aurora
  • Think of RDS as the managed platform on top of which these various DB engines sit.
  • RDS has two key features for scaling and availability:
    • Read replicas for improved read performance
    • Multi-AZ for high availability
  • In the database world, Online Transaction Processing (OLTP) differs from Online Analytical Processing (OLAP) in the type of querying you would do. OLTP serves up data for the business logic that composes the core functioning of your platform or application. OLAP is used to gain insights into the data you have stored in order to make better strategic decisions as a company.
  • RDS runs on virtual machines, but you do not have access to those machines. You cannot SSH into an RDS instance, so you cannot patch the OS yourself. This means that AWS is responsible for the security and maintenance of RDS. If you need or want to manage the underlying server yourself, you can run your database on an EC2 instance instead, but that option is not available with any RDS engine.
  • Just because you cannot access the VM directly does not mean that RDS is serverless. There is, however, Aurora Serverless (explained below), which serves a niche purpose.
  • SQS queues can be used to store pending database writes if your application is struggling under a high write load. These writes can then be applied to the database when it is ready to process them. Adding more IOPS will also help, but that alone will not wholly eliminate the chance of writes being lost. A queue, however, ensures that writes to the DB are not lost.

RDS Multi-AZ

  • Disaster recovery in AWS always looks to ensure standby copies of resources are maintained in a separate geographical area. This way, if a disaster (natural disaster, political conflict, etc.) ever struck where your original resources are, the copies would be unaffected.
  • When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
  • With a Multi-AZ configuration, EC2 connects to its RDS data store using a DNS address masked as a connection string. If the primary DB fails, Multi-AZ is smart enough to detect that failure and automatically update the DNS address to point at the secondary. No manual intervention is required and AWS takes care of swapping the IP address in DNS.
  • Multi-AZ is supported for all DB flavors except Aurora. This is because Aurora is completely fault-tolerant on its own.
  • The Multi-AZ feature allows for high availability across Availability Zones, not across regions.
  • During a failover, the promoted secondary becomes the new primary. Once the failed former primary is recovered, it becomes the new secondary, and a sync process kicks off in which the two DBs mirror each other to catch up on any data the former primary missed while it was down.
  • You can force a failover for a Multi-AZ setup by rebooting the primary instance (see the sketch after this list).
  • With a Multi-AZ RDS configuration, backups are taken from the standby.
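
Both the Multi-AZ flag and a forced failover can be exercised programmatically. Below is a minimal sketch, assuming Python with boto3 and a hypothetical instance identifier mydb:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Provision a MySQL instance with a synchronous standby in another AZ.
# The identifier and credentials are placeholder values.
rds.create_db_instance(
    DBInstanceIdentifier="mydb",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=True,  # enables the standby replica
)

# Force a failover: rebooting with ForceFailover=True promotes the
# standby, and RDS repoints the DNS-based connection string for you.
rds.reboot_db_instance(DBInstanceIdentifier="mydb", ForceFailover=True)
```

Because clients connect through the instance's DNS name rather than an IP address, they simply reconnect to the new primary after the failover completes.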

RDS Read Replicas

  • Read Replication is exclusively used for performance enhancement.
  • With a Read Replica configuration, EC2 connects to the RDS backend using a DNS address, and every write received by the master database is also passed on to a DB secondary so that it becomes a perfect copy of the master. This has the overall effect of reducing the number of transactions on the master, because the secondary DBs can be queried for the same data.
  • However, if the master DB were to fail, there is no automatic failover. You would have to manually promote one of the read replicas so that it becomes a master in its own right, and then update your EC2 instances to point at its connection string. You can have up to five read replicas of your master DB.
  • You can promote read replicas to be their very own production database if needed.
  • Read replicas are supported for all six flavors of DB on top of RDS.
  • Each Read Replica will have its own DNS endpoint.
  • Automated backups must be enabled in order to use read replicas.
  • You can have read replicas with Multi-AZ turned on, or have the read replica in an entirely separate region. You can even have read replicas of read replicas, but watch out for latency and replication lag. The caveat for read replicas is that they are subject to small amounts of replication lag: they might be missing some of the latest transactions because they are not updated as quickly as primaries. Application designers need to consider which queries can tolerate slightly stale data. Those queries should be executed on the read replica, while those demanding completely up-to-date data should run on the primary node.
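
Creating and promoting a replica are each single API calls. A minimal sketch, assuming Python with boto3 and placeholder identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Automated backups must be enabled on the source instance first.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",
    SourceDBInstanceIdentifier="mydb",
)

# Each replica gets its own DNS endpoint; send read-only queries there.
replica = rds.describe_db_instances(
    DBInstanceIdentifier="mydb-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])

# If the master is lost, promote a replica into a standalone database.
rds.promote_read_replica(DBInstanceIdentifier="mydb-replica")
```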

RDS Backups

  • When it comes to RDS, there are two kinds of backups:
    • automated backups
    • database snapshots
  • Automated backups allow you to recover your database to any point in time within a retention period of between one and 35 days. Automated backups take a full daily snapshot and also store transaction logs throughout the day. When you perform a DB recovery, RDS first chooses the most recent daily backup and then applies the relevant transaction logs from that day. Within the set retention period, this gives you the ability to do a point-in-time recovery down to the precise second. Automated backups are enabled by default. The backup data is stored free of charge up to the size of your actual database (so for every GB saved in RDS, that same amount is stored in S3 for free, up to the total size of the DB). Backups are taken within a defined window, so latency might go up while storage I/O is suspended for the data to be backed up.
  • DB snapshots are taken manually by the administrator. A key difference from automated backups is that snapshots are retained even after the original RDS instance is terminated. With automated backups, the backed-up data in S3 is wiped clean along with the RDS instance. This is why you are asked whether you want to take a final snapshot of your DB when you go to delete it.
  • When you restore a DB from automated backups or DB snapshots, the result is the provisioning of an entirely new RDS instance with its own DB endpoint, as shown in the sketch below.
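
A minimal sketch of both restore paths, assuming Python with boto3 and placeholder identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A manual snapshot survives deletion of the original instance.
rds.create_db_snapshot(
    DBSnapshotIdentifier="mydb-final-snap",
    DBInstanceIdentifier="mydb",
)

# Restoring a snapshot provisions a brand-new instance and endpoint.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-restored",
    DBSnapshotIdentifier="mydb-final-snap",
)

# Point-in-time restore from automated backups, down to the second,
# also creates a new instance rather than overwriting the source.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="mydb",
    TargetDBInstanceIdentifier="mydb-pitr",
    UseLatestRestorableTime=True,  # or RestoreTime=<datetime>
)
```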

RDS Security

  • You can authenticate to your DB instance using IAM database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.
  • An authentication token is a unique string that Amazon RDS generates on request. Authentication tokens have a lifetime of 15 minutes. You don’t need to store user credentials in the database because authentication is managed externally using IAM.
  • IAM database authentication provides the following benefits:
    • Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
    • You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
    • For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security (token generation is shown in the sketch after this list).
  • Encryption at rest is supported for all six flavors of DB for RDS. Encryption is done using the AWS KMS service. Once the RDS instance has encryption enabled, the data in the DB becomes encrypted as well as all backups (automated or snapshots) and read replicas.
  • After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. You don’t need to modify your database client applications to use encryption.
  • Amazon RDS encryption is currently available for all database engines and storage types. However, you need to ensure that the underlying instance type supports DB encryption.
  • You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created. Likewise, DB instances that are encrypted can’t be modified to disable encryption.
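
A minimal sketch of both ideas, assuming Python with boto3 and placeholder names: creating an encrypted instance with IAM authentication enabled, then generating a short-lived token to use in place of a password:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Encryption and IAM authentication are chosen at creation time.
rds.create_db_instance(
    DBInstanceIdentifier="mydb",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    StorageEncrypted=True,            # uses the default KMS key
    # KmsKeyId="arn:aws:kms:...",     # or name a specific KMS key
    EnableIAMDatabaseAuthentication=True,
)

# Generate a token valid for 15 minutes. "db_user" must be a database
# account configured for IAM authentication.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="db_user",
)
# Pass `token` as the password to your MySQL/PostgreSQL driver over SSL.
```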

RDS Enhanced Monitoring

  • RDS comes with an Enhanced Monitoring feature. Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring system of your choice.
  • By default, Enhanced Monitoring metrics are stored in the CloudWatch Logs for 30 days. To modify the amount of time the metrics are stored in the CloudWatch Logs, change the retention for the RDS OS Metrics log group in the CloudWatch console.
  • Take note that there are key differences between CloudWatch and Enhanced Monitoring Metrics. CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements, because the hypervisor layer performs a small amount of work that can be picked up and interpreted as part of the metric.
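
Enabling Enhanced Monitoring is a single modification call once a monitoring IAM role exists. A minimal sketch, assuming Python with boto3 and a placeholder role ARN:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# MonitoringInterval is in seconds (0, 1, 5, 10, 15, 30, or 60);
# setting it to 0 disables Enhanced Monitoring.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",
    MonitoringInterval=60,
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```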

Aurora

Aurora Simplified

Aurora is the AWS flagship DB, known to combine the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It is a MySQL/PostgreSQL-compatible RDBMS that provides the security, availability, and reliability of commercial databases at 1/10th the cost of competitors. It is far more effective as an AWS database due to its performance multipliers: up to 5x the throughput of standard MySQL and 3x that of standard PostgreSQL.

Aurora Key Details

  • In case of an infrastructure failure, Aurora performs an automatic failover to a replica of its own.
  • Amazon Aurora typically involves a cluster of DB instances instead of a single instance. Each connection is handled by a specific DB instance. When you connect to an Aurora cluster, the host name and port that you specify point to an intermediate handler called an endpoint. Aurora uses the endpoint mechanism to abstract these connections. Thus, you don’t have to hard code all the host names or write your own logic for load-balancing and rerouting connections when some DB instances aren’t available.
  • By default, there are 2 copies in a minimum of 3 availability zones for 6 total copies of all of your Aurora data. This makes it possible for it to handle the potential loss of up to 2 copies of your data without impacting write availability and up to 3 copies of your data without impacting read availability.
  • Aurora storage is self-healing and data blocks and disks are continuously scanned for errors. If any are found, those errors are repaired automatically.
  • Aurora replication differs from RDS replicas in the sense that it is possible for Aurora’s replicas to be both a standby as part of a multi-AZ configuration as well as a target for read traffic. In RDS, the multi-AZ standby cannot be configured to be a read endpoint and only read replicas can serve that function.
  • With Aurora replication, you can have up to fifteen Aurora Replicas. If you want downstream MySQL or PostgreSQL instances as your replicated copies, then you can only have five or one, respectively.
  • Automated failover is only possible with Aurora replication, not with MySQL/PostgreSQL replicas.
  • For more on the differences between RDS replication and Aurora Replication, please consult the following:

[Screenshot: comparison table of RDS replication vs. Aurora replication]

  • Automated backups are always enabled on Aurora instances and backups don’t impact DB performance. You can also take snapshots which also don’t impact performance. Your snapshots can be shared across AWS accounts.
  • A common tactic for migrating an RDS DB into Aurora is to create an Aurora read replica of the RDS MariaDB/MySQL DB. Then simply promote the Aurora replica into a production instance and delete the old MariaDB/MySQL DB (see the sketch after this list).
  • Aurora starts with 10GB of storage and scales in 10GB increments all the way to 128TB via storage autoscaling. Aurora’s computing power scales up to 32 vCPUs and 244GB of memory.
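
A minimal sketch of that migration path, assuming Python with boto3 and placeholder identifiers and ARNs:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an Aurora cluster that replicates from an existing RDS MySQL DB.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-migration",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:123456789012:db:mydb",
)

# The cluster needs at least one instance to serve queries.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-migration-1",
    DBClusterIdentifier="aurora-migration",
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
)

# Once replication catches up, cut over: promote the cluster to a
# standalone production DB, then delete the old MySQL instance.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-migration")
```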

Aurora Serverless

  • Aurora Serverless is a simple, on-demand, autoscaling configuration for the MySQL/PostgreSQL-compatible editions of Aurora. With Aurora Serverless, the database automatically starts up, shuts down, and scales capacity up or down based on your application’s usage. The use cases for this service are infrequent, intermittent, and unpredictable workloads.
  • This can also make it cheaper, because you pay only for the database capacity consumed while the cluster is active.
  • With Aurora Serverless, you simply create a database endpoint, optionally specify the desired database capacity range, and connect your applications.
  • It removes the complexity of managing database instances and capacity. The database will automatically start up, shut down, and scale to match your application’s needs. It will seamlessly scale compute and memory capacity as needed, with no disruption to client connections.
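
A minimal sketch of creating a Serverless (v1) cluster, assuming Python with boto3 and placeholder values:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-db",
    Engine="aurora-mysql",
    EngineMode="serverless",  # Aurora Serverless v1
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    ScalingConfiguration={
        "MinCapacity": 1,              # Aurora capacity units (ACUs)
        "MaxCapacity": 8,
        "AutoPause": True,             # pause the cluster when idle...
        "SecondsUntilAutoPause": 300,  # ...after 5 minutes of inactivity
    },
)
# Billing accrues only for capacity consumed while the cluster is active.
```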

Aurora Cluster Endpoints

  • Using cluster endpoints, you map each connection to the appropriate instance or group of instances based on your use case.
  • You can connect to cluster endpoints associated with different roles or jobs across your Aurora DB. This is because different instances or groups of instances perform different functions.
  • For example, to perform DDL statements you can connect to the primary instance. To perform queries, you can connect to the reader endpoint, with Aurora automatically performing load-balancing among all the Aurora Replicas behind the reader endpoint. For diagnosis or tuning, you can connect to a different endpoint to examine details.
  • Since the entryway for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention for any of your endpoints.

Aurora Reader Endpoints

  • Aurora Reader endpoints are a subset of the above idea of cluster endpoints. Use the reader endpoint for read operations, such as queries. By processing those statements on the read-only Aurora Replicas, this endpoint reduces the overhead on the primary instance.
  • You can have up to 15 Aurora Replicas behind a reader endpoint to help handle read-only query traffic.
  • It also helps the cluster to scale the capacity to handle simultaneous SELECT queries, proportional to the number of Aurora Replicas in the cluster. Each Aurora DB cluster has one reader endpoint.
  • If the cluster contains one or more Aurora Replicas, the reader endpoint load-balances each connection request among the Aurora Replicas. In that case, you can only perform read-only statements such as SELECT in that session. If the cluster only contains a primary instance and no Aurora Replicas, the reader endpoint connects to the primary instance directly. In that case, you can perform write operations through the endpoint.
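
A minimal sketch of discovering a cluster’s writer and reader endpoints, assuming Python with boto3 and a placeholder cluster identifier:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

cluster = rds.describe_db_clusters(
    DBClusterIdentifier="my-aurora-cluster"
)["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # DDL statements and writes
reader_endpoint = cluster["ReaderEndpoint"]  # load-balanced SELECT traffic

# Both names stay stable across failovers, so applications never need
# to be repointed manually.
print(writer_endpoint, reader_endpoint)
```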

DynamoDB

DynamoDB Simplified

Amazon DynamoDB is a key-value and document database that delivers single-digit-millisecond performance at any scale. It’s a fully managed, multi-region, multi-master, durable NoSQL database. It comes with built-in security, backup and restore, and in-memory caching for internet-scale applications.

DynamoDB Key Details

  • The main components of DynamoDB are:
    • a table, which serves as the foundational structure
    • items, each of which is equivalent to a row in a SQL database
    • attributes, the key-value pairs that make up the fields within an item
  • The convenience of non-relational DBs is that each row can look entirely different based on your use case. There doesn’t need to be uniformity. For example, if you need a new column for a particular entry you don’t also need to ensure that that column exists for the other entries.
  • DynamoDB supports both document and key-value based models. It is a great fit for mobile, web, gaming, ad-tech, IoT, etc.
  • DynamoDB data is stored on SSDs, which is part of why it is so fast.
  • It is spread across 3 geographically distinct data centers.
  • The default consistency model is Eventually Consistent Reads, but there are also Strongly Consistent Reads.
  • The difference between the two consistency models is the one-second rule. With Eventually Consistent Reads, all copies of data are usually identical within one second after a write operation, so a repeated read after a short period of time should return the updated data. However, if you need a guarantee of reading updated data within a second of the write, then Strongly Consistent Reads are your best bet (see the sketch after this list).
  • If you face a scenario that requires the schema, or the structure of your data, to change frequently, then you have to pick a database which provides a non-rigid and flexible way of adding or removing new types of data. This is a classic example of choosing between a relational database and non-relational (NoSQL) database. In this scenario, pick DynamoDB.
  • A relational database system does not scale well for the following reasons:
    • It normalizes data and stores it on multiple tables that require multiple queries to write to disk.
    • It generally incurs the performance costs of an ACID-compliant transaction system.
    • It uses expensive joins to reassemble required views of query results.
  • High cardinality is good for DynamoDB I/O performance. The more distinct your partition key values are, the better. It makes it so that the requests sent will be spread across the partitioned space.
  • DynamoDB makes use of parallel processing to achieve predictable performance. You can visualize each partition or node as an independent DB server of fixed size with each partition or node responsible for a defined block of data. In SQL terminology, this concept is known as sharding but of course DynamoDB is not a SQL-based DB. With DynamoDB, data is stored on Solid State Drives (SSD).
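
A minimal sketch of a table with a high-cardinality partition key, a schemaless item, and both read consistency modes, assuming Python with boto3 and placeholder names:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# UserId has high cardinality, spreading requests across partitions.
dynamodb.create_table(
    TableName="GameScores",
    KeySchema=[{"AttributeName": "UserId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "UserId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="GameScores")

# Items are schemaless: each item can carry different attributes.
dynamodb.put_item(
    TableName="GameScores",
    Item={"UserId": {"S": "u1"}, "Score": {"N": "42"}},
)

# Reads are eventually consistent by default; ConsistentRead=True
# guarantees the latest data at a higher read-capacity cost.
resp = dynamodb.get_item(
    TableName="GameScores",
    Key={"UserId": {"S": "u1"}},
    ConsistentRead=True,
)
```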

DynamoDB Accelerator (DAX)

  • Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second.
  • With DAX, your applications remain fast and responsive, even when unprecedented request volumes come your way. There is no tuning required.
  • DAX lets you scale on-demand out to a ten-node cluster, giving you millions of requests per second.
  • DAX does more than just increase read performance: it also has a write-through cache, which improves write performance as well.
  • Just like DynamoDB, DAX is fully managed. You no longer need to worry about management tasks such as hardware or software provisioning, setup and configuration, software patching, operating a reliable, distributed cache cluster, or replicating data over multiple instances as you scale.
  • This means there is no need for developers to manage the caching logic. DAX is completely compatible with existing DynamoDB API calls.
  • DAX enables you to provision one DAX cluster for multiple DynamoDB tables, multiple DAX clusters for a single DynamoDB table or somewhere in between giving you maximal flexibility.
  • DAX is designed for HA so in the event of a failure of one AZ, it will fail over to one of its replicas in another AZ. This is also managed automatically.
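
Provisioning a cluster is a single API call. A minimal sketch, assuming Python with boto3, a pre-created DAX service role, and placeholder names:

```python
import boto3

dax = boto3.client("dax", region_name="us-east-1")

# A three-node cluster, spread across AZs for high availability.
dax.create_cluster(
    ClusterName="my-dax-cluster",
    NodeType="dax.r5.large",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
)
# Applications then point at the DAX cluster endpoint instead of the
# DynamoDB endpoint; the DynamoDB API calls themselves are unchanged.
```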

DynamoDB Streams

  • A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
  • Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams.
  • Immediately after an item in the table is modified, a new record appears in the table’s stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. The Lambda function can perform any actions you specify, such as sending a notification or initiating a workflow.
  • With triggers, you can build applications that react to data modifications in DynamoDB tables.
  • Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attribute(s) of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the “before” and “after” images of modified items.
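
A minimal sketch of enabling a stream and a Lambda handler that reacts to its records, assuming Python with boto3 and placeholder names:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Capture both the "before" and "after" images of modified items.
dynamodb.update_table(
    TableName="GameScores",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# Lambda function invoked synchronously as new stream records arrive.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] == "MODIFY":
            old_image = record["dynamodb"].get("OldImage")
            new_image = record["dynamodb"].get("NewImage")
            # e.g. send a notification or kick off a workflow here
```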

DynamoDB Global Tables

  • Global Tables is a multi-region, multi-master replication solution for fast local performance of globally distributed apps.
  • Global Tables replicates your Amazon DynamoDB tables automatically across your choice of AWS regions.
  • It is based on DynamoDB streams and is multi-region redundant for data recovery or high availability purposes. Application failover is as simple as redirecting your application’s DynamoDB calls to another AWS region.
  • Global Tables eliminates the difficult work of replicating data between regions and resolving update conflicts, enabling you to focus on your application’s business logic. You do not need to rewrite your applications to make use of Global Tables.
  • Replication latency with Global Tables is typically under one second.
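
A minimal sketch using the original (2017) Global Tables API, assuming Python with boto3 and that identically named tables with streams enabled already exist in both regions (newer tables manage replicas through update_table instead):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Both regional tables must share the table name and have streams on.
dynamodb.create_global_table(
    GlobalTableName="GameScores",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
# Writes in either region replicate to the other, typically in under a
# second; failover is simply pointing your app at the other region.
```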
