Redshift Security: Data Backups and Encryption Best Practices

TL;DR: In this blog, the second in a series, we focus on more risks that come from a misconfigured Redshift cluster: encryption, backups, and logging. We will explore these security features and the best practices that can prevent data leakage.
You can catch up on the previous blog here, and when you're ready, let's dig in.
Configuring Data Encryption in Redshift
Redshift provides two types of encryption to help protect sensitive data: at rest and in transit.
Protecting sensitive data in transit
Encryption of data in transit is important to prevent data manipulation, leaks, and repudiation. Redshift moves data between the cluster and clients over JDBC/ODBC, and between the Redshift cluster and other AWS services such as S3 and DynamoDB.
Amazon Redshift uses SSL for communication with Amazon S3 or Amazon DynamoDB for copying and unloading data.
By default, Redshift accepts both SSL and non-SSL connections.
Enforce encryption in transit with cli:
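A minimal sketch using the AWS CLI, assuming a parameter group named my-redshift-parametergroup (a placeholder) is attached to the cluster:

```shell
# Set require_ssl=true on the parameter group attached to the cluster,
# so that only SSL connections are accepted
# (the parameter group name is a placeholder)
aws redshift modify-cluster-parameter-group \
  --parameter-group-name my-redshift-parametergroup \
  --parameters ParameterName=require_ssl,ParameterValue=true
```

Note that require_ssl is a static parameter, so the change takes effect only after the cluster is rebooted.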
Enforce encryption in transit with console:
Under the “Properties” tab, we can see the parameter group attached to our Redshift cluster. Click on the parameter group (“ofir-redshift-parametergroup” in the image below):

Now to change the parameters click on “Edit”:

Set the require_ssl parameter to true then hit “Save”:

Why encrypt data at rest?
- When encryption is enabled, snapshots of the cluster are also encrypted, which ensures that the data is protected in any form outside your Redshift cluster, even when the snapshot is moved between regions or accounts
- The regulations governing your data may also require you to use encryption
Redshift encryption uses envelope encryption, which means that you can rotate keys without having to re-encrypt data blocks. The key hierarchy consists of four tiers:
- root key
- cluster encryption key (CEK)
- database encryption key (DEK)
- data encryption keys
To learn more about Amazon Redshift encryption - click here
Encrypting sensitive data using KMS / CMK
It is possible to use AWS KMS with AWS managed or customer managed keys (CMKs) for encryption, which offers more flexibility, including the ability to create, rotate, disable, audit, and define access control for encryption keys.
Encrypting data using HSM
If you don't use AWS KMS for key management, you can use a hardware security module (HSM) for key management with Amazon Redshift.
Key rotation for Amazon Redshift cluster
Redshift lets you rotate encryption keys for encrypted clusters. Whenever a key rotation is initiated, Redshift rotates the CEK for the specified cluster and for any automated or manual snapshots of the cluster. Redshift also rotates the DEK for the cluster, but not for snapshots, since they are stored in an internal, Redshift-managed S3 bucket.
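Key rotation can be triggered with the AWS CLI; a minimal sketch, assuming a cluster named my-redshift-cluster (a placeholder):

```shell
# Rotate the cluster encryption key (CEK) of an encrypted cluster
aws redshift rotate-encryption-key \
  --cluster-identifier my-redshift-cluster
```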
Enable encryption at rest with cli:
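A minimal sketch, assuming a cluster named my-redshift-cluster and a KMS key of your choice (both placeholders):

```shell
# Enable encryption at rest on an existing cluster using a KMS key
# (cluster identifier and key ARN are placeholders)
aws redshift modify-cluster \
  --cluster-identifier my-redshift-cluster \
  --encrypted \
  --kms-key-id <kms-key-arn>
```

If --kms-key-id is omitted, Redshift uses the default AWS managed key for Redshift in your account.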
Enable encryption with console:

Under the “Properties” tab, on “Database configurations” section, click on “Edit” then “Edit encryption”.
Choose the desired encryption type:

Sensitive Data Backups
Creating fresh backups of Redshift clusters is a best practice for data compliance, data redundancy, and data availability.
However, managing multiple backups, knowing where they are, and securing them against exposure and exfiltration can be challenging.
To keep our sensitive data safe, we must understand Redshift's snapshot capabilities and features.
DB Snapshots
Snapshots are point-in-time backups of clusters. They are generated either automatically or manually and are stored internally in Amazon S3 over an SSL connection.
Automated DB snapshots
Redshift clusters create “automated snapshots” by default.
These snapshots are deleted automatically when the retention period expires (default is 1 day), when automated snapshots are disabled, or when the cluster is deleted.
Setting the retention for any future automated snapshot with cli:
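A minimal sketch, assuming a cluster named my-redshift-cluster and a 7-day retention period (both placeholders):

```shell
# Retain future automated snapshots for 7 days
aws redshift modify-cluster \
  --cluster-identifier my-redshift-cluster \
  --automated-snapshot-retention-period 7
```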
Setting the retention for any future automated snapshot with AWS console:
Under the cluster page, click on “Edit” then under the “Backup” section set the retention for automated and manual snapshots:

Manual DB snapshots
If you do not specify a retention period when you create a manual snapshot, Redshift retains it forever, even after the cluster itself is deleted. Leaving behind unmanaged sensitive data creates an exposure that can be exploited by attackers who are looking for easy targets.
Creating manual snapshot with retention period - with cli:
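A minimal sketch, with the cluster name, snapshot name, and 7-day retention as placeholders:

```shell
# Create a manual snapshot that expires after 7 days
aws redshift create-cluster-snapshot \
  --cluster-identifier my-redshift-cluster \
  --snapshot-identifier my-manual-snapshot \
  --manual-snapshot-retention-period 7
```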
Setting retention for an existing manual snapshot with cli:
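A minimal sketch, assuming an existing snapshot named my-manual-snapshot (a placeholder):

```shell
# Set a 7-day retention period on an existing manual snapshot
aws redshift modify-cluster-snapshot \
  --snapshot-identifier my-manual-snapshot \
  --manual-snapshot-retention-period 7
```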
Setting retention for manual snapshots with AWS console:
Inside the desired snapshot click on “Actions” then “Manage access”:
Now under “Manual snapshot retention period” choose the wanted retention:

Sharing snapshots
One of the features of snapshots is sharing: it is possible to share snapshots between AWS accounts. A snapshot that is shared with another account can be restored in that account.
To authorize a specified AWS account to restore a specified snapshot with cli:
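A minimal sketch, with the snapshot name and the target account ID as placeholders:

```shell
# Allow account 111122223333 to restore the snapshot
aws redshift authorize-snapshot-access \
  --snapshot-identifier my-manual-snapshot \
  --account-with-restore-access 111122223333
```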
To check which AWS accounts a snapshot is shared with, check the “AccountsWithRestoreAccess” field when describing the snapshot:
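A minimal sketch, assuming a snapshot named my-manual-snapshot (a placeholder):

```shell
# List the accounts that can restore this snapshot
aws redshift describe-cluster-snapshots \
  --snapshot-identifier my-manual-snapshot \
  --query 'Snapshots[].AccountsWithRestoreAccess'
```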
To remove AWS account share on snapshot with cli:
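A minimal sketch, with the snapshot name and the account ID as placeholders:

```shell
# Revoke account 111122223333's ability to restore the snapshot
aws redshift revoke-snapshot-access \
  --snapshot-identifier my-manual-snapshot \
  --account-with-restore-access 111122223333
```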
To remove AWS account share on snapshot with AWS console:
Inside the desired snapshot click on “Actions” then “Manage access”:
Now under “Manage access”, click on “Remove account” to remove the unwanted account:

Disable automatic copying snapshots between AWS regions
Some compliance regulations, like GDPR, restrict data movement between geographic areas; for example, data containing EU-based PII cannot be stored in a US location. To ensure your organization remains compliant, it is recommended to disable the option to copy snapshots between regions.
Disable cross region snapshot copy with cli:
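A minimal sketch, assuming a cluster named my-redshift-cluster (a placeholder) that currently has cross-region snapshot copy enabled:

```shell
# Stop automatic copying of snapshots to the destination region
aws redshift disable-snapshot-copy \
  --cluster-identifier my-redshift-cluster
```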
Disable cross region snapshot copy with AWS console:
Under the cluster page, click on “Actions” then choose “Configure cross-region snapshot”:

Choose “No” and click on “Save”:

Enable Logs on Sensitive Data
Monitoring is an important part of the ongoing effort in defending sensitive data inside Redshift. Monitoring can also help detect suspicious activity and respond in time to prevent a leak. In our next blog, we will share specific examples of data attacks.
Redshift supports two types of monitoring:
- CloudTrail - audit management activity in AWS & API events
- Audit logs - audit activity connections and queries inside the Redshift cluster
Enable CloudTrail
CloudTrail management events are enabled by default (after a trail is created).
It is important to regularly monitor the management events related to your Redshift cluster and make sure that no configuration change puts your data at risk, for example, a Redshift cluster made publicly accessible, or suspicious temporary credentials being created.
Enable audit log
Audit logs consist of the following log types:
- Connection log – Logs authentication attempts, connections, and disconnections
- User log – Logs information about changes to database user definitions
- User activity log – Logs each query before it's run on the database
We will explore Redshift logging capabilities more deeply while trying to hunt suspicious activities in our next blog.
Connection and user logs are enabled by default and kept inside the database system tables STL_CONNECTION_LOG and STL_USERLOG, respectively.
Enabling Redshift audit logging exports the logs from the database to an S3 bucket or to CloudWatch.
Enable Redshift audit logs export with cli:
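A minimal sketch for exporting to S3, with the cluster name, bucket name, and key prefix as placeholders (the bucket policy must allow the Redshift service to write to it):

```shell
# Export connection and user logs to an S3 bucket
aws redshift enable-logging \
  --cluster-identifier my-redshift-cluster \
  --bucket-name my-redshift-audit-logs \
  --s3-key-prefix redshift-audit/
```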
To enable the user activity log, set the database parameter “enable_user_activity_logging” to “true” (it is disabled by default).
Enable Redshift user activity log with cli:
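A minimal sketch, assuming a parameter group named my-redshift-parametergroup (a placeholder) is attached to the cluster:

```shell
# Turn on the user activity log via the cluster's parameter group
# (requires a cluster reboot to take effect)
aws redshift modify-cluster-parameter-group \
  --parameter-group-name my-redshift-parametergroup \
  --parameters ParameterName=enable_user_activity_logging,ParameterValue=true
```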
Enable Redshift audit logs with AWS console:
Under the “Properties” tab, on “Database configurations” section, click on “Edit” then “Edit audit logging”:

To turn on the activity log, go to the Redshift parameter group and modify the value of enable_user_activity_logging:
Summary
As we continue to explore the different security features of Redshift, we hope you find this useful and will consider these steps to further reduce the risk of exposing your data in Redshift.
What's next?
The next blog in the series will cover some of the novel attack vectors our research team has discovered in Redshift. This is where the real action begins.
Stay tuned and secure!
About the Authors
Ofir Shaty is a Senior Security Researcher at Dig Security. A member of the Security Research team, Ofir has over 6 years of experience in Data Security and Web Application Security. When away from work on sunny days Ofir likes to hit the road with his motorcycle.
Ofir Balassiano is leading the security research team at Dig Security.
Ofir has over 8 years of experience in security research specifically in cloud security as well as low-level OS internals research.
