10 megabyte gzip limit in AWS CloudFront?

@Karen161

Posted in: #AmazonAws #AmazonCloudfront #Gzip #Optimization #WebDevelopment

My website is a static blog of visualizations and I am serving it via AWS S3.

Some of the visualizations use large CSVs, ranging from 1 megabyte to 20 megabytes.

I've set up CloudFront to gzip files automatically, but for some reason there is a maximum size of 10 megabytes.

As a result, the page that depends on a 20 megabyte CSV takes around 5 seconds to load, because CloudFront isn't gzipping that file.

I've checked, and if this file were gzipped it would only be around 2 megabytes.

Is there any reason CloudFront doesn't gzip files past 10 megabytes, and is there any sort of workaround for automatically serving a compressed version of this file without too much hassle?


@Odierno851

My website is a static blog


Since your site is static, it is an excellent candidate for s3_website, which gzips files locally before uploading, and also handles setting the Content-Type and invalidating the CloudFront cache — a single s3_website push deploys the whole site. It's a no-brainer once you get it set up, and I highly recommend it. It's also free and open source.

You need both Ruby and Java installed to run it.

To get you started, here is a sample s3_website.yml config that I use for an S3 bucket + CloudFront static site served over HTTPS with HTTP/2 enabled:

# S3 Credentials
s3_id: <redacted>
s3_secret: <redacted>

# Site URL
s3_bucket: example.com

# Config options
s3_endpoint: eu-west-1
index_document: index.html
cache_control:
  "assets/*": public, no-transform, max-age=1209600, s-maxage=1209600
  "*": no-cache, no-store
gzip:
  - .html
  - .css
  - .js
  - .ico
  - .svg

# CloudFront
cloudfront_distribution_id: AABBCC123456
cloudfront_wildcard_invalidation: true
cloudfront_invalidate_root: true
cloudfront_distribution_config:
  default_cache_behavior:
    min_ttl: <%= 60 * 60 * 24 %>
  http_version: http2
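One detail worth noting for the question above: the gzip list in this sample doesn't include .csv, so the large CSV files would be uploaded uncompressed. Adding that extension is enough. A minimal sketch (the config fragment is illustrative, and the push command is commented out because it needs Ruby, Java, and real AWS credentials):

```shell
# write a gzip section with .csv added, so large CSVs are compressed
# locally before upload (illustrative fragment of s3_website.yml)
cat > s3_website.yml <<'EOF'
gzip:
  - .html
  - .css
  - .js
  - .csv
EOF

# deploy: s3_website gzips matching files, uploads them to S3 with
# Content-Encoding: gzip, and invalidates CloudFront
# s3_website push
```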


@Welton855

This is a design limitation:


The file size must be between 1,000 and 10,000,000 bytes.

docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html

Compressing files is resource-intensive, so the designers of CloudFront placed an upper bound on the size of files that CloudFront will spend resources compressing on-the-fly.

There's not an "automatic" workaround.

If your files are that large, and compressible, it's probably in your interest to store them compressed in S3 anyway, to reduce your storage costs.

Gzip the files yourself with gzip -9, which may actually produce slightly smaller files than CloudFront would generate (gzip has several compression levels, with -9 being the most aggressive, and the level CloudFront uses does not appear to be documented). Then remove the .gz extension so each object keeps its original name, and upload the files to S3 with Content-Encoding: gzip set in the upload metadata.
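A minimal sketch of those steps (the generated sample data, bucket name, and object path are illustrative; the upload line is commented out because it requires a configured AWS CLI):

```shell
# generate a sample CSV standing in for a real data file
seq 1 5000 | awk '{print $1 "," $1*2}' > data.csv

# compress at the highest level; gzip replaces data.csv with data.csv.gz
gzip -9 data.csv

# drop the .gz suffix so the object keeps its original name in S3
mv data.csv.gz data.csv

# upload with the encoding declared, so S3/CloudFront send
# Content-Encoding: gzip to browsers (needs AWS credentials):
# aws s3 cp data.csv s3://example.com/data.csv \
#     --content-encoding gzip --content-type text/csv
```

Browsers decompress transparently when the Content-Encoding header is present, so the page's fetch code doesn't change; the caveat is that the object is now always gzipped, so every client must accept gzip, which all mainstream browsers do.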
