Use Dropshare with an AWS S3 Bucket and a custom Domain with SSL Certificate


For the last couple of years I've been using Dropshare to quickly share files with colleagues and friends. The cool thing about Dropshare is that I can control the server and domain where the files are uploaded.

In the past, I've used the SFTP option and a DigitalOcean droplet to host those files. A shell script removed all files older than 24 hours to keep things tidy.

While setting up a new computer, I revisited the settings of Dropshare and decided to give the AWS S3 option a try. S3 is almost free with the low number of files I'm sharing and doesn't require me to host a server. A win-win situation.

Here's a quick step-by-step guide on how I created a new S3 bucket, a Cloudfront distribution, and a lifecycle rule to replicate my current setup on AWS.

Creating the S3 bucket #

In this post I'm using the domain share.example.com as a placeholder. Replace it with your own domain when you apply this guide to your own use case.

First we need a new AWS S3 bucket.

  1. Create a new S3 bucket in the AWS Console called share.example.com (the bucket name must match the domain for S3's virtual hosting to work).
  2. Uncheck the "Block all public access" setting. The files we will upload via Dropshare have to be public, after all.
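For reference, the same two steps can be sketched with the AWS CLI. This assumes a configured CLI; share.example.com is the placeholder bucket name used throughout this post.

```shell
# Create the bucket (replace share.example.com with your own domain)
aws s3api create-bucket --bucket share.example.com --region us-east-1

# Disable the "Block all public access" settings so uploads can be public
aws s3api put-public-access-block \
  --bucket share.example.com \
  --public-access-block-configuration \
  "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"
```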

Now we could use the "Virtual Hosting" feature of S3 to serve our files. You would have to add a CNAME DNS record like this (don't do this yet):

    share.example.com  CNAME  share.example.com.s3.amazonaws.com

This CNAME record would allow us to access the uploaded files via HTTP on http://share.example.com, but not via HTTPS.

Let's add an SSL certificate so we can serve our files under HTTPS.

SSL certificate and Cloudfront Distribution #

Let's create an SSL certificate through AWS Certificate Manager. It's important that you select the us-east-1 region, because Cloudfront only accepts certificates from that region.

Click on "Request a Certificate" and follow the instructions. I've chosen to validate my domain through DNS. In a matter of seconds the SSL certificate has been issued and is ready to be used.
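If you prefer the CLI, the certificate request can be sketched like this (share.example.com is a placeholder; note the explicit us-east-1 region):

```shell
aws acm request-certificate \
  --domain-name share.example.com \
  --validation-method DNS \
  --region us-east-1
# Prints the certificate ARN. Add the DNS validation record shown in the
# ACM console (or via `aws acm describe-certificate`) to your DNS settings.
```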

Cloudfront Distribution #

We can't assign the newly created SSL certificate to our S3 bucket directly. We have to put a Cloudfront distribution in between.

  1. Open the Cloudfront Console and click on "Create Distribution".
  2. Under "Origin Domain Name" select your S3 bucket. The UI should auto select a bunch of settings. Feel free to adjust them to your liking or keep them as is.
  3. Under "Distribution Settings" add our custom domain (share.example.com) to the "Alternate Domain Names (CNAMEs)" field.
  4. Under "SSL Certificate" select "Custom SSL Certificate" and pick our previously created SSL certificate from the list of options.

Click on "Create Distribution" to deploy the distribution. This might take a while.

After your distribution has been successfully deployed, you should see it in the index table. Copy the value of "Domain Name" to your clipboard. It's a value in the form dxxxxxxxxxxxxx.cloudfront.net.

In your DNS settings for your domain, now add a new CNAME record for share.example.com that points to the Cloudfront distribution:

    share.example.com  CNAME  dxxxxxxxxxxxxx.cloudfront.net
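You can check whether the record has propagated with dig (share.example.com again standing in for your own domain):

```shell
dig +short CNAME share.example.com
# Should eventually print your dxxxxxxxxxxxxx.cloudfront.net domain
```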

Let's set up Dropshare to use our S3 bucket.

Add S3 bucket to Dropshare #

In Dropshare, go to "Settings" → "Connections" → "+ New Connection" → "Third Party Cloud" → "AWS S3".

If you have a dedicated Amazon user for Dropshare, great! If not, it's best to follow Dropshare's documentation on how to create a new user with the correct permissions.

After DNS propagation our uploaded and shared files should be available under https://share.example.com (or whatever domain you're using).
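To verify the whole chain before uploading through Dropshare, a quick smoke test with the AWS CLI and curl (placeholder names, assumes a configured CLI):

```shell
# Upload a test file directly to the bucket, publicly readable
echo "hello" > test.txt
aws s3 cp test.txt s3://share.example.com/test.txt --acl public-read

# Fetch it through Cloudfront over HTTPS; expect a 200 status
curl -I https://share.example.com/test.txt
```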

Extra: Automatically delete older files #

At the beginning of this post I mentioned that in my old setup, a shell script automatically deleted files older than 24 hours.

You can replicate this behaviour by using S3's lifecycle rules.

  1. In your bucket settings, navigate to "Management" → "Lifecycle Rules" and click on "Create Lifecycle Rule".
  2. Give the rule a descriptive name and select the scope "This rule applies to all objects in the bucket". Acknowledge that this rule will apply to all items in this bucket.
  3. Select the "Expire current versions of objects" action in the list.
  4. In "Number of days after object creation", enter the number of days after which files should automatically be deleted.

Hit "Create Rule" and you're done.
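The same rule can also be created with the AWS CLI; a sketch with a hypothetical rule name and a one-day expiration, matching my old 24-hour script:

```shell
aws s3api put-bucket-lifecycle-configuration \
  --bucket share.example.com \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-after-one-day",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": { "Days": 1 }
    }]
  }'
```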

Extra: AWS Policy for S3 bucket #

You can use the following AWS policy to give a user access to only your specific S3 bucket (replace share.example.com with your bucket name). Note that users associated with this policy won't be able to list all S3 buckets in the account.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::share.example.com",
                    "arn:aws:s3:::share.example.com/*"
                ]
            }
        ]
    }

Outro #

I hope this little guide was helpful for at least one other person. If you are looking for a neat, minimal landing page template for Dropshare, you might want to check out the one I'm using.