Create a YUM Repo on S3
2026-01-26

Intro
In this post, I am going to describe the process of creating a YUM/DNF repo on S3. Now, the first question that you are likely asking is
why would you want to do this? Well, there are a couple of different reasons that you would want to host your own repository. Perhaps, you
want to distribute the application that you wrote. Or perhaps like me, you want to be able to use dnf update to update a package, but the
application in question does not have a repository and only allows you to download the RPM package from their website. No matter the reason,
having your own repository can make life easier in a number of ways.
Your next question is probably why S3? Well, there really is an almost limitless number of options here. You could put it on an existing web server. You could use a VPS on something like Linode. You could stick it in Cloudflare with Workers and R2. I decided on S3 for a couple of reasons. While I self-host a lot of different services, a public-facing web server is not one of them right now. It would add a lot of maintenance overhead to add one, and I didn't feel like a simple YUM repo was worth the hassle. I would also have this overhead, plus some cost, if I used a VPS, so that option was out as well. I initially thought that I would just use Cloudflare. I have an account there, but I have never really used any of their Workers features, nor have I used R2. Well, when I dug into the documentation, it just seemed more complicated than what I was looking for. So I settled on S3. I have been using AWS for years, and it is the provider that I know the best. I already have a decent amount of stuff there as well, so this would not be a noticeable change to my bill.
The Setup
Now that we have the whys out of the way, let's move on to the how.
Setting up the S3 Bucket
The first step is to create an S3 bucket.
aws s3 mb s3://<YOUR-BUCKET-NAME>
Next, we will disable public access blocking for the bucket.
aws s3api put-public-access-block --bucket <YOUR-BUCKET-NAME> --public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"
Then, we create a new bucket policy to allow public read access to the objects in the bucket. Save the following as bucketPolicy.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccessRepo",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<YOUR-BUCKET-NAME>/*"
}
]
}
aws s3api put-bucket-policy --bucket <YOUR-BUCKET-NAME> --policy file://bucketPolicy.json
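If you want to confirm that the policy actually took effect, you can read it back. This is just a sanity check, not a required step:

```shell
# Read the bucket policy back to confirm it was applied.
# <YOUR-BUCKET-NAME> is the bucket created earlier.
aws s3api get-bucket-policy --bucket <YOUR-BUCKET-NAME> --query Policy --output text
```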
Lastly, we will enable static website hosting for the bucket. This is optional, but I think that it makes things easier in the long run.
aws s3 website s3://<YOUR-BUCKET-NAME> --index-document index.html --error-document error.html
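With website hosting enabled, the repo is served from the bucket's website endpoint. As a quick sketch (assuming a hypothetical bucket named my-yum-repo in us-east-1; substitute your own bucket and region, and note that some regions use a dot instead of a dash, e.g. s3-website.eu-central-1), the endpoint URL looks like this:

```shell
# Build the S3 website endpoint URL from the bucket name and region.
# "my-yum-repo" and "us-east-1" are example values, not from this post.
BUCKET=my-yum-repo
REGION=us-east-1
echo "http://${BUCKET}.s3-website-${REGION}.amazonaws.com"
```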
Setting up the Repository
First, you need to create a directory where you will put the .rpm package and the associated data.
Once you have copied the rpm file there, you will need to create the various metadata files that YUM/DNF expect.
createrepo </path/to/repo/directory>
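Putting those two steps together, a minimal end-to-end sketch (assuming a hypothetical package myapp-1.0-1.x86_64.rpm and a repo directory at ~/myrepo; both names are examples) looks like this:

```shell
# Create the repo directory and copy the package in.
mkdir -p ~/myrepo
cp myapp-1.0-1.x86_64.rpm ~/myrepo/

# Generate the YUM/DNF metadata; this creates a repodata/ subdirectory
# containing repomd.xml and the package lists.
createrepo ~/myrepo

# Verify the metadata was generated.
ls ~/myrepo/repodata/
```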
Now we sync the directory with the S3 bucket.
aws s3 sync </path/to/repo/directory> s3://<YOUR-BUCKET-NAME>
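When you later add a new package or a new version, the workflow is the same: drop the new .rpm into the directory, regenerate the metadata, and sync again. createrepo's --update flag reuses the existing metadata for packages that have not changed, which speeds things up on larger repos. A sketch, with myapp-1.1-1.x86_64.rpm as a hypothetical new package:

```shell
# Add the new package, refresh only the changed metadata, and push to S3.
cp myapp-1.1-1.x86_64.rpm </path/to/repo/directory>/
createrepo --update </path/to/repo/directory>
aws s3 sync </path/to/repo/directory> s3://<YOUR-BUCKET-NAME>
```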
Optional Steps for Enhancement
If you want to make things a little more polished, we can create a CloudFront distribution, give it an SSL certificate, and create an alias record in Route53.
First, we will create a CloudFront distribution configuration and save it as distribution-config.json.
{
"DistributionConfig": {
"CallerReference": "unique-id",
"Aliases": {
"Quantity": 1,
"Items": [
"your-domain-here"
]
},
"DefaultRootObject": "",
"Origins": {
"Quantity": 1,
"Items": [
{
"Id": "s3-origin",
"DomainName": "s3-website-origin-endpoint",
"OriginPath": "",
"CustomHeaders": {
"Quantity": 0
},
"CustomOriginConfig": {
"HTTPPort": 80,
"HTTPSPort": 443,
"OriginProtocolPolicy": "http-only",
"OriginSslProtocols": {
"Quantity": 4,
"Items": [
"SSLv3",
"TLSv1",
"TLSv1.1",
"TLSv1.2"
]
},
"OriginReadTimeout": 30,
"OriginKeepaliveTimeout": 5
},
"ConnectionAttempts": 3,
"ConnectionTimeout": 10,
"OriginShield": {
"Enabled": false
},
"OriginAccessControlId": ""
}
]
},
"OriginGroups": {
"Quantity": 0
},
"DefaultCacheBehavior": {
"TargetOriginId": "s3-origin",
"TrustedSigners": {
"Enabled": false,
"Quantity": 0
},
"TrustedKeyGroups": {
"Enabled": false,
"Quantity": 0
},
"ViewerProtocolPolicy": "redirect-to-https",
"AllowedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
],
"CachedMethods": {
"Quantity": 2,
"Items": [
"HEAD",
"GET"
]
}
},
"SmoothStreaming": false,
"Compress": true,
"LambdaFunctionAssociations": {
"Quantity": 0
},
"FunctionAssociations": {
"Quantity": 0
},
"FieldLevelEncryptionId": "",
"CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
"GrpcConfig": {
"Enabled": false
}
},
"CacheBehaviors": {
"Quantity": 0
},
"CustomErrorResponses": {
"Quantity": 0
},
"Comment": "",
"Logging": {
"Enabled": false,
"IncludeCookies": false,
"Bucket": "",
"Prefix": ""
},
"PriceClass": "PriceClass_All",
"Enabled": true,
"ViewerCertificate": {
"CloudFrontDefaultCertificate": false,
"ACMCertificateArn": "acm-certificate-arn",
"SSLSupportMethod": "sni-only",
"MinimumProtocolVersion": "TLSv1.2_2021",
"Certificate": "acm-certificate-arn",
"CertificateSource": "acm"
},
"Restrictions": {
"GeoRestriction": {
"RestrictionType": "none",
"Quantity": 0
}
},
"WebACLId": "",
"HttpVersion": "http2",
"IsIPV6Enabled": true,
"ContinuousDeploymentPolicyId": "",
"Staging": false
}
}
Make sure that you update the fields above as needed. Then, we create the distribution.
aws cloudfront create-distribution --cli-input-json file://distribution-config.json
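The create-distribution call returns a large JSON document; the piece you will need for the Route53 step is the distribution's domain name. One way to capture just that value is the CLI's --query option:

```shell
# Capture the CloudFront domain name (something like dXXXXXXXXXXXX.cloudfront.net)
# for use as the Route53 alias target.
aws cloudfront create-distribution \
  --cli-input-json file://distribution-config.json \
  --query 'Distribution.DomainName' --output text
```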
Lastly, we will create an alias record in Route53. To do this, we will create a change batch JSON and save it as alias-record.json.
{
"Comment": "Alias record for CloudFront YUM repo",
"Changes": [
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "<your-domain-name>",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "Z2FDTNDATAQYW2",
"DNSName": "<your-cloudfront-distribution-url>",
"EvaluateTargetHealth": false
}
}
}
]
}
Apply the changes.
aws route53 change-resource-record-sets --hosted-zone-id "/hostedzone/<your-hosted-zone-id>" --change-batch file://alias-record.json
Running this command will return a change id. You can use this change id to track the status of the change.
aws route53 get-change --id "/change/<your-change-id>"
Once the status says INSYNC, you are good to go.
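A quick way to sanity-check the repo before pointing dnf at it is to fetch the repomd.xml metadata file, which is the entry point YUM/DNF reads first:

```shell
# A 200 response here means the repo metadata is reachable.
# <your-domain-name> is the alias record created above.
curl -I https://<your-domain-name>/repodata/repomd.xml
```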
Using the new repo
To use your new repository, you are going to create a new repo file in /etc/yum.repos.d. You can name it whatever makes sense for you.
In the file, put the following:
[repo-friendly-name]
name=<repo-name>
baseurl=<url-created-above>
enabled=1
gpgcheck=0
Then, you can run the following to test things out:
sudo dnf clean all
sudo dnf repolist
sudo dnf install <application>
And that's it, you have a repo in S3 that's all ready to go.