Divesh Kumar

How I Slashed My AWS Bill to $0 While Securing a 46GB Backup (In 1 Command) πŸš€

Managing cloud infrastructure often feels like a balancing act between utility and cost. Recently, I faced a challenge: I needed to perform a total shutdown of an AWS account while ensuring a 46GB infrastructure backup was perfectly preserved.

Doing this manually via the AWS Console is a recipe for missed resources and lingering costs. Instead, I chose the path of automation. This article breaks down how I transformed a complex multi-service environment into a "single-enter" terminal operation.


πŸ” Phase 1: The Automated Audit

Before you can back up, you have to know what you have. A simple look at the EC2 dashboard isn't enough.

I used scripts to crawl the account, mapping every dependency. One crucial trick was inspecting Lambda environment variables. Often, these variables contain hidden RDS connection strings, MongoDB URIs, or API keys that aren't immediately visible in the service-specific dashboards.

Mapping these "hidden" connections ensured that I didn't just back up the code, but also identified every data source that needed extraction.
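As a sketch of that inspection step, here is how the environment-variable crawl can look (assuming the AWS CLI with valid credentials and jq are available; the extract_env helper and the sample payload are mine for illustration):

```shell
# Sketch: surface hidden connection strings (RDS endpoints, MongoDB URIs,
# API keys) by dumping every Lambda function's environment variables.
# Assumes: aws cli + credentials for the live loop, jq for JSON parsing.

# Reusable filter: flatten Environment.Variables into KEY=value lines.
extract_env() {
  jq -r '.Environment.Variables // {} | to_entries[] | "\(.key)=\(.value)"'
}

# Live usage: iterate over every function in the account.
# for fn in $(aws lambda list-functions --query 'Functions[].FunctionName' --output text); do
#   echo "== $fn =="
#   aws lambda get-function-configuration --function-name "$fn" | extract_env
# done

# Demo on a sample payload shaped like get-function-configuration output:
sample='{"FunctionName":"MyFunction","Environment":{"Variables":{"DB_HOST":"mydb.rds.amazonaws.com"}}}'
echo "$sample" | extract_env
```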

πŸ“¦ Phase 2: Surgical Extraction

Once the map was ready, the terminal took over. Here’s the breakdown of the core commands used for each service:

1. S3 Sync

For high-performance file retrieval, aws s3 sync is the gold standard. It only copies new or changed files, making it efficient for large buckets.

aws s3 sync s3://my-bucket-name ./backup/s3/my-bucket-name
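
To cover every bucket in the account, the same command can be wrapped in a loop. A minimal sketch with a dry-run guard (the run and sync_bucket helpers and the DRY_RUN switch are my own scaffolding, not part of the AWS CLI):

```shell
# Sketch: sync every bucket into ./backup/s3/<bucket>. With DRY_RUN=1
# (the default here) commands are only printed; set DRY_RUN=0 to execute.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

sync_bucket() {
  run aws s3 sync "s3://$1" "./backup/s3/$1"
}

# Live usage: loop over every bucket (needs credentials):
# for b in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
#   sync_bucket "$b"
# done
sync_bucket my-bucket-name
```

Printing the commands first is cheap insurance: you review the exact sync targets before any bytes move.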

2. Lambda Downloads

Retrieving Lambda code isn't as direct as S3. You first get a temporary signed URL, which you then use to download the ZIP file.

# Get the function details
aws lambda get-function --function-name MyFunction

Note: Look for the Code.Location URL in the JSON response. It's a short-lived presigned link to the deployment ZIP, so download it promptly.
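
A sketch of the full retrieval, assuming jq and curl alongside the AWS CLI; the commented lines show the live calls, and the sample payload mimics the shape of the get-function response:

```shell
# Live usage (needs credentials): grab the presigned URL and download
# the ZIP immediately, since the link expires quickly.
# url=$(aws lambda get-function --function-name MyFunction \
#         --query 'Code.Location' --output text)
# curl -sSL "$url" -o ./backup/lambda/MyFunction.zip

# The same extraction via jq, demoed on a sample response:
sample='{"Configuration":{"FunctionName":"MyFunction"},"Code":{"Location":"https://example.com/signed.zip"}}'
echo "$sample" | jq -r '.Code.Location'
```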

3. RDS SQL Dumps

For local portability and ease of restoration, I bypassed snapshots and went straight for raw .sql files using pg_dump (for PostgreSQL) or mysqldump.

pg_dump -h [endpoint] -U [user] -d [dbname] -f backup.sql

4. DynamoDB JSON Export

For NoSQL data, I used full table scans to export everything into portable JSON. The AWS CLI paginates the scan automatically, so even tables larger than DynamoDB's 1MB per-request limit come out complete.

aws dynamodb scan --table-name MyTable --output json > MyTable.json
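
Because the CLI merges the paginated pages into one document, a quick jq pass over the dump confirms the export landed. A sketch using a sample payload shaped like the scan output (jq assumed installed; in real use, pipe the actual MyTable.json through the same filter):

```shell
# Sanity-check the dump by counting exported items. The sample below
# mimics a scan response; swap in: jq '.Items | length' MyTable.json
sample='{"Items":[{"id":{"S":"1"}},{"id":{"S":"2"}}],"Count":2,"ScannedCount":2}'
echo "$sample" | jq '.Items | length'
```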

🧹 Phase 3: The "Zero Bill" Cleanup

The most satisfying part? Watching the cost counter stop. Once the 46GB backup was verified locally, I ran a cleanup script to target the "silent cost killers":

  • EC2 Instances: aws ec2 terminate-instances --instance-ids i-12345...
  • Load Balancers: aws elbv2 delete-load-balancer --load-balancer-arn ...
  • Elastic IPs: Often overlooked! Releasing them is vital to avoid hourly idle charges.

    aws ec2 release-address --allocation-id eipalloc-...
    
  • RDS Instances: Deleted after verifying the local SQL dumps were 100% intact. I skipped the final snapshot to ensure zero remaining storage costs.

    aws rds delete-db-instance --db-instance-identifier my-db --skip-final-snapshot
    
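The cleanup steps above can be sketched the same way, shown here for the Elastic IP sweep. The run helper, the DRY_RUN switch, and the eipalloc-0123example ID are hypothetical scaffolding, not AWS CLI features:

```shell
# Sketch: release every unassociated Elastic IP. With DRY_RUN=1 (the
# default here) the release commands are only printed for review; rerun
# with DRY_RUN=0 once the list looks right.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

release_eip() { run aws ec2 release-address --allocation-id "$1"; }

# Live usage: unassociated addresses have no AssociationId field, so the
# JMESPath filter below selects exactly those (needs credentials):
# for id in $(aws ec2 describe-addresses \
#     --query 'Addresses[?AssociationId==`null`].AllocationId' --output text); do
#   release_eip "$id"
# done
release_eip eipalloc-0123example
```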

☁️ Phase 4: Final Cloud Redundancy

A backup isn't a backup until it's in at least two places. I synced the entire local backup/ folder to Google Drive using rclone. This provided geo-redundancy and peace of mind, knowing the data was safe outside of the AWS ecosystem.
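
A minimal sketch of that sync, assuming an rclone remote already configured under the name gdrive (the remote and folder names here are placeholders):

```shell
# Mirror the local backup to Google Drive via a preconfigured rclone
# remote ("gdrive" was set up once with: rclone config).
rclone sync ./backup gdrive:aws-backup --progress

# Verify the remote copy matches the local files:
rclone check ./backup gdrive:aws-backup
```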


πŸ’‘ The Takeaway

The result was a comprehensive, documented backup and a clean AWS bill.

Modern DevOps is about building scripts that handle the manual heavy lifting. By automating the research, extraction, and cleanup, you minimize human error and ensure that "turning off the lights" doesn't leave any expensive bulbs burning in the background.

How do you handle your infrastructure decommissioning? Let's discuss in the comments!

#aws #devops #automation #cloudcomputing #backup
