We audited the cloud bill of a 200-person technology company last year. Their monthly AWS spend was ₹28 lakh. After four weeks of analysis and six weeks of remediation, it was ₹17 lakh — same workloads, same performance, same reliability. ₹11 lakh per month, eliminated without changing a single line of application code.
The waste had accumulated over four years. Not through negligence — through normal, fast-moving engineering: environments spun up for experiments that outlived their purpose, instances provisioned at peak sizes that were never downsized, data stored in the wrong storage tier, workloads running on on-demand pricing that should have been on reserved capacity years ago.
The Taxonomy of Cloud Waste
Idle and underutilised resources (typically 15–20% of spend)
Instances running at under 10% CPU utilisation. Development environments running 24/7 when they're used 40 hours a week. RDS instances sized for a peak that happened two years ago and hasn't recurred. These are the easiest wins — identifiable in a day, remediable in a week.
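The 10% rule of thumb above is easy to mechanise once utilisation figures are exported. A minimal sketch, assuming average CPU numbers have already been pulled (for example from CloudWatch) — the instance IDs and figures are illustrative:

```python
# Flag rightsizing candidates from exported utilisation data.
# Instance IDs and CPU averages below are hypothetical.

IDLE_THRESHOLD = 10.0  # percent average CPU, per the 10% rule of thumb

def idle_candidates(utilisation: dict[str, float]) -> list[str]:
    """Return instance IDs whose average CPU sits below the threshold."""
    return sorted(
        instance_id
        for instance_id, avg_cpu in utilisation.items()
        if avg_cpu < IDLE_THRESHOLD
    )

sample = {
    "i-0a1b2c (api-prod)": 42.5,
    "i-0d3e4f (dev-sandbox)": 3.1,
    "i-0g5h6i (batch-worker)": 8.7,
}
print(idle_candidates(sample))
# → ['i-0d3e4f (dev-sandbox)', 'i-0g5h6i (batch-worker)']
```

In practice the window matters: average CPU over 30 days hides daily peaks, so pair this with a p95 check before downsizing anything.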
Wrong pricing model (typically 10–15% of spend)
On-demand pricing is the most expensive way to run stable workloads. Any resource with predictable usage patterns — production databases, baseline application servers, batch processing — should be on reserved instances or savings plans. The discount is 40–60%. The downside is a 1–3 year commitment, which most stable production workloads can easily make.
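The arithmetic is worth making concrete. A sketch with hypothetical hourly rates (not real AWS prices), taking the low end of the 40–60% discount range:

```python
# Compare on-demand vs reserved pricing for one stable, always-on workload.
# The hourly rate is hypothetical; 40% is the low end of typical discounts.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    return hourly_rate * hours

on_demand_rate = 100.0                        # assumed ₹/hour
reserved_rate = on_demand_rate * (1 - 0.40)   # 40% discount, 1-year term

saving = monthly_cost(on_demand_rate) - monthly_cost(reserved_rate)
print(f"Monthly saving: ₹{saving:,.0f}")  # → Monthly saving: ₹29,200
```

At these assumed rates a single always-on database saves ₹29,200 per month; multiply across every stable production resource and the 10–15% figure follows.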
Data storage and transfer inefficiencies (typically 5–10% of spend)
S3 objects in Standard tier that haven't been accessed in 180 days. Data transferred across regions unnecessarily. Snapshots retained far beyond the retention policy. EBS volumes orphaned by terminated instances, still billing every month. None of these is large individually. Together they add up.
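The 180-day rule is the kind of policy best expressed as code (in AWS it would become an S3 lifecycle rule). A sketch, assuming last-access timestamps are available, say from S3 server access logs — the keys and dates are illustrative:

```python
# Select objects for a colder storage tier based on last access.
# Assumes last-access dates were extracted beforehand (e.g. from
# S3 access logs); keys and dates are hypothetical.

from datetime import date, timedelta

STALE_AFTER = timedelta(days=180)  # the 180-day threshold from the text

def tiering_candidates(objects: dict[str, date], today: date) -> list[str]:
    """Return keys not accessed within the last 180 days."""
    return sorted(
        key for key, last_access in objects.items()
        if today - last_access > STALE_AFTER
    )

today = date(2024, 6, 1)
objects = {
    "logs/2022/app.log.gz": date(2022, 11, 3),
    "reports/latest.pdf": date(2024, 5, 28),
}
print(tiering_candidates(objects, today))  # → ['logs/2022/app.log.gz']
```

Once the rule is agreed, encode it as a lifecycle policy on the bucket so it runs continuously rather than as a one-off audit.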
Architectural waste (variable, often largest)
Applications that can't scale horizontally — so vertical scaling (expensive) is the only option. Synchronous architectures that require always-on servers to handle peak loads that last two hours a day. Monolithic databases that can't be partitioned or cached effectively. These require architectural work to address, but the return is ongoing.
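The two-hour-peak problem has simple arithmetic behind it. A sketch with assumed numbers showing how little of the paid-for capacity an always-on, peak-sized service actually uses:

```python
# Effective utilisation of a service provisioned for a short daily peak.
# peak_hours and baseline_fraction are assumptions, not measurements.

peak_hours = 2
baseline_fraction = 0.25  # assumed off-peak load relative to peak capacity

paid_capacity_hours = 24  # always-on, sized for the peak
needed = peak_hours + (24 - peak_hours) * baseline_fraction
utilisation = needed / paid_capacity_hours
print(f"Effective utilisation: {utilisation:.0%}")  # → 31%
```

On these assumptions roughly two-thirds of the capacity is paid for and never used, which is the case for queue-based or autoscaling designs even before any instance-level rightsizing.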
FinOps as a Practice, Not a Project
A one-time cost optimisation engagement without ongoing governance is a depreciating asset. Without tagging enforcement, budget alerts, and regular rightsizing reviews, new waste accumulates as fast as the old waste was removed. Sustainable FinOps means building cost visibility and ownership into engineering culture — where every team can see their spend, understands it, and is accountable for it.
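Tagging enforcement is the piece most teams skip, and it is small enough to automate on day one. A minimal sketch — the required tag keys are an assumption standing in for whatever your tagging policy mandates:

```python
# Minimal tag-compliance check for cost ownership.
# REQUIRED_TAGS is a hypothetical policy; resource IDs are illustrative.

REQUIRED_TAGS = {"team", "environment", "cost-centre"}

def untagged(resources: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    """Map resource ID -> sorted list of required tag keys it is missing."""
    report = {}
    for resource_id, tags in resources.items():
        missing = sorted(REQUIRED_TAGS - tags.keys())
        if missing:
            report[resource_id] = missing
    return report

resources = {
    "i-0a1b2c": {"team": "payments", "environment": "prod",
                 "cost-centre": "cc-104"},
    "vol-9z8y7x": {"environment": "dev"},
}
print(untagged(resources))
# → {'vol-9z8y7x': ['cost-centre', 'team']}
```

Run a check like this in CI or a scheduled job and route each team's untagged resources back to them; spend nobody owns is spend nobody cuts.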
Ready to solve this for your business?
Talk to our engineering team about your specific challenge.