[Cover image: clocks of many different styles, rusting and aging.]

Fix your opportunity cost estimate for productivity tools

I spend a lot of time thinking about improving people’s efficiency and about opportunity cost. Over time, I’ve found many high-level resources discussing the balance of build vs. buy, but they rarely touch on pragmatic solutions. I want to help fix that, especially for software teams. The analysis works the same whether you’re the buyer or the seller, but I’ll adopt the buyer’s perspective.

When thinking about making a team more efficient, there are three main cost factors folks get wrong:

  1. Time cost of implementing solutions
  2. Time cost of maintaining those solutions
  3. Running cost of those solutions

These costs should help you evaluate what else the team could do with that time and how much an efficiency improvement would be worth. In my experience, companies tend to underestimate the time cost of developing and maintaining these systems while overestimating the compute cost of running them. I’ll focus mainly on quick evaluation points to identify the break-even point for your particular problem and its potential solutions.

Time cost to implement

Teams should always account for the cost of making a task more efficient; otherwise, hidden expenses will bite them later. Unfortunately, it’s all too easy to ignore how long we spend optimizing something with a script or a fully custom system and to focus only on the gain after the fact. The notorious difficulty of estimating how long software projects will take exacerbates this hidden cost.

One of my favorite xkcd comics is a simple table showing how long you can spend optimizing a task before the optimization costs more than the time it saves. I’ve adapted that table to account for business days/hours. I’ve also created a short script to help you play with the numbers and understand how much time a solution is worth.

How long can you work on making a task more efficient before you’re spending more time than you save across five years?

| Time saved ↓ / Task frequency → | 5/day | daily | weekly | monthly | yearly |
|---|---|---|---|---|---|
| 1 second | 1.8 hours | 21.7 min | 4.3 min | 1 min | 5 sec |
| 1 minute | 13.5 days | 2.7 days | 4.3 hours | 1 hour | 5 min |
| 1 hour | | 162.5 days | 32.5 days | 7.7 days | 5 hours |
| 1 day | | | 260 days | 61.9 days | 5 days |

This table helps build some intuition for how expensive small recurring tasks are in the long run. The gains become even more significant once you factor in scale: you can multiply your time budget by the number of people the change benefits. Let's look at an example.
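
If you want to play with the numbers yourself, below is a minimal sketch of the same arithmetic in Python. It's my own restatement, not the script embedded in the page, and the helper names plus the calendar constants (8-hour days, 5-day weeks, 52 working weeks/year, a 5-year horizon) are assumptions chosen to match the table:

```python
# Break-even budget: how long can you spend making a task faster before
# the optimization costs more than the time it saves over the horizon?
HOURS_PER_DAY = 8
DAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 52   # matches the table's time math; the footnote uses 50 for dollars
HORIZON_YEARS = 5

OCCURRENCES_PER_YEAR = {
    "5/day": 5 * DAYS_PER_WEEK * WEEKS_PER_YEAR,
    "daily": DAYS_PER_WEEK * WEEKS_PER_YEAR,
    "weekly": WEEKS_PER_YEAR,
    "monthly": 12,
    "yearly": 1,
}

def budget_hours(seconds_saved: float, frequency: str, team_size: int = 1) -> float:
    """Max hours worth investing to save `seconds_saved` per occurrence."""
    runs = OCCURRENCES_PER_YEAR[frequency] * HORIZON_YEARS
    return runs * seconds_saved * team_size / 3600

if __name__ == "__main__":
    # Reproduce one table cell: 1 minute saved, 5 times a day, one person.
    hours = budget_hours(60, "5/day")
    print(f"{hours:.1f} hours ≈ {hours / HOURS_PER_DAY:.1f} workdays")
    # -> 108.3 hours ≈ 13.5 workdays, matching the table
```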

Consider a change that saves 1 minute of compilation time for your developer team. Developers compile multiple times a day, but let's be very conservative and say five times a day. Looking at the table, one developer could invest up to 13.5 workdays and still break even. But if your team has ten developers, you could invest up to 135 full workdays to save that single minute. Once you count meetings and other unrelated work, that's essentially a quarter-long project for three engineers (plus a PM), or about $200k[1] over 5 years. Is it worth building or buying a solution? What else could the team build in a quarter?
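
Continuing with the hypothetical budget_hours helper from the sketch above, the team-of-ten numbers check out:

```python
# Same sketch as above: 1 minute saved, 5 times a day, ten developers.
hours = budget_hours(60, "5/day", team_size=10)
print(f"{hours:.0f} hours ≈ {hours / HOURS_PER_DAY:.0f} workdays")
# -> 1083 hours ≈ 135 workdays, before adding meetings and other overhead
```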

I bet you can think of several high-leverage potential solutions for your team or org using that table as a guide.

Time cost to maintain

After implementing the system, we must consider how much it costs to maintain, including the eventual disruption the new system causes. This cost is surprisingly overlooked or underestimated in most software projects, yet it is a tax the team pays continuously to keep the efficiency gains. So I've created a similar table for evaluating the maintenance cost.

How long can you spend per week on keeping the task more efficient before you're spending more time than you save?

| Time saved ↓ / Task frequency → | 5/day | daily | weekly | monthly | yearly |
|---|---|---|---|---|---|
| 1 second | 25 sec | 5 sec | 1 sec | | |
| 1 minute | 25 min | 5 min | 1 min | 14.3 sec | 1.2 sec |
| 1 hour | 3.1 days | 5 hours | 1 hour | 14.3 min | 1.2 min |
| 1 day | | 5 days | 1 day | 1.9 hours | 9.2 min |

Again, this evaluation benefits from scale. If you have a team of 10 developers, you can multiply the budget by 10 and still break even. And if the total goes above 40 hours per week, you can justify a full-time position dedicated to maintaining the new system. This maintenance cost tends to be underestimated, especially by small teams, which makes them overly optimistic about building solutions without considering the cost of ownership.

Let's take the earlier example of saving 1 minute, 5 times a day, for ten developers. Although it was worth 135 days to develop, you should spend at most about 4 hours per week maintaining it before it starts costing more than it saves. On top of the initial implementation cost, that's $20k/year[1]. So is it worth building or buying a solution? And how confident are you that maintenance will really stay under 4 h/week?
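
To sanity-check that 4-hour figure: the weekly maintenance budget is simply the time the system saves per week. Here's a small extension of the earlier hypothetical sketch (it reuses OCCURRENCES_PER_YEAR and WEEKS_PER_YEAR from there):

```python
def weekly_maintenance_budget_hours(seconds_saved: float, frequency: str,
                                    team_size: int = 1) -> float:
    """Max hours/week of upkeep before maintenance eats the weekly savings."""
    runs_per_week = OCCURRENCES_PER_YEAR[frequency] / WEEKS_PER_YEAR
    return runs_per_week * seconds_saved * team_size / 3600

print(weekly_maintenance_budget_hours(60, "5/day", team_size=10))
# -> ~4.2 hours/week for ten developers saving 1 minute, 5 times a day
```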

I could argue that this cost is still cheap (an average of 5 minutes per day per developer is not bad). But the main thing to keep in mind is that maintaining a software system is not a smooth, ongoing cost: it happens in bursts. There will be days or weeks without any maintenance, and then someone will spend hours or days fixing or upgrading the system during an incident. Meanwhile, other teammates may become less efficient while the system is being repaired. Most team planning practices have a tough time dealing with bursty events, so factor this into your evaluation.

Cost to run

Now, let's evaluate the non-people cost of running the solution your team has implemented. Here I'll focus on cloud compute and storage, as they're usually the most expensive component and the one people have the least intuition about.

I'll start with a simple question: how many cloud resources can you buy with the money you keep when you save one of your engineers 1 hour? Please take a minute to think about it and write down your guess. I'd love to hear people's intuitions here, as I know I was heavily underestimating the answer.

Let's do some back-of-the-envelope calculations. Using our base cost[1], we have a lovely $100/hour to work with. Checking S3's pricing page, we can store 2 TB for one month for about $46; with API calls and data transfer, let's round it to $60. For compute, we can get 2 vCPUs and 8 GB of RAM at about $0.10/hour on-demand, so the remaining $40 buys 400 of those instances for one hour.
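
Here's the same back-of-the-envelope in code, with the prices hard-coded as the rough, date-sensitive assumptions they are; check the current S3 and EC2 pricing pages before trusting them:

```python
# Cloud-equivalent of one saved engineer-hour, using the rough prices above.
ENGINEER_HOUR_USD = 100.0   # $200k/year over 2,000 working hours (see footnote)
S3_2TB_MONTH_USD = 60.0     # ~$46 of storage, rounded up for API calls/transfer
INSTANCE_HOUR_USD = 0.10    # on-demand instance with 2 vCPUs and 8 GB of RAM

instances = (ENGINEER_HOUR_USD - S3_2TB_MONTH_USD) / INSTANCE_HOUR_USD
print(f"{instances:.0f} instances for an hour = "
      f"{instances * 2:.0f} vCPUs and {instances * 8:.0f} GB of RAM")
# -> 400 instances = 800 vCPUs and 3200 GB of RAM, plus 2 TB in S3 for a month
```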

A system that saves one person on your team 1 hour can use 2 TB of storage for a month plus 800 vCPUs and 3200 GB of RAM for an hour and still break even.

Let this sink in for a moment. You could use those resources to run a large-scale computation or to build a cache, and I'm pretty sure you still wouldn't consume anywhere near that amount.


[1]: For the sake of round numbers, I'll assume you spend $200k/year on each person and that they work 50 weeks/year and 40 hours/week, which works out to $200,000 / 2,000 hours = $100/hour. There are a ton of business costs besides direct compensation, and we all know these numbers are wrong for many reasons, so adjust them for your case.