Computing Cost Paradigms Have Shifted Dramatically — Have Your Management Techniques?

You need to rethink how your company deals with computing costs

There has been a fundamental, technology-driven shift in computing costs from fixed and step costs to marginal costs. The trend has accelerated over the last five years, and it is forcing a change in the way many businesses make decisions about, and manage, their computing costs. It will eventually affect everything from pricing strategy to cost management, corporate reporting structures, security analysis, and financial reporting. So, basically everything. Unfortunately, most companies are managing computing costs with outdated paradigms, toolsets, and organizational structures, and are missing out on a lot of bottom-line profit in the process.

Quickly… a definition or two:

For the purpose of this piece, let’s assume that throughout the history of computing, all businesses have consumed essentially five elements in order to achieve their economic objectives: (1) processing power, or CPU; (2) memory, both persistent and volatile; (3) pipe, which includes all forms and methods of moving data from one place to another; (4) human resources; and (5) intellectual property. Let’s take all of these as a composite and, for convenience, call it “Computing,” “Business Computing,” or “Computing Resources.”

Now, quickly, three basic cost concepts. “Fixed Costs” remain unchanged within a reasonable range of a company’s expected output. “Step Costs” change, but do so in discrete increments or blocks within a range of expected outputs. “Marginal Costs” (or “Variable Costs”) vary directly with units of output.
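These three concepts can be sketched as simple cost functions. The figures below are arbitrary illustrations, not data from this piece:

```python
import math

def fixed_cost(units):
    """Fixed cost: unchanged across the expected range of output."""
    return 100_000.0  # e.g., an owned data center's annual base cost

def step_cost(units, units_per_server=10_000, cost_per_server=5_000.0):
    """Step cost: changes in discrete blocks (e.g., one server per 10k units)."""
    return math.ceil(units / units_per_server) * cost_per_server

def variable_cost(units, cost_per_unit=0.12):
    """Marginal (variable) cost: varies directly with units of output."""
    return units * cost_per_unit

for units in (5_000, 10_000, 25_000):
    print(units, fixed_cost(units), step_cost(units), variable_cost(units))
```

Notice that the step cost is identical at 5,000 and 10,000 units, then jumps; only the variable cost tracks every unit of output.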

Why Fixed Costs and Step Costs are becoming pure Variable Costs

  • The advent and growth of microservice frameworks, which means that Computing is no longer amalgamated in monoliths whose costs are hard to allocate. Instead, it is broken down into “domains,” each with specific labor, resources, and control structures associated with it.
  • The proliferation of SaaS-based services, which tend to charge in increments, sometimes tied to output (direct) and sometimes not directly tied to output but still variable (indirect).
  • The advent and growth of the cloud, which tends to price based upon utilization metrics that generally track the business processes that ultimately relate to output.
  • The advent and growth of serverless technology, which means that Computing is not always on but is launched for specific, identifiable tasks.
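Under pricing models like these, even a single serverless invocation has a computable marginal cost. A minimal sketch, using assumed, illustrative unit prices rather than any vendor’s actual rates:

```python
def invocation_cost(memory_gb, duration_s,
                    price_per_gb_s=0.0000166667,   # assumed rate, for illustration
                    price_per_request=0.0000002):  # assumed rate, for illustration
    """Marginal cost of one serverless invocation: you pay only for the
    memory-seconds actually consumed, plus a small per-request fee."""
    return memory_gb * duration_s * price_per_gb_s + price_per_request

# A 512 MB function running 200 ms, invoked a million times a month:
monthly = 1_000_000 * invocation_cost(0.5, 0.2)  # about 1.87 under these rates
```

Every unit of work carries its own identifiable cost; there is no idle server to allocate.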

A history of how this paradigm shift sneaked up on lots of companies

It’s instructive to note the history of how Computing Costs were handled in organizations. In the early days of Computing, companies tended to create separate computing departments. These were almost always cost centers that budgeted on an annual or quarterly basis against some expectation of the services that were supposed to be provided. Companies sometimes engaged in elaborate accounting and transfer-pricing schemes to allocate the cost of running these divisions to operating units based upon metrics generated from mainframe systems.

The advent of the PC resulted in the proliferation of Computing budgets in operating and staff units. These items were usually budgeted on a head-count basis and treated as overhead. The old-model Computing division remained in place for most of the Computing related to the operation of the company’s core business. In recent years, divisional budgets have expanded to include tablets, laptops, phones, and a host of SaaS services that perform business functions, but the deployment of servers is still generally budgeted in a Computing unit.

With the advent of virtual machines and the cloud, some companies started to put into place “Cost Review and Optimization” departments. In some cases these departments coordinate with the operating units to explain the impact of customer satisfaction goals on expenses. But the budget is still being managed fundamentally as a cost center, and it is placed below the line (i.e., not included in gross margin calculations) for financial statement purposes, nor is it included in any contribution margin calculation. The integration of cost information with the operating or revenue side of the business has remained weak at most companies, partly because of toolset limitations but also because most companies have not yet grasped the extent to which the change in the nature of costs must also change the underlying structures and mindset.

What do companies miss by continuing to think of computing as relatively fixed?

Three things: Profitability, Contribution Margin, Operating Leverage

There are many ways to increase profitability that depend on a thorough knowledge of your costs. The obvious decisions about improving profit by reducing costs rely on knowing your costs, but so do pricing decisions, go/no-go decisions, and product- or customer-paring decisions that ultimately impact the bottom line. For the first time, with the proper tools, you can directly link Computing with products, customers, branding, and more:

  • How much does it cost to process an order, with or without a coupon?
  • When the client asks for a special CPU intensive report, how much should you charge?
  • What if your developer has written an incredibly inefficient piece of code that makes a process cost ten times what it should? Do you know? Has that impacted a decision?
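One way to make questions like these answerable is to tag resource-usage events with a business identifier, such as an order ID, and roll them up against unit prices. A sketch with hypothetical usage records and placeholder rates:

```python
from collections import defaultdict

# Hypothetical usage records emitted by tagged services:
# (order_id, resource, quantity consumed)
usage = [
    ("order-1", "cpu_s", 1.8), ("order-1", "gb_transfer", 0.02),
    ("order-2", "cpu_s", 0.4), ("order-2", "gb_transfer", 0.01),
]

# Assumed unit prices, placeholders rather than real vendor rates.
unit_price = {"cpu_s": 0.000048, "gb_transfer": 0.09}

def cost_per_order(records, prices):
    """Roll tagged usage up into a marginal computing cost per order."""
    totals = defaultdict(float)
    for order_id, resource, qty in records:
        totals[order_id] += qty * prices[resource]
    return dict(totals)

print(cost_per_order(usage, unit_price))
```

The same roll-up works for coupons, reports, or customers; the only requirement is that the identifier travels with the workload.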

In this world, where companies are keeping more and more data and creating more and more event flows from a single event (an order, for example, now triggers CRM, inventory, logistics, and big-data processes), what does anything really cost? In this new marginal environment, it’s now knowable.

In order to know its contribution margin, a company needs to accurately identify its variable costs. Right now, a large block of costs is either allocated on some rather arbitrary basis or not allocated at all. However, with some of the tools now coming onto the market, you can have accurate cost accounting down to the method call:

  • How much does it actually cost in computing resources to process a claim, handle a customer complaint, or render a complicated report?
  • What is the cost impact of shortening the processing times promised in your SLA or customer quality assurances?
  • Can you locate your efficiency frontier between service levels and costs?

It’s now possible to start thinking about computing in this framework. Your analysts need to know where and how to get this data.
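As a simple illustration of the contribution-margin arithmetic, consider a sketch in which the claim price, labor cost, and per-claim computing cost are assumed figures, not data from this piece:

```python
def contribution_margin(price, variable_cost_per_unit):
    """Contribution margin per unit: what each additional unit
    contributes toward fixed costs and profit."""
    return price - variable_cost_per_unit

def contribution_margin_ratio(price, variable_cost_per_unit):
    """Contribution margin expressed as a fraction of price."""
    return contribution_margin(price, variable_cost_per_unit) / price

# Assumed figures: a claim billed at $40, with $6.00 of variable labor and
# materials plus a now-measurable $1.50 of marginal computing cost per claim.
cm = contribution_margin(40.0, 6.0 + 1.50)      # 32.50 per claim
ratio = contribution_margin_ratio(40.0, 7.50)   # 0.8125
```

The point is the second input: once computing cost is measurable per claim, it belongs in the variable-cost term instead of an overhead pool.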

A good understanding of your operating leverage is critical to knowing when you can put your ‘foot on the gas’, and whether you can survive a downturn. The changing cost paradigm changes your operating leverage, and it lets you answer questions like: “Do you really need the fifty fixed-cost servers, or can you run marginally on your favorite cloud provider?”

For the first time in history, you can look at the calculated operating leverage with a high degree of certainty that it will hold over a very wide range, because the costs associated with computing resources can now be both calculated and forecast.
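The “fifty servers versus cloud” question can be framed with the standard degree-of-operating-leverage formula: total contribution margin divided by operating income. A sketch with assumed figures:

```python
def degree_of_operating_leverage(units, price, variable_cost_per_unit, fixed_costs):
    """DOL = total contribution margin / operating income.
    Higher fixed costs mean higher DOL: profit swings harder with volume."""
    contribution = units * (price - variable_cost_per_unit)
    operating_income = contribution - fixed_costs
    return contribution / operating_income

# Assumed scenario: 100,000 units sold at $10 each.
# Owned, fixed-cost servers versus mostly-marginal cloud pricing.
fixed_heavy = degree_of_operating_leverage(100_000, 10.0, 4.0, 500_000.0)
cloud_heavy = degree_of_operating_leverage(100_000, 10.0, 8.0, 50_000.0)
```

With these assumed numbers the fixed-cost structure yields a DOL of 6.0 (a one percent change in volume moves operating income about six percent), while the cloud-heavy structure yields about 1.3, a far safer posture in a downturn.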

And, this matters because…

Over time, companies that have good cost controls, really understand how to price, and know how to accurately identify marginal costs are much better positioned to compete. It could make the difference between the company hitting its required rate of return (or bonus) and missing it.

The above is a chart of technology spend by industry. Arguendo, let’s assume that sustained EBITDA runs between ten and 25 percent of total revenue (source). This means that even the lowest-spending industry, Construction, could have over ten percent of its profit tied up in its technology spend, and just a ten percent improvement in that spend would drop an additional 1.5 percent of profit to the bottom line. That’s a difference worth making knowable.
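The arithmetic behind that claim can be checked in a few lines; the 1.5-percent-of-revenue technology spend for a low-spend industry is an assumption for illustration, standing in for the chart’s actual figure:

```python
# Assumed illustrative figures: technology spend at 1.5% of revenue (a
# low-spend industry) and sustained EBITDA at 10% of revenue, the low
# end of the range in the text.
revenue = 100.0
tech_spend = 0.015 * revenue   # 1.5
ebitda = 0.10 * revenue        # 10.0

share_of_profit = tech_spend / ebitda   # 0.15: over ten percent of profit
savings = 0.10 * tech_spend             # a 10% improvement in tech spend
profit_lift = savings / ebitda          # 0.015: about 1.5% more profit
```

So a modest efficiency gain in a line item that looks small against revenue is anything but small against profit.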

Mr. Kesselman is CTO and Board Chair of, the observability and actionability platform that focuses on reducing overall cloud deployment costs, tying operational revenue to computing costs for better decision making and cost accounting, and allowing companies to trigger events based upon what is actually happening on their platforms in relation to computing expenses and business metrics.
