Automation cost drivers are more relevant (Part Two)

The previous blog explained why a business case is necessary even for strategic automation projects. Calculating Total Cost of Ownership (TCO) from day one is not straightforward, as the deliverables of agile projects are not frozen. This is why it is important to understand how each cost component scales: the cost drivers sometimes matter more than the costs themselves.

In this blog, let’s focus on two cost components that drive the TCO and its evolution: the automation/orchestration software cost and the service modeling cost. Their scalability plays a key part in the total cost of the automation roadmap.

The right software pricing scalability

The obvious project costs deal with the core automation/orchestration software. For Commercial Off-The-Shelf (COTS) software, the costing is based on well-documented license fees. Yet the devil may be in the details if the license fees scale with the number of “deployed units”, e.g. number of VNFs, network elements or VPNs. The automation sponsor then has to forecast the number of units for the coming years, and these forecasts become the main drivers when estimating multi-year license fees. Issuing such forecasts, and maintaining them for each automation initiative, is a major challenge for telcos/service providers/CSPs. This is why one should think twice before deploying an orchestration software whose license is not volume agnostic.

An organization usually has a very good knowledge of its future automation use cases. Hence the project costs are much more deterministic if the license fees scale with the number of automation use cases. Organizations should favor this model, as business analysts can then more easily predict the costs over time and allocate them.
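The contrast between the two pricing models can be sketched numerically. All figures, function names and scenarios below are hypothetical illustrations, not actual vendor pricing:

```python
# Illustrative comparison of two license pricing models (all figures hypothetical).

def per_unit_license_cost(units_forecast, fee_per_unit):
    """Fees scale with deployed units (VNFs, network elements, VPNs...)."""
    return [units * fee_per_unit for units in units_forecast]

def per_use_case_license_cost(use_cases_per_year, fee_per_use_case):
    """Fees scale with automation use cases, which are easier to forecast."""
    return [n * fee_per_use_case for n in use_cases_per_year]

# A volume forecast is uncertain: actual deployments may diverge widely,
# so the multi-year fee estimate spreads over a wide range.
units_low = [100, 300, 900]     # conservative 3-year forecast
units_high = [100, 600, 2500]   # aggressive 3-year forecast
print(per_unit_license_cost(units_low, 50))   # [5000, 15000, 45000]
print(per_unit_license_cost(units_high, 50))  # [5000, 30000, 125000]

# The use-case roadmap is largely under the organization's control,
# so the cost trajectory is far more deterministic.
use_cases = [2, 4, 6]
print(per_use_case_license_cost(use_cases, 20000))  # [40000, 80000, 120000]
```

The point of the sketch: with per-unit pricing, the year-three fee already varies almost threefold between the two forecasts, while the per-use-case fee follows a roadmap the organization itself decides.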

Special case: Cost driver for open-source automation

Alternatively, the core automation software may be open-source software (OSS). Open-source software will not work out of the box in the operator’s environment and will require further development and/or customization.

In this case, its TCO includes operational expenditures (OPEX) rather than capital expenditures (CAPEX): for instance, how many Full Time Equivalents (FTEs) work on OSS testing, validation and internal development/customization. Skilled DevOps engineers usually come with high labor costs, and they may only work part-time on OSS support. When they leave the organization, there is additional OPEX for handover and training of new staff to ensure continuity. The same applies when working with system integrators.

So whoever is in charge of project costs cannot ignore these significant expenses. Yet it is complicated to estimate them without using “unpopular” resource tracking systems such as timesheets. Long story short, identifying the OPEX related to OSS is quite challenging. However, it is a must.
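Even without timesheets, a rough model helps size this OPEX. The sketch below uses entirely hypothetical headcounts and loaded labor rates; the structure, not the numbers, is the point:

```python
# Rough yearly OPEX estimate for operating open-source automation software.
# Headcounts, labor rates and handover costs are hypothetical placeholders.

def oss_yearly_opex(fte_devops, loaded_cost_per_fte,
                    handover_events=0, handover_cost=0.0):
    """OPEX = ongoing engineering labor + one-off handover/training costs
    incurred when staff (or a system integrator) is replaced."""
    return fte_devops * loaded_cost_per_fte + handover_events * handover_cost

# 2.5 FTEs of part-time DevOps effort at a 150k loaded cost,
# plus one staff departure triggering handover and training:
print(oss_yearly_opex(2.5, 150_000, handover_events=1, handover_cost=40_000))
# 415000.0
```

Note that part-time involvement is modeled as fractional FTEs, which is exactly the quantity that is hard to capture without resource tracking.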

Recycling matters, again

Automation projects are always a bit unique, even if they share common objectives. They usually require some specific coding, a.k.a. service modeling. Indeed, each organization has its own way of delivering services: that is how it distinguishes itself from the competition. The service modeling can be done by internal resources, an external system integrator or a vendor.

Even if there is no generic definition of a service, service modeling cannot be a greenfield exercise for each new automation initiative. Otherwise this would result in a “flat” unit cost per project, whereas efficient automation projects generate economies of scale. A top-notch Domain Specific Language (DSL) offers code re-usability to avoid this pitfall: through abstraction and clean interfaces, it makes it easy to “recycle” a significant percentage of the service modeling code. This capability is the major driver of economies of scale. DevOps teams can achieve between 30% and 50% savings on subsequent automation use cases when they belong to the same domain.

Reuse of existing building blocks is key to achieve economies of scale.
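The effect of reuse can be quantified in a small sketch. The 40% saving used below is an illustrative value within the 30–50% range cited above, and the base cost is a made-up figure:

```python
# Sketch of economies of scale from service-model reuse.
# Figures are illustrative; the blog cites 30-50% savings per reused model.

def modeling_cost(base_cost, n_use_cases, reuse_saving=0.4):
    """First use case is a greenfield effort; each subsequent use case
    in the same domain recycles code and costs (1 - reuse_saving)
    of the base modeling effort."""
    if n_use_cases == 0:
        return 0.0
    return base_cost + (n_use_cases - 1) * base_cost * (1 - reuse_saving)

flat = 100_000 * 5                      # no reuse: "flat" unit cost per project
with_reuse = modeling_cost(100_000, 5)  # 100k + 4 x 60k
print(flat, with_reuse)                 # 500000 340000.0
```

Over five use cases in the same domain, reuse cuts the service modeling budget by roughly a third, which is why the DSL's recycling capability, not the first project's cost, drives the roadmap's TCO.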

In addition, the DSL should address the provisioning of heterogeneous resources, e.g. physical equipment, cloud services or VNFs. Multi-domain support is a key enabler for driving project costs down, as developers can recycle code more easily with a single DSL for the entire automation roadmap, and it reduces learning and maintenance costs. The same applies to vendor-agnostic support: the DSL should allow organizations to select best-of-breed equipment and resources in order to minimize CAPEX.

Hence, from day one, choose an automation vendor whose DSL supports multi-domain automation and has no ties to any particular equipment manufacturer.

Our next blog will cover the remaining categories of automation costs: integration costs, CI/CD roll-out and risk.