Heeding the call: Old-line carriers expand virtualization techniques

Telecommunications carriers are expanding their use of virtualization to make their networks behave more like datacenters, in an effort to build more cost-effective, software-based networks. For an industry that is typically conservative, this is a big step forward.

One reason for this change is that telecom companies are trying to claim a piece of the continuously growing IoT pie. That shift has some of them searching for unified control of their networks.

According to Ross Cassan, director of product marketing for mobility infrastructure at Spirent Networks, “The carriers are looking at these IoT-like services, and at the same time looking at how and when they transition off of their 2G networks, and virtualization is an absolute key technology. We are seeing all of our carrier customers really come to grips with that.”

Researchers additionally suggest that telecom carriers would benefit from adopting “a new operational and delivery model, powered by virtualization technologies, such as software-defined networking/network function virtualization (SDN/NFV), which are driven by business and cost efficiency opportunities.”


The server side


Spirent Networks’ platform emulates a Narrowband IoT (NB-IoT) network from end to end, exercising the protocols and silicon on both the device side and the network core. That lets a carrier verify whether the network can actually handle the IoT traffic it expects and, if so, makes it simpler to commit to service-level agreements (SLAs) it knows can be met.
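As a rough illustration of that last step, the sketch below checks measured results from an emulated run against proposed SLA targets. This is not Spirent’s tooling or API; the field names, thresholds, and numbers are hypothetical and only show the shape of the decision.

```python
# Illustrative sketch only -- not Spirent's actual product or API.
# It assumes results from an emulated NB-IoT run are available as simple
# measurements and checks them against hypothetical SLA targets.

from dataclasses import dataclass

@dataclass
class SlaTarget:
    max_latency_ms: float      # worst-case latency the SLA would promise
    min_success_rate: float    # fraction of device reports that must arrive

@dataclass
class EmulationResult:
    p99_latency_ms: float      # 99th-percentile latency measured in the emulated run
    success_rate: float        # delivered reports / attempted reports

def sla_is_viable(result: EmulationResult, target: SlaTarget) -> bool:
    """Return True if the emulated network run stayed within the SLA targets."""
    return (result.p99_latency_ms <= target.max_latency_ms
            and result.success_rate >= target.min_success_rate)

# Example: a metering workload measured in emulation vs. a proposed SLA.
print(sla_is_viable(EmulationResult(p99_latency_ms=850.0, success_rate=0.997),
                    SlaTarget(max_latency_ms=1000.0, min_success_rate=0.99)))  # True
```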

The IoT business model works well for carriers because, through virtualization, they can section off a portion of their network for particular customers and applications, such as utilities and power meters, or banks processing transactions.
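To make the idea concrete, here is a minimal, hypothetical sketch of what such carved-out slices might look like as configuration records. None of the names, customers, or numbers come from the article; they only illustrate reserving virtualized capacity per customer and application.

```python
# Hypothetical slice descriptors -- all identifiers and values are invented
# to illustrate dedicating a virtualized portion of the network to one
# IoT customer and application.

iot_slices = [
    {
        "slice_id": "utility-metering",
        "customer": "regional-power-co",      # hypothetical customer
        "radio_tech": "NB-IoT",
        "reserved_bandwidth_mbps": 50,        # small, predictable telemetry traffic
        "max_devices": 2_000_000,
        "sla": {"availability": 0.999, "max_latency_ms": 2000},
    },
    {
        "slice_id": "bank-transactions",
        "customer": "example-bank",           # hypothetical customer
        "radio_tech": "LTE-M",
        "reserved_bandwidth_mbps": 200,
        "max_devices": 50_000,
        "sla": {"availability": 0.9999, "max_latency_ms": 250},
    },
]

# A virtualized core can spin the functions behind each slice up or down on
# shared servers instead of dedicating custom hardware to each customer.
for s in iot_slices:
    print(f"provisioning slice {s['slice_id']} for {s['customer']}")
```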

Another important part of this puzzle is M-CORD, or Mobile Central Office Re-architected as a Data Center, which industry gatherings such as the NFV (network function virtualization) Congress are using “to iron out how equipment makers, standard bodies, and carriers will get this very seismic shift in telco network architecture accomplished.”

This is made possible by the combination of a continuous push of new cloud-based software and advances in the semiconductor industry. Raj Singh, vice president and general manager of the network and communication group at Cavium, explained that “the semiconductor industry crossed a threshold in terms of compute power and latency, which makes virtualization and intelligence across the network possible.”

When cores from ARM and Intel became able to support hypervisors, carriers realized they could scale virtualized functions instead of building custom hardware.
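For readers who want to see what “able to support hypervisors” looks like from the server side, the Linux-only sketch below checks for the usual signs of hardware virtualization support. The paths (/proc/cpuinfo, /dev/kvm) and the x86 CPU flags (vmx, svm) are standard on Linux; the rest of the code is an illustrative assumption, and ARM platforms report their virtualization support differently.

```python
# A minimal sketch (Linux-only) of checking whether a server exposes hardware
# virtualization, which is what lets carriers run network functions under a
# hypervisor instead of on custom hardware.

import os

def hw_virtualization_hints() -> dict:
    hints = {"kvm_device": os.path.exists("/dev/kvm")}  # present when the KVM module is loaded
    try:
        with open("/proc/cpuinfo") as f:
            cpuinfo = f.read()
        # x86 flags: vmx = Intel VT-x, svm = AMD-V. ARM systems report
        # virtualization support differently, so absence here is not conclusive.
        hints["x86_vmx"] = " vmx" in cpuinfo
        hints["x86_svm"] = " svm" in cpuinfo
    except OSError:
        pass  # e.g. non-Linux host; leave only the /dev/kvm hint
    return hints

print(hw_virtualization_hints())
```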

ARM created its Intelligent Flexible Cloud, or IFC, to “[bring] together platforms based on diverse, scalable, highly-integrated system-on-chips with heterogeneous compute capabilities supported by a common layer of enabling software and distributed network intelligence.”

It accomplishes this by building on software-defined networking (SDN) and network function virtualization (NFV) to create a more elastic and resilient network and to significantly reduce power consumption.

The network architecture must change now because of the gigabit-per-second capacity needed for 5G networks. Singh explains, “That’s 20 times the capacity per sector — 100 Gbps, in each direction, and not by two. It’s by eight. That is impossible…Suddenly that becomes very important if you can virtualize the network for the radio access.”

Cellular infrastructure

The cellular infrastructure is changing along with the servers of telecom networks and, according to many, it’s an extremely dynamic segment.

Bob Monkman, senior segment marketing manager at ARM, elaborated that, “You need a mix of the right core, the right interconnect [depending on whether the ASIC is for the Antenna, basestation (BTS), or mobile edge computing (MEC) boxes]. As functions disaggregate from the BTS, real time offload and feature rich APIs are key.”

ARM’s many system-on-chip (SoC) licensees deliver a very wide range of core counts, from two to more than 100 cores per device, depending on where the silicon sits in the network, the compute power required, and the use case.

Carriers are increasingly deploying new ARM-based cellular equipment, transitioning away from earlier MIPS- and POWER-based designs. The transition is helped along by software reuse across platforms and by increasingly generic compute requirements.

In addition, there are now faster development and deployment cycles, with companies typically using frameworks such as CORD to handle them.

Monkman explained how ARM’s involvement with the open source software community Linaro, and “more specifically helping to define the right software interfaces to allow network offload and ensuring NFV compliance for these containerized/virtualized environments,” reflects the shift under way in cellular infrastructure.

Are all telecom networks changing?


Many claim that carriers only wish to decommission their 2G infrastructure as necessary. Virtualization, however, can help them gain customers who are looking for IoT services and scale up only as needed.

Others, however, report that the carriers that have deployed virtualization so far have found it expensive and time-consuming rather than delivering the desired effect. Why? “Because without standards there can be no certification testing; without certification testing there can be no interoperability; and without interoperability, service providers are in the same place they were in the 1990s: locked into buying overpriced proprietary solutions from incumbent equipment vendors.”

According to Steve Saunders, CEO of Light Reading, vendors are also failing to deliver the solutions these service providers need. “Today,” he says, “it takes an average of six months for Tier 1 service providers just to get the NFVi code from the industry’s leading infrastructure vendors to work.”

Clearly, that is not a functional solution. However, with companies working to provide these services so carriers can take full advantage of virtualization, this is poised to change in the future.

Large telecommunications companies are beginning to convert their infrastructure to rely more on virtualization, although it isn’t clear how soon it will be until this is a widespread change among carriers. Will they find it to be too difficult and time-consuming, or will the telecom companies see the same advantages that other IT sectors have discovered when it comes to virtualization?

