Many organisations have spent the past couple of years with some, most or all of their staff working remotely, most likely from home.
In doing so they have learned a thing or two about the value of providing good user experiences and fast access to content under challenging conditions.
While restrictions have eased enough that some days back in the office are now possible, organisations are still being asked to enable hybrid working scenarios - where staff work some days in the office and other days from home - and to provide an IT environment and application experience that is stable, performant and consistent.
As the Productivity Commission said recently, “it is inevitable that more Australians will work from home” into the future.
That is a challenge.
Most organisations are used to managing the delivery and performance of applications to a single office location, or to a small number of them. Once the number of ‘office’ locations multiplies - effectively, every ‘home office’ becomes a new location to which applications and data must be delivered - the challenges for organisations and their IT and network teams multiply as well.
Organisations become much more reliant on the public internet for application performance and delivery. They may have less control over the paths that traffic takes between head office - or wherever an application is hosted - and the many “home offices” that now form part of the reach of their extended corporate network.
These challenges are bringing organisations back to the drawing board on network design and configuration.
As they chase a greater degree of end-to-end control over the connection between head office and the home office, organisations are revisiting past architectural decisions and redesigning their corporate networks to fit the permanent change in the way that employees access applications and perform their work.
Of course, there are two sides to application performance and delivery: the applications themselves - how they are designed and coded - and the networks they are delivered across.
Some observers - in the network space, no less - make the case that application creators and hosts themselves should have more urgency to modify their software so that it performs better for decentralised or highly distributed organisational structures.
That modernisation work may cover the addition of adaptive capabilities, for example, that enable an application to change its behaviour based on ambient conditions it encounters as it operates across portions of the public internet as well as private networks.
We have seen pockets of this in the wild, notably in the consumer space, where video streaming services like Netflix and YouTube adapt stream quality in real time to the available bandwidth.
It is not a stretch of the imagination to consider this kind of network-aware performance might come to a much broader range of applications, particularly those focused on the enterprise space.
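To make the idea concrete, the adaptive behaviour described above can be sketched in a few lines. This is an illustrative example only - the bitrate ladder, safety margin and smoothing weights are invented for the sketch, not taken from any real streaming service - but it shows the basic pattern: measure throughput, smooth it so a single spike doesn't whipsaw quality, and select the highest rendition that fits.

```python
# Hypothetical adaptive-quality sketch: pick the best rendition a client
# can sustain given recent throughput measurements.

RENDITIONS_KBPS = [235, 750, 1750, 4300, 15000]  # invented bitrate ladder
SAFETY_MARGIN = 0.8  # use only 80% of measured bandwidth, to absorb jitter


def smooth(samples_kbps: list[float]) -> float:
    """Exponentially weighted average of throughput samples, so one
    momentary spike or dip does not cause constant quality switching."""
    estimate = samples_kbps[0]
    for sample in samples_kbps[1:]:
        estimate = 0.7 * estimate + 0.3 * sample
    return estimate


def choose_rendition(measured_kbps: float) -> int:
    """Return the highest rendition bitrate that fits within the
    safety-adjusted bandwidth budget; fall back to the lowest rung."""
    budget = measured_kbps * SAFETY_MARGIN
    viable = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(viable) if viable else min(RENDITIONS_KBPS)
```

An enterprise application could apply the same loop to payload sizes, sync frequency or image resolution, degrading gracefully on a congested home broadband link rather than stalling outright.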
That being said, it relies on the application providers being pressured enough by organisations to change.
As those changes may be a while coming, organisations would be well-advised to focus more on what they can control, and that brings us to the network itself.
‘Slow’ and ‘down’ are equally feared
Application latency in the as-a-service era is, of course, not a new problem.
Some attempted to encapsulate the problem from a networking perspective back in 2016 when they coined the phrase, ‘slow is the new down’.
Six years have elapsed, and the phrase is still in use, though it’s not entirely accurate.
Outages and downtime remain a legitimate and real concern for organisations that use a large amount of cloud-based and third-party hosted applications.
Indeed, it may be more accurate to say that in the as-a-service era, application and network performance challenges have multiplied.
Latency and downtime are now on a par with each other as operational concerns. Ideally, both are avoided or, at minimum, mitigated as far as possible.
A new network approach
Organisations are advised to take back control of as much of their network as possible.
Engaging the services of a national network aggregator like Somerville can assist in this regard. As an aggregator, our approach is to strive for control of all the different network elements - carrier infrastructure, data centre links, IP transit, direct peering and service access - that sit between a customer site and the application or data being accessed. We can then present that back to the customer organisation as a single end-to-end connection - “one single string”. Regardless of its composition, we can centrally manage and support it as well.
Over and above connectivity, Somerville can also layer SD-WAN on top to optimise how traffic is routed across the different underlying links. We also have our own private cloud infrastructure, where a customer may be able to extract a bit of extra performance by hosting a database or application workload there instead of in a public cloud. There are many options available to improve network design and configuration, and there is no one-size-fits-all approach in this regard.
What’s important is that, under a network aggregation model, the orchestration of all the different pieces and abstraction of complexity means that customer organisations can stay focused on what is core to them, while handing off the administrative burden of maintaining that business-critical connectivity. And right now, what is important to many customer organisations is being able to ride the broader economic rebound and hunt out new growth opportunities, while delivering on key business-as-usual requirements.
Always-on, fast and uninterrupted connectivity services are crucial. Somerville’s network aggregation model is the key to controlled, high-performance experiences for the redesigned corporate network.