Written by Pamela Weaver
Networking has always been about capacity. The amount of data you can usefully shift is only as good as the pipes carrying it, and with traffic volumes growing exponentially, it’s fair to say that backbone providers the world over are struggling to cope.
Research from Deloitte points the finger of blame firmly at the growing transmission of video files, which account for the bulk of the sharp acceleration in traffic growth rates we’ve seen over the past couple of years. The proliferation of home-grown consumer content depicting everything from videos of cats on skateboards to teenage boys pulling ninja moves in their bedrooms is joining with the more serious transmission of services and data to force ISPs and other backbone providers into new ways of thinking and business models that can make retail broadband provision sustainable.
Research by Cisco indicates that, in traffic terms, user-generated content equalled commercial video in 2008; the fact that the latter accounted for less than 25% of all video streams could be viewed as worrying news for those with more serious intentions online. More challenging is the prediction that 90% of all consumer IP traffic will, by 2012, be video-on-demand/IPTV/Internet video. According to Cisco, by the end of 2008, Internet video sites YouTube and Hulu were generating more than twice the traffic that crossed the entire American backbone in 2000. The company projects a CAGR of 46% in all IP traffic between 2007 and 2012 – almost doubling every two years. Global business traffic is expected to grow at a CAGR of 35% in the same period, with the developing markets and Asia expected to demonstrate the fastest growth in this sector. Mobile data traffic adds to the strain – according to Cisco, this is expected to roughly double annually between 2008 and 2012.
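For those who like to check the arithmetic, a few lines of Python show where the “almost doubling every two years” claim comes from – only the 46% CAGR below is Cisco’s; the starting volume is an arbitrary illustration:

```python
# Compound growth at Cisco's projected 46% CAGR for all IP traffic, 2007-2012.
# The starting volume of 1.0 is purely illustrative.

def project(base, cagr, years):
    """Volume after compounding annual growth at the given rate."""
    return base * (1 + cagr) ** years

for year in range(6):
    print(f"Year {year}: {project(1.0, 0.46, year):.2f}x")

# (1 + 0.46) ** 2 is roughly 2.13, i.e. traffic a little more than doubles
# every two years - hence "almost doubling every two years".
```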
Success breeds... logjams
At first glance, all this traffic would appear to spell boom time for service providers but the reality is that the networked, converging world is becoming a victim of its own success. Better networks have fed the growth of new services and applications with voracious appetites for bandwidth. From America to the UK and Australia to South Africa, backbone providers are struggling to cope with the massive growth in traffic.
Locally, service providers have fought long and hard to be able to compete on an equal basis with the incumbent Telkom, only to find that the cost of rolling infrastructure out is simply too great – as a result, they’re now looking to partner with others to share the cost of rolling out fibre networks. As Accenture, pointing mainly to the American market, has said, “The business case for fibre was made during a period of consumer prosperity and the current recession shatters many of the underlying assumptions.” It’s a reality that applies equally – and more – to developing markets such as our own; the challenge locally is one of pure infrastructural necessity balanced against affordability in a country with relatively low disposable income and high unemployment.
South African business is all too painfully aware of the connection between network infrastructure and success in a global, converging economy; the philosophy of “if you build it, they will come” places local providers in a position where tough decisions have to be made in hard economic times. Local loop unbundling (LLU) and the landing of new undersea cables should help free up infrastructure in the medium term, but more traffic over the same pipes will bring new problems. As recent Gartner research indicates, business migration to Web-enabled and Web services application environments has substantially increased the dependencies between applications and the network. According to Gartner, “Although these environments were developed to ease the integration of remote users and resources into a more distributed business process, the new underlying protocols are not optimised for network performance – in reality, they put significant new burdens on the network. Too many businesses deploy new applications without a complete understanding of how these applications will work in a complex and distributed network environment.” So it’s not just the pipes that are too skinny, it’s the cars we’re trying to drive along the network road that also need re-thinking.
Some sun on the horizon
Not that the horizon is without sunshine: Neotel, for example, has leveraged its Tata Communications links to secure SAT-3/SAFE cable capacity, operating from Melkbosstrand to Portugal (SAT-3) and from Mtunzini to India (SAFE). Co-location drives capacity and the SNO is reporting in excess of 1Gbps in a scenario in which the telco has complete technical and commercial independence. It’s not all rosy in the SNO’s garden, however: there have been user complaints about performance as the company seeks to meet subscriber demand with capacity.
On the broader downside, the cost of incoming bandwidth – largely a legacy of the incumbent’s previous monopoly on SAT-3 – continues to raise the ire of those looking to compete in the networking space. World Wide Worx MD Steven Ambrose told delegates to ITWeb’s recent SaaS conference that, even with the new cabling, national infrastructure continues to be a problem for SA. Ambrose pointed to the twin challenges of regulation and infrastructure as key bugbears in a backlog that he predicts will take 5-10 years to clear.
LOWER COSTS, NEW CHALLENGES
Deloitte research points to problems inherent in low-cost bandwidth versus increasing wholesale transmission costs, and the impact that this could have on ISPs, who are likely to have to increase tariffs or even change business models if they are to make a sustainable go of the retail broadband market. Small wonder, then, that many providers are chasing convergence in the direction of consolidation, with roles melding all the time: fixed-line-meets-cellular-meets-insert-technology-of-choice. Locally, our tiered service provider infrastructure has seen those at the bottom of the pile (tier three, where providers essentially repackage another company’s bandwidth and infrastructure services) traditionally struggle to compete. Changing customer expectations, particularly on the quality front, have changed all that – technical prowess and infrastructure footprint are the kinds of added extras that many consumer users are happy enough to go without.
According to Ipoque, ISPs globally often struggle to cope with disproportionately high bandwidth use by a relatively small portion of subscribers. This is largely driven by peer-to-peer file-sharing (P2P), video streaming and large file downloads from hosting services (DDL). The challenge is to deal with infrastructure costs while facing the reality that these are the very services that attract subscribers in the first place. Excessive use, according to Ipoque, hits service quality for important interactive applications such as browsing and Internet telephony; increasing user intolerance for latency is adding to the challenge. Recent moves by MWEB to close its uncapped bandwidth trial, partly due to excessive use (rumours of users making it past the 1TB post abound), indicate that the challenges are unlikely to be any different locally.
South African users, long afflicted with the burden of the bandwidth cap, will no doubt cast a wry eye on the furore that the mere suggestion of same in America has caused. A poll conducted by MyBroadband found that 90% of local broadband users said they required more than 3GB of bandwidth per month; with global web sites increasingly offering embedded video and high-resolution imagery, you don’t have to be a streaming video addict to feel the pinch. That some service providers are offering usage limits measured in megabytes indicates how far we have still to go. If those offering services want to see increased uptake, the infrastructure (and associated costs) will have to come to the party. Chicken, meet egg...
The large-scale adoption of IP-based technologies has turned WAN management and service requirements on their head. If it’s hot in the networking kitchen, there aren’t many organisations rushing for the door – the majority seem well-equipped to face the challenge of carrying converged services without having to engage in a full-scale rip-and-replace infrastructural investment. What’s happening in the ground outside your premises, however, is another matter: about that, you cannot do very much.
Own your own
Pressure to provide aside, Gartner research indicates that owning infrastructure may not be all it’s cracked up to be, stating that traditional telecoms carriers “can no longer rely on conventional competitive tactics such as price cuts, promotions and basic product bundling” to maintain an edge in the consumer space. New-generation players such as Apple and Nokia have, according to Gartner, got a better understanding of consumers and are adopting new business models that are forcing carriers to reassess their approach to service delivery.
According to Accenture research, carriers need to get jiggy with the reassessment as well.
The consultancy firm points the finger at the “many carriers” that “appear to be overly content with selling legacy services, attempting to recoup sunk costs for their existing communications infrastructure.” That is not exactly the right way to go about meeting consumer expectations. Locally, those used to playing on a more uneven field may find they’re a little too complacent when the additional capacity hits the SA market – and this applies as much to the incumbent as to those who have grown used to paying perhaps too much attention to their biggest competitor and the faults of our telecommunications system. As Accenture points out, “holding onto the past for too long creates inertia among carriers and impedes the transformation process...”
In the face of new competition such as Nokia, traditional carriers are at least looking to transform themselves, mainly by exploiting the seemingly endless appetite for content.
Locally, even Telkom’s stated intention to outsource the running of its networks (one of the biggest divisions in the telco) underlines the importance of offering converged services over direct ownership of infrastructure for organisations strategising to escape the fate of “dumb pipe” provider. Globally, it seems that most infrastructure owners see themselves as at least matching the control over content and services their partners are currently exercising. How many of them will genuinely be able to mix it in what, until recently, was nowhere near a core competency remains to be seen.
Locally, Neotel and the other non-incumbents are trying to operate on two fronts, rolling out infrastructure and playing catch-up before offering services that users elsewhere in the world currently take for granted. It’s not one-way traffic, however: unlike Telkom, Neotel isn’t faced with legacy infrastructure in dire need of an upgrade. Its proposed next-generation network, already underway, will be ready to offer converged services – it’s only a matter of when the provider itself can roll these out. Telkom, all too aware of its competition, is making similar moves into the converged services sector. According to Accenture, networks have the potential to walk a similar line to that of middleware – a “glue” that brings everything together initially but gets left behind in its original form as technology finds newer, more direct ways to bring things together.
ETHERNET SETS THE STANDARDS
Ethernet only really took off as a LAN standard with the arrival of 10Mbps Ethernet (IEEE Standard 802.3), known as 10BASE-T. This has since evolved into “Fast Ethernet”, or 100BASE-T, the standard connection for many corporate LANs, operating at 100Mbps. Along with its low deployment and operational costs, the key to Ethernet’s success is that devices and networks interoperate regardless of which variant is in use (Fast Ethernet itself comes in three flavours: 100BASE-T4, 100BASE-TX and 100BASE-FX).
Most Ethernet cards and switch ports support multiple speeds, using auto-negotiation to set the speed and duplex for the best values supported by both connected devices. If auto-negotiation fails, a multiple-speed device will sense the speed used by its partner, but will assume half-duplex.
A 10/100 Ethernet port can therefore support 10BASE-T and 100BASE-TX; a 10/100/1000 port supports 10BASE-T, 100BASE-TX and 1000BASE-T – today’s Gigabit Ethernet. According to the industry organisation Ethernet Alliance, the technology’s popularity is such that almost all of today’s Internet traffic either originates or terminates with an Ethernet connection.
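For the curious, it is easy to see what auto-negotiation has actually settled on for a given machine: the Linux kernel exposes the result through sysfs. The snippet below is a minimal, Linux-only sketch – the interface name “eth0” is an assumption, and the same information is available from the ethtool utility:

```python
# Read the negotiated link speed and duplex for a network interface from sysfs.
# Linux-only; the interface name is an assumption - adjust to suit your machine.

from pathlib import Path

def link_status(iface="eth0"):
    base = Path("/sys/class/net") / iface
    try:
        speed = (base / "speed").read_text().strip()    # Mbps, e.g. "100" or "1000"
        duplex = (base / "duplex").read_text().strip()  # "full" or "half"
    except OSError:
        return "link down or not readable"
    return f"{speed}Mbps, {duplex} duplex"

print(link_status("eth0"))
```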
Voice and video services are joined in the bandwidth gridlock by the presence of cellular providers seeking respite from their own traffic issues by offloading onto fixed lines using dual-mode phones. Gigabit Ethernet (GbE) offers a significant upgrade path, allowing for a nominal speed of one gigabit per second. This can be delivered over fibre-optic cable, but most current installations use the more traditional – and significantly cheaper – twisted pair cabling.
Increasing numbers of desktop computers and most network cards already support GbE.
Ready the technology may be, but all indications are that our wallets are not yet deep enough to take the GbE plunge. For many companies, Fast Ethernet is still under-utilised in-house; where it is being pushed harder, it’s more than up to the task of streaming DVD-quality video and voice to many desktops. Where GbE is being implemented is in high-bandwidth networks, such as large campus networks in the financial services sector.
Even if it is overkill for some organisations, Intel has pinpointed some of the key benefits of GbE:
• Speed: As the name indicates, GbE is 100 times faster than regular 10Mbps Ethernet and 10 times faster than 100Mbps Fast Ethernet
• Fatter pipes: Increased bandwidth for higher performance and elimination of bottlenecks
• Double bandwidth: Full-duplex capacity allows for the effective bandwidth to be virtually doubled
• Multi-Gigabit speeds: Aggregate bandwidth to multi-gigabit speeds using Gigabit server adapters and switches.
• Quality of Service: QoS features eliminate jittery or distorted video and sound.
• Compatible: Fully compatible with the large installed base of Ethernet and Fast Ethernet nodes.
Up until relatively recently, most desktop machines did not really have the processing capacity to take advantage of GbE and the applications that could use it, but recent developments have seen dual- and quad-core processors become more mainstream, which should drive uptake further. In the meantime, inhibitors include:
• Convergence: Internet Protocol (IP) telephones are best rolled out using Power over Ethernet (PoE) which, while available on GbE, adds a premium to already expensive equipment.
• Existing infrastructure: Many organisations will find that their existing cabling infrastructure does not fully support GbE. In theory, existing cable can carry it, but older installations are likely to be of too poor a quality or utilise cable that is too long for high-quality GbE transmission.
• External bandwidth: Most local businesses are at the mercy of external bandwidth available to them. Unless bandwidth prices decline by orders of magnitude, the bottleneck will remain, making GbE a solution for a problem it cannot control.

10-Gigabit Ethernet
This is currently the fastest Ethernet standard available, offering 10 times the speed of GbE, which it is likely to replace as soon as the latter proves too sluggish. To really get the best out of it, 10GbE should travel over optical fibre connections, but new standards issued by the Institute of Electrical and Electronics Engineers (IEEE) mean traditional copper is possible as an alternative medium.
Since 2007, transmission distances have almost doubled to reach 100 metres, but expensive hardware, switches and interfaces – and the power they require – could continue to slow uptake. For those watching the watts, PHY chips for 10GbE typically chow 8-10 watts per port – far more than their less pacey cousins, which use anything between 1 and 5. As with pretty much all technologies, those with the patience (and time) to wait it out will normally find prices dropping to more acceptable levels over time. IDC research indicates that, when 10GbE switches debuted in 2001, the average per-port cost was $39 000; today, organisations can expect a budget-friendlier $4 000 or less. In the longer term, 10GbE will allow for more cost-effective networks while extending the range and capabilities of LANs, WANs and, in the case of WiMAX technology, Metropolitan Area Networks (MANs). Many corporate data centres, where Ethernet switches are used to connect clusters of high-speed servers, are already using 10GbE uplinks to aggregate traffic coming in at 1Gbps.
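To put those figures into perspective, a rough back-of-the-envelope calculation follows – the prices and power ranges are those quoted above, while the 48-port switch size and the power midpoints are illustrative assumptions:

```python
# Rough per-port economics of 10GbE, using the figures quoted above.
# The 48-port chassis size and power-draw midpoints are illustrative assumptions.

ports = 48
price_2001, price_now = 39_000, 4_000   # USD per 10GbE port (IDC, per the article)
watts_10gbe, watts_gbe = 9, 3           # midpoints of the 8-10W and 1-5W ranges

print(f"48 ports of 10GbE at 2001 prices: ${ports * price_2001:,}")
print(f"48 ports of 10GbE today:          ${ports * price_now:,}")
print(f"PHY power budget: {ports * watts_10gbe}W at 10GbE vs ~{ports * watts_gbe}W at GbE")
```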
10, 40, 100...
40GbE is viewed by many as the next logical step in Ethernet’s evolution, with many commentators advocating a straight leap all the way to 100 rather than stopping for breath at 40. Be that as it may, many of the leading developers are introducing equipment that does pause for breath – but that will nonetheless allow for a smooth eventual transition to 100GbE.
Jumping from 10 to 40 might pump up the speed but often results in the weakening of optical signals, necessitating infrastructure with a shorter reach. Either way, many of the major global telcos are aiming high, with Deutsche Telekom joining the likes of Alcatel-Lucent, Ericsson and Siemens to claim their spot among those who have had a successful field trial of 100GbE.
It reigns supreme in the Local Area Network (LAN) stakes, but Ethernet is making inroads into the Wide Area Network (WAN) as well. Forrester research indicates that organisations that have used traditional private line WANs for their high-performance applications are “migrating to Ethernet WANs in droves.” According to the Metro Ethernet Forum (MEF), carrier Ethernet is defined by five attributes that differentiate it from the traditional LAN-based Ethernet:
• Standardised services
• Scalability
• Reliability
• Quality of service
• Service management
Despite the inevitable slowdown in the global market for 2009, Ethernet’s capacity to expand to meet demand when a new service is needed is likely to see it gaining in popularity among network providers looking to offer converged services without having to roll out massive infrastructural changes, since it can connect smaller local networks via high-speed links to larger backbones. The implications for a market that is moving towards Software as a Service (SaaS), cloud computing, virtualisation, gaming and digital images/video are clear. According to Infonetics research, the economic downturn actually favours carrier Ethernet technologies because they offer a less expensive alternative to legacy hardware. Infonetics says that service provider investment in carrier Ethernet globally is growing at a faster rate than overall telecom capital expenditure.
Aberdeen research concludes that elegant architecture is the mainstay of Ethernet WAN deployments. “Switching Ethernet frames directly onto long-haul optical transport removes the addressing, queuing, filtering and IP packet processing normally required in MPLS or IP Virtual Private Network services and thereby reduces jitter and latency classically associated with layer 3 packet processing. It also eliminates the need for complex and expensive MPLS Customer Equipment.” According to Forrester research, adding an Ethernet interface to alleviate WAN capacity allows for increased scalability, helping organisations avoid purchasing bandwidth in large chunks (as is the case with TDM-based options such as T1/E1). Forrester says that “with Ethernet, most carriers will offer increments of 64Kbps and can scale from 64Kbps to 1Gbps on a single pipe.”
Be that as it may, Forrester believes that, although Ethernet and MPLS offer flexibility, the costs associated with them and strains placed on networks by ever-hungrier applications could see organisations turning towards traditional network architectures to keep costs in line. “Rather than a few public links connected to a private corporate network, we see companies shifting to public networks connected to small ‘islands’ of private connectivity in key locations like data centres,” says the research house’s Robert Whitely. Forrester calls this the “Internet everywhere” model and says that the confluence of business requirements and better security have made it a reality for many.
MULTIPROTOCOL LABEL SWITCHING
Multiprotocol Label Switching (MPLS) is rapidly becoming the foundation stone of choice for those looking to operate converged networks, particularly large telecommunications networks where voice and data are travelling the same roads. MPLS offers intelligent networks capable of seamless integration with IP, Ethernet, Frame Relay or ATM and offers a cost-effective solution as it can operate independently of the access technology any given user has. It also offers high availability – a key component of any properly-functioning converged network. Ethernet continues to be the technology of choice on high-speed core networks, but MPLS is increasingly popular with organisations seeking to connect branches.
One of the reasons why MPLS is popular on converged networks is its capacity to enable class of service (CoS) tagging and traffic prioritisation – allowing administrators to define and choose which applications take precedence on the network. So whether you’re a heavy user of high-end business applications such as Oracle or SAP, or VoIP is a big priority for you, MPLS technology can help ensure quality of service (QoS).

It is generally agreed that MPLS offers an excellent opportunity for future-proofing networks – its capacity for technological agnosticism means it will work alongside legacy networks just as well as newer core backbones. Add to that the fact that all the major telecom and network vendors support it and the picture looks better again. IP/MPLS networks are attractive because they offer scalability and can handle speed upgrades relatively easily. Better still, they’re cheaper to deploy than comparable alternatives; their legacy-friendliness means that transition paths are smoother. MPLS also allows for the immediate optimisation of traffic infrastructure, along with rapid recovery rates – the pre-calculation of alternative routes means there’s always a plan B or C.
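The label and class-of-service handling itself happens on the carrier’s routers, but an application typically influences it by marking its own IP packets with a DSCP value, which the provider edge can then map onto MPLS traffic classes. The sketch below shows that marking step using the standard sockets API – the address, port and the choice of the “Expedited Forwarding” class are illustrative assumptions, and whether the marking is honoured depends entirely on the operating system and the carrier’s policy:

```python
# Mark outgoing UDP datagrams with a DSCP value (here EF, commonly used for
# voice) so that a DiffServ/MPLS-aware network can prioritise them.
# Address, port and DSCP class are illustrative assumptions.

import socket

EF = 46                   # Expedited Forwarding DSCP code point
TOS_VALUE = EF << 2       # DSCP sits in the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram sent from this socket now carries the EF marking.
sock.sendto(b"voice-like payload", ("192.0.2.10", 5004))
sock.close()
```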
POWER OVER ETHERNET... PLUS
According to the IEEE, power over Ethernet (PoE) increased “the value of an Ethernet port by connecting and powering devices such as IP phones using a common network infrastructure.” It uses the standard twisted pair cabling of Ethernet to transmit electrical power, along with data, to remote devices – in much the same way that a traditional POTS phone draws its power from the same line that connects it to the exchange. The secret of PoE is a “phantom” technique that lets the powered pairs carry electricity without interfering with the data they bear.
PoE can be used to power any number of devices but, in the converged networks context, is most widely used for IP phones. Unlike traditional handsets, which draw power from the telephone line itself, IP phones need a local power supply, so rolling them out would be complicated – and slowed down – by the need to provide separate electrical cabling to each device. PoE circumvents this and is considered best practice for IP desktop phone implementations, making IP telephony the mainstay of its adoption.
The IEEE is currently working on adopting the P802.3at standard, known as Power over Ethernet Plus. This promises more power, allowing for a broader range of Ethernet devices, while continuing to support the older standard. The IEEE says that the newer standard will offer business users “enormous savings”, along with a simplified infrastructure when used for services such as security, where a single cable could both power a CCTV camera and carry its footage.
LOSING THE NUMBERS GAME
Never mind worrying about a bandwidth shortage: convergence and an ever-increasing array of gadgets and devices mean that IP addresses are becoming as rare as hen’s teeth. The current incumbent protocol, IPv4, supports around four billion addresses – nowhere near enough for an increasingly switched-on planet. Which is where IPv6 comes in.
It’s capable of supporting 2 to the power of 128 addresses – roughly 50 octillion for every person on the planet – more than enough to cope with the plethora of “smart” devices now and in the future. More importantly in a networking context, IPv6 offers multicasting as standard – i.e. it can deliver a data stream to multiple destinations simultaneously without the need for duplication – something that remains an optional, patchily supported extra in IPv4.
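The numbers are easy enough to verify – a few lines of Python, with a rough 2009 population figure thrown in purely for illustration, reproduce both the four-billion ceiling and the octillions-per-person comparison:

```python
# The address-space arithmetic behind the IPv4 crunch. The population figure is
# a rough 2009 estimate, used only to illustrate the per-person comparison.

ipv4_total = 2 ** 32            # ~4.3 billion addresses
ipv6_total = 2 ** 128           # ~3.4 x 10^38 addresses
population = 6_800_000_000

print(f"IPv4: {ipv4_total:,} addresses "
      f"({ipv4_total / population:.2f} per person)")
print(f"IPv6: {ipv6_total:.2e} addresses "
      f"({ipv6_total / population:.2e} per person)")   # roughly 5e28, i.e. ~50 octillion
```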
An additional benefit of the newer protocol is security – the sheer size of the address space makes attacks based on scanning an entire network too much hassle to be worth the time, effort or bandwidth. That said, individual hosts will still need to protect themselves from attack, and viruses, worms and the like at application level won’t be going away any time soon either.
With the date by which we’ll run out of IPv4 addresses constantly being brought forward (the latest assessments by the likes of Google have us hovering around the mid-2010 mark), the usual combination of panic merchants and hype-jaded cynics is coming out of the woodwork.
Comparisons with Y2K are inevitable, but this time it really is different – at some point in the near future, IPv4 addresses are going to run out. That’s just the way it is. Panellists at the IFA Press Conference in April 2009 (a precursor to the IFA Trade Show in Berlin, famous for debuting some of the world’s best-known technologies) pointed out that, with 3G and even 4G technology beginning to ship in everything from digital cameras to mobile devices, the pressure has well and truly mounted. As carriers globally continue to build out their next-generation/LTE networks, more and more users will be looking to do more and more things online – and, at the same time, carriers will want to offer as broad a range of devices as possible, creating a self-feeding loop.

At the IFA conference, Businessweek’s Steve Wildstrom said that, to date, the only real takers in the IPv6 department had been the U.S. Government and those who sell products and services to it (Google being a notable exception). While all the current operating systems implement and support it, the private sector has been slow on the uptake, thanks largely to the costs involved in upgrading hardware and the fact that more than just a software fix is required. For these organisations, network address translation offers a useful workaround, but the issue is not going to go away and the proliferation of devices will only make it more urgent in the near future.
For many, the key hurdle has been one of backwards compatibility, or interoperability. The original intention was for “dual-stack mode”, in which the pair would work side by side, but with a lot of hardware completely unable to communicate with the newer protocol, and vice versa, it seems few users want to jump with hardly a bridge in sight. The Internet Engineering Task Force (IETF) is working on new transition tools that, it is hoped, will work properly; these are likely to be available by the end of 2009. Switching now, or factoring in compatibility (or, at least, the ability to accept a hardware/firmware upgrade at a later date) when you make purchases today, could save you a lot of heartache, hassle and money later on.
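In practice, dual-stack behaviour at the application level is straightforward: ask the resolver for every address a host has, IPv6 and IPv4 alike, and use whichever connects first. The sketch below uses only standard-library calls; the hostname is an illustrative placeholder:

```python
# Connect to a host over IPv6 where possible, falling back to IPv4 - the
# application-level face of "dual-stack" operation. Hostname is illustrative.

import socket

def connect_dual_stack(host, port=80, timeout=5):
    last_error = None
    # getaddrinfo returns AF_INET6 and AF_INET results where both exist
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock
        except OSError as exc:
            last_error = exc
    raise last_error or OSError("no addresses returned for " + host)

conn = connect_dual_stack("www.example.com")
print("Connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
conn.close()
```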
With a World Cup looming and a global economic climate in which every extra piece of infrastructure a country offers matters, networks will continue to exercise the minds of everyone operating in this space. As the pressure mounts, network infrastructure and capacity will continue to occupy an important place on the business agenda – those managing it best will inevitably outperform the competition. Increasing mobility joins with an almost zero tolerance for latency and hugely increased user expectations to pile on the pressure as corporate networks seek to rise to the challenges of convergence and new services without having to rip and replace existing infrastructure.
Estimates put us up to three years behind our counterparts in North America and Europe – long enough to see which way the wind is blowing, for sure. But also far enough behind to miss any boats that might be heading into lucrative new waters. With Neotel piling on the pressure for Telkom and the incumbent itself seeking to redefine its business offering and position in the market, 2009 will be a crunch year for the South African network space. But then, we say that every year...