'Triple play' services - services that simultaneously provide voice, Internet and video over an IP connection - are marketed as instantly accessible, constantly available, cost-effective and reliable. Service quality measurement and management is the foundation for delivering on these marketing promises, and for ensuring the growth of this high-end, high-value market.
The current paradox inherent in service management is that the end-user is the only person who can truly and accurately assess the quality of a voice, video or data service. Yet many of the service management tools available rely on data generated by network devices, systems, applications and elements to cobble together a picture of the service.
From a true quality of service management perspective, however, it is the end-user's quality assessment that should be simultaneously the starting point and the end point of any quality of service (QOS) optimisation strategy. From this end-user view, the service provider can make informed, tactical and strategic decisions about service delivery, resource provision and equipment utilisation - after all, why allocate budget to network upgrades if technical metrics suggest the service is slow, but the quality of the service as perceived by the end-user is perfectly acceptable?
Beyond this basic rule, optimising the quality of service of data, VOIP or video infrastructure differs slightly, depending on the requirements of each service. For example, data services provide a clear path through the components of the infrastructure to the end-user. By analysing each transaction and consolidating the results, IT administrators can build service models and set thresholds for end-user response times, error rates, transaction rates and so on. Because it is measurable, the quality of experience (QOE) of data services is also relatively straightforward to reproduce for test and analysis purposes, allowing IT administrators to optimise the performance and availability of the infrastructure in near real time.
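As a simple illustration of this transaction-based approach, the sketch below consolidates per-transaction measurements into the kind of service model an administrator might check against thresholds. The record structure, threshold values and function names are assumptions for illustration, not any particular product's model.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-transaction record captured at the data-service front end.
@dataclass
class Transaction:
    response_time_ms: float
    succeeded: bool

# Illustrative thresholds for an assumed service model; real values would
# come from the provider's own service-level targets.
RESPONSE_TIME_THRESHOLD_MS = 800.0
ERROR_RATE_THRESHOLD = 0.02

def evaluate_service_model(transactions: list[Transaction], window_s: float) -> dict:
    """Consolidate raw transactions into the metrics an administrator would
    compare against thresholds: response time, error rate, transaction rate."""
    avg_response = mean(t.response_time_ms for t in transactions)
    error_rate = sum(not t.succeeded for t in transactions) / len(transactions)
    tx_rate = len(transactions) / window_s
    return {
        "avg_response_time_ms": avg_response,
        "error_rate": error_rate,
        "transactions_per_second": tx_rate,
        "response_time_ok": avg_response <= RESPONSE_TIME_THRESHOLD_MS,
        "error_rate_ok": error_rate <= ERROR_RATE_THRESHOLD,
    }
```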
The situation is slightly different when it comes to VOIP or video-on-demand (VOD) infrastructures. Quality of experience is not so easily measurable, because it is largely based on the end-user's subjective judgement. As a result, the cause-and-effect link between technical quality of service metrics and user experience is broken. The QOE of VOIP or video-on-demand services consists essentially of non-technical criteria, while VOIP or VOD QOS is still based on the correlation of infrastructure performance metrics. Because QOS metrics do not carry the same information as the QOE perceived by the end-user, IT administrators cannot precisely correlate the technical performance data with users' experience, and so cannot gain a true picture of the end-user's experience. With VOIP and VOD, the traditional methodology for measuring and optimising quality of service is broken and needs to be recreated.
Measuring end-user subjectivity
QOE measurement technologies focus on creating end-to-end visibility of the user's experience through automated and reliable mechanisms. There are already a number of technical building blocks available to achieve this goal, including the Mean Opinion Score (MOS) to measure QOE on VOIP, and the Video Quality Metric (VQM) to assess users' perception of VOD services. Over the past few years, these algorithms have been improved, allowing reliable measurement of QOE. But they are just technical building blocks, and it is important to integrate them into a comprehensive quality of experience infrastructure that can correlate QOE metrics with infrastructure QOS metrics and optimisation procedures.
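To make the MOS building block concrete, the sketch below uses the ITU-T G.107 E-model's mapping from a transmission rating factor R to an estimated MOS, with a simplified R computation from one-way delay and packet loss. The codec parameters and example inputs are assumptions for illustration; a production scorer would follow the full G.107 computation.

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 mapping from transmission rating factor R to an
    estimated Mean Opinion Score."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

def estimate_r(one_way_delay_ms: float, packet_loss_pct: float,
               codec_ie: float = 0.0, bpl: float = 25.1) -> float:
    """Simplified E-model R-factor: start from the default basic rating
    (~93.2) and subtract delay and packet-loss impairments. codec_ie and
    bpl are codec-dependent parameters; the values here are assumptions."""
    # Delay impairment Id (simplified form of the G.107 delay terms).
    d = one_way_delay_ms
    id_impairment = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Effective equipment impairment grows with packet loss.
    ie_eff = codec_ie + (95.0 - codec_ie) * packet_loss_pct / (packet_loss_pct + bpl)
    return 93.2 - id_impairment - ie_eff

# Example: 150 ms one-way delay and 1 % loss on a G.711-like codec.
print(round(r_to_mos(estimate_r(150.0, 1.0)), 2))
```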
Pushing the limits
To achieve the correlation between QOE and QOS metrics, it is necessary to change both the perspective and the scale of the service provider's quality management strategy. Implementing QOE for VOIP or VOD requires extending the service delivery architecture, reviewing the service management methodology and renewing the service management and measurement tools. From an architecture standpoint, QOE starts where QOS stops. As QOE monitors the end-user experience, monitoring logic must be implemented as close as possible to the end-user. That lies beyond the traditional limits of QOS: while QOS only needs to monitor the access points (DSLAM, VOIP access server or VOD connection servers), QOE aims to cross that boundary and enter users' homes, implementing the monitoring logic directly in the end-user's terminal (VOIP phone, VOD decoder, ADSL modem). In effect, implementing QOE extends the boundaries of the provider-controlled network to the hundreds of thousands of end-user terminals connected to QOS-monitored access points.
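A minimal sketch of what that terminal-side monitoring logic might look like follows: a probe running in the VOIP phone, VOD decoder or modem that periodically samples its own media counters. The data structure and sampling values are hypothetical placeholders, not a vendor's agent.

```python
import random
import time
from dataclasses import dataclass

# Hypothetical terminal-side probe: the monitoring logic the article
# describes moving from the access point into the end-user's terminal.
@dataclass
class TerminalSample:
    timestamp: float
    jitter_ms: float
    packet_loss_pct: float
    decode_errors: int

def read_local_counters() -> TerminalSample:
    """Placeholder for reading the terminal's own media statistics
    (e.g. receiver-side counters); random values stand in here."""
    return TerminalSample(
        timestamp=time.time(),
        jitter_ms=random.uniform(0.0, 40.0),
        packet_loss_pct=random.uniform(0.0, 3.0),
        decode_errors=random.randint(0, 2),
    )

def probe_loop(sample_interval_s: float = 1.0, samples: int = 3) -> list[TerminalSample]:
    """Sample quality counters at the edge, as close to the end-user as
    possible, for later enrichment and reporting."""
    collected = []
    for _ in range(samples):
        collected.append(read_local_counters())
        time.sleep(sample_interval_s)
    return collected
```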
New requirements
A true paradigm shift, QOE requires the end-user's terminal to take an active role in the overall quality of service infrastructure. Previously, the technical approach to QOS was based on the principle that the QOS of each end-user's terminal was strictly equal to the QOS of the federating access point. As a result, it was not necessary to monitor each end-user terminal individually. QOE changes all that. Each end-user terminal must individually report metrics that simulate the user's experience. This means that each end-user terminal produces information, and each terminal must be authenticated by the QOE infrastructure. This change of direction profoundly modifies the way quality measurement information must be managed.
Firstly, raw QOE data needs to be turned almost immediately into qualified information. To be useful, raw QOE data must be enriched with both technical and non-technical information, including configuration data, event history, user identification and access point ID. Secondly, data collection needs to evolve from pull to push. Most QOS infrastructures are still based on pulling techniques, driven from a centralised QOS server. However, this technique does not allow an end point to flag user experience degradation in real time. To achieve this, the end-user terminal must be transformed into an active device, capable of raising an alert when a threshold is passed and triggering QOS or technical analysis to speed up the detection of the root cause of the problem.
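The sketch below illustrates both requirements under stated assumptions: the raw measurement is enriched with the context listed above, and the terminal pushes the report, with an alert flag, to a collector rather than waiting to be polled. The endpoint URL, field names and threshold are hypothetical.

```python
import json
import time
import urllib.request

# Illustrative enrichment and push reporting from the terminal; this is
# not a defined protocol.
QOE_COLLECTOR_URL = "https://qoe-collector.example.net/report"  # hypothetical
MOS_ALERT_THRESHOLD = 3.5

def build_report(mos_estimate: float, terminal_id: str, user_id: str,
                 access_point_id: str, config_version: str,
                 recent_events: list[str]) -> dict:
    """Enrich the raw QOE measurement with the technical and non-technical
    context the article lists: configuration data, event history,
    user identification and access point ID."""
    return {
        "timestamp": time.time(),
        "terminal_id": terminal_id,
        "user_id": user_id,
        "access_point_id": access_point_id,
        "config_version": config_version,
        "recent_events": recent_events,
        "mos_estimate": mos_estimate,
        "alert": mos_estimate < MOS_ALERT_THRESHOLD,  # push, don't wait to be polled
    }

def push_report(report: dict) -> None:
    """Push the qualified report to the collector; a pull-based QOS design
    would instead poll each device from a central server."""
    body = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(QOE_COLLECTOR_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)
```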
As a result, real-time QOE triggers an exponential growth in management rules, including filtering, consolidation and correlation. This dramatically increases the volume and the complexity of QOE management, to a point where centralised legacy QOS servers will not be able to scale. Not only will the volume of information grow rapidly, but the complexity of management rules may directly affect the performance and the real-time response of the existing QOS infrastructure, thereby impacting the provider's ability to address and proactively resolve QOS issues. Additionally, VOIP- and VOD-extended QOE infrastructures (including end-user terminals) must take into account a significantly higher level of heterogeneity. Many service providers have built out their networks over a period of time, and so possess a large variety of end-user terminals. Correlating QOE data from these heterogeneous sources will create a significant overhead, which may adversely affect the performance of the service delivery infrastructure. This centralised approach also tends to slow the implementation of new technologies and services.
Distributed analysis intelligence
By extending the analysis perimeter, QOE only highlights an existing flaw of centralised QOS management systems. The only way to deal effectively with an ever-growing number of end-user terminals, along with increasing heterogeneity, is to distribute the analysis intelligence throughout the infrastructure, enabling local processing of QOS information as close as possible to its source. This provides IT administrators with the kind of metrics they need to fine-tune the VOIP or VOD infrastructure to deliver an enhanced end-user experience.
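One way to picture this distribution of intelligence: an aggregation node near each access point consolidates the per-terminal reports locally and forwards only summaries and alerts upstream. The sketch below assumes the hypothetical report format used earlier.

```python
from collections import defaultdict
from statistics import mean

def summarise_locally(reports: list[dict]) -> dict:
    """Consolidate per-terminal QOE reports near the access point so that
    only a condensed view, not every raw sample, travels to the centre."""
    by_access_point: dict[str, list[dict]] = defaultdict(list)
    for report in reports:
        by_access_point[report["access_point_id"]].append(report)

    summaries = {}
    for ap_id, ap_reports in by_access_point.items():
        summaries[ap_id] = {
            "terminal_count": len({r["terminal_id"] for r in ap_reports}),
            "mean_mos": round(mean(r["mos_estimate"] for r in ap_reports), 2),
            "alerting_terminals": [r["terminal_id"] for r in ap_reports if r["alert"]],
        }
    # Only this condensed summary crosses the network towards the central servers.
    return summaries
```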
From QOS to QOE
Early detection of degradation in the end-user experience through QOE mechanisms allows IT administrators to act proactively, detecting and solving issues before they impact the end-user's experience. By aggregating the quality of experience information provided by end-user terminals with the technical data of traditional network QOS tools, IT administrators can gain end-to-end visibility of the quality optimisation process, and establish a link between the end-user experience and the technical QOS. This approach not only allows service providers to anticipate end-user complaints, but also enables IT administrators to plan network evolution with greater accuracy and speed.
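As a final illustration, the sketch below joins the terminal-side QOE summaries with access-point QOS counters so that a drop in perceived quality can be traced to a candidate technical cause. The metric names and limits are assumptions, not a standard mapping.

```python
# Illustrative limits for the access-point QOS counters; values are assumed.
QOS_LIMITS = {"uplink_utilisation_pct": 85.0, "packet_loss_pct": 1.0, "jitter_ms": 30.0}

def correlate_qoe_with_qos(qoe_summary: dict, qos_by_access_point: dict) -> list[dict]:
    """Link degraded end-user experience (QOE alerts) to the technical
    QOS metrics of the access point serving those terminals."""
    findings = []
    for ap_id, summary in qoe_summary.items():
        if not summary["alerting_terminals"]:
            continue  # perceived quality is fine, no action needed
        qos = qos_by_access_point.get(ap_id, {})
        suspects = [m for m, v in qos.items() if v > QOS_LIMITS.get(m, float("inf"))]
        findings.append({
            "access_point_id": ap_id,
            "affected_terminals": summary["alerting_terminals"],
            "mean_mos": summary["mean_mos"],
            "suspect_qos_metrics": suspects or ["none exceeded: investigate terminal side"],
        })
    return findings

# Example usage with the summaries produced by the local aggregation sketch:
# correlate_qoe_with_qos(summarise_locally(reports), {"ap-17": {"jitter_ms": 42.0}})
```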