
Friday, January 21, 2011


Performance measurement represents a new stage in the monitoring of network data. In the past, monitoring a network meant decoding messages and filtering out which messages belonged to the same call. Single calls were analyzed, and failures were often found only by chance. Performance measurement, in contrast, is an effective means of scanning the whole network at any time and systematically searching for errors, bottlenecks and suspicious behavior.

Performance measurement procedures and appropriate equipment were already introduced in GSM and 2.5G GSM/GPRS radio access networks as well as in core networks. However, compared to the performance measurement requirements of UTRAN, those legacy requirements were quite simple, and it was relatively easy to collect the necessary protocol data and to compute and aggregate appropriate measurement results. Even today, 3GPP TS 32.403 (Telecommunication Management; Performance Management (PM); Performance Measurements – UMTS and Combined UMTS/GSM) contains only a minimum set of requirements that is not much more than the tip of the iceberg.

Performance measurement is fairly unique. There are many parameters and events that can be measured, and many measurements that can be correlated with each other; the number of possible combinations is practically unlimited. Hence the question: what is the right choice? There is no general answer, except perhaps the following: a network operator defines business targets based on economic key performance indicators (KPIs). These business targets provide the guidance for defining network optimization targets, and from the network optimization targets technical KPI targets can be derived, which describe the desired behavior of the network. On this basis, operators offer services step by step; on a very general level these are, for example, speech calls and packet calls. These services are then optimized and detected errors are eliminated. All in all, it is correct to say that the purpose of performance measurement is to troubleshoot and optimize the network. An illustration of the approach explained above is shown below:

However, whatever network operators do, it is ultimately up to the subscriber to evaluate whether a network has been optimized in a way that meets customers' expectations. A rising churn rate (i.e. the number of subscribers cancelling a contract and setting up a new one with a competing operator) is an indicator that something might also be wrong in the technical field. There is, however, one point that all analysts and market experts who care about churn rates should be aware of: it is very difficult to calculate the real churn rate. Most subscribers in mobile networks today are prepaid subscribers, and many prepaid subscribers are people who temporarily stay abroad; since prepaid tariffs are often significantly cheaper than roaming tariffs, such subscribers become temporary customers, so to speak. Once they go back to their home countries, their prepaid accounts remain active until their contracts expire. Therefore not every expired contract is a churn. The actual number of churns is expected to be much lower, but how much lower? Additional information is necessary to find this out. The fact that additional information is necessary to compute non-technical key performance indicators from measurement results (in this case from a counter that counts the number of cancelled and expired contracts) also applies to the computation of technical KPIs and key quality indicators (KQIs). See the figure below:
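To make this caveat concrete, the short sketch below shows why a churn rate computed naively from a cancelled/expired-contracts counter overestimates the real value. This is a hypothetical Python example; the counter names, the numbers and the visitor-share adjustment are illustrative assumptions, not standard counters.

```python
# Hypothetical monthly counters from a performance management system.
# All names and values are illustrative assumptions.
subscribers_start = 1_000_000   # active subscribers at start of the period
cancelled = 8_000               # explicitly cancelled contracts
expired = 30_000                # prepaid contracts that simply expired

# Naive churn rate: treat every cancelled or expired contract as a churn.
naive_churn_rate = (cancelled + expired) / subscribers_start

# Additional (non-counter) information: an assumed share of expired
# prepaid contracts that belonged to temporary visitors from abroad,
# who never intended to stay with the operator.
visitor_share = 0.6
real_expired_churns = expired * (1 - visitor_share)
adjusted_churn_rate = (cancelled + real_expired_churns) / subscribers_start

print(f"naive churn rate:    {naive_churn_rate:.1%}")    # 3.8%
print(f"adjusted churn rate: {adjusted_churn_rate:.1%}")  # 2.0%
```

The point is only that the raw counter alone cannot distinguish the two cases: the visitor share has to come from an outside data source, which is exactly the "additional information" referred to above.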

The general concept behind these indicators is that network elements and probes, acting as service resource instances, are placed at certain nodes of the network infrastructure to pick up performance-related data, e.g. cumulative counters of protocol events. At constant time intervals, or in near real time, this performance-related data is transferred to higher-level service assurance and performance management systems.
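The cumulative-counter idea can be sketched as follows. This is a hypothetical Python example; the event names and the notion of a fixed granularity period are assumptions chosen for illustration, not taken from any particular network element.

```python
from collections import Counter

class CounterReporter:
    """Keeps cumulative event counters and reports per-interval deltas."""

    def __init__(self):
        self.cumulative = Counter()   # counters since element start-up
        self.last_report = Counter()  # snapshot at the previous report

    def record(self, event: str) -> None:
        self.cumulative[event] += 1

    def report(self) -> dict:
        """Called once per granularity period (e.g. every 15 minutes)."""
        delta = self.cumulative - self.last_report
        self.last_report = Counter(self.cumulative)
        return dict(delta)

reporter = CounterReporter()
for event in ["rrc_conn_req", "rrc_conn_req", "rrc_conn_setup_cmp"]:
    reporter.record(event)
print(reporter.report())  # deltas for the first interval
reporter.record("rrc_conn_req")
print(reporter.report())  # only events recorded since the last report
```

The management system then aggregates such per-interval deltas from many elements into KPIs, rather than working with the raw cumulative values.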

However, there is one major problem with this concept: the network elements that feed higher-level network management systems with data are primarily designed to switch connections. It is not the primary job of an RNC to measure and report performance-related data. Yet the most critical part of a mobile network is the radio interface, and the UTRAN controlled by the RNCs is an excellent place to collect data giving an overview of radio interface quality. Drive tests could do the same job, but they are expensive (at a minimum, a single drive test campaign requires paying two people per day plus a car), and performance data measured during drive tests cannot be reported frequently and directly to higher-level network management systems. As a result, a great deal of performance measurement data that could be of high value for service quality management is simply not available. This triggers the need for a new generation of measurement equipment that can capture terabytes of data from UTRAN interfaces, perform highly sophisticated filtering and correlation, store key performance results in databases, and display, export and import these measurement results using standard components and procedures.
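The filtering and correlation step mentioned above can be sketched in miniature: captured messages are grouped by a per-call identifier so that each call can be analyzed as a whole, and uninteresting calls are filtered out. This is a hypothetical Python example; the message fields, the call identifier and the message names are assumptions for illustration, not decoded UTRAN protocol units.

```python
from collections import defaultdict

# Hypothetical captured messages; in a real system these would be decoded
# protocol units from UTRAN interfaces, keyed by something like a
# connection identifier assigned by the RNC.
messages = [
    {"call_id": "A", "msg": "SETUP"},
    {"call_id": "B", "msg": "SETUP"},
    {"call_id": "A", "msg": "CONNECT"},
    {"call_id": "B", "msg": "RELEASE_ABNORMAL"},
    {"call_id": "A", "msg": "RELEASE_NORMAL"},
]

# Correlation: group messages belonging to the same call.
calls = defaultdict(list)
for m in messages:
    calls[m["call_id"]].append(m["msg"])

# Filtering: keep only calls that ended abnormally, the interesting
# candidates for systematic troubleshooting.
failed = {cid: msgs for cid, msgs in calls.items()
          if msgs[-1] == "RELEASE_ABNORMAL"}
print(failed)  # {'B': ['SETUP', 'RELEASE_ABNORMAL']}
```

At UTRAN scale the same principle applies, only across terabytes of captured traffic rather than a handful of in-memory records, which is what makes the dedicated measurement equipment necessary.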
