
Architecture


The high-level architectural approach of the “Analytics Engine” component is depicted below:

The “Analytics Engine” consumes time series data collected during the execution of a V&V or SDK trial.
The “Analytics Engine” can register one or more analysis services, which can be implemented in either the R or the Python statistical language.

The “Analytics Engine” is responsible for defining the analysis templates that configure the execution of the analysis processes. It also executes these templates on demand, saves the analysis results in the “Analytics Engine” local storage, and then generates a user-friendly final report.
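
As an indicative illustration only, an analysis template could be captured as a small declarative structure. The field names and values below are hypothetical and do not represent a documented schema of the “Analytics Engine”:

```python
# Hypothetical sketch of an analysis template; all field names are illustrative only.
analysis_template = {
    "analysis_service": "resource_efficiency",   # a registered R or Python analysis service
    "trial_id": "vnv-trial-001",                 # source of the collected time series data
    "metrics": {
        "resource_usage": "cpu_usage_percent",
        "service_output": "http_requests_served",
    },
    "output": {
        "store_results": True,                   # save results to the engine's local storage
        "generate_report": True,                 # produce the user-friendly final report
    },
}
```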

A set of indicative analysis services, selected as important based on the pilots' requirements, is currently supported:

  • Resource efficiency analysis, aiming to identify trends and capacity limits in resource consumption. The monitored metrics combine a resource usage metric (e.g. CPU, memory) with a service output metric (e.g. traffic served, HTTP requests served, active users, number of sessions). A visualisation/graph is produced with the resource usage metric on the x axis and the service output metric on the y axis. Minimum and maximum values for resource efficiency can be estimated. A linear regression model can also be produced, highlighting the correlation among the monitored values and its p-value (see the regression sketch after this list).
  • Elasticity efficiency, applied under the prerequisite that an elasticity policy is enforced via an NFVO (e.g. the SP). Elasticity efficiency may be expressed as a pair of discrete metrics: Application Capacity Change (output) and Capacity Change Lead Time (input). Application Capacity Change is the incremental capacity change related to a scaling action. Capacity Change Lead Time is the time required for a capacity change. Both metrics are depicted in relevant visualisations (see the elasticity sketch after this list).
  • Correlograms, depicting data in correlation matrices. A correlation matrix combines various resource usage metrics and service output metrics and provides correlation values that can lead to various insights (e.g. which parameters are highly dependent, which parameters can create bottlenecks in the overall performance). The correlation values are accompanied by statistical-significance values for the produced results (see the correlation sketch after this list).
  • Time series decomposition, aiming to reveal trend or seasonality aspects in the collected time series data. Such an analysis is meaningful for large time series collected under real operational conditions. Time series decomposition is mainly applied to resource usage metrics, yielding insights that are particularly valuable for an operator's forecasting and planning (see the decomposition sketch after this list).
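
The sketch below illustrates how a resource efficiency regression could look in Python; the CSV file and column names are assumptions for illustration, not part of the engine's interface.

```python
# Minimal sketch of a resource-efficiency regression, assuming two aligned
# time series: CPU usage (%) and HTTP requests served per interval.
# The file name and column names are illustrative only.
import pandas as pd
from scipy import stats

df = pd.read_csv("trial_metrics.csv")        # hypothetical export of collected metrics
x = df["cpu_usage_percent"]                  # resource usage metric (x axis)
y = df["http_requests_served"]               # service output metric (y axis)

result = stats.linregress(x, y)              # ordinary least-squares fit
print(f"slope={result.slope:.3f}, r={result.rvalue:.3f}, p-value={result.pvalue:.4f}")
print(f"observed resource-usage range: min={x.min()}, max={x.max()}")
```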
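
A minimal sketch of how the two elasticity efficiency metrics could be derived, assuming a hypothetical log of scaling actions (the column names are illustrative):

```python
# Compute the two elasticity-efficiency metrics from a hypothetical scaling log.
import pandas as pd

scaling = pd.read_csv(
    "scaling_actions.csv",                   # illustrative file name
    parse_dates=["requested_at", "completed_at"],
)

# Application Capacity Change: incremental capacity change per scaling action.
scaling["capacity_change"] = scaling["capacity_after"] - scaling["capacity_before"]

# Capacity Change Lead Time: time required for the capacity change to take effect.
scaling["lead_time_s"] = (
    scaling["completed_at"] - scaling["requested_at"]
).dt.total_seconds()

print(scaling[["capacity_change", "lead_time_s"]].describe())
```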
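
A possible correlogram computation over mixed resource usage and service output metrics, again with illustrative column names:

```python
# Correlation matrix across resource-usage and service-output metrics,
# plus a per-pair p-value for statistical significance.
import pandas as pd
from scipy import stats

df = pd.read_csv("trial_metrics.csv")
metrics = ["cpu_usage_percent", "memory_usage_mb", "http_requests_served", "active_users"]

corr = df[metrics].corr()                    # Pearson correlation matrix
print(corr.round(2))

# Statistical significance for one pair of metrics.
r, p = stats.pearsonr(df["cpu_usage_percent"], df["http_requests_served"])
print(f"cpu vs requests: r={r:.2f}, p-value={p:.4f}")
```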
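
A time series decomposition sketch, assuming an hourly resource usage series with a daily (24-sample) seasonality; the sampling rate and column names are assumptions:

```python
# Decompose a resource-usage series into trend, seasonal and residual components.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

df = pd.read_csv("trial_metrics.csv", parse_dates=["timestamp"], index_col="timestamp")
series = df["cpu_usage_percent"].dropna()

# period=24 reflects the assumed daily cycle of an hourly-sampled series.
decomposition = seasonal_decompose(series, model="additive", period=24)
print(decomposition.trend.dropna().head())
print(decomposition.seasonal.head())
```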