
Method Overview

This page presents a coarse-grained overview of the recommended workflow for using the Q-ImPrESS approach. Please click on the process steps shown in the UML diagram to get more information on each step. If you wish to obtain even more information on the Q-ImPrESS method, please refer to D6.1: Method and Abstract Workflow.

Gathering New Requirements (3.1)

The workflow starts with the collection of new requirements, usually by the product manager. These requirements may, for example, come from customers or be derived from a comparison with similar products from competitors.

Any new requirement leads to a change in the system. When the impact of the change on system quality is not predictable, or when more than one potential change scenario can be followed, the impact of each candidate solution can first be assessed using the Q-ImPrESS method.

The Q-ImPrESS method does not provide tool support for the collection of requirements, but this process nevertheless triggers the entire workflow.

Defining Change Scenario (3.2)

A brief analysis of these requirements leads to the selection of the change scenarios to assess. This activity is performed by the product manager with the help of the system architect, as deep knowledge of the system is needed. The system architect proposes some alternative solutions to meet the requirements. These alternative solutions will be modelled and their impact analysed by the Q-ImPrESS toolkit. These first introductory activities are performed several times during a product life cycle and, up to this point, do not require any use or knowledge of specific Q-ImPrESS tools.

Modelling Change Scenario (3.3)

The Q-ImPrESS toolkit can be used to easily identify the implementation alternative that best fits the required quality of service, without having to implement each of them. This is the most prominent added value of the Q-ImPrESS working method compared to proceeding the usual way.

Even though the system architect does not have to know all Q-ImPrESS platform internals to use the Q-ImPrESS toolkit, knowledge of the main abstractions and ideas underlying Q-ImPrESS is necessary in order to understand the overall workflow.

To predict the effects of a change scenario on quality attributes, Q-ImPrESS analysis tools use a model of the system, the Service Architecture Model (SAM).

Any change scenario (either usage, assembly or allocation) leads to an update of the Service Architecture Model of the system under analysis.

The modelling of the system is split into two levels. Each analysis tool has its own internal representation of the system information, specialised for the tool's goal. This specialised system model representation can be seen as the result of a transformation over a common Service Architecture Meta-Model (SAMM). The model-to-model transformation from SAMM to a tool prediction model is automatic and performed behind the scenes (using a standard model transformation language such as QVT). An automatic model-to-model transformation in the other direction (from a tool prediction model back to SAMM) is not foreseen at the moment.
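To make this concrete, the following sketch shows how such a SAMM-to-PCM transformation could be invoked programmatically via the Eclipse QVT Operational (QVTo) API. The transformation script samm2pcm.qvto and its plugin path are hypothetical names chosen for illustration; the Q-ImPrESS toolkit performs an equivalent step behind the scenes.

    import java.util.Collections;
    import java.util.List;

    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.m2m.qvt.oml.BasicModelExtent;
    import org.eclipse.m2m.qvt.oml.ExecutionContextImpl;
    import org.eclipse.m2m.qvt.oml.ExecutionDiagnostic;
    import org.eclipse.m2m.qvt.oml.ModelExtent;
    import org.eclipse.m2m.qvt.oml.TransformationExecutor;

    public class Samm2PcmRunner {
        public static void main(String[] args) {
            // Hypothetical QVTo script mapping SAMM instances to PCM instances.
            TransformationExecutor executor = new TransformationExecutor(
                    URI.createURI("platform:/plugin/example/transforms/samm2pcm.qvto"));

            // Input: the SAM (a SAMM instance) loaded elsewhere as EMF objects.
            ModelExtent input = new BasicModelExtent(loadSammModel());
            // Output: the tool prediction model produced by the transformation.
            ModelExtent output = new BasicModelExtent();

            ExecutionDiagnostic result = executor.execute(
                    new ExecutionContextImpl(), input, output);
            if (result.getSeverity() != ExecutionDiagnostic.OK) {
                throw new IllegalStateException("Transformation failed: " + result.getMessage());
            }
            // output.getContents() now holds the generated PCM model elements.
        }

        private static List<EObject> loadSammModel() {
            return Collections.emptyList(); // placeholder: load the SAMM instance via EMF
        }
    }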

Predict System Quality (3.4)

The next process is the prediction of the quality metrics for the updated Service Architecture Model.

Processes 3.3 and 3.4 are iterated for each defined change scenario; the results are saved along with the model of the corresponding implementation alternative.

Performance Prediction (3.4.1)

Given a SAM, the user can start a performance prediction using a transformation into the Palladio Component Model (PCM). For this, the SAM has to be complete, i.e., all components are modelled, the service architecture is defined, the system is allocated, and its usage model is available. Furthermore, all model elements must carry the required quality annotations. The Q-ImPrESS tool set checks whether the model is valid, which mainly means that all model elements and all required quality annotations are present. The transformation to PCM and the resulting system simulation are executed transparently for the user of the Q-ImPrESS method.
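As a rough illustration of what such a validity check verifies, the following sketch uses hypothetical, simplified SAM classes (the real SAMM is an EMF metamodel, and this is not the actual Q-ImPrESS API):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical, simplified view of a SAM component.
    class Component {
        String name;
        boolean hasResourceDemandAnnotation; // e.g. CPU demand per service call
    }

    class ServiceArchitectureModel {
        List<Component> components = new ArrayList<>();
        boolean allocationDefined;
        boolean usageModelDefined;

        // Returns the problems that would block a performance prediction;
        // an empty list means the SAM is complete enough to be transformed to PCM.
        List<String> validateForPerformancePrediction() {
            List<String> problems = new ArrayList<>();
            if (!allocationDefined) problems.add("no allocation model");
            if (!usageModelDefined) problems.add("no usage model");
            for (Component c : components) {
                if (!c.hasResourceDemandAnnotation) {
                    problems.add("component '" + c.name + "' lacks resource demand annotations");
                }
            }
            return problems;
        }
    }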

As a result, the PCM performance prediction annotates the SAM with the average response time distribution, the resource utilisation distribution, and an estimated system throughput. Any of these predicted values can be used in a subsequent trade-off analysis.

Reliability Prediction (3.4.2)

The Q-ImPrESS toolkit uses KLAPER for the reliability prediction analysis.

KLAPER (Kernel LAnguage for PErformance and Reliability analysis) is a kernel language which can be used as a starting point to carry out performance or reliability analysis. KLAPER adopts a single model of the system, which can carry different kinds of additional information to support the analysis of different attributes (performance or reliability). Within the Q-ImPrESS scope, KLAPER is used to support reliability prediction only. The resulting predictions can be used by the system architect to verify whether a given system architecture satisfies reliability constraints, or to evaluate multiple candidate design alternatives in a trade-off analysis.

Reliability is a measure of the continuous delivery of correct service or, equivalently, of the time to failure. In the Q-ImPrESS framework it is evaluated as the probability that the system performs its required functions under stated conditions for a specified period of time. Furthermore, KLAPER allows working at a finer grain, obtaining the probability that any subsystem, or a set of subsystems, successfully completes a given service invocation (see Deliverable D3.1).
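Expressed as a formula, and assuming for illustration only a constant failure rate \lambda (Q-ImPrESS derives the actual probabilities from the annotated model), the reliability over a mission time t is

    R(t) = \Pr(T > t) = e^{-\lambda t}

where T denotes the time to failure.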

To support the reliability analysis, KLAPER uses information embedded in models expressed in SAMM, decorated with quality annotations such as the usage profile and component failure rates. At the end of the analysis, KLAPER provides reliability values for components and system nodes.
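As an illustrative sketch of how such per-component annotations can compose (under the common simplifying assumption of independent failures; this is not necessarily the exact model KLAPER implements), the reliability of a service that invokes component i on average n_i times, with failure probability fp_i per invocation, is approximately

    R_{\mathit{service}} \approx \prod_i (1 - \mathit{fp}_i)^{n_i}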

Maintainability Prediction (3.4.3)

The goal of the maintainability prediction is to help a software architect estimate the effort necessary to implement a given change request in the architecture described by an architecture model.

The maintainability prediction process therefore takes as inputs the architecture model, i.e., an instance of the SAMM, and the description of a change request. The change request description is represented as a scenario in the overall workflow.
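A minimal sketch of this idea follows, with hypothetical types invented for illustration (the actual prediction in the toolkit is model-based): the effort for a change scenario is approximated bottom-up as the sum of per-element efforts over the model elements the change touches.

    import java.util.List;
    import java.util.Map;

    // Hypothetical representation of a change request scenario.
    class ChangeScenario {
        String description;
        List<String> affectedElements; // names of the SAMM elements to be modified
    }

    class MaintainabilityPredictor {
        // Assumed per-element effort estimates in person-hours, provided by the architect.
        private final Map<String, Double> effortPerElement;

        MaintainabilityPredictor(Map<String, Double> effortPerElement) {
            this.effortPerElement = effortPerElement;
        }

        // Bottom-up effort estimate: sum the efforts of all affected elements.
        double estimateEffort(ChangeScenario scenario) {
            return scenario.affectedElements.stream()
                    .mapToDouble(e -> effortPerElement.getOrDefault(e, 0.0))
                    .sum();
        }
    }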

Tradeoff Analysis (3.5)

Quality prediction results are compared in the trade-off analysis process. To perform a trade-off analysis, the designer needs the following inputs: the architectural design alternatives together with the predicted values of their QoS attributes. In addition, inputs pertaining to the various usage profiles should be specified, such as utilisation functions, data sizes and loop counts, as well as the change scenarios. From these inputs, a model builder generates an optimisation model, which is then fed to a model solver. The solver uses approximation techniques to generate the Pareto curves needed to determine the trade-offs between maintainability, performance and reliability. Based on these curves, the most appropriate architectural design alternative is chosen.
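The core of the comparison can be sketched as a simple non-dominance filter over the alternatives. The types below are hypothetical and all three objectives are assumed to be minimised; the actual Q-ImPrESS solver works on the generated optimisation model instead.

    import java.util.ArrayList;
    import java.util.List;

    // One design alternative with its predicted quality values (hypothetical record).
    record Alternative(String name, double responseTime, double failureProb, double maintEffort) {}

    class ParetoFilter {
        // a dominates b if a is no worse in every objective and strictly better in at least one.
        static boolean dominates(Alternative a, Alternative b) {
            boolean noWorse = a.responseTime() <= b.responseTime()
                    && a.failureProb() <= b.failureProb()
                    && a.maintEffort() <= b.maintEffort();
            boolean strictlyBetter = a.responseTime() < b.responseTime()
                    || a.failureProb() < b.failureProb()
                    || a.maintEffort() < b.maintEffort();
            return noWorse && strictlyBetter;
        }

        // Keeps only the non-dominated alternatives, i.e. the Pareto front.
        static List<Alternative> paretoFront(List<Alternative> all) {
            List<Alternative> front = new ArrayList<>();
            for (Alternative candidate : all) {
                boolean dominated = all.stream()
                        .anyMatch(other -> other != candidate && dominates(other, candidate));
                if (!dominated) front.add(candidate);
            }
            return front;
        }
    }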

The result of the trade-off analysis may be that none of the proposed alternatives actually meets the requirements, in which case the overall process has to continue again with 3.2, defining new implementation scenarios.

Implement SAM (3.6)

After the system architect has selected the change scenario which best meets the requirements, he can proceed with the implementation. The system architect therefore creates basic implementation stubs for the new code. He can either do this manually or use model-driven engineering techniques, where parts of the Service Architecture Model or other models are used to generate code artefacts.
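For illustration, a generated stub for a hypothetical ProxyService component taken from the SAM might look as follows; the behaviour is then filled in by the responsible system engineers:

    // ProxyService.java - interface derived from a (hypothetical) SAM service specification.
    public interface ProxyService {
        byte[] handleRequest(byte[] payload);
    }

    // ProxyServiceImpl.java - generated skeleton: architectural structure only.
    public class ProxyServiceImpl implements ProxyService {
        @Override
        public byte[] handleRequest(byte[] payload) {
            // TODO: behaviour to be implemented by the responsible system engineer
            throw new UnsupportedOperationException("not yet implemented");
        }
    }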

Having created these basic code artefacts, the system architect can delegate the responsibility for parts of the new system architecture to system engineers, who implement the necessary changes.

Should the system architect decide not to use model-driven techniques for the implementation of the change scenario, he needs to make sure that the architecture represented by the source code actually matches the change scenario defined in the Service Architecture Model. This step can be omitted if the Service Architecture Model is used to generate architectural code fragments.

Validate Model (3.7)

To make sure the implemented system meets the requirements for which the change scenarios have been evaluated, it has to be validated by measurements. For example, it has to be tested whether the Proxy component actually yields the performance that was predicted.
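A minimal measurement sketch, reusing the hypothetical ProxyService from step 3.6 and comparing against a predicted mean response time supplied by the performance prediction:

    class ResponseTimeValidator {
        // Measures the mean response time of the proxy over a number of calls, in milliseconds.
        static double measureMeanResponseTimeMs(ProxyService proxy, byte[] payload, int calls) {
            long totalNanos = 0;
            for (int i = 0; i < calls; i++) {
                long start = System.nanoTime();
                proxy.handleRequest(payload);
                totalNanos += System.nanoTime() - start;
            }
            return totalNanos / (double) calls / 1_000_000.0;
        }

        // True if the measured value stays within the given relative tolerance of the prediction.
        static boolean matchesPrediction(double measuredMs, double predictedMs, double tolerance) {
            return Math.abs(measuredMs - predictedMs) <= tolerance * predictedMs;
        }
    }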

Deploy System (3.8)

When the system architect has verified that the system meets the defined requirements, he can proceed by deploying the system. The method used for deployment depends heavily on the domain for which the system is implemented. It may be as simple as manually copying the system to the target platform and executing it there. In more complex scenarios, the deployment may be automated with scripts based on Make, Perl, Ant or other script-based approaches, where the system architect simply executes the script to deploy the system.