Jens Happe, Benjamin Klatt, Martin Küster, Fabian Brosig, Alexander Wert,
Simon Spinner, and Heiko Koziolek.
Getting the data.
In Modeling and Simulating Software Architectures - The
Palladio Approach, Ralf H. Reussner, Steffen Becker, Jens Happe, Robert
Heinrich, Anne Koziolek, Heiko Koziolek, Max Kramer, and Klaus Krogmann,
editors, chapter 6, pages 115-138. MIT Press, Cambridge, MA, October 2016.
[ bib | http ]
Fabian Gorsler, Fabian Brosig, and Samuel Kounev.
Performance queries for architecture-level performance models.
In Proceedings of the 5th ACM/SPEC International Conference on
Performance Engineering (ICPE 2014), Dublin, Ireland, 2014. ACM, New York,
NY, USA.
Acceptance Rate (Full Paper): 29%.
[ bib ]
Fabian Gorsler, Fabian Brosig, and Samuel Kounev.
Controlling the Palladio Bench using the Descartes Query Language.
In Proceedings of the Symposium on Software Performance: Joint Kieker/Palladio Days (KPDAYS 2013), Steffen Becker, Wilhelm Hasselbring, André van Hoorn, and Ralf Reussner, editors, number 1083, pages 109-118. CEUR-WS.org, Aachen, Germany, November 2013.
[ bib | http | .pdf | Abstract ]
The Palladio Bench is a tool to model, simulate and analyze Palladio Component Model (PCM) instances. However, for the Palladio Bench, no single interface to automate experiments or Application Programming Interface (API) to trigger the simulation of PCM instances and to extract performance prediction results is available. The Descartes Query Language (DQL) is a novel declarative query language that integrates different performance modeling and prediction techniques behind a unifying interface. Users benefit from the abstraction of specific tools to prepare and trigger performance predictions, less effort to obtain performance metrics of interest, and means to automate performance predictions. In this paper, we describe the realization of a DQL Connector for PCM and demonstrate the applicability of our approach in a case study.
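To make the idea of a unifying query interface concrete, the sketch below shows, in plain Python, how an automated prediction run might be scripted against such an interface. All names here (ToyQueryEngine, the connector callable, the metric names, the model path) are hypothetical placeholders chosen for illustration; they are not the actual DQL syntax or the PCM connector API described in the paper.

    # Illustrative sketch only: a toy query facade that hides tool-specific steps
    # (load model, trigger simulation, extract metrics) behind a single call.
    # The names are hypothetical and do not reflect the real DQL/PCM interfaces.

    class ToyQueryEngine:
        def __init__(self, connectors):
            # connectors maps a model type (e.g. "pcm") to a callable that takes
            # (model_path, service, metrics) and returns a {metric: value} dict.
            self.connectors = connectors

        def query(self, model_type, model_path, service, metrics):
            if model_type not in self.connectors:
                raise ValueError(f"no connector registered for {model_type!r}")
            return self.connectors[model_type](model_path, service, metrics)

    def fake_pcm_connector(model_path, service, metrics):
        # Stand-in for "run the simulation and collect the results".
        results = {"responseTime": 0.042, "utilization": 0.63}
        return {m: results[m] for m in metrics}

    engine = ToyQueryEngine({"pcm": fake_pcm_connector})
    print(engine.query("pcm", "shop.pcm", "checkout", ["responseTime"]))

The point of the sketch is the design idea the paper pursues: once predictions are reachable through one call, experiments can be scripted and repeated without touching tool-specific workflows.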
Fabian Brosig, Fabian Gorsler, Nikolaus Huber, and Samuel Kounev.
Evaluating Approaches for Performance Prediction in Virtualized
Environments (Short Paper).
In Proceedings of the IEEE 21st International Symposium on
Modeling, Analysis and Simulation of Computer and Telecommunication Systems
(MASCOTS 2013), San Francisco, USA, August 14-16, 2013.
[ bib | .pdf ]
Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel
Kounev.
S/T/A: Meta-Modeling Run-Time Adaptation in Component-Based System
Architectures.
In Proceedings of the 9th IEEE International Conference on
e-Business Engineering (ICEBE 2012), Hangzhou, China, September 9-11, 2012,
pages 70-77. IEEE Computer Society, Los Alamitos, CA, USA.
Acceptance Rate (Full Paper): 19.7% (26/132).
[ bib | DOI | http | .pdf | Abstract ]
Modern virtualized system environments usually host diverse applications of different parties and aim at utilizing resources efficiently while ensuring that quality-of-service requirements are continuously satisfied. In such scenarios, complex adaptations to changes in the system environment are still largely performed manually by humans. Over the past decade, autonomic self-adaptation techniques aiming to minimize human intervention have become increasingly popular. However, given that adaptation processes are usually highly system specific, it is a challenge to abstract from system details enabling the reuse of adaptation strategies. In this paper, we propose a novel modeling language (meta-model) providing means to describe system adaptation processes at the system architecture level in a generic, human-understandable and reusable way. We apply our approach to three different realistic contexts (dynamic resource allocation, software architecture optimization, and run-time adaptation planning) showing how the gap between complex manual adaptations and their autonomous execution can be closed by using a holistic model-based approach.
Fabian Brosig, Nikolaus Huber, and Samuel Kounev.
Modeling Parameter and Context Dependencies in Online
Architecture-Level Performance Models.
In Proceedings of the 15th ACM SIGSOFT International Symposium on Component Based Software Engineering (CBSE 2012), Bertinoro, Italy, June 26-28, 2012.
Acceptance Rate (Full Paper): 28.5%.
[ bib | http | .pdf | Abstract ]
Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments allowing to evaluate different usage profiles as well as system deployment and configuration options. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.
Nikolaus Huber, Fabian Brosig, and Samuel Kounev.
Modeling Dynamic Virtualized Resource Landscapes.
In Proceedings of the 8th ACM SIGSOFT International Conference
on the Quality of Software Architectures (QoSA 2012), Bertinoro, Italy, June
25-28, 2012, pages 81-90. ACM, New York, NY, USA.
Acceptance Rate (Full Paper): 25.6%.
[ bib | DOI | http | .pdf | Abstract ]
Modern data centers are subject to an increasing demand for flexibility. Increased flexibility and dynamics, however, also result in a higher system complexity. This complexity carries on to run-time resource management for Quality-of-Service (QoS) enforcement, rendering design-time approaches for QoS assurance inadequate. In this paper, we present a set of novel meta-models that can be used to describe the resource landscape, the architecture and resource layers of dynamic virtualized data center infrastructures, as well as their run-time adaptation and resource management aspects. With these meta-models we introduce new modeling concepts to improve model-based run-time QoS assurance. We evaluate our meta-models by modeling a representative virtualized service infrastructure and using these model instances for run-time resource allocation. The results demonstrate the benefits of the new meta-models and show how they can be used to improve model-based system adaptation and run-time resource management in dynamic virtualized data centers.
Daniel Funke, Fabian Brosig, and Michael Faber.
Towards Truthful Resource Reservation in Cloud Computing.
In Proceedings of the 6th International ICST Conference on
Performance Evaluation Methodologies and Tools (ValueTools 2012),
Cargèse, France, 2012.
[ bib | .pdf | Abstract ]
Prudent capacity planning to meet their clients' future computational needs is one of the major issues cloud computing providers face today. By offering resource reservations in advance, providers gain insight into the projected demand of their customers and can act accordingly. However, customers need to be given an incentive, e.g., discounts granted, to commit early to a provider and to honestly, i.e., truthfully reserve their predicted future resource requirements. Customers may reserve capacity deviating from their truly predicted demand, in order to exploit the mechanism for their own benefit, thereby causing futile costs for the provider. In this paper, we prove, using a game theoretic approach, that truthful reservation is the best, i.e., dominant strategy for customers if they are capable of making precise forecasts of their demands, and that deviations from truth-telling can be profitable for customers if their demand forecasts are uncertain.
Katja Gilly, Fabian Brosig, Ramon Nou, Samuel Kounev, and Carlos Juiz.
Online prediction: Four case studies.
In Resilience Assessment and Evaluation of Computing Systems,
K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII.
Springer-Verlag, Berlin, Heidelberg, 2012.
ISBN: 978-3-642-29031-2.
[ bib | http | .pdf | Abstract ]
Current computing systems are becoming increasingly complex in nature and exhibit large variations in workloads. These changing environments create challenges to the design of systems that can adapt themselves while maintaining desired Quality of Service (QoS), security, dependability, availability and other non-functional requirements. The next generation of resilient systems will be highly distributed, component-based and service-oriented. They will need to operate in unattended mode and possibly in hostile environments, will be composed of a large number of interchangeable components discoverable at run-time, and will have to run on a multitude of unknown and heterogeneous hardware and network platforms. These computer systems will adapt themselves to cope with changes in the operating conditions and to meet the service-level agreements with a minimum of resources. Changes in operating conditions include hardware and software failures, load variation and variations in user interaction with the system, including security attacks and overwhelming situations. Self-adaptation of such next-generation resilient systems can be achieved by first predicting online how these situations will unfold, based on observation of the current environment. This chapter focuses on the use of online prediction methods, techniques and tools for resilient systems. We survey online QoS adaptive models in several environments, such as grid environments, service-oriented architectures and ambient intelligence, using different approaches based on queueing networks, model checking and ontology engineering, among others.
Nikolaus Huber, Fabian Brosig, Nicholas Dingle, Kaustubh Joshi, and Samuel Kounev.
Providing Dependability and Performance in the Cloud: Case Studies.
In Resilience Assessment and Evaluation of Computing Systems,
K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII.
Springer-Verlag, Berlin, Heidelberg, 2012.
ISBN: 978-3-642-29031-2.
[ bib | http | .pdf ]
Nikolaus Huber, Marcel von Quast, Fabian Brosig, Michael Hauck, and Samuel
Kounev.
A Method for Experimental Analysis and Modeling of Virtualization
Performance Overhead.
In Cloud Computing and Services Science, Ivan Ivanov, Marten
van Sinderen, and Boris Shishkov, editors, Service Science: Research and
Innovations in the Service Economy, pages 353-370. Springer, New York, 2012.
[ bib | DOI | http | .pdf ]
Samuel Kounev, Nikolaus Huber, Simon Spinner, and Fabian Brosig.
Model-based techniques for performance engineering of business
information systems.
In Business Modeling and Software Design, Boris Shishkov,
editor, volume 109 of Lecture Notes in Business Information Processing
(LNBIP), pages 19-37. Springer-Verlag, Berlin, Heidelberg, 2012.
[ bib | http | .pdf | Abstract ]
With the increasing adoption of virtualization and the transition towards Cloud Computing platforms, modern business information systems are becoming increasingly complex and dynamic. This raises the challenge of guaranteeing system performance and scalability while at the same time ensuring efficient resource usage. In this paper, we present a historical perspective on the evolution of model-based performance engineering techniques for business information systems focusing on the major developments over the past several decades that have shaped the field. We survey the state-of-the-art on performance modeling and management approaches discussing the ongoing efforts in the community to increasingly bridge the gap between high-level business services and low level performance models. Finally, we wrap up with an outlook on the emergence of self-aware systems engineering as a new research area at the intersection of several computer science disciplines.
Samuel Kounev, Philipp Reinecke, Fabian Brosig, Jeremy T. Bradley, Kaustubh
Joshi, Vlastimil Babka, Anton Stefanek, and Stephen Gilmore.
Providing dependability and resilience in the cloud: Challenges and
opportunities.
In Resilience Assessment and Evaluation of Computing Systems,
K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII.
Springer-Verlag, Berlin, Heidelberg, 2012.
ISBN: 978-3-642-29031-2.
[ bib | http | .pdf | Abstract ]
Cloud Computing is a novel paradigm for providing data center resources as on demand services in a pay-as-you-go manner. It promises significant cost savings by making it possible to consolidate workloads and share infrastructure resources among multiple applications resulting in higher cost- and energy-efficiency. However, these benefits come at the cost of increased system complexity and dynamicity posing new challenges in providing service dependability and resilience for applications running in a Cloud environment. At the same time, the virtualization of physical resources, inherent in Cloud Computing, provides new opportunities for novel dependability and quality-of-service management techniques that can potentially improve system resilience. In this chapter, we first discuss in detail the challenges and opportunities introduced by the Cloud Computing paradigm. We then provide a review of the state-of-the-art on dependability and resilience management in Cloud environments, and conclude with an overview of emerging research directions.
Fabian Brosig, Nikolaus Huber, and Samuel Kounev.
Automated Extraction of Architecture-Level Performance
Models of Distributed Component-Based Systems.
In 26th IEEE/ACM International Conference on Automated Software Engineering (ASE 2011), Oread, Lawrence, Kansas, USA, November 2011.
Acceptance Rate (Full Paper): 14.7% (37/252).
[ bib | .pdf | Abstract ]
Modern service-oriented enterprise systems have increasingly complex and dynamic loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment and adapt the system configuration accordingly. Architecture-level performance models provide a powerful tool for performance prediction; however, current approaches to modeling the execution context of software components are not suitable for use at run-time. In this paper, we analyze the typical online performance prediction scenarios and propose a novel performance meta-model for expressing and resolving parameter and context dependencies, specifically designed for use in online scenarios. We motivate and validate our approach in the context of a realistic and representative online performance prediction scenario based on the SPECjEnterprise2010 standard benchmark.
Samuel Kounev, Fabian Brosig, and Nikolaus Huber.
Self-Aware QoS Management in Virtualized Infrastructures (Poster
Paper).
In 8th International Conference on Autonomic Computing (ICAC
2011), Karlsruhe, Germany, June 14-18, 2011.
[ bib | .pdf | Abstract ]
We present an overview of our work-in-progress and long-term research agenda aiming to develop a novel methodology for engineering of self-aware software systems. The latter will have built-in architecture-level QoS models enhanced to capture dynamic aspects of the system environment and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and QoS requirements are satisfied.
Nikolaus Huber, Fabian Brosig, and Samuel Kounev.
Model-based Self-Adaptive Resource Allocation in Virtualized
Environments.
In 6th International Symposium on Software Engineering for
Adaptive and Self-Managing Systems (SEAMS 2011), Waikiki, Honolulu, HI, USA,
May 23-24, 2011, pages 90-99. ACM, New York, NY, USA.
Acceptance Rate (Full Paper): 27% (21/76).
[ bib | DOI | http | .pdf | Abstract ]
The adoption of virtualization and Cloud Computing technologies promises a number of benefits such as increased flexibility, better energy efficiency and lower operating costs for IT systems. However, highly variable workloads make it challenging to provide quality-of-service guarantees while at the same time ensuring efficient resource utilization. To avoid violations of service-level agreements (SLAs) or inefficient resource usage, resource allocations have to be adapted continuously during operation to reflect changes in application workloads. In this paper, we present a novel approach to self-adaptive resource allocation in virtualized environments based on online architecture-level performance models. We present a detailed case study of a representative enterprise application, the new SPECjEnterprise2010 benchmark, deployed in a virtualized cluster environment. The case study serves as a proof-of-concept demonstrating the effectiveness and practical applicability of our approach.
Samuel Kounev, Konstantin Bender, Fabian Brosig, Nikolaus Huber, and Russell
Okamoto.
Automated Simulation-Based Capacity Planning for Enterprise Data
Fabrics.
In 4th International ICST Conference on Simulation Tools and
Techniques, Barcelona, Spain, March 21-25, 2011, pages 27-36. ICST,
Brussels, Belgium.
Acceptance Rate (Full Paper): 29.8% (23/77). ICST Best Paper Award.
[ bib | slides | .pdf | Abstract ]
Enterprise data fabrics are gaining increasing attention in many industry domains including financial services, telecommunications, transportation and health care. Providing a distributed, operational data platform sitting between application infrastructures and back-end data sources, enterprise data fabrics are designed for high performance and scalability. However, given the dynamics of modern applications, system sizing and capacity planning need to be done continuously during operation to ensure adequate quality-of-service and efficient resource utilization. While most products are shipped with performance monitoring and analysis tools, such tools are typically focused on low-level profiling and they lack support for performance prediction and capacity planning. In this paper, we present a novel case study of a representative enterprise data fabric, the GemFire EDF, and a simulation-based tool that we have developed for automated performance prediction and capacity planning. The tool, called Jewel, automates resource demand estimation, performance model generation, performance model analysis and results processing. We present an experimental evaluation of the tool demonstrating its effectiveness and practical applicability.
Fabian Brosig.
Online performance prediction with architecture-level performance
models.
In Software Engineering (Workshops) - Doctoral Symposium, February 21-25, 2011, Ralf Reussner, Alexander Pretschner, and Stefan Jähnichen, editors, volume 184 of Lecture Notes in Informatics (LNI), pages 279-284. GI, Bonn, Germany.
[ bib | .pdf | Abstract ]
Today's enterprise systems based on increasingly complex software architectures often exhibit poor performance and resource efficiency thus having high operating costs. This is due to the inability to predict at run-time the effect of changes in the system environment and adapt the system accordingly. We propose a new performance modeling approach that allows the prediction of performance and system resource utilization online during system operation. We use architecture-level performance models that capture the performance-relevant information of the software architecture, deployment, execution environment and workload. The models will be automatically maintained during operation. To derive performance predictions, we propose a tailorable model solving approach to provide flexibility in view of prediction accuracy and analysis overhead.
Nikolaus Huber, Marcel von Quast, Fabian Brosig, and Samuel Kounev.
Analysis of the Performance-Influencing Factors of Virtualization
Platforms.
In The 12th International Symposium on Distributed Objects,
Middleware, and Applications (DOA 2010), Crete, Greece, October 26, 2010.
Springer-Verlag.
Acceptance Rate (Full Paper): 33%.
[ bib | .pdf | Abstract ]
Nowadays, virtualization solutions are gaining increasing importance. By enabling the sharing of physical resources, thus making resource usage more efficient, they promise energy and cost savings. Additionally, virtualization is the key enabling technology for Cloud Computing and server consolidation. However, the effects of sharing resources on system performance are not yet well-understood. This makes performance prediction and performance management of services deployed in such dynamic systems very challenging. Because of the large variety of virtualization solutions, a generic approach to predict the performance influences of virtualization platforms is highly desirable. In this paper, we present a hierarchical model capturing the major performance-relevant factors of virtualization platforms. We then propose a general methodology to quantify the influence of the identified factors based on an empirical approach using benchmarks. Finally, we present a case study of Citrix XenServer 5.5, a state-of-the-art virtualization platform.
Samuel Kounev, Fabian Brosig, Nikolaus Huber, and Ralf Reussner.
Towards self-aware performance and resource management in modern
service-oriented systems.
In Proceedings of the 7th IEEE International Conference on Services Computing (SCC 2010), Miami, Florida, USA, July 5-10, 2010. IEEE Computer Society.
[ bib | .pdf | Abstract ]
Modern service-oriented systems have increasingly complex loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment (e.g., varying service workloads) and adapt the system configuration accordingly. In this paper, we describe a long-term vision and approach for designing systems with built-in self-aware performance and resource management capabilities. We advocate the use of architecture-level performance models extracted dynamically from the evolving system configuration and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and performance requirements are continuously satisfied.
Fabian Brosig, Samuel Kounev, and Klaus Krogmann.
Automated Extraction of Palladio Component Models from Running
Enterprise Java Applications.
In Proceedings of the 1st International Workshop on Run-time
mOdels for Self-managing Systems and Applications (ROSSA 2009). In
conjunction with the Fourth International Conference on Performance
Evaluation Methodologies and Tools (VALUETOOLS 2009), Pisa, Italy, 2009, pages 10:1-10:10. ACM, New York, NY, USA.
[ bib | .pdf | Abstract ]
Nowadays, software systems have to fulfill increasingly stringent requirements for performance and scalability. To ensure that a system meets its performance requirements during operation, the ability to predict its performance under different configurations and workloads is essential. Most performance analysis tools currently used in industry focus on monitoring the current system state. They provide low-level monitoring data without any performance prediction capabilities. For performance prediction, performance models are normally required. However, building predictive performance models manually requires a lot of time and effort. In this paper, we present a method for automated extraction of performance models of Java EE applications, based on monitoring data collected during operation. We extract instances of the Palladio Component Model (PCM) - a performance meta-model targeted at component-based systems. We evaluate the model extraction method in the context of a case study with a real-world enterprise application. Even though the extraction requires some manual intervention, the case study demonstrates that the existing gap between low-level monitoring data and high-level performance models can be closed.
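One building block that extraction approaches of this kind typically need is an estimate of per-request resource demands derived from coarse monitoring data. The minimal sketch below uses the textbook service demand law (demand = utilization / throughput); this is a generic approximation shown for illustration, not necessarily the estimation technique used in the paper.

    # Minimal sketch: approximate CPU demand per request via the service demand
    # law D = U / X, where U is average CPU utilization (0..1) and X is the
    # measured throughput (requests/s) over the same interval. This is a
    # generic textbook relation, not necessarily the paper's estimation method.

    def estimate_cpu_demand(cpu_utilization, throughput_per_s):
        if not (0.0 <= cpu_utilization <= 1.0):
            raise ValueError("utilization must be in [0, 1]")
        if throughput_per_s <= 0:
            raise ValueError("throughput must be positive")
        return cpu_utilization / throughput_per_s  # CPU-seconds per request

    # Example: 60% utilization at 150 req/s -> 0.004 s (4 ms) of CPU per request.
    print(estimate_cpu_demand(0.60, 150.0))

Such per-request demands are exactly the kind of parameter that an architecture-level performance model needs and that low-level monitoring tools do not report directly.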
Fabian Brosig, Philipp Meier, Steffen Becker, Anne Koziolek, Heiko Koziolek,
and Samuel Kounev.
Quantitative evaluation of model-driven performance analysis and
simulation of component-based architectures.
IEEE Transactions on Software Engineering, 41(2):157-175, February 2015.
[ bib | DOI | Abstract ]
During the last decade, researchers have proposed a number of model transformations enabling performance predictions. These transformations map performance-annotated software architecture models into stochastic models solved by analytical means or by simulation. However, so far, a detailed quantitative evaluation of the accuracy and efficiency of different transformations is missing, making it hard to select an adequate transformation for a given context. This paper provides an in-depth comparison and quantitative evaluation of representative model transformations to, e.g., Queueing Petri Nets and Layered Queueing Networks. The semantic gaps between typical source model abstractions and the different analysis techniques are revealed. The accuracy and efficiency of each transformation are evaluated by considering four case studies representing systems of different size and complexity. The presented results and insights gained from the evaluation help software architects and performance engineers to select the appropriate transformation for a given context, thus significantly improving the usability of model transformations for performance prediction.
Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel
Kounev.
Modeling Run-Time Adaptation at the System Architecture Level in
Dynamic Service-Oriented Environments.
Service Oriented Computing and Applications Journal (SOCA),
8(1):73-89, 2014, Springer London.
[ bib | DOI | .pdf ]
Nigel Thomas, Jeremy Bradley, William Knottenbelt, Samuel Kounev, Nikolaus
Huber, and Fabian Brosig.
Preface.
Electronic Notes in Theoretical Computer Science, 275:1-3, 2011, Elsevier Science Publishers B. V., Amsterdam, The Netherlands.
[ bib | DOI ]
Fabian Brosig, Samuel Kounev, and Charles Paclat.
Using WebLogic Diagnostics Framework to Enable Performance
Prediction for Java EE Applications.
Oracle Technology Network (OTN) Article, 2009.
[ bib | .html | Abstract ]
Throughout the system life cycle, the ability to predict a software system's performance under different configurations and workloads is highly valuable to ensure that the system meets its performance requirements. During the design phase, performance prediction helps to evaluate different design alternatives. At deployment time, it facilitates system sizing and capacity planning. During operation, predicting the effect of changes in the workload or in the system configuration is beneficial for run-time performance management. The alternative to performance prediction is to deploy the system in an environment reflecting the configuration of interest and conduct experiments measuring the system performance under the respective workloads. Such experiments, however, are normally very expensive and time-consuming and therefore often considered not to be economically viable. To enable performance prediction we need an abstraction of the real system that incorporates performance-relevant data, i.e., a performance model. Based on such a model, performance analysis can be carried out. Unfortunately, building predictive performance models manually requires a lot of time and effort. The model must be designed to reflect the abstract system structure and capture its performance-relevant aspects. In addition, model parameters like resource demands or system configuration parameters have to be determined. Given the costs of building performance models, techniques for automatic extraction of models based on observation of the system at run-time are highly desirable. During system development, such models can be exploited to evaluate the performance of system prototypes. During operation, an automatically extracted performance model can be applied for efficient and performance-aware resource management. For example, if one observes an increased user workload and assumes a steady workload growth rate, performance predictions help to determine when the system would reach its saturation point. This way, system operators can react to the changing workload before the system has failed to meet its performance objectives thus avoiding a violation of service level agreements (SLAs). Current performance analysis tools used in industry mostly focus on profiling and monitoring transaction response times and resource consumption. The tools often provide large amounts of low level data while important information needed for building performance models is missing, e.g., the resource demands of individual components. In this article, we present a method for automated extraction of performance models for Java EE applications during operation. We implemented the method in a tool prototype and evaluated its effectiveness in the context of a case study with an early prototype of the SPECjEnterprise2009 benchmark application which in the following we will refer to as SPECjEnterprise2009_pre. (SPECjEnterprise2009 is the successor benchmark of the SPECjAppServer2004 benchmark developed by the Standard Performance Evaluation Corp. [SPEC]; SPECjEnterprise is a trademark of SPEC. The SPECjEnterprise2009 results or findings in this publication have not been reviewed or accepted by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result.) The target Java EE platform we consider is Oracle WebLogic Server (WLS). The extraction is based on monitoring data that is collected during operation using the WebLogic Diagnostics Framework (WLDF). 
As a performance model, we selected the Palladio Component Model (PCM). PCM is a sophisticated performance modeling framework with mature tool support. In contrast to low level mathematical models like, e.g., queueing networks, PCM is a high-level UML-like design-oriented model that captures the performance-relevant aspects of the system architecture. This makes PCM models easy to understand and use by software developers. We begin by providing some background on the technologies we use, focusing on the WLDF monitoring framework and the PCM models. We then describe the model extraction method in more detail. Finally, we present the case study we conducted and conclude with a summary.
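As a back-of-envelope illustration of the saturation-point question raised in the article (when does a steadily growing workload exhaust a resource?), the sketch below combines the utilization law U = X * D with an assumed fixed monthly growth rate. This is a deliberately simplified calculation for illustration only; the article itself relies on PCM-based model predictions, and the example numbers and the 90% threshold are assumptions.

    import math

    # Back-of-envelope sketch: with utilization U = X * D (throughput times
    # per-request demand) and throughput growing by a fixed factor per month,
    # estimate how many months remain until a chosen utilization threshold.
    # Simplified illustration; not the article's PCM-based prediction approach.

    def months_until_saturation(throughput_per_s, demand_s, monthly_growth, u_max=0.9):
        u_now = throughput_per_s * demand_s
        if u_now >= u_max:
            return 0.0
        return math.log(u_max / u_now) / math.log(1.0 + monthly_growth)

    # Example (assumed numbers): 100 req/s, 5 ms CPU demand per request,
    # 10% monthly growth, 90% utilization treated as the practical limit.
    print(round(months_until_saturation(100.0, 0.005, 0.10), 1))  # ~6.2 months

A model-based approach refines this kind of estimate by accounting for the system architecture, deployment and contention effects rather than a single aggregate demand value.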