System Goodput (GS): A Modeling and Simulation Approach to Refute Current Thinking Regarding System Level Quality of Service
This dissertation takes a modeling and simulation approach to determining whether distributed computing architectures (e.g., Cloud Computing) require state-of-the-art servers to ensure top performance, and whether alternate approaches can optimize Quality of Service by reducing operating costs while maintaining high overall system performance.

The author first investigated the origins of Cloud Computing to confirm that the established model of distributed computing architectures still applies to the Cloud Computing business model. After establishing that Cloud Computing is in fact a new iteration of an existing architecture, the author conducted a series of modeling and simulation experiments using the OPNET Modeler system dynamics tool to evaluate whether variations in the server infrastructure alter the overall system performance of a distributed computing environment. This modeling exercise compared state-of-the-art commodity Information Technology (IT) servers to servers meeting the Advanced Telecommunications Computing Architecture (AdvancedTCA or ATCA) open standard, which are generally at least one generation behind commodity servers on component-level performance benchmarks.

After modeling an enterprise IT environment and simulating network traffic with the OPNET Modeler tool, the author concluded that using AdvancedTCA servers for the consolidation effort causes no system-level performance degradation, based on ANOVA with Tukey post-hoc tests and Kruskal-Wallis analysis of the simulation results. To conduct this comparison, the author developed a system-level performance benchmark, System Goodput (GS), representing end-to-end performance of services, which is a more appropriate measure of the performance of distributed systems such as Cloud Computing. The analysis of the data showed that individual component benchmarks are not an accurate predictor of system-level performance.
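The statistical comparison described above can be sketched in miniature. The snippet below is an illustrative, self-contained implementation of a one-way ANOVA F-statistic and a Kruskal-Wallis H-statistic applied to two hypothetical samples of System Goodput measurements; the sample values and group names are invented for illustration and are not data from the study, and the dissertation's actual analysis used multiple factors and Tukey post-hoc tests beyond this sketch.

```python
# Illustrative sketch: comparing hypothetical System Goodput (GS) samples
# from two simulated server configurations, in the spirit of the
# ANOVA / Kruskal-Wallis analysis described in the abstract.
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F-statistic across k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def kruskal_h(groups):
    """Kruskal-Wallis H-statistic (ties get average ranks; no tie correction)."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    rank_sums = [0.0] * len(groups)
    n = len(pooled)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1                      # span of tied values
        avg_rank = (i + 1 + j) / 2      # average of 1-based ranks i+1..j
        for _, gi in pooled[i:j]:
            rank_sums[gi] += avg_rank
        i = j
    return (12 / (n * (n + 1))
            * sum(r * r / len(g) for r, g in zip(rank_sums, groups))
            - 3 * (n + 1))

if __name__ == "__main__":
    # Invented GS samples (e.g., Mbit/s of useful end-to-end throughput).
    commodity_gs = [94.1, 95.3, 93.8, 94.9, 95.0]
    atca_gs = [94.0, 95.1, 93.9, 94.7, 94.8]
    print("F =", anova_f([commodity_gs, atca_gs]))
    print("H =", kruskal_h([commodity_gs, atca_gs]))
```

A small F (relative to the critical value for the group/sample sizes) and a small H would be consistent with the dissertation's conclusion that the two server classes show no system-level performance difference; in practice one would compute p-values, for example with `scipy.stats.f_oneway` and `scipy.stats.kruskal`.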
After establishing that using slower servers (e.g., ATCA) does not affect overall system performance in a Cloud Computing environment, the author developed a model for optimizing system-level Quality of Service (QoS) for Cloud Computing infrastructures by relying on the more rugged ATCA servers to extend the service life of a Cloud Computing environment, resulting in a much lower Total Ownership Cost (TOC) for the Cloud Computing infrastructure provider.