Title
Performance Modeling of Metric-Based Serverless Computing Platforms
Authors
Abstract
Analytical performance models are effective tools for ensuring that the quality of service and the cost of a service deployment remain desirable under different conditions and workloads. While various analytical performance models have been proposed for earlier paradigms in cloud computing, serverless computing lacks models that can provide developers with performance guarantees. In addition, most serverless computing platforms still require developers to specify configuration values for their deployments that affect both performance and cost, without giving them any direct and immediate feedback on the consequences. In previous studies, we built such performance models for steady-state and transient analysis of scale-per-request serverless computing platforms (e.g., AWS Lambda, Azure Functions, Google Cloud Functions) that give developers immediate feedback about the quality of service and cost of their deployments. In this work, we develop analytical performance models for the latest trend in serverless computing platforms, which use a concurrency value and the rate of requests per second for autoscaling decisions. Examples of such serverless computing platforms are Knative and Google Cloud Run (a managed Knative service by Google). The proposed performance model can help developers and providers predict the performance and cost of deployments under different configurations, which can help them tune the configuration toward the best outcome. We validate the applicability and accuracy of the proposed performance model through extensive real-world experimentation on Knative and show that our model accurately predicts the steady-state characteristics of a given workload with a minimal amount of data collection.
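To illustrate the kind of steady-state prediction such a model enables, the sketch below approximates a concurrency-based autoscaler (in the spirit of Knative's per-instance concurrency limit) with a simple M/M/c-style queueing approximation. This is a minimal, hypothetical illustration and not the model proposed in the paper; the function names, parameters, and the Erlang-C approximation are assumptions made for this example only.

```python
import math


def erlang_c(servers: int, offered_load: float) -> float:
    """Erlang C: probability that an arriving request has to queue
    in an M/M/c system with `servers` concurrent slots and offered
    load `offered_load` = arrival_rate * mean_service_time."""
    if offered_load >= servers:
        return 1.0  # unstable regime: queue grows without bound
    base = sum(offered_load ** k / math.factorial(k) for k in range(servers))
    top = (offered_load ** servers / math.factorial(servers)
           * servers / (servers - offered_load))
    return top / (base + top)


def steady_state_estimate(arrival_rate: float,
                          service_time: float,
                          concurrency_target: int) -> dict:
    """Rough steady-state estimate for a concurrency-based autoscaler.

    arrival_rate        requests per second (lambda)
    service_time        mean service time per request in seconds (1/mu)
    concurrency_target  assumed per-instance concurrency limit
                        (illustrative stand-in for a setting such as
                        Knative's containerConcurrency)
    """
    offered_load = arrival_rate * service_time                  # a = lambda / mu
    instances = max(1, math.ceil(offered_load / concurrency_target))
    servers = instances * concurrency_target                    # total concurrent slots
    p_wait = erlang_c(servers, offered_load)
    if offered_load < servers:
        # Mean queueing delay for M/M/c: Wq = C(c, a) / (c*mu - lambda)
        mean_wait = p_wait / (servers / service_time - arrival_rate)
    else:
        mean_wait = float("inf")
    return {
        "instances": instances,
        "utilization": offered_load / servers,
        "mean_response_time_s": mean_wait + service_time,
    }


# Example: 100 req/s, 50 ms mean service time, concurrency target of 10.
print(steady_state_estimate(arrival_rate=100, service_time=0.05,
                            concurrency_target=10))
```

In this toy setting, changing the assumed concurrency target trades off the number of instances (cost) against queueing delay (quality of service), which is the kind of configuration feedback the proposed analytical model aims to provide directly.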