Please use this identifier to cite or link to this item:
http://dspace.cityu.edu.hk/handle/2031/8702
Title: Performance Evaluation of Resource Allocation Policies on the Kubernetes Container Management Framework
Authors: Vasudevan, Varun
Department: Department of Computer Science
Issue Date: 2016
Supervisor: Dr. Xu, Hong Henry; First Reader: Dr. Xue, Chun Jason; Second Reader: Prof. Li, Qing
Abstract: Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for the deployment, maintenance, and scaling of applications. Its scheduling is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. Certain workloads, such as jobs that process large data sets, perform better when a particular scheduling policy is used. The objective of this project is to establish the comparative advantages and use cases of the Dominant Resource Fairness (DRF) scheduling policy (a minimal sketch of the DRF allocation rule is included below). First, Docker containers are set up for applications that execute a particular job or task. After understanding how to use Docker containers, Kubernetes is set up to orchestrate and manage them. Kubernetes is then used to provision and manage a cluster on Amazon Web Services' Elastic Compute Cloud (EC2) service. The containers, which run atop EC2 virtual machines (VMs) and Kubernetes, are scheduled using various scheduling policies. With the help of monitoring and visualization tools such as Grafana and InfluxDB, the usage patterns of system resources such as CPU utilization and memory utilization are collected and recorded in real time. By monitoring both the CPU and memory utilization of the cluster, along with the running time of the workloads, the efficacy of the different scheduling policies can be determined. It is observed that when the dominant resource of a workload is requested from the scheduler in larger quantities, the workload has access to the resources it needs, its execution time is significantly reduced, and almost no resources in the cluster sit idle. This implies that, for the workloads examined in this project, manually provisioning the dominant resource from the cluster scheduler yields better results than the native scheduling policy of Kubernetes.
Appears in Collections: Computer Science - Undergraduate Final Year Projects
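
For reference, the Dominant Resource Fairness policy evaluated in this project allocates to whichever user currently has the smallest dominant share, that is, the largest fraction of any single resource the user holds. The Python sketch below illustrates only that allocation rule; the cluster capacity, user names, and per-task demands are hypothetical figures chosen for illustration and are not taken from the project.

```python
"""Minimal sketch of the Dominant Resource Fairness (DRF) allocation rule.

Capacity and per-task demands are hypothetical, not figures from the project.
"""

# Hypothetical cluster capacity and per-task demands of two users.
CAPACITY = {"cpu": 9.0, "mem": 18.0}   # 9 CPUs, 18 GB of memory
DEMANDS = {
    "A": {"cpu": 1.0, "mem": 4.0},     # memory-dominant tasks
    "B": {"cpu": 3.0, "mem": 1.0},     # CPU-dominant tasks
}


def dominant_share(allocation, capacity):
    """The dominant share is the largest fraction of any single resource a user holds."""
    return max(allocation[r] / capacity[r] for r in capacity)


def drf_allocate(capacity, demands, max_tasks=1000):
    """Repeatedly launch one task for the least-served user whose demand still fits."""
    allocation = {u: {r: 0.0 for r in capacity} for u in demands}
    used = {r: 0.0 for r in capacity}
    for _ in range(max_tasks):
        # Consider users in order of increasing dominant share.
        for user in sorted(demands, key=lambda u: dominant_share(allocation[u], capacity)):
            demand = demands[user]
            if all(used[r] + demand[r] <= capacity[r] for r in capacity):
                for r in capacity:
                    used[r] += demand[r]
                    allocation[user][r] += demand[r]
                break
        else:
            return allocation  # no user's next task fits; the cluster is saturated
    return allocation


if __name__ == "__main__":
    result = drf_allocate(CAPACITY, DEMANDS)
    for user, alloc in result.items():
        print(user, alloc, f"dominant share = {dominant_share(alloc, CAPACITY):.2f}")
```

With these hypothetical demands the rule launches three tasks for user A and two for user B, equalizing their dominant shares at 2/3 and leaving the CPU fully utilized.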
Files in This Item:
File | Size | Format
---|---|---
fulltext.html | 145 B | HTML