Authors: Muhtaroğlu, Nitel; Arı, İsmail; Kolcu, Birkan
Title: Democratization of HPC cloud services with automated parallel solvers and application containers
Type: Conference paper
Journal: Concurrency and Computation: Practice and Experience (ISSN 1532-0626), Volume 30, Issue 21
Date issued: 2018-11-10
Date accessioned: 2019-02-11
Date available: 2019-02-11
Handle: http://hdl.handle.net/10679/6161
DOI: https://doi.org/10.1002/cpe.4782
Web of Science ID: 000447267900013
Scopus ID: 2-s2.0-85052437551
Language: English (eng)
Access rights: info:eu-repo/semantics/restrictedAccess
Keywords: Condition number; Direct solver; Docker; Finite element analysis; Hadoop; HPC-as-a-Service; Iterative solver; Virtual machine
Abstract: In this paper, we investigate several design choices for HPC services at different layers of the cloud computing architecture to simplify and broaden their use cases. We start with the platform-as-a-service (PaaS) layer and compare direct and iterative parallel linear equation solvers. We observe that several matrix properties, which can be identified before starting long-running solvers, can help HPC services automatically select the amount of computing resources per job, such that job latency is minimized and overall job throughput is maximized. As a proof of concept, we take classical problems in structural mechanics and mesh them at increasing granularities, leading to a range of matrix sizes, the largest having 1 billion non-zero elements. In addition to matrix size, we take into account matrix condition numbers, preconditioning effects, and solver types, and execute these finite element analyses (FEA) on an IBM HPC cluster. Next, we focus on the infrastructure-as-a-service (IaaS) layer and explore HPC application performance, load isolation, and deployment issues using application containers (Docker), while also comparing them to physical and virtual machines (VM) on a public cloud.
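Note: The PaaS idea summarized in the abstract is that matrix properties available before the solve (size, non-zero count, an estimate of the condition number, symmetry) can drive an automatic choice of solver type and per-job core count. The sketch below is a minimal illustration of such a selection heuristic, not the decision rules used in the paper; the thresholds and the helper names (MatrixProfile, select_solver) are hypothetical.

# Hypothetical heuristic: pick a solver, preconditioner, and parallel job size
# from matrix properties known before the long-running solve starts.
# All thresholds below are illustrative assumptions, not the paper's rules.

from dataclasses import dataclass


@dataclass
class MatrixProfile:
    n_rows: int            # number of unknowns in the FEA system
    nnz: int               # non-zero entries in the sparse stiffness matrix
    cond_estimate: float   # rough condition-number estimate
    symmetric_pd: bool     # symmetric positive definite (typical for FEA)


def select_solver(profile: MatrixProfile, max_cores: int = 64) -> dict:
    """Return a solver type, preconditioner, and core count for one job."""
    # Well-conditioned SPD systems: an iterative Krylov solver (e.g. CG)
    # with a cheap preconditioner tends to give low latency per core.
    if profile.symmetric_pd and profile.cond_estimate < 1e6:
        solver, precond = "cg", "jacobi"
    # Ill-conditioned or non-SPD systems: fall back to a sparse direct
    # solver, which needs more memory but is robust to conditioning.
    else:
        solver, precond = "sparse_lu", None

    # Scale the per-job core count with problem size, capped by the share of
    # the cluster reserved for one job so overall throughput is preserved.
    cores = min(max_cores, max(1, profile.nnz // 50_000_000))
    return {"solver": solver, "preconditioner": precond, "cores": cores}


if __name__ == "__main__":
    # Example in the spirit of the largest mesh in the study
    # (about 1 billion non-zero elements).
    big = MatrixProfile(n_rows=50_000_000, nnz=1_000_000_000,
                        cond_estimate=1e8, symmetric_pd=True)
    print(select_solver(big))  # ill-conditioned: direct solver, 20 cores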