Fleets: Scalable Services in a Factored Operating System
ABSTRACT

Current monolithic operating systems are designed for uniprocessor systems, and their architecture reflects this. The rise of multicore and cloud computing is drastically changing the tradeoffs in operating system design. The culture of scarce computational resources is being replaced with one of abundant cores, where spatial layout of processes supplants time multiplexing as the primary scheduling concern. Efforts to parallelize monolithic kernels have been difficult and only marginally successful, and new approaches are needed. This paper presents fleets, a novel way of constructing scalable OS services. With fleets, traditional OS services are factored out of the kernel and moved into user space, where they are further parallelized into a distributed set of concurrent, message-passing servers. We evaluate fleets within FOS, a new factored operating system designed from the ground up with scalability as the first-order design constraint. This report details the main design principles of fleets, and how the system architecture of FOS enables their construction. We describe the design and implementation of three critical fleets (network stack, page allocation, and file system) and compare with Linux. These comparisons show that FOS achieves superior performance and has better scalability than Linux for large multicores; at 32 cores, FOS’s page allocator performs 4.5× better than Linux, and FOS’s network stack performs 2.5× better. Additionally, we demonstrate how fleets can adapt to changing resource demand, and the importance of spatial scheduling for good performance in multicores.
Keywords: multicore, kernels, page allocator, network stack, factored operating system.
1. INTRODUCTION
Trends in multicore architectures point to an ever-increasing number of cores available on a single chip. Moore’s law predicts an exponential increase in integrated circuit density. In the past, this increase in circuit density has translated into higher single-stream performance, but recently single-stream performance has plateaued and industry has turned to adding cores to increase processor performance. In only a few years, multicores have gone from esoteric to commonplace: 12-core single-chip offerings are available from major vendors with research prototypes showing many more cores on the horizon, and 64-core chips are available from embedded vendors with 100-core chips available this year. These emerging architectures present new challenges to OS design, particularly in the management of a previously unprecedented number of computational cores.
Given exponential scaling, it will not be long before chips with hundreds of cores are standard, with thousands of cores following close behind. Recent research, though, has demonstrated problems with scaling monolithic OS designs. In monolithic OSs, OS code executes in the kernel on the same core that makes the OS service request. This has been shown to cause significant performance degradation for important applications compared to intelligent, application-level management of system services. Prior work also showed significant cache pollution caused by running OS code on the same core as the application, a problem that grows more severe if multicore trends lead to smaller per-core caches. Severe scalability problems have also been observed with OS microbenchmarks, even at only 16 cores.
A similar, independent trend can be seen in the growth of cloud computing. Rather than consolidating a large number of cores on a single chip, cloud computing consolidates cores within a data center. Current Infrastructure as a Service (IaaS) cloud management solutions require a cloud computing user to manage many virtual machines (VMs). Unfortunately, this presents a fractured and non-uniform view of resources to the programmer. For example, the user needs to manage communication differently depending on whether the communication is within a VM or between VMs. Also, the user of an IaaS system has to worry not only about constructing their application, but also about system concerns such as configuring and managing communicating operating systems. There is much commonality between constructing OSs for clouds and multicores, such as the management of an unprecedented number of computational cores and resources, heterogeneity, and the possible lack of widespread shared memory.
The primary question facing OS designers over the next ten years will be: What is the correct design of OS services that will scale up to hundreds or thousands of cores? We argue that the structure of monolithic OSs fundamentally limits how they can address this problem.
FOS is a new factored operating system designed for future multicores and cloud computers. In contrast to monolithic OS kernels, the structure of FOS brings scalability concerns to the forefront by decomposing the OS into services, and then parallelizing within each service. To facilitate the conscious consideration of scalability, FOS system services are moved into userspace and connected via messaging. In FOS, a set of cooperating servers which implement a single system service is called a fleet. This paper describes the implementation of several fleets in FOS, the design principles used in their construction, and the system architecture that makes it possible.
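The fleet structure can be sketched in miniature, here in Go, with channels standing in for FOS's messaging layer and goroutines standing in for servers running on separate cores. The names (`request`, `serve`) and the trivial doubling "service" are illustrative assumptions, not FOS's actual API:

```go
package main

import "fmt"

// request is a hypothetical message sent to a fleet member; the
// reply channel stands in for FOS's message-based return path.
type request struct {
	key   int
	reply chan int
}

// serve is one fleet member: a user-level server that holds its state
// privately and interacts with clients only through messages.
func serve(inbox chan request) {
	for req := range inbox {
		// A placeholder computation; a real member would implement
		// part of a system service (e.g. a network stack stage).
		req.reply <- req.key * 2
	}
}

func main() {
	inbox := make(chan request)
	// Spawn a fleet of three cooperating servers draining one mailbox.
	for i := 0; i < 3; i++ {
		go serve(inbox)
	}
	// A client issues a service request as a message, not a syscall.
	reply := make(chan int)
	inbox <- request{key: 21, reply: reply}
	fmt.Println(<-reply) // prints 42
}
```

The key property mirrored here is that clients never touch server state directly: every interaction is a message, which is what lets a fleet be distributed across cores (or machines) without shared-memory contention.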
Monolithic operating systems are designed assuming that computation is the limiting resource. However, the advent of multicore and clouds is providing abundant computational resources and changing the tradeoffs in OS design. New challenges have been introduced through heterogeneity of communication costs and the unprecedented scale of resources under management. In order to address these challenges, the OS must take into account the spatial layout of processes in the system and efficiently manage data and resource sharing. Furthermore, these abundant resources present opportunities to the OS to allocate computational resources to auxiliary purposes, which accelerate application execution, but do not run the application itself. FOS leverages these insights by factoring OS code out of the kernel and running it on cores separate from application code.
As programmer effort shifts from maximizing per-core performance to producing parallel systems, the OS must shift to assist the programmer. The goal of FOS is to provide a system architecture that enables and encourages the design of fleets. In this paper, we make the following contributions:
a. We present the design principles used in the construction of fleets: fleets are scalable, and designed with scalability foremost in mind; fleets are self-aware, and adapt their behavior to meet a changing environment; and fleets are elastic, and grow or shrink to match demand.
b. We present the design of three critical fleets in FOS (network stack, page allocator, and file system). Additionally, we present the first scaling numbers for these services, showing that FOS achieves superior performance compared to Linux on large multicores or when considering application parallelism.
c. We present studies of self-aware, elastic fleets, using a prototype file system fleet. These studies demonstrate the importance of good spatial scheduling in multicores and the ability of fleets to scale to meet demand.
2. SYSTEM ARCHITECTURE
Current OSs were designed in an era when computation was a limited resource. With the expected exponential increase in number of cores, the landscape has fundamentally changed. The question is no longer how to cope with limited resources, but rather how to make the most of the abundant computation available. FOS is designed with this in mind, and takes scalability and adaptability as the first-order design constraints. The goal of FOS is to design system services that scale from a few to thousands of cores.
FOS does this by factoring OS services into userspace processes, running on separate cores from the application. Traditional monolithic OSs time multiplex the OS and application, whereas FOS spatially multiplexes OS services (running as user processes) and application processes. In a regime of one to a few cores, time multiplexing is an obvious win because processor time is precious and communication costs are low. With large multicores and the cloud, however, processors are relatively abundant and communication costs begin to dominate. Running the OS on every core introduces unnecessary sharing of OS data and associated communication overheads; consolidating the OS to a few cores eliminates this. For applications that do not scale well to all available cores, factoring the OS is advantageous in order to accelerate the application. In this scenario, spatial scheduling (layout) becomes more important than time multiplexing within a single core.
However, even when the application could consume all cores to good purpose, running the OS on separate cores from the application provides a number of advantages. Cache pollution from the OS is reduced, and OS data is kept hot in the cache of those cores running the service. The OS and the application can run in parallel, pipelining OS and application processing, and often eliminating expensive context switches. Running services as independent threads of execution also enables extensive background optimizations and re-balancing. Although background operations exist in monolithic OSes, FOS facilitates such behavior since each service has its own thread of control.
In order to meet demand in a large multicore or cloud environment, reduce access latency to OS services, and increase throughput, it is necessary to further parallelize each service into a set of distributed, cooperating servers. We term such a service a fleet.
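As a toy illustration of such parallelization (again in Go, with hypothetical names; the real FOS page allocator is more sophisticated and, as later sections discuss, can rebalance and resize), a page-allocation service can be split into fleet members that each own a disjoint shard of free pages, so members serve requests without contending on shared allocator state:

```go
package main

import "fmt"

// allocReq asks a fleet member for one free page; the member replies
// with a page number from its private pool. Names are illustrative.
type allocReq struct {
	reply chan int
}

// member owns a disjoint shard of physical pages, so fleet members
// never share allocator state and need no locks between them.
func member(pages []int, inbox chan allocReq) {
	for req := range inbox {
		if len(pages) == 0 {
			req.reply <- -1 // shard exhausted; a real fleet would rebalance
			continue
		}
		req.reply <- pages[0]
		pages = pages[1:]
	}
}

func main() {
	// A two-member fleet, each seeded with its own page range.
	inboxes := []chan allocReq{make(chan allocReq), make(chan allocReq)}
	go member([]int{0, 1, 2, 3}, inboxes[0])
	go member([]int{4, 5, 6, 7}, inboxes[1])

	// Clients hash to a member, spreading load across the servers.
	for client := 0; client < 4; client++ {
		reply := make(chan int)
		inboxes[client%2] <- allocReq{reply: reply}
		fmt.Println(<-reply)
	}
}
```

Routing clients to members by a simple hash is one possible policy; the point is that throughput scales by adding members, each serving its shard independently.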