The x86 architecture did not support either the tagging of TLB entries or the software management of the TLB. As a result, the address-space switch required when the hypervisor activates a different OS forces a complete TLB flush, which has a negative impact on performance. The solution adopted was to load Xen in a 64 MB segment at the top of each address space and to delegate the management of hardware page tables to the guest OS, with minimal intervention from Xen. The 64 MB region occupied by Xen at the top of every address space is neither accessible nor re-mappable by the guest OS. When a new address space is created, the guest OS allocates and initializes a page from its own memory, registers it with Xen, and relinquishes control of write operations to the hypervisor. Thus, a guest OS can map only pages it owns. On the other hand, a guest OS has the ability to batch multiple page-update requests to improve performance. A similar strategy is used for segmentation.

The x86 Intel architecture supports four protection rings, or privilege levels; virtually all OS kernels run at level 0, the most privileged one, and applications at level 3. In Xen the hypervisor runs at level 0, the guest OS at level 1, and applications at level 3. Applications make system calls using so-called hypercalls processed by Xen; privileged instructions issued by a guest OS are paravirtualized and must be validated by Xen. When a guest OS attempts to execute a privileged instruction directly, the instruction fails silently.

Memory is statically partitioned between domains to provide strong isolation. To adjust domain memory, XenoLinux implements a balloon driver, which passes pages between Xen and its own page allocator. For the sake of efficiency, page faults are handled directly by the guest OS. Xen schedules individual domains using the Borrowed Virtual Time scheduling algorithm discussed in Section 9.14.
In summary, the paravirtualization strategies of the original x86 Xen implementation are as follows. A guest OS has direct access to page tables and handles page faults directly for efficiency; page-table updates are batched for performance and validated by Xen for safety. A domain may be allocated discontinuous pages, and XenoLinux implements a balloon driver to adjust domain memory. A guest OS runs at a lower privilege level, in ring 1, while Xen runs in ring 0. A guest OS must register with Xen a description table with the addresses of its exception handlers, validated in advance; to increase efficiency, a guest OS must install a "fast" handler for system calls. A lightweight event system replaces hardware interrupts: synchronous system calls from a domain to Xen use hypercalls, and notifications are delivered using the asynchronous event system. A guest OS may run multiple applications. Data is transferred using asynchronous I/O rings. Only Dom0 has direct access to IDE and SCSI disks; all other domains access persistent storage through the Virtual Block Device (VBD) abstraction.