Goodreads helps you follow your favorite authors. Be the first to learn about new releases!
Start by following Matt Liebowitz.
Showing 1-15 of 15
“In addition to the size of the scheduler cell, this approach might also limit the amount of cache as well as the memory bandwidth on multi-core processors with shared cache.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“A VM performs best when all its vCPUs are co-scheduled on distinct processors.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“ESXi CPU scheduler has the responsibility to ensure that all vCPUs assigned to a VM are scheduled to execute on the physical processors in a synchronized manner so that the guest OS running inside the VM is able to meet this requirement for its threads or processes.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“VMware ESXi has the ability to detect the physical processor topology as well as the relationships among processor sockets, cores, and the logical processors on those cores. The scheduler then makes use of this information to optimize the placement of VM vCPUs.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“VMware has given you as an administrator the ability to restrict the pCPUs that a VM's vCPUs can be scheduled on. This feature is called CPU scheduling affinity.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“The first is referred to as a pull migration, where an idle physical CPU initiates the migration. The second policy is referred to as a push migration, where the world becomes ready to be scheduled. These policies enable ESXi to achieve high CPU utilization and low latency scheduling.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“the VMM might not always be”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“In addition, new multi-core processors make the problem even worse because multiple cores are now being starved for memory during memory access operations.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“CPU affinity can lead to improved performance if the application has a larger cache footprint and can benefit from the additional cache offered by aggregating multiple pCPUs.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“Any time spent by the VMM on behalf of the VM is excluded from the progress calculation.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“At some point the skew will grow to exceed a threshold in milliseconds, and as a result, all of the vCPUs of the VM will be co-stopped and will be scheduled again only when there are enough physical CPUs available to schedule all vCPUs simultaneously.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“slowest vCPU and each of the other vCPUs.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“Distributed Locking with the Scheduler Cell”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“In other words, always allow the VM to be scheduled on at least one more pCPU than the number of vCPUs configured for the VM.”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
“only the vCPUs that advanced too much are individually stopped,”
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads
― VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads




