Apr. 13th, 2020

VMware vSphere Performance: Designing CPU, Memory, Storage, and Networking for Performance-Intensive Workloads, 1st Edition
by Matt Liebowitz, Christopher Kusek, and Rynardt Spies
Following up on an investigation:
If a cluster contains hosts with different CPU clock speeds, you can run into a problem with the Read Time Stamp Counter (RDTSC).
The problem shows up as a sharp drop in performance.
It is described here:
https://communities.vmware.com/thread/154837
There is a known problem with RDTSC virtualization. By default, VMware virtualizes RDTSC, but the "monitor_control.virtual_rdtsc" option allows you to disable RDTSC interception to improve time-measurement resolution in the VM. Disabling RDTSC virtualization may cause the guest system to hang at boot, as mentioned here:

http://www.vmware.com/pdf/WS6_Performance_Tuning_and_Benchmarking.pdf
http://www.vmware.com/pdf/vmware_timekeeping.pdf

Guest Windows hangs at boot because the HAL timer initialization functions (HalpPmTimerScaleTimers, HalpScaleTimers) set the TSC to zero several times in order to use its absolute value for time calculations, instead of simply calculating the difference without resetting the TSC. If RDTSC is virtualized, it returns a relatively small value because WRMSR (used to set the TSC to zero) is virtualized too. If RDTSC is not virtualized, the guest system receives the host TSC value, which is usually very large and causes a divide overflow.
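To make the overflow concrete, here is a small sketch (my own illustration, not actual HAL code): the x86 32-bit DIV instruction divides a 64-bit dividend by a 32-bit divisor and raises a #DE fault when the quotient does not fit in 32 bits, which is exactly what a huge host TSC value can provoke during timer calibration.

```python
# Illustrative model of the x86 32-bit DIV instruction: EDX:EAX / r32.
# It faults (#DE) not only on division by zero, but also when the
# quotient does not fit into 32 bits.

def x86_div64_by_32(dividend: int, divisor: int) -> int:
    if divisor == 0:
        raise ZeroDivisionError("#DE: divide by zero")
    quotient = dividend // divisor
    if quotient >= 2**32:
        raise OverflowError("#DE: quotient does not fit in 32 bits")
    return quotient

pm_ticks = 35_795                    # ~10 ms of the 3.579545 MHz ACPI PM timer
guest_tsc = 50_000_000               # small value: the counter was just zeroed
host_tsc = 3_000_000_000 * 86_400    # ~1 day of host uptime at 3 GHz

x86_div64_by_32(guest_tsc, pm_ticks)     # fine: quotient fits in 32 bits
try:
    x86_div64_by_32(host_tsc, pm_ticks)  # the "very big" host TSC overflows
except OverflowError as exc:
    print(exc)
```

The constants are illustrative, but the failure mode matches the description: a zeroed (virtualized) TSC divides cleanly, while a raw host TSC accumulated over hours of uptime pushes the quotient past 32 bits.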

A recommended workaround is to start the guest system with RDTSC virtualized, wait until it boots, suspend it, disable RDTSC virtualization, then resume the VM. Since the TSC is zeroed only a few times at boot, the guest can successfully use host TSC values later.
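In .vmx terms, the workaround toggles a single option between the suspend and the resume (the option name comes from the thread above; edit the file only while the VM is not running):

```
# .vmx fragment: stop intercepting RDTSC so the guest reads the host TSC
# directly. Per the workaround, set this only after the guest has booted.
monitor_control.virtual_rdtsc = "FALSE"
```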

The root cause of the problem is explained here:
"Yes, my old laptop is several times more powerful than your production server" (article in Russian)
https://habr.com/ru/post/496612/
"Slumming it in Microsoft Azure, or hosting websites for a buck a month: part 1" (in Russian)
https://masyan.ru/2019/02/websites-in-azure-staticwebsites-cdn/

"Serverless in Microsoft Azure, or Telegram bots in Python on Azure Functions" (in Russian)
https://masyan.ru/2019/10/serverless-azure-functions-telegram-python-bots/
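The article above is in Russian, but the heart of such a bot is tiny: parse the Telegram webhook update and answer with a sendMessage payload. Below is a minimal, framework-free sketch of that shaping (the update fields and the "method" reply trick follow the public Telegram Bot API; the echo logic and function name are my own illustration, not the article's code):

```python
import json

def handle_update(body: str) -> dict:
    """Turn a Telegram webhook update into a sendMessage reply (echo bot)."""
    update = json.loads(body)
    message = update.get("message", {})
    chat_id = message.get("chat", {}).get("id")
    text = message.get("text", "")
    # Returning "method" in the webhook HTTP response lets Telegram deliver
    # the reply without a separate outbound call to api.telegram.org.
    return {"method": "sendMessage", "chat_id": chat_id, "text": f"Echo: {text}"}

sample = json.dumps({"message": {"chat": {"id": 42}, "text": "hi"}})
print(handle_update(sample))
```

In an Azure Functions HTTP trigger, a handler like this would receive the request body and return the dict serialized as the JSON response; the parsing logic itself stays the same.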
VAAI and the Unlimited VMs per Datastore Urban Myth
One of the oldest debates in VMware lore is “How many virtual machines should I place on each datastore?” For this discussion, the context is block storage (as opposed to NFS). There were all sorts of opinions as well as technical constraints to be considered. There was the tried and true rule of thumb answer of 10-15-20 which has more than stood the test of time. The best qualified answer was usually: “Whatever fits best for your consolidated environment” which translates to “it depends” and an invoice in consulting language.
http://www.boche.net/blog/2013/02/28/vaai-and-the-unlimited-vms-per-datastore-urban-myth/

Russian translation:
"The myth of unlimited VMs per VMFS datastore with VAAI"
https://vmind.ru/2013/03/12/myth-unlimited-vm-vmfs-datastore-vaai/

Additional links from the comments:
I have documented them at our blog site,
http://www.purestorage.com/blog/virtualization-and-flash-blog-post-3/
and at
http://www.purestorage.com/blog/1000-vms-demo-vmworld-2011/

The question of VM-per-datastore limits is not that simple; the limits still show up in vendor documents:

However, in most circumstances and environments, a target of 15 to 25 virtual machines per datastore is the conservative recommendation. By maintaining a smaller number of virtual machines per datastore, the potential for I/O contention is greatly reduced, resulting in more consistent performance across the environment.
https://www.dellemc.com/sl-si/collaterals/unauth/technical-guides-support-information/PowerVault_ME4_Series_and_VMware_vSphere.pdf

The situation is described in more detail in a slightly newer article:
Understanding VMware ESXi Queuing and the FlashArray
https://www.codyhosterman.com/2017/02/understanding-vmware-esxi-queuing-and-the-flasharray/

and
Setting the Maximum Outstanding Disk Requests for virtual machines (1268)
https://kb.vmware.com/s/article/1268
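The reason the per-datastore VM count still matters is simple queueing arithmetic: the per-LUN device queue depth (and the outstanding-requests limit from the KB article above) is shared by every busy VM on that datastore. A back-of-the-envelope sketch, using illustrative default numbers rather than measured ones:

```python
def outstanding_io_per_vm(device_queue_depth: int, busy_vms: int) -> float:
    """Fair share of the per-LUN queue each busy VM gets; beyond it,
    I/O waits in the hypervisor kernel queue instead of going to the array."""
    return device_queue_depth / busy_vms

# A common HBA device queue depth of 32 shared by 16 busy VMs leaves
# only 2 outstanding I/Os per VM -- queue contention in a nutshell.
print(outstanding_io_per_vm(32, 16))
```

This is also why the 15-25 VMs-per-datastore guidance above is conservative rather than absolute: with fewer simultaneously busy VMs, or a deeper device queue, the per-VM share goes up accordingly.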
KubeAcademy is a free, product-agnostic Kubernetes and cloud native technology education platform.
https://kube.academy/
Upgrade from ESXi 6.7 to 7.0 ESXi Free
I had this question in the comments on one of the posts on the blog, and I thought that many users of the ESXi Free version might be interested. In fact, yesterday someone asked on Twitter whether an ESXi 7.0 Free version would be available. I checked, and it wasn't. A 404 page at VMware made me think that they were working on it :-). Today it is, and you can download ESXi 7.0 Free here. Anyway, today the question was whether we can upgrade from ESXi 6.7 to 7.0 with a free license.

The reply is, absolutely. Even the free version of ESXi 6.x can be upgraded very easily to ESXi 7.0 Free version, and in this post, we'll show you how.
https://www.vladan.fr/upgrade-from-6-7-to-7-0-esxi-free/
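For reference, the command-line shape of that upgrade via VMware's public online depot (the depot URL is VMware's documented one; the image-profile name below is only an example, so list the available profiles first and substitute the one you actually want):

```
# Allow the host to reach the online depot:
esxcli network firewall ruleset set -e true -r httpClient

# List the 7.0 image profiles available in the depot:
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

# Upgrade to the chosen profile (example name -- substitute a real one):
esxcli software profile update -p ESXi-7.0.0-15843807-standard \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
```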
Disk partition layout for ESXi in vSphere 7

However, in the current release there are a bunch of changes, and the layout is no longer the same.

The new layout is more compact. We still have boot banks, but their sizing has changed: both boot-bank partitions are now 500 MB each.

ESXi 7.0 requires a boot disk of at least 8 GB for USB or SD devices. Although an 8 GB USB or SD device is sufficient for a minimal installation, you should use a larger device; the additional space is used for an expanded core dump file. The screenshots in the linked post show an ESXi instance installed on this device type.

Other device types, such as HDD, SSD, or NVMe, require at least 32 GB. When booting from a local disk, SAN, or iSCSI LUN, a 32 GB disk is required to allow for the creation of system storage volumes, which include a boot partition, boot banks, and a VMFS-L based ESX-OSData volume.

https://blogs.virtualmaestro.in/2020/04/disk-partition-layout-for-esxi-in.html?m=1

robopet3
Powered by Dreamwidth Studios