Browsing Posts tagged ESXi

Several people disable IPv6 support in ESXi for different reasons: because of the principle of least privilege (if you are not using a service, why keep it enabled?) or simply because they don’t want any IPv6 addresses in their network. On Linux and Windows systems it has become very difficult to disable, and Microsoft itself does not recommend disabling IPv6: “We do not recommend that you disable IPv6 or its components, or some Windows components may not function.” (https://support.microsoft.com/en-us/kb/929852)
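
On ESXi itself, IPv6 can be turned off from the host CLI. A minimal sketch, assuming ESXi 5.1 or later (the option name may differ on older releases, and a host reboot is required for the change to apply):

    # Show the current state of the VMkernel TCP/IP stack (includes IPv6 status)
    esxcli network ip get

    # Disable IPv6 support, then reboot the host to apply the change
    esxcli network ip set --ipv6-enabled=false
    reboot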

VMware best practices for virtual networking, starting with vSphere 5, usually recommend the vmxnet3 virtual NIC adapter for all VMs with a “recent” operating system: starting from NT 6.0 (Vista and Windows Server 2008) on the Windows side, Linux distributions that include this driver in the kernel, and virtual machine hardware version 7 and later. For those operating systems the choice is normally between the e1000 and the vmxnet3 adapter: the new virtual machine wizard suggests the e1000 for recent Windows systems, but only because this driver is included in the OS. Historically there were some […]
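
The adapter type ends up as a plain entry in the VM configuration file. A minimal sketch of the relevant .vmx lines (the device index and port group name are illustrative):

    ethernet0.present = "TRUE"
    ethernet0.virtualDev = "vmxnet3"      # "e1000" would select the emulated Intel adapter instead
    ethernet0.networkName = "VM Network"  # port group the vNIC connects to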

As you probably know, VMware vSphere 6.0 had a critical issue in its Change Block Tracking (CBT) implementation that can impact all incremental backups made with “VMware native” backup programs (all the agent-less implementations using the VMware VADP API). The issue occurs in the disklib area of CBT: the change tracking information for I/Os that happen during snapshot consolidation is lost. The main backup payload data is never lost and is always written to the backend device; however, the corresponding change tracking entries generated during the consolidation task are missed. […]
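
A workaround commonly suggested while waiting for the patch was to reset CBT on the affected VMs. A minimal sketch of the per-VM configuration parameters involved, assuming the VM is powered off and that scsi0:0 is the disk in question; re-enabling the options afterwards re-creates the *-ctk.vmdk tracking files, so the next incremental falls back to a full read:

    ctkEnabled = "FALSE"           # disable CBT at the VM level
    scsi0:0.ctkEnabled = "FALSE"   # disable CBT for the individual virtual disk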

In previous posts (see ESXi – Partitions layout of system disk and ESXi – More on partitions) I’ve described how the partition table is handled on the destination installation media of ESXi 5.x (both in the case of a hard disk and of a SD/USB disk). With the new ESXi 6.0 the partition table is similar in the case of a 1 or 2 GB destination device (like a previous SD media), but has some changes in the case of larger devices. Core partitions remain the same, with standard sizes:
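
The layout can be verified directly from the host shell with partedUtil. A minimal sketch (the device identifier is illustrative and must be taken from the disks directory):

    # List the available disk devices
    ls /vmfs/devices/disks/

    # Print the partition table of the boot device
    partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0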

In previous posts we have already seen how to add custom drivers to an ESXi installation ISO and how to use ImageBuilder to make a custom ESXi ISO, but in other cases you may need to define some custom settings during the installation or add custom VIB files. Booting from CD is not the only way: a custom ISO could also be used to boot from USB or from virtual devices (like the iDRAC or iLO). In case you need to build a custom ISO with custom options, this post could help you: How to Create Bootable ESXi […]
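
Custom installation settings are usually carried in a kickstart file referenced from the boot options. A minimal ks.cfg sketch (the password, hostname and NIC are placeholders):

    vmaccepteula
    rootpw VMware1!
    # Install on the first local disk, overwriting any existing VMFS
    install --firstdisk --overwritevmfs
    network --bootproto=dhcp --device=vmnic0
    reboot

    # Custom post-install settings run at first boot
    %firstboot --interpreter=busybox
    esxcli system hostname set --host=esxi-lab01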

If you plan to build a lab for vSphere 6.0 you can use all the approaches valid for vSphere 5.x (see also Building a vSphere 5 lab): usually a nested environment (or three physical systems, if you have them) as a common platform. Of course, the other way to test new products is just to use the VMware Hands on Labs, which you can also “break” or use in a different path compared to the one suggested by the guide. But having a local environment permits more flexibility and more time for testing or learning. The big […]
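
For the nested approach, the key requirement is exposing hardware-assisted virtualization to the guest ESXi VMs. A minimal sketch of the .vmx entries usually involved (assuming vSphere 5.1 or later virtual hardware; exact option names may vary between releases):

    guestOS = "vmkernel6"   # identify the guest as ESXi 6.x
    vhv.enable = "TRUE"     # expose Intel VT-x / AMD-V to the nested hypervisor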

After the new Virtual Volumes, the most anticipated (and “spoiled” in advance) feature of the new VMware vSphere 6.0 was probably the new vMotion, which allows VMs to move across the boundaries of vCenter Servers, Datacenter objects, Folder objects and also virtual switches and geographical areas. VMware vMotion was probably the first important virtualization-related feature that made VMware (and its products) important and, more importantly, that made the virtualization approach relevant: having VM mobility means handling planned downtime and also workload balancing. Now VMware reinvents vMotion to become more agile and more cloud oriented: breaking the boundaries and going outside […]

As you probably know, the vSphere Fault Tolerance feature had remained unchanged since the first version (in vSphere 4.x)… until now. With the recently announced vSphere 6.0 there is a new Multi-Processor FT (SMP-FT) feature that replaces the previous one and now brings continuous availability protection for VMs with up to 4 vCPUs! It’s not something new in virtual environments… several years ago Marathon announced everRun MX, which was the first such solution, but only for Citrix XenServer. Initial plans for this product included also a vSphere version, but the company was then acquired by […]

The new VMware vSphere Suite 6.0 brings several changes and new features, but the installation phases remain similar to those of the previous versions. The big changes are in the vCenter deployment types, both for the installable and the appliance version, and in the new VUM client… all the other parts of the installation remain similar, with only a few notes.

Some weeks ago I ran into an issue in a VMware vSphere 5.1 environment where vMotion stopped each operation at 14% with some strange errors. There are several VMware KB articles related to this issue, with different causes. The error message was simple: Timed out waiting for migration start request. So the closest KB article was: vMotion fails at 14% with the error: Timed out waiting for migration start request (2068817). But none of the possible solutions were applicable to my case.
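
When chasing this kind of failure, the first checks are usually connectivity between the vMotion VMkernel interfaces and the host logs. A minimal sketch from the ESXi shell (vmk1 and the destination IP are illustrative):

    # Verify the VMkernel interfaces and their addresses
    esxcli network ip interface ipv4 get

    # Test reachability of the destination host’s vMotion interface
    vmkping -I vmk1 192.168.100.12

    # Look for migration-related errors on source and destination hosts
    grep -i migrate /var/log/hostd.log | tail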

Fusion-io is a well-known company in host-side flash solutions that accelerate databases, virtualization, cloud computing, big data, and applications without changing your storage. Their In-Server Acceleration products are impressive (sometimes also in price) and can provide up to 10.24TB of flash to maximize performance for large data sets, plus solutions for blade servers (with the ioDrive2® Mezzanine). Thanks to Fusion-io Italy I’ve got the opportunity to test the Fusion-io 410GB ioScale, the smallest model of this product line (ioScale products use MLC technology and come in these capacities: 410GB, 825GB, 1650GB, […]
