During an array migration you can usually rely on application-level features to move your data between the old and the new array. For example, with VMware vSphere you can use Storage vMotion (now available also in the lower editions of vSphere 5.1), and with Microsoft Hyper-V 3 you can use the new Storage Live Migration feature.
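As a reference, here is a minimal sketch of how a storage relocation can be driven through the vSphere API with pyVmomi; the vCenter address, credentials, VM and datastore names are just placeholders:

```python
# Minimal sketch: move a VM's files to another datastore via the vSphere API.
# Host, credentials, VM and datastore names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "MyVM")
ds = find_by_name(vim.Datastore, "NewDatastore")

# RelocateVM_Task with only a target datastore performs a storage migration
# (a Storage vMotion when the VM is powered on and licensed for it).
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds))

Disconnect(si)
```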
But in some cases you may also have an option at the storage level. For example, to migrate between two Dell EqualLogic arrays (even different models) you can use the functions embedded in the arrays.
This was a guest post on vDestination.
Usually (especially in a country like mine) we talk about the digital divide as something that brings inequity, or at least unequal opportunity, between different groups. But this discussion is mainly focused on connectivity aspects and on how access to broadband networks can be limited for some people. Connectivity, however, is only one aspect, and there are others that must be considered!
Building a vCloud lab is interesting for increasing your skills and knowledge, running some tests and, of course, preparing for the VCAP-CIA exam. If you build it on a nested ESXi environment you will not have too many problems (except on old hardware that may not support complete ESXi nesting with vSphere 5.1).
A VMware Workstation environment (of course with enough memory and resources) can be more fun and portable, but it can also be more difficult if you plan to use the vCloud Director virtual appliance.
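For reference, on vSphere 5.1 nested hardware virtualization is enabled per VM (on 5.0 it was the host-wide vhv.allow option); a minimal sketch of the relevant .vmx lines, assuming an ESXi 5.x guest:

```
# Nested ESXi 5.x VM: expose hardware virtualization (needs Intel VT-x/EPT or AMD-V/RVI)
guestOS = "vmkernel5"
vhv.enable = "TRUE"
```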
Recently I ran into a strange issue during a cold storage migration using the Migrate function of vCenter Server.
This was the scenario: a free ESXi hypervisor with some VMs, to which a full license (Essentials Plus) was added, and the host was then added to a vCenter Server. For compatibility matrix reasons version 4.1 was used (the storage could also work with vSphere 5.0, but that was not officially supported). Also, the free hypervisor was originally built with version 4.0 and upgraded (one year ago) to version 4.1.
This was the issue: after the cold storage migration, all the VMs built when the ESXi host was on version 4.0 had their MAC addresses changed. There was no issue with the VMs built after ESXi was upgraded to 4.1 (but still in free mode).
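One possible workaround (my own suggestion, to be tested in your environment) is to pin the MAC address statically in the .vmx file of the affected VMs, so that it can no longer be regenerated:

```
# Pin the vNIC MAC address so it survives migrations and host changes
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:AA:01"
# Static MACs must stay in the VMware range 00:50:56:00:00:00 - 00:50:56:3F:FF:FF
```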
Some months ago I wrote about the VMware Hands-on Labs and how labs are important for testing and learning about a product (also from other vendors). Events like VMworld, Microsoft TechEd, Dell Storage Forum, HP Discover, … are a good opportunity to get access to those kinds of labs.
Of course you can also build your own labs (as described in this post), but this requires more effort. A good alternative is to use online, on-demand labs. For VMware the online version is still in beta, but for Microsoft this kind of tool has been a reality for several years under the name Virtual Labs (available in the TechNet section). You get a full-featured virtual lab experience, a downloadable manual, and a 90-minute block of time for each module.
SAN Transport Mode is a way to implement a LAN-free backup solution when the backup server is a physical machine with direct access to the VMFS datastores. This method was first provided by the VCB framework and was then ported to the vStorage APIs for Data Protection.
In order to use this method properly, some configuration steps and considerations are required:
In a physical environment the term CPU usually refers to the physical package (or socket). The real processing units inside this package are called cores (and note that each core can contain multiple ALUs and can appear as multiple logical cores with the Hyper-Threading feature). Multiple CPUs usually define an SMP system, multiple cores a multi-core CPU, and multiple CPUs each with multiple cores a more complex system (usually based on a NUMA architecture).
In a virtual environment the term vCPU refers to a core assigned to a VM. Multiple vCPUs define a vSMP system, similar to a physical system with multiple single-core CPUs.
The number of vCPUs that can be assigned is determined by the license edition (for more info see http://www.vmware.com/products/vsphere/buy/editions_comparison.html): for all editions except Enterprise Plus the limit was 4 vCPUs in vSphere 4.x and is now 8 vCPUs in vSphere 5; for the Enterprise Plus edition it was 8 and is now 32.
But the number of vCPUs can have an impact on guest OS CPU licensing. For example, Windows XP and Windows 7 are limited to 2 "physical" CPUs (sockets) and cannot use more than that… but they can use more cores.
To get around this limit, it is possible to expose to a VM a more complex structure where each virtual socket has more than one core. This can be set with an advanced setting in the vmx file (see the example below). Note that starting with vSphere 5 this is also possible from the graphical interface.
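For example, to present a 4 vCPU VM as two virtual sockets with two cores each (the values here are just an example):

```
# 4 vCPUs exposed as 2 virtual sockets x 2 cores per socket
numvcpus = "4"
cpuid.coresPerSocket = "2"
```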
One year ago I described how to disable the certificate warning in the Windows View Client 5.0, either through a GPO or with a direct change in the local registry (on the client).
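For reference, the registry approach touched the CertCheckMode value of the View Client; a sketch (verify the path against your client version before use):

```
Windows Registry Editor Version 5.00

; View Client certificate checking: 0 = no check, 1 = warn, 2 = full verification
[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\Client\Security]
"CertCheckMode"=dword:00000000
```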
Now, with the new version 5.1 of the View Client, there is a new client-side way:
There may be cases where you have a Windows (Server) machine completely isolated from the Internet, for example in some appliances or storage solutions.
One interesting aspect about Windows products that I learned during a DataCore training is that in this case you can experience some delays in the user interface, both in the DataCore GUI and in the Windows one.
In a View environment PCoIP is usually the right choice and it works really well on a LAN (see also the previous post about the differences between PCoIP and RDP). But on a wide-area network (WAN) you have to consider some aspects in order to keep reasonable responsiveness or to maximize the number of remote clients: in particular, bandwidth constraints and latency issues. The PCoIP display protocol provided by VMware adapts to varying latency and bandwidth conditions, but some optimization may be needed.
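For example, the PCoIP GPO template (pcoip.adm) includes a session variable that caps the per-session bandwidth; as far as I recall it maps to the registry value below (in kilobits per second), but treat the exact path and name as something to verify in the Teradici/VMware documentation:

```
Windows Registry Editor Version 5.00

; Cap the PCoIP session bandwidth to 2 Mbit/s (value in kbit/s: 0x7d0 = 2000)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin]
"pcoip.max_link_rate"=dword:000007d0
```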