

What is your Hypervisor(s) of Choice? and why?




Posted by viabandwidth, 03-08-2011, 10:20 AM
Hi People, I'm just curious what you choose to be your hypervisor of choice? to get a little more technical 1. OpenVZ/Virtuozzo - FreeBSD Jail and Solaris Jails (I am sure I am missing some) are "containers" or "Jails" and not hypervisors. 2. Xen/XenServer/OracleVM are modifications of Xen Opensource 3. KVM (not the console cables) and Virtualbox are not hypervisors either, however, I'll document them as Tier 2 Hypervisors I may have to rephrase the question; which virtualization technology/technologies do you prefer and why? I've run into this question many a time and tried to explain to people the differences, but they just more confused. On a performance level, I think it's negligible..yes, there are differences, but they mostly rely on applications and or hardware used. Some other HV's are HyperV, ESX/ESXi, L4 microkernels, Green Hills Software’s INTEGRITY Padded Cell, VirtualLogix’s VLX, TRANGO, IBM’s POWER Hypervisor (PR/SM) (I think it's also known as PowerVM), Sun’s Logical Domains Hypervisor, Hitachi’s Virtage hypervisor. I believe there is also one created by a company called "National Instruments". I personally never heard of the company, but they don't look like a small organization in the slightest. Like I said..some of these are hardware and or application specific. I have excluded some "workstation" hypervisors as they really do not fall under the "web hosting" industry, unless you offer VDI (Virtual Desktop) for your services. That's not the kind of answer I was looking for. As for me...my "personal" preference is Virtualbox. It's considered a Tier 2 hypervisor...some people say type 2..I prefer to call it an "application hypervisor" because it's an application that runs on the main os (2nd layer above bare metal). 
Anyway, VirtualBox 4.0 can do a lot of things: it can control I/O, supports multiple snapshot/image technologies, is very interoperable, AND can even run major legacy OSes like DOS 3.1, OS/2 Warp, etc. You may think I'm nuts, but you can get a lot of clients who are too scared to move to another host because their old stuff isn't supported and they don't want to invest in the upgrade.

Posted by FastServ, 03-08-2011, 11:08 AM
I've found that Windows performance is best on XenServer...tried a few others but always went back to XenServer for most of the 'public facing' nodes. For personal/private stuff, VirtualBox fits my needs.

Posted by bqinternet, 03-08-2011, 12:31 PM
I use Xen. I do synchronous replication of storage between pairs of servers using DRBD, so live migration and failover are possible without needing a fancy SAN. DRBD is configured to use LVM volumes, so I can use LVM's snapshot capabilities too. It's a very powerful solution. Most of my VMs run FreeBSD, some of which use jails, so I suppose there's some nested virtualization going on.
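
A minimal sketch of the kind of DRBD resource described above, in DRBD 8.x-era syntax; the resource name, hostnames, addresses, and LVM volume path are hypothetical, not from the post:

```
# /etc/drbd.d/vm01.res -- hypothetical resource backing one VM's disk
resource vm01 {
  protocol C;                       # synchronous replication, as described
  device    /dev/drbd0;
  disk      /dev/vg0/vm01-disk;     # backing LVM logical volume (assumed name)
  meta-disk internal;
  on nodeA { address 10.0.0.1:7789; }
  on nodeB { address 10.0.0.2:7789; }
}
```

The Xen guest's disk line would then point at /dev/drbd0 instead of the raw LVM volume, so the primary role can move between the two nodes during live migration or failover.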

Posted by FastServ, 03-08-2011, 01:34 PM
How is FreeBSD's performance on Xen? Last I tested it wasn't very good, because FreeBSD is not Xen-aware as a domU...

Posted by JasonD10, 03-08-2011, 03:53 PM
Yes, that is the problem with HVM OSes. They at least need paravirtualization drivers to run half decently. FreeBSD was supposed to support Xen domU properly in the 8.0 release, but development was basically dropped in favor of EC2. I'm a long-time user and fan of FreeBSD but have been frustrated by this apparent lack of support, especially considering Xen is by far the most used hypervisor on the market.

Posted by JasonD10, 03-08-2011, 03:56 PM
Well, I can say that years ago, when XenServer was still XenSource at v2, comparing Xen to VMware showed a massive difference, over 50% in performance. Today the gap is narrowing, but IMHO Xen still takes the cake in both performance and reliability. We primarily rely on Xen in our environments, although we plan on adding support back for the VMware hypervisor later this year, especially for Windows. We do some very heavy processing on both Linux and Windows platforms, and the paravirtualization support and lack of drivers on Xen have been challenging for Windows. For Linux, however, Xen still takes the cake. I do not have experience with KVM but have heard good things about it as well.

Posted by wartungsfenster, 03-08-2011, 04:42 PM
Hi, if I get to choose freely I'll usually settle for Xen; when picking something Xen-based I'll usually pick Oracle VM. I've also used ESXi, XenServer, VirtualBox and KVM and won't hesitate to use any of them if it makes sense. But I also made the first distro for Xen dom0s, and I love how fast & stable Xen is, and how easy it is to configure OracleVM to "stay out of my way" so that I can build setups of any complexity I want. Citrix XenServer tends to make me lose my sanity. It's an OK tool if you either have devs around to un-break it or you're looking at a very small-scale deployment (under 100 VMs). But if that doesn't apply, I'd avoid it. There are just too many things re-wired under the hood. I love "simple" solutions like KVM or VirtualBox because it's so easy to quickly install a test VM, but that's a very different scope than ESXi or Xen (I think).

Posted by wartungsfenster, 03-08-2011, 04:48 PM
The Xen kernel is there, but still not as mature as the one for NetBSD. I think the EC2 PV support will get us stable Xen support in the end; the port has also matured a little, and bugs are now actively being closed. Things got better once matters were no longer with the initial developer, I think. I'm looking for someone to team up with and auto-build downloadable Xen images for FreeBSD, or a deployment script to run on Linux. Because, to be honest, it seems the developers in question don't really care about the usability of the Xen port. You'd need a working kernel and a "netboot ISO" or something like that to install FreeBSD PV domUs, but no one bothers to provide those.
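
For reference, a PV domU definition of the sort those images would plug into might look like this; every path and name below is an assumption for illustration, since the post doesn't give a config:

```
# Hypothetical /etc/xen/freebsd-pv.cfg for a FreeBSD PV guest
kernel = "/boot/guests/freebsd-xen-kernel"   # a Xen-aware FreeBSD kernel stored on the dom0
memory = 512
name   = "freebsd-pv"
vif    = [ 'bridge=xenbr0' ]
disk   = [ 'phy:/dev/vg0/freebsd-disk,xvda,w' ]
```

The missing pieces the post complains about are exactly that pre-built Xen-aware kernel and an installer image to populate the disk.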

Posted by wartungsfenster, 03-08-2011, 04:52 PM
Testing FreeBSD HVM right now: it's not slow, but it needs a lot of CPU to get any disk I/O done. I get about 55MB/s of disk I/O compared to 85MB/s for a Linux PV VM, with CPU usage around 60% to move that data versus about 5% to do the same in a PV domU.

Posted by ghMike, 03-08-2011, 05:06 PM
Microsoft Hyper-V. Obvious reason is we're a Windows host, so it was the natural choice. But it does have some great features like live migration and the recent addition of dynamic memory.

Posted by FastServ, 03-08-2011, 06:55 PM
As I'm sure you know, but I want to make clear: XenServer is not challenged with Windows at all and still beats most if not all others... there are very good Citrix PV drivers and excellent performance from Windows 2000 to 2008 R2. Open-source Xen is and always has been a crap shoot with Windows (GPLPV), and such a setup is not something I'd provide to a paying customer and still be able to sleep at night.

Posted by JasonD10, 03-08-2011, 07:19 PM
*nod*, we use expensive, paid-for, closed-source PV drivers which outperform GPLPV and XenServer's (which are built from GPLPV, last I heard). And yes, completely agree: the GPLPV drivers were downright horrible the last time we tried them. When you start throwing REAL production I/O loads at them, they crap out badly.

Posted by bqinternet, 03-09-2011, 12:28 AM
Without PV drivers, it depends what you're doing with it. On my hardware, a VM would max out around 60Mbps, which is more than enough for most uses, and it feels very responsive. With PV drivers, I'm able to get 800+Mbps. FreeBSD 8.2 comes with PV drivers for the network and disk; you need to compile the XENHVM kernel. Get the source distribution with sysinstall, then cd /usr/src && make buildkernel KERNCONF=XENHVM && make installkernel KERNCONF=XENHVM. You'll need to edit /etc/rc.conf to rename the NIC interface to xn0, and edit the vif line in your Xen configuration to take out "type=ioemu" (in my case, I also remove "model=e1000"). It makes a huge difference. With some simple testing, I get around 800Mbps on the xn0 interface, and I had no trouble getting 180MB/s from the disk.
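
The build steps above, collected into one sketch to run inside the FreeBSD 8.2 guest (the DHCP line is an assumption; the post only says to rename the interface to xn0):

```shell
# Fetch the source tree first with sysinstall (into /usr/src), then:
cd /usr/src
make buildkernel KERNCONF=XENHVM     # build the Xen-aware kernel with PV net/disk drivers
make installkernel KERNCONF=XENHVM   # install it as the boot kernel

# Point /etc/rc.conf at the PV interface xn0, e.g. replace the old
# ifconfig_em0 line with something like:
echo 'ifconfig_xn0="DHCP"' >> /etc/rc.conf
```

On the dom0 side, drop "type=ioemu" (and, per the post, optionally "model=e1000") from the guest's vif line so the PV path is used after reboot.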

Posted by wartungsfenster, 03-09-2011, 03:12 AM
Scott, thank you a lot. So heaven is only a cvsup away.

Posted by FastServ, 03-09-2011, 11:44 AM
Citrix PV drivers are definitely not taken from GPLPV. They were written by Citrix (hence no source code) and have been in a stable state well before GPLPV existed. What is scary is that onApp relies 100% on GPLPV for Windows on their Xen setup last I checked.

Posted by iTom, 03-09-2011, 11:48 AM
VMware is a lot more feature-rich and reliable than Hyper-V at hosting Windows VMs. However, it is an enterprise product, so it comes with a bigger cost...

Posted by JasonD10, 03-09-2011, 12:34 PM
Good to hear about Citrix's drivers. I was with XenServer since it was XenSource, and back then it was basically GPLPV support (or something open source, but it was years ago so I don't remember much about it). OnApp does not have a redistributable license for Windows PV drivers from a real vendor? Ouch. I did not know that. That's a big red flag for anyone wanting to host Windows apps on that platform.

Posted by FastServ, 03-09-2011, 12:42 PM
Don't quote me on it... that was the last time I checked, which was some time last year. They may have something better now, but based on what I've actually seen WRT Windows guests, I don't think so.

Posted by WebGuyz, 03-10-2011, 02:03 AM
We migrated 16 Windows 2003 Standard production servers from Hyper-V over to OnApp. They perform better under Xen with the latest GPLPV drivers than they did under Hyper-V (original, not R2).

Posted by FastServ, 03-10-2011, 10:05 AM
That's good to hear... sounds like GPLPV is making progress. But like I said, when it works, it works... when it doesn't, all hell breaks loose, especially with 2k8 or x64 guests. I never had that issue with Citrix... their drivers always 'just work' regardless of Windows guest flavor.


