Archive for the ‘Red Hat’ Category

Accessing your RHEV VMs with your iPad and Spice!

February 3, 2013

A friend of mine sent me a link to a new project that aims to provide an HTML5 client for the Spice protocol. He asked me to check whether I could access my VMs under RHEV with my iPad.

The page above describes the steps required to connect to a standalone Spice server. Unfortunately, I didn’t find any information about how to connect to a VM running in RHEV. The problem is that the address, port and password needed to connect to the Spice console are not shown anywhere in the RHEV-M interface. After a bit of research, I found that all the information is there; you just have to search a little to get it. Here are the steps from the beginning:
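As a rough sketch of where that information lives: in RHEV 3.x the REST API exposes the Spice host and port in a VM’s display element, and a one-time console password can be requested with the “ticket” action. The host name, VM id and credentials below are placeholders, and the sample XML only mimics what the API returns; treat this as an assumption-laden sketch, not the post’s exact steps.

```shell
# Hedged sketch: the live REST calls are shown as comments because they
# need a real RHEV-M; rhevm.example.com, <vm-id> and the credentials
# are placeholders.
#
#   curl -k -u 'admin@internal:password' \
#        'https://rhevm.example.com:8443/api/vms/<vm-id>'
#
#   curl -k -u 'admin@internal:password' -X POST \
#        -H 'Content-Type: application/xml' \
#        -d '<action><ticket><expiry>120</expiry></ticket></action>' \
#        'https://rhevm.example.com:8443/api/vms/<vm-id>/ticket'
#
# Sample of the <display> element such a GET returns:
sample='<display>
  <type>spice</type>
  <address>10.0.0.12</address>
  <port>5912</port>
  <secure_port>5913</secure_port>
</display>'

# Pull the Spice address and port out of the XML with sed
host=$(printf '%s\n' "$sample" | sed -n 's|.*<address>\(.*\)</address>.*|\1|p')
port=$(printf '%s\n' "$sample" | sed -n 's|.*<port>\([0-9]*\)</port>.*|\1|p')
echo "Spice console at $host, port $port"
```

With the address, port and ticket password in hand, they can be fed to the HTML5 Spice client instead of a standalone server’s settings.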


Playing with RHEV 3.1

January 20, 2013

I took the snowy weekend to play with the new release of RHEV 3.1. This new release comes with an impressive set of new features, including the removal of the Internet Explorer requirement! Also included (and to be reviewed in a future post) is the possibility of using Gluster-based storage with RHEV!

My initial plan was to use my old laptop and install RHEV-M on it. That part went well; the installation is now very easy. I also wanted to test the new “All-In-One” plugin on it. This plugin allows one to install a complete, working RHEV environment on a single server: the AIO plugin configures a local data center, cluster, storage and a local host.

This plugin is not supported by Red Hat for production use, but it’s a welcome addition that makes demoing the RHEV platform easier. Sadly, I didn’t have much success with it. I hit multiple crashes and timeouts during the plugin configuration that left my RHEV-M broken. (Keep in mind that this plugin is still a proof of concept; oVirt is still working hard to make it work well.)

So, the plan changed: I used a VM on my new laptop to install RHEV-M, and I used my old laptop to install the RHEV hypervisor. One hour later everything was working fine and I had a few VMs running in my “data center”. I also installed the Reports engine in RHEV-M (a big 10-minute task!). The integration of reports into RHEV-M is absolutely awesome: I can right-click on almost anything and launch a report that gives me precious information about my resources, workload, usage, etc.

The next step is to add Gluster-based storage to my data center and test the new live storage migration with it. I will post my experience as soon as I can!

oVirt and GlusterFS

July 6, 2012

I was checking the oVirt wiki yesterday when I saw something I was not aware of: oVirt already supports GlusterFS! Not everything is there and working 100% yet, but it’s developing rapidly.

What amazes me is the fast evolution that free software enables. The first official release of oVirt is not even six months old, and I would not have enough space here to list all the new features being worked on. Go back two years to when RHEV first came out: it was very promising, but at the same time very far from the feature set that VMware had. Look at RHEV/oVirt now and at the fast pace of development since then.

If you want to try out oVirt and GlusterFS, or just see what it looks like, someone posted a short tutorial on this at

Beware of the semaphores!

May 3, 2012

I ran into a weird problem today while migrating a bunch of servers from an old HP SAN to a shiny new EMC VMAX.

The client had chosen PowerPath as the multipathing software on RHEL 5. The servers in question run multiple Oracle databases in a grid configuration. We had no problem with the old SAN, and we had no problem with the new EMC VMAX using dm-multipath.

The problem started at the first reboot after installing PowerPath. All the devices were there and all the mount points worked fine, but Oracle refused to start with the following message:

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpsemsper

Hmm, interesting. After searching on Oracle Metalink, we found that this message is normally related to an insufficient number of available semaphores. But all our other servers worked fine, and we had followed Oracle’s recommendations when we initially set the number of semaphores.

Our systems are configured with 128 semaphore arrays, as per the Oracle recommendation. Using “ipcs -u”, we found out that all 128 available arrays were already in use, even before trying to start Oracle. With the “ipcs -s” command, we saw that the root user held a huge number of semaphore arrays, 125 to be exact. Why did these systems have 125 semaphore arrays for the root user when our other systems had around 25-30?
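The tally above can be reproduced with a quick one-liner. This is not from the original post; it assumes the util-linux `ipcs -s` layout, where data lines start with a hex key and the owner is the third column.

```shell
# Count semaphore arrays per owner: "ipcs -s" prints a banner, a header
# line (key semid owner perms nsems) and one line per array. Data lines
# start with a 0x... key; tally the owner (third column) of each.
ipcs -s | awk '$1 ~ /^0x/ {count[$3]++}
               END {for (u in count) print u, count[u]}'
```

On the affected servers this is where root showed up with 125 arrays before Oracle even tried to start.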

Here comes the PowerPath semaphore eater! If you use PowerPath in combination with an EMC VMAX SAN, PowerPath uses one semaphore array per LUN, per path to that LUN. So, if you have 4 paths to the EMC VMAX and 25 LUNs presented to the server, 100 semaphore arrays automatically go away at boot, leaving too few for your other normal tasks.

This problem is easily fixed by changing the value of the kernel.sem line in /etc/sysctl.conf; the semaphore array limit is the last field. You can view your current limits with the “ipcs -l” command. I plan to write a follow-up post shortly on using SystemTap to diagnose, at boot time, what consumes semaphore arrays.
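As a sketch of that fix (the new SEMMNI value of 256 below is illustrative; keep the first three fields at whatever Oracle recommends for your system):

```shell
# The four fields of kernel.sem are SEMMSL, SEMMNS, SEMOPM, SEMMNI;
# the last one (SEMMNI) is the system-wide limit on semaphore arrays.
cat /proc/sys/kernel/sem   # same numbers "ipcs -l" reports
# e.g. on RHEL 5 defaults:  250  32000  32  128

# With 4 paths x 25 LUNs, PowerPath grabs 100 arrays at boot, so raise
# SEMMNI from 128 to something comfortable, e.g. 256 (needs root):
#   echo 'kernel.sem = 250 32000 32 256' >> /etc/sysctl.conf
#   sysctl -p
```

After reloading, `ipcs -l` should report the new array limit and Oracle’s semget calls stop failing with ENOSPC.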

You can see this post for reference if you have a valid Red Hat subscription:

Or this article on Oracle Metalink with a valid subscription : 949468.1

RHEV 3 beta

August 29, 2011

Red Hat released the first public beta of RHEV 3 a week or so ago. I didn’t have time to play with it until yesterday, while waiting for the rest of Irene to strike my city.

So, I grabbed the documentation, went through it rapidly and started the installation on a spare server I have at home. The first thing to notice is how wonderful it is to just “yum install rhevm”! Seriously, compare that to the need to install Windows 2k8, Active Directory, .NET, etc., etc. This is a major step up for RHEV.

After playing with RHEV for the whole day, here is what I concluded:

- For a first public beta, RHEV 3 is extremely stable. I saw some problems here and there, but nothing major: some errors in the documentation and some problems accessing the Spice console with Fedora 15 and Gnome 3 (and that one is not necessarily a RHEV problem). Everything else is stable, smooth and works as advertised. I am impressed, because I thought that porting from .NET to Java would cause more problems than that.

- RHEV 3 also gave me the chance to play with FreeIPA, which is in “Technology Preview” in RHEL 6.1. Again, I was really impressed with this product and will be sure to look at it again on its own in the near future.

- The upgrade to RHEL 6.x really shows: the performance is amazing. I look forward to trying RHEV 3 on bigger hardware than what I currently have. This is largely due to performance improvements in the kernel and KVM: KVM performance improvements and optimizations

- I like how Red Hat combined their technologies with other major Open Source products: JBoss, OpenJDK, PostgreSQL, KVM, RHEL 6.x, FreeIPA, etc. I like that a big project like RHEV, based on Windows, .NET, Active Directory and SQL Server, could be ported to stable and trusted Open Source equivalents.

The only downside in all this is that you still need Internet Explorer to access the Administration console. From what I found after searching a bit, this is supposedly going to be fixed in RHEV 3.1. That is a small downside for all the gains that RHEV 3 comes with.

Finally got my RHCA!

May 11, 2011

Wow, I finally got the result of my RH442 exam and I passed! This was my fifth Certificate of Expertise, so I got another email from Red Hat confirming that I earned my RHCA!

My goal was to get my RHCA in one year, passing all the exams on the first try. It took four months more, but I met my objective of passing all of them on the first try!

This is not really related to Fedora but I just wanted to share that ;)

