Upgrade to Fedora 24

I updated to Fedora 24 today, only a day after its release. Two dnf commands and 30 minutes later, my system was upgraded to Fedora 24.

And I must say, wow, this was a smooth process! I have tried to upgrade between releases in the past and always hit some weird bugs along the way, so I always ended up reverting to a clean install. This is the first time that, after the final reboot, everything works fine, everything is in its place, everything is OK!
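For the curious, the upgrade boils down to the standard dnf-plugin-system-upgrade workflow (plugin and command names as documented by Fedora; adapt to your setup):

```shell
# Install the system-upgrade plugin if it is not already present
sudo dnf install dnf-plugin-system-upgrade

# Download all the Fedora 24 packages (this is the long part)
sudo dnf system-upgrade download --releasever=24

# Reboot into the offline upgrade, then into Fedora 24
sudo dnf system-upgrade reboot
```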

I am really impressed with the result. Fedora 24 is really polished and brings some improvements to the user experience. Good job, everyone!


Accessing your RHEV VMs with your iPad and Spice!

A friend of mine sent me a link to a new project that aims to provide an HTML5 client for the Spice protocol (http://www.spice-space.org/page/Html5). He asked me to check whether I could access my VMs under RHEV with my iPad.

The page above describes the steps required to connect to a standalone Spice server. Unfortunately, I didn't find any information on how to connect to a VM running in RHEV. The problem is that you cannot find the address, port and password for the Spice console in the RHEV-M interface. After a bit of research, I found that all the information is there; you just have to dig a little to get it. Here are the steps from the beginning:
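As a starting point, here is a hedged sketch of where that information lives in the RHEV 3.x REST API (hostname, credentials and VM id below are placeholders; paths and XML fields assumed from the oVirt API docs):

```shell
# Ask RHEV-M for the VM's display settings; the <display> element
# holds the Spice address, port and secure port
curl -k -u 'admin@internal:password' \
  'https://rhevm.example.com:8443/api/vms/VM-ID' | grep -A6 '<display>'

# Set a one-time Spice password (a "ticket"), here valid for 120 seconds
curl -k -u 'admin@internal:password' -X POST \
  -H 'Content-Type: application/xml' \
  -d '<action><ticket><expiry>120</expiry></ticket></action>' \
  'https://rhevm.example.com:8443/api/vms/VM-ID/ticket'
```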


Playing with RHEV 3.1

I took the snowy weekend to play with the new release of RHEV 3.1. This new release comes with an impressive set of new features, including the removal of the Internet Explorer requirement! Also included (and to be reviewed in a future post) is the possibility to use Gluster-based storage with RHEV!

My plan at the beginning was to use my old laptop and install RHEV-M on it. That part went well; the installation is really easy now. I also wanted to test the new “All-In-One” plugin on it. This plugin allows one to install a complete, working RHEV environment on a single server: the AIO plugin configures a local data center, cluster, storage and a local host.

This plugin is not supported by Red Hat for production use, but it's a welcome addition that makes demoing the RHEV platform easier. Sadly, I didn't have much success with it. I hit multiple crashes and timeouts during the plugin configuration that left my RHEV-M broken. (Keep in mind that this plugin is still a proof of concept; oVirt is working hard to make it solid.)

So the plan changed: I used a VM on my new laptop to install RHEV-M and installed the RHEV hypervisor on my old laptop. One hour later everything was working fine and I had a few VMs running in my “data center”. I also installed the Reports engine in RHEV-M (a big 10-minute task!). The integration of reports into RHEV-M is absolutely awesome: I can right-click on just about anything and launch a report that gives me precious information about my resources, workload, usage, etc.

The next step is to add Gluster-based storage to my data center and test the new live storage migration with it. I will post about my experience as soon as I can!

oVirt and GlusterFS

I was checking the oVirt wiki yesterday when I saw something I was not aware of: oVirt already supports GlusterFS! Not everything is there and working 100%, but it's developing rapidly.

What amazes me is the fast evolution that free software enables. The first official release of oVirt is not even six months old, and I would not have enough space here to list all the new features being worked on. Go back two years to when RHEV first came out: it was very promising, but at the same time very far from the feature set that VMware had. Look at RHEV/oVirt now and at the pace of development since then.

If you want to try out oVirt and GlusterFS, or just see what it looks like, someone posted a short tutorial at http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-either-nfs-or-posix-native-file-system

Beware of the semaphores!

I ran into a weird problem today while migrating a bunch of servers from an old HP SAN to a shiny new EMC VMAX.

The client had chosen PowerPath as the multipathing software on RHEL 5. The servers in question run multiple Oracle databases in a grid configuration. We had no problem with the old SAN, and we had no problem with the new EMC VMAX using dm-multipath.

The problem started on the first reboot after installing PowerPath. All the devices were there and all the mount points worked fine, but Oracle refused to start with the following message:

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpsemsper

Hmm, interesting. After searching Oracle Metalink, we found that this message is normally related to an insufficient number of available semaphores. But all our other servers work fine, and we had followed the Oracle recommendations when we set the number of semaphores initially.

Our systems are currently configured with 128 semaphore arrays, as per the Oracle recommendation. Using “ipcs -u”, we found out that all 128 arrays were already in use, even before trying to start Oracle. With the “ipcs -s” command, we saw that the root user held a huge number of semaphore arrays, 125 to be exact. Why do these systems have 125 semaphore arrays for the root user when our other systems have around 25-30?

Here comes the PowerPath semaphore eater! If you use PowerPath in combination with an EMC VMAX SAN, PowerPath uses one semaphore array per LUN, per path to that LUN. So, if you have 4 paths to the EMC VMAX and 25 LUNs presented to the server, 100 semaphore arrays are gone automatically at boot, leaving too few for your other normal tasks.

This problem is easily fixed by changing the kernel.sem line in /etc/sysctl.conf. The semaphore arrays limit is the last field. You can view your current limits with the “ipcs -l” command. I plan to write a follow-up post shortly on using SystemTap to diagnose, during boot, what consumes semaphore arrays.
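To make that concrete, here is a quick sketch (Linux-only; the four fields of kernel.sem are SEMMSL, SEMMNS, SEMOPM and SEMMNI, the last one being the arrays limit — the new value of 256 below is just an example, size it for your own workload):

```shell
# Print the current semaphore limits from the running kernel; the
# fourth field (SEMMNI) is the maximum number of semaphore arrays
awk '{ printf "SEMMSL=%s SEMMNS=%s SEMOPM=%s SEMMNI=%s\n", $1, $2, $3, $4 }' /proc/sys/kernel/sem

# To raise the arrays limit, e.g. from 128 to 256, put this line in
# /etc/sysctl.conf and run "sysctl -p":
#   kernel.sem = 250 32000 100 256
```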

You can see this post for reference if you have a valid Red Hat subscription: https://access.redhat.com/knowledge/solutions/23696

Or this article on Oracle Metalink, with a valid subscription: 949468.1

Fedora 16 media!

Canada just got a new shipment of Fedora 16 media, thanks to Nick Bebout! If any Ambassadors (or anyone organizing a conference, installfest, etc.) want a batch for their events, please feel free to contact me directly.

I also have some other swag (stickers, balloons, pens, etc.) that I can throw in the box!

Fedora 15 media

Thanks to nb, Canada and Quebec got their Fedora 15 media!

The new multi-desktop live DVDs are really nice and give everyone the chance to try out Fedora fully.

We will have the chance to distribute them at the “Software Freedom Day” next Saturday. Stay tuned for pictures of this event!