Following on from our previous post, where we covered systemd and journald, we continue our discovery of the new features of RedHat Enterprise Linux 7.
RedHat EL 7 – Networking
This is a bit of a something-about-nothing story. There is a new default networking service called NetworkManager. There's also a bunch of new ways to control and configure NetworkManager: nmcli, nmtui, control-center (gnome) and nm-connection-editor. As the names suggest, these range from command line, through menu driven, to GUI based configuration options.
I’ve been struggling to understand what the benefits of this new service are. I found this on the RedHat customer portal: “NetworkManager is a suite of cooperative network management tools designed to make networking simple and straightforward by eliminating the need to manually edit network configuration files by hand”. Doesn’t that sound lovely? The problem is that trying to change a setting using nmcli is often more complicated than editing an ifcfg file (which is still supported), so, like I said, I don’t get it.
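To show what I mean, here's a sketch of setting a static address both ways. The connection name, interface and addresses below are made up for the example, and the exact nmcli property syntax may vary slightly between RHEL 7 minor releases.

```shell
# The nmcli way: modify the connection's properties, then bring it up.
# ("System eth0" and all the addresses here are hypothetical.)
nmcli connection modify "System eth0" \
    ipv4.method manual \
    ipv4.addresses 192.168.1.10/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
nmcli connection up "System eth0"

# The old way: edit /etc/sysconfig/network-scripts/ifcfg-eth0, e.g.
#   BOOTPROTO=none
#   IPADDR=192.168.1.10
#   PREFIX=24
#   GATEWAY=192.168.1.1
#   DNS1=192.168.1.1
# then restart networking:
systemctl restart network
```

Judge for yourself which is simpler.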
One other thing is that there is a new type of bonding called Network Teaming. This is supposed to have a lower overhead than its predecessor and “its modular design can be interacted with via an API”. Very useful to API fans everywhere, I’m sure. As ever, more details on our wiki page.
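For the curious, teaming is driven by the teamd daemon, which takes a small JSON config selecting a “runner” (the failover/load-balance logic). A sketch of creating a team via nmcli, with hypothetical interface names:

```shell
# Create a team interface using the activebackup runner; the runner
# config is a JSON snippet passed straight through to teamd.
# (team0, em1 and em2 are hypothetical names.)
nmcli connection add type team ifname team0 con-name team0 \
    config '{"runner": {"name": "activebackup"}}'
nmcli connection add type team-slave ifname em1 master team0
nmcli connection add type team-slave ifname em2 master team0
nmcli connection up team0

# The API-friendly bit: query teamd's state over its control interface
teamdctl team0 state
```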
RedHat EL 7 – Grub 2
RedHat EL 7 has made the move to Grub 2. Grub 2 has been around for a while and has quite a few new features, like non-x86 support, custom themes/menus and other such stuff.
From the sysadmin’s point of view, it’s laid out completely differently. Again, there is a move away from editing configuration files (what have people got against vi anyway?), but in this case it matters: don’t edit /boot/grub2/grub.cfg, as it gets built dynamically. Some parameters can be set in /etc/default/grub (apparently it’s OK to edit this file!) and custom menu entries can be created by using scripts in /etc/grub.d – you know where to look for some examples. RedHat promote the use of the grubby command but, a bit like nmcli, to me it seems more trouble than it’s worth for all but the simplest options. However, if you’re doing the RHCE qualifications, I expect RedHat insist you know every option under the sun. You can be guaranteed to learn something useless when you’re doing certifications….
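The /etc/grub.d scripts are surprisingly simple: each one just prints grub.cfg fragments to stdout, and grub2-mkconfig stitches the output together. A minimal sketch of a custom 40_custom-style entry – the menu title, device and kernel paths here are entirely hypothetical:

```shell
#!/bin/sh
# Sketch of a custom /etc/grub.d snippet. Scripts in /etc/grub.d emit
# grub.cfg text on stdout; grub2-mkconfig concatenates them in order.
# The entry name, root device and kernel version below are made up.
gen_custom_entry() {
    cat << 'EOF'
menuentry 'My rescue kernel (hypothetical)' {
    set root='hd0,msdos1'
    linux16 /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/sda2 ro rescue
    initrd16 /initramfs-3.10.0-123.el7.x86_64.img
}
EOF
}

gen_custom_entry
```

After dropping a script like this into /etc/grub.d and marking it executable, you'd regenerate the config with `grub2-mkconfig -o /boot/grub2/grub.cfg`.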
RedHat EL 7 filesystems
So XFS becomes the default filesystem and Btrfs is available as a “technology preview”. Not sure what that means – probably if you try to place a support call against it you’ll be told it’s not supported. From what I understand, Btrfs is a bit like ZFS which, if you’ve never used Solaris, probably won’t help you. It uses a copy-on-write style update mechanism, which is supposed to provide a very low chance of filesystem corruption. Anyway, I guess you can play around with Btrfs ready for when it becomes properly available in RedHat EL 8 or whatever…
XFS has been around for years. Christ, I remember using it with Irix in the late 1990s (you can check out our Irix wiki page here BTW in case you’re interested!! Feel free to add to it!!). From what I understand, it has been selected as the default filesystem due to its scope for growth (supported up to 500TB), performance and quick recovery. There’s a mkfs.xfs command to create XFS filesystems, an xfs_growfs command (guess what that’s for) and a command to defrag (xfs_fsr)… more details in the usual place.
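A quick sketch of the lifecycle, using a hypothetical LVM volume (the volume group and mount point names are made up). One gotcha worth knowing: XFS can be grown but never shrunk, and xfs_growfs operates on a mounted filesystem.

```shell
# Create and mount an XFS filesystem (device/mount point hypothetical)
mkfs.xfs /dev/vg00/data
mount /dev/vg00/data /data

# Grow it: extend the underlying volume first, then grow the
# filesystem while it is mounted. No size argument = use all space.
lvextend -L +10G /dev/vg00/data
xfs_growfs /data

# And the defragmenter mentioned above:
xfs_fsr /data
```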
Firewalls in RedHat EL 7
Firewalld replaces iptables as the firewall service in RedHat 7. It is based around the concept of zones. Having done a lot of work in the Storage Area Network space, I found the term zones confusing at first. However, after getting SAN zones out of my head, I came to understand that in firewalld a zone is basically a set of firewall rules that apply to a certain network environment, e.g. you might have a DMZ zone. There’s a set of predefined zones that can be used as-is or adjusted using the firewall-cmd command. Alternatively, you can create your own custom zones with your own rules and use those. It is quite easy to use and has the big advantage that rules can be added or removed without having to restart the firewall. In the background, iptables commands are used to talk to the kernel packet filter. More here.
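A few illustrative firewall-cmd commands to make the zone idea concrete (the custom zone name is hypothetical). Note the runtime vs permanent split: changes without --permanent apply immediately but vanish on reload, while --permanent changes need a --reload to take effect – no service restart either way.

```shell
firewall-cmd --get-zones                    # list the predefined zones
firewall-cmd --get-default-zone             # which zone new interfaces land in

# Adjust a predefined zone persistently, then apply it
firewall-cmd --zone=dmz --add-service=http --permanent
firewall-cmd --reload

# Or create your own zone ("myzone" is a made-up name)
firewall-cmd --new-zone=myzone --permanent
firewall-cmd --reload

# A runtime-only change: takes effect now, gone after the next reload
firewall-cmd --zone=public --add-port=8080/tcp
```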
Time in RedHat EL 7
Time, there’s never enough of it, is there? But at least your lack of time will be measured with pinpoint accuracy in RHEL 7. NTP remains the protocol of choice, but there’s a new default implementation: chrony. It is aimed at mobile systems, or perhaps ones with really bad network connections, as it can tolerate time sources being unavailable for periods of time and will sync the clock back up when they return. The old ntpd is still available if you prefer it.
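A few commands for poking at chrony on a running system (the pool server in the comment is just an example of what you might see in the config):

```shell
systemctl status chronyd    # the chrony daemon itself
chronyc sources -v          # list configured time sources and their state
chronyc tracking            # how far adrift the system clock currently is

# Time sources live in /etc/chrony.conf, with lines like:
#   server 0.rhel.pool.ntp.org iburst
```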
Also available is PTP, the Precision Time Protocol. Now, I thought NTP was fairly precise, but apparently it’s not precise enough for everyone. PTP can use hardware timestamping features in switches and NICs to account for network delays, giving sub-microsecond accuracy – the ultimate in timekeeping. Not sure what the use cases are – algorithmic trading perhaps, or real-time devices? It would certainly be useful to make sure I go home on time.
One other new time feature is the timedatectl command, which can be used to view and control the system clock and time zone settings.
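For example (the time zone chosen here is arbitrary):

```shell
timedatectl                                # show current time, zone and NTP status
timedatectl list-timezones | grep London   # hunt for the zone name you want
timedatectl set-timezone Europe/London     # change the time zone
timedatectl set-ntp yes                    # enable NTP synchronisation
```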
RedHat EL 7 – Docker
Yes, the hypetastic feature everyone is talking about. Of course it’s not really a RedHat “feature”, but RHEL 7 is the first release docker is bundled with. And in case you’ve recently moved to the IT world from a mountain retreat in Bhutan, docker is a container technology that allows applications to run in a kind of cut-down VM. A container will contain all the libraries and executables required to run the application, but not the whole OS, thereby creating a very lightweight run-time environment.

One thing I read about docker containers confused me for quite a while: that a docker container can only run one command. I had images in my mind of a docker container running just the ls command, which kind of seemed a bit pointless. I think a better explanation is that it can only run one service. So if you had a container running apache, you couldn’t install mysql in there too – you’d need to run that in a separate container. You could have your docker container run a bash shell, and then run the ls command (and lots of others) in that, but it wouldn’t really be that useful unless you wanted to learn the bash shell.
The contents of a container are kept separate from the server OS using a combination of cgroups, namespaces and chroot. You can have lots of fun playing around with them and some examples are here.
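The one-service-per-container idea above looks something like this in practice. The container names, port mapping and password are made up for the example, and the images assumed are the stock httpd and mysql ones from a public registry:

```shell
# One service per container: apache in one, mysql in another
# (names, port and password here are all hypothetical examples)
docker run -d --name web -p 8080:80 httpd
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql

docker ps                        # both containers running side by side

# And if you do want that bash shell to poke around in:
docker exec -it web /bin/bash
```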
What are the use cases? Well, certainly due to their lightweight nature you can cram a lot more containers onto a server than VMs. So if you want to make the most of your server resources, this could be the way to go. Certainly if you’re paying for a VM in the cloud, containers could help you save money by cutting down on the VMs you’d need. The use that really seems to have caught on, though, is for application releases. You can configure your container to have everything it needs to run and just deploy the container, making everything simpler and quicker. Containers certainly feed into the devops philosophy and help enable that concept. Obviously devops is another term bandied about by the media without anyone actually defining what they mean by it. I’ve got to say that, although they may be excellent programmers, one or two developers I have encountered have struggled with the cd command, so I’m not sure I’d trust them with my run-time environment. That aside, I think devops is a good idea, but you need people with the skills to make it work. Not sure how many people there are about with the required skills, but future blogs will feature AWS and puppet, so I’m definitely jumping on that devops bandwagon!!
Well, that’s it for now. Look out for the next blog which, as mentioned, will be around some AWS stuff. Bye!