Provisioning with Red Hat Satellite 6.2 – part 3

Welcome to part 3 of our Red Hat Satellite provisioning series. In part 1 we covered the steps required to get provisioning set up in Satellite. Part 2 covered provisioning a VM in VMware. In this article we cover provisioning a Docker container, an AWS EC2 instance, and a couple of other useful things to know, like using gPXE.

Provisioning a Docker Container from Satellite

Although the Red Hat Satellite 6.1 documentation provides a procedure to provision a Docker container on a standard RHEL 7 server, for 6.2 the procedure switches to using Red Hat Enterprise Linux Atomic Host. This makes a lot of sense: if the purpose of your server is to run Docker containers, you're best off using a version of the OS tuned specifically for that purpose. Atomic Host already includes all the required Docker components, along with other add-ons like Kubernetes for orchestrating your containers. So our first task is to get Atomic Host up and running, and we can of course use Satellite to do this.

Building an Atomic server

One of the fundamentals of Red Hat Enterprise Linux Atomic Host is that it uses the OSTree model of updating rather than packages. (If you're unfamiliar with OSTree, a good introduction can be found here, but in summary it overlays complete filesystem trees in a git-like manner.)
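Once an Atomic host is up, you can see this model from the client side: an update pulls and deploys a complete new tree, keeping the previous one around for rollback. A quick illustration (run on the Atomic host itself, not on Satellite):

# atomic host upgrade
# rpm-ostree status

The first command pulls and deploys the latest tree; the second shows the booted deployment and the rollback deployment. To provision Atomic hosts, it is therefore necessary to add OSTree support to Satellite: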

satellite-installer --scenario satellite --katello-enable-ostree=true

You'll also need to ensure the foreman-discovery-image package is installed:

# rpm -qa | grep foreman-discovery

foreman-discovery-image-3.1.1-16.el7sat.noarch

And enable it:

satellite-installer --enable-foreman-plugin-discovery

Now you need to enable the Atomic Host repository. In the Satellite GUI, select Content > Red Hat Repositories and tick Red Hat Enterprise Linux Atomic Host.


Download the latest Red Hat Atomic Host Installation ISO and mount it on the Satellite server under /var/www/html/pub, e.g.

mount -o loop ./rhel-atomic-installer-7.3.2-1.x86_64.iso /var/www/html/pub/atomicpxe/
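Note that a loop mount will not survive a reboot. If you want it to persist, an /etc/fstab entry along these lines should do the trick (assuming you keep the ISO in /root):

/root/rhel-atomic-installer-7.3.2-1.x86_64.iso /var/www/html/pub/atomicpxe iso9660 loop,ro 0 0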

Create a new installation medium pointing at this ISO: Hosts > Installation Media > New Medium.
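The same can be done from the CLI with hammer; a sketch, assuming the Satellite hostname and mount point used above:

# hammer medium create --name 'RHEL Atomic' --os-family 'Redhat' --path 'http://satellite1.justsomestuff.co.uk/pub/atomicpxe/'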

Create the Operating System config:
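This is done in the GUI under Hosts > Operating Systems. Roughly the same can be scripted with hammer; a sketch, where the name and versions are illustrative and should match the Atomic release you downloaded:

# hammer os create --name 'RedHat_Atomic' --major 7 --minor 3 --family 'Redhat'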

An Atomic provisioning template already exists in Satellite.


Now, under Content > Products, Red Hat Enterprise Linux Atomic Host should appear. Sync up the OSTree repositories.
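The sync can also be kicked off from the CLI; a sketch, where the product name should match what appears under Products:

# hammer product synchronize --organization 'JSS' --name 'Red Hat Enterprise Linux Atomic Host'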

Now you can create a Content View.


Create an activation key for Red Hat Atomic Host.
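The hammer equivalent looks something like this (the lifecycle environment and content view names here are illustrative and should match your own setup):

# hammer activation-key create --organization 'JSS' --name 'atomic-host' --lifecycle-environment 'Dev' --content-view 'Atomic Host'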


Now create a host group


And you are ready to build your Atomic host: select Hosts > New Host and choose the host group just created.

Provisioning a container

Now that we have our Red Hat Atomic Host up and running, we are ready to move on to provisioning a container. First we need to set up a product. Using the CLI this is done as follows:

# hammer product create --name='Containers' --organization='JSS'

Product created

# hammer repository create --name='rhel' --organization='JSS' --product='Containers' --content-type='docker' --url='https://registry.access.redhat.com' --docker-upstream-name='rhel' --publish-via-http="true"

Repository created

# hammer product synchronize --organization='JSS' --name='Containers'

[....................................................................................................] [100%]

1 task(s), 1 success, 0 fail

We now have a product called Containers; next we need to create a content view:

# hammer content-view create --organization='JSS' --name "Test Registry" --description "Test Registry"

Content view created

# hammer content-view add-repository --organization "JSS" --name "Test Registry" --repository "rhel" --product "Containers"

The repository has been associated

# hammer content-view publish --organization='JSS' --name "Test Registry"

[....................................................................................................] [100%]

So, we've created a content view called Test Registry, associated it with the repository created during the product step, then published the initial version.


Promote this to the appropriate environments:

hammer content-view version promote --organization 'JSS' --to-lifecycle-environment Dev --content-view "Test Registry" --async

Now create a container

hammer docker container create --organizations 'JSS' --locations 'Site1' --compute-resource 'Atomic Host Nonprod' --repository-name 'nex-containers-rhel' --tag "latest" --name rheltest --command bash

(There seems to be a bug in the CLI here, as the tag cannot be selected.)

hammer docker container list

---|----------|---------------------|--------|---------|--------------------
ID | NAME     | IMAGE REPOSITORY    | TAG    | COMMAND | COMPUTE RESOURCE
---|----------|---------------------|--------|---------|--------------------
1  | rheltest | nex-containers-rhel | latest | bash    | Atomic Host Nonprod
---|----------|---------------------|--------|---------|--------------------

(By the way, in the version of Satellite we tested, 6.2.5, there seems to be a bug in the GUI too, as the tag cannot be selected.)

You are now ready to start the container, so log into the Atomic host and start it as follows:

-bash-4.2# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
-bash-4.2# docker images
REPOSITORY                                                TAG     IMAGE ID      CREATED       SIZE
satellite1.justsomestuff.co.uk:5000/nex-containers-rhel  latest  e4b79d4d89ab  6 weeks ago   192.5 MB
-bash-4.2# docker run -it satellite1.justsomestuff.co.uk:5000/nex-containers-rhel:latest
[root@2ad52db9fb8d /]

The container just launched was based on the Red Hat registry. You can also use Docker Hub or a private registry. Here's how to use Docker Hub to launch a Fedora container. First, add the Docker Hub repository for Fedora to the Containers product:

hammer repository create --name='fedora' --organization='JSS' --product='Containers' --content-type='docker' --url='https://registry.hub.docker.com/' --docker-upstream-name='library/fedora' --publish-via-http="true"

# hammer repository list --content-type docker --organization 'JSS'

----|--------|------------|--------------|-----------------------------------
ID  | NAME   | PRODUCT    | CONTENT TYPE | URL
----|--------|------------|--------------|-----------------------------------
163 | rhel   | Containers | docker       | https://registry.access.redhat.com
198 | fedora | Containers | docker       | https://registry.hub.docker.com
----|--------|------------|--------------|-----------------------------------

# hammer repository info --id 198

ID:                        198
Name:                      docker
Label:                     docker
Organization:              JSS
Red Hat Repository:        no
Content Type:              docker
URL:                       https://hub.docker.com/_/fedora
Publish Via HTTP:          yes
Published At:              satellite1.justsomestuff.co.uk:5000/nex-containers-docker
Relative Path:             jss-containers-docker
Upstream Repository Name:  docker
Container Repository Name: jss-containers-docker
Product:
    ID:   44
    Name: Containers
GPG Key:
Sync:
    Status: Not Synced
Created:                   2017/03/01 15:52:21
Updated:                   2017/03/01 15:52:23
Content Counts:
    Docker Manifests: 0
    Docker Tags:      0

You can now sync the new repository up and view it in the GUI.


Add this to the Test Registry content view, publish a new version, and promote as required. You can then build a container using this repository as before.
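Sketched out with hammer, this is the same pattern as before:

# hammer content-view add-repository --organization 'JSS' --name 'Test Registry' --repository 'fedora' --product 'Containers'
# hammer content-view publish --organization='JSS' --name 'Test Registry'
# hammer content-view version promote --organization 'JSS' --to-lifecycle-environment Dev --content-view 'Test Registry' --async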

Deploying to AWS

1. Create a Compute Resource

Press Test Connection; you should get a success message.

2. Create an image from an existing AMI.


3. Create a Compute Profile

4. Create a minimal Host Group

5. Create a New Host; the Deploy On field should be the AWS compute resource you created earlier.
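For reference, steps 1 and 2 can also be scripted with hammer; a rough sketch, where the region, credentials and AMI ID are placeholders:

# hammer compute-resource create --name 'NEX AWS Services DEV' --provider 'EC2' --region 'eu-west-1' --user 'AKIAXXXXXXXX' --password 'SECRET_ACCESS_KEY'
# hammer compute-resource image create --compute-resource 'NEX AWS Services DEV' --name 'RHEL 7' --operatingsystem 'RedHat 7.3' --architecture 'x86_64' --username 'ec2-user' --user-data true --uuid 'ami-xxxxxxxx'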

If you check in AWS, you'll see your EC2 instance starting up. Unfortunately, at the time of writing, Satellite does not actually provide the private key for you to log in with (your image may of course include keys already, in which case ignore this). You can get the key as follows.

Find the compute-resource ID:

# hammer compute-resource list

---|----------------------|---------
ID | NAME                 | PROVIDER
---|----------------------|---------
8  | Atomic Host Nonprod  | Docker
1  | JSS_VSphere_Site1    | VMware
2  | JSS_VSphere_Site2    | VMware
10 | NEX AWS Services DEV | EC2
---|----------------------|---------

On the Satellite server, switch to the postgres user and run an SQL query to extract the key, using the compute resource ID from the listing above:

# su - postgres

echo 'select secret from key_pairs where compute_resource_id = 10;' | psql -d foreman -t | sed -e 's/^[ \t]*//' | sed 's/+$//' | sed "s/[[:blank:]]*$//" > /var/tmp/ec2-instance.pem
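You can then log in with the extracted key (the login user depends on the AMI; ec2-user is typical for the Red Hat image):

# chmod 600 /var/tmp/ec2-instance.pem
# ssh -i /var/tmp/ec2-instance.pem ec2-user@<instance-address>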

gPXE

One issue we faced was TFTP timing out whilst trying to provision VMs in the US from a Satellite server in the UK. One way around this is to use gPXE, which fetches the boot files over HTTP (TCP) rather than TFTP (UDP). To enable gPXE:

Copy /usr/share/syslinux/gpxelinuxk.0 to /var/lib/tftpboot.

Change your DHCP settings so the boot file is gpxelinuxk.0 (see the snippet at the end of these steps).

In Satellite, clone the Kickstart default PXELinux template to create a gPXELinux template.

In the new template, change all occurrences of @initrd to @host.url_for_boot(:initrd).

Likewise, change all occurrences of @kernel to @host.url_for_boot(:kernel).

In Satellite, go to Hosts > Operating Systems > select the OS > Templates, change the PXELinux template to the newly created gPXELinux template, and submit.
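For the DHCP step, assuming ISC dhcpd as deployed by satellite-installer, the change in /etc/dhcp/dhcpd.conf looks something like this (next-server stays pointed at your Satellite or Capsule):

# was: filename "pxelinux.0";
filename "gpxelinuxk.0";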

That’s it for now.

Bye!

Update

One thing I forgot to mention with regard to provisioning to AWS is the user data templates. A while back we had a blog post regarding cloud-init. This can be used to customise the EC2 instance as it's launched by Satellite. If you use the official Red Hat AMI in the AWS Marketplace, the cloud-init package is built into the AMI.


You can therefore use the user data templates included in Satellite with this AMI. Back in step 2 of the EC2 provisioning procedure, when creating the image, specify the AMI ID of the Red Hat Marketplace AMI. You can then clone one of the user data templates (I used the Satellite Kickstart User Data template) and customise it as you like. Here's an example:

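A minimal sketch of such a template, matching the description below; the public key and proxy address are placeholders rather than our real values:

#cloud-config
# Satellite template variables fill in the hostname
hostname: <%= @host.shortname %>
fqdn: <%= @host %>
# add our public key so we can SSH in (placeholder key)
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E... admin@example.com
runcmd:
  # the proxy must be exported before anything fetches from the internet (placeholder address)
  - export http_proxy=http://proxy.example.com:8080
  - export https_proxy=http://proxy.example.com:8080
  # install unzip, then the AWS CLI bundled installer
  - yum -y install unzip
  - curl -o /tmp/awscli-bundle.zip https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
  - unzip /tmp/awscli-bundle.zip -d /tmp
  - /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
  # a little .bash_profile customisation for root
  - echo 'export PATH=$PATH:/usr/local/bin' >> /root/.bash_profile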

One nice thing is that you can use the built-in Satellite variables (as documented in A.3 of the Red Hat Satellite Host Configuration Guide) to set some values: in our example, the hostname and fully qualified hostname, so our EC2 instance gets a proper hostname (rather than the default AWS IP-address-derived name). The next line of the YAML file adds our public key to the authorized keys so we can log in with key-based SSH. We then install the unzip package; as we used a proxy to access the internet, that needs to be set first. The next line installs the AWS CLI. Then there's some customisation of the .bash_profile for the root user.

This is a fairly simple example but, as mentioned, see our earlier article or https://cloudinit.readthedocs.io/en/latest/ for more details of how to use cloud-init.

One last thing: you can follow us on Twitter at @itsjustsomestuf 🙂
