Provisioning with Red Hat Satellite 6.2 – Part 1

This week I’m looking at provisioning with Red Hat Satellite 6.2. Satellite 6 has become a bit of a lumbering beast: it does a lot of stuff, but it’s not as straightforward as the old versions. Whilst patching remains probably the key function, the provisioning tools are quite powerful. I’ve been investigating this over the past few weeks. Here’s what I found out.

One of the first things I discovered was that although Red Hat have produced quite a bit of documentation for Satellite 6.2, because it’s become quite complex there are still bits missing. Hopefully this article will help you with the missing bits. So, on the provisioning side, Satellite can build VMs, including the VM definition itself, act as a DNS source or update DNS, act as a DHCP server, and act as a TFTP server and a puppet server. These parts I have tested. It can also provision to AWS and Docker containers; these remain to be tested (but will be once I have a suitable test environment). The aim of Satellite provisioning is to supply more or less push-button building of servers, VMs and presumably AWS EC2 instances and Docker containers. To achieve this, quite a lot of upfront configuration of Satellite is required, so let’s take a look at the prerequisites.


The following Satellite components need to be set up for efficient Satellite provisioning. Although they do not all need to be set up in advance, and it may seem like a lot of work, setting up these prerequisites is pretty much a one-off activity. As mentioned, it also greatly speeds up and simplifies the actual deployment process.

  • Lifecycle Environment: a logical way of dividing up hosts depending on the type of environment they serve, e.g. development, user testing and production
  • Content View: a subset of the Satellite content, e.g. packages and puppet modules. This will be built from specified yum repositories, which may be the Red Hat repositories provided by Satellite or custom ones (see Creating a repository and adding it to a content view). You can also add puppet modules to the content view to make those available.
  • Puppet Environment: Satellite can act as a puppet master and creates its own Puppet environments when you import Puppet modules into Satellite (see Setting up Puppet for Satellite)
  • Content Source: these are created when Satellite is installed and will either be the Satellite server itself (via an internal capsule) or other capsules.
  • Subnets: these define the network ranges, VLAN IDs and boot modes that Satellite will use
  • Compute Resource & Compute Profile: for VMs, these define the vCenters and the parameters supplied to VMware to create the VM, e.g. number of CPUs, memory, etc.
  • Host Group: all of the above can be added to a Host Group so that when the Host Group is selected, the new host form is auto-populated with the correct details.

Lifecycle Environments

Lifecycle environments are a way of logically dividing hosts depending on their function, e.g. development, user testing, production. The purpose of having different environments is to control how updates to Satellite content views are rolled out. This is probably more applicable to puppet than to patching (as patching is done on a per-server basis). For example, an updated puppet module can be rolled out to development servers first for verification. From a provisioning point of view, the host needs to be built in the environment applicable to its purpose, i.e. if it’s a dev server, build it in the development environment.
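If you prefer the command line to the GUI, lifecycle environments can also be created with hammer. Here’s a minimal sketch; the organisation and environment names are placeholders for illustration:

```shell
# Create a chain of lifecycle environments: Library -> Development -> Production
# ("Default Organization" and the environment names are placeholder values)
hammer lifecycle-environment create \
  --organization "Default Organization" \
  --name "Development" \
  --prior "Library"

hammer lifecycle-environment create \
  --organization "Default Organization" \
  --name "Production" \
  --prior "Development"
```

Each environment names its predecessor with --prior, which is how Satellite builds the promotion path.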

Content Views

As the name kind of suggests, content views are a view of Satellite content. The idea is to specify, in a view, the packages and puppet modules applicable to the host requirements. For example, if your server runs Red Hat 6.8, you would want to see the repositories that provide 6.8 packages and not RHEL 5 or 7 repositories. You may also have some custom repositories and a set of puppet modules you want to associate with RHEL 6.8 builds. So specifying a content view when you build a host ensures it gets the packages and puppet modules required.
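Content views can be managed with hammer too. A rough sketch of creating a view, adding a repository and publishing it; the organisation, view, product and repository names here are made-up examples:

```shell
# Create a content view (names are placeholders)
hammer content-view create \
  --organization "Default Organization" \
  --name "RHEL68-Base"

# Attach an existing repository to the view
hammer content-view add-repository \
  --organization "Default Organization" \
  --name "RHEL68-Base" \
  --product "Red Hat Enterprise Linux Server" \
  --repository "Red Hat Enterprise Linux 6 Server RPMs x86_64 6.8"

# Publish a new version of the view (it can then be promoted
# through the lifecycle environments)
hammer content-view publish \
  --organization "Default Organization" \
  --name "RHEL68-Base" \
  --description "Initial publish"
```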

Creating a repository and adding it to a content view

If you need to create a custom repository, in the Satellite Web GUI select

Content > Products > click Repo Discovery

Enter the URL for the YUM repository and click Discover

When the discovery has finished, tick the entries you want to include, click Create Repository and complete the form details

Go to Content > Content Views and select the content view you want to add the repository to

Add the repository to the existing ones under Yum Content, then click Save and Publish New Version.
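The same repository creation can be sketched with hammer instead of Repo Discovery; the product name, repository name and URL below are illustrative placeholders:

```shell
# Create a custom product and a yum repository under it, then sync it
# (product/repo names and URL are placeholders)
hammer product create \
  --organization "Default Organization" \
  --name "Puppetlabs"

hammer repository create \
  --organization "Default Organization" \
  --product "Puppetlabs" \
  --name "puppetlabs-el6-x86_64" \
  --content-type yum \
  --url "http://yum.example.com/el/6/products/x86_64/"

hammer repository synchronize \
  --organization "Default Organization" \
  --product "Puppetlabs" \
  --name "puppetlabs-el6-x86_64"
```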

This repository is now ready for use with Satellite provisioning. It should look something like this:

Content view screenshot

This content view contains four standard Red Hat repositories and two custom repositories, Puppet Labs and VMware Tools, and is available for the Dev lifecycle environment.

If you need to install a package from this repository on an existing client, go to the client and enter the commands as follows:

# subscription-manager list --available
Available Subscriptions
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters with Smart Management, Premium
Provides: Red Hat Enterprise Linux Resilient Storage (for RHEL Server) - Extended Update Support
Oracle Java (for RHEL Workstation)
Red Hat Software Collections (for RHEL Server)
Red Hat Enterprise Linux Atomic Host Beta
Red Hat Enterprise Linux High Availability (for RHEL Server) - Extended Update Support
Red Hat EUCJP Support (for RHEL Server) - Extended Update Support
Red Hat Enterprise Linux Server - Extended Update Support
Red Hat Beta
dotNET on RHEL Beta (for RHEL Server)
Red Hat Enterprise Linux High Performance Networking (for RHEL Server) - Extended Update Support
Red Hat Enterprise Linux Scalable File System (for RHEL Server) - Extended Update Support
Oracle Java (for RHEL Server)
Red Hat Enterprise Linux Load Balancer (for RHEL Server) - Extended Update Support
Red Hat Enterprise Linux Server
dotNET on RHEL (for RHEL Server)
Red Hat Software Collections Beta (for RHEL Server)
Red Hat Enterprise Linux Atomic Host
Red Hat S-JIS Support (for RHEL Server) - Extended Update Support
Red Hat Developer Toolset (for RHEL Server)
SKU: RH00051
Contract: 10980346
Pool ID: 8ace20165906e4af015917a55f100ddd
Provides Management: Yes
Available: Unlimited
Suggested: 0
Service Level: Premium
Service Type: L1-L3
Subscription Type: Stackable
Ends: 05/23/2017
System Type: Virtual

Subscription Name: Puppetlabs puppet repos
SKU: 1484833973677
Pool ID: 8ace201659a6d5d20159b70066c605ec
Provides Management: No
Available: Unlimited
Suggested: 1
Service Level:
Service Type:
Subscription Type: Standard
Ends: 01/12/2047
System Type: Physical

The Puppet Labs repo is now showing as available. Attach the subscription using the pool ID.
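The attach step itself is the standard subscription-manager call, using the pool ID shown in the output above:

```shell
# Attach the Puppet Labs subscription by pool ID (taken from the
# "subscription-manager list --available" output)
subscription-manager attach --pool=8ace201659a6d5d20159b70066c605ec
```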

# subscription-manager repos --list
Available Repositories in /etc/yum.repos.d/redhat.repo
Repo ID: Puppetlabs_puppet_repos_el_6x_products_x86_64
Repo Name: el 6x products x86_64
Repo URL:
Enabled: 1
Repo ID: rhel-6-server-satellite-tools-6.2-rpms
Repo Name: Red Hat Satellite Tools 6.2 (for RHEL 6 Server) (RPMs)
Repo URL:$basearch/sat-tools/6.2/os
Enabled: 1
Repo ID: rhel-6-server-extras-rpms
Repo Name: Red Hat Enterprise Linux 6 Server - Extras (RPMs)
Repo URL:$basearch/extras/os
Enabled: 0

Repo ID: rhel-6-server-rh-common-rpms
Repo Name: Red Hat Enterprise Linux 6 Server - RH Common (RPMs)
Repo URL:$releasever/$basearch/rh-common/os
Enabled: 1

Repo ID: rhel-6-server-rpms
Repo Name: Red Hat Enterprise Linux 6 Server (RPMs)
Repo URL:$releasever/$basearch/os
Enabled: 1
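Note that the Extras repo above shows Enabled: 0. Individual repositories can be toggled on the client as needed:

```shell
# Enable a repository that is currently disabled on this client
subscription-manager repos --enable rhel-6-server-extras-rpms

# ...and disable one that is not wanted
subscription-manager repos --disable rhel-6-server-extras-rpms
```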

We can check that /etc/yum.repos.d has been updated:

# ls -l /etc/yum.repos.d/
total 8
-rw-r--r--. 1 root root 2993 Jan 19 14:28 redhat.repo
-rw-r--r--. 1 root root 529 Apr 14 2016 rhel-source.repo
# cat /etc/yum.repos.d/redhat.repo
# Certificate-Based Repositories
# Managed by (rhsm) subscription-manager
metadata_expire = 1
sslclientcert = /etc/pki/entitlement/1418365267475239883.pem
baseurl =
sslverify = 1
name = el 6x products x86_64
sslclientkey = /etc/pki/entitlement/1418365267475239883-key.pem
enabled = 1
sslcacert = /etc/rhsm/ca/katello-server-ca.pem
gpgcheck = 0

Puppet has multiple versions in its repository. To install the version required:

# yum --showduplicates list puppet
Loaded plugins: package_upload, product-id, search-disabled-repos, security, subscription-manager
Puppetlabs_puppet_repos_el_6x_products_x86_64 | 1.8 kB 00:00
rhel-6-server-rh-common-rpms | 2.1 kB 00:00
rhel-6-server-rpms | 2.0 kB 00:00
Available Packages
puppet.noarch 2.6.9-2.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.10-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.11-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.12-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.12-2.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.14-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.15-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.16-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.17-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.6.18-1.el6 Puppetlabs_puppet_repos_el_6x_products_x86_64
puppet.noarch 2.7.1-1.el6

Use the version string to install a specific version:

yum install puppet-2.7.25-1.el6


Setting up Puppet for Satellite

Satellite can act as a puppet master and so install the puppet agent and modules when a server is built. To set up Satellite as the puppet master, the puppet modules need to be created and imported into Satellite. Satellite expects the modules to be in a standard format. This can be achieved as follows:

Create structure under /etc/puppet/environments/test/modules:

# puppet module generate daengkhao-mymodule
We need to create a metadata.json file for this module. Please answer the
following questions; if the question is not applicable to this module, feel free
to leave it blank.

Puppet uses Semantic Versioning ( to version modules.
What version is this module? [0.1.0]

The puppet standard requires the module to be generated in the format username-modulename. However, Satellite requires that the username portion is removed, so a rename of the directory is required:

# mv daengkhao-mymodule mymodule

For existing modules (already defined in the old puppet environment), the init.pp file created in /etc/puppet/environments/test/modules/mymodule/manifests needs to be updated with the contents of the old mymodule.pp manifest. Then any files associated with the module must be copied.

Create /etc/puppet/environments/test/modules/mymodule/files directory and copy files associated with this manifest in there.

For new modules, just populate init.pp as required.
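As a concrete illustration, here is a minimal init.pp matching the cron and directory resources checked in the --noop run further down. The resource details (script path, schedule, permissions) are assumptions made up for this example:

```shell
# Write a minimal init.pp for mymodule (the cron job and directory
# contents here are illustrative assumptions, not from a real module)
cat > /etc/puppet/environments/test/modules/mymodule/manifests/init.pp <<'EOF'
class mymodule {

  # Ensure the module's working directory exists
  file { '/admin/mymodule':
    ensure => directory,
    owner  => 'root',
    mode   => '0755',
  }

  # Run a housekeeping job nightly at 02:00
  cron { 'mymodule':
    command => '/admin/mymodule/cleanup.sh',
    user    => 'root',
    hour    => 2,
    minute  => 0,
  }
}
EOF
```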

Check for errors:

# pwd

# puppet apply mymodule/tests/init.pp --modulepath=/etc/puppet/environments/test/modules --noop
Notice: Compiled catalog for in environment production in 0.12 seconds
Notice: /Stage[main]/Mymodule/Cron[mymodule]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Mymodule/File[/admin/mymodule]/ensure: current_value absent, should be directory (noop)
Notice: Class[Mymodule]: Would have triggered 'refresh' from 2 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Finished catalog run in 0.19 seconds

If all is OK, build the module:

# puppet module build mymodule
Notice: Building /etc/puppet/environments/test/modules/mymodule for release
Module built: /etc/puppet/environments/test/modules/mymodule/pkg/daengkhao-mymodule-0.1.0.tar.gz

Now the module can be uploaded to Satellite. In the Satellite GUI, go to Content > Products > Puppet > click on the Puppet repository > Choose Files > Upload

Then add it to a content view: select the Content View > Puppet > Add puppet modules > Publish New Version > promote as required
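The upload can also be done from the command line with hammer; the product and repository names below are placeholders:

```shell
# CLI alternative to the GUI upload: push the built module archive
# into the Puppet repository (product/repo names are placeholders)
hammer repository upload-content \
  --organization "Default Organization" \
  --product "Puppetlabs" \
  --name "puppet-modules" \
  --path /etc/puppet/environments/test/modules/mymodule/pkg/daengkhao-mymodule-0.1.0.tar.gz
```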

If you need to update a module, make the changes required and update the version number in metadata.json, then build the module again. You can then upload the new version of the module to Satellite and delete the old version. You will need to publish a new content view version containing the new version of the module and promote it to the lifecycle environments as required.

Content Source

These were created when Satellite was installed and are either the Satellite server itself (via the internal capsule) or a capsule server.


Subnets

In addition to specifying gateways, netmasks and VLAN IDs for a subnet, subnets also perform an important role in allowing PXE boot. This is done by specifying the boot mode as DHCP. To get this option, the Satellite server has to be set up as a DHCP and TFTP server:

# satellite-installer --foreman-proxy-dhcp true --foreman-proxy-dhcp-gateway "" --foreman-proxy-dhcp-interface "bond0" --foreman-proxy-dhcp-nameservers "" --foreman-proxy-dhcp-range "" --foreman-proxy-dhcp-server ""
# satellite-installer --foreman-proxy-tftp true

To check this has been applied:

# hammer proxy info --name ""
Id:            1
Features:
    Puppet CA
Created at:    2016/12/16 09:17:31
Updated at:    2016/12/16 09:17:31

This allows for a boot mode of DHCP to be selected. An example subnet definition follows:

subnets screenshot
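The subnet definition shown above can also be sketched with hammer. The addresses below are placeholders; the VLAN ID matches the 604 example discussed next, and IPAM/boot mode are set for PXE booting:

```shell
# Define a PXE-friendly subnet (addresses are placeholder values;
# VLAN 604 matches the example in the text)
hammer subnet create \
  --name "vlan604" \
  --network "192.168.4.0" \
  --mask "255.255.255.0" \
  --gateway "192.168.4.1" \
  --vlanid 604 \
  --boot-mode DHCP \
  --ipam None
```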

Although we have defined Satellite as a DHCP server, it is unlikely to perform this function. In the example above, VLAN 604 already has a Windows-based DHCP server serving it. This will answer when an IP address is requested as part of the boot process; however, it will be unable to supply the boot image required, so a frig is needed: the Windows DHCP server is configured to point to the Satellite server as the TFTP server.

On the Windows DHCP server, go to Start > Administrative Tools > DHCP, go to the DHCP scope you want, then right-click and select “Scope Options”. Click “Configure Options” and select the following options:

  • 066 (Boot Server Host Name): set the value to the same as you would for “next-server” (i.e. the Satellite server)
  • 067 (Bootfile Name): set the value to the same as you would for “filename” (i.e. pxelinux.0)

dhcp screenshot


So the process is: PXE boot gets the IP address from the Windows DHCP server. When the boot image is requested, the Windows DHCP server points the client to the TFTP server running on the Satellite server and the image is picked up from there. Although the server is then built with a different IP address than Satellite is expecting, it later picks up the correct address (presumably when the server registers with Satellite?).

NOTE: This is a hack and not recommended anywhere in the Red Hat documentation. However, it works fine. An alternative (and supported) method is to use the discovery ISO, see Appendix A. However, the discovery ISO method is a less automated, more hands-on build method, particularly with VMs, as you need to manually create the VM first instead of Satellite doing this for you.

Compute Resources and Compute Profile

Compute profiles are required for VMs only and define how the VM is set up. A profile must be associated with a compute resource (the VMware vCenters), which will have already been defined.

NOTE: for Satellite to be able to create VMs, a vCenter user must be assigned to the Compute Resource with the following attributes:

  • All Privileges → Datastore → Allocate Space
  • All Privileges → Network → Assign Network
  • All Privileges → Resource → Assign virtual machine to resource pool
  • All Privileges → Virtual Machine → Configuration (All)
  • All Privileges → Virtual Machine → Interaction
  • All Privileges → Virtual Machine → Inventory
  • All Privileges → Virtual Machine → Provisioning
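Once a vCenter user with those privileges exists, registering the vCenter as a compute resource can be done via hammer. All the values below are placeholders for illustration:

```shell
# Register a vCenter as a Satellite compute resource
# (server, user, password and datacenter are placeholder values)
hammer compute-resource create \
  --name "vcenter01" \
  --provider Vmware \
  --server "vcenter01.example.com" \
  --user "satellite-svc@vsphere.local" \
  --password "changeme" \
  --datacenter "DC1"
```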

For profiles, see Infrastructure > Compute Profiles, e.g.

profile screenshot

Host Group

Now all these can be brought together as a Host Group, e.g.

host group screenshot

host group screenshot

host group screenshot


You can add the Lifecycle environments, Content Views, Puppet Environments, etc, so that when you provision a new host, all the details are completed automatically.
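A host group pulling all of this together can be sketched with hammer too. Every name below is a placeholder, the puppet environment name mimics the auto-generated names Satellite creates, and the exact flag names may vary between hammer versions:

```shell
# Tie the prerequisites together in a host group (all names are
# placeholders; flag availability may vary by Satellite version)
hammer hostgroup create \
  --organization "Default Organization" \
  --name "rhel68-dev" \
  --lifecycle-environment "Development" \
  --content-view "RHEL68-Base" \
  --environment "KT_Example_Development_RHEL68_Base_3" \
  --subnet "vlan604" \
  --compute-profile "2-Medium"
```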

That concludes part 1. In part 2 we’ll actually get on with building a host!

See ya!

Appendix A – Discovery ISO

Booting up from the foreman-discovery-image ISO (downloaded from Red Hat) allows a server/VM booted from the ISO to register with Satellite so it can then be built.

Procedure documented here:

When the server/VM boots from the ISO, you see the following screen:

iso boot

Select Manual network setup, and select the appropriate NIC:

iso boot

Complete the network details, e.g.

iso boot

On the next screen enter the IP address of the Satellite server on port 9090 and select proxy

iso boot

If there are any puppet modules, enter them on the next screen, or leave all the fields blank:

iso boot

You should now see a screen like the following:

iso boot

Once you see this screen, go to the Satellite Web GUI and Discovered Hosts. The host will appear in the list. Click the Provision button, complete the form and submit, and the host will build.





  • Rich Jerrido

    February 9, 2017

    In order to do PXE provisioning, it is not required to use the --foreman-dhcp* switches if you are using an external (e.g. other than the Satellite itself) DHCP server.

    You only need the --tftp options, and for each subnet you need to set

    IPAM – None
    Boot Mode – DHCP

    And associate the TFTP Capsule with the subnet. You don’t get the niceties of auto-suggested IP addresses, but you also don’t have to worry about running two DHCP servers on the same subnet.

  • jss-admin

    February 9, 2017

    Thanks Rich, we’ll give that a try!

  • maccumaccu

    July 14, 2017

    Thanks Rich!
    This guide is really useful, I’m configuring satellite 6.2.10 for provisioning with libvirt and vmware

