Thursday, 24 September 2015

Vagrant dev environment for Puppet Enterprise 2015 with R10K and GitLab

I've created a Vagrant dev environment that allows you to develop and deploy Puppet code using GitLab and R10K. From the project README:


Vagrant dev environment for Puppet Enterprise 2015 with R10K and GitLab.


First install Vagrant.
 # Install required oscar plugin:  
 vagrant plugin install oscar  
 # Install recommended plugins:  
 vagrant plugin install vagrant-cachier vagrant-hostmanager  
 # Clone repo:  
 git clone  
 # Build and start boxes:  
 cd vagrant-pe  
 vagrant up  
 # Deploy environments from GitLab repo to Puppet server:  
 vagrant ssh master  
 sudo r10k deploy environment -p -v  
 # Test agent:  
 vagrant ssh first  
 sudo puppet agent --test  
 cat /etc/motd  


All boxes are currently CentOS 7 and use the generic base boxes created by Puppet Labs.

gitlab: This is a GitLab server that holds the puppet/control repo used by R10K to populate the Puppet environments on the Puppet Enterprise master server. It also has a puppet/helloworld repo containing an example module referenced in the site.pp manifest.
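In a typical r10k control repo, modules such as helloworld are pulled in via a Puppetfile. A minimal sketch of what such an entry might look like (the URL and ref are assumptions, not taken from the project):

```ruby
# Hypothetical Puppetfile entry in the puppet/control repo.
# The GitLab URL and branch name are assumptions.
mod 'helloworld',
  :git => 'http://gitlab/puppet/helloworld.git',
  :ref => 'master'
```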

If you have the vagrant-hostmanager plugin installed, you can access the GitLab web interface from your host's web browser at http://gitlab.

This box also has the puppet agent installed.

master: This is a Puppet Enterprise server. It has been configured to pull down Puppet code from the repos on gitlab.
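The wiring between the master and GitLab is typically defined in an r10k.yaml file on the master. A sketch of what that configuration might look like (the cache dir, source name and remote URL are assumptions based on the repo layout described above):

```yaml
# Hypothetical r10k.yaml; values are assumptions, not taken from the project.
cachedir: '/var/cache/r10k'
sources:
  control:
    remote: 'http://gitlab/puppet/control.git'
    basedir: '/etc/puppetlabs/code/environments'
```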

If you have the vagrant-hostmanager plugin installed, you can access the PE console from your host's web browser at https://master.

first: This is a server with the PE agent installed.

Thursday, 20 March 2014

Modern Windows Deployment

Microsoft deployment tools have come a long way in recent years. If you know what you are doing, you can be up and running, ready to deploy Windows to devices in your organisation, in a surprisingly short amount of time. For deployment of Windows to end user devices such as PCs and laptops, the Microsoft tools are so good that there really is no point in using anything else. Whether your organisation opts for the free Microsoft Deployment Toolkit (MDT) or pays extra for System Center Configuration Manager (SCCM/ConfigMgr), the end result is basically the same for bare-metal devices:
  1. The device boots WinPE. This can be achieved with network booting using PXE and WDS, or simply by booting from a USB stick.
  2. The WinPE instance connects to a standard Windows file share hosted on a Windows server (known as a deployment share). If you're using SCCM, this will be a distribution point, but with MDT on its own it can be literally any file share. Tip: for scalability with MDT, one can use DFS-R to replicate your deployment share at each site.
  3. The WinPE instance downloads a task sequence from the deployment share. This is a list of steps to be executed as part of the deployment.
  4. After partitioning and formatting storage volumes, an operating system image (WIM file) is applied to the device.
  5. The task sequence applies drivers and applications and carries out other tasks such as joining an Active Directory domain.
Admittedly, the above is a slight over-simplification, but it gives a flavour of the speed and simplicity of a basic modern deployment process. With a decent network connection, Windows can be deployed to a device in less than 30 minutes. Additions to the process can carry out more advanced tasks. An example of this is the User State Migration Tool (USMT) that can migrate existing user files and settings from an older operating system to the new install.
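Step 4 above, applying the WIM image, boils down to a handful of commands that the task sequence runs for you. A sketch of the manual equivalent, run inside WinPE (the drive letters, script name and image index are placeholder assumptions):

```
rem Illustrative only; drive letters, script name and index are assumptions.
rem Partition and format the disk from a prepared diskpart script:
diskpart /s partition-script.txt
rem Apply the OS image from the deployment share to the Windows volume:
dism /Apply-Image /ImageFile:N:\install.wim /Index:1 /ApplyDir:W:\
rem Make the applied image bootable:
bcdboot W:\Windows
```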
After the automated process completes, the device should be ready to use. Of course, in a large organisation, IT will want to manage the entire life-cycle of the device, so that applications can be kept up to date. Other products such as SCCM or (my favourite) SpecOps Deploy will be required for this.
Of course, the real work takes place before live deployments start, in the process commonly known as "image engineering". This is where the IT professional builds a Windows image for deployment and designs the deployment task sequence. The great thing about the Microsoft tools is that the process of building a master image (including patches, applications and runtimes such as .NET) can itself be automated. A video created by my colleague Raj Sumbal demonstrates the essential simplicity of this process using MDT.
To get up to speed with modern Windows deployment techniques, I can recommend nothing better than the free online Windows 8.1 Deployment Jump Start course from Microsoft Virtual Academy.

Monday, 9 January 2012

Quickly document a procedure in Windows 7


I’ve just found a great tool included in Windows 7.  It allows you to run through a set of GUI tasks (e.g. “left-click this icon”, “type this text”, “right-click this tab”) on your computer and then automatically outputs an MHTML file containing a textual description of what you just did, complete with screenshots for every step.  The tool is called the Problem Steps Recorder and can be started from the Windows 7 search box or by running PSR.exe.

The tool has two big benefits as far as I can see:

  • You can use it to quickly generate user documentation for your intranet or for answering email queries.
  • You can ask your users to start the tool and email you the results if you want to know exactly what they were doing to generate an error.


Wednesday, 4 January 2012

Why are you still using Windows XP?

There’s an interesting discussion at Slashdot about the reasons some consumers and enterprises have not upgraded their desktops from Windows XP.  Speaking from experience, aside from the obvious, I can give a few additional reasons why an organisation might still have a large Windows XP estate:
  • Different management priorities.  Resources are always finite and as a stable mature operating system, Windows XP was not necessarily seen as something that needed changing.  Upgrades to servers, networks and enterprise applications may have absorbed the time and money available, especially as these components often continued to support the aging OS.
  • Loss of skills. With the perceived failure of Windows Vista, the last large upgrade of Windows was probably more than eight years ago for many enterprises.  The skills and experience to conduct a large rollout of Windows may have dispersed from the organisation, transforming what was once more routine into a major project.
  • Alternative technologies.  Some organisations were early adopters of remote desktop solutions based on VDI or terminal services.  They may have thought that these solutions would be a panacea for the “desktop problem”, allowing them to phase out their desktop estate, to be replaced with thin client devices.  Vendors of these solutions often oversell the advantages of these technologies without highlighting the limitations.  When it turned out that thin clients weren’t suitable for a large number of their users and applications, they were stuck with large numbers of Windows XP desktops and no budget or business appetite for an upgrade.
But the tide is turning and the evidence suggests that remaining users of Windows XP are finally upgrading.

Tuesday, 3 January 2012

Authenticating Linux against Active Directory

In my first blog article I mentioned authentication of Linux against Active Directory.  I thought I would expand on that here.
Many IT organisations are heterogeneous. By this, I mean that they run and support different hardware and various operating systems. The main advantage of this diversity is that the applications demanded by the business can be run on their natural platform.  Although some .NET applications can be run on a Linux/Apache stack via the mono project, it goes against the grain; it is far easier to provision a Windows server with IIS.  Conversely, a PHP/MySQL setup is going to be easier to support running under Linux with httpd than on Windows. The disadvantage of heterogeneity is duplication of effort.  Too many IT departments reinvent the wheel when they roll out a new platform, resulting in multiple storage infrastructures, hypervisors, DNS servers etc.
Active Directory is Microsoft’s X.500-style directory services product.  I believe it is better than many alternatives and one of the best products that Redmond has produced.  AD provides various authentication and access methods including good implementations of standards such as LDAP and Kerberos.  This means that AD can be used as an authentication and directory provider for other platforms besides Windows.
If you need to install Linux servers or workstations in your enterprise, you can use AD for authentication and user/group lookup.  This will allow your users to access your Linux devices with the same username and password as they use for Windows.  It also removes the need to maintain a parallel directory service such as OpenLDAP.
Here is my recipe for authenticating Linux against Active Directory.  I used CentOS 6, but you should be able to adapt the instructions to other Linux distributions or even to other Unix or Unix-like operating systems such as Oracle Solaris or BSD.  Some familiarity with Linux, its command line and a text editor such as vi is required.
  1. Set your host up on your network.
  2. Set the host’s hostname and DNS domain (the FQDN should be set correctly).
  3. Ensure forward and reverse DNS resolution of your Active Directory domain controllers from the Linux host works, eg:
    [root@linuxserver /]# host <dc-hostname>
    <dc-hostname> has address <dc-ip-address>

    [root@linuxserver /]# host <dc-ip-address>
    <reverse-record> domain name pointer <dc-hostname>
  4. Ensure forward and reverse DNS resolution of the Linux host from other network hosts works, eg:
    C:\> nslookup linuxserver

    C:\> nslookup <linuxserver-ip-address>
  5. Configure NTP and ensure the system clock is correct.  It is vital that your Linux host’s time is synchronised with the time on your AD domain controllers, otherwise Kerberos will not work:
    [root@linuxserver ~]# vi /etc/ntp.conf
    Add some valid NTP servers, eg:
    server <ntp-server-1>
    server <ntp-server-2>

    [root@linuxserver ~]# chkconfig ntpd on
    [root@linuxserver ~]# /etc/init.d/ntpd start
  6. Install packages:
    [root@linuxserver /]#  yum install samba-common samba-winbind pam_krb5 krb5-workstation oddjob-mkhomedir
  7. Enable winbind:
    [root@linuxserver /]#  chkconfig winbind on
  8. Join the Linux host to AD and configure authentication and the name service. We are using winbind for user and group naming and Kerberos for authentication.  Enter or copy and paste the command below. Replace the parameters, such as YOURDOMAIN.COM, with details that are correct for your organisation. Replace $username with the login name of an AD user with permissions to join computers to your domain. You will be prompted for a password after you have entered the command. You may see errors related to DNS. As long as you have followed the previous steps correctly, these can be safely ignored:
    [root@linuxserver /]#  authconfig --updateall --enablewinbind --disablewinbindauth --smbsecurity=ads --smbworkgroup=YOURDOMAIN --smbrealm=YOURDOMAIN.COM --smbservers="<dc1>,<dc2>" --winbindtemplatehomedir=/home/%U --winbindtemplateshell=/bin/bash --enablewinbindusedefaultdomain --winbindjoin=$username --enablemkhomedir --enablelocauthorize --enablekrb5 --krb5kdc="<dc1>,<dc2>" --krb5realm=YOURDOMAIN.COM --enablekrb5kdcdns --enablekrb5realmdns
  9. Tweak your winbind settings to achieve the following:
    • allow for consistent UIDs across Linux boxes by generating them algorithmically using AD SIDs
    • expand nested AD groups up to 10 levels
    • normalize AD group names for Linux (eg replace spaces with underscores):
    [root@linuxserver /]#  vi /etc/samba/smb.conf
    The relevant section of the file should look like this after editing:
    workgroup = YOURDOMAIN
    password server = <dc1> <dc2>
    realm = YOURDOMAIN.COM
    security = ads
    idmap config YOURDOMAIN:backend = rid
    idmap config YOURDOMAIN:base_rid = 500
    idmap config YOURDOMAIN:range = 500-1000000
    #idmap uid = 16777216-33554431
    #idmap gid = 16777216-33554431
    template homedir = /home/%U
    template shell = /bin/bash
    winbind use default domain = true
    winbind offline logon = false
    winbind expand groups = 10
    winbind normalize names = yes
  10. Restart winbind and verify:
    [root@linuxserver /]# service winbind restart
    [root@linuxserver /]# wbinfo -i $username
    [root@linuxserver /]# getent passwd $username
    [root@linuxserver /]# getent group Domain_Admins
  11. Create and verify a Kerberos keytab using an AD account with required privileges:
    [root@linuxserver /]# net ads keytab create -U $username
    [root@linuxserver /]# klist -k
  12. Change default permissions for auto-created home directories:
    [root@linuxserver /]# vi /etc/oddjobd.conf.d/oddjobd-mkhomedir.conf
    Edit any lines in the file that look like this:
    <helper exec="/usr/libexec/oddjob/mkhomedir -u 0022"
    They should look like this instead:
    <helper exec="/usr/libexec/oddjob/mkhomedir -u 0077"
  13. Allow only domain admins to log in to the server (or any other AD group of your choice, as appropriate):
    [root@linuxserver /]# echo "+ : root Domain_Admins : ALL" >> /etc/security/access.conf
    [root@linuxserver /]# echo "- : ALL : ALL" >> /etc/security/access.conf
    Allow only domain admins to use sudo (or any other AD group of your choice as appropriate):
    [root@linuxserver /]# echo "%Domain_Admins ALL=(ALL) ALL" >> /etc/sudoers
If you’ve managed to get through all this, you should now have an AD-authenticated Linux server.  You should be able to log in at the console or via SSH using the username and password of an AD user in the group you specified above.
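Two of the smb.conf settings used above can be illustrated with a quick sketch. The rid idmap backend derives Unix IDs deterministically from the RID portion of an AD SID, which is what keeps UIDs consistent across Linux boxes, and winbind normalize names rewrites spaces in AD names to underscores, which is why the group is looked up as Domain_Admins rather than "Domain Admins". Roughly (the RID formula below is a simplification of what the backend does, and the example RID is hypothetical):

```shell
# Sketch of the idmap_rid mapping: unix_id = range_low + (rid - base_rid).
# Values mirror the smb.conf above (base_rid = 500, range = 500-1000000).
rid_to_uid() {
  rid=$1; base_rid=500; range_low=500
  echo $(( range_low + rid - base_rid ))
}

# Sketch of "winbind normalize names = yes": spaces become underscores.
normalize_name() {
  printf '%s\n' "$1" | tr ' ' '_'
}

rid_to_uid 1106                  # prints 1106 (hypothetical group RID)
normalize_name "Domain Admins"   # prints Domain_Admins
```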
As an added bonus, try SSO using Kerberos:
  1. Download and start Quest PuTTY from a domain-joined Windows workstation.
  2. Specify the hostname of your Linux box and connect.
  3. If you have configured everything correctly, your Kerberos ticket should be passed from your domain-joined Windows workstation via Quest PuTTY to the Linux server, allowing you access to the Linux server without having to type your password again.  True cross-platform Single Sign On!

Wednesday, 14 December 2011

MSI repackaging and Microsoft’s Orca tool

A common task required of those of us who support a large number of managed Windows computers is software repackaging.  The process of repackaging allows you to take a proprietary executable installer provided by a vendor and from it create a Microsoft Installer file (MSI file) suitable for distribution using an enterprise deployment tool such as SCCM. In recent years vendors have become better at supplying MSI files.  This saves you the task of completely repackaging the software, but there are often changes you might wish to make to the default behaviour of the vendor-supplied installer.

This is where Orca comes in.  I’m surprised how unknown this tool is amongst systems administrators.  Although aimed primarily at developers, Orca should be in the armoury of any IT professional with responsibility for Windows application deployment. Orca is a database table editor for MSI files.  It allows you to inspect and amend the underlying structure of an MSI file and to save any changes you wish to make to a separate transform file (MST file).  This means you can keep your original vendor MSI file intact and yet apply organisation-specific changes at install time using the MST file containing your changes.
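Applying your transform at install time is then a single msiexec command. A sketch (the file names are placeholders, and the /qn switch for a silent install is optional):

```
rem Illustrative only; vendor.msi and yourorg.mst are placeholder names.
msiexec /i vendor.msi TRANSFORMS=yourorg.mst /qn
```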

You can find a basic tutorial here and another here.  Orca can be downloaded here or as part of the Microsoft Windows SDK here.