This isn't just another trivial automated installation script, although it started out that way. Basic installation presets led to integrated bug workarounds, defaults for many applications and servers, more features, and so on. While you may disagree with some of my package choices, they were selected for my clients - not you. Change it if you have different needs. First, a little background on my deployments.
All of my clients have cheap desktop systems or laptops, usually outdated. Almost any CPU, chipset, GPU, and drive configuration. They're either stand-alone or connected together on small Ethernet networks. Some have broadband, some only dial-up (POTS). Ages vary from toddlers to senior citizens. A few are Windows gamers. This mix results in a wide variety of system hardware, peripherals, application requirements, and configurations. I've had to deal with most every type of kernel, application, and hardware bug. Every deployment unearths a new bug to fight. Some of these are Ubuntu's fault but many are upstream.
Inevitably I spend many hours doing full OS conversions to Ubuntu or dual-boot configurations. I've found that using a Live CD to install Ubuntu is about 4x faster than installing Windows when drivers, updates, and application installs are figured in. While I could set up slipstream builds of Windows, I don't install it often enough to bother, and the variety of versions (Home, Pro, upgrade, OEM,...) and licenses makes it impractical. Relatively speaking, I spend about 3x as long transferring documents, settings, and game/application files (scattered all over C:) to Ubuntu as I do installing either it or Windows. But I'll take any time savings I can get.
A while back, when Ubuntu 10.04 (Lucid Lynx) was released, I decided to streamline my installations. This wasn't just to save time. I also needed to make my installations more uniform as I couldn't remember all the various tweaks and bug fixes that I performed from one installation to the next.
I had several goals for this project, not necessarily all at the beginning as some were the result of test installs, client feedback, and feature creep.
- Fix all the bugs that my clients encountered on their existing installs plus all the other Ubuntu annoyances I've been manually correcting.
- Do everything the "correct way" instead of blindly following HOW-TOs from amateurs that involved script and text file hacking that would be lost on the next update. I had to learn proper use of Gconf, PolicyKit, Upstart, init scripts, and dpkg.
- Configure all of the network features that my clients had asked for, usually file or peripheral sharing. Internet content filtering for kids was a requirement.
- Secure remote access and administration. It's bad enough when a client has a software problem. Having to waste time with an on-site visit is idiotic when it's not an Internet access problem and a broadband connection is available. The same kickstart configuration can be used for both an "administration" system as well as clients. Having them nearly identical makes both remote and verbal support easier.
- Make it easier to obtain diagnostic and status information, for me and the client.
- Research applications that meet customer needs and are stable. Configure them so the customer doesn't need to.
- Document everything, especially anything I spent significant time researching.
On all of these I mostly succeeded. There are still a few gaps, but they're minor (for my deployments at least), and after working on this for 18 months I needed to get on with my life. I figure that after a few million deployments I should break even. I'm now busy updating the dozen or so installations I currently have.
So what's in it? The base is just a plain 10.04 (i386 or amd64) installation. Two reasons for that: it's the LTS release, and I didn't have time to upgrade to newer releases or work around their new bugs. It's supported for another year or so. I'll probably update it for 12.04 after that release (and clean up my code). Highlights:
Apache. Used for sharing the public directory (see below) and accessing the various web-based tools. The home page is generated from PHP and attempts to correct for port-forwarding (SSH tunnel) if it detects you are not using port 80.
Webmin. It's the standard for web-based administration. I added a module for ddclient (Dynamic DNS). The module is primitive but usable and I fixed the developer's Engrish.
DansGuardian. Probably three months' work on just this. For content filtering there isn't really anything else. Unfortunately it has almost no support tools so I had to write them. Most of these have been announced in previous blog postings, although they've been updated since then. The most complicated is "dg-squid-control" which enables/disables Squid, DansGuardian, and various iptables rules. Another loads Shalla's blacklist. DansGuardian doesn't have system group integration so I wrote "dg-filter-group-updater" to semi-integrate it. There are four filter groups - no access, restricted (whitelist with an index page), filtered, and unrestricted. I added a Webmin module for it that I found on SourceForge. It's not great but makes it easier to modify the grey and exception lists. Included are lists I wrote that allow access to mostly kid sites (a couple of hundred entries). The entries have wiki-style links in comments that are extracted by "dg-filter-group-2-index-gen" to create the restricted index page. There's a How-To page for proxy configuration that users are directed to when they try to bypass it.
The only limitation is that browser configurations are set to use the proxy by default, but dg-squid-control doesn't have the ability to reset them if the proxy is disabled. I spent two weeks working on INI file parsing functions (many applications still use this bad Windows standard for configuration files). While they seem to work, I need to significantly restructure the tool to make use of them.
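For the curious, the heart of an INI lookup can be done with two sed passes. This is a simplified sketch, not my actual functions (which have to cope with quoting and other quirks), and the function name is made up:

```shell
#!/bin/sh
# ini_get FILE SECTION KEY - print the value of KEY in [SECTION].
# Simplified: ignores quoting, continuation lines, and duplicate sections.
ini_get() {
    file=$1 section=$2 key=$3
    # First pass: cut out the lines from [SECTION] to the next [header].
    # Second pass: strip "key =" and print what's left; keep only one match.
    sed -n "/^\[$section\]/,/^\[/p" "$file" \
        | sed -n "s/^[[:space:]]*$key[[:space:]]*=[[:space:]]*//p" \
        | head -n 1
}
```

So `ini_get someapp.ini network proxy` would print the proxy value from the [network] section.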
DansGuardian had no development for a few years, but recently a new maintainer took over and patches are being accepted. Hopefully full system account integration will be added.
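For reference, each filter group gets its own dansguardianfN.conf, and the group's behavior hinges on the groupmode setting (0 = banned, 1 = filtered, 2 = unfiltered). An illustrative excerpt, not a copy from my configs - the whitelist-style "restricted" group is a filtered group whose banned list blocks everything and whose exception lists allow the approved sites:

```
# dansguardianf2.conf (illustrative excerpt)
groupname = 'restricted'
groupmode = 1
```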
UFW. The Uncomplicated Firewall is a front-end to iptables and there is a GUI for it. One feature it has is application profiles, which make it easy to create ready-to-use filter rules. I created about 300 of them for almost every Linux service, application, or game (and most Windows games under Wine).
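Profiles are just INI-style stanzas dropped into /etc/ufw/applications.d/. A sketch for a hypothetical game (the name and ports are made up):

```
[Example-Game]
title=Example Game
description=Multiplayer server for Example Game
ports=27015/tcp|27005:27020/udp
```

After that, `sudo ufw allow Example-Game` enables the whole rule set at once.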
File sharing. The /home/local directory is for local (non-network) file sharing between users on the same system. There is also a /home/public directory that is shared over Samba, HTTP, FTP, and NFS. WebDAV didn't make the cut this time around.
Recovery Mode. I added many scripts to the menu for status information from just about everything. Several of my tools are accessible from it.
SSH server. You make a key, client_administrator_id_dsa, with ssh-keygen (it should be encrypted with a passphrase) and include the public part (*.pub) in the kickstart_files/authentication sub-directory. It is added to the SSH configuration directory on every system. Using another tool, "remote-admin-key-control", system owners (sysowner group) can enable or disable remote access. This is for several reasons including privacy, liability, and accounting (for corporate clients where the person requesting support may not have purchase authority).
When the remote-admin-key-control adds the key to the administrator account ~/.ssh/authorized_keys, you can connect to the system without a password using the private key (you still need to enter the key passphrase). The radmin-ssh tool takes this one step further and forwards the ports for every major network service that can function over ssh. It also shows example command lines (based on the current connection) for scp, sftp, sshfs, and NFS. You still need the administrator password to get root access.
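A stripped-down version of the forwarding logic looks like this; the port list and hostname are illustrative, and radmin-ssh itself covers many more services:

```shell
#!/bin/sh
# Build -L forwards for a few services (80 = home page, 631 = CUPS,
# 10000 = Webmin), offsetting local ports by 10000 to avoid clashing
# with the same services running on the administration system.
client=client.example.com
fwd=""
for p in 80 631 10000; do
    fwd="$fwd -L $((p + 10000)):localhost:$p"
done
# Print the command instead of running it, so you can inspect it first
echo ssh$fwd "administrator@$client"
```

With the session up, http://localhost:10080/ shows the client's home page through the tunnel.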
X2Go. Remote desktop access that's faster than VNC. Uses SSH (and the same key).
OpenVPN. A partially configured Remote Technical Support VPN connection is installed and available through Network Manager. If the client system is behind a firewall that you can't SSH through, the client can activate this VPN to connect to your administration system so that you can SSH back through it. Rules for iptables can be enabled that prevent the client accessing anything on the administration system. It connects using 443/udp so should work through most firewalls.
Books and guides. Located in the desktop help menu (System > Help) is a menu entry that opens a directory for books. My deployments have subdirectories with Getting Started with Ubuntu 10.04 - Second Edition from the Ubuntu Manual Project and OpenOffice.org user guides. You can easily add more as the kickstart script grabs everything in its local-books subdirectory. For the end-user I wrote networks-and-file-sharing-help.html (in the same help menu).
For the installer the main source of documentation is the kickstart script itself. I got a little carried away with comments. The next major document is TODO.html which is added to the administrator's home directory. It was intended to list post-install tasks that needed to be completed since there are many things the installer can't do (like compile kernel modules). After adding background information on the various tasks, troubleshooting help, and example commands, it's basically another book. You should read it before using the kickstart script.
Scanner Server. Allows remote access to a scanner through a web interface. Simpler than using saned (but that is also available if you enable it). It had several bugs so I fixed them and added a few features (with help from an Ubuntu Forums member, pqwoerituytrueiwoq). Eventually we hit the limit of what it could do, so pqwoerituytrueiwoq started writing PHP Server Scanner as a replacement. For a 12.04 release I will probably use that instead. I wrote "scanner-access-enabler" to work around udev permission problems with some scanners (especially SCSI models).
Notifications. Pop-up notices will be shown from smartd, mdadm, sshd, and OpenVPN when something significant happens. Without the first two the user doesn't know about pending drive problems until the system fails to boot. I've also had them turn the system off when I was in the process of updating it and the SSH notification helps prevent that. The OpenVPN notification is mostly for the administration system and includes the tunnel IP address of the client. OpenSSH has horrible support for this kind of scripting. OpenVPN's scripting support is absolutely beautiful.
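As an illustration of that scripting support, a minimal client-connect hook might look like the following. This is a sketch, not the shipped script; OpenVPN exports the peer name and tunnel IP in the common_name and ifconfig_pool_remote_ip environment variables for these hooks:

```shell
#!/bin/sh
# Sketch of an OpenVPN client-connect hook (illustrative).
# OpenVPN exports the peer name and tunnel IP to connect scripts.
msg="Remote support VPN up: ${common_name:-unknown} at ${ifconfig_pool_remote_ip:-?}"
# Show a desktop pop-up if possible, otherwise fall back to syslog
if command -v notify-send >/dev/null 2>&1; then
    notify-send "OpenVPN" "$msg" || true
else
    logger -t openvpn-notify "$msg" || true
fi
```

It gets wired in with `client-connect /path/to/script` plus `script-security 2` in the server configuration.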
Webcam Server. A command-line utility that I wrote a GUI for. It has a Java applet that can only be accessed locally but a static image is available from the internal web server to anywhere.
BackupPC. It uses its default directory for backups so don't enable it unless you mount something else there. A cron job will shut the system down after a backup if there are no users logged in. It has been somewhat hardened against abuse with wrapper scripts for tar and rsync.
There are many bugs, both big and small, that are either fixed or worked around. The script lists the bug numbers where applicable. The TODO document lists a bunch also. Some packages were added but later removed (Oracle/Sun Java due to a licensing problem, Moonlight since it didn't work with any Silverlight site I tested).
There are some limitations to Ubuntu's kickstart support. I'm not sure why I used kickstart in the first place. Perhaps the name reminded me of KiXtart, a tool I used when I was a Windows sysadmin. Kickstart scripts are the standard for automating Red Hat installations (preseeding is the Debian standard) but Ubuntu's version is a crippled clone. In part it acts like a preseed file (it even has a "preseed" command) but it also has sections for scripts that are exported and executed at different points during the installation. About 90% of the installation occurs during the "post-install" script. The worst problem with Ubuntu's kickstart support is that the scripts are exported twice and backslashes are expanded both times. This means that every backslash has to be quadrupled, which gets really ugly with sed and regular expressions. Because of this you'll see "original" and "extra slashy" versions of many command lines. I wrote quad-backslash-check to find mistakes.
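The arithmetic is easy to demonstrate. Each export pass consumes one level of backslash escaping (simulated below with printf '%b', which performs a single round of escape expansion), so four backslashes in the kickstart file are needed for every one that sed should eventually see:

```shell
#!/bin/sh
# The sed command we actually want has a single backslash:
want='s/\./_/'
# The "extra slashy" form written into the kickstart file needs four:
slashy='s/\\\\./_/'
pass1=$(printf '%b' "$slashy")   # first export:  4 backslashes -> 2
pass2=$(printf '%b' "$pass1")    # second export: 2 backslashes -> 1
[ "$pass2" = "$want" ] && echo "survives both expansions intact"
```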
The other problem is that the way the script is executed by the installer hides line numbers when syntax errors occur, making debugging difficult. I wrote quote-count and quote-count-query to find unmatched quotes (and trailing escaped whitespace that was supposed to be newlines) which were the most common cause of failure.
I've made an archive of my kickstart file, its support files, and configuration files for various services on my server for you to download (12.5MB, MD5: b5e79e6e287da38da75ea40d0d18f07f). The script, error checking and ISO management tools, and server configuration files are in the "kickstart" sub-directory. A few packages are included because they are hard to find but others are excluded because of size. Where a package is missing there is a "file_listing.txt" file showing the name of the package I'm using. My installation includes the following, which you should download and add back in:
Amazon MP3 Downloader (./Amazon/amazonmp3.deb)
DansGuardian Webmin Module (./DansGuardian Webmin Module/dgwebmin-0.7.1.wbm)
Desura client (./Desura/desura-i686.tar.gz)
G'MIC (./GMIC/gmic_1.5.0.7_*.deb)
Gourmet (./Gourmet/gourmet_0.15.7-1_all.deb)
VMware Player (./VMware/VMware-Player-*.bundle)
VMware Player is optional. It has kernel modules so the kickstart script only retrieves the first install file returned from the web server whose name matches the architecture. It puts it in /root for later installation.
The target systems need network-bootable Ethernet devices, either with integrated PXE clients or a bootable CD from ROM-o-matic.
You need a DHCP server that can send out:
filename "pxelinux.0"
next-server
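Put together as an ISC dhcpd.conf fragment it looks something like this; the addresses are placeholders for your own subnet and TFTP server:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.150;
    filename "pxelinux.0";      # boot file the PXE client fetches
    next-server 192.168.1.2;    # IP address of the TFTP server
}
```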
The tftp server needs to serve the pxelinux.0 bootstrap, vesamenu.c32, and the menu files. These are available from the Ubuntu netboot images. The bootstrap and vesamenu.c32 are identical between the i386 and amd64 versions, only the kernel, initrd, and menus are different. You can use my menu files instead of the standard set in the netboot archive. The most important is the "ubuntu.cfg" file. You'll notice that my menu files list many distros and versions. Only the utility, Knoppix, and Ubuntu menus function fully. The rest are unfinished (and probably obsolete) experiments. FreeDOS is for BIOS updates.
My tftp server is atftpd which works well except it has a 30MB or so limit on tftp transfers. This only affects the tftp version of Parted Magic (they have a script to split it up into 30MB parts). It is started by inetd on demand.
I use loopback-mounted ISOs for the kickstart installs and all LiveCD netboots. Because I have so many, I exceeded the default maximum number of loopback devices available. I set max_loop=128 on my server's kernel command line to allow for many more.
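On a GRUB 2 server that means editing /etc/default/grub and regenerating the config; for example (keep whatever options you already have alongside the new one):

```
# /etc/default/grub (excerpt) - allow up to 128 loop devices
GRUB_CMDLINE_LINUX_DEFAULT="quiet max_loop=128"
```

Then run `sudo update-grub` and reboot for it to take effect.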
The Ubuntu Minimal CD ISOs are the source for the kernel and initrd for the kickstart install. The architecture (and release) of the kernel on these ISOs must match the architecture of Ubuntu you want to install on the target system. You'll probably want both the i386 and amd64 versions.
PXE Linux doesn't support symlinks so my ISOs are mounted in the tftp directory under ./isomnt. Symlinks to the ISOs are in ./isolnk and are the source of the mounts. I set it up this way originally because the ISOs were in /srv/linux in various subdirectories so having the links in one place made it easier to manage. But my ISO collection grew too big to manage manually so I wrote "tftp-iso-mount" that creates the mountings for me. It searches through my /srv/linux directory for ISO files and creates isomnt_fstab.txt that can be appended to fstab. It also deletes and recreates the isomnt and isolnk directories and creates the "isomnt-all" script to mount them.
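The entries tftp-iso-mount writes to isomnt_fstab.txt are ordinary loopback mounts, one per ISO, along these lines (paths illustrative):

```
# appended to /etc/fstab by way of isomnt_fstab.txt
/srv/tftp/isolnk/ubuntu-10.04-alternate-i386.iso  /srv/tftp/isomnt/ubuntu-10.04-alternate-i386  iso9660  loop,ro  0  0
```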
The ISOs are accessed through both NFS and Apache. I originally intended to use NFS for everything but I found that debian-installer, which performs the installation and executes the kickstart script (also on the "alternate" ISOs), doesn't support NFS. So I had to set up Apache to serve them. The Apache configuration is rather simple. There are a few symlinks in /var/www that link to various directories elsewhere. One named "ubuntu" links to /srv/linux/Ubuntu. The kickstart support files are placed in /srv/linux/Ubuntu/kickstart_files and are accessed via the link. NFS is still used for booting LiveCDs (for bug testing and demos). There is also a "tftp" symlink to /srv/tftp used for local deb loading (see below).
The kickstart script itself, Ubuntu-10.04-alternate-desktop.cfg, is saved to /srv/tftp/kickstart/ubuntu/10.04/alternate-desktop.cfg after being backslash and quote checked.
Several preseed values are set with the "preseed" command at the beginning of the script. You'll probably want to change the time zone there. License agreements are pre-agreed to as they will halt the installation if they prompt for input.
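The directives look like d-i preseed lines wrapped in kickstart's "preseed" command. These two are illustrative guesses at the form, not copies from my script - check the question names against your installer:

```
# set the time zone (change to yours)
preseed time/zone string America/New_York
# pre-accept a license prompt so the install doesn't stall
preseed msttcorefonts/accepted-mscorefonts-eula boolean true
```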
As I mentioned earlier, the vast majority of the work happens in the post-install script, which executes after the base Ubuntu packages are installed. The most important variable to set is $add_files_root, which must point to the URL and directory of your web server where the rest of the kickstart support files are located (no trailing slash). The script adapts for 32-bit and 64-bit packages as needed based on the architecture of the netboot installer. There is also a "late_command" script that executes near the end of the installation, after debian-installer creates the administrator account (which happens after the post-install script finishes).
The debug variables are important for the initial tests. The $package_debug variable has the most impact as it will change package installations from large blocks installed in one pass (not "true") to each package individually ("true"). When true, it slows down installation significantly but you can find individual package failures in the kickseed-post-script.log and installer syslog (located in /var/log/installer after installation). Setting $wget_quiet to null will guarantee a huge log file. The $script_debug variable controls status messages from the package install and mirror selection functions.
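The effect of $package_debug on the install functions can be sketched as follows; the names are illustrative, not my exact code, and the apt command is kept in a variable so the logic can be dry-run tested:

```shell
#!/bin/sh
# install_pkgs PKG... - one apt run normally, per-package when debugging.
# APT is overridable so the logic can be exercised without touching apt.
APT=${APT:-"apt-get -y install"}
package_debug=true
install_pkgs() {
    if [ "$package_debug" = "true" ]; then
        # slow, but a failure is attributable to one package in the log
        for p in "$@"; do
            $APT "$p" || echo "FAILED: $p"
        done
    else
        $APT "$@"
    fi
}
```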
The $mirror_list variable contains a list of Ubuntu mirrors (not Medibuntu or PPAs) that should have relatively similar update intervals. This is used by the fault-tolerant mirror selection function, f_mirror_chk, that will cycle through these and check for availability and stability (i.e., not in the middle of sync). The mirrors included in the list are good for the USA. These are exported to the apt directory so that the apt-mirror-rotate command can use them to change mirrors quickly from the command line or through the recovery mode menu. When a package fails to be installed via the f_ftdpkg and f_ftapt functions, another mirror will be tried to attempt to work around damaged packages or missing dependencies.
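The core of the fault-tolerant selection can be sketched like this (simplified; the real f_mirror_chk also checks that the mirror isn't mid-sync, and the mirror names here are placeholders):

```shell
#!/bin/sh
# Try each mirror in order; print the first one that answers.
mirror_list="http://mirror.example.edu/ubuntu http://mirror2.example.org/ubuntu"
pick_mirror() {
    for m in $mirror_list; do
        # --spider: check availability without downloading anything
        if wget -q --spider -T 10 "$m/dists/lucid/Release"; then
            echo "$m"
            return 0
        fi
    done
    return 1    # all mirrors down - caller should bail out
}
```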
To save bandwidth the post-install script looks for loopback mounted ISOs of the Ubuntu 10.04 live CD and Ubuntu Studio (both i386 and amd64 versions) in the isomnt sub-directory via the tftp link in the Apache default site. It copies all debs it finds directly into the apt cache. It also copies the contents of several kickstart support sub-directories (game-debs* and local-debs*). This is a primitive way to serve the bulk of the packages locally while retrieving everything else from the mirrors. You need to change the URLs in the pre-load debs section to the "pool" sub-directories of the mounted ISOs in "./tftp/isomnt/".
Because loading this many debs can run a root volume out of space, the $game_debs variable can be used to prevent game packages from being retrieved. Normally you should have at least a 20GB root (/) volume although it could be made smaller with some experimentation. An alternative to this method would be a full deb-mirror or a large caching proxy.
Set the OpenVPN variable $openvpnurl to the Internet URL of your administration system or the firewall it's behind, and $openvpnserver to the hostname of your administration system (the administration system itself can use the same values since it won't be connecting to itself).
Basic usage starts with netbooting the client system. Some have to be set to netboot in the BIOS and some have a hotkey you can press at POST to access a boot selection menu. The system then obtains an address and BOOTP information from the DHCP server. It then loads pxelinux.0 from the TFTP server which will in turn load vesamenu.c32 which displays the "Netboot main menu". Select Ubuntu from the list and look for the Ubuntu 10.04 Minimal CD KS entries. Select the one for your architecture and press the Tab key to edit the kernel boot line. Set any kernel parameters you want to be added to the default Grub2 configuration after the double dash (--), like "nomodeset". Set the hostname and domain values for the target as these are used in several places for bug workarounds and configurations. Then press Enter. The installer should boot. If nothing happens when you press Enter and you are returned to the Ubuntu boot listing menu, verify the ISOs are mounted on the server then try again (you will need to edit the entry again).
If there are no problems then you will be asked only two questions. The first is drive partitioning. This could be automated but my client systems are too varied to do so. The next question will be the administrator password. After that it will execute the post-install and late-command scripts, then prompt you to reboot. Just hit the Enter key when it does, as Ctrl-Alt-Delete will prevent the installer from properly finishing the installation (it's not quite done when it says it is). Full installation takes 2-3 hours depending on debug settings, availability of local debs, and Internet speed.
In case of problems see the TODO document, which has a troubleshooting section. The only problems I've had installing were missing drivers or bugs in the installer (especially with encrypted drives - see the TODO). My Dell Inspiron 11z, which has an Atheros AR8132/L1c Ethernet device, wasn't supported by the kernel the minimal CD was using. To work around it I made a second network connection with an old Linksys USB100TX. The Atheros did the netboot (the Linksys doesn't have that capability) but afterwards the installer only saw the Linksys and had no problems using it (other than it being slow).
I welcome comments and suggestions (other than my package choices and blog color scheme :D).