Stubborn Tech Problem Solving
Diary and notebook of whatever tech problems are irritating me at the moment.
There are a few ways around this. The easiest is to tell wget to look for it with the --content-disposition option.
Another option is to use curl --remote-header-name or aria2 instead of wget.
To easily create commands for more complicated retrievals (with cookies, etc.), the Firefox add-on cliget can generate a command line for curl, wget, or aria2 in the download dialog. It only shows the curl command line by default; you can enable the others in cliget's preferences (available from its entry in about:addons).
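For a typical case the commands look like this. This is a sketch: the URL is a placeholder for a download link whose server supplies a filename via the Content-Disposition header, and the commands are echoed rather than executed.

```shell
# Dry run: echo the commands instead of executing them.
url="https://example.com/download?id=123"   # placeholder URL
echo wget --content-disposition "$url"
echo curl -O -J "$url"     # -J (--remote-header-name) pairs with -O (--remote-name)
echo aria2c "$url"         # aria2 uses the header-supplied filename by default
```

Remove the echo to actually run them.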
You may have seen reports about the Heartbleed Internet bug in the news lately. Note that the term "bug" doesn't mean malware or a virus, just an unintentional flaw. This particular bug is a vulnerability in the encryption system used by about two-thirds of the web sites on the Internet. Ignoring the hype, on a scale of 1 (ignore) to 10 (critical) this is an 11. Read this through entirely before panicking.
The bug allows an attacker to potentially obtain passwords from web sites or the credentials of the web site itself (possibly for creating a fake web site that validates as authentic). Most web server operators can detect if they have the vulnerability but not if someone exploited it. While there are lists of affected sites and test services that can tell if a web site is affected, there are so many that it's best to assume most sites you have passwords on were exploited. Microsoft's web sites were NOT affected since they have their own encryption system, nor were web sites operating with their software (not that they haven't had many other security problems). But it's difficult for web users to determine what software a site is using now or what was used previously. The software you are using on your computer, tablet, or phone doesn't matter since the vulnerability affects the servers you connect to.
What to do? Immediately change your passwords. You should change them periodically anyway, at least once a year depending on how long and complicated they are. Log into each account and look for a password change option. Sometimes this option is on the account's "profile" or "preferences" page.
If you change your password on a site that is still affected the benefit will be temporary since an attacker can just obtain it again. Many of the affected web sites have been fixed in the past two days, but even sites that were safe when the bug was announced may already have been exploited if they were vulnerable at any point in the two years since the bug was introduced.
You can expect to start receiving notices from major companies about the need to change your passwords. Don't wait for them to contact you. Some web site operators may underestimate the risk or be reluctant to announce it due to liability concerns.
Because of the size of the problem you can also expect fake emails (phishing) prompting you to change your passwords. These are attempts to get your current passwords. To avoid being caught by these, never click on a link in an email. Always go to the web site directly by entering its address into the web browser's address bar or by searching for the company by name using Google.
I suggest prioritizing password changes as follows. Some web sites and vendor names are listed as examples but not all of them had the Heartbleed vulnerability.
1. Email (Google/Gmail, Yahoo!). Do these first since they can be used to reset passwords on most other web site accounts. If your email provider has other services (Google+, Google Docs, Yahoo! Groups) then changing your email password usually changes it for everything else from them.
Google password change: https://www.google.com/settings/security
Yahoo! password change: https://edit.yahoo.com/config/change_pw
2. Financial (banks, credit unions, investments, IRAs, pensions, PayPal)
3. Government (healthcare.gov, passports, social security, taxes)
4. Password services (a.k.a. "Single Sign-On"). If you are using a service that controls access to multiple web sites (OpenID, Facebook, Google), then change that too since a breach with it can affect multiple sites. Usually these aren't used for financial and government sites so the potential damage is less but they affect many other web sites.
5. Any site that stores your credit card or financial information (Amazon.com and accounting sites like Mint.com).
6. Vendors (insurance, medical) and utilities (power, phone, Internet) that have your Social Security number or you have set up for automatic bill payments.
7. Any site that makes payments to you (eBay, Etsy, Freelancer.com).
8. Computer remote backup services. These usually don't have the encryption key for reading the contents of your backup data but someone with your password may be able to delete your backups. If your encryption key was weak (short and simple), an attacker that obtains your backup data via your password may be able to break the encryption key by trying every possible key combination.
9. File hosting services (Dropbox, MediaFire, Google Drive). An attacker can install malware within the hosted files and spread it to everyone that accesses them.
10. Social networking sites (Twitter, LinkedIn, Reddit, dating) and picture hosting sites (Imgur, Panoramio, Photobucket) are less important but worth changing if you have information on them that you want to keep private.
11. Passwords for blogs and web sites you have made should be changed to prevent them from being used to host malware.
Vendors that don't keep your credit card info (i.e., you have to enter it every time you buy something) are less of a risk but you should consider changing those of vendors you depend on the most and keep an eye on your credit card statements.
Passwords used for starting your computer or unlocking your phone are probably not affected.
Some wireless Internet sharing boxes (routers) you may have in your home or business have built-in web servers for configuring them and many are affected by the Heartbleed vulnerability. Fortunately the configuration web site usually can't be accessed by people on the Internet unless the device is intentionally configured to allow remote access. Unfortunately manufacturers aren't likely to fix any that have the problem.
Some security systems and home automation systems also have web access and can be affected. If you have a system that you can control from a web browser (on your computer or phone), then contact the manufacturer or installer to verify it and get it fixed if necessary.
You should not use the same password for multiple sites. If one is compromised, then they all are regardless of which has the Heartbleed bug. Create unique passwords for each and use a password manager program to keep track of them. With a password manager you create a single (preferably long and complicated) master password you can remember. This is used to encrypt its password list. You then use the manager to create complicated random passwords for each web site, preferably random letters and numbers at least 12 characters long. Many web sites accept much longer passwords and typographical characters (@$%*&...) but not all do. You don't have to remember these, just copy and paste them from the password manager as needed.
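A password manager's generator does this for you, but the idea can be sketched from the shell (assuming a Linux system with /dev/urandom):

```shell
# Draw random bytes, keep only letters and digits, and take the first 16.
# /dev/urandom is the system's cryptographic randomness source.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo
```

Increase the number after `head -c` for a longer password.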
Many web browsers also have password managers built-in and prompt you to store passwords when you enter them. But these password lists are often not well encrypted which makes it easier for an attacker (or other users of your computer) to obtain them. In addition, if you decide to change web browsers, these password lists can be difficult to transfer. Standalone password managers avoid this problem and can also be used for other codes and non-Internet passwords.
Two popular free password managers for desktop and laptop computers are KeePass and Password Safe. I use KeePassX on Linux and it's installed on every system I build (in the menu: Applications > Accessories > KeePassX). It's compatible with KeePass on Windows. It's available in the repositories of most Linux distributions (Ubuntu, Linux Mint, etc.).
Obviously the password manager will now be the center of your Internet life so it's important to make a backup of its password file, perhaps to a Flash "thumb" drive, and keep it in a safe place. To limit loss in case your computer is burned up in a house fire, keep the backup in a different location, such as a relative's house, your car, or a safe deposit box. And don't forget its master password!
Changing many passwords is a pain but don't ignore this critical security problem. The damage is already done and all you can do is prevent further damage to your Internet accounts.
You can panic now.
But when I set up a system and start loading dozens of games, the desktop menus become quite a mess. Some games are not listed, or are listed only in the main "Games" menu but not the correct sub-menus, or are listed in multiple sub-menus, or have missing icons, or fail to function.
This is on top of the usual problems with broken packages, bitrot with old closed-source games (Loki titles in particular), conflicts with PulseAudio, and a lack of documentation combined with unusual keyboard key assignments. The latter is really annoying when a game launches full screen and the user is unable to figure out how to exit it.
Obviously some problems are much more difficult to resolve than others but the desktop menus are fairly easy to fix. I have fixed dozens and you can do the same. The link to the archive with my changes is at the bottom of this post but before you get into that you need to understand how the menu system works.
Originally all window managers and desktop environments used their own menu configuration files. One legacy menu structure is located in /usr/share/applnk. Debian created a standard for their distro where a menu file was created once in /usr/share/menu and then exported dynamically as needed for each menu system. Later, freedesktop.org (formerly named the X Desktop Group) developed a new standard that has been adopted by Gnome, KDE, Xfce, and others.
With the fdo/XDG standard the menu entries are created as INI files according to the Desktop Menu Specification (*.menu) and Desktop Entry Specification (*.desktop). The latter standard also defines how to specify the names and icons of the menus themselves (*.directory). Most desktop environments follow v1.0 of the standard but a v1.1 draft is in development. The *.menu files are located in /etc/xdg/menus. The *.desktop files are located in /usr/share/applications or /usr/local/share/applications. The *.directory files are located in /usr/share/desktop-directories or /usr/local/share/desktop-directories.
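For example, a *.directory file is itself a small INI file. This hypothetical one names a sub-menu and assigns it a stock icon (the names here are illustrative, not from any particular package):

```ini
[Desktop Entry]
Type=Directory
Name=Example Games
Icon=applications-games
Comment=A hypothetical game sub-menu
```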
A note about /usr, /usr/local, and /opt. Their structure is defined by the Filesystem Hierarchy Standard. The first two follow the traditional *nix structure where binaries, libraries, documentation, and support files are organized into specific subdirectories by type and may be shared with other programs (especially libraries). Files for programs in /opt are organized by provider and are generally contained entirely within their own subdirectory and not shared with other programs. In practice, /usr is where files "managed" by a package manager (apt) are located. Unmanaged files are in /usr/local and are usually not registered with the package manager. Files in /opt may be managed or unmanaged depending on their source (an install script or a deb package).
Unfortunately some games don't follow the FHS properly. Some managed packages put their files in /usr/local and some installers put their unmanaged files in /usr. Other installers add their files and then ask to "register" with the package management system for easier removal, resulting in a mix of managed and unmanaged files. Managed files can be identified using dpkg-query:
dpkg-query -S /path/to/filename
If dpkg-query doesn't find the file then it's not managed.
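That check can be wrapped in a small helper (a sketch; the path is an example):

```shell
#!/bin/sh
# Print "managed" if dpkg knows the file, "unmanaged" otherwise.
is_managed() {
    if dpkg-query -S "$1" >/dev/null 2>&1; then
        echo managed
    else
        echo unmanaged
    fi
}
is_managed /usr/local/share/applications/example.desktop
```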
Desktop entries are organized according to rules in the *.menu files. While a *.desktop file can be specified directly, most are arranged by matching "Categories" values in the files. According to the standard there are both "Main" and "Additional" categories. A desktop entry file should have one Main category except as required (for example, "Audio" or "Video" mandates "AudioVideo") and an Additional category for more precise organizing. Most of the Additional categories are reserved for specific Main categories. Obviously sub-menus have to be defined before they can be used, else everything lands in the primary menus. Linux Mint's Xfce menu configuration doesn't define the game secondary sub-menus by default (mint-artwork-xfce package) so the games menu gets very crowded. My menu configuration defines sub-menus for all standard game categories.
Desktop entry files that specify multiple Main or Additional categories are an occasional problem. While some programs can arguably fit into more than one category, doing this in the desktop entry file often causes duplicates in the menus. Duplication can be suppressed with menu rules but it's much easier to avoid the problem in the first place. Some desktop entry files don't specify an Additional category and as a result the menu entry lands in the primary sub-menu, not in the secondaries with similar programs. Occasionally there will be a standards-compliant combination of Main and Additional categories that are not handled properly in the menu definition files which results in an entry erroneously stuck in the main menu or under an "Other" sub-menu. But errors in the desktop entry files are the root cause most of the time.
Another problem is the categories themselves. The video game genres are somewhat ambiguous and many games overlap multiple genres. Even the main category "Game" can be a problem with some. Emulators, for example, can be specific to a game console (PCSX) or general-purpose (QEMU). The fdo/XDG standards only cover a few game genres, some of which are very narrow while others are overly broad. These are my interpretations:
ActionGame: A catch-all for anything that requires reflexes that doesn't fit in a more specific category.
AdventureGame: Point-and-click adventures and interactive stories. If it requires reflexes then it doesn't belong here. Even so, there are some games which have a mixture of both so it becomes a judgment call as to developer intent (Full Throttle).
ArcadeGame: An ambiguous genre unless you restrict it to games that were found in coin-operated arcades at some point. I included games with simplistic gameplay and a minimalistic storyline, if any. If it has a point-based "high score" then it's probably an arcade game.
BoardGame: Board games and games that could reasonably be board games or were derived from one.
BlocksGame: Tile-matching and Tetris types, as per the Wikipedia article.
CardGame: Probably the easiest to figure out although gtali is a bit of an oddball.
KidsGame: Another non-genre genre. I included any game with a theme that is heavily slanted towards young children. However, some also fall into the Educational main category (GCompris).
LogicGame: Slightly odd name as it includes puzzles. Again there are some platformers that combine this with action (EDGE, Stealth Bastard).
RolePlaying: If it doesn't have a character development system then it's not a role-playing game. Having a fantasy theme is not good enough. Waste's Edge is an example of a game that says it's a role-playing game when it obviously is an adventure game. It's nothing more than a long dialog puzzle. It's not a bad puzzle and has a good story, but it's not a role-playing game.
Simulation: Vehicle, economic, and environment simulations including cars, cities, and space. Car racing games are often marketed as sports games but that never seems to include airplane racing games. Some vehicle simulations have cinematic physics and may belong elsewhere (ArcadeGame, ActionGame).
SportsGame: Traditional sports not involving vehicles. Includes "fantasy" sports management applications. Real sports management applications probably belong elsewhere, perhaps Office/ProjectManagement.
StrategyGame: Some are obvious, some not. The problematic ones are those that require tactics but not quite long-term strategic thinking (Atom Zombie Smasher). I tend to follow the developer's opinion but not always.
Amusement: Snowing desktops (xsnow), eye-candy (xdesktopwaves), and the like. The standard doesn't pair it with a defined Main category, but I set all the ones I found to Game since that is the most logical option. Creating a new primary sub-menu didn't make sense.
Shooters: This is new in the v1.1 draft specification. Again it is a bit too broad of category but because the ActionGame category is so crowded I decided to support it, just not by default. I wrote the "set-shooter-categories" script which changes some of my desktop entries to this category (mostly first-person shooters) but the targets are all hard-coded. It doesn't have a reversion option so keep a backup copy of the original *.desktop files. I defined a Shooters sub-menu in my replacement menu definitions (applications.menu, xfce-applications.menu) and included an icon and the shooters.directory file.
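A sub-menu definition of this kind is an XML fragment in the *.menu file. This sketch matches entries whose Categories include both Game and Shooter (the category name registered in the v1.1 draft); it reflects the shape of my definitions rather than being copied from them:

```xml
<!-- Nested inside the Games <Menu> element of applications.menu -->
<Menu>
  <Name>Shooters</Name>
  <Directory>shooters.directory</Directory>
  <Include>
    <And>
      <Category>Game</Category>
      <Category>Shooter</Category>
    </And>
  </Include>
</Menu>
```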
Adding to the problems are programs that represent many games including the Steam client, Desurium, and gtkboard (which includes Tetris and Mastermind). I normally just leave these in Games by not specifying an Additional category.
Some games in the repos have the old Debian menu files but not the newer fdo/XDG files. There is an optional "Debian menu" that can be merged with the newer menu files to create a "Debian" sub-menu with all the old menu entries. However it has a different structure and is mostly redundant with the main menus - overkill for just getting some games to show up. I encountered the missing *.desktop file problem often enough that I wrote the "deb-menu-conv" script to export old Debian menu files as something that resembles the new fdo/XDG files. While deb-menu-conv handles the basics, some things like categories (called "sections" in the Debian menus) do not translate directly. It has some hard-coded conversions for games but most anything else will need manual editing. Debian menu files also allowed multiple menu entries per file while the fdo/XDG standard does not. All entries found in a menu file are exported by deb-menu-conv and have to be split manually into separate *.desktop files. This script saved me a whole bunch of time but some packages were still a pain (xmpuzzles, xteddy, x11-apps) due to the number of entries their old menu files contained. I excluded most terminal and text-mode entries.
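For reference, the two formats look roughly like this for a hypothetical package (the Debian menu syntax is written from memory, so treat it as a sketch):

```ini
# Old Debian menu entry, e.g. /usr/share/menu/examplegame:
#   ?package(examplegame):needs="X11" section="Games/Arcade" \
#     title="Example Game" command="/usr/games/examplegame"
#
# Roughly equivalent fdo/XDG entry, e.g. /usr/share/applications/examplegame.desktop:
[Desktop Entry]
Type=Application
Name=Example Game
Exec=/usr/games/examplegame
Categories=Game;ArcadeGame;
```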
Some of the programs in the Debian menus are obsolete but I included them as a test of deb-menu-conv. Many of the amusement-type programs (xfireworks, xsnow, xpenguins) require direct access to the primary X window (root window) that is commonly controlled by a desktop manager in modern desktop environments (xfdesktop in Xfce). They can be used on Xfce if xfdesktop is terminated first (Settings Manager > Session and Startup > Session > xfdesktop > Quit Program). Log out and back in again or create a launcher in a desktop panel to restart xfdesktop afterwards.
I also fixed some other bugs via the desktop entry files. Doom3 doesn't get along with PulseAudio so I added pasuspender to the Exec line in its file. Some games have both 32-bit and 64-bit versions but their installer only adds one or the other and they have different names. This meant that a single menu entry couldn't support both so I had to replace it with two different files. For several of the games with undocumented keys and lack of built-in help I added additional menu entries that open man pages, Readme documents, and web sites with relevant information.
One parameter that is very useful is "TryExec". This is a test where the menu system looks for the target file and only shows the menu entry if it is found. Since it is targeting executables it can be used to check for both file and directory targets. All of my desktop entry files have this set so that if they are all installed they won't crowd the menus with entries for unavailable programs. This is also handy for menu entries that target the 32-bit and 64-bit versions of a binary. However, if a game normally installs both then only one entry is used and it points to a script that dynamically loads the correct binary based on the system architecture. I wrote or modified several game loading scripts for this reason (and to fix other bugs).
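Putting the pieces together, a desktop entry along these lines uses TryExec to hide itself when the game isn't installed; the pasuspender wrapper in the Exec line is the PulseAudio workaround mentioned above (all paths and names here are hypothetical):

```ini
[Desktop Entry]
Type=Application
Name=Example Game
# Entry is shown only if this executable exists:
TryExec=/usr/local/games/examplegame/examplegame
# pasuspender suspends PulseAudio while the game runs:
Exec=pasuspender -- /usr/local/games/examplegame/examplegame
Icon=examplegame
Categories=Game;ArcadeGame;
```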
Installing a modified menu or desktop entry file is easy if it is unmanaged - just overwrite the original. However, managed files are more complicated. If an update is installed by the package manager then the modified file may be overwritten. The solution is to divert the original file to a different location with dpkg-divert. When apt applies the update it will then redirect the diverted file to the new location, leaving the modified one intact.
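A diversion of that kind looks like this (echoed rather than executed since it needs root; the package and paths are examples):

```shell
# Tell dpkg to install future copies of this file under the .distrib name,
# and rename the current copy now, leaving the path free for our override.
echo dpkg-divert --divert /usr/share/applications/supertux.desktop.distrib \
     --rename /usr/share/applications/supertux.desktop
```

Running it again with --remove reverses the diversion.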
The installation of the modified files is easy when only a few files are involved. But I ended up with dozens which I realized would be error-prone and time-consuming to install. So I wrote the "file-overrides" script to handle installation automatically, make backups of replaced files, and be able to revert changes. It has built-in help but I will summarize the basics here.
Overrides are located in a subdirectory structure that mimics the actual targets. The default directory is /usr/local/share/file-overrides.d which contains two subdirectories, managed and unmanaged. The difference is that managed targets are diverted (and moved) by dpkg-divert while unmanaged targets are moved directly. These subdirectories must exist else the script will abort.
The diversions and backups are located in /usr/share/Diversions/file-overrides by default and this directory must also exist. The script will create "managed" and "unmanaged" directories as needed and will remove their contents automatically during reversions.
The structures are relative. For example, a replacement for the managed file /usr/share/applications/example.desktop would be located at /usr/local/share/file-overrides.d/managed/usr/share/applications/example.desktop. During an override operation the original is moved/diverted to /usr/share/Diversions/file-overrides/managed/usr/share/applications/example.desktop prior to being replaced. During a reversion the sequence is reversed, more or less. While intended for fixing menu entries the script can be used to replace any file on the system. It does have some special support for the *.desktop files. Normally the script will only install an override file if the target already exists to keep the directories clean. However, if the file is a desktop entry file with TryExec set, and the target executable exists, then the override will be applied even if the target *.desktop file does not exist. The --force parameter applies all overrides regardless.
Another special case is with override files that are zero bytes in size. This causes the target to be moved/diverted but not replaced, essentially causing a deletion.
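The path mapping can be sketched as plain string concatenation, using the default directories described above and an example target:

```shell
target=/usr/share/applications/example.desktop
# Where the replacement file lives:
override=/usr/local/share/file-overrides.d/managed$target
# Where the original is moved/diverted to:
backup=/usr/share/Diversions/file-overrides/managed$target
echo "$override"
echo "$backup"
```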
See the readme.txt file for more info on included icons and such.
The archive contains everything. Note that it has been developed and tested on Ubuntu 12.04 (Precise Pangolin)/Linux Mint 13 (Maya) with Xfce 4.10. It will probably work with newer releases (and Debian) and other desktop environments with the exception of the menu files which may need adjusting. Compare them to the originals in /etc/xdg/menus.
One of the major developments of the 10.04 installation was the creation of a bunch of application profiles for use with UFW, the "Uncomplicated FireWall". I submitted these to Ubuntu for inclusion in future releases. After some initial enthusiasm on the part of their developers for the contribution, the profiles have been more or less forgotten about.
There are multiple GUI interfaces for UFW but at the time only ufw-frontends supported profiles well. About a week ago a member of the Gufw team contacted me about including the profiles as part of their package. They also found some errors. I'm not sure what their current version looks like but hopefully it supports the reference and category extensions to the file format I added.
One thing the profiles didn't have was a license. After some consideration I settled on GNU GPL v3 or later. Even though the profiles are not source code or configuration files related to compiling, the GPL can be used.
Because of all this I've updated the profiles to v1.4 and now include a license file and MD5 checksums for identifying which files are covered by the GPL.
This isn't just another trivial automated installation script although it started out that way. Basic installation presets led to integrated bug workarounds, setting defaults for many applications and servers, more features, etc. While you may disagree with some of my package choices, they were selected for my clients - not you. Change it if you have different needs. First, a little background on my deployments.
All of my clients have cheap desktop systems or laptops, usually outdated. Almost any CPU, chipset, GPU, and drive configuration. They're either stand-alone or connected together on small Ethernet networks. Some have broadband, some only dial-up (POTS). Ages vary from toddlers to senior citizens. A few are Windows gamers. This mix results in a wide variety of system hardware, peripherals, application requirements, and configurations. I've had to deal with most every type of kernel, application, and hardware bug. Every deployment unearths a new bug to fight. Some of these are Ubuntu's fault but many are upstream.
Inevitably I spend many hours doing full OS conversions to Ubuntu or dual-boot configurations. I've found that using a Live CD to install Ubuntu is about 4x faster than installing Windows when drivers, updates, and application installs are figured in. While I could set up slipstream builds of Windows I don't install it enough to bother with and the variety of versions (Home, Pro, upgrade, OEM,...) and licenses makes it impractical. Relatively speaking, I spend about 3x as long transferring documents, settings, and game/application files (scattered all over C:) to Ubuntu than I do installing either it or Windows. But I'll take any time savings I can get.
A while back, when Ubuntu 10.04 (Lucid Lynx) was released, I decided to streamline my installations. This wasn't just to save time. I also needed to make my installations more uniform as I couldn't remember all the various tweaks and bug fixes that I performed from one installation to the next.
I had several goals for this project, not necessarily all at the beginning as some were the result of test installs, client feedback, and feature creep.
- Fix all the bugs that my clients encountered on their existing installs plus all the other Ubuntu annoyances I've been manually correcting.
- Do everything the "correct way" instead of blindly following HOW-TOs from amateurs that involved script and text file hacking that would be lost on the next update. I had to learn proper use of Gconf, PolicyKit, Upstart, init scripts, and dpkg.
- Configure all of the network features that my clients had asked for, usually file or peripheral sharing. Internet content filtering for kids was a requirement.
- Secure remote access and administration. It's bad enough when a client has a software problem. Having to waste time with an on-site visit is idiotic when it's not an Internet access problem and a broadband connection is available. The same kickstart configuration can be used for both an "administration" system as well as clients. Having them nearly identical makes both remote and verbal support easier.
- Make it easier to obtain diagnostic and status information, for me and the client.
- Research applications that meet customer needs and are stable. Configure them so the customer doesn't need to.
- Document everything, especially anything I spent significant time researching.
On all of these I mostly succeeded. There are still a few gaps, but they're minor (for my deployments at least), and after working on this for 18 months I needed to get on with my life. I figure that after a few million deployments I should break even. I'm now busy updating the dozen or so I currently have.
So what's in it? The base is just a plain 10.04 (i386 or amd64) installation. Two reasons for that - it's the LTS release and I didn't have time to upgrade to newer releases or work around their new bugs. It's supported for another year or so. I'll probably update it for 12.04 after it is released (and clean up my code). Highlights:
Apache. Used for sharing the public directory (see below) and accessing the various web-based tools. The home page is generated from PHP and attempts to correct for port-forwarding (SSH tunnel) if it detects you are not using port 80.
Webmin. It's the standard for web-based administration. I added a module for ddclient (Dynamic DNS). The module is primitive but usable and I fixed the developer's Engrish.
DansGuardian. Probably three months' work on just this. For content filtering there isn't really anything else. Unfortunately it has almost no support tools so I had to write them. Most of these have been announced in previous blog postings although they've been updated since then. The most complicated is "dg-squid-control" which enables/disables Squid, DansGuardian, and various iptables rules. Another loads Shalla's blacklist. DansGuardian doesn't have system group integration so I wrote "dg-filter-group-updater" to semi-integrate it. There are four filter groups - no access, restricted (whitelist with an index page), filtered, and unrestricted. I added a Webmin module for it that I found on SourceForge. It's not great but makes it easier to modify the grey and exception lists. Included are lists I wrote that allow access to mostly kid sites (a couple of hundred entries). The entries have wiki-style links in comments that are extracted by "dg-filter-group-2-index-gen" to create the restricted index page. There's a How-To page for proxy configuration that users are directed to when they try to bypass it.
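On the iptables side, a generic transparent-filtering rule gives the flavor of what such a control script toggles. This is an illustration, not necessarily dg-squid-control's exact rules (8080 is DansGuardian's usual port); the command is echoed rather than run since it needs root:

```shell
# Redirect web traffic arriving from the LAN into the local DansGuardian instance.
echo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
```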
The only limitation is that browser configurations are set to use the proxy by default but dg-squid-control doesn't have the ability to reset them if the proxy is disabled. I spent two weeks working on INI file parsing functions (many applications still use this bad Windows standard for configuration files). While they seem to work I need to significantly restructure the tool to make use of them.
DansGuardian saw no development for a few years, but recently a new maintainer took charge and patches are being accepted. Hopefully full system account integration will be added.
UFW. The Uncomplicated Firewall is a front-end to iptables and there is a GUI for it. One feature it has is application profiles, which make it easy to create ready-to-use filter rules. I created about 300 of them for almost every Linux service, application, or game (and most Windows games on Wine).
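A profile is a small INI file dropped into /etc/ufw/applications.d. This hypothetical one covers a game server on a single UDP port:

```ini
[Example Game Server]
title=Example Game Server
description=Multiplayer server for a hypothetical game
ports=27960/udp
```

After installing it, `ufw allow "Example Game Server"` creates the corresponding rule.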
File sharing. The /home/local directory is for local (non-network) file sharing between users on the same system. There is also a /home/public directory that is shared over Samba, HTTP, FTP, and NFS. WebDAV didn't make the cut this time around.
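On the Samba side, a share definition along these lines would expose /home/public to everyone (a sketch, not necessarily the kickstart's exact smb.conf settings):

```ini
[public]
   path = /home/public
   guest ok = yes
   read only = no
   browseable = yes
```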
Recovery Mode. I added many scripts to the menu for status information from just about everything. Several of my tools are accessible from it.
SSH server. You make a key pair with ssh-keygen named client_administrator_id_dsa (the private key should be encrypted with a passphrase) and include the public part (*.pub) in the kickstart_files/authentication sub-directory. It is added to the ssh configuration directory on every system. Using another tool, "remote-admin-key-control", system owners (sysowner group) can enable or disable remote access. This is for several reasons including privacy, liability, and accounting (for corporate clients where the person requesting support may not have purchase authority).
When remote-admin-key-control adds the key to the administrator account's ~/.ssh/authorized_keys, you can connect to the system without a password using the private key (you still need to enter the key passphrase). The radmin-ssh tool takes this one step further and forwards the ports for every major network service that can function over ssh. It also shows example command lines (based on the current connection) for scp, sftp, sshfs, and NFS. You still need the administrator password to get root access.
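The kind of command radmin-ssh builds can be sketched like this: a key-based login that also forwards Webmin's port 10000 over the tunnel (the hostname is a placeholder; echoed rather than run):

```shell
echo ssh -i client_administrator_id_dsa \
     -L 10000:localhost:10000 administrator@client.example.net
```

With the tunnel up, Webmin on the client is reachable at https://localhost:10000 on the administration system.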
X2Go. Remote desktop access that's faster than VNC. Uses SSH (and the same key).
OpenVPN. A partially configured Remote Technical Support VPN connection is installed and available through Network Manager. If the client system is behind a firewall that you can't SSH through, the client can activate this VPN to connect to your administration system so that you can SSH back through it. Rules for iptables can be enabled that prevent the client accessing anything on the administration system. It connects using 443/udp so should work through most firewalls.
Books and guides. Located in the desktop help menu (System > Help) is a menu entry that opens a directory for books. My deployments have subdirectories with Getting Started with Ubuntu 10.04 - Second Edition from the Ubuntu Manual Project and OpenOffice.org user guides. You can easily add more as the kickstart script grabs everything in its local-books subdirectory. For the end-user I wrote networks-and-file-sharing-help.html (in the same help menu).
For the installer the main source of documentation is the kickstart script itself. I got a little carried away with comments. The next major document is TODO.html which is added to the administrator's home directory. It was intended to list post-install tasks that needed to be completed since there are many things the installer can't do (like compile kernel modules). After adding background information on the various tasks, troubleshooting help, and example commands, it's basically another book. You should read it before using the kickstart script.
Scanner Server. Allows remote access to a scanner through a web interface. Simpler than using saned (though that is also available if you enable it). It had several bugs so I fixed them and added a few features (with help from Ubuntu Forums member pqwoerituytrueiwoq). Eventually we hit the limit of what it could do, so pqwoerituytrueiwoq started writing PHP Server Scanner as a replacement. For a 12.04 release I will probably use that instead. I wrote "scanner-access-enabler" to work around udev permission problems with some scanners (especially SCSI models).
Notifications. Pop-up notices will be shown from smartd, mdadm, sshd, and OpenVPN when something significant happens. Without the first two, users don't know about pending drive problems until the system fails to boot. I've also had users turn the system off while I was in the middle of updating it remotely; the SSH notification helps prevent that. The OpenVPN notification is mostly for the administration system and includes the tunnel IP address of the client. OpenSSH has horrible support for this kind of scripting. OpenVPN's scripting support is absolutely beautiful.
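For reference, hooking a notification script into OpenVPN takes only a couple of server directives (the path and script name here are illustrative, not from my deployment):

```
# OpenVPN server configuration: allow scripts and run one on connect
script-security 2
client-connect /etc/openvpn/notify-connect.sh

# The script receives client details in environment variables such as
# $common_name and $ifconfig_pool_remote_ip (the client's tunnel IP)
```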
Webcam Server. A command-line utility that I wrote a GUI for. It has a Java applet that can only be accessed locally but a static image is available from the internal web server to anywhere.
BackupPC. It uses its default directory for backups so don't enable it unless you mount something else there. A cron job will shut the system down after a backup if no users are logged in. It has been somewhat hardened against abuse with wrapper scripts for tar and rsync.
There are many bugs, both big and small, that are either fixed or worked around. The script lists the bug numbers where applicable. The TODO document lists a bunch as well. Some packages were added but later removed (Oracle/Sun Java due to a licensing problem, Moonlight since it didn't work with any Silverlight site I tested).
There are some limitations to Ubuntu's kickstart support. I'm not sure why I used kickstart in the first place. Perhaps the name reminded me of KiXtart, a tool I used when I was a Windows sysadmin. Kickstart scripts are the standard for automating Red Hat installations (preseeding is the Debian standard) but Ubuntu's version is a crippled clone of it. In part it acts like a preseed file (it even has a "preseed" command) but it also has sections for scripts that are exported and executed at different points during the installation. About 90% of the installation occurs during the "post-install" script. The worst problem with Ubuntu's kickstart support is that the scripts are exported twice and backslashes are expanded both times. This means that every backslash has to be quadrupled, which gets really ugly with sed and regular expressions. Because of this you'll see "original" and "extra slashy" versions of many command lines. I wrote quad-backslash-check to find mistakes.
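You can simulate the damage in a shell: printf '%b' interprets backslash escapes once, roughly like each export pass does (my assumption; the installer's exact expansion isn't documented), so running it twice shows why one literal backslash has to be written as four:

```shell
# Two rounds of backslash expansion: '\\\\' -> '\\' -> '\'
pass1=$(printf '%b' 'bar\\\\nbaz')   # after the first export
pass2=$(printf '%b' "$pass1")        # after the second export
printf '%s\n' "$pass2"               # a single literal backslash survives
```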
The other problem is that the way the script is executed by the installer hides line numbers when syntax errors occur, making debugging difficult. I wrote quote-count and quote-count-query to find unmatched quotes (and trailing escaped whitespace that was supposed to be newlines) which were the most common cause of failure.
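The core idea can be sketched in one awk line: count the double quotes on each line and flag lines with an odd count (this is my simplification, not the actual quote-count code):

```shell
# Flag lines containing an odd number of double quotes --
# a common sign of an unmatched quote in a shell script
awk '{ n = gsub(/"/, "\""); if (n % 2) print FILENAME ":" FNR ": odd quote count" }' myscript.sh
```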
I've made an archive of my kickstart file, its support files, and configuration files for various services on my server for you to download (12.5MB, MD5: b5e79e6e287da38da75ea40d0d18f07f ). The script, error checking and ISO management tools, and server configuration files are in the "kickstart" sub-directory. A few packages are included because they are hard to find but others are excluded because of size. Where a package is missing there is a "file_listing.txt" file showing the name of the package I'm using. My installation includes the following which you should download and add back in:
Amazon MP3 Downloader (./Amazon/amazonmp3.deb)
DansGuardian Webmin Module (./DansGuardian Webmin Module/dgwebmin-0.7.1.wbm)
Desura client (./Desura/desura-i686.tar.gz)
VMware Player (./VMware/VMware-Player-*.bundle)
VMware Player is optional. It has kernel modules so the kickstart script only retrieves the first install file returned from the web server whose name matches the architecture. It puts it in /root for later installation.
The target systems need network-bootable Ethernet devices, either with integrated PXE clients or a bootable CD from ROM-o-matic.
You need a DHCP server that can send out the PXE boot options: the address of the TFTP server (next-server) and the name of the bootstrap file to load (filename "pxelinux.0").
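For example, with ISC dhcpd the two PXE essentials would look something like this (the addresses are placeholders):

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.199;
    next-server 192.168.1.10;        # TFTP server address
    filename "pxelinux.0";           # bootstrap to load
}
```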
The tftp server needs to serve the pxelinux.0 bootstrap, vesamenu.c32, and the menu files. These are available from the Ubuntu netboot images. The bootstrap and vesamenu.c32 are identical between the i386 and amd64 versions, only the kernel, initrd, and menus are different. You can use my menu files instead of the standard set in the netboot archive. The most important is the "ubuntu.cfg" file. You'll notice that my menu files list many distros and versions. Only the utility, Knoppix, and Ubuntu menus function fully. The rest are unfinished (and probably obsolete) experiments. FreeDOS is for BIOS updates.
My tftp server is atftpd which works well except it has a 30MB or so limit on tftp transfers. This only affects the tftp version of Parted Magic (they have a script to split it up into 30MB parts). It is started by inetd on demand.
I use loopback-mounted ISOs for the kickstart installs and all LiveCDs netboots. Because I have so many, I exceeded the default maximum number of loopback nodes available. I set max_loop=128 in my server's kernel command line to allow for many more.
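With Grub2 that means something like this in the server's boot loader defaults (followed by update-grub):

```
# /etc/default/grub on the server
GRUB_CMDLINE_LINUX="max_loop=128"
```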
The Ubuntu Minimal CD ISOs are the source for the kernel and initrd for the kickstart install. The architecture (and release) of the kernel on these ISOs must match the architecture of Ubuntu you want to install on the target system. You'll probably want both the i386 and amd64 versions.
PXELINUX doesn't support symlinks so my ISOs are mounted in the tftp directory under ./isomnt. Symlinks to the ISOs are in ./isolnk and are the source of the mounts. I set it up this way originally because the ISOs were in /srv/linux in various subdirectories, so having the links in one place made them easier to manage. But my ISO collection grew too big to manage manually so I wrote "tftp-iso-mount", which creates the mountings for me. It searches through my /srv/linux directory for ISO files and creates isomnt_fstab.txt, which can be appended to fstab. It also deletes and recreates the isomnt and isolnk directories and creates the "isomnt-all" script to mount them.
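A generated isomnt_fstab.txt entry would look roughly like this (the ISO name is illustrative):

```
/srv/tftp/isolnk/ubuntu-10.04-alternate-i386.iso  /srv/tftp/isomnt/ubuntu-10.04-alternate-i386  iso9660  loop,ro  0  0
```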
The ISOs are accessed through both NFS and Apache. I originally intended to use NFS for everything but I found that debian-installer, which performs the installation and executes the kickstart script (also on the "alternate" ISOs), doesn't support NFS. So I had to set up Apache to serve them. The Apache configuration is rather simple. There are a few symlinks in /var/www that link to various directories elsewhere. One named "ubuntu" links to /srv/linux/Ubuntu. The kickstart support files are placed in /srv/linux/Ubuntu/kickstart_files and are accessed via the link. NFS is still used for booting LiveCDs (for bug testing and demos). There is also a "tftp" symlink to /srv/tftp used for local deb loading (see below).
The kickstart script itself, Ubuntu-10.04-alternate-desktop.cfg, is saved to /srv/tftp/kickstart/ubuntu/10.04/alternate-desktop.cfg after being backslash and quote checked.
Several preseed values are set with the "preseed" command at the beginning of the script. You'll probably want to change the time zone there. License agreements are pre-agreed to as they will halt the installation if they prompt for input.
Like I mentioned earlier, the vast majority of the work happens in the post-install script. It executes after the base Ubuntu packages are installed. The most important variable to set is $add_files_root, which must point to the URL and directory on your web server where the rest of the kickstart support files are located (no trailing slash). The script adapts for 32-bit and 64-bit packages as needed based on the architecture of the netboot installer. There is also a "late_command" script that executes near the end of the installation, after debian-installer creates the administrator account (which happens after the post-install script finishes).
The debug variables are important for the initial tests. The $package_debug variable has the most impact as it will change package installations from large blocks installed in one pass (not "true") to each package individually ("true"). When true, it slows down installation significantly but you can find individual package failures in the kickseed-post-script.log and installer syslog (located in /var/log/installer after installation). Setting $wget_quiet to null will guarantee a huge log file. The $script_debug variable controls status messages from the package install and mirror selection functions.
The $mirror_list variable contains a list of Ubuntu mirrors (not Medibuntu or PPAs) that should have relatively similar update intervals. This is used by the fault-tolerant mirror selection function, f_mirror_chk, that will cycle through these and check for availability and stability (i.e., not in the middle of sync). The mirrors included in the list are good for the USA. These are exported to the apt directory so that the apt-mirror-rotate command can use them to change mirrors quickly from the command line or through the recovery mode menu. When a package fails to be installed via the f_ftdpkg and f_ftapt functions, another mirror will be tried to attempt to work around damaged packages or missing dependencies.
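The approach can be sketched as a small shell function that probes each mirror in order and returns the first healthy one (the function and probe here are illustrative simplifications of the idea, not the actual f_mirror_chk code):

```shell
# Default probe: a mirror caught mid-sync often lacks a consistent
# Release file, so require the availability check to succeed
probe_wget() {
    wget -q --spider "$1/dists/lucid/Release"
}
probe=${probe:-probe_wget}

# Echo the first mirror in the argument list that passes the probe
f_pick_mirror() {
    for m in "$@"; do
        if "$probe" "$m"; then
            echo "$m"
            return 0
        fi
    done
    return 1    # every mirror failed; the caller should retry later
}
```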
To save bandwidth the post-install script looks for loopback mounted ISOs of the Ubuntu 10.04 live CD and Ubuntu Studio (both i386 and amd64 versions) in the isomnt sub-directory via the tftp link in the Apache default site. It copies all debs it finds directly into the apt cache. It also copies the contents of several kickstart support sub-directories (game-debs* and local-debs*). This is a primitive way to serve the bulk of the packages locally while retrieving everything else from the mirrors. You need to change the URLs in the pre-load debs section to the "pool" sub-directories of the mounted ISOs in "./tftp/isomnt/".
Because loading this many debs can run a root volume out of space, the $game_debs variable can be used to prevent game packages from being retrieved. Normally you should have at least a 20GB root (/) volume although it could be made smaller with some experimentation. An alternative to this method would be a full deb-mirror or a large caching proxy.
Set the OpenVPN variable $openvpnurl to the Internet URL of your administration system or the firewall it's behind. Set $openvpnserver to the hostname of your administration system (it can have the same value since the administration system won't be connecting to itself).
Basic usage starts with netbooting the client system. Some have to be set to netboot in the BIOS and some have a hotkey you can press at POST to access a boot selection menu. The system then obtains an address and BOOTP information from the DHCP server. It then loads pxelinux.0 from the TFTP server which will in turn load vesamenu.c32 which displays the "Netboot main menu". Select Ubuntu from the list and look for the Ubuntu 10.04 Minimal CD KS entries. Select the one for your architecture and press the Tab key to edit the kernel boot line. Set any kernel parameters you want to be added to the default Grub2 configuration after the double dash (--), like "nomodeset". Set the hostname and domain values for the target as these are used in several places for bug workarounds and configurations. Then press Enter. The installer should boot. If nothing happens when you press Enter and you are returned to the Ubuntu boot listing menu, verify the ISOs are mounted on the server then try again (you will need to edit the entry again).
If there are no problems then you will be asked only two questions. The first is drive partitioning. This can be automated but my client systems are too varied to do so. The next question will be the administrator password. After that it will execute the post-install and late-command scripts then prompt you to reboot. Just press the Enter key when it does, as Ctrl-Alt-Delete will prevent the installer from properly finishing the installation (it's not quite done when it says it is). A full installation takes 2-3 hours depending on debug settings, availability of local debs, and Internet speeds.
In case of problems see the TODO document, which has a troubleshooting section. The only problems I've had installing were missing drivers or bugs in the installer (especially with encrypted drives - see the TODO). The Atheros AR8132/L1c Ethernet device in my Dell Inspiron 11z wasn't supported by the kernel the Minimal CD was using. To work around it I made a second network connection with an old Linksys USB100TX. The Atheros did the netboot (the Linksys doesn't have that capability) but the installer only saw the Linksys afterwards and had no problems using it (other than it being slow).
I welcome comments and suggestions (other than my package choices and blog color scheme :D).
Some of my clients require Internet content filtering on computers their kids are using. The solution to that is DansGuardian. While it has many problems there really isn't a better F/OSS alternative. Its development has been stagnant for years but recently a new maintainer joined the project so submitted patches are being applied to fix bugs and add features (like system group integration).
DansGuardian requires a proxy. The common options are TinyProxy and Squid. TinyProxy has a few annoying bugs so I use Squid with my clients. One challenge with content filtering is preventing the proxy from being bypassed. The two solutions are transparent interception or an explicit proxy combined with rules that catch connections which aren't destined for the proxy ports.
With a transparent proxy all outgoing connections are routed via iptables rules to DansGuardian regardless of the client settings. While this simplifies deployment by eliminating client configuration it also prevents using different content filtering levels on a per-user basis as it masks the source port of the connection. Without the source port the associated user can't be identified. Since the systems I maintain have a variety of users within the same household and thus different filtering requirements, this doesn't meet their needs.
The alternative method is to use iptables rules that catch any connection not destined for DansGuardian. Here are the nat table rules that I use:
-A OUTPUT ! -o lo -p tcp -m owner ! --uid-owner proxy -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -m tcp --dport 80 -j REDIRECT --to-ports 8090
-A OUTPUT ! -o lo -p tcp -m owner ! --uid-owner proxy -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -m tcp --dport 443 -j REDIRECT --to-ports 8090
-A OUTPUT ! -o lo -p tcp -m owner ! --uid-owner proxy -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -m tcp --dport 21 -j REDIRECT --to-ports 8090
-A OUTPUT -p tcp -m tcp --dport 3128 -m owner ! --uid-owner dansguardian -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -j REDIRECT --to-ports 8080
Fairly simple, but note that I'm not dropping the packets. Any TCP connection destined for port 80 (HTTP), 443 (HTTPS), or 21 (FTP) is rerouted to port 8090. Some accounts are excluded to prevent false-positive blocking by DansGuardian.
DansGuardian is using port 8080 (and connects to Squid on 3128). So what is 8090? It's an Apache server. One of the problems with programs that aren't configured to use the proxy is that the users won't know why their connections are failing. The web site, known as a network billboard, displays a page informing them that their programs need to be configured to use the proxy and how to do it. This is much friendlier than just dropping the packets. DansGuardian uses ident2 to identify the user that is the source of the connection and applies the filtering rules specific to the filter group they are assigned to.
This configuration works very well with web browsers. Most use the system proxy settings through gconf on Gnome. Some need manual configuration, so I created default configuration files and put them in /etc/skel so that new user accounts have them at creation. Unfortunately, many other programs rely on environment variables to determine the proxy address, and due to a particularly annoying bug in Ubuntu's proxy configuration tool (gnome-network-properties) those variables aren't set correctly. Some are set by bash in terminal windows but not in the session, so any graphical program that doesn't use gconf fails to access the proxy correctly. It's easy to demonstrate. Open a terminal window and enter:
tail -f ~/.xsession-errors
Then create a custom application launcher in the panel and enter "printenv" for the command. Then just click it and check the output from tail. On my system, variables for "HTTP_PROXY" and the like aren't present. I created a fix for this. Just extract the file and add it to the end of ~/.profile and relogin. Run the tail/printenv commands again with a proxy set in System>Preferences>Network Proxy. Add this fix to /etc/skel/.profile to use it as the default for new user accounts.
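The gist of the fix (a reconstruction of the idea, not the exact file I distribute) is to export the variables from the gconf values:

```shell
# set_proxy_env exports the proxy environment variables that
# non-gconf programs expect, for a given proxy host and port
set_proxy_env() {
    proxy_url="http://$1:$2/"
    export http_proxy="$proxy_url"  HTTP_PROXY="$proxy_url"
    export https_proxy="$proxy_url" HTTPS_PROXY="$proxy_url"
    export ftp_proxy="$proxy_url"   FTP_PROXY="$proxy_url"
    export no_proxy="localhost,127.0.0.1" NO_PROXY="localhost,127.0.0.1"
}

# Pull the values from gconf when a manual proxy is configured
if command -v gconftool-2 >/dev/null 2>&1 &&
   [ "$(gconftool-2 -g /system/http_proxy/use_http_proxy 2>/dev/null)" = "true" ]; then
    set_proxy_env "$(gconftool-2 -g /system/http_proxy/host)" \
                  "$(gconftool-2 -g /system/http_proxy/port)"
fi
```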
Even with this fix, it is surprising how many Internet-using programs don't support proxies correctly. I tested every streaming media player I could find plus a few other programs; here are the results on my systems (Ubuntu 10.04 Lucid Lynx i386 and amd64):
Clementine (0.7.1): Neither Last.fm nor SomaFM works. Jamendo lists songs but doesn't play them, though this is due to Ogg problems at Jamendo. Unlike other players, Clementine's Jamendo plug-in is not configurable for MP3 so I couldn't work around it. Magnatune and Icecast work.
Rhythmbox (0.13.1): Jamendo failed to work. Magnatune was really slow to load.
Miro (4.0.3-82931155): Could find video podcasts but not download them (except VODO, which uses BitTorrent). Its integrated web browser would always show the network billboard for any other link in the side panel.
Banshee (2.0.1): Internet Archive links work. Live365.com and xiph.org show results but nothing plays (I can copy the xiph links to VLC and they play). Miro Guide works (unlike in Miro) but likes to freeze. The Amazon MP3 Store, Jamendo, Magnatune (both extensions), RealRadios.com, and SHOUTcast.com extensions fail to load. Last.fm would log in but not much else. Judging by ~/.xsession-errors, Banshee is an "exceptional" media player.
Gnome MPlayer (0.9.9.2): Nothing fancy but it functioned with the streams I tried.
VLC (1.0.6): About the same as Gnome MPlayer. A lot of complaints about some playlists like radio.wazee when it encounters unavailable entries. Needs a less ugly way to handle error messages with playlists of Internet streams since they are usually just alternate servers.
Google Earth (6.1): It would connect to the DB and you could navigate the worlds but none of the Panoramio pictures would show. Wikipedia entries wouldn't show after being enabled until the app was restarted. Even then, clicking on "Full Article" resulted in the network billboard page being shown (webkit?). Changing the preferences to use an external browser is an adequate workaround.
Totem (2.30.2): Functioned but was picky about some streams (radio.wazee).
gPodder (2.2): Useless.
Hulu beta functions, but it mostly relies on Flash.
Skype beta (188.8.131.52): Connected to their network without problems and I successfully called their sound testing service.
Sun Java Plug-in (1.6.0_26 in Firefox 3.6.24): Useless with a proxy. Even without a proxy you have to work around IPv6 bugs (Debian bug #618725). With that working the online test usually fails and I've found that Pogo.com Boggle Bash is a better test. Manually setting the proxy with jcontrol doesn't have any effect. Debian is dropping the plug-in so it may not matter.
FrostWire (5.1.5): Useless with a proxy. It uses Java so not surprising. It has its own proxy settings but it couldn't connect to anything even with manual settings.
Update - Added a few more tests:
Desura (110.22): Could log in and see items I had ordered (free demos) but could not download them for installation or show any web pages. Some of the links on the menu bar opened in Firefox but showed the network billboard. Apparently it was resolving the links (maybe by querying their servers) to localhost:8090 and then sending that to the default browser, even though Firefox could access the Internet through the proxy without problems.
Konqueror (4.4.5): No problems (KHTML).
Epiphany (2.30.2): No problems (webkit).
X-Moto (0.5.9): No problems. Can use environment variables, manually-specified proxy, or SOCKS proxy.
DraftSight (Beta V1R1.3): Couldn't connect to the registration server initially. The browser in the Home panel showed the network billboard. Setting the proxy manually in "Tools>Options>System Options>General>Proxy server settings" and restarting allowed the registration to function but not the Home panel browser. I found that reapplying the proxy settings (without changing anything) then right-clicking the Home panel and reloading it fixed the problem for that session, but it would reoccur if DraftSight was restarted.
Clarification: My proxy configuration doesn't use authentication or SOCKS. My bug work-around script supports the environment variables for authentication but I didn't test it.
Update 20111202: I removed Sun Java because of the security problems and switched to OpenJDK/IcedTea6 (1.9.10) but it didn't do any better. I did try FrostWire again with a manually specified proxy but it had no effect. I did come across an interesting Java library for proxy detection named proxy-vole but it won't solve my immediate problem.
Update 20111204: Corrected the DansGuardian/Squid port usages mentioned in the article and added a forgotten DansGuardian anti-bypass iptables rule. They now match my test environment.
I think part of the problem is that developers test against a proxy and, if the program works, it's assumed to be proxy-compatible. That can be misleading, especially when multiple components are involved, as some may use the proxy while others access the network directly (Miro being a prime example). Adding some iptables rules to drop anything bypassing the proxy would close that testing hole.
Here are some references for shell script developers, man page creators, README writers, etc. While documentation styles are a bit haphazard and vary with OS and programming language, there are some standards.
For man pages see man-pages(7). What does that mean? You open a terminal window then type:
man 7 man-pages
The Debian Policy Manual says where the different documentation files should be located but not what they should look like.
I'm not going to admit to following these but please post any other IT technical writing style guides you know of. :D