Diary and notebook of whatever tech problems are irritating me at the moment.

20081026

Windows GUI vs. Linux Command Line Myths

Undoubtedly you've heard the old cliché that Windows is easier to maintain because it has GUI tools for everything while Linux requires command lines and a terminal. Any experienced Windows administrator knows the point-and-click GUI tools don't cover everything. Likewise any experienced Linux administrator knows there are many GUI tools for Linux configuration, but terminal shells are available on ANY system regardless of how big or small, and the ability to script any action in a platform-neutral way is too useful to give up. I just encountered yet another situation on XP that required a command-line fix, and it highlights the ignorance of many fanboys about the reality of Windows system administration.

I recently installed Windows XP Pro from scratch on a dual-boot system. I normally install Windows first as it doesn't play well with other OSes when it comes to the boot loader. I was using an original XP OEM CD with SP1 integrated. After installation I copied over SP3 and installed that as well. Doing it this way reduces the number of update/reboot/update cycles I have to go through with Windows Update and reduces the risk of an exploit before the process is complete. After rebooting, I ran Windows Update and went through the usual Windows Update update, Installer update, activation, and WGA check. I then installed all of the critical updates. There were a surprisingly large number of them considering I already had SP3. Reboot again, run WU again, and install the .NET Framework runtimes, IE7, Media Player 11, and more updates for the updates. Reboot again and go back to WU again. Install more updates for the updates and everything else I just added. Or at least I tried to, as they refused to install, reporting "failed" for all of them. I went through the typical diagnostics Windows admins have learned over the years - deleting temp files, clearing the browser settings, and attempting to install each update individually - to no avail. Some Google searching turned up a blog posting about Wups2.dll not being registered properly if the system is updated through WU and not rebooted before SP3 is installed (KB943144). Of course this doesn't explain my situation as WU hadn't been used before the service pack was installed. The workaround requires stopping the WU service, manually registering the dll from a command window, and restarting the service. This fixed my problem.
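For reference, as I recall from the KB article the workaround boils down to three commands in a command window run from an administrator account (check KB943144 itself for the authoritative steps):

```bat
rem Stop the Windows Update service, re-register the update DLL, restart it
net stop wuauserv
regsvr32 %windir%\system32\wups2.dll
net start wuauserv
```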

This isn't an unusual repair process for a Windows system. Even for Vista there are plenty of examples of command window (cmd.exe) and regedit repair instructions in Microsoft's support pages. You can ignore all the myths floating around the Internet about never having to use Unixy command lines when administering Windows systems because of the wonderful graphical tools. On Windows there are many tasks that are impossible to perform with graphical tools or are just a lot easier from a command window. The only way to avoid command line tools or regedit entirely is to write a custom graphical tool that handles those specific situations (similar to "compiling a kernel" comments from Microsoft fanboys). The fanboys will point out that regedit is a graphical tool but the reality is that it isn't much more of a "tool" than Notepad (which was used in the pre-registry days with win.ini and system.ini). An IT manager that hires an admin that doesn't know how to use regedit or command-line tools should themselves be replaced. When screening job applicants I've encountered many "certified" admins that didn't know anything about maintenance outside of the graphical tools (or even basic hardware troubleshooting for that matter). Surprisingly, I've also worked with software engineers that had a paranoid fear of even regedit. It's like they've been brainwashed into thinking that the only "proper" way to work with the registry is to use an API and approved function calls. Apparently they haven't experienced the "fun" of trying to remove auto-starting malware entries from it.

Because of the emphasis on graphical tools, the skill of working at a low level with the Windows OS is a dying art. While graphical tools lower the barrier to entry into system administration, they also invite fools (with only superficial skill) to enter (and get certified) without the low-level skills valuable for troubleshooting. Graphical tools provide them a flower-strewn path to anywhere they want to go but when a situation calls for them to go off the path they are lost - much to the pleasure of seasoned consultants who will guide them back to safety for a hefty fee. System administrators are not the types of users that recovery disks were intended for but unfortunately a lot of amateur admins rely on them.

The fundamental limitation of graphical tools is that trying to design an interface for every conceivable configuration option, troubleshooting situation, and maintenance function ends up making the tool more complex and time-consuming to use than the task itself. There are occasions when GUIs are easier than command lines, but that is usually a symptom of an over-complicated design in the underlying system rather than a practical improvement in efficiency. The hierarchical structure and relationship of keys and values in the Windows registry is relatively simple but the file format makes regedit a necessity. Typing a registry key path into a command line application like reg.exe, especially one that includes a GUID, is painful. On Linux you can experience a similar difficulty when trying to work with an XML configuration file like the one pam_mount now uses.

Graphical configuration tools like regedit are not unique to Windows. Gconf-editor provides a similar interface to the Gnome GConf settings database. But the terminal isn't going away anytime soon as it's too powerful, and even on Windows the DOS-derived command window is still present. Windows admins have learned to live with its limitations, switched to higher-level programming languages, or extended it with third-party utilities like KiXtart (which I've used). Windows PowerShell is Microsoft's attempt to replace this last remnant of the DOS era and its legacy syntax. This may be their admission of the limitations of a GUI, or just a response to the popularity of headless systems in data centers and the need for a replacement for a 20-year-old shell. I haven't tried PowerShell myself as I moved away from the Windows platform before it was released. With the availability of virtualization I now just use Windows as a bloated runtime for legacy applications and I don't need to do scripting anymore (although I'll admit to playing with batch files in FreeDOS once in a while).

20081024

Disabling Suspend and Hibernate Buttons in XFCE

A few months ago I showed how to disable the buttons in Gnome and KDE. Occasionally, I use Xubuntu with XFCE on older systems with limited memory available. Disabling the buttons is easier in XFCE. Simply go to Applications > Settings > Settings Manager > Sessions and Startup > General and disable the show options for both buttons.

The problem is that this setting is per-user so you have to do it for each user account individually. The settings are stored in "~/.config/xfce4-session/xfce4-session.rc". If you want to use it as the default for new accounts then copy it to the equivalent path in /etc/skel and set the ownership to root. The skel directory is used to create the default home directory structure for new users and the ownership will be changed to the user's account automatically.
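The copy can be scripted. Here is a sketch using throwaway directories in place of a real home and /etc/skel so it runs unprivileged; the "[Session]" content is a placeholder, not the actual xfce4-session.rc contents, and in real use SKEL would be /etc/skel with the commands run as root:

```shell
# Stand-ins for an existing user's home and /etc/skel:
USERHOME=$(mktemp -d)
SKEL=$(mktemp -d)
# Fake per-user settings file (placeholder contents):
mkdir -p "$USERHOME/.config/xfce4-session"
printf '[Session]\n' > "$USERHOME/.config/xfce4-session/xfce4-session.rc"
# Recreate the same relative path under skel and copy the settings file:
mkdir -p "$SKEL/.config/xfce4-session"
cp "$USERHOME/.config/xfce4-session/xfce4-session.rc" "$SKEL/.config/xfce4-session/"
ls "$SKEL/.config/xfce4-session"   # prints: xfce4-session.rc
```

With the real paths you would also chown the copied file to root:root, although adduser resets ownership when it copies skel into a new home anyway.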

20080920

Improving Windows XP guest in VMware Player

I use XP in VMware Player to run some CAD applications on my Ubuntu system. I don't actually have to use XP for them as they function under Wine but I've been too busy to reinstall them and recreate their configurations. This setup works more or less but there are a few bugs and performance problems I've had to find workarounds for.

The first thing you need is RAM, obviously. XP will function with 256MB of memory allocated to the VM but you'll need more for any app larger than Notepad. The host desktop (Gnome in my case) also needs a lot; how much depends on what native apps you have running. Open up Firefox with a bunch of tabs with Flash, Java, and Acrobat plug-ins active and you can easily use up 1GB total. For the VM you want to allocate enough for your guest applications but not more than the host can provide, or what the guest thinks is RAM will be paged out to disk by the host and everything gets incredibly slow. If your host is a bit short on RAM then try switching to a lighter desktop like XFCE (Xubuntu), LXDE (Ubuntu Lite), Openbox, or even just xterm. If you're building a new system and plan to use VMs a lot then I recommend getting at least 2GB of RAM.

If you have a lot of memory available then you can improve speed in an XP guest by disabling the paging file. You'll find a lot of web sites warning against this but it's no more risky than running out of both RAM and disk space with the paging file active. If your guest apps use more than what's available then you will get an out of memory error either way, except with a paging file active you will often trash the registry in the process as it can't be written to. To turn it off go to Start > Control Panel > System (or press the logo key + Pause), Advanced > Settings (Performance) > Advanced > Change (Virtual Memory). In the Virtual Memory window select "No paging file", Set, Ok, then apply the setting (in Windows terms "apply" means reboot). Afterwards you may notice that the Windows Task Manager (Ctrl-Alt-Delete or logo key + R and then run taskmgr) will still show PF Usage >0 in the Performance tab. Ignore it. I'm not sure if it's including some other non-memory related temporary file usage or is just an estimate but the important thing is that the hidden pagefile.sys in the root of the drive is gone and the setting can be confirmed in the registry.

Because I do a lot of file management from both Nautilus and the VMware guest I set up a shared folder and redirected the Windows special folder "My Documents" to the share. To set up a shared folder in VMware Player you need to manually edit the guest's vmx file. Here is a sample of the entries:

sharedFolder.option = "alwaysEnabled"
sharedFolder0.present = "TRUE"
sharedFolder0.enabled = "TRUE"
sharedFolder0.readAccess = "TRUE"
sharedFolder0.writeAccess = "TRUE"
sharedFolder0.hostPath = "/home/user/vmware/shared"
sharedFolder0.guestName = "shared"
sharedFolder0.expiration = "never"
sharedFolder.maxNum = "1"

You may have to enable it from the Player controls bar under Player > Shared Folders. The shared folders show up as a network location "\\.host\Shared Folders\shared". To set My Documents to this location right-click My Documents > Properties > Target then enter the path in the Target box and move your existing files there if desired. All of your applications should now be able to access the files the same as from the original location. I emphasize "should" as special folder usage in Windows is a Microsoft "suggested" practice that is not enforced by file system permissions. Some apps access the My Documents folder directly by an absolute path like "C:\Documents and Settings\User\My Documents" and they will ignore the target change. It also doesn't affect apps written in the "traditional" DOS programming style with behavior like writing user files to the same directory as the application executable.

This method eliminates having to synchronize files between the guest and host. You could also set up a Samba share on the host or another system and redirect My Documents to that instead but you will need centralized logins between your Linux and Windows systems to prevent permissions problems. One behavior you need to be aware of when using shared folders is that VMware will often lock the files while the guest is running.

One problem I discovered with shared folders is that there is a long delay in accessing them. I found out the delay is caused by Windows trying to find a system named ".host" on the network and the solution is to define it as local host. Browse to the directory "C:\WINDOWS\system32\drivers\etc". If there is no "lmhosts" file present then copy lmhosts.sam to lmhosts. Edit it with Notepad and add the line "127.0.0.1 .host". Apply it (reboot again) and the delay should be gone.

I'm using a multi-head (non-Xinerama) monitor setup where I can run a VMware guest full screen on one monitor and have the Gnome desktop (or another VMware guest) on the other. One problem that I've encountered is that the Ctrl, Alt, and Shift keys would occasionally stop responding and eventually any Linux app I tried to type into would crash. This is a known bug in VMware and the solution is to run "setxkbmap". Since even a terminal window can crash when the problem is occurring I found it easiest to create a custom application launcher on the Gnome panel so I can just click it with the mouse. The fix is instantaneous and doesn't require VMware to be shut down.

Another oddity with Player on my multi-head system is that if it's running on the secondary monitor and I switch to a VT (Ctrl-Alt-F1, etc.) the vmplayer process disconnects from the vmware-vmx process and the guest window is gone when I switch back. It doesn't happen on my primary monitor. If I run Player again it will reconnect to the running VM if I tell it to run the same one again so it's not fatal, just annoying.

I mentioned above that I sometimes run two VMs at the same time. To save time you can use the same base VM and make duplicates of it for each, with some minor modifications. You can also use them on different PCs. Regardless, there are some caveats to running multiple XP VMs simultaneously.

First is licensing. Make sure you have a valid license for each one. It's possible to change the product key after installation which will allow you to re-authenticate with Microsoft if necessary. This will save you from having to perform a reinstall of XP from scratch on each system. One thing to remember about VMware - it doesn't virtualize everything (the host's CPU for instance) and some parameters like the VM's MAC address are unique to each guest. These can be used by Windows Genuine Advantage to identify the system. Changing them can cause a revalidation prompt.

Second, some things must be different for each guest: the VM directory (obviously) and the aforementioned MAC address. The MAC address needs to be different in order for your DHCP server to assign a unique IP address to each one. By default Player uses an auto-generated MAC address which appears to be based on the VM UUID, so copying a VM will result in the same UUID and MAC address. To change it you need to change the UUID or set the MAC address manually in the vmx file, either of which can set off WGA revalidation. VMware also requires static values to come from a specific address range, 00:50:56:00:00:00 - 00:50:56:3F:FF:FF. Example:

ethernet0.addressType = "static"
ethernet0.Address = "00:50:56:3F:FF:FD"

Another parameter you may want to change is the displayed name of the VM which is shown in the "recently used" VM list in Player:

displayName = "Windows XP Professional - Testing"

On one of my systems Player would start misbehaving some time after installation. The guest VM would operate way too fast, which would cause XP to freeze during boot, probably due to timing problems with devices between the guest and host. This bug is caused by VMware misidentifying the host's maximum CPU speed due to power management or maybe ACPI problems. Minor timing problems often show up as creeping RTC (real time clock, aka "time of day") errors. The solution is to manually set the speed in "/etc/vmware/config". For a 2.53GHz CPU (cat /proc/cpuinfo):

host.cpukHz = 2530000
host.noTSC = TRUE
ptsc.noTSC = TRUE

There are many other parameters you can tweak in the vmx file. Some I've found useful are ide1:0.fileName = "/dev/scd0", which had to be changed when IDE device names changed in recent kernels, and the various "present" and "startConnected" parameters for specifying the default state of devices.

20080713

Abusing your deb package manager

Normally all applications should be installed using your distro's package manager in order to set up dependencies correctly (like libraries). Once in a while you may encounter either a broken package database or a synchronization problem due to hardware faults or naughty user behavior (like manually deleting an application's files). The solution to a broken package problem is to first let the package manager try to fix it. On Debian or Ubuntu systems Synaptic has a menu option to fix broken packages, the console package manager "aptitude" has a broken package filter, and the command line tool apt-get has an "-f" option. But there are limits to what kind of a mess they can fix and sometimes you have to tread into the risky world of tool abuse to get the job done. One of these methods is dpkg's "force" options. For example, I wanted to reinstall the free Linux version of AVG Anti-Virus as the previous version was failing to update its virus definition database due to a script or license key problem. I'm not worried about "theoretical" Linux malware but I do need to check Windows files for viruses prior to using them with Wine. But I'm using Ubuntu 64-bit on my AMD Phenom system and AVG only has a 32-bit version available, which the package managers won't install due to the architecture mismatch. That's not as serious a problem as it looks since most any 64-bit distro and CPU can also run 32-bit applications, but the package managers take a very narrow-minded view of it. To get around it I used:

dpkg -i --force-architecture avg75fld-r51-a1243.i386.deb

But this didn't work when I first tried it. I had been doing some file management with some other manually-installed applications (i.e. not controlled by the package manager) and apparently deleted a link or two that the AVG package scripts looked for when uninstalling an old version. This may explain why it was failing to update in the first place. The failing "prerm" script caused dpkg to abort the install and there wasn't any command line option I could find to force it past the problem. I wasn't worried about breakage as I expected the new package to replace all the files of the old but I didn't want to extract the files from the deb and move them manually. The Debian/Ubuntu package management system keeps track of package status in "/var/lib/dpkg/status" and it's an easy-to-read text file. I searched for "avg75fld" and found this line:

Status: install ok installed

I changed the ending "installed" to "not-installed". I ran dpkg again and AVG installed without problems since dpkg stopped trying to remove the previous installation first.
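If you'd rather script the edit than open the file in an editor, a sed range expression scoped to the package's stanza does the same thing. This sketch practices on a throwaway copy rather than the real /var/lib/dpkg/status (back that up first); the fake database contents are just for demonstration:

```shell
# Build a minimal fake status database:
STATUS=$(mktemp)
cat > "$STATUS" <<'EOF'
Package: avg75fld
Status: install ok installed
Architecture: i386

Package: other
Status: install ok installed
EOF
# Flip the Status line only inside the avg75fld stanza
# (stanzas are separated by blank lines):
sed -i '/^Package: avg75fld$/,/^$/ s/^Status: install ok installed$/Status: install ok not-installed/' "$STATUS"
grep -B1 '^Status:' "$STATUS"
```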

I don't recommend abusing your package manager out of habit as it will lead to more problems in the future. Since a package manager has very significant control over your system and often is what sets distros apart from one another, you should learn how it operates and try to work with it instead of against it. For Ubuntu and Debian systems see Debian FAQ and Ubuntu documentation.

A word of warning - although Ubuntu is based on Debian, the packages in their repositories are not the same so don't mix them. A stand-alone deb package shouldn't be a problem as long as no other packages depend on it. Several projects not in the repositories, like Webmin and Lost Labyrinth, are distributed this way.

20080517

Setting up a local repository with debmirror

I set up a lot of PCs and while I have a fast 10Mb/s Internet connection I wanted to better utilize my faster internal network bandwidth. With a new distro release it's less important as most of what I need is on the CD, but as updates are released I end up downloading increasing amounts of data for each install. I've been doing lazy tricks like copying /var/cache/apt/archives to a network-shared directory but it's sloppy and multiple versions of packages accumulate. Setting up a local repository was the answer for me.

You use debmirror to create a local repository for Ubuntu and Debian systems. Instead of duplicating an entire repository server you can select by release (feisty, gutsy, hardy), section (main, universe, multiverse, backports), architecture (i386, amd64), and using regular expressions. An alternative to creating a mirror is to create a caching proxy using apt-cacher. The advantages of one over the other depend on how similar the package selection of each client is. Caching is more efficient for serving similar systems, better at handling limited storage space on the server, and often has an earlier data transfer break-even point (the amount of upstream data transfer saved with it versus without). Depending on what packages are stored locally, a mirror is more efficient with diverse systems but you have to plan out the space requirements beforehand. The data transfer break-even point will take much longer to reach as many unneeded packages will be transferred unless you are very selective about which portions of the repository to mirror. With apt-cacher there can be less latency when subsequent requests for a newly published package are received, as the initial request retrieves it immediately while debmirror updates are usually controlled via a cron job. Currently both debmirror and apt-cacher require the clients to be configured to use the new source so there is no administrative savings there. But apt-cacher does have the potential to support an intercepting (a.k.a. transparent) proxy configuration if Debian bug #352140 is resolved, which would eliminate client configuration.

Because I had space and rather diverse client requirements I went with the mirror approach using debmirror. I relied on several sources for information especially BobSongs' How To but he didn't include some of the third-party sources I needed and disabled some of the secure apt checks. Not every repository uses these security features but I prefer to err on the side of caution.

First, you need to install debmirror using "apt-get install debmirror", aptitude, Synaptic, or Adept. Next set up a location to store and share the mirror. I set the root of mine to "/srv/public/linux/distributions/Ubuntu/mirror". I also have ISOs and other files in this tree which explains the depth. The "/srv" directory is the FHS-recommended served data location but you may prefer to dump it somewhere in /var. Then you select the repositories to mirror and create a shell script to run debmirror. You can create a cron job to run it daily to stay updated. Finally, to share the mirror you can use anything that apt-get supports. Refer to "man sources.list" for the options. I'm not going to duplicate the many HOW-TOs on setting up servers here.
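As an example, if the mirror is served over http from a host named "mirrorhost" (a placeholder for your server's name), the clients' /etc/apt/sources.list entries would look something like this, with the path adjusted to wherever your server maps the mirror directory:

```
deb http://mirrorhost/ubuntu hardy main restricted universe multiverse
deb http://mirrorhost/ubuntu hardy-security main restricted universe multiverse
deb http://mirrorhost/ubuntu hardy-updates main restricted universe multiverse
```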

For my systems I needed the Hardy i386 and amd64 versions of packages in the general Ubuntu, Medibuntu, Wine, Google, and Skype repositories. First you need to set up a key ring for debmirror, which defaults to "~/.gnupg/trustedkeys.gpg". On systems like Ubuntu which disable direct root logins and use "sudo" instead, I create a special "administrator" account which is in the admin group and has a high-strength password (since the password provides root access). Within that account I created the keyring "/home/administrator/keyrings/mirrorkeyring/trustedkeys.gpg" using the following command to import the Ubuntu archive keys:

gpg --no-default-keyring --keyring /home/administrator/keyrings/mirrorkeyring/trustedkeys.gpg --import /usr/share/keyrings/ubuntu-archive-keyring.gpg

To this I added the other keys:

wget -q http://packages.medibuntu.org/medibuntu-key.gpg -O - | gpg --keyring /home/administrator/keyrings/mirrorkeyring/trustedkeys.gpg --import

wget -q http://wine.budgetdedicated.com/apt/387EE263.gpg -O - | gpg --keyring /home/administrator/keyrings/mirrorkeyring/trustedkeys.gpg --import

wget -q https://dl-ssl.google.com/linux/linux_signing_key.pub -O - | gpg --keyring /home/administrator/keyrings/mirrorkeyring/trustedkeys.gpg --import

gpg --keyring /home/administrator/keyrings/mirrorkeyring/trustedkeys.gpg --import rpm-public-key.asc

The last key is for Skype. Their Linux support is minimal and the repository is kind of a mess. They moved the key on their server (or lost it) but I had a copy. The MD5 hash of my key (md5sum rpm-public-key.asc) is 2f595c0efe5d26fb4909f3347670746d and you can get a copy from this link.

Next I created my debmirror-hardy.sh script and put it in /usr/local/bin. Most of my parameters are explicitly defined on the debmirror command line although you could create a configuration file instead (the default is /usr/share/doc/debmirror/debmirror.conf). I specify the parameters in the same order as the corresponding directories appear in the repository path. I used rsync with the main Ubuntu repositories as it is supposedly faster but none of the others support it so they use http. Notice that the root for an rsync server is specified with a preceding colon (:). The "md5sums" parameter adds MD5 checking but you may want to skip it to speed up the mirror process. The "nosource" parameter skips source packages as the only time I need them is when I compile something outside of the distro and even then I only need the headers. I do compile Wine to perform testing on my primary system but I get it straight from the source tree using git. The "progress" option shows a download progress meter and I tee everything to the console so I can watch if I'm bored. It also creates a couple of logs in /var/log and compresses the old ones to save space.

#!/bin/sh
# debmirror script v1.1 for Ubuntu Hardy Heron
# Copyright 2008 Jeff D. Hanson (jhansonxi@gmail.com)
# Released under GNU General Public License version 3
# v1.1 - added debian-installer section, post chown/chmod
# fix, size summary, date/time

DEBMLOG=/var/log/debmirror.log
MIRRORDIR=/srv/public/linux/distributions/Ubuntu/mirror
export GNUPGHOME=/home/administrator/keyrings/mirrorkeyring

if test -s $DEBMLOG
then
    test -f $DEBMLOG.3.gz && mv $DEBMLOG.3.gz $DEBMLOG.4.gz
    test -f $DEBMLOG.2.gz && mv $DEBMLOG.2.gz $DEBMLOG.3.gz
    test -f $DEBMLOG.1.gz && mv $DEBMLOG.1.gz $DEBMLOG.2.gz
    test -f $DEBMLOG.0 && mv $DEBMLOG.0 $DEBMLOG.1 && gzip $DEBMLOG.1
    mv $DEBMLOG $DEBMLOG.0
    cp /dev/null $DEBMLOG
    chmod 640 $DEBMLOG
fi

# Record the current date/time
date 2>&1 | tee -a $DEBMLOG

# Ubuntu mother lode. At least it supports rsync.
echo "\n*** Ubuntu general ***\n" 2>&1 | tee -a $DEBMLOG
debmirror --nosource --method=rsync --md5sums --progress \
--host=us.archive.ubuntu.com \
--root=:ubuntu \
--dist=hardy,hardy-security,hardy-updates,hardy-backports \
--section=main,main/debian-installer,restricted,restricted/debian-installer,\
universe,universe/debian-installer,multiverse,multiverse/debian-installer \
--arch=i386,amd64 \
$MIRRORDIR/ubuntu \
2>&1 | tee -a $DEBMLOG

# Canonical's rather lonely partners repo
echo "\n*** Canonical partners ***\n" 2>&1 | tee -a $DEBMLOG
debmirror --nosource --method=http --md5sums --progress \
--host=archive.canonical.com \
--root=/ \
--dist=hardy,hardy-backports,hardy-proposed,hardy-security,hardy-updates \
--section=partner \
--arch=i386,amd64 \
$MIRRORDIR/canonical \
2>&1 | tee -a $DEBMLOG

# Medibuntu fun stuff
echo "\n*** Medibuntu ***\n" 2>&1 | tee -a $DEBMLOG
debmirror --nosource --method=http --md5sums --progress \
--host=packages.medibuntu.org \
--root=/ \
--dist=hardy \
--section=free,non-free \
--arch=i386,amd64 \
$MIRRORDIR/medibuntu \
2>&1 | tee -a $DEBMLOG

# Wine's latest bugs
echo "\n*** Wine ***\n" 2>&1 | tee -a $DEBMLOG
debmirror --nosource --method=http --md5sums --progress \
--host=wine.budgetdedicated.com \
--root=/apt \
--dist=hardy \
--section=main \
--arch=i386,amd64 \
$MIRRORDIR/wine \
2>&1 | tee -a $DEBMLOG

# Our friends at Google. Including a leading / in the root causes failure.
echo "\n*** Google ***\n" 2>&1 | tee -a $DEBMLOG
debmirror --nosource --method=http --md5sums --progress \
--host=dl.google.com \
--root=linux/deb \
--dist=stable \
--section=main,non-free \
--arch=i386,amd64 \
$MIRRORDIR/google \
2>&1 | tee -a $DEBMLOG

# Skype's half-baked linux contribution. Located in a half-baked repository.
echo "\n*** Skype ***\n" 2>&1 | tee -a $DEBMLOG
debmirror --nosource --method=http --md5sums --progress --ignore-release-gpg --ignore-missing-release \
--host=download.skype.com \
--root=/linux/repos/debian \
--dist=stable \
--section=non-free \
--arch=i386 \
$MIRRORDIR/skype \
2>&1 | tee -a $DEBMLOG

echo "\n*** Fixing ownership ***\n" 2>&1 | tee -a $DEBMLOG
find $MIRRORDIR \( -type d -o -type f \) -exec chown root:root '{}' \; \
2>&1 | tee -a $DEBMLOG

echo "\n*** Fixing permissions ***\n" 2>&1 | tee -a $DEBMLOG
find $MIRRORDIR \( -type d -o -type f \) -exec chmod u+rw,g+rw,o+r-w {} \; \
2>&1 | tee -a $DEBMLOG

echo "\n*** Mirror size ***\n" 2>&1 | tee -a $DEBMLOG
du -hs $MIRRORDIR 2>&1 | tee -a $DEBMLOG

# Record the current date/time
date 2>&1 | tee -a $DEBMLOG

This works very well so far but it took a lot of time to figure out. One thing I noticed is that apt-get handles some repository structures better than debmirror. Google's repository had an oddity, possibly due to a redirect, that caused debmirror to not find the Release file or detached *.gpg signature unless I left out the preceding / from the root parameter. Skype's repository has a Release file but not where debmirror could find it. They don't sign it either.

UPDATE: I've made some changes to the script. I've been having fun with PXELINUX and performing Ubuntu installs by netbooting. This required the addition of the debian-installer portion of the repositories. I also added time/date timestamps and a final size check (about 37GB for everything so far). One problem I haven't found the solution for is that when I put the script in /etc/cron.daily it doesn't run.

UPDATE2: Thanks to the comment by sq5nbg I figured out the problem with cron.daily. The crontab entry for it uses run-parts to run the executables in the directory. According to its man page it is picky about the file names it will accept and a period is not a valid character. You either have to rename the file or symlink to it. The run-parts utility is in debianutils and bug #38022 reports this issue. It's marked as a wishlist item since the restriction is documented in the man page. I added a note about this to the cron page in the Ubuntu Wiki.
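run-parts can report which files it would execute, so you can verify the naming problem (and the fix) without waiting a day for cron. A quick demonstration against a throwaway stand-in for /etc/cron.daily:

```shell
# One period-free name and one with the offending ".sh" extension:
D=$(mktemp -d)
printf '#!/bin/sh\n' > "$D/debmirror-hardy"
printf '#!/bin/sh\n' > "$D/debmirror-hardy.sh"
chmod +x "$D"/*
# Lists only the names run-parts will accept; the .sh file is
# silently skipped:
run-parts --test "$D"
```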

I need to point out that the script should be edited to use a server nearest (in Internet terms) to you instead of the ones specified. This especially applies to the Ubuntu mirror (us.archive.ubuntu.com). Use the Ubuntu mirror list page to find one that has the packages and protocol you want. This reduces the load on the primary servers.

UPDATE3: You can use your local mirror with the Minimal CD to install Ubuntu on systems that don't support network booting. First, set up a server that provides access to the mirror directory. I used Apache2 to serve them via http and put a link to my debmirror directory in "/var/www". If you are using an http server you should be able to navigate the debmirror directories using any web browser. If you can't see them then the installer won't either. After setting up the server, boot the CD and proceed as you normally would through the boot settings and locale selection. After specifying the network configuration and hostname you will see the "Choose a mirror of the Ubuntu archive" screen where it wants you to select the "Ubuntu archive mirror country". Hit the Home key to jump to the top of the list and select the "enter information manually" option. For the "Ubuntu archive mirror hostname" enter your server's hostname, FQDN, or IP address. Do not specify a protocol prefix (http://) or any directory path on that screen. I'm not sure if the installer tries all protocols, defaults to http, or guesses based on a specified port number but I didn't have to tell it what to use. On the next screen, enter the "Ubuntu archive mirror directory" with the full server path to the directory containing the dists, pool, and project directories. If you do it wrong it won't be able to find the "Release" file and you will get a "bad archive mirror" error.

20080422

Penguicon 6.0

I attended Penguicon 6.0 over the weekend. It met my expectations, which were rather high, as I figured it would. More importantly, two friends of mine also enjoyed it. One had been there last year and the other was a n00b. A third had attended last year and would have returned but unfortunately had family matters that took priority. I did advertise a bit around the Alpena area but I'm not sure how effective it was. Next year I think I'll try to get enough people to fill a van or maybe a small bus and reduce the fuel cost of the trip.

I did help out with the installfest but only saw a half-dozen attendees take advantage of it. I fixed a networking problem on the laptop of a WLUG member, which turned out to be a manually set network configuration made by another member. It was the direct result of WCC not having a DHCP server on their WiFi.

The other was a volunteer who wasn't able to get any distro to install on an old HP laptop with a 500MHz K6 CPU. The problem was obvious - 64MB of RAM. I put Puppy 4 beta on it, which seems stable and looks much better than v3. I didn't have time to try setting up the network but at least he had something to start with. He was planning on installing more memory now that the problem was identified. It would be a good candidate for either Puppy or Xubuntu.

The only major issue at the conference was the Hilton's overloaded WAN connection. By Saturday night it was unusable. There was a small LAN party set up in a large and mostly empty ballroom, although there may have been more players in the guest rooms. It looked like a local game so it probably wasn't affecting the WAN. I think a LAN party is kind of a waste of space as most attendees can play games online at home and there are better things to do at the conference. Especially since they were Windows systems. They did have some nice large LCD and HDTV screens. The computer lounge had a proxy with ISOs and repository mirrors and there didn't appear to be a large number of users on the public PCs, so not much load there either. A virus on some Windows system somewhere could have been the culprit but there's no way to tell. Next year the conference is moving to a different hotel as it's outgrown the Hilton, so it will be a different environment.

20080401

Laptop problems with Ubuntu Hardy Beta

So far my Hardy experience has been good. I've had a few crashes with Firefox 3 but its session recovery works well so it's been only a minor annoyance. I have mixed feelings about the new live searching in the address bar as I'm used to it only picking up URLs as in Firefox 2. When it hits on page titles and other data it's distracting.

Some of the bugs I encountered in alpha 5 have been fixed. SCIM can now be disabled via System > Administration > Language Support > Enable support to enter complex characters (unchecked). The Gnome display properties applet and Gnome settings daemon no longer crash.

The Java plug-in problem may have been my fault. The correct package is sun-java6-plugin and I may have installed a different one by mistake or it wasn't in the repo at the time.

The problem with X freezing the system during a VT switch or restart/shutdown is still present. According to bug #204603 it's an Intel driver issue which requires Option "ForceEnablePipeA" "true" to be specified in the xorg.conf Device section. This may be a repeat of a previous bug.
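For reference, the option goes in the Device section of /etc/X11/xorg.conf; the Identifier below is a placeholder for whatever your existing section uses:

```
Section "Device"
	Identifier "Intel Graphics"
	Driver     "intel"
	Option     "ForceEnablePipeA" "true"
EndSection
```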

The battery monitor bug only occurs with one account and only with the Power Manager applet. The Battery Charge Monitor applet shows the correct state, and in other accounts on the laptop both applets work correctly. Minor config problem somewhere.

A major usability problem I've noticed is that the drive mounting/browsing options that were in the "Storage" tab in "Removable Drives and Media" have been moved to Nautilus preferences. The menu entry hasn't been renamed, however, which is confusing because it no longer affects removable media. Bug #210499 was already filed about it so I'm not the only one who noticed.

One glaring omission in new Hardy features is the lack of networking support in Friendly Recovery - the menu you now encounter after selecting recovery mode from the Grub menu. Unless your system is configured to use static IP/gateway/DNS you don't get any network access so apt-get is rather limited. On a DHCP network like most broadband and wireless connections you have to manually start dhclient first.
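From the recovery-mode root shell that amounts to something like this (eth0 is an assumption; substitute your interface):

```
# Bring the interface up and request a DHCP lease by hand:
ifconfig eth0 up
dhclient eth0
```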

20080309

Living on the edge with Ubuntu Hardy Alpha

I was having some problems getting Wine to compile with OpenGL support in Ubuntu Gutsy on my laptop. The config script could never find the OpenGL dev libraries. I had been messing around with learning package management and other things and I broke something somewhere and couldn't find it. Instead of reinstalling Gutsy I decided to test Hardy alpha 5. This way I could report any bugs I encountered so they could get fixed before the final release and also fix the Wine problem. The laptop is a Toshiba M35X-S114, a Celeron 1.3GHz with an Intel 855GME chipset. Its Linux compatibility is better than average but not spectacular. I expected breakage with Hardy and was not disappointed. Alpha testing is living on the edge as packages are constantly being updated, so system reliability can vary wildly from one update to the next. I would often start it up in the morning, install a dozen updates, then reload the package lists a few minutes later and find more.

The biggest immediate issue I encountered was X locking up randomly with the Intel driver. It would also lock the system whenever I switched to a virtual terminal. I've encountered the latter problem before when the framebuffer drivers used by the kernel for the splash screen conflict with the X drivers. I edited the Grub menu (/boot/grub/menu.lst) and changed the "splash" entries to "nosplash", which made the system usable. It now locks up rarely during use but fairly often during logouts, shutdowns, and reboots. No errors show up in the logs. I'll have to set up an SSH server on it so I can monitor it remotely the next time it hangs, assuming it's just X and not the kernel itself. If it is the kernel then the SSH server will die immediately and I probably won't see any error messages. In that case the solution would be to use a serial console, but the laptop doesn't have a serial port and I don't have a USB serial adapter. The advantage of a serial console is that the kernel handles it directly and "out of band" so networking issues don't affect it.
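The edit is just on the kernel line of each boot entry in menu.lst; the kernel version and root device below are placeholders for whatever your entries contain:

```
title  Ubuntu 8.04, kernel 2.6.24-16-generic
root   (hd0,0)
kernel /boot/vmlinuz-2.6.24-16-generic root=/dev/sda1 ro quiet nosplash
initrd /boot/initrd.img-2.6.24-16-generic
```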

Another possibly related graphics issue is that the Gnome display properties utility crashes whenever I attempt to use it (bug #198951). I use this to correct the screen resolution when a game or Wine application aborts and doesn't reset it back after changing it. I can do a "xrandr -s 1024x768" in the terminal for now.

Another constant annoyance is the crashing of the Gnome settings daemon at every login (bug #197153). There was an update that came through and fixed it but two days later another one caused the problem to return.

There are also issues with the battery monitor always reporting a 51% charge, with hal setting the cdrom (/dev/sg0) to group root instead of cdrom and complaining that it can't unmount the volume when the drive's eject button is used, and with SCIM, which I can't disable and which keeps changing my keyboard language randomly (bug #199314).

Firefox v3 beta works more or less. I like the new smart bookmark which lists recently used or bookmarked links. It crashes once in a while but I haven't determined if an addon or plugin is responsible. I installed the Sun Java plugin but Firefox doesn't seem to detect it and it's not listed in about:plugins.

In spite of these problems the system is usable and 3D in Wine now works.

20080214

Re: Krazy Kubuntu Annoyances

Carla Schroder (tuxchick) recently posted a Kubuntu critique on LinuxPlanet titled Krazy Kubuntu Annoyances. I saw this on both Linux Today and LXer.com but I'm replying here as commenting is annoying on both sites because they strip out most html tags.

I've been setting up an advanced home office network to replace my existing haphazard mess. I haven't been posting much lately because this is taking a lot longer than planned due to a changing design, learning the details of systems I only had superficial experience with, and Ubuntu bugs. Originally I didn't have a server or central authentication and was just sharing access with Samba the lazy way, with "map to guest = bad user" in smb.conf. I have IPCop on an old box for firewall, DNS, and VPN. Internet access is via an Ethernet-connected cable modem and there are five subnets for my office, household, wireless, public servers (unused), and an isolated one. The isolated network allows me to retrieve user documents from a malware-infected system (Windows of course) without it being able to access the rest of my network or the Internet. This arrangement worked for a while but my primary workstation was running out of disk space and I decided to set up a proper network and server to make it easier to control access and synchronize my data remotely. I also plan on building a Debian/Ubuntu repository mirror, public wifi with a "usage agreement" access page like many places have, and a public game server for some Linux games like Tremulous. Nothing really ridiculous but a lot of work as I've found out.

So far about half of my time has been spent diagnosing various problems that often turn out to be Ubuntu Gutsy bugs. They're not always server-related, but it's often hard to tell where the problem is when you're in the middle of a major restructuring and there are a large number of variables. About half have been data storage problems and the other half networking.

The storage issues are the result of me hitting the limit of Ubuntu's alternate installer. It's fine for basic setup with RAID or LVM but I had to make my life difficult by mixing PATA and SATA drives with RAID+LVM+LUKS/dm-crypt+pam_mount including encrypted root and swap. It didn't handle that well. I could go on for hours about the problems and other bug reports I filed but more on that later.

Networking-wise I encountered many problems including some of the ones Carla mentioned. Not all were bugs but just design decisions whose basis is hard to understand or track down. The ones we've encountered are by no means all as there are many bug reports at launchpad and on various forums about networking problems that don't occur with Knoppix or other distros. I'm using Ubuntu on most systems so some of my problems are Gnome-specific but many are not.

Network Manager seems to be the cause of a lot of complaints. First, just having the wrong ethernet chip will cause Network Manager to disconnect it for you even if it was working at boot. Or maybe it will refuse to shut down because of it. Then there are the general IPv6 issues. Of course you still have to deal with the normal industry-wide problems like not being able to resolve host names when using a laptop with multicast-DNS (Avahi) active on a Windows network using a something.local domain.

The link-local auto-configuration emulates Windows' behavior but I'm not sure if it's good or bad. On Windows networks I always used a class C network address, so if a PC ended up with a class B address I knew that it wasn't seeing the DHCP server and had auto-configured one.

I've had some issues with the hosts file not being set up properly, which seemed to cause a major slowdown in Gnome. It took me a while to straighten it out. The odd 127.0.1.1 entry confused me but apparently it was requested for compatibility and historical reasons.

For printer configuration I use the CUPS web interface if the graphical utilities don't suffice.

The Bluetooth support seemed to be driven by phone integration usability concerns but it was low priority according to the Gutsy blueprint. The Hardy blueprint has an entry for networkless installation fixes but I don't see anything regarding my drive setup problems, only post-installation management.

20080120

Disabling hibernate and suspend buttons

On systems with broken hibernate and suspend it's a good idea to disable the options in Gnome and KDE to prevent inadvertent usage and potential lock-ups.

In Gnome use gconf-editor. On systems using sudo to prevent direct root login, launch it with "gksu gconf-editor". Browse the configuration tree to:
/apps/gnome-power-manager/general
Uncheck "can_hibernate" and "can_suspend". Right-click them and select "Set as Mandatory". Users will probably have to relogin before it takes effect.
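The same mandatory keys can be set from a terminal with gconftool-2, which is easier to script across several machines (the config source path below is the standard Ubuntu location):

```
sudo gconftool-2 --direct \
  --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory \
  --type bool --set /apps/gnome-power-manager/general/can_hibernate false
sudo gconftool-2 --direct \
  --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory \
  --type bool --set /apps/gnome-power-manager/general/can_suspend false
```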

In KDE, create a file:
/usr/share/kubuntu-default-settings/kde-profile/default/share/config/power-managerrc
In this file add:
disableSuspend=1
disableHibernate=1

20080101

Find tricks

These are some find examples with moderately complicated regular expressions that I've used for administration tasks. Note that the regular expressions used by find, grep, and other programs come in variants, with both the old "basic" form and newer "extended" forms. Find defaults to Emacs-style regular expressions, but some of its tests like -name use "shell patterns" instead (see the sh man page). In the regex man page the "(!)" marker identifies some of the syntax and behaviour that may not be compatible with other regular expression implementations.

The first example cleans out the Unreal Tournament 2004 cache from the home folders of all users. The purpose of cleaning out the cache is that every time the client connects to a server that is using a map, vehicle, or other add-on that it doesn't already have locally it downloads it to the cache. The cache on a system of an avid on-line gamer will easily exceed many gigabytes and can run their home directory out of space on smaller drives. On some distributions, it is impossible to log in if there is no home space available.

find /home -regex '.*/\.ut2004\(/Cache\|/.*/Cache\)' -exec rm -rf {} \;

Breakdown:

. = metacharacter implying any single character

* = any quantity of the previous character (in this case any quantity of any character because of the "." metacharacter)

/ = matches the slash that precedes the directory name matched by the next part.

\.ut2004 = the backslash escapes "." so that it is treated as a literal period and not as the any-character metacharacter. Combined with the previous ".*/" it limits the results to the hidden /.ut2004 and not /..ut2004, /xut2004, or any other directory.

\(...\|...\) = This sets up a pair of branches with alternation, as indicated by the vertical bar. The parentheses define the range of expressions that make up each branch. In find's default Emacs regex syntax the parentheses and the vertical bar must be backslash-escaped to act as grouping and alternation; left unescaped they would be matched as literal characters. The single quotes around the whole expression already protect it from the shell.

\(/Cache\|/.*/Cache\) = The combined alternation limits results to .ut2004/Cache and any other items named Cache inside the .ut2004 directory. The parentheses are important - without them find will return /.ut2004/Cache and anything anywhere with a subdirectory named Cache (like in .mozilla).

The entire regular expression is protected by single quotes so the shell passes it to find as a single argument. You could also limit the results to directories by adding the "-type d" test.

-exec rm -rf {} \; = This tells find that for every item it returns it is to execute rm with the parameters -rf (to delete directory trees) followed by the path, which dynamically replaces the {}. The command line is terminated by an escaped semicolon.
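To see what the alternation matches without touching real home directories, you can try it on a scratch tree (the paths here are made up for the demonstration):

```shell
# Build a throwaway tree with two Cache dirs under .ut2004 and a decoy:
mkdir -p /tmp/retest/home/alice/.ut2004/Cache \
         /tmp/retest/home/alice/.ut2004/maps/Cache \
         /tmp/retest/home/alice/.mozilla/firefox/Cache
# Lists both Cache directories under .ut2004 but not the .mozilla one:
find /tmp/retest/home -regex '.*/\.ut2004\(/Cache\|/.*/Cache\)'
rm -rf /tmp/retest
```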

Here are some other find examples I've found useful.

Find any user's Mozilla/Firefox cache:
find /home -regex '.*/\.mozilla/.*/Cache'

Find any user's hidden trash directory:
find /home -regex '.*/home/[^/]*/\.Trash'

When installing updates to applications in Wine you will occasionally encounter duplicate file and directory name problems, sort of a reverse name collision. It can occur because Linux names are case-sensitive but Windows names are not. This means it is possible to have two files, "readme.txt" and "README.TXT", in the same Linux directory but not in a Windows one. If an application update is in the form of an executable or self-extracting archive, Wine will resolve capitalization differences and ensure that a replacement file from an update that has an upper-case name will correctly overwrite a target file with a lower-case name. But if the update is a zip or other archive and contains names with different case, then you can't just extract the files and copy them on top of the installed application's directories as duplicates will result. To work around this you either have to install a Windows archive utility like 7-Zip and use it to extract the files, taking advantage of Wine's name resolution, or manually change the names to match. Since Windows applications generally don't care about file or directory name case, another option is to rename everything to lower case. You can do this by combining the find command with the rename command:
find <directory or file name> -depth -execdir rename 'y/A-Z/a-z/' {} \;
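If the Perl rename script isn't installed, here's a rough equivalent using only find, tr, and mv (the demo paths are made up, and there's no protection against two names that differ only in case colliding):

```shell
# Build a demo tree:
mkdir -p /tmp/lctest/SubDir
touch /tmp/lctest/SubDir/README.TXT

# Lowercase every name under /tmp/lctest. -depth makes find emit a
# directory's contents before the directory itself, so files are
# renamed before their parent directory changes name.
find /tmp/lctest -depth | while read -r path; do
    base=$(basename "$path")
    lower=$(printf '%s' "$base" | tr 'A-Z' 'a-z')
    if [ "$base" != "$lower" ]; then
        mv "$path" "$(dirname "$path")/$lower"
    fi
done

ls /tmp/lctest/subdir   # readme.txt
```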

By default, find has some optimizations that speed up searches in large directory trees. One of these assumes that a directory's hard-link count is two more than its number of subdirectories (because of the "." and ".." entries), which lets find skip examining some entries. This will cause find to miss directories on file systems that do not maintain these hard links, like CD-ROM and vfat. To prevent this from occurring, use the -noleaf option.

About Me

Omnifarious Implementer = I do just about everything. With my usual occupations this means anything an electrical engineer does not feel like doing including PCB design, electronic troubleshooting and repair, part sourcing, inventory control, enclosure machining, label design, PC support, network administration, plant maintenance, janitorial, etc. Non-occupational includes residential plumbing, heating, electrical, farming, automotive and small engine repair. There is plenty more but you get the idea.