Diary and notebook of whatever tech problems are irritating me at the moment.

20091225

Restricting SSH logins to specific groups on Ubuntu

On Ubuntu I have a user account "administrator" which is in the admin group. It has a complicated password for security. OpenSSH by default allows all users to attempt to log in remotely. Since user accounts often have weak passwords it's unsafe to allow this. I could use ssh-keygen to create keys instead but the systems I support are not in the same physical location so an ad-hoc arrangement is easier as I can't predict what I'll be connecting from. To set up this restriction all I needed to do was edit /etc/ssh/sshd_config (see the man page for the file) and add "AllowGroups admin". Then I had sshd reload the config with "/etc/init.d/ssh reload". After that only members of the admin group could log in and everyone else received a generic "Permission denied, please try again." message. The file also supports blocking or allowing by user and by host.
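
A minimal sketch of the change, assuming the default Ubuntu "admin" group (run as root; you can just as easily edit the file by hand):

echo "AllowGroups admin" >> /etc/ssh/sshd_config
/etc/init.d/ssh reload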

20090929

Basic apt key management

Ubuntu's keyserver, keyserver.ubuntu.com, has had a lot of problems lately. I was trying to add the Pidgin repository to work around bug #389322 but kept getting timeout errors from the server. One of my systems did successfully get the key so all I had to do was transfer it to the others.

These are the commands you can use to do the same. I used a terminal because I couldn't find a way to export the keys graphically with either the Synaptic package manager or Seahorse.

First list the keys:

gpg --list-keys --no-default-keyring --keyring /etc/apt/trusted.gpg
/etc/apt/trusted.gpg
--------------------
pub 1024D/437D05B5 2004-09-12
uid Ubuntu Archive Automatic Signing Key
sub 2048g/79164387 2004-09-12
...
pub 1024R/A1F196A8 2009-01-20
uid Launchpad PPA for Pidgin Developers

The "--no-default-keyring" and "--keyring" tells gpg to use only the specified apt trusted keyring. Next, find the key you want to export from the list and specify either the key ID or user ID with the following command:

gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --armor --export A1F196A8 > pidgin.gpg

The "--armor" tells gpg to output an encoded text key instead of binary one. The Pidgin package signing key ID is A1F196A8 and it is captured to a "pidgin.gpg" file in the current directory. Then you copy the key file to your other systems and add it to their apt keyrings using either Synaptic (Settings > Repositories > Authentication > Import Key File) or the apt-key command in a terminal:

sudo apt-key add pidgin.gpg
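
To verify the key was added you can list apt's trusted keys:

apt-key list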

Then you add the repository as shown on the Pidgin download page. To upgrade Pidgin you can use Synaptic (Reload, Mark All Upgrades, Apply), the text-based package manager aptitude (u,U,g), or apt-get on the command line:

sudo apt-get update
sudo apt-get upgrade

Then you just need to restart Pidgin and try connecting to your Yahoo! Messenger account.

20090815

Writing UDEV rules to get a SCSI scanner working on Ubuntu

I'm building some Ubuntu 9.04 (Jaunty Jackalope) systems for relatives and using them as a way to get rid of a lot of old hardware that has been taking up space in my office. This includes several old USB, parallel port, and SCSI scanners. SCSI scanners pretty much ruled in the days before USB as they were much faster than parallel ports. However, they were a pain to configure and required heavy (and usually short) cables which made them difficult to fit into your work area. I tested a Microtek ScanMaker E3 (MRS-600E3) and UMAX Vista S8 scanner first. They worked without problems although the former was picky about termination. Unfortunately a Hewlett-Packard ScanJet 6100C (Q2950A) didn't work at all. Checking the kernel messages indicated that it was represented by /dev/sg7 but the permissions were 0660 root:root so SANE couldn't access it. Changing the permissions solved the problem but the /dev directory is a virtual filesystem controlled by udev and the changes are lost after a reboot. I could just put a chmod command in /etc/rc.local but that is the wrong way to fix it. A search on Launchpad found bug #378989 which describes the problem with this model. I'm not sure if the fault lies with udev or HAL but creating a udev rule is a simple enough way to fix it for now. I'll describe how to create such a rule using this as an example but udev rules can do much more than just change device permissions.
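
As a quick check of the symptom you can inspect the device node directly (sg7 as reported in my kernel messages):

ls -l /dev/sg7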

First you need to be root. Either add "sudo" to the beginning of the following commands or switch to a root shell with "sudo su". Next install lsscsi which makes it easy to identify device node assignments:

apt-get install lsscsi

Then run it to get a list of SCSI devices:

lsscsi -g
[0:0:0:0] disk ATA Maxtor 33073U4 BAC5 /dev/sda /dev/sg0
[0:0:1:0] cd/dvd LITE-ON COMBO SOHC-4836V SG$4 /dev/sr0 /dev/sg1
[4:0:5:0] process HP C2520A 3644 - /dev/sg7
[5:0:0:0] disk USB 2.0 Flash Disk 0.00 /dev/sdb /dev/sg2
[6:0:0:0] disk Generic USB SD Reader 1.00 /dev/sdc /dev/sg3
[6:0:0:1] disk Generic USB CF Reader 1.01 /dev/sdd /dev/sg4
[6:0:0:2] disk Generic USB SM Reader 1.02 /dev/sde /dev/sg5
[6:0:0:3] disk Generic USB MS Reader 1.03 /dev/sdf /dev/sg6

Note that the scanner is at /dev/sg7. With this information you can then use udevadm to find out what is known about the device in the udev database and where in the hierarchy of systems it lies:

udevadm info -a -p /sys/class/scsi_generic/sg7

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

looking at device '/devices/pci0000:00/0000:00:1e.0/0000:01:01.0/host4/target4:0:5/4:0:5:0/scsi_generic/sg7':
KERNEL=="sg7"
SUBSYSTEM=="scsi_generic"
DRIVER==""

looking at parent device '/devices/pci0000:00/0000:00:1e.0/0000:01:01.0/host4/target4:0:5/4:0:5:0':
KERNELS=="4:0:5:0"
SUBSYSTEMS=="scsi"
DRIVERS==""
ATTRS{device_blocked}=="0"
ATTRS{type}=="3"
ATTRS{scsi_level}=="3"
ATTRS{vendor}=="HP "
ATTRS{model}=="C2520A "
ATTRS{rev}=="3644"
ATTRS{state}=="running"
ATTRS{timeout}=="0"
ATTRS{iocounterbits}=="32"
ATTRS{iorequest_cnt}=="0x8"
ATTRS{iodone_cnt}=="0x8"
ATTRS{ioerr_cnt}=="0x1"
ATTRS{modalias}=="scsi:t-0x03"
ATTRS{evt_media_change}=="0"
ATTRS{queue_depth}=="2"
ATTRS{queue_type}=="none"

looking at parent device '/devices/pci0000:00/0000:00:1e.0/0000:01:01.0/host4/target4:0:5':
KERNELS=="target4:0:5"
SUBSYSTEMS=="scsi"
DRIVERS==""

looking at parent device '/devices/pci0000:00/0000:00:1e.0/0000:01:01.0/host4':
KERNELS=="host4"
SUBSYSTEMS=="scsi"
DRIVERS==""

looking at parent device '/devices/pci0000:00/0000:00:1e.0/0000:01:01.0':
KERNELS=="0000:01:01.0"
SUBSYSTEMS=="pci"
DRIVERS=="aic7xxx"
ATTRS{vendor}=="0x9004"
ATTRS{device}=="0x7178"
ATTRS{subsystem_vendor}=="0x0000"
ATTRS{subsystem_device}=="0x0000"
ATTRS{class}=="0x010000"
ATTRS{irq}=="22"
ATTRS{local_cpus}=="ffffffff,ffffffff"
ATTRS{local_cpulist}=="0-63"
ATTRS{modalias}=="pci:v00009004d00007178sv00000000sd00000000bc01sc00i00"
ATTRS{enable}=="1"
ATTRS{broken_parity_status}=="0"
ATTRS{msi_bus}==""

looking at parent device '/devices/pci0000:00/0000:00:1e.0':
KERNELS=="0000:00:1e.0"
SUBSYSTEMS=="pci"
DRIVERS==""
ATTRS{vendor}=="0x8086"
ATTRS{device}=="0x244e"
ATTRS{subsystem_vendor}=="0x0000"
ATTRS{subsystem_device}=="0x0000"
ATTRS{class}=="0x060400"
ATTRS{irq}=="0"
ATTRS{local_cpus}=="ffffffff,ffffffff"
ATTRS{local_cpulist}=="0-63"
ATTRS{modalias}=="pci:v00008086d0000244Esv00000000sd00000000bc06sc04i00"
ATTRS{enable}=="1"
ATTRS{broken_parity_status}=="0"
ATTRS{msi_bus}=="1"

looking at parent device '/devices/pci0000:00':
KERNELS=="pci0000:00"
SUBSYSTEMS==""
DRIVERS==""

Note that DRIVERS=="aic7xxx" identifies the Adaptec AHA-2940 SCSI card. All of this data can be referenced by a udev rule to identify when and how to manipulate the device. That is what udev does - it runs everything through a list of rules, matching or excluding attributes as specified by a rule, then performing an operation when the conditions of a rule are met. The manual for udev is at /usr/share/doc/udev/writing_udev_rules/index.html and it gives many good examples of what you can do. In this case the scanner device needs different permissions and group ownership so that users can access it with Xsane. Most of the rules included with packages are in /lib/udev but local rules can be added to /etc/udev/rules.d and they can override existing rules. There is a file name standard for the rule files (see the README in the directory) - they always start with a number (which indicates priority) and end with ".rules". My rule file is "/etc/udev/rules.d/45-scsi-scanner.rules", owned by root and in group root with 0644 (rw-r--r--) permissions. You have to reboot to make it active. This is what it contains:

# permissions for HP ScanJet 6100C SCSI scanner
SUBSYSTEM=="scsi_generic",ATTRS{vendor}=="HP",ATTRS{model}=="C2520A", NAME="%k", SYMLINK="scanner%n", MODE="0660", GROUP="scanner"

So what does this all mean? First the SUBSYSTEM keyword says it only applies to devices in the "scsi_generic" subsystem (as per the first few lines that udevadm reported). The "==" is a comparison operator. Next the ATTRS{vendor} keyword specifies that an attribute named "vendor" in the subsystem (or any parent subsystem) has to have a value of "HP" (which the SCSI module reports via the SCSI card). Then the ATTRS{model} keyword tells udev to look in the same subsystem that matched the vendor for a model attribute that matches "C2520A". If it finds one then, since there are no other comparisons specified, the rule matches and the rest is processed. NAME is the keyword for setting the device node name (sg7 in this case) and the %k is a string substitution operator that udev will expand to the original name assigned by the kernel (again sg7). The "=" is the assignment operator. So this part of the rule sets the NAME assignment key to the original "sg7", effectively keeping the default device node "/dev/sg7" as is. The SYMLINK keyword creates symlinks to the default device node. The %n operator is expanded by udev to the kernel number of the device (the 7 in sg7). The resulting symlink will be scanner7 in this case and if the default node changes due to a SCSI device being added or removed the symlink will change to match (scanner5 for sg5, etc.). The symlink is for convenience only as a device named "scanner" is easier to figure out than "sg", especially when trying to do user support over the phone. The MODE keyword just sets the permissions in octal and GROUP assigns a specific group membership of "scanner".
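
As a side note, udev can simulate a run of its rules against a device, which is handy for checking a rule's syntax without rebooting (the exact devpath form expected varies between udev versions, so treat this as a sketch):

udevadm test /sys/class/scsi_generic/sg7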

When this rule is activated, /dev/sg7 will be root:scanner with rw-rw---- permissions and a /dev/scanner7 symlink will also be created that points to it. For the user to access the scanner they need to be in the scanner group. If the scanner group doesn't exist (not in /etc/group) then you can add it with:

addgroup --system scanner

This will dynamically create a system group somewhere in the range of 100-999. Any users added to the group need to log in again for it to take effect.
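
For example, to put an existing user into the group (hypothetical username "alice"):

adduser alice scanner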

20090802

Introducing Winesharer - so pre-alpha it doesn't even work

About a year ago I was setting up a Ubuntu system for a family of non-technical users. They like to play games and one of their favorites is Diablo II. It works well on Wine if you set it to use Glide and install a Glide wrapper. The game's copy protection is properly supported by Wine but it's not necessary as Blizzard removed the CD check in the recent patches.

There are two problems with installing Windows applications like Diablo II for multiple users. First, because of the isolation of user accounts on *nix systems, you have to login and repeat the install process for each account. Second, each installation after the first wastes disk space and with Diablo II it's several gigabytes, especially if you have a lot of mods.

It's possible to manually copy the first installation and edit the Wine registry, menu entries, and fix symlinks, but it's tedious. So I began messing around with some shell scripts to automate the process. I'm not an expert with shell scripts but I improved with time and some help from my LUG-mates. After several false starts I got a basic script functioning. That solved the first problem but not the space issue.

After messing around with some LiveCDs, I got the idea to try to share the wine directory with a union mount. First I tried FunionFS but it had several bugs that prevented it from working (like not being able to change an existing file). So I switched to Aufs. It worked but it can't be run by a user as it requires root permission to mount. To keep it easy for users I had to use pam_mount and mount it at login. I added the ability for the script to export a sample mount entry for pam_mount.conf.xml to save time.

I then got the idea to add handling for separate Wine directories for each application (like CrossOver Bottles). That brought up another issue, the desktop menu entries for Wine's utilities like winefile and winecfg. I needed to add duplicate entries with different WINEPREFIX settings and associate them with each application. I came up with a primitive solution for locating the entries and duplicating them in an alternate location as a submenu below the primary application's menu (Wine > Programs > (application) > Wine Utils).

I then got another idea - application merging. One problem with games is that there are a lot of third-party mods and other customizations for them. There are also a lot of updates. This requires editing of configuration files, extracting files from archives, and file management. These can't always be done easily with Linux tools. One problem is patches for older games are often in zip files. The contents are intended to overwrite existing files but sometimes the filenames have different case. If you extract them with a native Linux application you end up with duplicates instead of overwrites. The other problem with Linux tools is that they always give a "/" or "/home/user" oriented view when the user is expecting C:, especially when following online instructions for installing a patch or mod. The concept of application merging is simple - select some utilities that have minimal dependencies, install them to a separate Wine directory, then add it as an Aufs branch to the mount for each main application. You install each one once and share it but then it can be used within every other Wine application directory (and behaves as if it was installed in each) without wasting much storage space. Wine's registry files are text so entries for a merging application can be duplicated with just diff and patch.

Then I realized this had another major benefit for game mods - it would be possible to install multiple, even conflicting mods, and mount them as separate Aufs mount points at the same time. This is especially useful for games that don't have integrated mod management. Using Diablo II as an example, you would install and update it, share it with wineappshare.sh, then mount it via Aufs with a new read/write directory branch. Then install a mod like MedianXL (which ends up in the read/write branch). Then unmount the directory, move and share the read/write directory as a new read-only branch for MedianXL and mount it on top of the Diablo II directory with a new read/write branch. If you mount the Diablo II directory again using a different read/write branch, you can run regular Diablo II and the MedianXL version at the same time. Effectively they are in separate "bottles" but share the bulk of the install in read-only branches so there is little additional overhead.
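
For the curious, an Aufs mount of the kind described above looks roughly like this (paths hypothetical; run as root or via pam_mount):

mount -t aufs -o br=/home/alice/.wine-Diablo_II_rw=rw:/srv/wine/Diablo_II=ro none /home/alice/.wine-Diablo_II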

Of course the tricks don't stop there. You can imagine putting the shared parent branches on a compressed volume to save space, mount it on a server via NFS, and use pam_mount to mount the read/write branches on a USB drive on the client. Imagine the possibilities for a gaming cafe system.

At this point I knew Winesharer would revolutionize Windows gaming on Linux. Just as soon as I finish writing it. Then perfecting it. Then documenting some really complicated examples. And make some awesome video demonstrations. And write the book. Then push it out of the code cave and bask in the glory. At least that was the plan about a year ago before life and reality got in the way. So now I'm down to the old "release early and often" process which completely eliminates the "shock and awe" value. At least this way it may have an impact before Wine is forgotten due to lack of interest in running legacy Windows applications and everyone switching to GNU/Hurd.

Winesharer consists of three scripts:

wineappshare.sh - strips out user-specific directory links, tracks down related icons and XDG menu entries, and copies the Wine directory (bottle) to a shared location - /srv/wine by default. The hardest part was finding the menu entries (*.desktop) as Wine doesn't keep track of them. I had to do a linear search by grepping for matching WINEPREFIX values then calculating what the matching menu directory (*.menu) path would be. It also doesn't help that the "Icon=" references can specify icon file extensions with ".png", ".xpm", or not at all.

winemergeprep.sh - makes registry diffs and a list of modified files for "mergeable" utilities. The file listing excludes any unchanged/unused files like the fake DLLs that Wine adds in the System directory. It is run twice - after initial creation and configuration of the Wine directory and before the target application is installed, and again after installation and configuration of the application. The applications I was using are 7-Zip, xplorer², FontPage, IPaddress, SciTE (or EditPad Lite), @icon sushi, and Dependency Walker. Some of these are only for debugging. After prep the mergeable application directory is shared with wineappshare.sh like other applications and wineappinstall.sh performs the special handling of their branches and patches when other applications are installed for users.

wineappinstall.sh - where things got ugly. Setting up the read/write directory and creating the template pam_mount entry for Aufs mounting was easy. So was copying over the icons. Recreating symlinks between the user's profile directories (My Documents > ~/Documents, etc.) was a lot more complicated. I wanted to do it correctly by following the XDG Base Directory Specification, first by checking for a local (user) configuration, then the system-wide defaults, then look for commonly-used defaults, and finally just defaulting to ~. I got that part sort of working. The final problem was trying to merge the shared application menu entries, the "mergeable" applications' entries, and any existing entries without damage. Doing this in an orderly (and deterministic) fashion is difficult and shell scripts aren't great for text processing. That's where I left off.

The scripts all require the Wine directory name and it's assumed to be in ~. For example, specifying Diablo II's directory would just be ".wine-Diablo_II". Note that spaces should not exist in the directory name. The Winesharer scripts handle them but others, like Dan Kegel's winetricks, had trouble with them. Second, the scripts search through the registry for the username of the installer in order to change it to the target user's name later. Because of this it needs to be globally unique (in ALL Windows applications) so the scripts don't change something that is not related to the user. I had a Wine administration account named "wineadmin" which should be safe as long as there aren't any client/server wine (the drinking type) management applications that use the same keyword or value in the registry. The sharing directory is in /srv to comply with v2.3 of the Filesystem Hierarchy Standard. I was using a "_rw" suffix for the read/write branch directories.

What's next? Nothing. This was intended to be a one week feasibility study but suffered from a ridiculous amount of feature creep. Between the earlier draft scripts and command-line tests I know it's possible to do but text processing in shell scripts is tedious and I don't have the time to finish it. The scripts are ugly, broken, and can't handle all possible problems. I do like cats so I made it a point to eliminate cat abuse but I left a lot of dysfunctional grep|sed marriages since reducing them is time consuming and the extra processes give my idle Phenom cores something to do. There's a lot of arrays and case statements. I didn't follow any column limits either. Because this was a work-in-progress I also discarded the Unixy notion of minimal feedback - my scripts write entire novels to the terminal. There are a lot of comments, especially in the unfinished portions of wineappinstall.sh (which is guaranteed to not work). I didn't even begin implementing the integrated multimedia help styled after 0verkill. Instead, this pre-alpha work includes a bonus pack of bugs. I don't think the scripts can fail in such a way as to wipe out your filesystems and install Vista but I'm not guaranteeing they won't either. This mess was developed on Ubuntu 8.04 (Hardy Heron).

My goal with this project is to inspire others to implement a more robust (not to mention functional) solution incorporating these ideas. A user space union filesystem like FunionFS would be more convenient than Aufs but I don't know of any alternatives. I think that having submenus for mergeables in each application menu is ugly. A front-end utility for dynamically setting WINEPREFIX and launching them would be better. One problem I thought of but don't know how to handle is applications that require registration keys at installation instead of first-run. For some, their registration keys can be purged from the registry and they will prompt for a new key when executed again. Others won't and may lock out and refuse to run even with a valid key, requiring a full reinstall.

20090708

The fun of legacy hardware

I have an old embedded system that uses an SBC-MAX (pdf) board from Computer Dynamics. The unit was part of a vehicle monitoring system that used Windows 98 (one of many fundamental flaws in the design). It has a K6/2 333MHz CPU and 128MB of EDO DRAM. It features a whole bunch of integrated devices and a fairly broken BIOS. As an embedded system it has a PC/104 bus for which I have a few modules including a GPS. I would like to get that working but getting Ubuntu to even install on it has been a pain.

The source of the problem is an ITE IT8330G PCI-ISA bridge with IDE controller that is only supported by the ide-generic driver. This is rather obsolete and isn't loaded in most kernel images including bootable CDs. The latest Ubuntu CD that would boot is the 7.10 (Gutsy Gibbon) alternate CD.

Gutsy's install worked up to the package loading step, where it would hang after a while. I suspected it was running out of memory so I tried again. After formatting the partitions I switched to a different terminal and activated swap before continuing. This solved the problem:

free
fdisk -l /dev/hda
swapon /dev/hda5
free

When the install completed, it hung at restarting so I power-cycled it. It does this at power-off (halt) as well which may be a board limitation as the system originally used a serial port to shut off power via an "intelligent" power supply made by Dynamic Engineering. The system booted, Grub loaded the kernel and initrd, and the init scripts started, but then it stalled for a while - ending up at an initramfs prompt. Rebooting and editing the boot line in Grub to remove the "quiet" and "splash" entries resulted in more detailed messages which showed it couldn't find the drive. Basically the only driver that supports the IT8330G is ide-generic and it's not in linux-generic which Gutsy and later releases use by default.

The solution to getting Linux to boot is to add the driver but it's in linux-386. To install it, I rebooted with the CD and entered "rescue" mode. After a series of prompts it gives you the option to open a root terminal on a chrooted partition. I selected the root partition and got a bterm session. On Gutsy, it's not a friendly environment as you don't get tab completion or history so you get a lot of finger exercise. First thing was to activate swap then use apt-get to install linux-386.
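
In the chrooted terminal the sequence was roughly this (the swap partition is the one from the fdisk listing earlier):

swapon /dev/hda5
apt-get install linux-386   # failed until the repos were fixed (below)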

This is where the next problem was encountered. Gutsy is obsolete so the repos have moved to the old-releases server. Trying to fix the sources.list file with vi was impossible due to refresh and scrolling bugs in either it or bterm. I tried Midnight Commander (mc) and had to set the TERM environment variable (export TERM=linux or whatever) but it was also rather ugly. I eventually figured out a sed script to fix them faster:

cp /etc/apt/sources.list /etc/apt/sources.list_orig
sed 's/us\.archive\|security/old-releases/' /etc/apt/sources.list_orig >/etc/apt/sources.list_oldr

This just looks for "us.archive" or "security" and replaces them with "old-releases". The next problem was that Gutsy's installer had disabled the repos since it couldn't find them during installation. Another sed script fixed this:

sed 's/^#[# ]*\(deb .*$\|deb-src .*$\)/\1/' /etc/apt/sources.list_oldr > /etc/apt/sources.list

This looks for lines starting with the comment character # followed by "deb" and attempts to skip other comment lines. Then I ran "apt-get update" and "apt-get install linux-386" and was good to go - almost.

After rebooting it ended up at the initramfs prompt again. I entered "modprobe ide-generic" and it found the drive. I pressed Ctrl-D and the boot completed. I filed bug 128833 about this a while back but I know now that the driver can conflict with more-specific IDE drivers so normally it isn't loaded. To fix it I wrote a basic init script named "idegeneric" and put it in "/usr/share/initramfs-tools/scripts/local-top":

#!/bin/sh

PREREQ=""
prereqs()
{
    echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
    prereqs
    exit 0
    ;;
esac

modprobe ide-generic

I just basically copied one of the other scripts and modified it. I then created a new initrd image with the command "update-initramfs -u". It loaded on the next reboot without problems.

Next I updated Gutsy with "apt-get -y upgrade" which updated a whole lot of stuff and installed a new kernel (which included the idegeneric script automatically). On a system this old it's like watching grass grow so I worked on something else for a few hours but later found it had locked up (the Magic SysRq keys didn't work). After a power cycle I ran "dpkg --configure -a" and it completed without problems but it found some bad inodes on the next reboot and had to fix them (should have done this first before finishing). Then it was time to upgrade to something a little more modern, Ubuntu 8.04 (Hardy Heron). The command to do this is "do-release-upgrade". The first thing it does after finding a new release is change the release targets in the sources.list file to "hardy". Next it loads in the new package lists. Of course this failed as it was looking for them on the old-releases server so I had to change the entries back to "us.archive" first. Then it was happy and began the upgrade. After a reboot Hardy loaded without problems.

An upgrade to Intrepid Ibex (8.10) was next but "do-release-upgrade" couldn't find a newer version. This is because Hardy is a "Long Term Support" version and the next LTS release will be 10.04 which isn't out yet. To fix it, you have to edit "/etc/update-manager/release-upgrades" and change "Prompt=lts" to "Prompt=normal". This upgrade continued without problems.
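
A one-line way to make that edit (GNU sed, editing in place):

sed -i 's/^Prompt=lts/Prompt=normal/' /etc/update-manager/release-upgrades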

Jaunty Jackalope (9.04) is the latest but after the upgrade it failed to load the driver. A message was shown:

ide_generic: please use "probe_mask=0x3f" module parameter for probing all legacy ISA IDE ports

Looks like "probe_mask=0x3f" was needed at the end of the modprobe line in the script to make it happy. Luckily the older Intrepid kernel was still configured in Grub so I was able to boot it instead. I added the parameter to idegeneric, then ran "update-initramfs" with the "-k" parameter to specify the Jaunty kernel (referencing it in the form that "uname -r" returns). After a reboot it loaded without problems but then started segfaulting all over the place. Apparently ide-generic is broke in Jaunty or doesn't like the ID8330G so it's back to Gutsy and restart the process. I stopped at Hardy as I have better things to do. I did try Puppy Linux but both the regular and "retro" versions failed to find the drive even with the "all-generic-ide" boot option.

20090603

My Paperwork Reduction Act

I have a habit of keeping receipts of every sales transaction I make. This is good for taxes, returns, rebates, billing, resale, and just trivia. After a decade or so it really piles up.

Several years ago I built my first PC, a Pentium 133MHz system with a Micronics M54Hi motherboard and a screaming Quantum Fireball 7200RPM 4GB SCSI drive (which later failed so I guess the screaming was a bad thing). This was one awesome Doom playing system. The advantage of SCSI over IDE was the number of devices a port could support (7 instead of 2), speed, and the ability to brag that your system cost 10x what everyone else bought. Of course a 50 pound, 30in high tower case with 3x the space needed was essential.

I also bought an HP ScanJet 3C (C2520A) 600dpi 8.5x14in SCSI scanner. This alone was about $900. While the scanner did get some use it mostly just sat on the shelf collecting dust and depreciating. It was rather large and I didn't have enough space on my desk to keep it handy. Several PCs later I bought a Visioneer OneTouch 8100 USB scanner for $50 - which was a waste of money as Staples had it on sale a month later for $25. This worked well and was much smaller but I still didn't get around to catching up with the paperwork. A unique feature of the 8100 was that its power jack matched the plug on my Toshiba laptop power supply. However the power supply output was not compatible and after getting the plugs mixed up one day I had to buy another scanner. This time I got an Agfa SnapScan 1212U which has worked rather well.

Last week, while tripping over another box of paperwork I decided to finally start scanning things in. I'm using Xsane on Ubuntu 8.04 (Hardy Heron) and it's working rather well. I fit as many receipts as I can on the glass then preview them, draw a selection box around each in the Preview window, then hit the Scan button. Some of the receipts are too long for the Agfa so I installed an Adaptec SCSI card and hooked up the ScanJet. Xsane (0.995) doesn't let you switch between two scanners except at startup but you can run two instances of it simultaneously. It keeps track of preferences for each scanner separately too. Some of the documents I'm scanning have several numbered pages and Xsane has a numeric filename auto-increment function. It's not very flexible but it does the job. My only complaint is that some hotkeys, like Ctrl-V, are used for changing settings instead of the default copy/paste functions that most apps use so I have to use alternates like Shift-Insert. This makes copying and pasting filenames into the save dialog boxes annoying.

I find that scanning most things in at 300dpi grayscale and saving them as JPEGs works best. For items where color is important I use 600dpi. The ScanJet has one design flaw in that its pad is white while the SnapScan's is black. The white pad causes printing on the back side of receipts to show up when scanning the front so it takes some fiddling around with the contrast and brightness settings to suppress it.

I'm not just scanning in receipts either. I'm also scanning contracts, notepads, holiday greeting cards, photographs, and user manuals I can't get PDFs for. It takes a long time but DVDs take up a lot less space than file cabinets.

20090324

Be wary of CPU upgrades on old motherboards

Twice now I've encountered burned power connectors on older ATX motherboards. The first was a pair of Tyan S2460 boards with dual Athlon MP CPUs. The owner was having problems with them and thought one of the CPUs had failed. He gave them to me and while testing I noticed the problem. Both of these boards were designed to the original ATX specifications and only had a 20-pin power connector. The CPU regulators used +5 volts as their input supply to generate the CPU voltages. The increasing power requirements of CPUs in this era (especially dual CPUs) resulted in increased current (amperes) through the connectors. Since increased current results in increased heat dissipation (due to wire and connector resistance), and the heat in turn raises the resistance, a thermal runaway condition can occur. This greatly reduces the life of the connectors - the plastic bakes and melts and eventually the solder on the motherboard connector pins melts. All of this is bad for stability.

I deemed the boards useless and, not finding any economical Socket A replacements, decided to upgrade two other systems and give the rest of the parts to a PC recycling center. One of the CPUs, an XP 1800, replaced a 1.3GHz Thunderbird in a Gigabyte GA-7ZXE. This solved a problem with its Nvidia 6600GT AGP video card and the newer drivers requiring SSE support. Everything was good for a few months but eventually instability set in. I found the +5V pins (red wires) on the ATX connectors had burned and fused together. The 6600GT had its own power connector and I didn't think one CPU would cause an overload on the ATX connector. Burned twice you might say.

Newer versions of the ATX specification, ATX12V, added the second four-pin CPU power connector that uses +12V instead which greatly reduces the problem. In electrical physics, for the same amount of power (P, in watts), doubling the voltage (V) halves the current (I). The voltage (the amount of energy each electron is carrying) doesn't affect wire or connector heating, only the current (the quantity of electrons moving through the wire) and the conductor's resistance (R, in ohms) does. This heating effect is described by the formula P=I²R and is called "I squared R" losses. Something to keep in mind when upgrading or recycling old systems.
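
A quick worked example (numbers hypothetical): a 60W CPU load drawn from +5V requires 12A, but from +12V only 5A. Through 0.01 ohms of connector resistance that is I²R = 12² x 0.01 = 1.44W of heating at +5V versus 5² x 0.01 = 0.25W at +12V - nearly six times less heat in the same connector.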

Note that this problem doesn't just affect the connectors. I almost fried a 400W ATX power supply while testing the Tyans with two CPUs because the current requirement on the +5V rail was more than it could handle.

20090211

A Linux user's review of Windows 7 Beta

After years of being a Windows user (since 2.0) and an administrator I've learned to ignore the marketing hype surrounding new Windows versions. But I tried out the Windows 7 beta just so I could settle arguments about what it can or can't do. It's only a minor upgrade from Vista with some stuff added and removed but the fanatics have been evangelizing it like it's the start of a new era in computing technology. Linux netbooks® really get them hyper.

Table of Contents
1. Preparation
2. Installation
3. Storage
4. Encryption
5. Interface
6. Applications
7. Legacy
8. Control and Security
9. Shortcuts to Panic
10. Divide and Conquer
11. Sharing Almost Redefined
12. Epilogue

^ 1. Preparation

I can't give a fair comparison between Vista and Windows 7 so I'll be referencing XP a lot. I haven't used Vista much as I have no need for it internally and only used it at a customer site a while ago. The customer I did ECAD work for insisted on it which reduced the workstation performance by half compared to the previously installed XP, even with Aero disabled. Over RDC it was even worse with constant stalling especially compared to a Windows Server 2003 system on the same network (disabling auto-tuning didn't fix it either). I charge by the hour so it really wasn't a problem. I haven't worked for them since the Windows 7 beta was released so I can't install it on the same system. I'm not going to install Vista in VMware to compare either. I feel that a virtual machine is not a good way to benchmark a desktop OS because there are too many variables.

I downloaded the 64-bit DVD ISO (3.2GB) and installed it on VMware Player 2.5 on 64-bit Ubuntu 8.04.2 (Hardy Heron). I'm using a Phenom 9550 with 8GB PC2-6400 ECC memory and a pair of Maxtor IDE drives using software RAID (md), LVM, and LUKS/dm-crypt. Drive encryption doesn't affect performance much with modern processors as the drives are so much slower. An XP VM runs just fine on the same system. I gave the Windows 7 VM 2GB of memory and a pair of 16GB virtual volumes. The beta is the Ultimate edition which will be the most expensive of the lot.


^ 2. Installation

Windows 7 Setup is graphical like those on most Linux distributions (which they've had for many years) but the functionality isn't much better than the XP installer. The only really useful addition is the ability to use USB storage devices to load drivers instead of requiring a floppy drive. Like in XP it is single-task based so that every partition edit is immediately applied while most Linux installers queue up a series of operations and then perform them in a batch. The installation and updates took a long time with several reboots (and of course entering the 25 digit key). I don't remember the details of the one Vista installation and update process but both are faster to install than XP with its service pack and several updates that often have to be performed sequentially. The progress messages from the setup screen did seem to indicate some updates were installed but after logging in Windows Update installed some more. The earlier messages may have been referring to updates on the ISO that weren't slipstreamed.

Ubuntu's graphical installation process is a lot faster, especially considering the number of applications included. Its updating can take longer but the package manager updates everything, not just the OS. I use an internal mirror with netbooting via PXELinux that's partially automated using Kickstart so my installations are very fast and already updated. You can achieve some of the same benefits on Windows with the AIK, WinPE, WDS, and slipstreaming but you still have to deal with licensing, product activation, and updating. Windows Update only covers Microsoft products so other applications need their own update functions else you have to do them manually. Like Ubuntu and Debian most Linux distributions come with text or graphical installers or both. Text installers are not as friendly but work on systems with limited memory. Graphical installers are easier but require more memory. Many of them are integrated into Live CDs that can be used for web browsing or playing music while the OS is being installed in the background. There are a few Windows Live CDs, mostly based on BartPE. I haven't tried any of them but once I made a Windows 98 Live CD with a DriveSpace volume that had Quake installed as a feasibility study for RAM disk usage on an embedded system. Using Windows 98 on an embedded system was STUPID but I wasn't the engineer in charge of the project.

The full install was about 9GB but I suspect there is some debug code and related utils taking up space. There is a separate 200MB system partition with about 32MB for the boot loader and its language support files. An XP install with IE 7 added is about 3.5GB at most. A Ubuntu install is about 3.5GB (with a lot more applications) and comes on a CD.

After installation the first issue I encountered was that Windows 7 doesn't have a driver for the default AMD PCNet virtual network device. I had to set ethernet0.virtualDev = "e1000" in the vmx file which it recognized as an Intel PRO/1000 MT device. I started with a VMware configuration from an XP VM. If you are starting from scratch you may find EasyVMX helpful. I then extracted and installed the VMware Tools.
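
For anyone repeating this, the relevant vmx lines look something like the following (adapter number assumed to be 0):

ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"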


^ 3. Storage

Some of the features I wanted to try included Logical Volume Management (or "Dynamic Disks" as Microsoft calls it), software RAID, and drive encryption. I didn't use these technologies until I started using Linux. I knew that Windows had the capability but systems I built had hardware RAID and didn't need LVM or encryption. With Windows 7 Setup there is no way to configure these during installation. You have to create a basic disk and then convert it afterwards. What's annoying is that if you later erase the partition contents, leaving only the Dynamic Disks and RAID layout, the installer will install to them and they will be fully operational when Windows first starts. While the installer does indicate which partitions are dynamic or are using encryption it doesn't do so for RAID or anything on the system partition. Linux installers usually show the individual partitions and the child volumes for everything so it's easy to follow. I'm surprised by how primitive Windows Setup is considering how long it's been since they released XP and features like Dynamic Disks have been available since Windows 2000. It may be the result of the decision to only offer these features in specific editions of the OS (which causes problems in Vista with mixed installations). I haven't found anything definite about Dynamic Disk availability and the various editions of Windows 7 yet. Linux distributions don't have these restrictions and most graphical installers can set up RAID and LVM. For drive encryption the only installer that can handle it (that I'm aware of) is the alternate text-mode installer on Ubuntu (and Debian) but I suspect more distributions will add it. Drive encryption is really important for laptops.

I set up Dynamic Disks first as software RAID requires them. Applying the change was instantaneous. While setting up the RAID mirror with the Disk Management tool I encountered an error that stated the boot configuration of the system could not be updated and that I should use bcdedit.exe to fix it manually. I didn't bother and found that the system wouldn't boot from the mirror if the primary was removed. On Linux this is also a little tricky with the GRUB boot loader. The problem is that in a failure scenario it's hard to deterministically identify at boot which is the good drive versus the bad if the latter is partially functioning but not syncing. Obviously this requires a BIOS that can boot the next good drive if one has failed completely (or partially after a timeout).


^ 4. Encryption

BitLocker is Microsoft's drive encryption system. Another implementation is BitLocker to Go which targets USB storage devices. I tried to "turn on" BitLocker for the C: drive but found it can't use a dynamic volume. So I went back to Disk Management only to find that you can't revert a dynamic volume to basic if the OS is on it. So I had to reinstall. But then I found that the installer won't allow you to delete a dynamic partition. I had to boot Knoppix and use cfdisk to delete the type 42 SFS partitions. Then I was finally able to reinstall and activate BitLocker, then change the disks to dynamic, and then set up RAID mirroring.

On Linux these problems don't exist. Any combination of software RAID, encryption, and LVM volumes can be created and stacked in any order on almost any storage device. On my Ubuntu system I set up RAID first, then LVM, some logical volumes (usually one per user) within the LVM group, then LUKS/dm-crypt volumes on those, then format them to ext3. I split the encrypted volumes up this way because even a single bit error can cause major damage on encrypted data. With separate volumes I limit the damage if one develops an error. This means the partition headers for RAID and LVM are unencrypted but I can't imagine they contain any significant data that could compromise the system. Unintentionally I tested the reliability of this configuration over several months amid random system crashes. I eventually narrowed down the problem to RAM - I had been burned by a bad batch of Crucial BallistiX PC2-8500 2.2V modules like a lot of people. Only the final crash lost any data and I recovered most of it (the critical data was backed up elsewhere).
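
A rough sketch of that stack as shell commands (device names, volume names, and sizes hypothetical):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 40G -n alice vg0
cryptsetup luksFormat /dev/vg0/alice
cryptsetup luksOpen /dev/vg0/alice alice_crypt
mkfs.ext3 /dev/mapper/alice_crypt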

From what I've read, to set up BitLocker on Vista required creating a 1.5GB partition for the loader/authentication system which obviously can't be encrypted else it can't boot. In Windows 7 the partitioning wasn't necessary and it looks like only the 200MB system partition is used (unless part of C: is not encrypted which I can't determine). My system doesn't have a TPM so I had to use a USB key as it won't allow just a PIN. At first it insisted that a TPM was required but I found that this was due to the default setting in the Group Policy. After changing the setting it still wasn't working. I noticed that my USB flash drive wasn't enumerating properly and in Device Manager the USB Mass Storage Device was reporting "This device cannot start. (Code 10)". Turns out that I had over-optimized the vmx file and needed to add some settings back.

BitLocker was now satisfied and I selected the new option "Require a Startup key at every startup" and selected the USB drive for the key. Next I had to choose what to do with the recovery key - save it to a removable drive, save it to a file in an unencrypted location, or print it. Both key files are small and have long names, probably a serial number. The recovery key is a text file that includes a description of what it is for. The startup key is a binary file with a BEK extension and the hidden and system attributes set. The final screen had a "Run BitLocker system check" option selected by default. It restarts the system and attempts to read the startup key from the USB drive. After I clicked the Continue button it just kind of did nothing until I found that a restart prompt dialog was hidden behind the Control Panel window (minor bug).

It restarted and booted back into Windows and reported that the test failed. After several attempts I tried it without the test and it proceeded to encrypt the drive. It did take a while but the system was usable while this was occurring. You could say its ability to encrypt the volume the OS is operating from is an advantage but I'm not sure it makes up for Windows Setup not being able to do it. After it completed I rebooted with the USB drive but the key check failed. Apparently it couldn't see the drive. I searched with Google a bit and found many other reports of the same problem with Vista. Many users thought the BitLocker utility wasn't saving the key to the USB drive but I think they were confused because it was hidden. It may be an issue with the BIOS not enumerating the device or the boot application not communicating with my particular USB drive. I ended up having to enter the recovery key and Windows booted. The recovery key is 48 digits long and is entered as 8 groups of 6 digits which is easy to enter with a numeric keypad. Each group apparently includes a checksum as it validates them as you type. After the last group is entered correctly it begins loading Windows immediately instead of waiting for you to press Enter.

On Linux, LUKS/dm-crypt uses PINs exclusively which decrypt a second key stored in the partition header which is then used to decrypt the volume. LUKS has eight key slots and any key can be used to add or delete the other keys. This can be combined with other authentication mechanisms via boot scripts for TPM, USB keys (via pam_usb), and pretty much anything else. The problem with it is that while scripts exist for some of the authentication options there isn't much of a standard implementation and many distros don't include all the functionality. Both have biometric support - Windows has a control panel applet and fprint tools are in the Ubuntu repositories but I don't have a fingerprint reader and can't test either. Like with Windows a part of the OS needs to be unencrypted so it can authenticate the PIN and unlock the rest of the volume. On Linux this is the /boot subdirectory which contains the Linux kernel. This means the entire kernel loads and can provide access to any hardware device for which it contains a built-in module (driver). For both Windows and Linux the boot-time authentication system is the primary attack vector for an encrypted filesystem. If an attacker installs spyware it could copy the key elsewhere for later retrieval. The Linux kernel and start-up scripts can be configured to boot off a USB key instead and WinPE could probably act in the same capacity for Windows. But this security hole also exists with the BIOS and even with TPM it's still possible to hack into or around (although not easily).
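
Managing the LUKS key slots is a couple of commands (device name hypothetical):

cryptsetup luksAddKey /dev/vg0/alice
cryptsetup luksKillSlot /dev/vg0/alice 1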

You can "suspend" BitLocker which simply means it stores the key plaintext on the drive someplace making it pretty insecure. Obviously you could do the same with LUKS/dm-crypt but who would want to? According to the help file it's for some tasks like BIOS updates which may be due to the way TPM operates. You can also "turn off" BitLocker which decrypts the drive, a surprising option. Like the original encryption pass the system is usable while it is decrypting. Kind of neat but again I can't imagine where someone would need it (even for forensics). I went ahead and tried it anyway. The decryption pass is very slow but it worked and I could boot without entering the keys again. Just for fun I turned it back on again to see how long it would take to re-encrypt. That's when I encountered a fundamental limitation of the whole architecture. Remember that I had to set up a Dynamic Disk and RAID after encrypting because BitLocker required a basic disk structure? Well now that it was dynamic it couldn't encrypt it. So I had to reinstall all over again using Knoppix to delete the partitions, etc.

An alternative encryption option is the file-level Encrypting File System. It can unlock files automatically upon login. Starting with the 8.10 (Intrepid Ibex) release, Ubuntu has added a similar feature, Encrypted Private Directory, that also unlocks with logins. In addition, LUKS/dm-crypt volumes can be controlled by login when combined with pam_mount. With both Windows and Ubuntu the encryption keys are themselves encrypted when used by the login process and are associated with the user's password (a common weak point). The pam_mount implementation in Ubuntu (and Debian) has a few problems and one is that the association between the user's password and the decryption key is not maintained if the user changes their password. Of course with both Windows and Linux these encryption options are still vulnerable to spyware.

In summary, Windows and Linux have comparable features with LVM, software RAID, and encryption. The Windows solution has a nice GUI and adds a couple of questionable features but its implementation is very inflexible and limited by a crippled installer, architecture, and licensing. It's not that these features are "enterprise only" as they can be very useful for home systems and laptops. But they're unlikely to see usage outside of large corporate environments because of the limitations. Ubuntu and other distributions are very flexible and without licensing problems but the encryption installation and authentication functionality needs to be streamlined better. Both encryption solutions for Windows and Linux distros have multiple options for management with directory services but I don't have one set up and it's outside the scope of this review.


^ 5. Interface

The desktop looked plain enough. It wasn't using Aero since VMware Player's experimental DirectX acceleration feature isn't enough to get it working even with registry hacks. I don't find the desktop 3D effects trends useful and don't have Compiz enabled on my system since it currently doesn't work well with multi-head desktops and full-screen OpenGL applications. Those problems are expected to be solved with future releases of the DRI.

The new "superbar" taskbar is interesting. Basically you can pin a menu item to it which is similar in functionality to the Quick Launch toolbar you could enable in the taskbar on XP. Task buttons for running applications also end up there which at first seems confusing as you can't just look at it and tell what's running or not. The difference between a pinned menu item button and a running task button is that the latter will pop-up a list of active windows when it has focus, similar to the "Group similar taskbar buttons" function in XP when there are more active task buttons than will fit on the screen. If a pinned menu item also has open windows then they are just listed above it so it makes sense it a way. Right-clicking on an item allows you to unpin it and lists recent files used with the application. The button grouping function on XP slowed me down when I had many CAD documents open so I disabled it. I would have to use the superbar a lot to know if I like it better.


^ 6. Applications

There still isn't much in the way of applications included but some of the existing ones have improved. Some that were in Vista have been removed and others added in. This is not a criticism of the lack of bundled applications but when you compare it to Ubuntu and the relative installation sizes you wonder what's using up all the space. Internet Explorer 8 continues to try and catch Firefox but is also available for XP. Like web browsers in most Linux distributions it doesn't include the Java, Flash, or Silverlight/Moonlight plug-ins. Notepad is still useless as it doesn't handle LF-only newlines and still has no syntax highlighting even for batch files. I normally install EditPad Lite. The calculator has improved and is equivalent to the default gcalctool on Ubuntu. WordPad is substantially better and can open and save OOXML and ODF files but I didn't test for compatibility with Word 2007 and OpenOffice.org Writer 3. Its closest Linux equivalent is probably AbiWord but I normally use Writer. Ubuntu includes OpenOffice.org but removing it and adding AbiWord saves about 300MB of space. Even combined they are much smaller than an Office 2007 installation. Paint now has multi-level undo. It also has a lot more scalable shapes but having them in a raster graphics editor seems like a waste of code. Paint is basically useless for photo editing and is only good for people who think vector editors are hard. The closest F/OSS competitor is Tux Paint which doesn't have the editing tools but is more fun. Both WordPad and Paint now use the Office 2007-style ribbon. I'm indifferent about the ribbon. It makes some functions easier to find but in applications with a lot of functions it makes the rest harder. There's also the standard selection of basic games with a few more network-enabled versions.

Windows Media Player 12 currently doesn't support XP and is still bloated fatware compared to foobar2000, Media Player Classic, and VLC (on Ubuntu I use Rhythmbox). Considering the number and types of plug-ins available it probably qualifies as its own OS. Initially it can only rip to WMA, MP3 and WAV, but plug-ins for Ogg and other formats are available. It's not very good at solving codec problems. I tried a few videos including Elephant's Dream (DivX MPEG-4) and it reported it couldn't play the files and the problem may be the file type or codec. A "Web Help" button took me to a web page about error #C00D1199 (not very helpful). On Ubuntu when Totem doesn't have the correct codec it offers to install what it needs. With Windows, advanced users often just squash the problem by installing every possible codec. The other audio utility, Sound Recorder, can only save to WMA when in XP it could only save to WAV. In Ubuntu, gnome-sound-recorder can save to FLAC, Speex, Ogg Vorbis, and WAV.

I don't watch much TV and I've never used a PVR although I've been intending to set one up. Windows 7 Ultimate includes Windows Media Center. There are several similar PVRs for Linux, most of which are free and support Windows also.

The data backup and restore functions are more integrated than the buggy backup utility from Veritas in XP. In the properties panel of files and directories there is a "Previous Versions" panel that can be used to restore them from backups and restore points. When you connect a removable drive one of the autoplay options is to "Use this drive for backup".


^ 7. Legacy

There are some old Windows NT holdovers in XP. The Explorer Install New Font/Add Fonts dialog and ODBC Data Source Administrator are the two most obvious. I wanted to see if they were still around in Windows 7. The Install New Font option in the Fonts folder has disappeared as font installation is handled by a TTF file context menu. That left the ODBC utility. I went through the same routine as I would have on XP with an ODBC registration for an Access 2007 database. I didn't want to bother installing an Office 2007 trial so I installed the Access Database Engine (a.k.a. ACE) and then went into the Administrative Tools and ran the ODBC Data Source Administrator. I clicked the User DSN Add button but only an SQL Server driver was listed. I double-checked System DSN and, not seeing it there either, went through the Program Files directory to verify it actually installed. After some searching it turned out to be a 32-bit versus 64-bit issue. The link in the Administrative Tools was to the 64-bit version of the utility and the 32-bit ACE driver only shows up in the 32-bit version located at C:\Windows\SysWOW64\odbcad32.exe (SysWOW64 is where 64-bit Windows keeps its 32-bit binaries). So I finally was able to select the Access mdb/accdb driver, enter a name, and then select a database. Then I was greeted by an old familiar dialog. It may seem petty but I get annoyed if I have to map a drive letter to select a database on a server share.

<rant> For the record I think the Jet/ACE database is unreliable garbage and would take SQL Server any day. Unfortunately I have to defer to software engineers that insist it's easier with .NET to work with Jet/ACE than SQL Server because "SQL is hard". The Jet engine is the abandoned offspring of the SQL Server team - deprecated in favor of SQL Server Express. The Access team couldn't seem to live without a particular query function so they forked Jet and their version is known as ACE. If you install Access 2007 on Windows 7 you end up with both Jet and ACE. In spite of what Microsoft says about upsizing (data migration) from Jet/ACE to SQL Server they are not quite compatible due to different field types and date ranges. I have tried using Access 2003 with a SQL Server back-end but it couldn't handle things like tables with auto-incrementing key fields. I haven't tried the same with Access 2007 and don't plan to since I've come to the conclusion that using a fat client instead of a web interface for data entry and reports is a waste of drive space. </rant>

One annoyance was that any online help that used the HTML Help Control for chm files (like the ODBC Data Source Administrator) would open a window that was locked to always be on top. This makes working with any application full-screen impossible with the help open. I had to open the chm files manually to get around it. Most of the other applications use the newer help system which has the opposite problem - when you want to lock it on top you can't. Problems like this don't occur on Linux because the window controls are provided by the window manager, while on Windows each application has to implement its own controls. If a Windows application developer doesn't feel like adding an "always on top" control then you go without or use a third-party utility that can override the window properties; or maybe not legally, as the EULA states "You may not...work around any technical limitations in the software". Another irritation is the number of fixed-size windows with list boxes. I'm not overly fond of scrollbars, especially when I have lots of desktop space. They did add some new hotkeys for window management, which is an improvement, but some of them don't work without Aero.
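Opening a chm file manually is just a matter of handing it to the HTML Help executable from the Run dialog or a command prompt (the help file path here is made up):

rem open a chm file in its own, non-topmost window
C:\Windows\hh.exe "C:\Program Files\Some App\help.chm"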


^ 8. Control and Security

Like Ubuntu with its sudo implementation, the first user account created in Windows 7 has superuser privileges. This user can elevate their access on demand but doesn't log in directly with a superuser account (same as Ubuntu and the Linux root account). Explorer has a context menu entry on executable files that allows them to be run with administrative privileges. The same can be done with most Linux file managers by configuring an "open with" sudo entry. On both systems most processes run under various system accounts.
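As a sketch of the Linux side (assuming Gnome's Nautilus and the gksudo wrapper Ubuntu ships), a one-line Nautilus script gives a similar context-menu entry:

#!/bin/sh
# save as ~/.gnome2/nautilus-scripts/Run as administrator, then chmod +x it
# runs the selected file with root privileges via a graphical sudo prompt
gksudo "$1"

It then shows up under the right-click Scripts menu in Nautilus.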

This brings up everyone's favorite Vista feature - UAC. It has undergone some behavior modifications. If you are logged in with an account that has superuser privileges it only pops up if you try to install something or copy anything to certain directories like C:\Program Files or C:\Windows. Like in XP you get a security warning (not UAC) if you try to run an executable from any network location but not if you just copy it to the desktop or public folders (C:\Users\Public) and run it. Non-admin users get more UAC prompts. Considering typical user behavior is to use the first account with the least annoyances this looks like a set-up for a "blame the users" malware excuse that will keep Microsoft's lawyers happy. Every time I see "UAC" I think of the fictional company Union Aerospace Corporation that was responsible for the demonic invasion in Doom.

One odd feature is PC Safeguard which is apparently an updated version of an add-on named SteadyState. SteadyState is available for XP but it's not something I encountered before. It adds a data reversion function for documents and settings in a shared user account. The changed data is stored as a file on the drive that is cleared at logout. On Linux you could achieve the same thing by mounting a temporary home directory on Aufs and deleting the rw branch at logout. Some of the reviews I've read promoted it for safer web browsing (remember that integrating IE with the OS was a feature). Compared to Live CDs this seems like a hack for administrators who don't know how to lock down a desktop or set up thin clients. Promoting it as a recovery mechanism reminds me of a CAD system I used long ago that had wonderful crash recovery right up to the last command issued. It was a feature that had evolved because all it did was crash.
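A rough sketch of that Aufs trick (assuming a kernel with aufs support; the paths are made up):

# stack a throwaway tmpfs write branch over a read-only home template
mount -t tmpfs tmpfs /tmp/guest.rw
mount -t aufs -o br=/tmp/guest.rw=rw:/home/.guest.skel=ro none /home/guest
# at logout, unmounting throws away every change made during the session
umount /home/guest
umount /tmp/guest.rw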

One interesting feature is the integrated parental controls. On Linux there are graphical solutions for web access control (DansGuardian with the Ubuntu Christian Edition GUI and Mandriva's drakguard) but Windows 7 includes control of application access by user and, optionally, by the ambiguous ESRB ratings. I've encountered the need for per-user application access control with a parent who wanted to play games that he felt were too violent for his kids. I solved the problem by changing ownership and permissions on the XDG desktop entry, but a GUI would make it easier for a parent to do it. I like using parental controls, not because they work, but because they encourage kids to learn more about computers while trying to get around them.
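For reference, the permissions fix amounts to something like this (game name, path, and account are hypothetical):

# restrict the menu entry so only dad's account can see and launch it
sudo chown dad:dad /usr/share/applications/violent-game.desktop
sudo chmod 600 /usr/share/applications/violent-game.desktop

This only hides the launcher from the menus; the game binary itself is still runnable directly, which fits the learning opportunity mentioned above.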


^ 9. Shortcuts to Panic

Lately there's been a lot of concern about Linux malware and XDG desktop entries. On Windows, programs that are runnable (executable) are determined by their filename extensions (exe, com, bat, cmd), which are hidden by default in Explorer. On Linux and other Unix-like operating systems the filename doesn't matter. Files have attributes (read, write, and execute) somewhat like the Windows attributes (read-only, hidden, and system). In general a file can't be run as a program directly unless the execute attribute is set (see the short demonstration below). It is possible to get a script (similar to a Windows batch or command file) to run without being marked executable if you call the program that interprets it (sh, bash, etc.) and tell it to run the script.

When a program is sent through e-mail the executability of a Windows program survives because the filename doesn't change (although some extensions are blocked by some e-mail servers). File attributes are not included with the file, so the e-mail client that receives them saves the file according to the settings on that system; on Unix-like systems that usually means without the execute attribute set. An XDG desktop entry is a text file that is essentially the equivalent of a Windows shortcut (lnk) file. Since the XDG file is not a program it usually doesn't have the execute attribute set, which is technically correct. But the problem with both Windows lnk files and XDG files is that they can be created to run any program with any parameters, so effectively they act as programs by proxy. They can be used as a basic malware carrier by being configured to create a simple program when launched. They could also be used to exploit an existing security hole in an application by having it connect to an outside source maliciously crafted to exploit that hole.

There has been some debate about requiring file managers and desktop environments not to launch XDG desktop files without the execute attribute set. Currently Gnome and KDE don't require the attribute but the file manager Thunar in Xfce does. Unlike Windows, which always has Explorer, Linux has many desktop environments in use, so a security problem with one is not necessarily a problem for the others. The problem with Windows lnk files has existed since the early days of Windows.
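To see the execute attribute in action, here's a minimal terminal sketch (the script name is made up):

./hello.sh      # fails with "Permission denied" - no execute attribute
sh hello.sh     # runs anyway because the interpreter is what's executing
chmod +x hello.sh
./hello.sh      # now it runs directly

Back to Windows: here is an example shortcut Target command line that can create an entry in the current user's Startup folder which will show a directory listing at every login: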

C:\Windows\System32\cmd.exe /C echo cmd /K dir "%HOMEPATH%\Documents">"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\dir.bat"

This will work with Windows 95 and newer. Give it a nice helpful icon (C:\Windows\hh.exe), hook it up to the ftp client (available since Windows 95), and you've got yourself script kiddie-quality spyware. Safeguards like caution dialogs can be implemented on both Windows and Linux, but don't underestimate the ignorance of users under the influence of social engineering, especially when they've grown used to being flooded with UAC prompts in Vista.


^ 10. Divide and Conquer

I wanted to try recreating the common Linux practice of separating user files from the rest of the OS with /home as a mount point for a separate partition. Whenever I rebuild a standalone Windows system I usually spend half my time backing up and restoring user documents so they don't get overwritten when I reinstall or use a recovery disk. I almost never use the Windows "repair install" option as usually the system has malware, and replacing Windows components does nothing for any other application executables or their cached updates. With Linux you don't have to worry about wiping out /home during a clean reinstall; at most it will require fixing ownership of the existing files. In addition, the root account uses /root for its home so it's easy to keep problems isolated from the other users' home directories. With Windows it's a lot more complicated due to a history of unconstrained application file management. While the directory structure suggested where files should go, there wasn't anything preventing them from being written all over the place and overwriting system files. This led to crazy solutions like Windows File Protection, which tried to automatically fix the damage after it happened. While system damage problems have been mostly solved with the introduction of Windows Resource Protection in Vista, many badly-written and legacy applications still store user files in the same directory as the application itself - resulting in permissions problems when multiple users access the same files. Newer applications should use the appropriate special folder but some access them by an absolute path and don't follow if the target directory is relocated. By "should" I mean the "Designed for Windows" logo requirements prevent them from saving files elsewhere; in other words - security by marketing agreement (Bad Sony! You don't get a logo!) On Linux, user applications running under a user's account can write to their home directory and not much else. With the rest of the system being open source, a distribution's staff can fix applications that don't follow the FHS rules.
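On the Linux side the separation is a single mount; a minimal sketch of the relevant /etc/fstab entry (the device name is assumed):

# /etc/fstab: mount a dedicated partition as /home
/dev/sda2  /home  ext3  defaults  0  2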

With previous versions of Windows I've attempted to move user home directories and special folders to a different drive and it was a mess. While some special folders like My Documents had a Target property that could change the target directory and move the contents, others like Favorites didn't have that option and required editing the registry and manual copying. I would try to move the files using XXCOPY but dealing with access control list (ACL) problems and applications that insist on bad file management made it more hassle than it was worth. In Windows 7 all of the special folders are now relocatable but legacy applications are still a problem. In XP each user had the "My Documents" special folder while other user file locations like "My Music", etc. were just normal directories contained within it. In Windows 7 these are all special folders and are no longer nested in My Documents. This is helpful because they can be redirected to different locations instead of being dragged along with My Documents. Another annoyance in XP was that other user-related application data was stored in a different directory higher in the tree. It is now hidden within the user's home directory (same as Linux).

On Windows, NTFS supports volume mount points or "mounted drives" which act similarly to directory mounting in Linux. You can target a mounted drive at any empty directory. During installation I reserved a portion of the virtual drive for a 2GB partition. Since C:\Users always has at least the initial user account in it I couldn't mount it there. I created a directory for a new user and mounted the partition there using the Disk Management tool. I then created the user account and changed the ownership and permissions on the directory. ACLs make this complicated with inherited permissions and such. Linux directory permissions are much simpler, but an add-on for ACLs is available. Normally the user home directory and special folders are only created when the user initially logs in, so I logged out and back in as the new user. I've noticed that the initial "Preparing your desktop" operation takes a very long time with nothing obvious to account for it. The new home directory and special folders are created, and probably the initial registry for the account, but I couldn't tell what else it did. It took at least twice as long as my XP VM, and I did make another account without the mounted drive just to make sure it wasn't interfering. On Ubuntu adding a user takes only a few seconds, including storing the password, setting group memberships, and copying /etc/skel to the new user's home directory. There is an additional few-second delay at the initial login with Gnome while it creates its configuration directories, but that's it. After the long pause I found the mounted drive directory under C:\Users wasn't used. Instead it created a new directory named <username>.<domainname> as it would if the system had joined a domain. To be able to use the mounted drive I would have to change all of the special folders to use the other user directory, or give it a drive letter and set them to that. This wouldn't work for the hidden application data directory either.
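The Disk Management GUI is one way to graft the partition in; the same mount can also be done from a command prompt with mountvol (the volume GUID below is deliberately elided):

rem list available volume GUIDs and current mount points
mountvol
rem mount the volume on an empty NTFS directory
mountvol C:\Users\newuser \\?\Volume{...}\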

Special folders can also be redirected to a share on a server. This isn't useful on a stand-alone system but is common in larger business network environments. It doesn't help a lot of legacy applications for the reasons discussed above, but it's convenient when users need to access the same data from multiple systems and allows for simpler backups. I knew I could redirect the special folders to a Samba share with the standard Windows SMB protocol but I wanted to try to connect them to an NFS export. Adding NFS support to Windows 7 was fairly easy but I didn't bother with the name mapping for my simple test. The add-on does have a lot of options for setting default permissions, character encoding, buffer size, etc. NFS is not a browsable protocol so you can't just click on the Network icon in Explorer and find servers. NFSv4 adds a pseudofs option that can be used to create a read-only share that lists all available shares, which can be used to connect to the actual shares. If you know the exact address of the pseudofs export you can use the "\\<servername>\<sharename>" syntax in Explorer's address bar to see them, but I wasn't able to connect to the actual shares this way. I also could not see any way to map NFS shares to drive letters in Explorer. With SMB shares you right-click on the server's share list and select "Map Network Drive", but since NFS is not browsable at that level you have nothing to click. I was able to use the Windows "mount" command to map drive letters using either the Explorer \\<servername>\<sharename> or Unix <servername>:/<exportname> syntax, but there's no option to reconnect at login. There are ways to automate NFS drive letter mapping but it's a missing feature regardless. Being able to map NFS exports to a drive letter is enough to prove that special folders can be mapped to an NFS share. There are some issues with extra crud being left behind when some files are copied to the NFS export but I didn't test for it.
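For the record, the mappings that worked looked like this (server and export names are hypothetical):

rem map an NFS export to a drive letter, Windows-style or Unix-style
mount \\fileserver\home Z:
mount fileserver:/home Z:
rem remove the mapping
umount Z: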


^ 11. Sharing Almost Redefined

During the installation, if Windows Setup detects you are connected to a network, you are prompted as to what kind of network the system is connected to - home, work, or public. Selecting home takes you to the "Homegroup" configuration. If you click the "Tell me more about homegroups" link the help pops up a blank window. Actually there's not much more to homegroups than that. It looks like there are basically five parts: an extension of network profiles, new special folders for file management and sharing (Libraries and Homegroup), SMB file sharing, SSDP, and a new generic file sharing account (AlphaUser$). To test this I set up another Windows 7 VM and tested between it and the first.

The network profile extensions are easy to understand. Take multiple profile support in NetworkManager and extend it to shared directory permissions, control of some network services like Samba, and firewall profiles and you have it. There are three profiles for home, work, and public with increasing levels of lock-down. In the firewall they correspond to the profiles private, domain (i.e. Active Directory), and public. The Homegroups are only active with the home profile. A relatively obvious and trivial extension of network settings control so it's probably only worth a dozen patents at the USPTO.

With XP the starting point for navigating user files in Explorer was "My Documents". In Windows 7 the new special folder Libraries is the starting point but it only exists in Explorer and not in the underlying directory structure. The next level of special folders under Libraries are the media category libraries Documents, Music, Pictures, and Video (more can be created). Their contents are the aggregate of the files contained in their sub-folders, which is very confusing if there are duplicate file names. These also are not represented in the underlying directory structure but do exist as files. For example, the library Documents file is C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Libraries\Documents.library-ms. You can delete them from the navigation panel, which deletes the library file and hides the sub-folders in the Libraries view, but this has no effect on the actual sub-folder directories or contents (some really bizarre architecture here). The default Libraries can be restored from the context menu of Libraries, which causes any sub-folders to reappear. Under each media category library are two more special folders, a read-only private one and a read/write public one; "My Music" and "Public Music" for example (the missing RIAA's Music is conspicuous.) In the properties panel of a media category library you can set which of the folders within it is the default save location. On Linux this would be similar to changing the settings in the local XDG config. There is also an option to optimize the library for various kinds of content but I'm not sure what it is for (indexing? compression?)
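The XDG config mentioned above lives in ~/.config/user-dirs.dirs; a sketch of redirecting the Documents folder (the target path is made up):

# ~/.config/user-dirs.dirs: point the Documents special folder elsewhere
XDG_DOCUMENTS_DIR="$HOME/work/documents"

Running "xdg-user-dir DOCUMENTS" afterwards prints the active setting.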

Homegroups are a network representation of the Libraries. It is simply SMB file sharing with a different view and authentication mechanism. It doesn't work with anything except another Windows 7 system, which will annoy a lot of users (see comments at the bottom of the linked page). This could be another rusty bent nail in the coffin of XP but normal SMB file sharing is still available. In Explorer's navigation panel there is a Homegroup location. Under that are objects for users on various systems instead of just systems as you would see when browsing the SMB network. Under each is the same structure as the Libraries. Users can select which libraries to share with the Homegroup (adding them to the Homegroup group) or with specific users, allowing read-only or read/write access. If a file is dropped on a media category object it goes into whichever folder within it has read/write access, presumably with some rules for choosing between multiple valid targets. From the network view each user has a private and a public directory, making it easy to keep files organized. Well, it would be if it had been implemented correctly, because each user's public directory on the same system is actually the same shared directory, C:\Users\Public (another brainless design decision). If you check the sharing properties of the Public directory you find that sometimes only the current logged-in user (along with Administrators and maybe Homegroup) is shown as having share access when in fact all local users have access; the dialog randomly hides them. Security through confused obscurity? Since the Public directory is shared by all users, everyone has a sharing veto. I can't imagine this scaling well.

The new authentication mechanism includes SSDP for resource discovery, the Homegroup group, and what appears to be a new generic sharing account AlphaUser$. In a way it acts as a new "network guest" account. I think the idea behind AlphaUser$ is to use it as a proxy for sharing when a directory service is not available to authenticate users between connected systems. I think that when a file transfer occurs it's done under the AlphaUser account and then the ownership is changed on arrival. A password is needed to join a Homegroup and one is randomly generated when they are set up. SSDP has been used before in conjunction with UPnP. I accidentally discovered that the auto-complete function in Explorer's address bar showed some of my previous Homegroup locations in an unfriendly path referencing SSDP:

\\Provider\Microsoft.Networking.SSDP//uuid:27088a07-14dd-438f-8433-bc5933be615c

There are other problems with the current system. One user couldn't access the shared folders in the Homegroup on the same system but could access everything on the other system. I also found that if a user disables sharing on a media category library it sometimes causes the sub-folder to show the wrong sharing icons. This makes it difficult to understand which permissions are applying where. The Library special folder seems to just add unnecessary depth to the tree. The name for it is odd but I can't think of a better one. Perhaps the intent is that completed work will be stored there and the desktop used for files that are currently being edited. I know a lot of users that work that way. I've never liked Explorer; when it was introduced in Windows 95 I still strongly preferred File Manager with two vertical panes. I tolerate Nautilus on Ubuntu, probably because I have two monitors and its "Places" side panel is a lot less cluttered than Explorer, especially with the new Libraries and Homegroup objects. On Windows I always use xplorer².

A default install of Ubuntu doesn't include SMB file sharing although it's easily enabled by adding Samba. Gnome and KDE (on Kubuntu) do have integrated SMB network browsing that doesn't need Samba. Sharing between users on the same system is easy; maybe too easy. One policy I really don't like is that every user's home directory is readable by everyone else on the system by default. This is done to ease sharing but is unnecessary. Parents don't necessarily want to share everything with their kids, like "marital photos". Each user has their own group (USERGROUPS=y in /etc/adduser.conf) and their home directories are set to be readable by members of their group. I normally remove read access for others (and for newly created users with DIR_MODE=0750 in adduser.conf) because if one user needs read access to another user's home directory that user can simply be added to the other's group. This provides a lot more control than the default settings. For ad-hoc sharing I have a "local" directory that all local users can write to and a "public" directory that is the same but shared via Samba and NFS. To get around authentication problems I normally set Samba to treat unidentified users on the network as guests ("map to guest = bad user" in /etc/samba/smb.conf). Using a specific account with a password as a proxy (like Homegroups do) would make sharing between systems easier and more secure.
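A sketch of that lock-down, with made-up user names:

# stop others from reading alice's existing home directory
sudo chmod 0750 /home/alice
# make 0750 the default for the homes of newly created users
sudo sed -i 's/^DIR_MODE=.*/DIR_MODE=0750/' /etc/adduser.conf
# give bob read access to alice's files by adding him to her group
sudo adduser bob alice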


^ 12. Epilogue

Windows 7 beta seemed relatively stable but I wasn't really installing much or putting it under continuous use. I had a lot more problems with the initial release of Vista. I did manage to crash Explorer a few times without trying, but this is a "beta", which is the equivalent of an "alpha" for most other software projects. This is the reason why many system administrators wait until the first Windows service pack before mass deployments. Since they're skipping a second beta you might want to wait until SP2. Of course version numbering and service pack availability are taking on a marketing influence so it's getting more difficult to know when it's safe to deploy.

Some readers may get the impression that I'm against commercial software or closed-source. I don't have a problem using either actually. I've used the Linux versions of Perforce and TestTrack Pro, Nvidia's driver for my graphics card, and games like Quake 4. Given equal functionality I'll take an open-source product over a closed one as it gives me the option to maintain it if the developers abandon it, which reduces risk. I don't use Linux because it's free. I could easily use BSD or Solaris. I use Linux because I won't use Windows unless somebody's paying me to. Considering how long I've used Microsoft's products that's saying quite a lot.

20090209

Practical password security

A recent security breach at phpbb.com resulted in an intruder obtaining and publishing thousands of member names and passwords. A design flaw, a.k.a. bug, in a mailing list application was responsible. An analysis of the passwords revealed some interesting facts about the types of passwords people use when creating accounts at web sites. The most popular ones were "123456" and "password". A similar pattern was found in passwords exposed by a fake MySpace site in 2006. While intrusions at non-critical sites like these aren't likely to ruin your life, it's a lot more serious if someone manages to get access to your account at your bank or credit union web site. Let's look at the types of password problems I've seen and what you can do to make yours safer without a lot of hassle.

First, security is like a chain - it's only as strong as the weakest link. Even with a secure computer connecting to a secure web site over a secure network connection, a weak password pretty much defeats the security. There are three ways intruders can get your password without your direct assistance. By "direct assistance" I mean you telling them (in other words, lying still works) or writing it on a sticky note and pasting it on your computer where everyone in the room or those looking through a window can see it. The remote methods include installing spyware on your computer or on the web server you are connecting to, guessing your password based on what they know about you (pet names, phone numbers, favorite foods, favorite cars, etc.), or using another computer to try every possible password (called a brute force attack). The last one is often used with a method known as a dictionary attack, which uses dictionaries of known words to check against. This works faster than pure brute force because most passwords are words instead of random characters, since words are easier to remember. There are dictionaries for every language. There are also dictionaries for special categories like scientific fields, entertainment, or industries. For example, a biology dictionary may contain scientific names of plants, animals, and fungi. An attacker could include it if they knew you were a biologist, in case you used the name of a bacterium for part of your password.

The security strength of a password is directly related to its unpredictability (from the attacker's point of view). If the password is a word in the English language then it's more predictable than random characters. If it's a word relating to you, then the more the attacker knows about you the more predictable it becomes. A long password is usually less predictable than a short one. A password made up of several related words like "big red truck" is weaker than a password made up of several unrelated words like "plastic quickly artichoke". A password using more types of characters (lower case and upper case letters, numbers, and symbols) is stronger than one that only uses lower case letters. Intentional spelling errors can make a password stronger, but common errors or alternate spellings (including English dialects, Engrish, and Leetspeak) are more predictable and probably in password dictionaries already.

Another problem I find with most users is that they use the same password with every account on every site. If you do this and someone figures out the password for one of your accounts then they have access to all of your accounts.

Another password weak point is the password recovery function at most web sites. These allow you to reset your password if you forget it. They usually require you to enter your account user name or email address and then send a reset link to the email account that is registered with the account. You click the link and are given either a temporary password or the chance to enter a new one. If your email account has a weak password and an intruder gets in, then the password reset function at every site you have an account on can be used by the intruder to set new passwords and get access.

Obviously the best security is to use different big random passwords at each site but these are very difficult to remember. The solution is to use a password manager. This is a program that keeps track of passwords. You could store all of your passwords in a text file or word processor document but if an intruder gets access to your computer they could easily open and read them. Password managers store passwords in an encrypted file that is itself protected by a master password. Encryption is a process of scrambling something so that it is unreadable without the correct key. When you try to open the file with the password manager it asks you for the master password (i.e. the encryption key) and attempts to unscramble the file. If the file is unreadable then it knows you didn't enter the correct key. If an attacker gets the file but doesn't have the key all they will find is scrambled gibberish. There are many encryption methods with varying levels of speed and complexity but they still rely on you to create a strong master password (encryption key) to secure the contents. With the password manager you only need to remember the master password for the password file - the rest are available once it is decrypted and opened. You can then copy the passwords for your other accounts from the password manager and paste them into your web browser or other programs as needed. While there are many different password managers available for all kinds of computers the one I recommend is KeePassX. It's free and available for Windows, Mac OS X, and Linux (Ubuntu, Mandriva, etc.). I normally install it on any computer I set up.

A password manager isn't a perfect solution. If you use it on a computer that has already been infiltrated and has spyware on it, the intruder can get your password manager's master password by reading what you type when you enter it. But outside of that, it's rather secure with a good strong password. In fact, with a strong master password, you can make the encrypted password file publicly available and not worry about anyone being able to read it because only you have the key. You can put the file on a public Internet site or, if you have a web-based email account like Google or Yahoo!, you can email it to yourself so it's stored in your email inbox. That way you can get at your passwords from any computer on the Internet with the password manager program installed - just make sure they are secure first before entering your master password or even logging into your email account to get the file. Of course, make backups of your password file by emailing it to yourself or saving copies of it to a USB flash drive.

For creating secure account passwords most password managers have a password generator. It can create random passwords of varying lengths using numbers, letters, and symbols. The more variety the better. For example, if you have a password that consists of a single lower-case letter, there are 26 possible passwords that an attacker will have to try to break in. They may get lucky on the first try and find it's an "a" or they may have to try them all and find it's a "z". With two characters there are 26x26 or 676 possibilities. Add another character and it's 26x26x26 or 17,576. If you include upper-case letters you now have 52, 2,704, and 140,608 possibilities for one, two, and three character passwords. Add numbers and you get 62, 3,844, and 238,328 possibilities. Using every printable character on a standard US keyboard (including the space) you end up with 95, 9,025, and 857,375. Way too many to try by hand, but remember that most attackers on the Internet are using a computer to try each combination and can make millions of attempts every second. To be relatively safe you should have at least 12 characters. This makes it unlikely for an attacker to determine your password in any reasonable amount of time (many years) even if they are using thousands of computers simultaneously. The more complicated your password is the more likely they are going to give up before breaking it and move on to another target. In general, both the length and number of different characters affect the strength of the password, so if you use fewer character types then use a longer password to make up for it.
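If you want to see how quickly these numbers blow up, the arithmetic is easy to reproduce in a terminal with bc (the guess rate is an assumption for illustration):

# search space for 12 characters drawn from upper case, lower case, and digits
echo '62^12' | bc
# prints 3226266762397899821056
# years to exhaust it at a billion guesses per second - about 102,000
echo '62^12 / (10^9 * 60*60*24*365)' | bc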

While the password generator can make complicate passwords and the password manager can keep track of them, you still need a strong master password. Ideally it should be random and long but it's really hard to remember something like that. A technique I've used with employee accounts on business networks is to have them create a short secret password consisting of words and a few extra numbers and symbols that they can remember. Then add several random characters before and after it for the full password. Then write down the random portions on a piece of paper with a blank line between them signifying the secret part and store it someplace out of sight. When entering the password they use the paper for the beginning portion, followed by the secret part they didn't write down, and then the rest from the paper. This is practical because you can generally trust coworkers (or household members) more than anyone on the Internet. Even if someone finds the paper they don't have the full password while an attacker on the Internet doesn't have any of it. If you have trouble coming up with random words for a password you can try a technique called Diceware that uses dice and a word list.

One problem you will encounter is that web sites have varying rules about passwords. Some require between six and twelve characters, some allow much longer. Some only allow letters and numbers. Some only allow certain symbols while others allow almost anything. Unfortunately many web sites don't specify their rules entirely so you may have to make several attempts to find a password they will accept. You may find that a web site will accept a 16 character password when you set up a new account but actually only allow 14 characters and chop off the last two without telling you. When you enter a new password into a web site, test for this problem by logging out of the web site and back in again with the new password. If it rejects it, delete one character from the end of the password and try logging in again. If you get down to the minimum number of characters the site will allow and you still can't log in, use the site's password reset/recovery function to get access again. If deleting some characters from the end allows you to log in, make sure to note how many characters it accepted in your password manager so you don't end up fighting the site again the next time you change your password. Another problem you may encounter is a web site that accepts a new password with symbols in it but filters them out, again without telling you. If you can't log in with a new password and making it shorter doesn't seem to fix the problem, try deleting any symbols in the password, leaving only the numbers and letters. Again, if you can't get access then use the site's password reset/recovery function.

Sometimes you will find a site that will accept a strong password but then does a bad job of keeping it secret. They do stupid things like confirming your password by emailing it to you unencrypted, which means that at every point in the Internet the email passes through someone could read your password. Some mailing lists also email a password reminder to you every month, again exposing it to the whole world. There isn't much you can do about these security lapses except not use the sites and complain to the administrators.

Using complicated passwords is only part of good password security. The other is changing them regularly. If an attacker can brute-force your weak password in a month and you change it every two months then you have a security problem; changing it before they break in makes them start over again. While using a stronger password forces an attacker to take longer to break in, you should change your passwords regularly anyway to limit damage in case your computer, or one you are connecting to, has an intruder you're not aware of. How often depends on the strength of the passwords used with each account and how much damage could be caused if someone breaks into them.

While strong passwords reduce the chances of someone finding your account passwords without your help, don't overlook the age-old method of social engineering (lying). Normally if you set up a new account on a web site it will email you a confirmation link to verify your email address. You will also get an email from the site if you use their password reset/recovery option. But later, if you get an email from the site requesting that you click a link to respond to a problem with your account, especially if it's a bank or store account, be very suspicious. These are often phishing attempts. Scammers send emails like these with links to a fake web site for you to log into in order to capture your passwords. They often refer to popular companies and are sent blindly to millions of email accounts, which is why you sometimes find an "account security notice" in your inbox for a bank you don't have an account with. Most financial institutions have a policy of never contacting you by email regarding security problems and will call you instead. You can also sidestep the phishing by going to the web site directly as you normally would instead of clicking a link in the email. If there is a problem then you should be alerted when you log in to the web site. Most web site operators will never ask you for your password as they usually have other methods (like their internal computers) for accessing your data.

While strong passwords can help keep intruders out of your accounts, note that there is no perfect security. What security systems and passwords attempt to do is add a limited amount of annoyance for legitimate users and multiply that annoyance many, many times for attackers. While some security systems and software are better than others, often the limiting factor is your willpower versus that of potential attackers.

About Me

Omnifarious Implementer = I do just about everything. With my usual occupations this means anything an electrical engineer does not feel like doing including PCB design, electronic troubleshooting and repair, part sourcing, inventory control, enclosure machining, label design, PC support, network administration, plant maintenance, janitorial, etc. Non-occupational includes residential plumbing, heating, electrical, farming, automotive and small engine repair. There is plenty more but you get the idea.