Stubborn Tech Problem Solving - Diary and notebook of whatever tech problems are irritating me at the moment.<br />
<br />
<b>Pyrethrin and Permethrin Repellent Clothing Treatments</b> (2020-06-09)<br />
<br />
I used up my bottle of clothing repellent treatment and started searching for something cheaper, including repurposing lawn or animal treatments. Almost all of them use <a href="https://en.wikipedia.org/wiki/Pyrethrin">pyrethrin</a> (natural) or <a href="https://en.wikipedia.org/wiki/Permethrin">permethrin</a> (synthetic) insecticides. The natural form can trigger allergies while the synthetic is more common but possibly less healthy depending on which studies you look at. They're often combined with synergists like <a href="https://en.wikipedia.org/wiki/Piperonyl_butoxide">piperonyl butoxide</a> (PBO) which reduce insect resistance to them.<br />
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<span data-offset-key="fc80r-0-0"><span data-text="true"> </span></span></div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
There are concerns over toxicity to humans but both (and PBO) are found in medical treatments for lice and scabies, typically at concentrations in the 0.5-4% range. Animal treatments (for almost any mammal except cats, whose livers can't neutralize it) are available in concentrations up to 10%. Lawn and garden spray concentrations vary widely. Clothing treatments are typically 0.5% permethrin, which is probably a regulatory limit, and don't contain PBO (Ben's, Coleman, Repel, Sawyer).<br />
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<span data-offset-key="fc80r-0-0"><span data-text="true"> </span></span></div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
The main problem isn't these chemicals - it's the unknown "inert" ingredients. Almost everything I've found uses "petroleum distillates", which is a non-specific category of chemicals. On safety data sheets their names and/or quantities are usually redacted as trade secrets. According to an <a href="https://www.consumerreports.org/insect-repellent/how-to-use-permethrin-on-clothing-safely/">article by Consumer Reports</a> some clothing treatments have additives to enhance adhesiveness, but I suspect that's only what the manufacturers told them without providing specific chemical data. Most insecticides are formulated to have long surface adhesion regardless of application target.<br />
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<span data-offset-key="fc80r-0-0"><span data-text="true"> </span></span></div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
While trying to find and compare water-based and petroleum-based mixes I discovered <a href="https://www.sciencedirect.com/topics/chemistry/permethrin">an excerpt from a 2013 research book</a> titled <i>Biotextiles as Medical Implants</i>. In the search summary, the chapter <i>Surface modification of biotextiles for medical applications</i> by D. Tessier discusses the durability of permethrin treatments. According to Tessier, water-based treatments last for two weeks or two washings and oil-based for six weeks or six washings. Treatments last much longer if kept in sealed light-proof bags. I haven't found any specific info on the durability of pyrethrin but permethrin is considered to be longer-lasting due to better water and sunlight resistance.<br />
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<span data-offset-key="fc80r-0-2"><span data-text="true"> </span></span></div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<a href="http://npic.orst.edu/factsheets/pbogen.pdf">According to a fact sheet</a> (PDF) from the National Pesticide Information Center at Oregon State University, PBO breaks down in 3-8 hours in sunlight, which may explain why it's not found in clothing-specific treatments.<br />
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<span data-offset-key="fc80r-0-2"><span data-text="true"><br /></span></span></div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
Factory-treated clothing reportedly has much longer-lasting effectiveness than spray treatments. My guess is they use higher concentrations and perhaps enhance the treatment bonding with elevated temperatures and/or pressures.<br />
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<span data-offset-key="fc80r-0-2"><span data-text="true"><br /></span></span></div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
It seems like the best treatment would be a water-based permethrin solution but I haven't found any without PBO, which only provides a short-term benefit. The closest to ideal is MGK <a href="https://www.mgk.com/product/sector-misting-concentrate/">Sector Misting Concentrate</a> which is 10% permethrin, 10% PBO, and less than 10% petroleum distillates. The 1:1 permethrin/PBO ratio is unusual since most mixes are 1:5 or 1:10, and it would still have to be diluted. The presence of unknown petroleum distillates is annoying in a "water-based" product but it is much less than in regular sprays. I could avoid it with other solutions but would have to accept a much larger amount of PBO. The only other option would be to buy concentrated permethrin in bulk off Alibaba and mix it myself.<br />
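<br />
As a sanity check on the dilution (my arithmetic, using the standard C1 × V1 = C2 × V2 dilution formula and ignoring the PBO and distillate fractions):<br />
<br />
<code>C1 × V1 = C2 × V2<br />
10% × V1 = 0.5% × 1000 ml<br />
V1 = 50 ml</code><br />
<br />
So a liter of 0.5% clothing spray would take about 50 ml of the concentrate topped up with water. By the same math the 10% PBO fraction ends up at 0.5% in the final mix.<br />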
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
</div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
</div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
</div>
<div class="_1mf _1mj" data-offset-key="fc80r-0-0">
<br />Not surprisingly it comes down to cost, labor, and safety. I'll have to study this a bit more.</div>
<br />
<b>Windows Update vs. apt-get</b> (2015-02-24)<br />
<br />
Windows 8.1 on my nephew's HP 2000-2c29wm had another malware meltdown and I wasn't able to save it. To get it back to the operational level it was at when I last fixed it, I had to do a factory reset (from the recovery partition) to Windows 8, install the free Windows 8.1 upgrade from the Windows Store, then reinstall everything else.<br />
<br />
I start by dumping all of his Minecraft mods, pictures, music, and other files to another system (I hate single-partition configurations). The Windows factory reset completes without a problem (maybe 30 minutes). Norton is removed without a fight. Next comes the first round of updates. I'm on an <a href="http://www.exede.com/">Exede</a> satellite connection with 10Mbps bandwidth but high lag. There's also a 10GB monthly cap but 12-5am EST is the "Late Night Free Zone" during which data usage is ignored. I start Windows Update just after midnight. There are 112 updates with over 1GB to download. After WU stays at 0% for an hour I have to check the directory size of c:\windows\SoftwareDistribution\Download just to make sure it's doing something. It finally finishes downloading around 2:30am and begins installing the updates. At 3:30am I go to bed because I'm tired of waiting for it to finish (90+ out of 112 updates installed). I estimate it finished around 4:30am. When I check it in the morning it's ready to reboot. It still needs to finish installing the updates during the reboot cycle (about another 25 minutes).<br />
<br />
But it's not done yet. Another round of WU checks indicates there's another 500MB of updates to install so I head to the local library to use their faster connection. Another two hours to finish that round (and this excludes the 20+ "recommended updates").<br />
<br />
Why did I go through this? Because the free Win8.1 upgrade requires it. The upgrade itself is another 3GB+ download. The library WiFi is free but it has a EULA with a 2-hour timeout (I noted they're using <a href="http://www.ipcop.org/">IPCop</a>). About an hour into the Win8.1 download it timed out and I had to re-accept the EULA with IE. The Windows Store handled this temporary problem by restarting the download from the beginning. An hour and a half later, just before the library closed, it finished the download and install and prompted to restart the system. I shut it down (not suspend) and head home to finish.<br />
<br />
I power it up and it loads the desktop. Strange, I was expecting the installation to finish during boot. Thinking in terms of "brain-dead design" I correctly surmised that <i>restarting </i>the system would finish the update while shut-down and power-up would not. Another 20 minutes waiting for that to finish.<br />
<br />
From my past experience the Win8.1 upgrade required multiple attempts to install intermixed with fixes by the Windows Update Troubleshooter. Every time it failed to upgrade it spent another 30 minutes reverting. While those problems didn't occur this time, when it finally loaded the Win8.1 desktop I was greeted with a Windows
Installer crash dialog. Luckily everything seems to be working in spite
of that.<br />
<br />
Time for another round of updates. Another 700MB+ for 30 or so updates, most of which were .NET Framework 4.5 "optional" security updates. Another hour and a half to install those including a reboot. Then install a few more updates that didn't show up previously followed by another reboot.<br />
<br />
I'm going to make a backup image before I continue so I don't have to go through all of this again. I still have to install Firefox, Chrome, VLC, Steam, and anti-malware tools (not that they're reliable).<br />
<br />
On this same system <a href="http://xubuntu.org/">Xubuntu</a> 14.04 took about two hours to install over satellite using the <a href="https://help.ubuntu.com/community/Installation/MinimalCD">Ubuntu Minimal ISO</a> + xubuntu-desktop, libreoffice, openclipart, wesnoth, and a few other games and multimedia tools. A couple of gigabytes to download and install, and it was completely up-to-date. The only things I had to do afterwards were create user accounts and install fglrx and <a href="http://cdemu.sourceforge.net/">CDEmu</a> from the repos.<br />
<br />
I don't have a clue as to why updating Windows was so slow. There weren't any anti-malware tools operating. I removed Norton and Windows Defender was disabled. It's not using an encrypted file system. System Restore probably adds some overhead but that doesn't seem likely to account for all of the problem. The HP crapware load is relatively minor as compared to other systems I've seen - Cyberlink multimedia tools, some free games, and a bunch of links to various services. <br />
<br />
There is one thing that surprised me about all of this, other than the Windows 8.1 upgrade not failing this time. The factory reset warned that all files would be erased so I expected to lose the Xubuntu partition. Instead, the reset only wiped the Windows partition. <strike>It still lost the Xubuntu boot loader configuration but that's relatively minor to fix.</strike> Update: I was wrong about the boot loader. It survived the reset also.<br />
<br />
Update: A <a href="http://www.reddit.com/r/Ubuntu/comments/2wz153/stubborn_tech_problem_solving_windows_update_vs/">Reddit discussion about this article</a> yielded a few ways to improve the Windows upgrade/update process.<br />
<br />
A little-known Microsoft <a href="http://windows.microsoft.com/en-us/windows-8/why-can-t-find-update-store">how-to article</a> indicates that only KB 2871389 and KB 2917499 updates are required before the Win8.1 upgrade. It's annoying that the upgrade doesn't handle this itself.<br />
<br />
There's also a third-party tool, <a href="http://download.wsusoffline.net/">WSUS Offline Update</a>, that can download and cache the updates locally. While it's not much help with single-system updates it could be useful if you have an unreliable Internet connection or multiple systems to update and don't want to do <a href="http://en.wikipedia.org/wiki/Slipstream_%28computing%29">slipstreaming</a>.<br />
<br />
<b>Steam client missing images on Linux</b> (2014-11-27)<br />
<br />
If your Steam client is missing images (broken image icon) in the store or other pages then try this fix. First exit the Steam client. Then delete the HTML cache directory at ~/.steam/config/htmlcache. Then start the client and browse to the problem page again. The missing images should be reloaded from the server and display correctly.<br />
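<br />
A one-liner from a terminal does the same thing (quit Steam first; the cache is rebuilt on the next start):<br />
<br />
<code>rm -rf ~/.steam/config/htmlcache</code><br />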
<br />
The tilde "~" is terminal shorthand for your home directory but you can use any file manager instead of a terminal. A file or directory that starts with a period like ".steam" makes it a hidden directory so you'll have to configure your file manager to show them (Ctrl-H toggles the setting in most file managers).jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com0tag:blogger.com,1999:blog-8230692678867105904.post-25248136540018701062014-10-07T18:24:00.003-04:002014-10-07T18:32:20.984-04:00Workarounds for some game bit-rot on [A-Z]*Ubuntu 14.04After upgrading to Xubuntu 14.04 x86_64 several games failed to work including Torchlight (from Humble Indie Bundle 6), Doom3, Quake 4, and Aquaria. I had expected this because <a href="http://en.wikipedia.org/wiki/Software_rot">bit-rot</a> is a common problem with closed-source games when upgrading.<br />
<br />
Aquaria and Doom3 segfaulted. Quake 4 failed to load textures and its log reported r600_dri.so problems. These all had the same cause - the libs they included were incompatible with the newer libs being loaded automatically from the OS. They also had the same solution - rename the included libs. On Quake 4 that included libgcc_s.so.1, libSDL-1.2.id.so.0, and libstdc++.so.6. To determine where a dynamic binary is getting its libraries from, use the <a href="http://manpages.ubuntu.com/manpages/trusty/man1/ldd.1.html">ldd command</a>, but only rename the libs necessary to get the game working or other compatibility problems may occur.<br />
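<br />
A minimal sketch of the process, assuming a hypothetical Quake 4 install directory and binary name:<br />
<br />
<code>cd /usr/local/games/quake4<br />
ldd ./quake4.x86   # shows which copy of each library will actually be loaded<br />
mv libstdc++.so.6 libstdc++.so.6.disabled   # shadow the bundled lib so the system one loads</code><br />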
<br />
Torchlight would load but the cursor had a black box around it and none of the menu buttons could be clicked. The only way out of it was Alt-F4. Past experience had taught me that libSDL is a common culprit. It was using version 2, the same version that Ubuntu 12.04/Linux Mint 13 supplied, but the game had worked without problems on those. I suspected that changes to the lib (or its dependencies) since then had created an incompatibility, so I simply copied the lib from a 12.04 system to the 14.04 system and put it in /usr/local/games/Torchlight/lib64. The game started without problems.<br />
<br />
Using different library versions works if they haven't changed much. The better solution is recompiling the game with newer libraries but getting developers to do that with old closed-source games is difficult. While fixing library problems is not n00b-friendly, it's trivial compared to what is required to get old <a href="http://en.wikipedia.org/wiki/Loki_Software">Loki Software</a> games working.<br />
<br />
<b>Wgetting to the correct file name</b> (2014-08-29)<br />
<br />
One of the annoyances of using wget in scripts is that files from some web sites end up named "index.html" instead of the actual file name. This is due to the use of the <a href="http://tools.ietf.org/html/rfc2183">Content-Disposition</a> header field (part of MIME) to indicate the file name instead of the URL. Support for this field in wget is incomplete and disabled by default.<br />
<br />
There are a few ways around this. The easiest is to tell wget to look for it with the <a href="http://www.gnu.org/software/wget/manual/html_node/HTTP-Options.html">--content-disposition</a> option.<br />
<br />
Another option is to use <a href="http://curl.haxx.se/docs/manpage.html">curl --remote-header-name</a> or <a href="http://aria2.sourceforge.net/manual/en/html/aria2c.html">aria2</a> instead of wget.<br />
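<br />
For example (hypothetical URL), both of these honor the server-supplied file name:<br />
<br />
<code>wget --content-disposition 'http://example.com/download?id=42'<br />
curl -O -J 'http://example.com/download?id=42'   # -J/--remote-header-name requires -O</code><br />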
<br />
To easily create commands for more complicated file retrieval (with cookies, etc.) the Firefox add-on <a href="https://github.com/zaidka/cliget">cliget</a> can provide a command line for curl, wget, and aria2 in the opening download window. It only shows the curl command line by default; you can enable the others in cliget's preferences (available in the cliget entry in about:addons).<br />
<br />
<b>The Heartbleed Bug and You</b> (2014-04-11)<br />
<br />
<b><a href="http://www.urbandictionary.com/define.php?term=TL%3BDR">TL;DR</a></b>: Most web sites have been leaking passwords due to a bug for the past two years. Immediately change your Internet passwords for email and financial accounts, even if you changed them on April 7, 2014 or in the days preceding. Don't use the same password for all sites because if one is compromised, they all are, even on sites that were not affected by the bug. Use a password manager such as <a href="http://keepass.info/">KeePass</a>/KeePassX or <a href="http://pwsafe.org/">Password Safe</a> to keep track of them. Don't click on links in emails, especially any that seem to come from your bank asking you to change your password. Go to your bank's web site directly by entering the address in your web browser manually.<br />
<br />
<b>Verbose Version</b><br />
You may have seen reports about the <a href="http://heartbleed.com/">Heartbleed</a> Internet bug in the news lately. Note that the term "bug" doesn't mean malware or a virus, just an unintentional flaw. This particular bug is a vulnerability in the encryption system used by about two-thirds of the web sites on the Internet. Ignoring the hype, on a scale of 1 (ignore) to 10 (critical) this is an 11. Read this through entirely before panicking.<br />
<br />
The bug allows an attacker to potentially obtain passwords from web sites or the credentials of the web site itself (possibly for creating a fake web site that validates as authentic). Most web server operators can detect if they have the vulnerability but not if someone exploited it. While there are lists of affected sites and test services that can tell if a web site is affected, there are so many that it's best to assume most sites you have passwords on were exploited. Microsoft's web sites were NOT affected since they have their own encryption system, nor were web sites operating with their software (not that they haven't had many other security problems). But it's difficult for web users to determine what software a site is using now or what was used previously. The software you are using on your computer, tablet, or phone doesn't matter since the vulnerability affects the servers you connect to.<br />
<br />
What to do? Immediately change your passwords. You should change them periodically anyway, at least once a year depending on how complicated and long they are. Log into each account and look for a password change option. Sometimes this option is on the account's "profile" or "preferences" page.<br />
<br />
If you change your password on a site that is still affected the benefit will be temporary since an attacker can just obtain it again. Many of the affected web sites have been fixed in the past two days, but even those that were safe at the time the bug was found may have already been exploited if they were vulnerable at any time in the two years since the bug was introduced.<br />
<br />
You can expect to start receiving notices from major companies about the need to change your passwords. Don't wait for them to contact you. Some web site operators may underestimate the risk or be reluctant to announce it due to liability concerns.<br />
<br />
Because of the size of the problem you can also expect fake emails (phishing) prompting you to change your passwords. These are attempts to get your current passwords. To avoid being caught by these never click on a link in an email. Always go to the web site directly by entering the web site address into the web browser's address bar or by searching for the company by name using Google.<br />
<br />
I suggest prioritizing password changes as follows. Some web sites and vendor names are listed as examples but not all of them had the Heartbleed vulnerability.<br />
<br />
1. Email (Google/Gmail, Yahoo!). Do these first since they can be used to reset passwords on most other web site accounts. If your email provider has other services (Google+, Google Docs, Yahoo! Groups) then changing your email password usually changes it for everything else from them.<br />
Google password change: <b>https://www.google.com/settings/security</b><br />
Yahoo! password change: <b>https://edit.yahoo.com/config/change_pw</b><br />
<br />
2. Financial (banks, credit unions, investments, IRAs, pensions, PayPal)<br />
3. Government (healthcare.gov, passports, social security, taxes)<br />
4. Password services (a.k.a. "Single Sign-On"). If you are using a service that controls access to multiple web sites (OpenID, Facebook, Google), then change that too since a breach with it can affect multiple sites. Usually these aren't used for financial and government sites so the potential damage is less but they affect many other web sites.<br />
5. Any site that stores your credit card or financial information (Amazon.com and accounting sites like Mint.com).<br />
6. Vendors (insurance, medical) and utilities (power, phone, Internet) that have your Social Security number or you have set up for automatic bill payments.<br />
7. Any site that makes payments to you (eBay, Etsy, Freelancer.com).<br />
8. Computer remote backup services. These usually don't have the encryption key for reading the contents of your backup data but someone with your password may be able to delete your backups. If your encryption key was weak (short and simple), an attacker that obtains your backup data via your password may be able to break the encryption key by trying every possible key combination.<br />
9. File hosting services (Dropbox, MediaFire, Google Drive). An attacker can install malware within the hosted files and spread it to everyone that accesses them.<br />
10. Social networking sites (Twitter, LinkedIn, Reddit, dating) and picture hosting sites (Imgur, Panoramio, Photobucket) are less important but worth changing if you have information on them that you want to keep private.<br />
11. Passwords for blogs and web sites you have made should be changed to prevent them from being used to host malware.<br />
<br />
Vendors that don't keep your credit card info (i.e., you have to enter it every time you buy something) are less of a risk but you should consider changing those of vendors you depend on the most and keep an eye on your credit card statements.<br />
<br />
Passwords used for starting your computer or unlocking your phone are probably not affected.<br />
<br />
Some wireless Internet sharing boxes (routers) you may have in your home or business have built-in web servers for configuring them and many are affected by the Heartbleed vulnerability. Fortunately the configuration web site usually can't be accessed by people on the Internet unless the device is intentionally configured to allow remote access. Unfortunately manufacturers aren't likely to fix any that have the problem.<br />
<br />
Some security systems and home automation systems also have web access and can be affected. If you have a system that you can control from a web browser (on your computer or phone), then contact the manufacturer or installer to verify it and get it fixed if necessary.<br />
<br />
You should not use the same password for multiple sites. If one is compromised, then they all are regardless of which has the Heartbleed bug. Create unique passwords for each and use a password manager program to keep track of them. With a password manager you create a single (preferably long and complicated) master password you can remember. This is used to encrypt its password list. You then use the manager to create complicated random passwords for each web site, preferably random letters and numbers at least 12 characters long. Many web sites accept much longer passwords and typographical characters (@$%*&...) but not all do. You don't have to remember these, just copy and paste them from the password manager as needed.<br />
<br />
Many web browsers also have password managers built-in and prompt you to store passwords when you enter them. But these password lists are often not well encrypted which makes it easier for an attacker (or other users of your computer) to obtain them. In addition, if you decide to change web browsers, these password lists can be difficult to transfer. Standalone password managers avoid this problem and can also be used for other codes and non-Internet passwords.<br />
<br />
Two popular free password managers for desktop and laptop computers are <a href="http://keepass.info/">KeePass</a> and <a href="http://pwsafe.org/">Password Safe</a>. I use KeePassX on Linux and it's installed on every system I build (in the menu: Applications > Accessories > KeePassX). It's compatible with KeePass on Windows. It's available in the repositories of most Linux distributions (Ubuntu, Linux Mint, etc.)<br />
<br />
Obviously the password manager will now be the center of your Internet life so it's important to make a backup of its password file, perhaps to a Flash "thumb" drive, and keep it in a safe place. To limit loss in case your computer is burned up in a house fire, keep the backup in a different location like the house of a relative, your car, or a safe deposit box. And don't forget its master password!<br />
<br />
Changing many passwords is a pain but don't ignore this critical security problem. The damage is already done and all you can do is prevent further damage to your Internet accounts.<br />
<br />
You can panic now.<br />
<br />
<b>Cleaning up the Ubuntu and Linux Mint game menus</b> (2013-09-26)<br />
<br />
Most kids love games. Thanks to F/OSS developers, Humble Bundle, Desura, Steam, and a variety of cross-platform Kickstarter and Indiegogo projects they now have many to choose from with more on the way.<br />
<br />
But when I set up a system and start loading dozens of games the desktop menus become quite a mess. Some games are not listed, or are listed only in the main "Games" menu but not the correct sub-menus, or are listed in multiple sub-menus, or are missing icons, or fail to function.<br />
<br />
This is on top of the usual problems with broken packages, bitrot with old closed-source games (<a href="http://en.wikipedia.org/wiki/Loki_Software">Loki</a> titles in particular), conflicts with PulseAudio, and a lack of documentation combined with unusual keyboard key assignments. The latter is really annoying when a game launches full screen and the user is unable to figure out how to exit it.<br />
<br />
Obviously some problems are much more difficult to resolve than others but the desktop menus are fairly easy to fix. I have fixed dozens and you can do the same. The link to the archive with my changes is at the bottom of this post but before you get into that you need to understand how the menu system works.<br />
<br />
Originally all window managers and desktop environments used their own menu configuration files. One legacy menu structure is located in /usr/share/applnk. Debian created a <a href="http://www.debian.org/doc/packaging-manuals/menu.html/ch3.html">standard for their distro </a>where a menu file was created once in /usr/share/menu and then exported dynamically as needed for each menu system. Later, freedesktop.org (formerly named the X Desktop Group) developed a new standard that has been adopted by Gnome, KDE, Xfce, and others.<br />
<br />
With the fdo/XDG standard the menu entries are created as <a href="http://en.wikipedia.org/wiki/INI_file">INI files</a> according to the <a href="http://standards.freedesktop.org/menu-spec/">Desktop Menu Specification</a> (*.menu) and <a href="http://standards.freedesktop.org/desktop-entry-spec/">Desktop Entry Specification</a> (*.desktop). The latter standard also defines how to specify the names and icons of the menus themselves (*.directory). Most desktop environments follow v1.0 of the standard but a v1.1 draft is in development. The *.menu files are located in /etc/xdg/menus. The *.desktop files are located in /usr/share/applications or /usr/local/share/applications. The *.directory files are located in /usr/share/desktop-directories or /usr/local/share/desktop-directories.<br />
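<br />
For orientation, a minimal desktop entry looks like this (hypothetical game and file names; it would be saved as something like /usr/local/share/applications/examplegame.desktop):<br />
<br />
<code>[Desktop Entry]<br />
Type=Application<br />
Name=Example Game<br />
Comment=Hypothetical entry for illustration<br />
Exec=examplegame<br />
Icon=examplegame<br />
Terminal=false<br />
Categories=Game;ArcadeGame;</code><br />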
<br />
A note about /usr, /usr/local, and /opt. Their structure is defined by the <a href="http://www.pathname.com/fhs/">Filesystem Hierarchy Standard</a>. The first two follow the traditional *nix structure where binaries, libraries, documentation, and support files are organized into specific subdirectories by type and may be shared with other programs (especially libraries). Files for programs in /opt are organized by provider and are generally contained entirely within their own subdirectory and not shared with other programs. In practice, /usr is where files "managed" by a package manager (apt) are located. Unmanaged files are in /usr/local and are usually not registered with the package manager. Files in /opt may be managed or unmanaged depending on their source (an install script or a deb package).<br />
<br />
Unfortunately some games don't follow the FHS properly. Some managed packages put their files in /usr/local and some installers put their unmanaged files in /usr. Other installers add their files and then ask to "register" with the package management system for easier removal, resulting in a mix of managed and unmanaged files. Managed files can be identified using <a href="http://manpages.ubuntu.com/manpages/precise/en/man1/dpkg-query.1.html">dpkg-query</a>:<br />
<br />
<code>dpkg-query -S /path/to/filename</code><br />
<br />
If dpkg-query doesn't find the file then it's not managed.<br />
<br />
Desktop entries are organized according to rules in the *.menu files. While a *.desktop file can be specified directly, most are arranged by matching "Categories" values in the files. According to the standard there are both "Main" and "Additional" categories. A desktop entry file should have one Main category except as required (for example, "Audio" or "Video" mandates "AudioVideo") and an Additional category for more precise organizing. Most of the Additional categories are reserved for specific Main categories. Obviously sub-menus have to be defined before they can be used else everything lands in the primary menus. Linux Mint's Xfce menu configuration doesn't define the game secondary sub-menus by default (mint-artwork-xfce package) so the games menu gets very crowded. My menu configuration defines sub-menus for all standard game categories.<br />
<br />
Desktop entry files that specify multiple Main and Additional categories are an occasional problem. While some programs can arguably fit into more than one category, doing this in the desktop entry file often causes duplicates in the menus. Duplication can be suppressed with menu rules but it's much easier to avoid the problem in the first place. Some desktop entry files don't specify an Additional category and as a result the menu entry lands in the primary sub-menu, not in the secondaries with similar programs. Occasionally there will be a standards-compliant combination of Main and Additional categories that is not handled properly in the menu definition files, which results in an entry erroneously stuck in the main menu or under an "Other" sub-menu. But errors in the desktop entry files are the root cause most of the time.<br />
<br />
Another problem is the categories themselves. The <a href="http://en.wikipedia.org/wiki/Video_game_genres">video game genres</a> are somewhat ambiguous and many games overlap multiple genres. Even the main category "Game" can be a problem with some. Emulators,
for example, can be specific to a game console (PCSX) or general-purpose
(QEMU). The fdo/XDG standards only cover a few game genres, some of which are very narrow while others are overly broad. These are my interpretations:<br />
<br />
<b>ActionGame</b>: A catch-all for anything that requires reflexes that doesn't fit in a more specific category.<br />
<br />
<b>AdventureGame</b>: Point-and-click adventures and interactive stories. If it requires reflexes then it doesn't belong here. Even so, there are some games which have a mixture of both so it becomes a judgment call as to developer intent (<a href="http://en.wikipedia.org/wiki/Full_Throttle_%281995_video_game%29">Full Throttle</a>).<br />
<br />
<b>ArcadeGame</b>: An ambiguous genre unless you restrict it to games that were found in coin-operated arcades at some point. I included games with simplistic gameplay and a minimalistic storyline, if any. If it has a point-based "high score" then it's probably an arcade game.<br />
<br />
<b>BoardGame</b>: Board games and games that could reasonably be board games or were derived from one.<br />
<br />
<b>BlocksGame</b>: Tile-matching and Tetris types, as per the <a href="http://en.wikipedia.org/wiki/Tile-matching_video_game">Wikipedia article</a>.<br />
<br />
<b>CardGame</b>: Probably the easiest to figure out although gtali is a bit of an oddball.<br />
<br />
<b>KidsGame</b>: Another non-genre genre. I included any game with a theme that is heavily slanted towards young children. However, some also fall into the Educational main category (GCompris).<br />
<br />
<b>LogicGame</b>: Slightly odd name as it includes puzzles. Again there are some platformers that combine this with action (EDGE, Stealth Bastard).<br />
<br />
<b>RolePlaying</b>: If it doesn't have a character development system then it's not a role-playing game. Having a fantasy theme is not good enough. Waste's Edge is an example of a game that says it's a role-playing game when it obviously is an adventure game. It's nothing more than a long dialog puzzle. It's not a bad puzzle and has a good story, but it's not a role-playing game.<br />
<br />
<b>Simulation</b>: Vehicle, economic, and environment simulations including cars, cities, and space. Car racing games are often marketed as sports games but that never seems to include airplane racing games. Some vehicle simulations have cinematic physics and may belong elsewhere (ArcadeGame, ActionGame).<br />
<br />
<b>SportsGame</b>: Traditional sports not involving vehicles. Includes "fantasy" sports management applications. Real sports management applications probably belong elsewhere, perhaps Office/ProjectManagement.<br />
<br />
<b>StrategyGame</b>: Some are obvious, some not. The problematic ones are those that require tactics but not quite long-term strategic thinking (Atom Zombie Smasher). I tend to follow the developer's opinion but not always.<br />
<br />
<b>Amusement</b>: Snowing desktops (xsnow), eye-candy (xdesktopwaves), and the like. Doesn't have a defined Main category but I set all the ones I found to Games since that is the most logical option. Creating a new primary sub-menu didn't make sense.<br />
<br />
<b>Shooters</b>: This is new in the v1.1 draft specification. Again it is a bit too broad a category but because the ActionGame category is so crowded I decided to support it, just not by default. I wrote the "set-shooter-categories" script which changes some of my desktop entries to this category (mostly first-person shooters) but the targets are all hard-coded. It doesn't have a reversion option so keep a backup copy of the original *.desktop files. I defined a Shooters sub-menu in my replacement menu definitions (applications.menu, xfce-applications.menu) and included an icon and the shooters.directory file.<br />
<br />
Adding to the problems are programs that represent many games including the Steam client, Desurium, and gtkboard (which includes Tetris and Mastermind). I normally just leave these in Games by not specifying an Additional category.<br />
<br />
Some games in the repos have the old Debian menu files but not the newer fdo/XDG files. There is an optional "Debian menu" that can be merged with the newer menu files to create a "Debian" sub-menu with all the old menu entries. However it has a different structure and is mostly redundant with the main menus - overkill for just getting some games to show up. I encountered the missing *.desktop file problem often enough that I wrote the "deb-menu-conv" script to export old Debian menu files as something that resembles the new fdo/XDG files. While deb-menu-conv handles the basics, some things like categories (called "sections" in the Debian menus) do not translate directly. It has some hard-coded conversions for games but most anything else will need manual editing. Debian menu files also allowed multiple menu entries per file while the fdo/XDG standard does not. All entries found in a menu file are exported by deb-menu-conv and have to be split manually into separate *.desktop files. This script saved me a whole bunch of time but some packages were still a pain (xmpuzzles, xteddy, x11-apps) due to the number of entries their old menu files contained. I excluded most terminal and text-mode entries.<br />
<br />
Some of the programs in the Debian menus are obsolete but I included them as a test of
deb-menu-conv. Many of the amusement-type programs (xfireworks, xsnow,
xpenguins) require direct access to the primary X window (root window)
that is commonly controlled by a desktop manager in modern desktop
environments (xfdesktop in Xfce). They can be used on Xfce if xfdesktop
is terminated first (Settings Manager > Session and Startup
> Session > xfdesktop > Quit Program). Log out and
back in again or create a launcher in a desktop panel to restart
xfdesktop afterwards.<br />
<br />
I also fixed some other bugs via the desktop entry files. Doom3 doesn't get along with PulseAudio so I added pasuspender to the Exec line in its file. Some games have both 32-bit and 64-bit versions but their installer only adds one or the other and they have different names. This meant that a single menu entry couldn't support both so I had to replace it with two different files. For several of the games with undocumented keys and lack of built-in help I added additional menu entries that open man pages, Readme documents, and web sites with relevant information. <br />
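<br />
The pasuspender fix looks something like this in the desktop entry (hypothetical game path):<br />
<br />
<code>Exec=pasuspender -- /usr/local/games/doom3/doom3</code><br />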
<br />
One parameter that is very useful is "TryExec". This is a test where the menu system looks for the target file and only shows the menu entry if it is found. Since it is targeting executables it can be used to check for both file and directory targets. All of my desktop entry files have this set so that if they are all installed they won't crowd the menus with entries for unavailable programs. This is also handy for menu entries that target the 32-bit and 64-bit versions of a binary. However, if a game normally installs both then only one entry is used and it points to a script that dynamically loads the correct binary based on the system architecture. I wrote or modified several game loading scripts for this reason (and to fix other bugs).<br />
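<br />
As an illustration (hypothetical paths), an entry like this stays hidden unless the 64-bit binary actually exists:<br />
<br />
<code>TryExec=/usr/local/games/Torchlight/Torchlight.bin.x86_64<br />
Exec=/usr/local/games/Torchlight/Torchlight.bin.x86_64</code><br />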
<br />
Installing a modified menu or desktop entry file is easy if it is unmanaged - just overwrite the original. However, managed files are more complicated. If an update is installed by the package manager then the modified file may be overwritten. The solution is to divert the original file to a different location with <a href="http://manpages.ubuntu.com/manpages/precise/man8/dpkg-divert.8.html">dpkg-divert</a>. When apt applies the update it will then redirect the diverted file to the new location, leaving the modified one intact.<br />
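<br />
A sketch of a diversion matching the quake3.desktop example below (the file-overrides script automates this):<br />
<br />
<code>sudo dpkg-divert --divert /usr/share/Diversions/file-overrides/managed/usr/share/applications/quake3.desktop \<br />
  --rename /usr/share/applications/quake3.desktop</code><br />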
<br />
The installation of the modified files is easy when only a few files are involved. But I ended up with dozens which I realized would be error-prone and time-consuming to install. So I wrote the "file-overrides" script to handle installation automatically, make backups of replaced files, and be able to revert changes. It has built-in help but I will summarize the basics here.<br />
<br />
Overrides are located in a subdirectory structure that mimics the actual targets. The default directory is /usr/local/share/file-overrides.d which contains two subdirectories, managed and unmanaged. The difference is that managed targets are diverted (and moved) by dpkg-divert while unmanaged targets are moved directly. These subdirectories must exist else the script will abort.<br />
<br />
The diversions and backups are located in /usr/share/Diversions/file-overrides by default and this directory must also exist. The script will create "managed" and "unmanaged" directories as needed and will remove their contents automatically during reversions.<br />
<br />
The structures are relative so that a replacement for the managed file:<br />
<br />
/usr/share/applications/quake3.desktop<br />
<br />
is located at:<br />
<br />
/usr/local/share/file-overrides.d/managed/usr/share/applications/quake3.desktop<br />
<br />
During an override operation the original will be moved/diverted to:<br />
<br />
/usr/share/Diversions/file-overrides/managed/usr/share/applications/quake3.desktop<br />
<br />
prior to being replaced. During a reversion the sequence is reversed, more or less. While intended for fixing menu entries the script can be used to replace any file on the system. It does have some special support for the *.desktop files. Normally the script will only install an override file if the target already exists to keep the directories clean. However, if the file is a desktop entry file with TryExec set, and the target executable exists, then the override will be applied even if the target *.desktop file does not exist. The --force parameter applies all overrides regardless.<br />
<br />
Another special case is with override files that are zero bytes in size. This causes the target to be moved/diverted but not replaced, essentially causing a deletion.<br />
<br />
See the readme.txt file for more info on included icons and such.<br />
<br />
The archive contains everything. Note that it has been developed and tested on Ubuntu 12.04 (Precise Pangolin)/Linux Mint 13 (Maya) with <a href="https://launchpad.net/~xubuntu-dev/+archive/xfce-4.10">Xfce 4.10</a>. It will probably work with newer releases (and Debian) and other desktop environments with the exception of the menu files which may need adjusting. Compare them to the originals in /etc/xdg/menus.<br />
<br />
<br />
<a href="http://www.mediafire.com/?gznaa93u315dai6">file-overrides.d_v1.1.tar.bz2</a><br />
<br />
md5sum: b15e343fe9c901f4556bed27fbce8cf8<br />
<br />
<b>Latest batch of UFW application profiles</b> (2013-03-04)<br />
<br />
I'm working on an updated <a href="http://jhansonxi.blogspot.com/2011/12/full-featured-ubuntu-online.html">kickstart script</a> for Ubuntu 12.04 and it's late as usual. Part of the reason was the Unity/Gnome3/Linux Mint/Xfce desktop environment choice problem. Trying to select one that my customers would like, that would work with full-screen 3D games, and that fits my personal non-Xinerama multi-head (aka "Zaphod mode") desktop configuration has been difficult. I was also dragged into one of the largest CAD projects of my career which took about 5x longer to finish than estimated (mostly not my fault). It paid well but at 70+ hours per week it really messed up the schedules of my other projects.<br />
<br />
One of the major developments of the 10.04 installation was the <a href="http://jhansonxi.blogspot.com.es/2010/10/ufw-application-profiles.html">creation of a bunch of application profiles</a> for use with UFW, the "Uncomplicated FireWall". I submitted these to Ubuntu for inclusion in future releases. After some initial enthusiasm on the part of their developers for the contribution, the profiles have been more or less forgotten about.<br />
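<br />
For reference, UFW application profiles are INI-style files under /etc/ufw/applications.d; a minimal hypothetical example:<br />
<br />
<code>[ExampleGame]<br />
title=Example Game<br />
description=Hypothetical game server profile<br />
ports=27960/tcp|27960:27963/udp</code><br />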
<br />
There are multiple GUI interfaces for UFW but at the time only <a href="http://code.google.com/p/ufw-frontends/">ufw-frontends</a> supported profiles well. About a week ago a member of the <a href="http://gufw.org/">Gufw</a> team contacted me about including the profiles as part of their package. They also found some errors. I'm not sure what their current version looks like but hopefully it supports the reference and category extensions to the file format I added.<br />
<br />
One thing the profiles didn't have was a license. After some consideration I settled on GNU GPL v3 or later. Even though the profiles are not source code or configuration files related to compiling, the <a href="http://www.gnu.org/licenses/license-list.html#GPLOther">GPL can be used</a>.<br />
<br />
Because of all this I've <a href="http://www.mediafire.com/?qh1bn98a80mesm4">updated the profiles to v1.4</a> and now include a license file and MD5 checksums for identifying which files are covered by the GPL.<br />
<br />
<br />
<b>Full-featured Ubuntu online installation using kickstart</b> (2011-12-15)<br />
<br />
This is an elaborate fault-tolerant Kickstart script for an on-line Ubuntu installation, optimized for home users, with extensive remote administration support and documentation. Not recommended for beginners.<br />
<br />
This isn't just another trivial automated installation script although it started out that way. Basic installation presets led to integrated bug workarounds, setting defaults for many applications and servers, more features, etc. While you may disagree with some of my package choices, they were selected for my clients - not you. Change it if you have different needs. First, a little background on my deployments.<br />
<br />
All of my clients have cheap desktop systems or laptops, usually outdated. Almost any CPU, chipset, GPU, and drive configuration. They're either stand-alone or connected together on small Ethernet networks. Some have broadband, some only dial-up (POTS). Ages vary from toddlers to senior citizens. A few are Windows gamers. This mix results in a wide variety of system hardware, peripherals, application requirements, and configurations. I've had to deal with most every type of kernel, application, and hardware bug. Every deployment unearths a new bug to fight. Some of these are Ubuntu's fault but many are upstream.<br />
<br />
Inevitably I spend many hours doing full OS conversions to Ubuntu or dual-boot configurations. I've found that using a Live CD to install Ubuntu is about 4x faster than installing Windows when drivers, updates, and application installs are figured in. While I could set up slipstream builds of Windows I don't install it enough to bother with and the variety of versions (Home, Pro, upgrade, OEM,...) and licenses makes it impractical. Relatively speaking, I spend about 3x as long transferring documents, settings, and game/application files (scattered all over C:) to Ubuntu than I do installing either it or Windows. But I'll take any time savings I can get.<br />
<br />
A while back, when Ubuntu 10.04 (Lucid Lynx) was released, I decided to streamline my installations. This wasn't just to save time. I also needed to make my installations more uniform as I couldn't remember all the various tweaks and bug fixes that I performed from installation to the next.<br />
I had several goals for this project, not necessarily all at the beginning as some were the result of test installs, client feedback, and feature creep.<br />
<ol>
<li>Fix all the bugs that my clients encountered on their existing installs plus all the other Ubuntu annoyances I've been manually correcting.</li>
<li>Do everything the "correct way" instead of blindly following HOW-TOs from amateurs that involved script and text file hacking that would be lost on the next update. I had to learn proper use of Gconf, PolicyKit, Upstart, init scripts, and dpkg.</li>
<li>Configure all of the network features that my clients had asked for, usually file or peripheral sharing. Internet content filtering for kids was a requirement.</li>
<li>Secure remote access and administration. It's bad enough when a client has a software problem. Having to waste time with an on-site visit is idiotic when it's not an Internet access problem and a broadband connection is available. The same kickstart configuration can be used for both an "administration" system as well as clients. Having them nearly identical makes both remote and verbal support easier.</li>
<li>Make it easier to obtain diagnostic and status information, for me and the client.</li>
<li>Research applications that meet customer needs and are stable. Configure them so the customer doesn't need to.</li>
<li>Document everything, especially anything I spent significant time researching.</li>
</ol>
<br />
On all of these I mostly succeeded. There are still a few gaps but they're minor (for my deployments at least), and after working on this for 18 months I needed to get on with my life. I figure that after a few million deployments I should break even. I'm now busy updating the dozen or so I currently have.<br />
<br />
So what's in it? The base is just a plain 10.04 (i386 or amd64) installation. Two reasons for that - it's the LTS release and I didn't have time to upgrade to newer releases or work around their new bugs. It's supported <a href="https://wiki.ubuntu.com/LTS">for another year</a> or so. I'll probably update it for 12.04 after that is released (and clean up my code). Highlights:<br />
<br />
<b>Apache</b>. Used for sharing the public directory (see below) and accessing the various web-based tools. The home page is generated from PHP and attempts to correct for port-forwarding (SSH tunnel) if it detects you are not using port 80.<br />
<br />
<b>Webmin</b>. It's the standard for web-based administration. I added a module for ddclient (Dynamic DNS). The module is primitive but usable and I fixed the developer's Engrish.<br />
<br />
<b>DansGuardian</b>. Probably three months work on just this. For content filtering there isn't really anything else. Unfortunately it has almost no support tools so I had to write them. Most of these have been announced in previous blog postings although they've been updated since then. The most complicated is "dg-squid-control" which enables/disables Squid, DansGuardian, and various iptables rules. Another loads <a href="http://www.shallalist.de/">Shalla's blacklist</a>. It doesn't have system group integration so I wrote "dg-filter-group-updater" to semi-integrate it. There are four filter groups - no access, restricted (whitelist with an index page), filtered, and unrestricted. I added a Webmin module for it I found on Sourceforge. It's not great but makes it easier to modify the grey and exception lists. Included are lists I wrote that allow access to mostly kid sites (a couple of hundred entries). The entries have wiki-style links in comments that are extracted by "dg-filter-group-2-index-gen" to create the restricted index page. There's a How-To page for proxy configuration that users are directed to when they try to bypass it.<br />
<br />
The only limitation is that browser configurations are set to use the proxy by default but dg-squid-control doesn't have the ability to reset them if the proxy is disabled. I spent two weeks working on INI file parsing functions (many applications still use this bad Windows standard for configuration files). While they seem to work I need to significantly restructure the tool to make use of them.<br />
<br />
DansGuardian had seen no development for a few years but a new maintainer recently took charge and patches are being accepted. Hopefully full system account integration will be added.<br />
<br />
<b>UFW</b>. The Uncomplicated Firewall is a front-end to iptables and there is a GUI for it. One feature it has is application profiles, which make it easy to create ready-to-use filter rules. I created about 300 of them for almost every Linux service, application, or game (and most Windows games on Wine).<br />
<br />
<b>File sharing</b>. The /home/local directory is for local (non-network) file sharing between users on the same system. There is also a /home/public directory that is shared over Samba, HTTP, FTP, and NFS. WebDAV didn't make the cut this time around.<br />
<br />
<b>Recovery Mode</b>. I added many scripts to the menu for status information from just about everything. Several of my tools are accessible from it.<br />
<br />
<b>SSH server</b>. You make a key named client_administrator_id_dsa with ssh-keygen (it should be passphrase-protected) and include the public part (*.pub) in the kickstart_files/authentication sub-directory. It is added to the ssh configuration directory on every system. Using another tool, "remote-admin-key-control", system owners (sysowner group) can enable or disable remote access. This is for several reasons including privacy, liability, and accounting (for corporate clients where the person requesting support may not have purchase authority).<br />
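<br />
Generating the pair looks something like this (enter a passphrase when prompted):<br />
<br />
<code>ssh-keygen -t dsa -f client_administrator_id_dsa<br />
cp client_administrator_id_dsa.pub kickstart_files/authentication/</code><br />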
<br />
When the remote-admin-key-control adds the key to the administrator account ~/.ssh/authorized_keys, you can connect to the system without a password using the private key (you still need to enter the key passphrase). The radmin-ssh tool takes this one step further and forwards the ports for every major network service that can function over ssh. It also shows example command lines (based on the current connection) for scp, sftp, sshfs, and NFS. You still need the administrator password to get root access.<br />
<br />
<b><a href="http://x2go.org/">X2Go</a></b>. Remote desktop access that's faster than VNC. Uses SSH (and the same key).<br />
<br />
<b>OpenVPN</b>. A partially configured Remote Technical Support VPN connection is installed and available through Network Manager. If the client system is behind a firewall that you can't SSH through, the client can activate this VPN to connect to your administration system so that you can SSH back through it. Rules for iptables can be enabled that prevent the client accessing anything on the administration system. It connects using 443/udp so should work through most firewalls.<br />
<br />
<b>Books and guides</b>. Located in the desktop help menu (System > Help) is a menu entry that opens a directory for books. My deployments have subdirectories with Getting Started with Ubuntu 10.04 - Second Edition from the <a href="http://ubuntu-manual.org/downloads">Ubuntu Manual Project</a> and <a href="http://wiki.services.openoffice.org/wiki/Documentation/OOo3_User_Guides/Chapters">OpenOffice.org user guides</a>. You can easily add more as the kickstart script grabs everything in its local-books subdirectory. For the end-user I wrote networks-and-file-sharing-help.html (in the same help menu).<br />
<br />
For the installer the main source of documentation is the kickstart script itself. I got a little carried away with comments. The next major document is TODO.html which is added to the administrator's home directory. It was intended to list post-install tasks that needed to be completed since there are many things the installer can't do (like compile kernel modules). After adding background information on the various tasks, troubleshooting help, and example commands, it's basically another book. You should read it before using the kickstart script.<br />
<br />
<b><a href="http://scannerserver.online02.com/">Scanner Server</a></b>. Allows remote access to a scanner through a web interface. Simpler than using saned (but that is also available if you enable it). It had several bugs so I fixed it and added a few features (with help from a Ubuntu Forum member pqwoerituytrueiwoq). Eventually we hit the limit of what it could do so pqwoerituytrueiwoq started writing <a href="http://ubuntuforums.org/showpost.php?p=10968753&postcount=117">PHP Server Scanner</a> as a replacement. For a 12.04 release I will probably use that instead. I wrote "scanner-access-enabler" to work around udev permission problems with some scanners (especially SCSI models).<br />
<br />
<b>Notifications</b>. Pop-up notices will be shown from smartd, mdadm, sshd, and OpenVPN when something significant happens. Without the first two the user doesn't know about pending drive problems until the system fails to boot. I've also had users turn the system off while I was in the middle of updating it remotely; the SSH notification helps prevent that. The OpenVPN notification is mostly for the administration system and includes the tunnel IP address of the client. OpenSSH has horrible support for this kind of scripting. OpenVPN's scripting support is absolutely beautiful.<br />
<br />
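As a sketch of what the OpenVPN side looks like, the server config can point client-connect at a script that receives the client name and tunnel IP in environment variables OpenVPN exports (script-security 2 is required; routing the pop-up to a desktop session needs extra plumbing not shown):<br />
<br />
# in the OpenVPN server config:<br />
script-security 2<br />
client-connect /usr/local/sbin/vpn-notify<br />
<br />
# /usr/local/sbin/vpn-notify:<br />
#!/bin/sh<br />
notify-send "OpenVPN" "$common_name connected, tunnel IP $ifconfig_pool_remote_ip"<br />
exit 0<br />
<br />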
<b>Webcam Server</b>. A command-line utility that I wrote a GUI for. It has a Java applet that can only be accessed locally but a static image is available from the internal web server to anywhere.<br />
<br />
<b>BackupPC</b>. It uses its default directory for backups so don't enable it unless you mount something else there. A cron job will shut the system down after a backup if there are no users logged in. It has been somewhat hardened against abuse with wrapper scripts for tar and rsync.<br />
<br />
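The shutdown job boils down to a cron entry like this (simplified; the real job also checks that the backup actually finished):<br />
<br />
# /etc/cron.d/backuppc-shutdown: power off at 03:30 if nobody is logged in<br />
30 3 * * * root [ -z "$(who)" ] && /sbin/shutdown -h now<br />
<br />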
There are many bugs, both big and small, that are either fixed or worked around. The script lists the bug numbers where applicable. The TODO document lists a bunch as well. Some packages were added but later removed (Oracle/Sun Java due to a licensing problem, Moonlight since it didn't work with any Silverlight site I tested).<br />
<br />
There are some limitations to Ubuntu's kickstart support. I'm not sure why I used kickstart in the first place. Perhaps the name reminded me of <a href="http://www.kixtart.org/">KiXtart</a>, a tool I used when I was a Windows sysadmin. Kickstart scripts are the standard for automating Red Hat installations (<a href="http://wiki.debian.org/DebianInstaller/Preseed">preseeding</a> is the Debian standard) but Ubuntu's version is a crippled clone of it. In part it acts like a preseed file (it even has a "preseed" command) but it also has sections for scripts that are exported and executed at different points during the installation. About 90% of the installation occurs during the "post-install" script. The worst problem with Ubuntu's kickstart support is that the scripts are exported twice and backslashes are expanded both times. This means that every backslash has to be quadrupled. This gets really ugly with sed and regular expressions. Because of this you'll see "original" and "extra slashy" versions of many command lines. I wrote quad-backslash-check to find mistakes.<br />
<br />
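To illustrate the quadrupling with a made-up sed call, here are the "original" and "extra slashy" versions of the same command line:<br />
<br />
sed -i 's/\(PermitRootLogin\) yes/\1 no/' /etc/ssh/sshd_config<br />
sed -i 's/\\\\(PermitRootLogin\\\\) yes/\\\\1 no/' /etc/ssh/sshd_config<br />
<br />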
The other problem is that the way the script is executed by the installer hides line numbers when syntax errors occur, making debugging difficult. I wrote quote-count and quote-count-query to find unmatched quotes (and trailing escaped whitespace that was supposed to be newlines) which were the most common cause of failure.<br />
<br />
I've made an archive of my kickstart file, its support files, and configuration files for various services on my server <a href="http://www.mediafire.com/?tvffidvh998xvit">for you to download</a> (12.5MB, MD5: b5e79e6e287da38da75ea40d0d18f07f ). The script, error checking and ISO management tools, and server configuration files are in the "kickstart" sub-directory. A few packages are included because they are hard to find but others are excluded because of size. Where a package is missing there is a "file_listing.txt" file showing the name of the package I'm using. My installation includes the following which you should download and add back in:<br />
<br />
<a href="http://www.amazon.com/gp/dmusic/help/amd.html">Amazon MP3 Downloader</a> (./Amazon/amazonmp3.deb)<br />
<a href="http://sourceforge.net/projects/dgwebminmodule/">DansGuardian Webmin Module</a> (./DansGuardian Webmin Module/dgwebmin-0.7.1.wbm)<br />
<a href="http://www.desura.com/">Desura client</a> (./Desura/desura-i686.tar.gz)<br />
<a href="http://sourceforge.net/projects/gmic/files/">G'MIC</a> (./GMIC/gmic_1.5.0.7_*.deb)<br />
<a href="http://grecipe-manager.sourceforge.net/">Gourmet</a> (./Gourmet/gourmet_0.15.7-1_all.deb)<br />
<a href="https://www.vmware.com/tryvmware/">VMware Player</a> (./VMware/VMware-Player-*.bundle)<br />
<br />
VMware Player is optional. Because it requires kernel modules to be compiled, the kickstart script only retrieves the first install file returned from the web server whose name matches the architecture and puts it in /root for later installation.<br />
<br />
The target systems need network-bootable Ethernet devices, either with integrated PXE clients or a bootable CD from <a href="http://rom-o-matic.net/">ROM-o-matic</a>. <br />
<br />
You need a DHCP server that can send out:<br />
<br />
filename "pxelinux.0"<br />
next-server <tftp_server_address><br />
<br />
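With ISC dhcpd that means a subnet declaration roughly like this (all addresses are placeholders):<br />
<br />
subnet 192.168.1.0 netmask 255.255.255.0 {<br />
  range 192.168.1.100 192.168.1.199;<br />
  option routers 192.168.1.1;<br />
  filename "pxelinux.0";<br />
  next-server 192.168.1.10;<br />
}<br />
<br />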
The tftp server needs to serve the pxelinux.0 bootstrap, vesamenu.c32, and the menu files. These are available from the <a href="http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-i386/current/images/netboot/">Ubuntu netboot images</a>. The bootstrap and vesamenu.c32 are identical between the i386 and amd64 versions; only the kernel, initrd, and menus differ. You can use my menu files instead of the standard set in the netboot archive. The most important is the "ubuntu.cfg" file. You'll notice that my menu files list many distros and versions. Only the utility, Knoppix, and Ubuntu menus function fully. The rest are unfinished (and probably obsolete) experiments. FreeDOS is for BIOS updates.<br />
<br />
My tftp server is atftpd, which is started on demand by inetd and works well except for a roughly 30MB limit on tftp transfers. This only affects the tftp version of Parted Magic (they have a script to split it into 30MB parts).<br />
<br />
I use loopback-mounted ISOs for the kickstart installs and all LiveCDs netboots. Because I have so many, I exceeded the default maximum number of loopback nodes available. I set max_loop=128 in my server's kernel command line to allow for many more.<br />
<br />
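On a Grub2 server that's one edit in /etc/default/grub followed by running update-grub (the rest of the line is illustrative):<br />
<br />
GRUB_CMDLINE_LINUX_DEFAULT="quiet max_loop=128"<br />
<br />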
The <a href="https://help.ubuntu.com/community/Installation/MinimalCD">Ubuntu Minimal CD ISOs</a> are the source for the kernel and initrd for the kickstart install. The architecture (and release) of the kernel on these ISOs must match the architecture of Ubuntu you want to install on the target system. You'll probably want both the i386 and amd64 versions.<br />
<br />
PXE Linux doesn't support symlinks so my ISOs are mounted in the tftp directory under ./isomnt. Symlinks to the ISOs are in ./isolnk and are the source of the mounts. I set it up this way originally because the ISOs were in /srv/linux in various subdirectories so having the links in one place made it easier to manage. But my ISO collection grew too big to manage manually so I wrote "tftp-iso-mount" that creates the mountings for me. It searches through my /srv/linux directory for ISO files and creates isomnt_fstab.txt that can be appended to fstab. It also deletes and recreates the isomnt and isolnk directories and creates the "isomnt-all" script to mount them.<br />
<br />
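The generated isomnt_fstab.txt entries are ordinary read-only loop mounts, one per ISO, sourced from the isolnk links, along these lines (names invented):<br />
<br />
/srv/tftp/isolnk/ubuntu-10.04-alternate-i386.iso /srv/tftp/isomnt/ubuntu-10.04-alternate-i386 iso9660 loop,ro 0 0<br />
<br />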
The ISOs are accessed through both NFS and Apache. I originally intended to use NFS for everything but I found that debian-installer, which performs the installation and executes the kickstart script (also on the "alternate" ISOs), doesn't support NFS. So I had to set up Apache to serve them. The Apache configuration is rather simple. There are a few symlinks in /var/www that link to various directories elsewhere. One named "ubuntu" links to /srv/linux/Ubuntu. The kickstart support files are placed in /srv/linux/Ubuntu/kickstart_files and are accessed via the link. NFS is still used for booting LiveCDs (for bug testing and demos). There is also a "tftp" symlink to /srv/tftp used for local deb loading (see below).<br />
<br />
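Creating the two symlinks mentioned above is just:<br />
<br />
ln -s /srv/linux/Ubuntu /var/www/ubuntu<br />
ln -s /srv/tftp /var/www/tftp<br />
<br />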
The kickstart script itself, Ubuntu-10.04-alternate-desktop.cfg, is saved to /srv/tftp/kickstart/ubuntu/10.04/alternate-desktop.cfg after being backslash and quote checked.<br />
<br />
Several preseed values are set with the "preseed" command at the beginning of the script. You'll probably want to change the time zone there. License agreements are pre-accepted since any prompt for input would halt the installation.<br />
<br />
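For example, the time zone is a single line in kickstart's preseed syntax (the question name comes from debian-installer; pick your own zone):<br />
<br />
preseed time/zone string US/Eastern<br />
<br />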
Like I mentioned earlier, the vast majority of the work happens in the post-install script, which executes after the base Ubuntu packages are installed. The most important variable to set is $add_files_root which must point to the URL and directory of your web server where the rest of the kickstart support files are located (no trailing slash). The script adapts for 32-bit and 64-bit packages as needed based on the architecture of the netboot installer. There is also a "late_command" script that executes near the end of the installation, after debian-installer creates the administrator account (which happens after the post-install script finishes).<br />
<br />
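Following the Apache layout described earlier, the value looks something like this (host address invented):<br />
<br />
add_files_root="http://192.168.1.10/ubuntu/kickstart_files"<br />
<br />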
The debug variables are important for the initial tests. The $package_debug variable has the most impact as it will change package installations from large blocks installed in one pass (not "true") to each package individually ("true"). When true, it slows down installation significantly but you can find individual package failures in the kickseed-post-script.log and installer syslog (located in /var/log/installer after installation). Setting $wget_quiet to null will guarantee a huge log file. The $script_debug variable controls status messages from the package install and mirror selection functions.<br />
<br />
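A typical first-run debugging setup using the variables just described would be (values illustrative):<br />
<br />
package_debug="true"   # one package per pass; slow, but failures are easy to spot<br />
script_debug="true"    # status messages from the install and mirror functions<br />
wget_quiet=""          # null = verbose wget output and a huge log<br />
<br />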
The $mirror_list variable contains a list of Ubuntu mirrors (not Medibuntu or PPAs) that should have relatively similar update intervals. This is used by the fault-tolerant mirror selection function, f_mirror_chk, that will cycle through these and check for availability and stability (i.e., not in the middle of sync). The mirrors included in the list are good for the USA. These are exported to the apt directory so that the apt-mirror-rotate command can use them to change mirrors quickly from the command line or through the recovery mode menu. When a package fails to be installed via the f_ftdpkg and f_ftapt functions, another mirror will be tried to attempt to work around damaged packages or missing dependencies.<br />
<br />
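Stripped to its core, the fail-over idea reduces to a loop like this (a minimal sketch; the real f_ftapt and f_mirror_chk do much more checking):<br />
<br />
f_ftapt() {<br />
  # try each mirror in turn until the requested packages install<br />
  for mirror in $mirror_list ; do<br />
    sed -i "s|http://[^ ]*/ubuntu/|$mirror|" /etc/apt/sources.list<br />
    apt-get update && apt-get -y install "$@" && return 0<br />
  done<br />
  return 1<br />
}<br />
<br />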
To save bandwidth the post-install script looks for loopback mounted ISOs of the Ubuntu 10.04 live CD and Ubuntu Studio (both i386 and amd64 versions) in the isomnt sub-directory via the tftp link in the Apache default site. It copies all debs it finds directly into the apt cache. It also copies the contents of several kickstart support sub-directories (game-debs* and local-debs*). This is a primitive way to serve the bulk of the packages locally while retrieving everything else from the mirrors. You need to change the URLs in the pre-load debs section to the "pool" sub-directories of the mounted ISOs in "./tftp/isomnt/".<br />
<br />
Because loading this many debs can run a root volume out of space, the $game_debs variable can be used to prevent game packages from being retrieved. Normally you should have at least a 20GB root (/) volume although it could be made smaller with some experimentation. An alternative to this method would be a full deb-mirror or a large caching proxy.<br />
<br />
Set the OpenVPN variable $openvpnurl to the Internet URL of your administration system or the firewall it's behind, and $openvpnserver to the hostname of your administration system (the administration system itself can keep the same values since it won't be connecting to itself).<br />
<br />
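For example (names invented):<br />
<br />
openvpnurl="support.example.com"   # public address of the administration system or its firewall<br />
openvpnserver="adminbox"           # hostname of the administration system<br />
<br />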
Basic usage starts with netbooting the client system. Some have to be set to netboot in the BIOS and some have a hotkey you can press at POST to access a boot selection menu. The system then obtains an address and BOOTP information from the DHCP server. It then loads pxelinux.0 from the TFTP server which will in turn load vesamenu.c32 which displays the "Netboot main menu". Select Ubuntu from the list and look for the Ubuntu 10.04 Minimal CD KS entries. Select the one for your architecture and press the Tab key to edit the kernel boot line. Set any kernel parameters you want to be added to the default Grub2 configuration after the double dash (--), like "nomodeset". Set the hostname and domain values for the target as these are used in several places for bug workarounds and configurations. Then press Enter. The installer should boot. If nothing happens when you press Enter and you are returned to the Ubuntu boot listing menu, verify the ISOs are mounted on the server then try again (you will need to edit the entry again).<br />
<br />
If there are no problems then you will be asked only two questions. The first is drive partitioning. This can be automated but my client systems are too different to do so. The next question will be the administrator password. After that it will execute the post-install and late-command scripts then prompt you to reboot. Just hit the Enter key when it does, as Ctrl-Alt-Delete will prevent the installer from properly finishing the installation (it's not quite done when it says it is). Full installation will take 2-3 hours depending on debug settings, availability of local debs, and Internet speeds.<br />
<br />
In case of problems see the TODO document which has a troubleshooting section. The only problems I've had installing were missing drivers or bugs in the installer (especially with encrypted drives - see the TODO). My Dell Inspiron 11z, which has an Atheros AR8132/L1c Ethernet device, wasn't supported by the kernel the minimal CD was using. To work around it I made a second network connection with an old Linksys USB100TX. The Atheros did the netboot (the Linksys does not have the capability) but the installer only saw the Linksys afterwards and had no problems using it (other than it being slow).<br />
<br />
I welcome comments and suggestions (other than my package choices and blog color scheme :D).jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com0tag:blogger.com,1999:blog-8230692678867105904.post-82572621125064522902011-11-25T17:50:00.025-05:002011-12-04T18:03:32.858-05:00Haphazard proxy support in Linux programs<p>Some of my clients require Internet content filtering on computers their kids are using. The solution to that is <a href="http://dansguardian.org">DansGuardian</a>. While it has many problems there really isn't a better F/OSS alternative. Its development has been stagnant for years but recently a new maintainer joined the project so submitted patches are being applied to fix bugs and add features (like system group integration).</p><p>DansGuardian requires a proxy. The common options are TinyProxy and Squid. TinyProxy has a few annoying bugs so I use Squid with my clients. One challenge with content filtering is preventing the proxy from being bypassed. The <a href="http://contentfilter.futuragts.com/wiki/doku.php?id=two_configuration_families">two solutions</a> are transparent interception or an explicit-proxy with dropping of connections that aren't destined for the proxy ports.</p><p>With a transparent proxy all outgoing connections are routed via iptables rules to DansGuardian regardless of the client settings. While this simplifies deployment by eliminating client configuration it also prevents using different content filtering levels on a per-user basis as it masks the source port of the connection. Without the source port the associated user can't be identified. Since the systems I maintain have a variety of users within the same household and thus different filtering requirements, this doesn't meet their needs.</p><p>The alternative method is to use iptables rules that drop connections that aren't destined for the DansGuardian. Here are the nat rules that I use:</p><p><code>*nat<br />
:PREROUTING ACCEPT<br />
:POSTROUTING ACCEPT<br />
:OUTPUT ACCEPT<br />
-A OUTPUT ! -o lo -p tcp -m owner ! --uid-owner proxy -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -m tcp --dport 80 -j REDIRECT --to-ports 8090<br />
-A OUTPUT ! -o lo -p tcp -m owner ! --uid-owner proxy -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -m tcp --dport 443 -j REDIRECT --to-ports 8090<br />
-A OUTPUT ! -o lo -p tcp -m owner ! --uid-owner proxy -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -m tcp --dport 21 -j REDIRECT --to-ports 8090<br />
-A OUTPUT -p tcp -m tcp --dport 3128 -m owner ! --uid-owner dansguardian -m owner ! --uid-owner root -m owner ! --uid-owner clamav -m owner ! --uid-owner administrator -j REDIRECT --to-ports 8080<br />
COMMIT<br />
</code></p><p>Fairly simple but note that I'm not dropping the packets. Any TCP connection destined for ports 80 (HTTP), 443 (HTTPS), or 21 (FTP) is rerouted to port 8090. Some accounts are excluded to prevent false-positive blocking by DansGuardian.</p><p>DansGuardian is using port 8080 (and connects to Squid on 3128). So what is 8090? It's an Apache server. One of the problems with programs that aren't configured to use the proxy is that the users won't know why their connections are failing. The web site, known as a <a href="http://contentfilter.futuragts.com/wiki/doku.php?id=network_billboard&DokuWiki=07cd76662d9eed573eaa60f7cb0b0e3d">network billboard</a>, displays a page that informs them that their programs need to be configured to use the proxy and how to do it. This is much friendlier than just dropping the packets. DansGuardian uses <a href="http://manpages.ubuntu.com/manpages/lucid/man8/ident2.8.html">ident2</a> to identify the user that is the source of the connection and applies the filtering rules specific to the filter group they are assigned to.</p><p>This configuration works very well with web browsers. Most use the system proxy settings through gconf on Gnome. Some need manual configuration so I created default configuration files and put them in /etc/skel so that new user accounts have them at creation. Unfortunately, many other programs rely on environment variables to determine the proxy address and Ubuntu's proxy configuration tool (gnome-network-properties) has a <a href="https://bugs.launchpad.net/ubuntu/+source/gnome-control-center/+bug/494373">really stupid bug</a> so they aren't set correctly. Some are set in bash in terminal windows but not in the session so any graphical program that doesn't use gconf fails to access the proxy correctly. It's easy to demonstrate. Open a terminal window and enter:</p><p><code>tail -f ~/.xsession-errors</code></p><p>Then create a custom application launcher in the panel and enter "printenv" for the command. Then just click it and check the output from tail. On my system, variables for "HTTP_PROXY" and the like aren't present. I <a href="http://www.mediafire.com/?lyl3062j12goa2d">created a fix</a> for this. Just extract the file, add it to the end of ~/.profile, and relogin. Run the tail/printenv commands again with a proxy set in System>Preferences>Network Proxy. Add this fix to /etc/skel/.profile to use it as the default for new user accounts.</p><p>Even with this fix it is surprising how many Internet-using programs don't support proxies correctly. I tested every streaming media player I could find and a few other programs and here are the results with my systems (Ubuntu 10.04 Lucid Lynx i386 and amd64):</p><p><a href="http://www.clementine-player.org">Clementine</a> (0.7.1): Neither Last.fm nor SomaFM works. Jamendo lists songs but doesn't play them, though this is due to Ogg problems at Jamendo. Unlike other players Clementine's plug-in for Jamendo is not configurable for MP3 so I couldn't work around it. Magnatune and Icecast work.</p><p><a href="http://projects.gnome.org/rhythmbox/">Rhythmbox</a> (0.13.1): Jamendo failed to work. Magnatune was really slow to load.</p><p><a href="http://www.getmiro.com">Miro</a> (4.0.3-82931155): Could find video podcasts but not download them (except VODO which uses BitTorrent). Its integrated web browser would always show the network bulletin for any other link in the side panel.</p><p><a href="http://banshee.fm">Banshee</a> (2.0.1): Internet Archive links work. 
Live365.com and xiph.org show results but nothing plays (I can copy the xiph links to VLC and they play). Miro Guide works (unlike Miro) but likes to freeze. Amazon MP3 Store, Jamendo, Magnatune (both extensions), RealRadios.com, and SHOUTcast.com extensions fail to load. Last.fm would log in but not much else. I noticed that according to ~/.xsession-errors Banshee is an exceptional media player.</p><p><a href="http://sites.google.com/site/kdekorte2/gnomemplayer">Gnome MPlayer</a> (0.9.9.2): Nothing fancy but it functioned with the streams I tried.</p><p><a href="http://www.videolan.org/vlc/">VLC</a> (1.0.6): About the same as Gnome MPlayer. A lot of complaints about some playlists like <a href="http://www.wazee.org/128.pls">radio.wazee</a> when it encounters unavailable entries. Needs a less ugly way to handle error messages with playlists of Internet streams since they are usually just alternate servers.</p><p><a href="http://www.google.com/earth/download/ge/agree.html">Google Earth</a> (6.1): It would connect to the DB and you could navigate the worlds but none of the Panoramio pictures would show. Wikipedia entries wouldn't show after being enabled until the app was restarted. Even then, clicking on "Full Article" resulted in the network bulletin page being shown (webkit?). Changing the preferences to use an external browser is an adequate workaround.</p><p><a href="http://projects.gnome.org/totem/">Totem</a> (2.30.2): Functioned but was picky about some streams (radio.wazee).</p><p><a href="http://gpodder.org">gPodder</a> (2.2): Useless.</p><p><a href="http://www.hulu.com/labs/hulu-desktop-linux">Hulu beta</a> functions but is mostly relying on Flash.</p><p><a href="http://www.skype.com/intl/en/get-skype/on-your-computer/linux/">Skype beta</a> (2.2.0.35): Connected to their network without problems and I successfully called their sound testing service.</p><p>Sun Java Plug-in (1.6.0_26 in Firefox 3.6.24): Useless with a proxy. Even without a proxy you have to work around IPv6 bugs (<a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=618725">Debian bug #618725</a>). With that working the <a href="http://java.com/en/download/testjava.jsp">online test</a> usually fails and I've found that <a href="http://word-games.pogo.com/games/boggle-bash">Pogo.com Boggle Bash</a> is a better test. Manually setting the proxy with jcontrol doesn't have any effect. Debian is <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=646524">dropping the plug-in</a> so it may not matter.</p><p><a href="http://www.frostwire.com">FrostWire</a> (5.1.5): Useless with a proxy. It uses Java so not surprising. It has its own proxy settings but it couldn't connect to anything even with manual settings.</p><p>Update - Added a few more tests:</p><p><a href="http://www.desura.com">Desura</a> (110.22): Could login and see items I had ordered (free demos) but could not download them for installation or show any web pages. Some of the links on the menu bar opened in Firefox but showed the network bulletin. Apparently it was resolving the links (maybe querying their servers) to localhost:8090 and then sending that to the default browser even though Firefox could access the Internet through the proxy without problems.</p><p><a href="http://www.konqueror.org">Konqueror</a> (4.4.5): No problems (KHTML).</p><p><a href="http://projects.gnome.org/epiphany/">Epiphany</a> (2.30.2): No problems (webkit).</p><p><a href="http://xmoto.tuxfamily.org">X-Moto</a> (0.5.9): No problems. 
Can use environment variables, manually-specified proxy, or SOCKS proxy.</p><p><a href="http://www.3ds.com/products/draftsight/overview/">DraftSight</a> (Beta V1R1.3): Couldn't connect to the registration server initially. The browser in the Home panel showed the network bulletin. Setting the proxy manually in "Tools>Options>System Options>General>Proxy server settings" and restarting allowed the registration to function but not the Home panel browser. I found that reapplying the proxy settings (without changing anything) then right-clicking the Home panel and reloading it fixed the problem for that session but it would reoccur if DraftSight was restarted.</p><p>Clarification: My proxy configuration doesn't use authentication or <a href="http://en.wikipedia.org/wiki/SOCKS">SOCKS</a>. My bug work-around script supports the environment variables for authentication but I didn't test it.</p><p>Update 20111202: I removed Sun Java because of the security problems and switched to OpenJDK/IcedTea6 (1.9.10) but it didn't do any better. I did try FrostWire again with a <a href="http://docs.oracle.com/javase/6/docs/technotes/guides/net/proxies.html">manually specified proxy</a> but it had no effect. I did come across an interesting Java library for proxy detection named <a href="http://code.google.com/p/proxy-vole/">proxy-vole</a> but it won't solve my immediate problem.</p><p>Update 20111204: Corrected the DansGuardian/Squid port usages mentioned in the article and added a forgotten DansGuardian anti-bypass iptables rule. They now match my test environment.</p><p>I think part of the problem is that the developers test against a proxy and if the program works then it's assumed to be proxy-compatible. That can be misleading, especially when multiple components are involved, as some may use the proxy while others access the network directly (Miro being a prime example). Adding some iptables rules to drop anything bypassing the proxy would close that testing hole.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com2tag:blogger.com,1999:blog-8230692678867105904.post-60816765323300768582011-11-23T16:14:00.005-05:002011-11-24T01:53:26.284-05:00Documentation standards for commands<p>Here are some references for shell script developers, man page creators, README writers, etc. While documentation styles are a bit haphazard and vary with OS and programming language, there are some standards.</p><p>For <a href="http://en.wikipedia.org/wiki/Man_page">man pages</a> see man-pages(7). What does that mean? You open a terminal window then type:</p><code>man 7 man-pages</code><p>The GNU project has <a href="http://www.gnu.org/prep/standards/html_node/GNU-Manuals.html#GNU-Manuals">some guidelines</a> on writing software manuals. They recommend using <a href="http://en.wikipedia.org/wiki/Texinfo">Texinfo</a> to create them.</p><p>The Debian Policy Manual says where the different documentation files <a href="http://www.debian.org/doc/debian-policy/ch-docs.html">should be located</a> but not what they should look like.</p><p>The most detailed standard I've found is the Open Group Base Specifications <a href="http://pubs.opengroup.org/onlinepubs/7908799/xbd/utilconv.html">utility conventions</a> and <a href="http://pubs.opengroup.org/onlinepubs/000095399/frontmatter/typographics.html">typographical conventions</a>.</p><p>I'm not going to admit to following these but please post any other IT technical writing style guides you know of. 
:D</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com2tag:blogger.com,1999:blog-8230692678867105904.post-3832558401012694072011-09-21T01:17:00.003-04:002011-09-21T01:38:51.964-04:00Extracting EML files<p><a href="http://en.wikipedia.org/wiki/E-mail#Filename_extensions">EML files</a> are a problem for some of my users on Ubuntu. They receive these as email attachments but can only view them as text (usually in gedit) even if they contain pictures. The senders are probably using Outlook Express or a related mail application to attach them. While some non-Microsoft mail clients can open them properly, this is a hassle for my users as they all use web mail. There is a command-line tool, munpack, that will extract non-text objects automatically (part of the mpack package in Ubuntu/Debian). To make it easier for them I wrote a little script that integrates munpack with their file manager via a mime type association. To use it, download <a href="http://www.mediafire.com/?gx833mhuly4xzg8">munpack_eml</a> and extract the files. Put munpack_eml in /usr/local/bin with root ownership and u=rwx,go=rx (0755) permissions. Put munpack_eml.desktop in /usr/local/share/applications with root ownership and u=rw,go=r (0644) permissions. Then right-click on any *.eml file from your file manager and you should see an option to extract the contents with munpack.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com3tag:blogger.com,1999:blog-8230692678867105904.post-51535252882688387412011-08-29T18:11:00.006-04:002011-08-29T19:04:44.018-04:00Simple off-site backup of a MD RAID 1 system<p>Standard backup tools like <a href="http://backuppc.sourceforge.net">BackupPC</a> are great for backing up moderate amounts of user data but they can be impractical with huge data stores such as multi-terabyte RAID arrays as they need a backup store that is larger than the source data. My simple solution is to clone the array with another drive and store it off-site.</p><p>For this to work I had to categorize the data into smaller dynamic files (like documents) and larger static files (videos). The smaller files are backed up daily with BackupPC. The larger files are not backed up. Both are stored on a RAID 1 (mirror) array for redundancy in case of drive failure. On my server BackupPC uses a different, smaller RAID 1 array for a backup store. Since it is only backing up part of the data it doesn't have to be the same size as the main array. For backing up the larger/static files (and everything else) I simply add another drive to the main array, let it sync, then remove it and store it off-site.</p><p>Ideally this system would use hot-swap but I don't have removable bays so I have to power off the server each time. The rest of the procedure is relatively easy. With a RAID 1 array I have two drives (sda, sdb) and the added drive may show up as sdc. I say "may" because Ubuntu uses UUIDs for drive mappings and the actual device assignments may change. I always check with:</p><code>cat /proc/mdstat</code><p>to verify what devices are being used. I also check the partition sizes of all drives using "fdisk -l" and make sure the new drive has the same size partitions as the original RAID members. The partitions need to be of type fd "Linux raid autodetect" but no formatting with mkfs is necessary. Next I grow each RAID 1 MD device from 2 to 3 devices. 
For example:</p><code>mdadm -G -n 3 /dev/md0</code><p>This just tells the kernel that the array will now have three devices but does not assign another device to it. To allocate the device:</p><code>mdadm -a /dev/md0 /dev/sdc1</code><p>Resync should begin immediately. To monitor, I just use "cat /proc/mdstat" but the kernel will also send status messages to the console. After resynching, I disable the backup device by failing it:</p><code>mdadm -f /dev/md0 /dev/sdc1</code><p>This results in the RAID degradation warnings to be emailed to root. Next I remove it:</p><code>mdadm -r /dev/md0 /dev/sdc1</code><p>Finally, I shrink the array back to two devices:</p><code>mdadm -G -n 2 /dev/md0</code><p>This works well for my simple server setup. Obviously some scripting could be used to automate it. While this works well for a 2-drive RAID 1 array, it doesn't scale well with a larger number of drives or other RAID types.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com3tag:blogger.com,1999:blog-8230692678867105904.post-42869800928742906822011-01-17T17:36:00.017-05:002011-01-24T21:47:58.829-05:00Expanding Ubuntu Recovery Mode<p>Recovery Mode is a text-based interface to a few quick repair tools that is installed by default with most Ubuntu releases and derivatives. I wrote <a href="http://www.mediafire.com/?l5s9qbjcscxk7tb">a few add-ons</a> for it that increase its usefulness in remote repair and diagnostics situations. These were developed and tested on Ubuntu 10.04 (Lucid Lynx).</p><p>Starting Ubuntu in Recovery Mode (aka. <a href="https://wiki.ubuntu.com/FriendlyRecovery">Friendly Recovery</a>) is relatively easy. Just hold down the shift key after the <a href="http://en.wikipedia.org/wiki/Power-on_self-test">BIOS POST</a> to get Grub2 to show its menu, then just select the kernel with the "recovery" option. Also note the memtest86+ option which is useful for identifying bad RAM.</p><p>Adding on to Recovery Mode is relatively simple. At its heart is a shell script, "/usr/share/recovery-mode/recovery-menu", that is started at the end of the single mode (runlevel S) boot. It looks through the options subdirectory and starts every script it finds, passing it a parameter of "test". It looks for a return status of 0 and the description of the script on stdout. Scripts with valid responses are added together and shown in a menu listing using the whiptail dialogger. The user selects one from the menu to execute it.</p><p>My additions are more informative than corrective. The intention is to help with diagnostics when dealing with a remote non-technical client. They are also useful for beginners who lack command-line experience and simply don't know where to look for system status information.</p><p>Many of my scripts check their respective system configuration and return a non-zero status if required executables are not installed or configured. This keeps the menu from getting cluttered. For example, the sensors script checks for output from the sensors command. Lack of such indicates that the hardware sensors haven't been configured with sensors-detect or the required modules haven't been added to /etc/modules. When this happens it does an exit 1 when started with the test parameter. The ddclient script looks for run_daemon="true" in /etc/default/ddclient and the presence of the ddclient executable. The ssh script looks for the sshd process and its description changes if it is found or not. 
If you write your own, the only limitation to keep in mind is that the description returned should be 45 characters or less as longer ones will corrupt the whiptail display.</p><p>Some of the scripts deserve special attention:</p><p>shallablud: works with my <a href="http://jhansonxi.blogspot.com/2010/11/pair-of-utilities-for-dansguardian.html">shall-bl-update</a> v1.3 or later. It forces an update to the Shalla blacklists for DansGuardian.</p><p>lynx: requires the Lynx text browser. It does a su to the default admin member (the first one listed in the admin group) before starting. It defaults to the <a href="http://checkip.dyndns.com">DynDNS.com check IP page</a>. I used Lynx because it has options for lockdown (prevent shell escapes, etc.) that the others don't offer.</p><p>wicd: requires wicd-curses. While the netroot script already provides network activation before switching to a shell, it just starts dhclient to get an IP address and nothing else. This was something <a href="https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/244885">I requested back in Hardy</a>. It's better than nothing but is rather useless if you only have a wireless connection. <a href="http://wicd.sourceforge.net">Wicd</a> solves the problem but creates another - it conflicts with Network Manager. Luckily the packages themselves <a href="https://bugs.launchpad.net/ubuntu/+source/wicd/+bug/555403">don't conflict on Lucid</a> but the daemons do. The script will stop Network Manager before starting wicd-curses (which starts the wicd daemon). To keep this from happening when starting wicd from a root shell you need to either stop Network Manager first or modify the Upstart job configuration to keep it from starting in recovery mode (runlevel S). The conf file also needs to be diverted by dpkg to keep it from being overwritten on updates (and reverting the changes). The commands to do this are:</p><p><code>dpkg-divert --rename --divert /etc/init/network-manager.conf.original /etc/init/network-manager.conf<br>cp /etc/init/network-manager.conf.original /etc/init/network-manager.conf<br>sed -i 's/\(.*and started dbus\)\().*\)/\1\n\t and runlevel [!S]\2/' /etc/init/network-manager.conf</code></p><p>You need to either add a sudo in front of these or open a root terminal with "sudo su". The divert tells dpkg to rename the file and always redirect new installations to "network-manager.conf.original". The file is then copied back to use as a template. The sed expression then adds a condition to not start in runlevel S.</p><p>This only solves half of the problem. The Wicd daemon still needs to be prevented from starting during regular operation (runlevel 2) unless you plan to use it instead of Network Manager. Wicd's configuration hasn't been changed to Upstart yet so it's still using init scripts. To disable it do:</p><p><code>mv /etc/rc2.d/S20wicd /etc/rc2.d/K80wicd</code></p><p>This by itself is not enough. If wicd-gtk is installed, it will start when the desktop loads and start the daemon if it is not active. You need to purge it with aptitude or apt-get. In addition, another function somewhere will also start the wicd daemon. The only option I've found is to change the wicd executable, which is just a script that starts the daemon with Python, to not function unless the runlevel is single mode. 
These commands will make the change:</p><p><code> dpkg-divert --rename --divert /usr/sbin/wicd.original /usr/sbin/wicd<br>cp /usr/sbin/wicd.original /usr/sbin/wicd<br>sed -i 's/\([[:space:]]*exec[[:space:]]\+.*\)/[ \"$RUNLEVEL\" = \"S\" ] \&\& \1/' /usr/sbin/wicd</code></p><p>If you make this change you won't have to disable the init script. You will also have to fix the AppArmor profile for dhclient so that wicd can use it (<a href="https://bugs.launchpad.net/ubuntu/+source/wicd/+bug/588635">bug #588635</a>). Just add the text in the report before the entry for Network Manager.</p><p>One option that isn't listed in the menu is "fsck". This is easy to fix as the script just needs execute permission (<a href="https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/566200">bug #566200</a>).</p><p>Currently the "resume" option doesn't function (<a href="https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/651782">bug #651782</a>).</p><p>If you want to prevent the "root" and "netroot" options from providing an uncontested root prompt try my <a href="http://jhansonxi.blogspot.com/2010/12/slightly-less-open-ubuntu-recovery-mode.html">rootlock</a>.</p><p><blockquote></blockquote>Consider a theoretical example of how this all works with a remote user. They have a problem with X not starting and contact you. They are a considerable distance away and don't have time to ship their PC to you for repair. The system is bootable and they have high-speed Internet so remote access is possible. You tell them how to enter Recovery Mode and how to start wicd. It automatically gets an IP from a wired connection but if they are using wireless they have to select an AP from whatever wicd finds. If they are using Network Manager and their normal wireless connection is encrypted, you will have to set it up beforehand with wicd as SSIDs and keys aren't shared with Network Manager (or the root account which is the one being used here). If they have a dynamic WAN IP address then you have them start ddclient (which also needs to have been configured) or start Lynx and read to you the WAN IP from DynDNS.com. Then they can start sshd. At this point you should be able to access it remotely over SSH assuming that any intervening firewall/NAT routers are forwarding the correct ports. Obviously you should be using key-based authentication with SSH, not passwords. If you can't access it remotely you can still have them perform updates with the dpkg option (<a href="https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/452222">also an upgrade</a>), fix the X configuration with failsafeX, or read you the root mail, SMART drive status, and sensor readings (if configured).</p><p>Obviously many problems can't be fixed this way but if it saves you a road trip or two it's worth it.</p><p>Update: I filed <a href="https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/706145">bug #706145</a> to get these into Ubuntu. Following the normal submit/reject/resubmit/ignore cycle it should be in the repositories within a few years.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com2tag:blogger.com,1999:blog-8230692678867105904.post-52040204958853446752010-12-24T13:34:00.014-05:002010-12-24T19:08:08.531-05:00A slightly less open Ubuntu recovery mode<p>Ubuntu <a href="https://wiki.ubuntu.com/RecoveryMode">recovery mode</a> is a basic boot configuration for repairing a broken system. 
In this mode it skips most configuration files and daemons in order to achieve a functioning root prompt. For the security-conscious administrator this itself is a problem.</p><p>There have been <a href="https://bugs.launchpad.net/ubuntu/+source/sysvinit/+bug/21994">complaints</a> about unchallenged root access in recovery mode. Ubuntu uses <a href="https://help.ubuntu.com/community/RootSudo">sudo</a> for root access and the root account is disabled via a <a href="http://manpages.ubuntu.com/manpages/lucid/en/man5/shadow.5.html">"*" password</a>. If you forget the passwords of the admins (any user account in the admin group), recovery mode makes it easy to reset them.</p><p>Originally, recovery mode went straight to a root prompt which wasn't useful to non-technical types. With the addition of <a href="https://wiki.ubuntu.com/FriendlyRecovery">Friendly Recovery</a>, a menu is displayed with a list of repair options. The menu is just a <a href="http://manpages.ubuntu.com/manpages/lucid/en/man1/whiptail.1.html">Whiptail</a> selection dialog driven by the "/usr/share/recovery-mode/recovery-menu" script which queries other scripts in the "./options" subdirectory. The sub-scripts provide simple repair options like failsafeX, apt-get clean, and update-grub. These are useful to non-technical types for attempting simple repairs. They won't fix complicated problems like <a href="https://bugs.launchpad.net/ubuntu/+source/gtk+2.0/+bug/693737">gdm crash loops</a> but may save the administrator an on-site visit or two. The root and netroot scripts provide root shell access which is where security becomes a concern, not just because of <a href="http://en.wikipedia.org/wiki/Black_hat">black hats</a>, but also fools blindly using repair commands like "<a href="http://en.wikipedia.org/wiki/Fork_bomb">:(){ :|:& };:</a>" and "<a href="https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/174283">rm -rf /</a>".</p><p>There are several options for limiting root access.</p><p>1. Set a <a href="http://ubuntuforums.org/showthread.php?t=1369019">grub password</a> that prevents running recovery mode or editing menu entries. This means the administrator has to make any repairs. If the network is failing then that means on-site.</p><p>2. Set a root password with "sudo passwd". The password will then be required to access the shell from the Friendly Recovery screen but this also allows direct root logins during normal operation (although you might not care about that).</p><p>3. Disable the shell options in Friendly Recovery. These commands remove the options from the menu and prevent them from reappearing if the friendly-recovery package is updated. This allows users to run the automated commands but makes it more difficult for the administrator to get root access in recovery mode. You'll need to use sudo before these or start a root shell with "sudo su" first.</p><p><code>mkdir /usr/share/recovery-mode/disabled<br>dpkg-divert --divert /usr/share/recovery-mode/disabled/root \<br>--rename /usr/share/recovery-mode/options/root<br>dpkg-divert --divert /usr/share/recovery-mode/disabled/netroot \<br>--rename /usr/share/recovery-mode/options/netroot</code></p><p>4. Set a root password only in recovery mode. To do this I wrote <a href="http://www.mediafire.com/?hhrcbp6fo725f6s">rootlock.conf</a>. This is a job configuration for <a href="http://manpages.ubuntu.com/manpages/lucid/man5/init.5.html">Upstart</a> that is added to the "/etc/init" directory (with root:root ownership and -rw-r--r-- permissions). 
It is triggered by <a href="http://manpages.ubuntu.com/manpages/lucid/man7/runlevel.7.html">runlevel</a> changes. Within is a script that, when the runlevel is "S" (single) mode, which indicates recovery mode, copies the password from the first admin group member to the root account in <a href="http://manpages.ubuntu.com/manpages/lucid/en/man5/shadow.5.html">/etc/shadow</a>. In runlevels 2-5, it changes the root password back to "*". This allows root logins from the Friendly Recovery menu if the password of the first admin is entered. In normal operation direct root login is disabled. This makes a lost admin password more difficult to fix but for a capable administrator that is only a minor annoyance.</p><p>Don't use it if you have set a root password previously because you want a normal root login available. It will be disabled by this job.</p><p>I've tested this on Ubuntu 10.04 (Lucid Lynx) extensively and it seems robust but I'm awaiting feedback on the ubuntu-devel mailing list. Check back for updates.</p><p>Disabling unchallenged root logins in recovery mode will not keep a knowledgeable hacker out. That is only possible if you use full-disk encryption like <a href="http://en.wikipedia.org/wiki/LUKS">LUKS/dm-crypt</a> for which only the administrator has the key. This will prevent the user from booting with a LiveCD and editing shadow directly but will require the administrator to start the system every time it is powered on or rebooted.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com3tag:blogger.com,1999:blog-8230692678867105904.post-63009688470586857762010-12-16T18:03:00.004-05:002010-12-23T01:13:06.888-05:00quote-count: A debugging tool for shell scripts<p>I've been doing a lot of shell scripting lately with <a href="http://gondor.apana.org.au/~herbert/dash/">Dash</a> and <a href="http://tiswww.case.edu/php/chet/bash/bashtop.html">Bash</a>. Complicated scripts with lots of text handling make debugging difficult, especially when they are being used in sub-shells which obfuscate line numbers in error messages. One of my more common mistakes is an unmatched quote. These can be rather difficult to find so I wrote <a href="http://www.mediafire.com/?rbvk9bhuvo9s77o">quote-count</a>, a simple analysis tool that counts quotes in lines.</p><p>It just accepts a single filename as a parameter, counts single, double, and back quotes on each line, and prints their totals. It prints out a warning if any of the counts is odd-numbered, which may indicate a mismatched quote. It also warns if the line is a comment so you can easily ignore those. It isn't brilliant as it doesn't handle escaped newlines, in-line comments, escaped quotes, or quotes encapsulated within other quotes. It could be enhanced to handle these cases but it's already saved me a lot of debugging time as is. 
The output from running it on itself looks like this:</p><code>1 Q:0 DQ:0 BQ:0 # COMMENT <br>2 Q:0 DQ:0 BQ:0 # COMMENT <br>3 Q:0 DQ:0 BQ:0 # COMMENT <br>4 Q:0 DQ:0 BQ:0 # COMMENT <br>5 Q:0 DQ:0 BQ:0 # COMMENT <br>6 Q:0 DQ:0 BQ:0 # COMMENT <br>7 Q:0 DQ:0 BQ:0 # COMMENT <br>8 Q:0 DQ:0 BQ:0 <br>9 Q:0 DQ:8 BQ:0 <br>10 Q:0 DQ:2 BQ:0 <br>11 Q:0 DQ:2 BQ:0 <br>12 Q:0 DQ:2 BQ:0 <br>13 Q:0 DQ:2 BQ:0 <br>14 Q:0 DQ:2 BQ:0 <br>15 Q:0 DQ:0 BQ:0 <br>16 Q:0 DQ:0 BQ:0 <br>17 Q:0 DQ:0 BQ:0 <br>18 Q:0 DQ:0 BQ:0 <br>19 Q:0 DQ:0 BQ:0 <br>20 Q:0 DQ:0 BQ:0 <br>21 Q:0 DQ:2 BQ:0 <br>22 Q:4 DQ:5 BQ:1 # ODD<br>23 Q:0 DQ:2 BQ:0 <br>24 Q:5 DQ:4 BQ:1 # ODD<br>25 Q:0 DQ:2 BQ:0 <br>26 Q:5 DQ:3 BQ:2 # ODD<br>27 Q:0 DQ:2 BQ:0 <br>28 Q:0 DQ:4 BQ:0 <br>29 Q:0 DQ:2 BQ:0 <br>30 Q:0 DQ:2 BQ:0 <br>31 Q:0 DQ:0 BQ:0 <br>32 Q:0 DQ:0 BQ:0 <br>33 Q:0 DQ:0 BQ:0 </code><p>I've tested it with both Dash and Bash on Ubuntu 9.10 and Mandriva 2010.1 so it should work with most systems.</p><p>Another typo I occasionally encounter is escaped whitespace at the end of a line. The intent is always to escape a newline but sometimes in my editing I end up with a space or tab after the backslash. These can easily be found with grep:</p><code>grep -E -r -n '\\[[:space:]]+$' <filename></code><p>I wanted to add this check to quote-count v1.0 but found that the "while read" loop removes everything after the trailing backslash. Richard Bos sent me a modified version that included the check as a pre-processor utilizing a simple grep trick. I added it in although it used an array which Dash doesn't support.</p><p>UPDATE: v1.2 released and link updated. I found some bugs in v1.1 with the TEW check. I also cleaned up the report output a bit.</p><p>Reading through the quote-count report for my larger scripts was tedious so I wrote <a href="http://www.mediafire.com/?hbq80zw8osbyy97">quote-count-query</a> which compares the original source file with the quote-count report and shows the affected lines with two preceding and following lines for context.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com0tag:blogger.com,1999:blog-8230692678867105904.post-11601072232727203092010-11-24T00:13:00.009-05:002010-11-27T15:52:14.635-05:00Two more utilities for DansGuardian Users<p>I reduced the DansGuardian user account and blacklist maintenance hassles with my <a href="http://jhansonxi.blogspot.com/2010/11/pair-of-utilities-for-dansguardian.html">previous two utilities</a> but while working on whitelisting I found the need for a few more.</p><p>In DansGuardian (DG) terms a blacklist bans something, a greylist allows something (overrides blacklisting) but still filters it, and an exceptionlist allows something without filtering (overriding greylists and blacklists). The "something" can be URLs, IP addresses, server names, etc., depending upon the specific list type. Blacklisting a site is easy but blacklisting a specific type of content is very difficult and error-prone. It works the same way as anti-malware utility definitions - if the undesirable items are on the list, and they match a particular requested target, then it's blocked. If not, then it gets through. It's a big Internet out there and trying to block all the bad is rather difficult. "Bad" is also relative and what is bad for one person/religious group/company/government may not be bad for another. 
Whitelisting has the opposite problems in that you gain strict control over what is available but trying to predict where the user wants to go, determining if that is a safe destination, and maintaining the lists is also difficult.</p><p>I found that I needed to use both blacklisting and whitelisting. I use whitelisting with younger children and blacklisting for older. Older children won't put up with strict constraints and will either figure out how to bypass them or simply go somewhere else to browse the Internet. Younger children are easier to keep happy but you still have to spend time figuring out all the web sites they will want access to, preferably with the initial configuration so they're not whining every five minutes about another toy/game/whatever site they can't access.</p><p>With DG a "whitelist" configuration is basically a blacklisting of all sites with a "**" in the bannedsitelist file with entries in greylist and exceptionlist files to bypass it. The exceptionlist file entries will enable site access but this is not what you want for allowing a user to browse a particular site because it disables all filtering. Use greylist files instead. This way if there is an offensive part of a site that you didn't know about (or it gets defaced by black hats) then you still have the filters to rely on. The exception lists are useful for sites that are not normally browsed but may trigger the filters inadvertently such as Linux distro repositories using http.</p><p>One of the problems with whitelisting is that the user won't necessarily know where they can go on the Internet. To solve this problem you need an index page of some sort. This is the problem I encountered when creating my greylists and I came up with a solution.</p><p>I didn't want to maintain an index separately from the greylists so I figured out a way to embed the data in the lists. DG recognizes a # in the list as a comment. I added a comment at the end of each list entry with a Wiki-style link after it. This isn't all that unusual as Debian/Ubuntu did something similar with the menu.lst file in Grub. The comment hides data that isn't relevant to DG but the defined format allows extraction of the data to create an index. Soon after I started adding the links to the list entries I figured out two things - it was a lot of typing and was going to be a very big index. To organize the index better I added a category tag on the end which could be used in the index. The final format is:<br /><br /><code><exception link> #[<URL><space><label>][Category:<category text>]</code><br /><br />The brackets are required characters. The parsing is somewhat whitespace tolerant but in the Category tag don't leave any spaces between the colon and the category text (sed and regex expressions can be tedious). Example:<br /><br /><code>gutenberg.org # [http://www.gutenberg.org Project Gutenberg][Category:Books]</code></p><p>To save some typing I wrote <a href="http://www.mediafire.com/?c787mn1l7b4rnq1">add-exceptionlist-url-comments</a> which creates a default URL comment. First it pads the end with tabs (up to 5) to keep it pretty. The default link is made by slapping an http protocol prefix on the exception entry. It then uses wget to try to fetch the default web page and scrape the page title to use as a default link label. This works for most pages and redirects but not those that are using a meta refresh. It finally adds an undefined category tag at the end. Anything in the list that starts with a # is ignored. 
Note that not every entry will need a link. Some sites you don't want may serve data to a site you want. A lot of USA government sites that are kid-specific will link to media on the main government sites which aren't of interest to kids and just clutter the index. Some web stores also use third-party search services which will need exceptions but not links. In many cases you'll want a link that points to a specific part of the site, not just the server root, so you'll have to edit the defaults.</p><p>To create the index page I wrote <a href="http://www.mediafire.com/?kr35t4t4afkmcle">exceptions-index-page-generator</a>. It looks for the bracket-formatted URLs in the input files. It also builds a list of category tags, assigning a default tag (defined in the script) to any that are missing. It then creates a basic html file with entries separated by category. If a category has more than a certain number of entries (default 5 as defined in the script) it makes two columns to reduce the page length. It doesn't try to normalize category names so they must match in the entries exactly in order to be combined. It ain't pretty but it works. These are both command-line utilities but are rather easy to use.</p><p>UPDATE: I updated exceptions-index-page-generator. Version 1.1 adds a category table of contents to the top of the page. It will also make two columns of these if the number of categories exceeds the column threshold.</p><p>You can use <a href="http://www.mediafire.com/?cdumka96g97iq5i">my greylists</a> to test with and as a base for your own lists for younger children. I haven't performed in-depth checking of these but they look relatively safe. Some of the entries may seem odd but they're intended to aid holiday gift buying. You will also notice that I used <a href="http://www.ascii.cl/htmlcodes.htm">html entity codes</a> in the labels for some punctuation as they didn't display correctly in Firefox.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com0tag:blogger.com,1999:blog-8230692678867105904.post-85880530896921182212010-11-15T21:21:00.028-05:002011-01-21T17:30:43.702-05:00A pair of utilities for DansGuardian users<p>Content filtering is a requirement of the home desktop system configuration build I'm working on. Young children are part of the client base so it's mandatory. <a href="http://dansguardian.org">DansGuardian</a> is basically the only free option available. It's a server daemon so it has command-line configuration only. Once it's running parents don't need to mess with the basic settings but they need to be able to set filtering controls for children without a lot of hassle. On Ubuntu it doesn't come with any blacklists but third-party lists are available. <a href="http://www.shallalist.de/licence.html">Shalla Secure Services</a> has one of the most comprehensive lists that's free for home use but installing and updating it is also a hassle. I wrote a pair of scripts to solve both of these problems.</p><p>There are a few options for a DansGuardian GUI. Some firewalls like <a href="http://www.smoothwall.org">SmoothWall</a> have plug-ins for it. Two popular stand-alone ones are DansGuardian-GUI from <a href="http://ubuntuce.com">Ubuntu CE</a> and <a href="https://launchpad.net/webstrict">WebStrict</a> from <a href="http://www.sabily.org">Sabily</a>. 
Unfortunately they both rely on <a href="https://banu.com/tinyproxy">Tinyproxy</a> which has a <a href="https://bugs.launchpad.net/ubuntu/+source/dansguardian/+bug/474475">bug with DansGuardian</a> that prevents many pages from loading. They also drag in <a href="http://firehol.sourceforge.net">FireHOL</a> which I don't need.</p><p>Since remote administration is a requirement for my desktop configuration I installed <a href="http://www.webmin.com">Webmin</a>. A plug-in is available, <a href="https://sourceforge.net/projects/dgwebminmodule/">DansGuardian Webmin Module</a>, which allows easier control than straight command-line methods including a semi-automatic configuration for <a href="http://contentfilter.futuragts.com/wiki/doku.php?id=group_configuration">multiple filter groups</a>. There's <a href="https://sourceforge.net/tracker/?func=detail&aid=2814496&group_id=51969&atid=465236">one bug</a> with the latter that I had to fix first, and the default DansGuardian binary location in the module's configuration was incorrect for Ubuntu (it's at /usr/sbin/dansguardian), but that's all.</p><p>When working with multiple filter groups the goal is to have DansGuardian automatically apply the correct filter based on the user account. Correlating user port activity to filter groups is tricky. Since my targeted desktop systems are stand-alone and won't have multiple simultaneous users I chose the <a href="http://contentfilter.futuragts.com/wiki/doku.php?id=using_ident_for_user_identification">Ident method</a> using <a href="http://manpages.ubuntu.com/manpages/hardy/man8/ident2.8.html">Ident2</a>. I tried <a href="http://bisqwit.iki.fi/source/bidentd.html">Bisqwit's identd</a> (bidentd) but the version on Ubuntu 10.04 (Lucid Lynx) has a <a href="http://osdir.com/ml/debian-bugs-dist/2010-04/msg08975.html">nasty looping bug</a> that is triggered by local queries. Getting this to work only requires activating the ident authplugin and creating the filter groups.</p><p>While the module makes configuration easier for the admin, it's still not that friendly for a parent. The filter groups make it easy to set user restrictions based on group membership but DansGuardian filter groups are completely separate from system groups. They can only be changed from the command line or with the Webmin module. I wanted parents to be able to use the standard desktop user administration tool, users-admin (System > Administration > Users and Groups) to assign users to special DansGuardian groups that could then be converted to filter group memberships. There once was a patch for DansGuardian that integrated the two but it's not included upstream. So I came up with a system group naming scheme and wrote <a href="http://www.mediafire.com/?v8mo18krvcd6ne8">dg-filter-group-updater</a>, a GUI tool that automatically creates the filter group list (/etc/dansguardian/lists/filtergroupslist by default) from the system group membership. Installing it is easy. Just copy the script to "/usr/local/sbin" with root ownership and 755 (rwxr-xr-x) permissions. Download <a href="http://www.mediafire.com/?dinoaszcu91eugs">this desktop file</a> and put it in "/usr/local/share/applications" with root ownership and 644 (rw-r--r--) permissions which will cause a menu entry to appear in the System > Administration menu. 
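<br /><br />For reference, the whole install amounts to something like this (assuming you saved the two downloads under the names shown; adjust to taste):<br /><br /><code>sudo install -o root -g root -m 755 dg-filter-group-updater /usr/local/sbin/<br />sudo mkdir -p /usr/local/share/applications<br />sudo install -o root -g root -m 644 dg-filter-group-updater.desktop /usr/local/share/applications/</code><br /><br />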
This is for Gnome as it uses gksudo to get root access but you can convert it for KDE by changing the "gksudo" to "kdesudo" or "kdesu" then changing the "Categories" entry for KDE (look at other KDE desktop menu files in /usr/share/applications). For this script to be useful you have to set up the required system groups first and assign users.</p><p>DansGuardian references group filters by an index number. The first group is "filter1" which corresponds to the configuration file "dansguardianf1.conf" and is the default. Typically in a multi-group configuration this filter is set to disable Internet access with a "groupmode = 0" setting. By "Internet" I mean "http" as DansGuardian can't really help with "https" (TLS/SSL) or much else. The rest you have to block with firewall rules or a filtered DNS like <a href="http://www.opendns.com">OpenDNS</a>. The module's multiple group tool is the one named "Set Up Lists&Configs For Multiple Filter Groups" on its main page. Before using it, back up the "/etc/dansguardian" directory as this option only works once and then locks itself out. Restoring the directory is the only way to revert. When you use this tool you will have a few options to choose from. The scheme is up to you (I used separate). I recommend selecting "Use of Default Group" and "To Set Aside Unrestricted Group". I used four groups:</p><p>#1 "No_Web_Access" default (filter1, groupmode = 0)<br>#2 "restricted" (filter2, whitelisted with groupmode = 1 in its conf file and ** in its bannedsitelist file)<br>#3 "filtered" (filter3, filtered with groupmode = 1 and naughtynesslimit = 100)<br>#4 "unlimited" (filter4, groupmode = 2)</p><p>The idea here is that unassigned accounts are automatically blocked by filter1, young children are sandboxed with filter2, older children are filtered with filter3, and adults unrestricted through filter4. Since the restrictions are more about maturity than age the groups don't have names that refer to the latter.</p><p>The dg-filter-group-updater script requires system group names to have a specific format of "dansguardian-f#..." where # is the corresponding filter number. Anything after the digits is ignored so you can create more descriptive group names that a non-technical user can recognize in the users-admin tool when assigning members. These groups should be created as system groups (GID < 1000). I created my groups with addgroup:</p><p>addgroup --system dansguardian-f2-restricted<br>addgroup --system dansguardian-f3-filtered<br>addgroup --system dansguardian-f4-unlimited</p><p>Obviously you need to have a "sudo" before these or get a root terminal with "sudo su" first. Since filter1 is the default you won't be assigning users to it and don't need a matching system group. Next you just need to assign users to each group. If you assign the same user to more than one, DansGuardian will use the lower numbered filter in the resulting filter group list. Afterwards just launch the script via the menu item "DansGuardian filter group updater" and enter your admin password. First it will read through the dansguardian.conf file. The file location is set by the "dg_conf" variable in the script and is the only hard-coded value you need to worry about. From the conf file it locates the filter group list file and the number of filter groups. It then starts a new filter group list file (overwriting any existing one). Next it reads through /etc/group and looks for the "dansguardian-f#..." 
groups, extracts the users for each, and adds them to the filter group list file in "username = filter#" format. It then restarts DansGuardian. So all a parent needs to do is assign users to groups with users-admin and then launch the script from the menu item to apply the changes.</p><p>The script is based on the same code I used for <a href="http://jhansonxi.blogspot.com/2010/09/webcam-server-dialog-basic-front-end-to.html">webcam-server-dialog</a> so it will work with any dialoger program installed. Other than that it only uses basic text manipulation tools including grep, sed, and cut. If it doesn't start then launch it from a terminal window or do a "tail ~/.xsession-errors" to see any messages it put out (including those from DansGuardian when it restarts). Most error messages are displayed in a dialog box.</p><p>While dg-filter-group-updater solves the basic user administration problem, the lists for filtering (filter3 in my example) still need to be configured. The Ubuntu package only includes basic advertisement-blocking blacklists. Adding <a href="http://contentfilter.futuragts.com/wiki/doku.php?id=downloadable_blacklists">third-party blacklists</a> is complicated as you have to merge them in with "Include" statements in the main lists. The lists are organized by categories so you can pick and choose what to filter. Annoying but you only have to do it once if you're using simple filter groups like mine. The problem with blacklists is that they have to be updated often. <a href="http://www.shallalist.de">Shalla Secure Services</a> has some <a href="http://www.shallalist.de/helpers.html">update scripts</a> but they didn't impress me much or do what I wanted. My policy with third-party anything (clipart, CAD libraries, templates) is to keep them separate as references and use other files for customization. To that end I wrote <a href="http://www.mediafire.com/?s6msavq8jwd9sxl">shalla-bl-update</a>. It downloads the list and creates an MD5 file to track the installed version. When it is executed again it checks the MD5 published on the web site against the installed version and downloads the list again if it differs. It has some fault tolerance included as it will retry if the file fails to download or the downloaded file fails an MD5 check. The lists are located in "/etc/dansguardian/lists/shalla" by default. Just download the script from the link and put it in "/usr/local/sbin" with root ownership and 755 (rwxr-xr-x) permissions. It's designed to be started by cron. To have it run daily do "ln -s /usr/local/sbin/shalla-bl-update /etc/cron.daily/shalla-bl-update". It produces no output as cron will email root whenever anything it runs does. It has a debug mode you can enable by editing the script if you want it to fill your mailbox. It will restart DansGuardian after a successful list update.</p><p>Update: I updated shalla-bl-update to v1.2 which adds an optional check for empty system groups. The idea here is that if there are specified system groups used by dg-filter-group-updater, and these groups use the Shalla lists, then these groups should have members. If they don't then there is no point trying to update the Shalla lists. You need to edit the script and set the system_groups variable to the names of the system groups used by dg-filter-group-updater. The grep expression it is using will find partial matches. You can specify "--force-update" to override the check with empty groups.</p><p>Update: I've released v1.3 of shalla-bl-update and the link has been updated. 
Changes: --force-update now sets debug=true and clears the existing md5. Retries can now be aborted interactively in debug mode. Because of this, the script now uses bash because it requires the timeout capability of the "read" command. Added "test" parameter for use with Ubuntu Recovery Mode. DansGuardian is not restarted if RUNLEVEL=S (single mode, essentially Recovery Mode). Added --help parameter.</p><p>Note: There is a <a href="http://tech.groups.yahoo.com/group/dansguardian/message/23255">patch by Philip Allison</a> that integrates DG with system groups but the lead developer, Andreas Büsching, has been too busy to integrate it or keep up with maintenance.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com18tag:blogger.com,1999:blog-8230692678867105904.post-66189453612094430032010-10-19T21:34:00.011-04:002011-01-21T19:29:06.567-05:00UFW application profiles<p><a href="https://wiki.ubuntu.com/UncomplicatedFirewall">Uncomplicated Firewall</a> (<a href="http://manpages.ubuntu.com/manpages/maverick/en/man8/ufw.8.html">ufw</a>) is a front-end to <a href="http://en.wikipedia.org/wiki/Iptables">iptables</a>. One of its features is "application profiles" which are <a href="http://en.wikipedia.org/wiki/INI_file">INI-style files</a> that contain profile names and ufw settings. This allows packages to include their own firewall settings and make them available to ufw when installed.</p><p>Using profiles is relatively easy. To see what profiles are on your system, go to a terminal and enter "ufw app list" to see the names. The profiles are located in the directory "/etc/ufw/applications.d" and the names referenced are the "[section names]" in the files. Note that ufw also references the services list in "/etc/services" for rules. If a section name conflicts with an entry in the services file then the latter takes priority (and ufw warns you every time you use it).</p><p>There doesn't seem to be any documentation on the file format and the example files mentioned in the docs don't exist on my Karmic or Lucid systems but the existing files for OpenSSH server and Apache are good examples to determine it from:</p><p>[section name] (The identifier that ufw references)<br>title= (shown in "ufw status")<br>description= (doesn't seem to be used anywhere)<br>ports= (the port list)</p><p>This is the profile for OpenSSH server:</p><p>[OpenSSH]<br>title=Secure shell server, an rshd replacement<br>description=OpenSSH is a free implementation of the Secure Shell protocol.<br>ports=22/tcp</p><p>Multiple protocols are specified as "80/udp|80/tcp" with a vertical bar separating them. If just "80" was specified then both udp and tcp are assumed. The port can be a comma delimited list (80,443) or a range with a colon (81:82) or combined (80,443,81:82/udp|8080/tcp). If a range is specified then a separate entry for each protocol is required (81:82/udp|81:82/tcp).</p><p>I've been working on a Ubuntu 10.04 deployment configuration for my clients and one of my requirements is a user-friendly firewall for mobile users. While ufw is a command-line application GUIs do exist. <a href="http://gufw.tuxfamily.org">Gufw</a> is rather basic and doesn't support application profiles. My clients don't know much about network protocols but they can pick an application by name from a list. It does list some applications but they seem to be hard-coded. Another GUI is <a href="http://code.google.com/p/ufw-frontends/">ufw-frontends</a> (ufw-gtk) which does support them. 
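<br /><br />Putting the syntax together, a complete profile for a hypothetical game server (made-up name and ports, purely for illustration) would look like:<br /><br /><code>[MyGame-Server]<br />title=MyGame multi-player server<br />description=Incoming ports for hosting a private MyGame server.<br />ports=27960:27963/udp|27960:27963/tcp</code><br /><br />Dropped into "/etc/ufw/applications.d", it shows up in "ufw app list" and can be used with "ufw allow MyGame-Server".<br /><br />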
My only complaint with ufw-frontends is that when a profile is used there isn't any way to see what ports it affects - all you see is the profile name. In many cases the title and description are more informative than the profile/section name so I hope the tool shows them in future revisions.</p><p>With my deployment configuration selecting the firewall GUI was the easy problem. The hard one was the profiles themselves. Application profiles are easy to make but <a href="http://brainstorm.ubuntu.com/idea/18301/">few packages include them</a>. Many of my clients are gamers and most of the best games have online multi-player capability. This isn't just a Linux problem as all of them want to play games on Wine also. Most of these games are client/server and they need ports unblocked when hosting a private server. The profiles are easy to write but finding out which ports need to be forwarded can be very frustrating. Many gamer-oriented web sites provide aggregated ports lists but most of these are unverified and usually specify way more ports and protocols than are necessary. Developer sites either don't list them or list them without specifying the protocols (TCP/UDP or both) or traffic direction. With home users generally you only care about incoming connections to the server - not outgoing. Since most of the ports used are unofficial and not <a href="http://www.iana.org/assignments/port-numbers">controlled by IANA</a> many games have port collisions with other games, often because they are based on the same engine (like those from id Software and Epic Games) or they use the same API (DirectX/DirectPlay and GameSpy Arcade). It's very rare that you find a list as accurate or concise as that of <a href="http://www.novalogic.com/router.asp">Novalogic</a>. Several open-source applications only document their ports the old-fashioned way - in the source code. With some I had to install them and use "netstat -nap" to figure out what was used (which sometimes conflicted with the documentation). Another complication is that several games, like Quake 3, require a different port to be opened for every simultaneous client.</p><p>I couldn't really avoid the task so I spent several days writing profiles. You can <a href="http://www.mediafire.com/?46dydiby20xbjq8">download them all from here</a>. These are intended to be used as incoming exceptions to a "deny all" rule. Just extract and copy them to "/etc/ufw/applications.d" with root ownership and rw-r--r-- (644) permissions. Start ufw-frontends and click the "Add Rule" button. In the Destination/Port section select the Application radio button and choose the profile from the list. For applications like Quake 3 that have many possible port configurations I created a few different ones which should cover most situations. Unfortunately the ambiguous profile names in some files are going to be confusing. On a few I tried to make them more readable but fixing ufw-frontends so that it shows the title would be a better solution. Unavoidably there are several duplicates and overlaps with other applications which shouldn't harm anything unless the conflicting servers are both operating at the same time. Many servers can be configured with alternate ports but my profiles only specify the common defaults.</p><p>Both ufw and ufw-frontends have limitations that I hope will be addressed in the future. 
Support for <a href="http://en.wikipedia.org/wiki/Port_triggering">port triggers</a>, dynamic configuration based on <a href="https://bugs.edge.launchpad.net/ubuntu/+source/ufw/+bug/262438">the network connection</a>, or just warning when profile port ranges overlap would be helpful. If you add all my profiles to ufw the first thing you will notice with ufw-frontends is that it doesn't handle large numbers of profiles well. To help address the problem I've added a new parameter to the profile file format that hopefully ufw and ufw-frontends can utilize in the future. This is easy to do because INI files don't have much of a standard and ufw ignores everything other than the original ones. The parameter I added is "categories" for classifying profiles. This will allow users of ufw and related GUIs to quickly filter large profile lists. I put in a <a href="https://bugs.edge.launchpad.net/ubuntu/+source/ufw/+bug/659619">wishlist bug report</a> about it for ufw. I didn't want to bother creating my own standard from scratch so I used the <a href="http://standards.freedesktop.org/menu-spec/latest/apa.html">freedesktop.org menu spec</a> categories since they're already used for organizing desktop menus. I had to break the standard a bit by mixing main categories, usually "Network" with "Game", but this shouldn't be a problem.</p><p>The second parameter I added was "reference". This was due to the ridiculous amount of research I had to go through in finding port numbers for each application. Multiple "reference" parameters can exist for each profile, each listing a one-line item. The references indicate the basis for the profile configuration, like "netstat -nap|grep python", to indicate how the port was determined. Mostly these are web site references with a link specified in [URL label] wiki format.</p><p>Obviously there are many more servers and daemons to add but generic ones like DirectX, GameSpy, and GGZ Gaming Zone cover many. This brings up a possible optimization - a "meta" or "prerequisite" parameter. Because a lot of games share the same ports due to underlying common code, it would be simpler to define a profile that simply links to other profiles to specify ports. This way a profile could be specified for every individual program but not add a lot of duplicate rules to keep track of.</p><p>I only encountered one problem with ufw's profile implementation. It happened when I created a profile for <a href="http://artax.karlin.mff.cuni.cz/~brain/0verkill/">0verkill</a>. Apparently ufw doesn't allow section names to <a href="https://bugs.edge.launchpad.net/ubuntu/+source/ufw/+bug/663632">begin with a digit</a> but I can't imagine why this limitation would exist. Besides that and the way ufw-frontends handles huge profile lists this firewall configuration works well. I don't know if all of the profiles are correct as I didn't have time to test everything. Some may open more ports than a game or application requires and some may not open enough. Feedback is welcome.</p><p>Note: The NFS profile (nfs-kernel-server) requires static port mapping. The references in the file will lead you to articles on how to configure NFS this way but I changed the common 4000:4003 ports to 4194:4197 as these aren't in <a href="http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers">Wikipedia's list</a> or used by anything I could find with Google. 
There may be a Netfilter module that handles the NFS random port usage better, as one exists for <a href="http://www.sane-project.org/man/saned.8.html">saned</a> (nf_conntrack_sane), but I'm unaware of any.</p><p>Update: I updated the profiles package to v1.1 which includes a bunch more Linux games and some corrections. NFS profiles have been split into three different files representing the common address ranges. Using static ports for NFS is kind of a hack and I reported <a href="https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/688446">bug 688446</a> about possible solutions.</p><p>Update: I released v1.2 of the profiles which made some changes to http due to <a href="https://bugs.launchpad.net/ubuntu/+source/netbase/+bug/694894">bug #694894</a> (which is mostly the fault of Debian and IANA). I also added another parameter, "modules", which specifies the connection tracking module (nf_conntrack) for some protocols. Currently in ufw-gtk some of these can be enabled under "Edit > Preferences > IPT Modules". Having a separate dialog for them doesn't make a lot of sense as they are specific to a particular protocol and you usually need them enabled. The modules act similarly to "port triggering" but are more intelligent as they understand the handshakes of their respective protocols and can identify which additional ports have been negotiated between server and client. I also found out that with NFS v4.1 the dynamic port problems <a href="http://www.spinics.net/lists/linux-nfs/msg18342.html">are being eliminated</a>.</p><p>Update: I released v1.3 of the profiles. This is a minor release that only adds Skype, toribash, and webcam-server. The download link has been updated.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com4tag:blogger.com,1999:blog-8230692678867105904.post-34656142069695329622010-10-07T21:17:00.007-04:002010-10-08T13:08:37.156-04:00A script for auto-configuring saned network connections<p>Host-connected image scanners can be shared through <a href="http://www.sane-project.org/man/saned.8.html">saned</a> (part of sane-utils in Ubuntu). It can be run continuously as a daemon or on-demand through <a href="http://en.wikipedia.org/wiki/Inetd">Inetd</a>. Basic configuration for either mode is simple and generic but adding the network address to the saned.conf file in <a href="http://en.wikipedia.org/wiki/CIDR_notation">CIDR notation</a> is not. When you are setting up systems for multiple clients on different networks and IP ranges, this is a bit of a nuisance. To automate this I wrote saned-subnet-conf which will automatically add an entry for whatever network the host connects to through Network Manager or the ifupdown utilities directly.</p><p>Whenever a network connection is made or broken, <a href="https://wiki.ubuntu.com/OnNetworkConnectionRunScript">scripts can be triggered</a>. These scripts need to be located (or linked to) in "/etc/network" in specific subdirectories, the choice of which determines when they execute. Variables are passed to them that can be used for changing behavior based on the network interface, address assignment mode used (DHCP, static, ppp, etc.), and other values. See the <a href="http://manpages.ubuntu.com/manpages/lucid/man5/interfaces.5.html">interfaces man page</a> for some hints. Network Manager executes these scripts with "/etc/NetworkManager/dispatcher.d/01ifupdown" which uses the <a href="http://manpages.ubuntu.com/manpages/lucid/man8/run-parts.8.html">run-parts</a> utility. 
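<br /><br />The hook mechanism is easy to experiment with. This toy /etc/network/if-up.d script (root-owned, executable, and with no dot in the file name since run-parts skips those) just logs the variables that my script keys off of:<br /><br /><code>#!/bin/sh<br /># Runs on every interface-up event via ifupdown or Network Manager.<br />[ "$METHOD" = "loopback" ] && exit 0  # ignore the loopback interface<br />logger -t if-up-test "interface=$IFACE method=$METHOD"<br />exit 0</code><br /><br />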
Network Manager does not trigger the "pre" directories <a href="https://bugzilla.gnome.org/show_bug.cgi?id=600167">due to a design decision</a>.</p><p>To install the script, first <a href="http://www.mediafire.com/?6eu8njjqd320fkd">download</a> and extract it, then put it in "/etc/network/if-up.d". You'll need to use sudo or have a root terminal for the copying (and most of the rest of the commands). Make it owned by root:root with rwxr-xr-x (0755) permissions. Whenever a network interface is brought up by ifup or Network Manager the script will execute. It uses scanimage to look for scanners and if any are found it will then use the <a href="http://linux.die.net/man/8/ip">ip command</a> to get a CIDR version of the network address and produce an entry for saned.conf if one doesn't already exist. The last part is important as the script will add an entry for every network the host connects to. If you want to block a particular network, let the script add it to saned.conf and then comment the entry out with a # as the script won't add it again if it finds it anywhere in the file. Make sure you restart saned anytime you edit saned.conf (see below). If you want to keep the script from adding entries in relation to a particular network interface you'll have to edit the script and have it exit based on the IFACE variable. Look at the "$METHOD = loopback" entry for a rough idea. If you enable the VERBOSITY=1 entry the script will generate a log file in /tmp that includes all the variables. Currently the script only supports IPv4 addresses as my network doesn't use <a href="http://en.wikipedia.org/wiki/IPv6">IPv6</a> so I can't test it.</p><p>Setting up saned is rather easy. During installation you have the option of running it as a daemon. To enable this later use "dpkg-reconfigure sane-utils" and indicate "Yes" to the standalone server option, or just edit the "/etc/default/saned" file and set "RUN=yes". The server daemon will start automatically at boot but you can start (or stop, restart) it manually with "invoke-rc.d saned start" or "/etc/init.d/saned start". To see any messages from saned use "tail /var/log/daemon.log".</p><p>To have saned start automatically when a client connects, indicate "No" to the standalone server option or set "RUN=no" in the default config file. Then add (as per the man page) the required entry to "/etc/inetd.conf" if it doesn't already exist. You can use a text editor but a safer way is with the <a href="http://man.he.net/man8/update-inetd">update-inetd</a> utility: update-inetd --add "sane-port stream tcp nowait saned.saned /usr/sbin/saned saned". If you watch the log (tail -f -n 20 /var/log/daemon.log) you will see saned start and stop automatically whenever a client connects. Don't run a daemon and have an Inetd configuration active at the same time as they will conflict over network port access (6566 by default). To disable the Inetd entry use the command "update-inetd --disable sane-port".</p><p>To configure clients to use the server just add the server IP address or host/domain name to "/etc/sane.d/net.conf" and start whatever scanning program you want to use. You can get a list of available scanners with <a href="http://www.sane-project.org/man/scanimage.1.html">scanimage -L</a> but note that neither saned nor scanimage supports scanners connected via a parallel port.</p><p>On Ubuntu 10.04 (Lucid Lynx) and some earlier versions access to scanner devices isn't handled correctly for anyone other than standard users (UID=1000+) on the host. 
As a workaround you can use my <a href="http://jhansonxi.blogspot.com/2010/10/scanner-access-enabler.html">Scanner Access Enabler</a> to correct the permissions until reboot. In the future, scanner network access may be handled by <a href="http://en.wikipedia.org/wiki/Avahi_%28software%29">Avahi</a> but it doesn't work with Karmic or Lucid <a href="https://bugs.launchpad.net/ubuntu/+source/sane-backends/+bug/508866">due to another bug</a>.</p><p>Update: Forgot to mention that scanimage is used to look for scanners first before adding a saned.conf entry.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com3tag:blogger.com,1999:blog-8230692678867105904.post-29395923777910374362010-10-03T18:17:00.008-04:002012-07-08T23:51:26.745-04:00Scanner Access EnablerThere is a problem with scanner device permissions on Ubuntu. Regular users (UID>999) can access scanners through libsane applications like Xsane and <a href="https://launchpad.net/simple-scan">Simple Scan</a> without problems. <a href="http://scannerserver.online02.com/">Linux Scanner Server</a>, which runs in Apache as www-data, can't access them without a chmod o+rw on each scanner device. Nobody seems to know <a href="https://answers.launchpad.net/ubuntu/+question/127223">how the permissions work</a> so this has to be fixed manually in a terminal. This is not n00b friendly so I created a GUI application that automatically changes the permissions of every scanner device.<br />
The application relies on <a href="http://www.sane-project.org/man/scanimage.1.html">scanimage</a> and <a href="http://www.sane-project.org/man/sane-find-scanner.1.html">sane-find-scanner</a> utilities to identify scanner device ports, then simply does a chmod against all of them. It supports USB, SCSI, and optionally parallel port (-p parameter) scanners and has been tested against the same ones I used for my <a href="http://jhansonxi.blogspot.com/2010/10/patch-for-linux-scanner-server-v12.html">LSS patch</a>. It uses the same universal dialog code as <a href="http://jhansonxi.blogspot.com/2010/09/webcam-server-dialog-basic-front-end-to.html">webcam-server-dialog</a> so it should work with almost any desktop environment.<br />
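The core logic boils down to something like this sketch (heavily simplified; the real script adds dialogs, error handling, device filtering, and the -p option). The /dev/bus/usb mapping is an assumption about how libusb names line up with device nodes on a recent Ubuntu:<br /><br /><code>#!/bin/sh<br /># Open up every detected scanner device until the next reboot.<br />devs=$(sane-find-scanner -q | grep -o '/dev/[^" ]*')<br /># USB scanners report as libusb:BUS:DEV; map them to /dev/bus/usb/BUS/DEV.<br />devs="$devs $(sane-find-scanner -q | grep -o 'libusb:[0-9:]*' | sed 's#libusb:\([0-9]*\):\([0-9]*\)#/dev/bus/usb/\1/\2#')"<br />for dev in $devs; do<br />    [ -e "$dev" ] && chmod o+rw "$dev"<br />done</code><br />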
To install first <a href="http://www.mediafire.com/?4r1aw9ix9ayb0u0">download the archive</a> and extract the contents. Move the script to "/usr/local/bin/scanner-access-enabler" and set it for root:root ownership with rwxr-xr-x (0755) permissions. Copy the <a href="http://standards.freedesktop.org/desktop-entry-spec/latest/">desktop menu entry</a> to the /usr/local/share/applications directory with root:root ownership and rw-r--r-- (0644) permissions. You may have to edit the desktop file as it uses gksudo by default. On KDE you may want to change the Exec entry to use kdesudo instead. If you specify the -p option on the Exec line you may have to quote everything after gk/kdesudo. If you don't have one of the GUI dialoger utilities installed and plan on using dialog or whiptail then you need to set "Terminal=true" or else you won't see anything.<br />
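For example, a KDE version of the Exec line with the -p option quoted would look something like this (hypothetical; adjust the path if yours differs):<br /><br /><code>Exec=kdesudo "/usr/local/bin/scanner-access-enabler -p"</code><br />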
On Ubuntu the menu item will be found under System > Administration. If you want users to be able to activate scanners without a password and admin group membership, you can add an exception to the end of the "/etc/sudoers" file. Simply run "sudo visudo" and enter the following:<br />
# Allow any user to fix SCSI scanner port device permissions<br />
ALL ALL=NOPASSWD: /usr/local/bin/scanner-access-enabler *<br />
While you can use any editor as root to change the file, visudo checks for syntax errors before saving as a mistake can disable sudo and prevent you from fixing it easily. If you mess it up, you can reboot and use Ubuntu recovery mode or a LiveCD to fix it.<br />
Update: I released v1.1 which adds filtering for "net:" devices from saned connections. This didn't affect the permission changes but made for a crowded dialog with both the raw and net devices shown.<br />
Update: v1.2 adds a non-interactive/silent mode activated through a "-s" parameter.<br />
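That makes unattended use possible, e.g. from a root boot script or cron job:<br /><br /><code>/usr/local/bin/scanner-access-enabler -s</code><br />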
Update 20120708: v1.3 cleaned up the code a bit but broke detection completely as I recently noticed. I updated it to v1.4 which actually works now. :Djhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com7tag:blogger.com,1999:blog-8230692678867105904.post-41673079307188377412010-10-02T21:22:00.022-04:002011-02-07T23:28:20.512-05:00A patch for Linux Scanner Server v1.2 Beta1<p>I just spent several days testing, fixing bugs, and adding features to <a href="http://scannerserver.online02.com/">Linux Scanner Server</a> v1.2 Beta1. LSS is an easy way to share a non-networkable scanner through a web server. While the interface doesn't allow cropping like Xsane or <a href="https://launchpad.net/simple-scan">Simple Scan</a> it does support multiple file outputs, printing, and OCR through <a href="http://code.google.com/p/tesseract-ocr/">Tesseract</a>. Development has stalled with the beta and I encountered some bugs when testing it on Ubuntu 10.04 (Lucid Lynx). Instead of complaining about it, I fixed them.</p><p>Bugs fixed/features added:<br>Noise in Apache logs caused by unquoted variables and non-critical stderr outputs from ls and rm.<br>Adding scanners would fail if the scanner name included a forward slash.<br>Multiple scanner support broken due to a lack of newlines between entries in the scanner.conf file.<br>No support for scanners connected via parallel ports.</p><p>I wanted to try to break the beta before deploying it to my clients and I did - as soon as I connected a second scanner. I decided to fix the bug even though none of my clients have more than one attached to any given system. I just happen to have a bunch of them on hand and, thanks to a local tech recycling center, I added a few more. Sane supports scanners connected to parallel ports but LSS doesn't so I decided to fix that, well, just because. Yes - I went out and paid money for more scanners including an obsolete parallel port Mustek model just to fix LSS.</p><p>The deciding factor in doing this was that LSS is based on a shell script and a lot of <a href="http://en.wikipedia.org/wiki/Sed">sed scripts</a>. Shell scripts are about the only programming language I know to any depth (and Applesoft BASIC). Some of the regex/sed stuff still throws me but I had help from some of my <a href="http://www.lugwash.org/">LUG mates</a>. These are the scanners I tested with (and tested simultaneously):</p><p>AGFA SnapScan 1212U (snapscan:libusb:002:003)<br>Brother Industries MFC-440CN (brother2:bus4;dev1)<br>Hewlett-Packard ScanJet ADF (C7190A, identified as 5200C) (hp:libusb:004:002)<br>Hewlett-Packard ScanJet 4470c (rts8891:libusb:004:003)<br>Hewlett-Packard ScanJet 6100c (C6260A but identified as C2520A) (hp:/dev/sg5)<br>Microtek ScanMaker E3 (microtek:/dev/sg3)<br>Mustek 600 III EP Plus (/dev/parport0)<br>UMAX Vista-S8 (umax:/dev/sg4)</p><p>Fixing the multiple scanner support was a pain. LSS relies on <a href="http://www.sane-project.org/man/scanimage.1.html">scanimage</a> for all scanner functions. Getting scanimage to provide a newline at the end of the device list was trivial but the message printing function for the web page doesn't tolerate them and they all have to be converted to HTML breaks. Forward slashes in the model names from scanimage also required escaping but not anywhere else (like in the device paths). This got into sed loops which are really hard to do.</p><p>Adding support for scanners on parallel ports was also difficult. 
They have to be defined manually in the sane config files (/etc/sane.d/*.conf) but scanimage still doesn't report them. The <a href="http://www.sane-project.org/man/sane-find-scanner.1.html">sane-find-scanner</a> utility does find them and will indicate what brand is on which port but no additional details like the model name. Since sane can use auto-probing to find which parallel port the scanner is on there is no deterministic way to use the information from sane-find-scanner and the sane conf files to indicate a specific model. The only solution I could come up with is to manually specify parallel port scanners in a separate "scan/config/manual_scanners.conf" file and then merge it after the rest are detected. The format is the same as for scanners.conf but the value for ID needs to be specified as %i (same as the device entry in the format line for scanimage). The modified LSS index.cgi script will replace it with an auto-incremented value when merging. The NAME= value doesn't matter but forward slashes have to be escaped with backslashes and anything longer than 30 characters will be truncated in the pull-down list on the Scan Image page.</p><p>Setting up a parallel port scanner is a bit confusing. The Mustek model I used was configured in /etc/sane.d/mustek_pp.conf simply by uncommenting the line "scanner Mustek-600-IIIEP * ccd300". The second parameter is the name. The third is the port with an * indicating autoprobing which in my case became /dev/parport0. The last is the actual driver. With scanimage the device is not specified by the port but rather the backend driver and then the name. With the settings I used it became "mustek_pp:Mustek-600-IIIEP" (also specified for the "DEVICE=" value in manual_scanners.conf). If only the backend is specified scanimage will default to whichever is enabled in the conf file. I only have the one parallel port scanner (the ScanJet 4470c has USB and parallel but there's no driver for the latter) so I don't know how it handles multiple ones configured in the same file/backend.</p><p>There are still bugs in LSS. The most obvious one is a fault with the "Print_Message" function. There are several page updates that don't occur, mostly the "Please wait" ones that are supposed to display during scanner detection and image scanning. I don't know enough about the interaction between javascript and the browser to identify if it is a bug in the code or an architectural problem with the page design.</p><p>Another bug is with the scanner names. As you can see from my list above, some of the scanners are not named correctly. It may be that the model reported is the base one that the actual model is compatible with and sane just isn't more specific than that. LSS just uses whatever scanimage reports. This isn't a major problem as most systems will only have one scanner.</p><p>A third problem is with the scanner driver options that LSS specifies - basically none. Some scanner/driver combinations require specific options to be specified or else the scanner has problems. The only one I encountered with the models tested was that the default resolution of 200 was unacceptable to one of them so it was downgraded to 150. These errors show up in the Apache logs (/var/log/apache2/error.log) but only refer to index.cgi and not a specific point within the file. I'm not sure how this bug could be fixed. 
Parsing the options out of the sane conf files may work but different versions of the same base model may require different settings.</p><p>Future enhancements that would be nice are cropping and <a href="http://www.linuxjournal.com/content/internationalizing-those-bash-scripts">internationalization support</a> but that's more than I'm going to take on. My LUG mates also suggested using anything other than shell scripts.</p><p>To use the patch, first download and extract LSS into your web server data directory (/var/www/scan). Then <a href="http://www.mediafire.com/?hyuc4ijzx8y1wme">download the patch archive</a>, extract it into the "/var/www" directory and apply with</p><p><code>patch -p1 --directory /var/www/scan --input=/var/www/scan_1.2_Beta4.patch</code></p><p>Just reload any browser window that has the old version loaded to make the new one active. Restarting the server is not necessary.</p><p>LSS is GPL 2 but it's not clear in the package as the author didn't follow the <a href="http://www.gnu.org/licenses/gpl.html">recommended method for applying the terms</a>.</p><p>Note - there's a nagging problem with Ubuntu in that LSS can't access any scanners due to device permissions. It runs as user www-data but the old scanner group no longer exists so devices need chmod o+rw applied manually. For regular users (UID >999) it seems to happen automatically but nobody seems to know <a href="https://answers.launchpad.net/ubuntu/+question/127223">how that works</a>. I wrote <a href="http://jhansonxi.blogspot.com/2010/10/scanner-access-enabler.html">Scanner Access Enabler</a> to solve the problem.</p><p>Update: If you also have saned configured for scanner sharing then duplicates may be detected from both the raw devices and saned shared versions. If you don't want the ones from saned then comment out "localhost" in the "/etc/sane.d/net.conf" file and restart saned.</p><p>Update: pqwoerituytrueiwoq and I have been making more improvements to the beta. You can follow along and download updated files from <a href="http://ubuntuforums.org/showthread.php?t=1519201">this thread</a> at the Ubuntu forums.</p><p>20110207 Update: pqwoerituytrueiwoq and I made a bunch of fixes and I've released 1.2 Beta 4 of the patch. The links and instructions above have been updated. It is a recursive patch so it will affect several files. You still need to add the <a href="http://www.iconfinder.com/icondetails/46210/16/scanner_icon">favicon for it</a> to "/var/www/scan/inc/images". I performed a <a href="http://ubuntuforums.org/showpost.php?p=10429502&postcount=46">feasibility study</a> of adding proper preview, settings selection, and cropping. I found that the difficulty of adding them to the existing code base is extreme, even though some are needed to get LSS functioning correctly. For example, the Brother MFC-440CN doesn't scan because the modes that LSS uses (like "Color") are hard-coded in the html and don't match up with what the Brother driver offers. Because of these problems (and my lack of time) I've ended my involvement with the project. For my needs Beta 4 functions adequately. 
I also found another scanner project, <a href="http://phpsane.sourceforge.net">phpSANE</a>, that seems to have a better code base on php although it has <a href="http://ubuntuforums.org/showpost.php?p=10397513&postcount=44">many limitations otherwise</a>.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com3tag:blogger.com,1999:blog-8230692678867105904.post-86067361485758568932010-09-23T20:56:00.013-04:002010-09-23T23:18:29.474-04:00webcam-server-dialog: A basic front-end to webcam-server<p>I'm working on a new Ubuntu configuration to deploy for friends and family. One of the capabilities I wanted to add was simple remote webcam viewing. I saw <a href="http://www.linuxaria.com/article/webcam-server-su-linux-2?lang=en">an article</a> about <a href="http://webcamserver.sourceforge.net">webcam_server</a>. While it's old and only does images it met my requirements. The primary limitation is that it's a command-line application. It can be launched from an XDG desktop file and will default to /dev/video0 but on systems with more than one video device a terminal is needed. I did spend some time testing <a href="http://wiki.videolan.org/Documentation:Streaming_HowTo">streaming with VLC</a> but it doesn't show a device list for selection either and its streaming configuration dialogs are confusing at best. To deploy webcam-server I had to make it more friendly which meant making a video device selection dialog for it. My current programming hammer is Bash shell scripting (<a href="http://en.wikipedia.org/wiki/Applesoft_BASIC">Applesoft BASIC</a> was the other option) but it's not enough for GUI design. To add that capability I turned to what I call "dialoger" utilities that can produce GUI dialogs and provide feedback to command-line applications.</p><p>When I started this project the only dialoger I knew about was <a href="http://www.linux.com/archive/feature/55389">dialog</a> which is text-based and I needed something that would run in X. From researching alternatives to dialog I found <a href="http://linux.die.net/man/1/xmessage">xmessage</a> which led to <a href="http://homepages.ihug.co.nz/~trmusson/programs.html#gxmessage">gxmessage</a> and eventually <a href="http://en.wikipedia.org/wiki/Zenity">Zenity</a>. Plenty to choose from and all different. Now I had another problem - which one to use with each desktop environment? I normally use Gnome but some of my clients use XFCE or LXDE. There is also the possibility of a KDE user in the future. While Ubuntu includes Zenity by default, which would work with XFCE also, a GTK application isn't the best choice on KDE. I could use <a href="http://techbase.kde.org/Development/Tutorials/Shell_Scripting_with_KDE_Dialogs">kdialog</a> but either I had to make custom versions of the script for each environment, select the dialoger with a script parameter, or try to select it dynamically. In the great tradition of <a href="http://en.wiktionary.org/wiki/overdesign">overdesign</a> I chose the latter.</p><p>After spending several days solving a 5-line problem with 300+ I ended up with <a href="http://www.mediafire.com/?opu79isz45ir28p">webcam-server-dialog</a>. It supports dialog, <a href="http://linux.die.net/man/1/whiptail">whiptail</a>, <a href="http://xdialog.free.fr/doc/intro.html">Xdialog</a>, xmessage, gxmessage, kdialog, and Zenity. Because it's rather generic it can be expanded to support more without a lot of effort. 
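<br /><br />Stripped of the environment detection described below, the selection core is essentially an ordered availability probe. A toy sketch (the tool order here is arbitrary; the real script weighs it per desktop environment):<br /><br /><code>#!/bin/sh<br /># Try dialogers in priority order and use the first one installed.<br />for d in zenity kdialog Xdialog gxmessage xmessage dialog whiptail; do<br />    if command -v "$d" >/dev/null 2>&1; then<br />        dialoger="$d"<br />        break<br />    fi<br />done<br />echo "Selected dialoger: ${dialoger:-none}"</code><br /><br />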
It looks for processes that indicate a particular desktop environment or window manager, uses a built-in priority list of dialogers for that environment, checks for availability, then uses the best available to provide the GUI. The core of this script is nothing more than "ls /dev/video*" dumped into an array but the focus here was eye-candy. Getting this to work with all of them was difficult as they all have different command-line parameters, behaviors, and bugs, even between those that are supposed to be clones of each other. Some use exit return status for indicating button presses, some stdout, some both depending on the mode. The fun ones were dialog and whiptail which use stdout for drawing the screen and stderr for indicating list choices. There are a lot of comments and commented-out debug lines if you want to use this with your own projects (GPL v3). It could be useful as a front-end to other programs that lack selection lists like <a href="http://linux.bytesex.org/xawtv/">xawtv</a>.</p><p>To use it, install webcam-server then put the script in /usr/local/bin with root ownership and rwxr-xr-x permissions. I also made an <a href="http://www.mediafire.com/?en19j3z5dnt536o">XDG desktop menu item</a> for it. Put webcam-server-dialog.desktop in /usr/local/share/applications. For an icon I used the one from camorama and just copied /usr/share/pixmaps/camorama.png to /usr/local/share/pixmaps/webcam-server-dialog.png (make the directory if it doesn't exist). You can add parameters to the Exec line in the desktop file but the run dialogs always reference port 8888 (I'm too tired of working on this to make the text dynamic) and quotes don't pass through well so the caption format had to be hard-coded in the script.</p><p>In addition to direct web-page access a Java applet is included. It works well since it can automatically reload the image at a selectable interval. It has one really bizarre limitation - it will only connect from localhost. Apparently the developers decided to limit it that way instead of implementing remote authentication but you probably could use it from within an ssh connection. This security feature <a href="https://bugs.launchpad.net/ubuntu/+source/webcam-server/+bug/179932">has a bug</a> in its implementation when resolving the hostname so it requires the IP (127.0.0.1). I created a <a href="http://www.mediafire.com/?g4dsyd46cal6ymx">custom PHP web page</a> that works around this. Just rename it to index.php and put it in the client directory on your web server (/var/www/client by default) along with applet.jar from the /usr/share/doc/webcam-server directory and link to it from your default home page. You'll need PHP support installed (php5 metapackage).</p><p>I also tested it in <a href="http://www.mandriva.com/">Mandriva 2010.0</a> which worked but with one problem - the webcam-server executable is named webcam_server so just replace all instances of one with the other in the script. You'll also need v4l-info which is in the xawtv-common package. If you are using the firewall you have to add "8888/tcp" to the allowed ports for remote access.jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com0tag:blogger.com,1999:blog-8230692678867105904.post-25859745385950320652010-09-03T15:44:00.026-04:002010-09-04T15:52:55.936-04:00Your Linux system keeps falling and it can't get up<p>Once in a while a Linux PC technician will encounter a system that has problems with lockups (a.k.a. hanging or freezing). 
Sometimes it is failing hardware but other times it's a software problem. Here are the common causes for this and how to identify which is the source of your problems. While I predominantly use Ubuntu (and some Mandriva) these tests are valid for most any distribution.</p><p>1. A kernel crash or panic is rare but generally fatal. As any PC tech knows, the first test is seeing if the Caps Lock, Num Lock, or Scroll Lock LEDs change state when the corresponding keys are pressed as this is performed by the PC, not the keyboard controller. If they don't change then you know you've got a freeze problem. If the keyboard lights are <a href="http://jhansonxi.blogspot.com/2007/12/keyboard-led-flashing-panic.html">flashing repeatedly during the freeze</a> then it's a panic. More severe problems can prevent the kernel from failing gracefully enough to even do that. These failures can be caused by fundamental compatibility problems with a kernel module and a critical piece of hardware (like a BIOS with a <a href="https://wiki.ubuntu.com/DebuggingACPI">broken ACPI implementation</a>), a device that was given commands it didn't like and has stopped responding (like a video <a href="http://en.wikipedia.org/wiki/Graphics_processing_unit">GPU</a>) or just failing hardware (bad RAM, overheating CPU, and loose PCI or AGP cards).</p><p>With hardware problems it is best to start by opening the case and blowing the dust out since that is the source of most overheating problems. After cleaning a few systems you'll understand why you should charge extra for smokers, pet owners, and homes with shag carpet. I find it easiest to use an air compressor with a tank and a blow-off nozzle to do the job as canned air is too weak. Leave the system plugged in (but powered off) to keep it grounded as air streams can produce static electricity which can damage electronics. With modern systems the only hazardous voltages are in the power supply so if you don't insert any metal objects into that you won't get shocked. Try spinning all fans with your finger or a plastic probe to see if they turn freely without resistance. Replace any that drag, preferably with a fan that uses something other than sleeve bearings (ball or fluid bearings are better). Power supply fans can be replaced but it requires soldering or swapping connectors as there isn't a standard connection for their internal fans. While it's fun to use the air nozzle to spin up the fans to 100K+ RPM it's bad for the bearings. They also get cleaner when you hold the blades stationary (I use the end of a big nylon tie). Start with the power supply and then the CPU and work your way around to the front case vents. Keep wires tied up and away from the fans so they don't jam the blades or block airflow. Check for cards and memory modules that are not fully seated in their slots and partially-connected drive cables. If possible, remove heat sinks from CPUs and other chips and check for adequate <a href="http://en.wikipedia.org/wiki/Thermal_grease">heat sink grease</a> (should be an even but thin layer across the entire mating surface). Check that the chips are properly seated in the sockets and that the heatsink is pressing down evenly on them or else they may tilt and lose contact. Check for <a href="http://www.badcaps.net/ident/">failing capacitors</a>. If you find any bad caps it's probably easier to replace the motherboard unless you have good soldering skills. Use your eyes and nose - if something looks or smells burnt then it probably is. 
Keep in mind that power supplies often have a stronger "electrical" smell to them due to hand-soldering during manufacturing.</p><p>Laptops are harder to clean. Most can be opened by prying off the top bezel around the keyboard, usually starting with the section enclosing the display hinges. Some have screws and some just latch. Then the keyboard can be removed and the top mounting plate. There are how-to disassembly videos on the Internet for popular models that are often modded by hackers. Some laptops have externally removable heat sinks for easy cleaning. Just because it's easy doesn't mean that users clean them. I once recycled a high-end Sager laptop (about $5K USD) that had overheated and failed. The heatsink had plugged with lint and it kept shutting down so the parents gave it to their kids to play with. The kids laid it on their bed and had it running (bottom fans so no airflow whatsoever) and it overheated enough to melt the case around the heatsink. Made me sick to throw it out but the motherboard wasn't practical to fix after that. Modern CPUs will reduce their clock speed when overheating but can't reduce power dissipation entirely and can still overheat and fail when running at minimum levels.</p><p>Intermittent failures are harder to diagnose so continuous monitoring with hardware or software tools is needed. Hardware temperature monitoring can be performed with a cooking probe, thermocouple meter, or a dedicated PC temperature monitor that mounts in a drive bay. The CPU, GPU, power supply fan exhaust, and hard drives are the ones to focus on. Temperature limits for devices vary. CPUs and GPUs can often hit 60°C but 50°C is rather hot for a hard drive.</p><p>For fan monitoring you can leave the case cover off and keep an eye on them or install a PC fan monitor/controller with a display which mounts in a drive bay (and often includes temperature monitoring probes). Thermostatically-controlled fans will vary a lot but if you are having an overheating problem due to inadequate speed from a non-faulty fan then it may be too far out of the primary air stream to have an adequate response time. Better models have adjustable thresholds or remote sensors but the best solution is one controlled by the motherboard via a <a href="http://en.wikipedia.org/wiki/Computer_fan#Fan_connector">4-pin PWM fan connector</a>. Be careful here - I burned out a CPU fan controller when I used a CPU fan that consumed more current (amperes) than the motherboard's controller could handle so check the specifications before plugging it in. I had to convert mine to a drive connector which meant it ran at maximum speed and sounded like a vacuum cleaner.</p><p>A voltmeter is useful for monitoring power supply voltages <a href="http://pinouts.ru/Power/atxpower_pinout.shtml">on the connectors</a> under various loads. Generally supply voltages should be within 10% of the stated voltages on the power supply label. CPUs and GPUs usually have a local regulator on their boards as they need voltages that differ greatly from the normal 12/5/3.3 volts that most power supplies provide. The BIOS often has control over the CPU voltages and configuration so a bug in the BIOS (or incorrect manual settings) can cause erratic lock-ups by making the CPU unstable. Usually a <a href="http://en.wikipedia.org/wiki/Nonvolatile_BIOS_memory#Resetting_the_CMOS_settings">CMOS reset</a> or BIOS update can fix this. 
One way to test for an unstable or faulty CPU is to <a href="http://en.wikipedia.org/wiki/Underclocking">underclock</a> it via manual settings in the BIOS (or jumper or switch settings on really old motherboards) and see if stability improves.</p><p>To verify what the CPU needs for power and clock rates you first need to identify exactly which one you have as manufacturers have many versions and <a href="http://en.wikipedia.org/wiki/Stepping_%28version_numbers%29">steppings</a> and their requirements may differ. To see what you have use the command "less /proc/cpuinfo". Use that information to search for exact specifications and compare it to your system. Pay close attention to power requirements as some motherboards, even with the same socket, can't handle some CPUs. This results in unstable CPU voltages and intermittent failures, especially under heavy loads. This problem tends to occur with long-lived socket designs where the CPU family is expanded to include models with higher power requirements (essentially changing the motherboard requirements) that earlier motherboard designs can't meet even though the CPU fits in their sockets. I've <a href="http://jhansonxi.blogspot.com/2009/03/be-wary-of-cpu-upgrades-on-old.html">damaged a few boards</a> that way. Heatsinks and fans also need to meet the requirements of the CPU. Mass-market PC systems usually have very little power margin between the shipped CPU requirements and the system cooling capabilities so failing to upgrade them both can result in instability. Many of these cheap systems use a ducted case fan for cooling and just replacing a failed one requires tracking down the specifications for the fan and finding a replacement that matches in airflow (CFM) and features (connector type and thermostatic control). Standard CPU fan/heatsink combos usually can't be used as they don't fit in the case or the motherboard lacks mounting holes for them.</p><p>Most modern motherboards have built-in sensors as do CPUs, GPUs, and storage devices. These can be queried by software for status information, monitoring, and logging. Some BIOSes report the sensor values and error conditions and advanced servers often have separate hardware modules for remote monitoring of them. The standards that the sensor systems conform to are imprecise so custom drivers and algorithms are needed by external software for each implementation. Software tools include <a href="http://www.lm-sensors.org">lm-sensors</a> and <a href="http://sourceforge.net/apps/trac/smartmontools/wiki">smartmontools</a>.</p><p>The lm-sensors utilities report what thermal/fan/voltage sensors you have on your motherboard (if available and supported) and their current status. You first run "sensors-detect" to identify what kernel modules are needed and have it add them to /etc/modules and reboot (or just load them with modprobe). Then just run "sensors" to get the current status or use a graphical application like the Gnome <a href="http://sensors-applet.sourceforge.net/">Sensors Applet</a>, <a href="http://ksensors.sourceforge.net/">KSensors</a> or the XFCE4 <a href="http://goodies.xfce.org/projects/panel-plugins/xfce4-sensors-plugin">Sensors</a> panel plug-in. Note that wildly extreme readings may not indicate a fault but rather an unused sensor input or an unsupported implementation.</p><p>Most modern hard drives and SSDs have a monitoring and diagnostic system called <a href="http://en.wikipedia.org/wiki/S.M.A.R.T.">SMART</a> which can be accessed with smartmontools. 
<p>Most modern hard drives and SSDs have a monitoring and diagnostic system called <a href="http://en.wikipedia.org/wiki/S.M.A.R.T.">SMART</a> which can be accessed with smartmontools. While SMART can tell you about problems, it is <a href="http://en.wikipedia.org/wiki/Hard_disk_drive#Disk_failures_and_their_metrics">not good at predicting failures</a>. You use the smartctl program and specify the storage device to query. For most systems the primary storage device is named "sda" by the kernel so the command would be "smartctl -a /dev/sda | less". Most modern drives report temperature, log errors, and have built-in self tests that smartctl can activate. While the underlying registers on the drives are well-defined, what they represent is not, so smartctl needs conversion data to interpret the values. It will tell you if it recognizes the model or not. The obvious status to check is the "overall-health self-assessment test" which tells you if any of the register values exceed an alarm threshold. More specifically, the parameters of type "Pre-fail" are important. Also note the "worst" temperature value as it could indicate a prior significant overheating incident, which is most likely to occur under heavy load (like during a backup or a RAID rebuild). Graphical tools include <a href="http://gsmartcontrol.berlios.de/home/index.php/en/Home">GSmartControl</a> and the <a href="http://fedoraproject.org/wiki/Features/DeviceKit">Palimpsest</a> disk utility in DeviceKit (a.k.a. gnome-disk-utility), but they may need root access. Another is <a href="http://www.guzu.net/linux/hddtemp.php">hddtemp</a> which only reads the temperature but has a daemon that can be monitored through the sensor monitoring tools mentioned above.</p>
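<p>A typical smartctl session, assuming the drive is /dev/sda:</p><blockquote># overall health, attribute table, and error logs<br>sudo smartctl -a /dev/sda | less<br># start the built-in short self-test (takes a few minutes)<br>sudo smartctl -t short /dev/sda<br># read the self-test results afterwards<br>sudo smartctl -l selftest /dev/sda</blockquote>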
<p>RAM can be tested with <a href="http://www.memtest.org">Memtest86+</a> which is installed in Ubuntu by default. Reboot and hold the left Shift key down before Grub loads and starts booting. You'll get the Grub menu with Memtest86+ listed. You can also download a bootable ISO or USB image from the Memtest86+ site to test with. In the early days of PCs the memory had <a href="http://en.wikipedia.org/wiki/RAM_parity">parity checking</a> but modern RAM doesn't, so the only way to identify a failure is with a RAM test. If you are worried about memory problems then get <a href="http://en.wikipedia.org/wiki/Dynamic_random_access_memory#Errors_and_error_correction">ECC memory</a>. It costs only a little more than standard RAM but the motherboard has to support it, and it can reduce performance and limit <a href="http://en.wikipedia.org/wiki/Overclocking">overclocking</a>. With ECC memory the BIOS can provide much more memory diagnostic information and testing. For example, scrubbing memory locations is a standard process, performed at a user-definable interval, to find (and correct) any bits that have changed state by themselves. Servers often use ECC memory, but usually these are <a href="http://en.wikipedia.org/wiki/Registered_memory">registered</a> ECC memory modules which are sometimes called "server memory". The "registered" aspect isn't a certification - it's a buffer built into the module for use in systems that have more modules than the motherboard's <a href="http://en.wikipedia.org/wiki/Northbridge_%28computing%29">northbridge</a> can communicate with directly. These are not compatible with standard memory or motherboards that use it. RAM modules have an <a href="http://en.wikipedia.org/wiki/Serial_presence_detect">SPD</a> device that indicates their specifications. The BIOS reads the SPD data at boot and the command "dmidecode | less" displays that memory information (along with much other BIOS information) from the BIOS's DMI tables. Another source of intermittent memory problems is faulty configuration by the BIOS, either manually by the user or a faulty automatic configuration. A CMOS reset or BIOS update can often fix this.</p><p>Diagnosing a freezing system is difficult since you can't check log messages easily on a frozen system, and the logs are often truncated as a result. The kernel (and Grub) have built-in remote communication options which can help with this. These <a href="http://en.wikipedia.org/wiki/Out-of-band_management">out-of-band</a> remote connections can be made through a <a href="http://www.howtoforge.com/setting_up_a_serial_console">serial console</a> or with <a href="https://wiki.ubuntu.com/Kernel/Netconsole">Netconsole</a> and another system (a Netconsole sketch follows below). A serial console can be used like an SSH connection but requires a hardware <a href="http://en.wikipedia.org/wiki/Serial_port">RS-232 serial port</a> which is rare on modern systems. On Ubuntu 10.04 (Lucid Lynx) there is a Memtest86+ serial console configuration already in the menu that can be used to test memory remotely, but it's probably more useful for headless (i.e. no display) servers. Netconsole requires a network connection (it uses UDP) and another system running a syslog server. For kernel crashes the <a href="http://lkcd.sourceforge.net">Linux Kernel Crash Dump</a> tools can be used to obtain crash data that is useful for diagnostics or reporting kernel bugs, but I haven't used them yet.</p>
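<p>A minimal Netconsole sketch - the interface name, addresses, and MAC here are placeholders for your own logging host, and the Netconsole wiki page covers the proper syslog setup:</p><blockquote># on the failing machine: stream kernel messages to 192.168.1.10 port 6666<br>sudo modprobe netconsole netconsole=@/eth0,6666@192.168.1.10/00:11:22:33:44:55<br># on the logging host: a netcat listener works for a quick look<br># (option syntax varies between netcat variants)<br>nc -u -l -p 6666</blockquote>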
<p>Check the kernel messages and logs with "dmesg | less", "less /var/log/kern.log", and "less /var/log/syslog". There are many different log files, including compressed backups of previous logs. Some require you to be root to access them. With Ubuntu you just add "sudo" before the commands or get a root login with "sudo su". Midnight Commander's internal editor is helpful for reading logs, including the compressed ones. The built-in editor is not the default in Ubuntu - you have to enable it within MC with F9 > Options > Alt-I > Alt-S (use Esc 9 instead of F9 when connecting through a serial console).</p><p>Most distros have boot options that can be issued through the boot loader to the kernel to change its behavior or deactivate specific functions. Ubuntu and Debian <a href="https://help.ubuntu.com/community/BootOptions">have many</a> but every distro has its own. These can help to isolate problems or provide long-term stability when added permanently to the boot loader options.</p><p>2. An input error resulting in the loss of keyboard/mouse control acts like a freeze but isn't one. The first clue is whether there is any screen activity at all (most desktops at least have a clock applet running). Hardware causes include a faulty peripheral, USB hub, PS/2 port, or <a href="http://en.wikipedia.org/wiki/Kvm_switch">KVM switch</a>. With PS/2 ports a failure with one device usually prevents the other from working. A simple test is to plug in a USB mouse or keyboard and see if they work during the freeze. When a kernel bug is responsible the keyboard works in the BIOS setup and Grub menu but fails during boot (I've had problems with a bug related to an Intel i8042 PS/2 controller). These can be intermittent between boots, but once it's working during a session it usually stays working. It can also be a bug in X.org if the devices work in a tty terminal but not in X (as when booting into Ubuntu's recovery mode). I've encountered a freezing problem that affects only the mouse. It often occurs when an OpenGL game crashes. Besides restarting X with the keyboard (knowing the menu hotkeys helps here), I've found that launching the game again and then exiting usually fixes the problem.</p><p>Check your X session logs during the freeze by switching to a tty or connecting remotely through an SSH or serial console connection. Log in, then do "less /home/<username>/.xsession-errors" and see if there are any crash messages from running applications. Most desktop applications will log messages there if they don't have their own log. If you have no control at all, reboot but don't log in to a graphical session (stay at the display manager login screen) as the session log will be overwritten as soon as you do. Don't just hit the reset or power button when rebooting - try the <a href="http://en.wikipedia.org/wiki/Magic_SysRq_key">Magic SysRq keys</a> first or connect remotely and issue a "reboot" or "init 6" command.</p><p>An example of another traumatic but non-system freeze is when Nautilus hangs, as this makes it difficult to do anything with the Gnome desktop until it's killed (it usually restarts automatically, just like Windows Explorer). A Nautilus error would show up in .xsession-errors while a crash would also show up in the kernel logs. If the session log is rather big, making it hard to isolate messages related to a particular application, you can open a terminal window and run the suspect application from there, as any error messages will show up in that window instead. You can also capture the messages to a file by copying the screen or using shell I/O redirection to a file, which is helpful when submitting bug reports (see the examples after the next paragraph).</p><p>X.org input driver errors will show up in its log at /var/log/Xorg.#.log where the # represents the display instance that was running. Normally it's Xorg.0.log unless you have multiple sessions running, like multiple X logins or a non-Xinerama dual-head configuration. A different session ID could also be used if X crashes back to the display manager login screen and it thinks another session is still running (due to a leftover lock file) when you log in again.</p>
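<p>Two quick examples from the last few paragraphs (the file names are just illustrations):</p><blockquote># run a suspect application with its output captured for a bug report<br>nautilus > ~/nautilus-errors.log 2>&1<br># pull just the errors (EE) and warnings (WW) out of the X.org log<br>grep -E "\(EE\)|\(WW\)" /var/log/Xorg.0.log</blockquote>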
<p>3. Outright X.org crash. When it involves screen corruption it's obvious, but that symptom isn't always present. Sometimes this happens when switching to or from a tty terminal or when an OpenGL application is running full-screen. Sometimes it happens with the display manager at the login screen. If the keyboard lights don't toggle then try switching to a tty. If that doesn't work then try killing X with left-Alt+SysRq+K (or Ctrl+Alt+Backspace if enabled). If that doesn't do anything either then try a remote connection (or just pinging it). If that also fails then you are facing a kernel crash (which can be caused by a misbehaving video device due to the integration between X, the drivers, and the kernel). If you do get remote access then save and review the logs, including that of the display manager (/var/log/gdm/:0-greeter.log). These crashes are usually caused by video driver problems. In Ubuntu 8.10 through 10.04 (and several other distros) almost any Intel 8xx series graphics device will cause problems. There are a lot of <a href="http://www.freesoftwaremagazine.com/columns/xorgs_x_window_innovation_its_not_all_about_graphics">architectural changes occurring with video</a> involving the kernel, X.org, and DRI, and there has been a lot of breakage. Some drivers are not keeping up with the changes and some latent driver bugs are being discovered. The older Intel devices are currently the worst (of the "supported" devices) even though Intel is one of the companies pushing these changes and has engineers working on it. But not all video crashes are the fault of the driver. Some may be kernel bugs with the motherboard chipset that the video GPU is triggering. This was a common problem with AGP ports, and video device manufacturers like Nvidia wrote their own AGP modules for specific chipsets.</p><p>To save time, instead of analyzing the logs for something that indicates a driver problem, search the Internet for distro bugs relating to the device you have. Identify your graphics device with "lspci | less" or "lshw | less" and then search with Google for the device part number and the distro release, like "Ubuntu 10.04" or "Lucid". Check the release notes for known problems and possible workarounds. With Ubuntu there are usually workarounds in the <a href="https://help.ubuntu.com">Ubuntu help wiki</a>.</p><p>4. CPU overload. Some process is hogging the CPU and slowing everything down to a crawl. This is more common with single cores but can still happen with multicores due to memory bottlenecks. If you can't get a graphical process management tool like Gnome System Monitor to load then switch to a terminal and use the "top" command to see who the culprit is (probably Flash, but X.org driver faults can cause overloads without an outright freeze or crash). Identify it by process ID and kill it using top's built-in kill option (press k). You can also list processes with "ps -A" then use "kill -s <signal> <process number>". If there are multiple instances of the same process then use "killall -s <signal> <process name>". The signal is 15 (terminate) by default which means "ask nicely". If that doesn't work then use 9 (kill) which isn't as friendly.</p><p>5. I/O overload. Something is hogging the hard drive/SSD which can slow everything down to the point of being non-responsive. You can usually identify this by the hard drive activity LED being lit continuously. You can narrow down the list of processes responsible with the <a href="http://www.innovationsts.com/blog/?p=658">lsof command</a> ("lsof | less") but you'll find the output can be overwhelming. If you know which file is involved then you can identify the process responsible with <a href="http://www.serverwatch.com/tutorials/article.php/3812736/fuser-files-and-processes.htm">fuser</a>. Interactions between Firefox's database and the EXT filesystem can cause intermittent I/O overload, though less often with newer versions. A lack of storage space can also cause it if applications that are trying to write to the disk don't handle failed writes well, especially logs and temporary files. They may hang and start hogging the CPU as well. To check available storage space use "df -Th".</p><p>6. Memory hogging. Some process is eating memory and increasing in size. If the RAM is used up then the swap partition is used, which can also manifest itself as #5. Eventually the system runs out of memory and the kernel starts killing processes to fix it. Identifying the culprit is essentially the same as for CPU hogs. Use the command "free" to check memory and swap usage. A combined triage session for the last three failure modes is sketched below.</p>
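<p>Pulling those commands together (the <process number> and file name are placeholders):</p><blockquote># biggest CPU and memory consumers first<br>ps aux --sort=-%cpu | head<br>ps aux --sort=-%mem | head<br># ask nicely, then not so nicely<br>kill -s 15 <process number><br>kill -s 9 <process number><br># who has a suspect file open, and how full are the filesystems?<br>fuser -v /var/log/syslog<br>df -Th<br># memory and swap usage in megabytes<br>free -m</blockquote>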
<br><p>This is only the start of the diagnosis. Once you identify the source of the problem you can try to find a workaround, file bug reports, and test patches. This all seems rather complicated, but after you've fixed a few dozen systems you eventually recognize specific symptoms and behavior patterns right away and can quickly narrow down the problem. What differentiates real technicians and hackers from the amateurs is the stubborn resolve to find the problem.</p>jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com6tag:blogger.com,1999:blog-8230692678867105904.post-91290431482222562542010-05-15T16:23:00.027-04:002010-05-15T17:53:29.758-04:00Duplicating subsets of package selections between systems<p>I'm building a pair of Ubuntu systems for kids. For a variety of reasons, including lack of time and hardware problems, this has taken far longer than expected and I ended up with one running 9.10 (Karmic Koala) and the other 10.04 (Lucid Lynx). Since the Karmic system has the most testing effort invested in it (reviewing games for stability and kid-appropriateness) I needed a way to duplicate the selection of games on the Lucid system while filtering out everything else, since some packages are release-specific.</p><p>Using <a href="http://en.wikipedia.org/wiki/Synaptic_%28software%29">Synaptic</a> it is possible to export (File > Save Markings) either a list of packages whose selection state has changed but whose changes haven't been applied yet or, optionally, a list of all packages and their status. While Synaptic can filter displayed packages by repository or type (the "section" parameter in the deb INFO file), these filters have no effect on the markings export.</p><p>The apt-get command (or <a href="http://en.wikipedia.org/wiki/Dpkg">dpkg</a> directly) can be used to create a full list of packages (sketched below), but to produce a filtered list you need to use <a href="http://en.wikipedia.org/wiki/Aptitude_%28program%29">aptitude</a>.</p>
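<p>For reference, the unfiltered dpkg route works like this; it's the lack of filtering that makes it unsuitable here:</p><blockquote># on the source system: dump every package's selection state<br>dpkg --get-selections > all_selections.txt<br># on the target system: replay the list and apply it<br>sudo dpkg --set-selections < all_selections.txt<br>sudo apt-get dselect-upgrade</blockquote>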
<p>Aptitude has a search function that can filter on almost anything in a deb package INFO file. For example, to get a list of available games and save it to a game_selections.txt file:</p>aptitude search '?section(games)' >game_selections.txt<p>The question mark in the search pattern indicates that a search term follows. Because the parentheses are special characters to the shell the pattern needs quotes around it. In this example "section" indicates the section parameter and "games" is the term being searched for. Any package with a section parameter set to "games" will be listed:</p><blockquote>...<br>i chromium-bsu - fast paced, arcade-style, scrolling space<br>i A chromium-bsu-data - data pack for the Chromium B.S.U. game<br>p chromium-data - transitional dummy package for chromium-bs<br>p circuslinux - The clowns are trying to pop balloons to score points!<br>p circuslinux-data - data files for circuslinux<br>...</blockquote><p>The leading characters indicate each package's state, with "i" for installed, "i A" for automatically installed (either recommended or a dependency), and "p" for purged (i.e. no trace of existence on the system, which is the default state of all packages). To filter for installed packages only you add the "?installed" term:</p><blockquote>aptitude search '?section(games) ?installed' >game_selections.txt<br><br>...<br>i chromium-bsu - fast paced, arcade-style, scrolling space shooter<br>i A chromium-bsu-data - data pack for the Chromium B.S.U. game<br>i glchess - Chess strategy game<br>i glines - Five or More puzzle game<br>i gnect - Four in a Row strategy game<br>i gnibbles - Worm arcade game<br>i gnobots2 - Avoid robots game<br>i gnome-blackjack - Blackjack casino card game<br>...</blockquote><p>The status and descriptions will cause syntax errors when using the file as input to aptitude, so a format needs to be specified to filter them out. The "-F" parameter indicates a format change and "%p" equates to the package name:</p><blockquote>aptitude search -F '%p' '?section(games) ?installed' >game_selections.txt<br><br>...<br>chromium-bsu<br>chromium-bsu-data<br>glchess<br>glines<br>gnect<br>gnibbles<br>gnobots2<br>gnome-blackjack<br>...</blockquote><p>To use game_selections.txt as input to aptitude on another system just use command substitution (see the sh or bash man page) to feed it in from the shell:</p>aptitude install $(< game_selections.txt)<p>On Ubuntu you need to use sudo in front of this command or use "sudo su" to create a root shell and issue it from there.</p>
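<p>One refinement I haven't needed here, so treat it as a sketch: adding the "?not(?automatic)" term drops the automatically installed packages, leaving only the ones explicitly chosen, and aptitude will pull the dependencies back in (marked as automatic) on the target system:</p><blockquote>aptitude search -F '%p' '?section(games) ?installed ?not(?automatic)' >game_selections.txt</blockquote>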
jhansonxihttp://www.blogger.com/profile/02954133518928245196noreply@blogger.com4