Bill Wohler's Debian Notes to Self (and You)


Shared folders with Debian bullseye under VMware Fusion permalink

It's been a while since I last mounted a shared folder, and the instructions have changed since then. Here is what I used today:

$ vmhgfs-fuse .host:<share> <mountpoint> -o subtype=vmhgfs-fuse

However, it is possible that the shared folder is mounted automatically under /mnt/hgfs.


Converting a text file to a PDF permalink

enscript -p - -B < file.txt | ps2pdf - file.pdf


Inline vs bottom vs top posting permalink

I was an inline/bottom poster for decades, but have discovered that top posting is better. Much better.

From the reader's perspective, you see the reply right away rather than having to scroll through old text first--I find I now don't bother reading bottom postings if they are below the fold. You have the entire thread for reference as it was written rather than being spliced together like some horrific Frankenstein's monster, or worse, discarded entirely, which can be maddening if you need the context. You also don't have a bunch of >>>> characters on every line to lower the readability and can more easily see who said what rather than try to match up the row of >>>> characters with the attribution.

From the author's perspective, top posting is faster. You can start typing your reply right away. You don't have to scroll to the bottom, insert lines at just the right place, or discard precious text to make your email shorter.


Fixing VMware bridged networking on Big Sur permalink

A while ago, bridged networking stopped working on my Debian guest after the Big Sur upgrade on the Mac, which forced the version 12 upgrade of VMware.

In the fullness of time, a solution emerged, and that was to adjust the MTU. It's like back to the future. I haven't had to mess with the MTU since the 80s! But the following command made the bridged network functional again. For me, reducing the value from 1500 to 1491 was sufficient.

$ sudo ip link set eth0 mtu 1491

I also updated the network settings in the GUI so the interface would be initialized properly (GNOME 3 menu in upper right > Wired Connection > Wired Settings > Interface Gear Icon > Identity > MTU).

2021-10-25 update: macOS 12.0.1 (Monterey) just dropped, and this update fixed the MTU issue. I was able to reset my MTU to automatic (erase or set to 0 in GUI) and get my nine bytes back. Note that the existing version of VMware Fusion at the time, 12.1.2, worked fine. By coincidence, I was offered an upgrade to Fusion 12.2.0 that "supported Monterey", and thankfully, it continued to work. It provided a hardware version upgrade to version 19.


Fixing evince warning that /etc/mailcap.order does not have mailcap entries permalink

I've been getting the following warnings on most upgrades for a while now.

Warning: package evince listed in /etc/mailcap.order does not have mailcap entries.

I found a solution to this problem:

echo 'application/pdf; evince %s; test=test -n "$DISPLAY"' | sudo tee /usr/lib/mime/packages/evince
sudo update-mime
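The test= clause in that mailcap entry is what keeps evince from being chosen on a text console: the shell command after test= is run, and the entry is used only if it succeeds. A quick illustration of the mechanism:

```shell
# The test= field of a mailcap entry names a shell command; the entry is
# eligible only when that command exits successfully. Here `test -n
# "$DISPLAY"` succeeds only when an X display is set.
DISPLAY=:0 sh -c 'test -n "$DISPLAY"' && echo "entry eligible"
DISPLAY= sh -c 'test -n "$DISPLAY"' || echo "entry skipped"
```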


Growing a Debian VMware client disk partition permalink

I followed the instructions given in How to Increase space on Linux vmware as follows to take advantage of the increased disk space on my new system.

  1. Booted Linux into recovery mode.
  2. Ran fdisk and created a new primary partition /dev/sda3 of type 8E (LVM) using up all of the space.
  3. systemctl reboot
  4. sudo pvcreate /dev/sda3
  5. sudo vgextend olgas-vg /dev/sda3
  6. sudo lvextend -l +100%FREE /dev/olgas-vg/root
  7. sudo resize2fs -p /dev/mapper/olgas--vg-root


EOG in full screen locks up display permalink

Going full screen in eog locked up my screen for the second time. Holding down F11 shows the other windows behind briefly. Logging into the system via ssh revealed that the eog process was gone.

I found this page that provided a couple of solutions. I opted for the less elegant but more expedient one.

$ sudo aptitude purge xserver-xorg-video-intel

After rebooting to get my system back, F11 worked normally again.


Trash from the command line permalink

Here are the current commands to send a file to the trash, as well as to inspect and empty the trash can:

$ gio trash file...
$ gio list trash://
$ gio trash --empty


Unattended package updates and reboots permalink

Just added my notes regarding unattended upgrades here. Need to wrap the commands below with some commentary. My goal is to keep the system from rebooting automatically, yet get notified if a reboot is required.

$ sudo unattended-upgrades --dry-run --debug
$ less /var/log/unattended-upgrades/unattended-upgrades.log /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
$ less /usr/bin/unattended-upgrade /etc/apt/listchanges.conf /etc/apt/apt.conf.d/50unattended-upgrades

Create a cron job that sends email if a reboot is required:

$ cat /var/run/reboot-required /var/run/reboot-required.pkgs
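A minimal sketch of such a cron job, assuming a working mail(1) setup; the flag-file path is parameterized only so the check can be exercised without root:

```shell
# Sketch of a reboot-notification check. Debian creates
# /var/run/reboot-required when an upgrade wants a reboot; the path is a
# parameter here only so the function can be run without root.
check_reboot() {
    flag="${1:-/var/run/reboot-required}"
    if [ -f "$flag" ]; then
        echo "Reboot required on $(hostname)"
        # list the packages that requested the reboot, if recorded
        [ -f "$flag.pkgs" ] && cat "$flag.pkgs"
    fi
    return 0
}

# Demo against a temporary flag file
demo_flag=$(mktemp)
echo "linux-image-amd64" > "$demo_flag.pkgs"
check_reboot "$demo_flag"
rm -f "$demo_flag" "$demo_flag.pkgs"
```

In the cron job, the output would be piped to something like mail -s "reboot required" root, so a message arrives only when the flag file exists.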


Keeping Google's (not so) Smart Storage from nuking your media permalink

I had a lot of photos in my DiveMate app that mysteriously started disappearing. I discovered that Google Smart Storage deletes "photos or movies that are backed up", so I'm guessing that was the source of the problem and turned it off. If so, it wasn't smart enough to tell DiveMate where to find the deleted photos.

Fortunately, I back up my phone with rsync. What follows are the one-liners I ran to identify and recover the deleted photos. Whitsunday is my backup drive, and olgas is my laptop. I used Dropbox to get the deleted photos back to the proper location on my phone.

$ sudo find <path-to-backup>/app -name '*.jpg' > app.out
$ cat app.out |
    while read line; do
        basename "$line";
    done | sort -u > app.all
$ find * -name '*.jpg' |
    while read line; do
        basename "$line";
    done | sort > ~/tmp/
$ comm -13 app.all > app.deleted
$ cp $(grep -f ~/tmp/app.deleted ~/tmp/app.out | grep <backup>) <path-to-Dropbox>
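For what it's worth, here is a cleaned-up, self-contained sketch of the comparison step: comm needs two sorted files, and comm -23 prints the lines unique to the first (the backup), which are the deleted photos. The directories and filenames below are throwaway stand-ins for the real backup and phone trees.

```shell
# Compare basenames in a backup tree against a phone tree; anything unique
# to the backup has been deleted from the phone. Throwaway demo data.
workdir=$(mktemp -d)
mkdir "$workdir/backup" "$workdir/phone"
touch "$workdir/backup/a.jpg" "$workdir/backup/b.jpg" "$workdir/phone/a.jpg"

find "$workdir/backup" -name '*.jpg' -exec basename {} \; | sort -u > "$workdir/all"
find "$workdir/phone" -name '*.jpg' -exec basename {} \; | sort -u > "$workdir/kept"

# comm -23: lines only in the first (backup) list, i.e. the deleted photos
deleted=$(comm -23 "$workdir/all" "$workdir/kept")
echo "$deleted"    # b.jpg
```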


Static IP to olgas stopped working permalink

I noticed that I could no longer access olgas from the Internet this week. I thought Comcast routing tables were messed up. Later, I discovered that olgas was using a DHCP address, not my static IP address.

Chapter 5. Network setup of the Debian documentation indicates that my configuration should be in /etc/systemd/network/. However, that directory only contains a file that says:

# This machine is most likely a virtualized guest, where the old persistent
# network interface mechanism (75-persistent-net-generator.rules) did not work.
# This file disables /lib/systemd/network/ to avoid
# changing network interface names on upgrade. Please read
# /usr/share/doc/udev/README.Debian.gz about how to migrate to the currently
# supported mechanism.

Maybe I should do that someday.

That documentation also pointed to /etc/network/interfaces, but my file didn't contain any of my details.

So I went to the menu in the top-right corner and chose the Ethernet item and then the settings button in the dialog that appeared. I chose the IPv4 tab and saw that this had been switched from Manual to Automatic. I set it to manual, filled out the bits, and pressed Apply. I had to turn off the interface in the Network dialog and turn it back on for the change to take.


Mounting smb shares stopped working permalink

I used to be able to navigate to a Windows share and find it in /run/user/1000/gvfs. Not so since my upgrade to bullseye. I learned I can now mount those shares with

$ gio mount smb://server/share

and that they should be mounted in $XDG_RUNTIME_DIR. I lodged a bug report (Bug#956009).

The solution was to install gvfs-fuse and reboot.


Recognizing Canon lens in darktable permalink

So, I tried to load my raw photos into darktable (2.6), and the lens correction module did not identify them. However, I could manually look up my camera and lens successfully, which was strange. Looking at the ChangeLogs, I figured I could upgrade to bullseye and have a recent enough version of exiv2 and lensfun to handle my camera/lens (Canon PowerShot G7 X Mark II), but I was mistaken.

After a bunch of Googling, I came across the following steps.

$ sudo aptitude install liblensfun-bin
$ sudo lensfun-update-data

By default, the lensfun database is in /usr/share/lensfun/version_1. Running lensfun-update-data updated /var/lib/lensfun-updates/version_1 instead. Sure enough, there are differences associated with the G7X in the new file. After restarting darktable (restarting turned out to be very necessary), importing photos triggered the proper lens correction. All that was left was to create a style that turned on the lens correction module so that I could select all of the new photos and apply it in one go.


Fail2ban errors (and large log files) permalink

While investigating why my fan was running, I found that logcheck was working very hard because /var/log/auth.log and /var/log/fail2ban.log had gotten really large over the past couple of weeks. This was probably caused by fail2ban not banning attackers, which in turn was probably caused by persistent errors in the log, including "Failed to execute ban jail 'sshd' action". I adapted the recipe found in [1] and [2] to perform a clean reinstall as follows:

  1. sudo service fail2ban stop
  2. sudo service shorewall restart
  3. cd /etc
  4. sudo cp -pr fail2ban ~/tmp/fail2ban.nukeme
  5. sudo aptitude purge fail2ban
  6. sudo rm -r fail2ban
  7. sudo aptitude install fail2ban
  8. (Noted that /etc/fail2ban and ~/tmp/fail2ban.nukeme were identical)
  9. sudo service fail2ban start

In the meantime, I wanted to add the worst offenders to my Shorewall blrules file. I found that my old recipe no longer worked because fail2ban hasn't been sending me email for a couple of years. Here's what it was:

$ grep -h inetnum $MAIL/fail2ban/* | sort | uniq -c | sort -n

Here's my new recipe for finding the worst offenders.

$ for i in $(sudo lastb | awk '{print $3}' |\
sort | uniq -c | sort -n | awk '{if ($1 > 500) print $2}'); \
    do whois $i | grep ^inetnum: | awk '{print $2"-"$4}';\
done | sort
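The counting stage of that pipeline can be exercised on canned lastb-style output (fake_lastb and the host names are made up, and the threshold is lowered from 500 to 2 for the demo):

```shell
# Frequency filter from the recipe: the third field of `lastb` output is
# the remote host; print the hosts seen more than N times.
fake_lastb() {
    printf '%s\n' \
        'root     ssh:notty    bad.actor.example' \
        'root     ssh:notty    bad.actor.example' \
        'root     ssh:notty    bad.actor.example' \
        'admin    ssh:notty    onetime.example'
}

offenders=$(fake_lastb | awk '{print $3}' | sort | uniq -c | sort -n |
    awk '$1 > 2 {print $2}')
echo "$offenders"    # bad.actor.example
```

Each surviving host is then fed to whois to recover the inetnum range for blrules.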

Since I restarted fail2ban, I am no longer receiving a constant barrage of failed logins.

2019-12-12 update. I noticed the errors again. This time, I just did the following:

  1. sudo service fail2ban stop
  2. sudo service shorewall restart
  3. sudo service fail2ban start

Hours have passed, and I haven't seen a recurrence of the errors. I added /var/log/fail2ban.log to /etc/logcheck/logcheck.logfiles in order to catch the errors faster.


Massive log files permalink

My logs started getting spammed with these:

Feb  3 17:36:07 olgas kernel: [39888.256540] [drm:vmw_cmdbuf_work_func [vmwgfx]] *ERROR* Command buffer error.
Feb  7 21:36:59 olgas google-chrome.desktop[68050]: context mismatch in svga_sampler_view_destroy

This created log files that were tens of megabytes in size, causing logcheck to run hot and long.

Two suggestions to fix these two problems both start with shutting down the virtual machine and adding the following to the VMware .vmx file, where X and Y match your screen's dimensions:

svga.maxWidth = X
svga.maxHeight = Y
svga.vramSize = "X * Y * 4"

They also involve disabling the "Accelerate 3D Graphics" option in the VM settings.
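The vramSize expression is literally width times height times 4 bytes per 32-bit pixel. For example, for a hypothetical 1920x1080 display:

```shell
# vramSize = width * height * 4 bytes per 32-bit pixel
X=1920; Y=1080
echo "svga.vramSize = \"$((X * Y * 4))\""    # svga.vramSize = "8294400"
```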


It was also suggested to add "export SVGA_VGPU10=0" to .bashrc for the latter problem.

However, I found that the former problem resolved itself, perhaps with some reboots in both the guest and host, so I removed the .vmx settings. I also found that adding SVGA_VGPU10=0 to .bashrc/.dashrc caused display issues in my text windows. When focus moved, the text would become lighter, sometimes in the window with the focus.

In the end, I created a script in $HOME/bin/google-chrome that set SVGA_VGPU10=0 and called /usr/bin/google-chrome and pointed $HOME/.local/share/applications/google-chrome.desktop to it.


Binding the Mac screen saver to a key permalink

Obviously, I have to start another blog!

Synergy isn't letting me start the screen saver in the usual corner, and there doesn't seem to be a separate command to start the screen saver! However, a screen saver service can be created and that service can be bound to a key. I followed the instructions in How to Start the Mac Screen Saver with a Keyboard Shortcut in OS X. For some reason, I could not bind Super-L (the current GNOME binding), so I bound C-M-L (the old GNOME binding) to the screen saver.


Editing videos IV permalink

Well, I switched to iMovie to compose videos. It's so much easier to create video in iMovie than anything I've tried before on Linux, and it's much less buggy.


GnuCash Price Editor: There was an unknown error while retrieving the price quotes permalink

Yahoo changed their API in November, 2017. The package libfinance-quote-perl was updated in version 1.41 accordingly. However, this version is not in stretch, so you have to get the version from buster using your favorite method. Fortunately, the updated version from buster installs without fanfare into stretch.

Once libfinance-quote-perl is updated and GnuCash is restarted, change your source of quotes to Yahoo as JSON.


GUI tweaks permalink

After installing stretch from scratch recently, I had to relearn a few things. Here are a couple of items that I hadn't already covered here.


Can't change keyboard shortcuts permalink

Or at least, it seems you can't. I like to set C-M-Down and -Up to lower and raise windows respectively (using Settings -> Keyboard -> Windows -> Raise/Lower).

However, changing the lower command to C-M-Down didn't seem to work. Using dconf-editor on org.gnome.desktop.wm.keybindings, I found that switch-to-workspace-down (and -up) had aliases for C-M-Down and -Up that were interfering with my shortcuts. I deleted these aliases and my shortcuts worked!


Shared folders with Debian stretch under VMware Fusion permalink

I upgraded to stretch today and couldn't boot. Using journalctl -xb in the emergency system, I found that my shared folder wouldn't mount. I was able to comment out that mount and boot. I then found that the host filesystem is now mounted with FUSE in stretch. Here's my new fstab entry:

.host:/doc  /mnt/doc fuse.vmhgfs-fuse  allow_other,uid=wohler,gid=wohler,auto_unmount,defaults 0 0


Pushing etckeeper repository to another etckeeper repository permalink

I maintain a handful of Debian machines. My laptop is my master machine and the servers and workstations are considered remote. I like to keep a copy of the remote machines' configuration on my master machine.

After installing etckeeper on all of the machines, I ran the following on each remote so that the master branch on the remote would push to a separate branch on the master machine.

$ git config push.default upstream
$ git remote add origin ssh://master-machine/etc
$ git push -u origin master:branch-name

I use the host's name as the branch-name.
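The arrangement can be demonstrated with two throwaway local repositories standing in for the remote machine's /etc and the master machine's copy (the paths and the branch name myhost below are made up; the real remote is ssh://master-machine/etc):

```shell
# Throwaway demonstration of the etckeeper push arrangement: a "remote"
# machine's /etc repository pushes its default branch to a per-host
# branch on a central repository.
work=$(mktemp -d)
git init -q --bare "$work/central.git"
git init -q "$work/etc"
cd "$work/etc"
git config "demo@example.invalid"
git config "demo"
echo hosts > hosts
git add hosts
git commit -qm "initial"

git config push.default upstream
git remote add origin "$work/central.git"
branch=$(git symbolic-ref --short HEAD)
git push -q -u origin "$branch:myhost"   # branch-name = the host's name

git --git-dir="$work/central.git" branch --list myhost   # shows the new branch
```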

Too bad etckeeper doesn't support a pristine branch so that upstream changes can be easily merged into modified files.


Fix the wretched paste in LibreOffice permalink

The default LibreOffice paste function inherits the format from the source. This is never what you want, so you always end up using the paint tool to fix the formatting. Then you learn about Edit -> Paste Special -> Unformatted text. Then you learn to rebind C-v to Paste Unformatted text. Here are the steps with thanks to Robert Reese.

  1. Click on Tools.
  2. Click on Customize...
  3. Click on the Keyboard tab at the top.
  4. To the RIGHT of the Shortcut Keys box, click on LibreOffice.
  5. Below the Shortcut Keys box, find the Categories box, then find and click on Edit.
  6. In the next box, the Function box, find and click on Paste Unformatted Text.
  7. In the Shortcut Keys box above, find and click once on Ctrl+V.
  8. On the RIGHT, click the button titled Modify.
  9. Click the OK button at the bottom to finish.


Scrubbing disks permalink

I used to use mkfs.ext3 -cc to scrub disks, but I just read that this does a non-destructive read and write. Whoops.

I found a good resource that referenced a simple utility called shred that comes stock with Debian. Here is the command I used. It wrote random characters over the disk three times and took 8.5 hours to scrub a 700 GB disk over USB 3.0.

$ time sudo shred -v /dev/sdb
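As a rough sanity check on that figure (back-of-envelope only): three passes over 700 GB in 8.5 hours works out to about 68 MB/s sustained, which is plausible for USB 3.0.

```shell
# 700 GB written three times in 8.5 hours, in round decimal units
total_mb=$((700 * 1000 * 3))     # total data written, in MB
seconds=$((85 * 3600 / 10))      # 8.5 hours in seconds
echo "$((total_mb / seconds)) MB/s"    # 68 MB/s
```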


Installing libreoffice 5 (or anything) from backports permalink

I wanted to install libreoffice 5 from backports, but I couldn't seem to coax aptitude to do the right thing. I found that the following command got pretty close. The first conflict resolution it offered wasn't good, but the second was perfect, suggesting a few upgrades to make everything work.

$ sudo aptitude -t jessie-backports install libreoffice


Reducing those massive GNOME 3 titlebars permalink

While it may be true that GNOME 3 chrome is less than in the past, the titlebars are still unnecessarily massive if buttons are not employed.

The key is to reduce the excessive padding as suggested by [1]. I preferred the solution of copying the theme file into my home directory [2]. I saved a copy of the original so that I could merge future updates into my local copy.

$ cd .local/share/
$ mkdir -p themes/Adwaita/metacity-1
$ cd themes/Adwaita/metacity-1/
$ cp /usr/share/themes/Adwaita/metacity-1/metacity-theme-3.xml .
$ cp metacity-theme-3.xml metacity-theme-3.xml.orig

The proposed solutions simply made every title_vertical_pad value 0. I was mostly concerned with the window title bars so I just modified the "normal" and "max" frame_geometry elements. I also found the aesthetics of the proposed padding of 0 lacking, so I used values of "5" and "4" respectively. Next, install the changes by restarting the GNOME shell:

Press Alt+F2
Type restart
Press Enter

The Terminal tabs also suffer from too much padding. I have not yet found a solution.

stretch update, 2017-12-28: The file metacity-theme-3.xml doesn't exist on stretch. Instead, add the following to ~/.config/gtk-3.0/gtk.css:

window.ssd headerbar.titlebar,
window.ssd headerbar.titlebar button.titlebutton {
  padding: 0;
}


Juniper stopped working permalink

Juniper network-connect stopped working for me recently. It was on a new system, and in retrospect, I'm surprised it worked at all. The error message in ncsvc.log was:

rmon.error Unauthorized new route to 123.456.789.0/ has been added (conflicts with our route to, disconnecting (routemon.cpp:478)

At any rate, Google helped me find the solution, which was to add the following to /etc/NetworkManager/NetworkManager.conf:



XSane and Canon scanners on the network permalink

XSane won't automatically find your network scanner. You have to add an entry to the appropriate file in /etc/sane.d. For example, for my Canon printer, I added something like bjnp:// to pixma.conf.

VMware/Debian created a /dev/video0 device, so XSane found it and presented it first in the devices menu along with the Canon scanner. By commenting out /dev/video0 in v4l.conf so that only the Canon scanner was left, XSane now skips the device prompt, resulting in a faster startup.


Running Debian under VMware permalink

Here is a list of tips I gathered while running Debian under VMware Fusion.

  1. Move the Mac alt key next to the space bar so it's where I'm used to it. This was done by swapping it with the Mac command key. Bring up the System Preferences -> Keyboard -> Modifier Keys panel, and make the Option Key the Command key and the Command Key the Option key. I also make my Caps Lock a Control key.
  2. To make an external mouse work as you'd expect in Debian and not as on the Mac, go to the VMware Fusion -> Preferences -> Keyboard & Mouse -> Profile -> Edit Profiles dialog and duplicate the Default profile. In your new Profile, uncheck Key Mappings -> Enable Key Mappings, set the Mouse Shortcuts to the Secondary Button and Button 3 respectively (same as the mouse button), and uncheck the Fusion Shortcuts Cycle Through Windows and Minimize Window.
  3. I found that if I set both network interfaces to bridged, or if I set the first interface to bridged and turned off the other, my networking was about 20% slower than NAT. By chance, I discovered that by setting the second interface to NAT, my networking speed was as fast as the host. On the Debian side, I had eth0 grab its address from my router via DHCP. By shutting off eth1, I got rid of the martian source messages in the log.
  4. To get cut and paste working between the host and guest, install open-vm-tools and open-vm-tools-desktop.
  5. My Garmin 510 wouldn't mount. I followed the instructions in Troubleshooting USB devices using USB quirks in Fusion (1025256) , and that solved the problem. In particular, I added the following to my .vmx file after shutting down my VM and quitting Fusion:
    usb.quirks.device0 = "0x091e:0x2619 skip-reset"
  6. The audio/video sync in videos was off. This was fixed by adding the following to my .vmx file after shutting down my VM and quitting Fusion:
    pciSound.playBuffer = "30"
    With thanks to Rockwell.NSS and Bryan Smart.

The problems I still have include:

  1. Debian doesn't completely handle HiDPI. Turning off the retina display still results in a resolution I'm used to without all of the funky tiny icons.
  2. If you run "M-F2 restart" to restart the GNOME shell, the windows may come back a bit small or the text might not be clear. By using C-left to go to your Mac desktop, lingering for a moment, and then pressing C-right to go back to your Debian desktop, the windows and text are restored.
  3. Pressing the mouse against the bottom of the screen doesn't bring up the message tray. Pressing Super-M works.


Why I don't like underscores in filenames permalink

Underscores are hidden when filenames are underlined as links, or when highlighted in a selection. They are harder to type.

Another problem with them is that they lower your Search Engine Optimization (SEO) ranking, mainly because they are not considered a word separator like the dash (-). See Of Spaces, Underscores and Dashes for the details.


Making tracker go away permalink

With recent updates to GNOME, a suite of tracker processes appeared. They pegged the CPU at 100%, and their databases filled up my disk. I didn't seem to be able to discern any benefit. I found that to make the pain go away, I first had to make the following settings changes to keep tracker from starting.

$ gsettings set org.freedesktop.Tracker.Miner.Files crawling-interval -2
$ gsettings set org.freedesktop.Tracker.Miner.Files enable-monitors false

I then ran the following to stop the processes and remove the (large) database in ~/.cache/tracker.

$ tracker-control -r


Installing Oracle's JDK 8 (until OpenJDK 8 is available) permalink

Add the following to /etc/apt/sources.list on jessie:

deb trusty main

Then run the following:

$ sudo apt-key adv --keyserver --recv-keys EEA14886
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
$ sudo update-java-alternatives -s java-8-oracle

See How To Install The Oracle Java 8 on Debian Wheezy And Debian Jessie, Via Repository.


Updating Apache SSL keys and certificates permalink

The Heartbleed bug compromised our certificates. I needed to update my certificates anyway. I do it so rarely that a recipe will be nice the next time. Here it is.

  1. Generate a new key (if a new one is needed). Note that the -des3 argument requires a passphrase which could be inconvenient when rebooting a remote system so that is omitted in the recipe below.
  2. Generate a Certificate Signing Request (CSR). I specified US, CA, Menlo Park, Newt Software, [no unit],, [no email address], [no password], [no optional name]
  3. Paste the CSR in the appropriate field in the a CACert New Server Certificate request or renew the prior certificate in the list of domain certificates.
  4. Copy the generated certificate into
  5. Create a .pem file.

Here are the actual commands.

1. # openssl genrsa -out 2048
2. # openssl req -new -key -out
4. # cat >
<Paste certificate here>
5. # cat >

The .key and .csr files are created per Certificate Signing Request (CSR) Generation Instructions for Apache SSL.

While not related to this recipe, I also regenerated my dovecot keys and certificates with dpkg-reconfigure dovecot-core (not dovecot-common, the documentation is in error) after removing /etc/dovecot/dovecot.pem and /etc/dovecot/private/dovecot.pem. I then updated the fetchmail fingerprint per: Fixing "Server certificate verification error" error in fetchmail .


Dynamic DNS permalink

After DynDNS removed their free accounts, I moved to Initially, I tried inadyn to update my address, but did not care much for it. Perhaps if it had an entry in init.d, I'd still be using it, but having to add an entry to rc.local is annoying. Forget running inadyn out of cron every few minutes! If it fails to connect to an IP address service (which is often), it continues to try, which leads to dozens of inadyn processes filling up your logs.

I switched to ddclient, which was not without its problems either. However, it has an entry in /etc/init.d so it starts automatically at boottime. Since the current version of ddclient in wheezy doesn't support the freedns protocol, I installed 3.8.1 from source as follows:

# mkdir ddclient && cd ddclient
# sudo aptitude build-dep ddclient
# sudo apt-get source ddclient=3.8.1-1.1
# sudo dpkg --install ddclient_3.8.1-1.1_all.deb
# sudo aptitude install cpanminus
# cpanm --sudo Digest::SHA1
# vi /etc/default/ddclient /etc/ddclient.conf
# service ddclient start


Empathy with Google 2-step verification permalink

I found that I couldn't log into Empathy any more once I switched to Google's two-factor authentication or 2-step verification. I discovered that I had to create an application password at Google. Others found that they had to edit the Passwords and Keys settings so that their authentication survived reboots. Here are the steps:

  1. Close Empathy.
  2. Open up the "Passwords and Keys" panel.
  3. Delete any items marked "Instant messaging password."
  4. Generate an application password by going to your Google Security settings and clicking on the App passwords Settings link. Although the page says that you don't need to remember the password, you might want to do so if you want to use it from more than one machine.
  5. Open Empathy and enter your new application password.

The article GNOME 3.6: GNOME Online Accounts and Google two-factor authentication has additional information.


Creating your favicon.ico file permalink

Create your icon, make it 16x16, and export as a PNG. If you have ImageMagick installed, it's as easy as:

$ convert favicon.png favicon.ico

Alternatively, use the ConvertICO web site to convert your image to a favicon.ico file.


Google Music Manager on Debian Wheezy permalink

See Gabriel Saldaña's blog for instructions to get an older version of Google Music Manager that works with Debian wheezy.


GnuCash Price Editor: Unable to retrieve quotes for these items (redux) permalink

This error is described in Debian bug #739142. Although the bug says it's for Yahoo, the same patch referenced within works for Vanguard too. It's easy to apply; it just updates the URL in


Detaching GNOME 3 modal dialogs permalink

If you need to get at the information hidden under GNOME 3 modal dialogs, run the following to detach the dialog from its parent window.

gsettings set attach-modal-dialogs false

You then need to restart the GNOME Shell to use this new setting. Press M-F2 r RET.


Update Java plugin on Chrome permalink

If the Java plugin does not exist, of course it won't work. In addition, if it is too old, Chrome will complain or some sites that use a Java applet won't work. Chrome will provide a button that says "Update plug-in." In the case of Java, it will only let you download a tarball from Oracle. You can either use that tarball, or install the Debian package for the OpenJDK plugin.

The Debian OpenJDK package is called IcedTea. The appropriate package is icedtea-plugin (or its older variant such as icedtea6-plugin). Thus, all you have to do is install this package and restart Chrome:

$ sudo aptitude install icedtea-plugin

If you downloaded the Oracle tarball, install it and make Chrome aware of it with something like the following. The link to jre1.7 is created so that the link in ~/.mozilla/plugins doesn't have to be changed if you update the installation. If you want to make the plugin available to all users of the system, the appropriate directory is /opt/google/chrome/plugins (or /usr/lib/mozilla/plugins for Iceweasel).

$ cd /usr/local/lib
$ sudo tar xzf /tmp/jre-7u13-linux-x64.tar.gz
$ sudo ln -s jre1.7.0_13 jre1.7
$ sudo mkdir ~/.mozilla/plugins
$ sudo ln -s /usr/local/lib/jre1.7/lib/amd64/ ~/.mozilla/plugins

Once this is done, restart Chrome. If you installed more than one plugin, you can control the plugin that is enabled by visiting chrome://plugins/. Then, test the plugin.

This entry supersedes an earlier entry.


Editing videos III permalink

I learned that my current video editor of choice, kino (see Editing videos I and Editing videos II), died in 2009. Since it was using obsolete arguments to ffmpeg in wheezy, it could no longer export edited video!

The editors avidemux, OpenShot, and PiTiVi surfaced after a brief survey. The editor avidemux fared poorly during my first survey, so I considered OpenShot and PiTiVi. Since PiTiVi seems to be very integrated with GNOME, is under very heavy development, has favorable reviews, and—best of all—is found in the wheezy distribution rather than in, I was definitely leaning towards it.

I first gave OpenShot a brief try. I found it difficult to split a clip at the desired frame. You have to use the context menu to remove clips, and you need to drag the remaining clip to the beginning and the orange vertical bar to the end of the remaining clip to avoid exporting black space. My export settings were: Export: Profile: Web, Target: YouTube HD, default video profile (HD 720p, 25 fps) and quality (med). A nice .mp4 video was exported. However, there was a segmentation fault upon exit!

I found it very easy to edit video in PiTiVi. The manual is worthwhile to scan as it points out a couple of things that might not be obvious, but on the whole, it took less time to learn how to splice a video together than the rest. I first rendered my video in N800/MP4, but the export hung. However, rendering to Web (.webm) worked. While totem was able to play this video, the quality was not that good. I then chose a Container format of MP4, a Frame rate of 25 fps, and Codec of x264enc in the Video tab, and produced a nice .mp4 video at less than half the size of OpenShot and YouTube-friendly. I added it as a preset. Like OpenShot, you first need to drag the timeline to 0:00 to avoid rendering blackness at the beginning of your movie. However, you don't have to worry about trimming blackness from the end.

To add text to your video, create a transparent PNG image in the GIMP that is the same size as your video, add your text to that image, and then import it into PiTiVi. If you update the image, I found that the quickest way for PiTiVi to reread it was to replace it on the timeline, that is, to delete the clip from the timeline and to drag it back from the clip library.


Upgrading to wheezy permalink

I followed the release notes and things went rather smoothly. It took a little playing with aptitude afterward to complete the full update of the packages and work out the i386 multiarch kinks.

GNOME 3 has changed a bit since I first talked about it. Here are a couple of changes and additional pointers that go with GNOME 3.4.

The computertemp applet appears to be gone as well, and I learned that /proc/acpi/ibm is being deprecated in favor of the files in /sys. I found a good alternative for controlling my laptop's fan in the thinkfan package. I followed the German instructions for configuring thinkfan as well as a translation and embellishment using the sensors in /sys instead of in /proc/acpi/ibm. I referred to a page on my T500's sensors. The ThinkPad ACPI Extras Driver document contains additional good information.

With the upgrade, Picasa lost the ability to upload photos to Picasaweb. The file ~/.google/picasa/3.0/picasa.log held the clue: Picasa couldn't find the older libssl library it was linked against. Linking the new versions of libssl to the old name didn't work, but I found a compatible version 0.9.8 in /usr/local/lib/emul/ia32-linux that had been installed by somebody. In order to provide this library to Picasa, start Picasa as follows:

LD_LIBRARY_PATH=/usr/local/lib/emul/ia32-linux/usr/lib picasa
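To avoid retyping the LD_LIBRARY_PATH incantation, a tiny wrapper script helps. This is only a sketch; it assumes ~/bin is on your PATH, and the wrapper name is my own choice:

```shell
# Create a wrapper so the library path doesn't have to be remembered.
# ~/bin and the name "picasa-wrapped" are assumptions, not from Picasa.
mkdir -p ~/bin
cat > ~/bin/picasa-wrapped <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH=/usr/local/lib/emul/ia32-linux/usr/lib exec picasa "$@"
EOF
chmod +x ~/bin/picasa-wrapped
```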


Getting more performance out of USB memory sticks permalink

When I rsynced a directory onto my new Patriot 64 GB USB drive (60 MB/sec, VFAT), the symbolic links failed to copy since VFAT doesn't support them. When considering an ext filesystem for this stick, for which I didn't care about compatibility with others, I found that you want to use ext2 rather than ext4 to avoid the journal's extra writes to the flash (which has a limited number of write cycles). I also found an interesting article called Increase USB Flash Drive Write Speed.

Running the same commands shown in his blog, here are my results with this drive.

$ sudo hdparm -t /dev/sdb1
 Timing buffered disk reads:  90 MB in  3.07 seconds =  29.32 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/PATRIOT/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 4.62097 s, 22.7 MB/s
$ sudo fdisk -H 224 -S 56 /dev/sdb
$ sudo mke2fs -t ext4 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
 Timing buffered disk reads:  90 MB in  3.02 seconds =  29.82 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/891aeafd-24cd-426e-b37e-24738e324fdd/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.12317 s, 851 MB/s
$ sudo mke2fs -t ext2 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
 Timing buffered disk reads:  90 MB in  3.01 seconds =  29.90 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/2032f5d7-f4d0-4853-89a1-d6c7129e11cb/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.116094 s, 903 MB/s
$ sudo fdisk -H 32 -S 8 /dev/sdb
$ sudo mke2fs -t ext4 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
 Timing buffered disk reads:  94 MB in  3.06 seconds =  30.71 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/093eda14-da74-4446-ac35-0106cd5d644f/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.124107 s, 845 MB/s
$ sudo mke2fs -t ext2 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
 Timing buffered disk reads:  96 MB in  3.05 seconds =  31.52 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/9007be40-0a89-480f-a0d6-ebdad491f4cb/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.118107 s, 888 MB/s

The first fdisk command's arguments follow the blog while the second follows a suggestion in the comments. In my case, the original suggestion seems faster on the writes, and ext2 seems faster than ext4. Since ext2 is also easier on the drive, I opted for the original fdisk suggestion.

But first, for fun, I tried the default fdisk and ext commands.

$ sudo fdisk /dev/sdb
$ sudo mke2fs -t ext2 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
 Timing buffered disk reads:  96 MB in  3.02 seconds =  31.75 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/adde8bde-13f5-4b1d-bf14-c3682d109715/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.119677 s, 876 MB/s

The proposed commands are only about 3 percent faster than the defaults, which isn't enough for me to eschew them. So I went with the default fdisk and ext2 commands.
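A caveat on the dd figures above: 800+ MB/s writes to a USB stick are really measuring the Linux page cache, since dd returns as soon as the data is buffered, before it reaches the device. Adding conv=fdatasync makes dd flush to the device before reporting, giving an honest number. The sketch below writes to /tmp so it runs anywhere; point of= at a file on the mounted stick to measure the stick itself:

```shell
# conv=fdatasync forces a flush to the device before dd reports its
# throughput, so the figure reflects the medium, not the page cache.
dd count=10 bs=1M if=/dev/zero of=/tmp/ddtest conv=fdatasync
```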


Getting the microphone working on Debian squeeze permalink

I thought the microphone on my ThinkPad T500 running squeeze was working, but I hadn't used it in a really long time. When I really needed it recently, it didn't work. Googling didn't turn up an answer, but I stumbled across a fix.

  1. Run
    $ sudo alsactl init
    $ sudo shutdown -r now
    The alsactl program complained about unknown hardware, but this was OK in my case. From what I read (and experienced), you do need to reboot after this step.
  2. Bring up the ALSA Mixer panel with System -> Preferences -> Sound.
  3. Set the Preferences -> Digital Recording checkbox. This was key. Strangely, the mic continued to work even if I unchecked the box.
  4. Select the Recording tab.
  5. Set level to about 70% and ensure that everything is unmuted.
  6. On the Playback tab, set the Docking Mic and Internal Mic levels to about 70%. I found that the Docking Mic levels controlled the mic that was plugged in (rather than the obvious External Mic), and the Internal Mic levels controlled the built-in microphone.
  7. Use Sound and Video -> Sound Recorder to test. There is a mic level display on the recorder; I tweaked the levels in the ALSA Mixer Playback tab to ensure the mic levels were set to a nice level.


Update Java plugin on Chrome permalink

This entry has been superseded by a later entry.

If the default version of the Java plugin is too old, Chrome will complain or some sites that use a Java applet won't work. Chrome will provide a button that says "Update plug-in." In the case of Java, it will only let you download a tarball from Oracle. Here are the steps I took to tell Chrome about it. Your precise locations may vary.

$ cd /usr/local/lib
$ sudo tar xzf /tmp/jre-7u13-linux-x64.tar.gz
$ sudo ln -s jre1.7.0_13 jre1.7
$ sudo mkdir /opt/google/chrome/plugins
$ sudo ln -s /usr/local/lib/jre1.7/lib/amd64/ /opt/google/chrome/plugins

After restarting Chrome, test the plugin.

I was hoping to be able to link to this library from my ~/.config/google-chrome directory. Please let me know if you know the directory Chrome is looking for.

By the way, the directory that Iceweasel uses is /usr/lib/mozilla/plugins.


Fixing "pam_abl[32671]: Invalid argument (22) while opening or creating database" permalink

The Debian pam-abl maintainer Alex Mestiashvili suggested that I run the following to test the database:

$ sudo db5.1_verify -h /var/lib/abl users.db
$ sudo db5.1_verify -h /var/lib/abl hosts.db

This revealed a problem with the database, so Alex proposed I perform the following:

$ sudo db5.1_recover -v -h /var/lib/abl

This took a long time to run, but it completed successfully and pam_abl returned to blocking access from script kiddies. In addition, I found that you can clean up the thousands of log files in /var/lib/abl with the following command:

$ sudo db5.1_archive -d -h /var/lib/abl

While this does prevent recovery from a catastrophic database corruption, in the case of pam-abl, there isn't much harm in starting from scratch. Thus, this cleanup does not present much risk. To really clean things up, there is nothing better than this:

$ cd /var/lib/abl
$ sudo mkdir t
$ sudo mv _* log* *.db t
$ sudo pam_abl
$ sudo rm -r t
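To keep the log files from piling up again, the cleanup could be scheduled. This /etc/cron.d fragment is my own sketch; the file name and weekly timing are assumptions, not from the pam-abl documentation:

```shell
# /etc/cron.d/pam-abl-cleanup (hypothetical file): prune pam-abl's
# Berkeley DB log files every Sunday at 03:00.
0 3 * * 0 root db5.1_archive -d -h /var/lib/abl
```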


Problems with wireless permalink

We began having issues streaming our Netflix movies. They began to break up more and more. I also started noticing slow network speeds and dropouts on my Linux laptop and the following errors in my log:

kernel: [96404.491351] iwlagn 0000:03:00.0: Microcode SW error detected.  Restarting 0x2000000.
wpa_supplicant[1910]: Failed to initiate AP scan.
kernel: [107123.472075] wlan0: direct probe to AP XX:XX:XX:XX:XX:XX timed out
wpa_supplicant[1910]: Authentication with XX:XX:XX:XX:XX:XX timed out.

After rebooting every box in the house to no avail, I stumbled across an amazing Amazon review of my wireless router that suggested using a slower rate if there is wifi congestion in the neighborhood (does 13 APs count as a lot?).

After dialing the network speed down from 450 Mb/sec to 217 Mb/sec, I have not seen any of the above errors in the past week, and Netflix streaming is working perfectly.


Using pam_abl to help against brute force login attacks permalink

My logs are full of failed login attempts. I wanted to reduce the output in the logs, so I installed libpam-abl and configured it per the instructions in /usr/share/doc/libpam-abl/README.Debian. In addition, I replaced !root with * in the user rule since I don't allow remote root logins anyway. Also, there is a bug in version 0.4.3 of libpam-abl that allows logins with correct passwords that would otherwise be blocked. The symptom is an "Operation not permitted" error in auth.log. The workaround is to set MaxAuthTries to 1 in /etc/ssh/sshd_config.

I then tested per the instructions in Jonathan Gardner's wiki by uncommenting the debug line in the configuration file and logging in with:

$ ssh -o "PubkeyAuthentication=no" you@yourhost

As Bob Cromwell states on his How to Set Up and Use SSH page, access controls actually result in more logging, not less. However, now I'll feel a little better about suppressing failed logins from my logcheck messages.


Getmail is not for me permalink

The fetchmail program started throwing a segmentation violation last week, which seemed correlated with my changing my password at work. Because changing my password again is such a pain, and it wasn't guaranteed to fix the problem (since I've had longer passwords, and passwords with similar characters), I switched to getmail, which I hope to use only until the next time I change my password, or until I upgrade to wheezy and get fresh bits.

First, here's my simple fetchmail configuration.

set daemon 60
set bouncemail
set properties ""
poll host protocol IMAP user keep ssl idle

Here is my .getmail/getmailrc.

[retriever]
type = SimpleIMAPSSLRetriever
server = host
username = user
password = password

[destination]
type = MDA_external
path = /usr/bin/procmail
unixfrom = true

[options]
read_all = false
delete_after = 30
delivered_to = false

In order to use it, you have to create a cron entry.

* * * * * getmail --quiet
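If a poll ever runs longer than a minute, cron will happily start a second getmail on top of it. Guarding the entry with flock(1) avoids that; this is my own refinement, not from the getmail documentation, and the lock file name is a hypothetical choice:

```shell
# crontab entry: flock -n exits immediately if a previous getmail
# still holds the lock, so polls never overlap.
* * * * * flock -n $HOME/.getmail/cronlock getmail --quiet
```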

Here is why getmail sucks and why I'm looking forward to switching back to fetchmail.

  1. getmail does not have a daemon mode, which means that
  2. You have to create a crontab entry (with the associated syslog noise), and
  3. You don't have access to IMAP's IDLE mode for faster updates, and
  4. You have to add your password to the configuration file. It is a huge security problem to have plaintext passwords in a text file, regardless of the permissions of the file and directory. In addition, I use Subversion to maintain and distribute my environment to my various machines so now the plaintext password is in an unknown environment in the Subversion repository. Finally, I now have yet another task to do when I change my password (required every two months). In contrast, my fetchmail configuration file hasn't changed for the five years I have worked here. That's the way it should be.
  5. getmail sets the value of the Return-Path header field to unknown instead of the address of the sender. This makes all of my procmail log entries start with From unknown making the log useless for checking on mail from a particular person.

Lame, lame, lame!


gvfs-open (via xdg-open) opens the wrong application permalink

The xdg-open program was opening gnumeric instead of libreoffice for .xlsx files. I found a few tools to help diagnose the problem:

$ xdg-mime query filetype foo.xlsx
$ xdg-mime query default application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
$ grep -r gnumeric ~/.local
$ locate gnumeric.desktop
$ locate libreoffice-calc.desktop

I think what is happening is that /usr/share/app-install/desktop is not in the search path while /usr/share/applications is, so gnumeric.desktop is used since it is found, while the correct application, libreoffice-calc.desktop, is not found and doesn't even get to play.

I removed the offending line from ~/.local/share/applications/mimeapps.list and then ran:

$ xdg-mime default libreoffice-calc.desktop application/vnd.openxmlformats-officedocument.spreadsheetml.sheet

When I run xdg-open on a .xlsx file, I get libreoffice as desired.


Fixing "Server certificate verification error" error in fetchmail permalink

Recent versions of fetchmail seem to be getting picky about their SSL certificates. I only noticed after it seemed that fetchmail was taking its time getting mail. Running fetchmail -vN, I saw the following:

fetchmail: Server certificate verification error: self signed certificate
fetchmail: This means that the root signing certificate (issued for
/O=Dovecot mail server/
is not in the trusted CA certificate locations, or that c_rehash needs
to be run on the certificate directory. For details, please see the
documentation of --sslcertpath and --sslcertfile in the manual page.

I tried running c_rehash on my server to no avail. After a bit of Googling, I learned that I could give fetchmail my server's certificate fingerprint to satisfy its stringent security checks. To obtain your server's fingerprint, run:

sudo openssl x509 -md5 -subject -dates -fingerprint -in /etc/dovecot/dovecot.pem

To tell fetchmail about it, add the sslfingerprint keyword to your .fetchmailrc. For example,

poll <host>
     ssl sslfingerprint "<fingerprint obtained above in quotes>"
     <other parameters>
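To sanity-check the fingerprint workflow without touching a live server, the x509 step can be exercised on a throwaway self-signed certificate. The file names and subject below are placeholders:

```shell
# Generate a throwaway self-signed certificate (a stand-in for
# /etc/dovecot/dovecot.pem), then extract the MD5 fingerprint string
# that goes into .fetchmailrc.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/O=Dovecot mail server" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null
openssl x509 -md5 -fingerprint -noout -in /tmp/demo-cert.pem
```

If you can't read the PEM file on the server itself, the same x509 command can be fed over the wire: openssl s_client -connect host:993 </dev/null | openssl x509 -md5 -fingerprint -noout.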


Readying a new disk for use permalink

Here are my notes on configuring a new disk for use. Update device names, volume group, and logical volume names to taste.

This time, instead of running fdisk or cfdisk, I tried System Tools -> Disk Utility. I had used it previously to label external drives, since the label is used as the name of the mount directory in /media. It was certainly nicer than working with cfdisk. I created a single partition that spanned the entire disk.

I then made this partition ready for use with the following commands:

# pvcreate /dev/sdc1
# vgcreate lmc2 /dev/sdc1
# vgdisplay lmc2 | grep "Total PE"
# lvcreate -l 238466 lmc2 -n backup
# mkfs -t ext4 /dev/mapper/lmc2-backup
# emacs /etc/fstab
/dev/mapper/lmc2-backup /var/local/backup ext4 errors=remount-ro 0 3
# mount /var/local/backup

The vgdisplay command gets the total number of physical extents in the volume group, which is then passed to the lvcreate command. Here, I used all of them when creating the logical volume. I used LVM so that if I ever have to add another logical volume, I can simply resize the existing volume and add another.
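The vgdisplay-then-lvcreate step can be scripted instead of copying the extent count by eye. A sketch (the here-doc stands in for real vgdisplay output so it runs anywhere; on a real system pipe sudo vgdisplay lmc2 into the same awk):

```shell
# Pull "Total PE" out of vgdisplay output with awk. On a real system:
#   total_pe=$(sudo vgdisplay lmc2 | awk '/Total PE/ {print $3}')
total_pe=$(awk '/Total PE/ {print $3}' <<'EOF'
  Total PE              238466
EOF
)
echo "lvcreate -l $total_pe lmc2 -n backup"
```

Newer LVM versions also accept sudo lvcreate -l 100%FREE lmc2 -n backup, which skips counting extents entirely.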


Creating static mount points dynamically permalink

I have a pair of external backup drives that I use alternately for backups. For ages, I just let Nautilus mount them wherever it liked, and my backup script, which used tar under the covers, looked for a special file called BACKUPS in all of the directories in /media. However, I just switched to rsnapshot and needed to specify the root of the backup directory in rsnapshot.conf. The question was thus: how can I ensure that my two backup drives always have the same mount point?

One way is to use /etc/fstab. For example:

UUID=11111111-1111-1111-1111-111111111111 /media/backups ext3 rw,nosuid,nodev,user 0 0
UUID=22222222-2222-2222-2222-222222222222 /media/backups ext3 rw,nosuid,nodev,user 0 0

Note that there is currently a bug in Nautilus that duplicates the backups name in its bookmarks using the UUID format. A workaround for that is to use the link in /dev/disk/by-uuid as follows:

/dev/disk/by-uuid/11111111-1111-1111-1111-111111111111 /media/backups ext3 rw,nosuid,nodev,user 0 0
/dev/disk/by-uuid/22222222-2222-2222-2222-222222222222 /media/backups ext3 rw,nosuid,nodev,user 0 0

Since Nautilus and gvfs-mount create and remove mount points on the fly for removable media, it seems a shame to have to modify /etc/fstab. More importantly, I have set the no_create_root variable to 1 in rsnapshot.conf so that the backup fails rather than filling my root partition if the external drive isn't mounted, which means I don't want to create a permanent /media/backups mount point.

I learned that Nautilus will use the partition's label as a mount point if it exists; otherwise, it uses the UUID as the mount point, which I was seeing. I then changed the label of my external drives to backups via the System Tools -> Disk Utility command. So, without any local configuration, my external drives are mounted on /media/backups and that mount point is dynamically created and removed as needed. Case closed.


Goodbye Delicious, hello Pinboard permalink

Charles Arthur was kind enough to write Goodbye Delicious, hello Pinboard: why we'll pay for internet plumbing and I invite you to start with that article for my motivation to leave Delicious. Look to the Delicious forum to see that this truly is the winter of Delicious discontent.

I briefly checked out Google Bookmarks. I was disappointed to see that their tags are comma-separated. Adding bookmarks seemed to be pretty cumbersome, at least in Chrome. There is a bookmarklet available, but I don't want to waste screen real estate for a bookmark toolbar. Chrome extensions claim to be able to access Google Bookmarks but I couldn't figure out how to add bookmarks with them. Why can't one just use the existing Chrome bookmarking star to access a different bookmark tool?

I wasn't able to import my Delicious bookmarks (the feature exists, but Delicious responded with "access denied"--mmmmm), and there isn't a means to import an HTML file with bookmarks. I therefore didn't actually try its searching and browsing capabilities.

There is a limited sharing capability, but that is to be retired tomorrow.

In short, Google Bookmarks looks promising if you're willing to put in a little work to discover an easy way to add bookmarks and don't use the social aspect of bookmarking.

The Wikipedia page for "Social bookmarking" lists a number of related sites. Another Delicious user mentioned Pinboard, as did Charles Arthur's article above, so I thought Pinboard might be worth trying. At the moment, it has a nominal one-time fee of about $10.

To switch, export your Delicious bookmarks to an HTML file and import that file into Pinboard.

Pinboard has the look and feel of the original Delicious. It's fast. It's clean. I created a Chrome search engine using the query so that I can efficiently navigate my bookmarks using the Omnibox. You can browse bookmarks by adding and subtracting tags from your set of tags. URLs are a bit annoying: a URL in delicious such as would be

You can add a bookmark with a bookmarklet, but I don't want to waste real estate on a bookmark toolbar. There are quite a few Pinboard Chrome extensions for adding bookmarks. Ideally, an extension would use the existing star to add bookmarks and highlight it in yellow if the current page is bookmarked. Next best would be an icon. A context menu item would be helpful. I don't need a browse or search capability since I use the Omnibox for that. There is an official Pinboard extension called Pinboard Tools. However, Pinboard Plus is the best in my opinion: its icon changes color when the page is bookmarked, it provides a delete button, Enter confirms an autocompleted tag as well as submitting and dismissing the dialog, and selected text goes into the description. The only issue, which should be easily fixed, is that the focus does not start in the Tags field.

I sent a few ideas to the author, Maciej, and he has already responded graciously.

To summarize, Pinboard is what Delicious used to be. Maciej is planning to add the good stuff that Delicious had (breadcrumbs, tag bundles).

This weekend, I said goodbye to Delicious and have said hello to Pinboard.


Removing underlines from links in Chrome permalink

Apparently, I'm not alone in my dislike of link underlines. Links are already colored to indicate that they are links and the underlines can be very distracting on pages with many links. Many forums talk about ways to get rid of the underlines, including installing plugins, or writing JavaScript or CSS. Here is the simplest way to remove the underlines in Chrome.

Add the following text to ~/.config/google-chrome/Default/User StyleSheets/Custom.css and either refresh each page or restart Chrome. The comments describe what each section does.

/* Don't underline links by default. */
:link, :visited {
    text-decoration: none;
}

/* Underline links when hovering above them. */
:link:hover, :visited:hover {
    text-decoration: underline;
}


Where's the CPU temperature applet? permalink

I have an old Thinkpad which gets hot and shuts off even though the fan is on full speed. Maybe some fresh thermal grease is all it needs. In the meantime, having the CPU temperature applet is mandatory.

The package gnome-shell-extension-cpu-temperature seems to be the ticket. After installing this package (on my Fedora system--I don't see it on Debian at this time) and restarting the GNOME Shell (Alt-F2 r RET), the temperature appeared on the status bar. I still need to figure out how to run a command when the temperature exceeds a certain threshold.

I've also found it helpful to turn down the CPU speed at times. This can be done with:

$ sudo cpufreq-set --cpu 0 --max 1.6GHz
$ sudo cpufreq-set --cpu 1 --max 1.6GHz

Use cpufreq-info to inquire what values of clock rate you can use. The rate 1.6 isn't my slowest, but it's enough to cool down the engines.
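With more cores, one cpufreq-set line per CPU gets tedious; a loop over nproc covers them all. This sketch only prints the commands (drop the echo and add sudo to actually apply the cap):

```shell
# Print a cpufreq-set command for every CPU the machine reports.
# "echo" is a safety guard so the sketch runs without root; remove it
# (and prepend sudo) to apply the 1.6GHz cap for real.
for cpu in $(seq 0 $(( $(nproc) - 1 ))); do
  echo cpufreq-set --cpu "$cpu" --max 1.6GHz
done
```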


So what replaces the desktop metaphor? permalink

The Finding and Reminding document discusses the pertinent design considerations going into the GNOME Shell, but does not discuss what specific elements of the GNOME Shell address the desktop metaphor.

The original GNOME Shell design paper says, "In the Shell design, the `desktop' folder should no longer be presented as if it resides behind all open windows. We should have another way of representing ephemeral and working set objects."

And that is?

I could not find a concise answer to this question, so I'll attempt to do so. I agree with the designers that displaying the desktop folder on the background has a couple of major problems. The icons will either be occluded, or distracting. I think the desktop icons can be replaced in the following ways:


Why I love GNOME 3.0 permalink

OK, so now I'm using GNOME 3.0. And I'm loving it.

After using GNOME 3 for just half a day, coming home to my GNOME 2 desktop on my laptop seemed really ugly and clunky. I've removed my window list and workspace widget to clear the clutter somewhat, and moved my calendar to the middle so there is some semblance of consistency between home and work.

I have to agree, even after using GNOME since its infancy in 1999, and customizing it thoroughly, that hanging on to Window 95 roots isn't a good thing. I enjoy letting go and not worrying about customizing the desktop.

What is GNOME 3.0? It is a huge redesign, and can work on touchscreens and netbooks as well as on desktops. The GNOME Shell and Mutter (a metacity fork) replace the GNOME Panel and the metacity window manager. Written in JavaScript and using CSS, the GNOME Shell uses the same tools employed by HP's (Palm) WebOS. It uses the Clutter graphics and scene graph library. Unity, as it turns out, is Ubuntu's offering of a new UI.

I would strongly suggest getting acquainted with GNOME 3.0 before upgrading your system and being shocked by the experience. To start, the GNOME 3 overview page contains some 45-second videos that highlight the new features and might even tempt you into trying the GNOME Shell. That, and the GNOME Shell Design paper, the GNOME Shell Tour, and especially the GNOME Shell Cheat Sheet are great introductions. The GNOME 3.0 release notes also provide a quick overview. Had I read these before the update, it would have helped to reduce the initial shock I received upon logging in.

After a few hours, I no longer miss the window list, fixed workspaces, application menus, and Desktop icons. The Activities window and dynamic workspaces, with an improved Alt-TAB/Alt-` command, make it easy to switch between applications and application windows. The application and recent document search function replaces the application menu and desktop icons just dandy. The notification area has replaced my widgets on the taskbar. I absolutely love the window tiling gestures.

But, while I look forward to GNOME 3 coming to Debian stable, if GNOME 3 is still not for you, please read my comments in Why I hate GNOME 3 to see how to go back to GNOME 2 (mostly).

Here are some additional articles on GNOME 3 that I found which may be interesting to you.

Here's a list of bugs or "features" that I discovered along the way along with fixes or workarounds when possible.

There are a few items in my last post that are applicable to GNOME 3 as well (keyboard rate, middle mouse button, etc.).


Why I hate GNOME 3.0 permalink

License: You agree to read Why I Love GNOME 3 before reading this blog posting.

We upgraded to Fedora 15 at work last night. This came with GNOME 3.0. You will likely have a WTF moment when GNOME 3.0 comes to Debian. More likely, you'll have a WTF day. I did. Your first reaction may be to get back to GNOME 2. This post describes how you can (mostly) do that. I hope you find it helpful. (I also hope that you read Why I Love GNOME 3 and find that GNOME 3 works for you too.)

The Fedora 15 upgrade brought some surprising changes. What follows is a list of my questions, and answers that I discovered.


Setting screensaver picture directory shouldn't be so hard permalink

You select the Pictures folder option in the Screensaver Preferences, and you'd expect a simple Configure button where you can tell the screensaver where to get your pictures. Nope. Instead, you have to JFGI and discover that you have to edit /usr/share/applications/screensavers/personal-slideshow.desktop and change the Exec line. For example, I have:

Exec=/usr/lib/gnome-screensaver/gnome-screensaver/slideshow --location /home/wohler/doc/photos/apod

The gnome-screensaver program appears to ignore your ~/.local/share/applications folder which is why you have to edit the system file. GNOME 3 doesn't even have a screensaver (yet).

Note that the path on Fedora 15 is /usr/libexec/gnome-screensaver/slideshow.


Juniper VPN produces martian source in logs permalink

After firing up the Juniper VPN, I get martian logging when the other hosts on my local network send out broadcasts. For example:

Apr 26 17:54:33 olgas kernel: [838375.198780] martian source from, on dev wlan0
Apr 26 17:54:33 olgas kernel: [838375.198787] ll header: ff:ff:ff:ff:ff:ff:90:27:e4:e9:26:8b:08:00
Apr 26 17:54:33 olgas kernel: [838375.200480] martian source from, on dev wlan0
Apr 26 17:54:33 olgas kernel: [838375.200485] ll header: ff:ff:ff:ff:ff:ff:90:27:e4:e9:26:8b:08:00

I discovered that you can turn off the logging of those packets via /proc/sys/net/ipv4/conf/<interface>/log_martians. In my case, this is done with the following.

sudo sh -c "echo 0 > /proc/sys/net/ipv4/conf/wlan0/log_martians"

The proc filesystem is documented in the kernel source tarball's Documentation subdirectory.

You don't want to always ignore martian logging since it helps to identify IP address spoofing. See the second post in this blog warning against it. I therefore clear this setting manually after closing the VPN.

sudo sh -c "echo 1 > /proc/sys/net/ipv4/conf/wlan0/log_martians"
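Both toggles can also be written with sysctl(8), which avoids the sh -c quoting dance. In this sketch the -w lines are commented out since they need root; the last line just reads the current value and works as any user:

```shell
# Equivalent toggles via sysctl(8); wlan0 is the interface from the
# logs above.
# sudo sysctl -w net.ipv4.conf.wlan0.log_martians=0   # while the VPN is up
# sudo sysctl -w net.ipv4.conf.wlan0.log_martians=1   # after closing it
cat /proc/sys/net/ipv4/conf/all/log_martians          # check the current value
```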

Another reader in the SUSE blog came up with a possible solution, included below. I have not yet tried to play with the routing as he suggested.

Well, seeing as no one else has come up with a solution for me I found it myself I thought I would post it here for the benefit of future readers. It comes down to simple routing on the Linux machine. Even though the default gateway (192.168.2.x) is set for the normal subnet (lets say it doesn't want to work for a different subnet (lets say So what you have to do is add in a route like so:
        Gateway: 192.168.2.x (same as default gateway)
        Device: (whatever device the communication is coming in on)


Fixing Java's SocketException: Network is unreachable permalink

After upgrading to squeeze, my Java programs stopped networking. They all complained with, "SocketException: Network is unreachable." While there is some disagreement as to whether this is a Debian bug or a Java bug, the problem extends to both Sun/Oracle's version of Java as well as the OpenJDK.

There are two workarounds. On a per-program scale, pass in the Java option. On a global scale, edit /etc/sysctl.d/bindv6only.conf, set net.ipv6.bindv6only to 0, and run service procps restart.
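For reference, here is a minimal sketch of what the edited /etc/sysctl.d/bindv6only.conf contains (reconstructed from the description above; your file may carry additional comments from the package):

```shell
# /etc/sysctl.d/bindv6only.conf
# 0 = IPv6 sockets also accept IPv4 connections, which is what Java's
# networking stack expects.
net.ipv6.bindv6only = 0
```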

I went with the latter workaround since I had to affect a suite of programs. However, there may be consequences of this setting.


Dealing with syntax errors when starting X in squeeze permalink

I installed squeeze this weekend, but could not start an X session. From a console window, I could see the syntax errors in ~/.xsession-errors from when my .gnomerc was run. It contains the line . ~/.bashrc so that my X session has all the environment variables that it needs, especially PATH.

It turns out that the problem was caused by a symlink from sh to dash. While it might make a new squeeze system run faster, it certainly broke my X session. Perhaps the X session should use bash now, at least where it reads ~/.gnomerc.

The workaround for this is to run dpkg-reconfigure dash and say no when it asks to link sh to dash.

See Bug #595906.


Installing Flash on 64-bit squeeze permalink

2010-09-26 update: The flashplugin-nonfree package in sid installs the new 64-bit version of Flash from Adobe. The instructions in the wiki page below have been updated accordingly.

The Debian wiki came to the rescue. I simply followed the first four steps in section Debian Testing 'Squeeze' amd64 in the FlashPlayer wiki page.


Updating a remote server from etch to lenny permalink

I took some notes where I had to deviate from the release notes, which might be helpful to both me and you.


I accepted the default for any prompts that were given.

Note that the upgrade created new configuration files. After the upgrade, I replaced configuration files with the new ones and merged my changes into them. Here's what I ran to identify them:

sudo find /etc -name '*.dpkg-*' -o -name '*.ucf-*'

In the spirit of keeping a minimal server, the last step after the reboot, configuration file cleanup, and performing the tasks in section 4.10, Obsolete packages, was to remove any packages that had been installed as part of the upgrade that aren't necessary.

One way to do that is to run one or both of the following commands. The first lists the packages you have explicitly installed while the second lists all installed packages. Do this before and after the upgrade, diff the results, and remove any packages you don't want.

aptitude search '!(!~i|~M)' -F %p
dpkg --get-selections |grep install
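The before-and-after comparison can be sketched with comm(1); the package names below are sample data standing in for the real aptitude output:

```shell
# Snapshot-and-diff sketch. On a real system generate the files with:
#   aptitude search '!(!~i|~M)' -F %p | sort > /tmp/pkgs.before
printf 'bash\nvim\n'                  > /tmp/pkgs.before
printf 'bash\nlibio-zlib-perl\nvim\n' > /tmp/pkgs.after
# Lines only in the "after" list: packages the upgrade pulled in,
# candidates for removal.
comm -13 /tmp/pkgs.before /tmp/pkgs.after
```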

Since I maintain my system files with Subversion, I also ran svn diff / (which might not be useful on your system). This identified new files that seemed suspicious, for example /etc/cups; I don't print from my server :-). I used dpkg --search file to identify the package that owned each suspicious file.

Finally, use deborphan --guess-all to identify additional packages that can be removed. If it lists packages you installed, use deborphan -A package to tell deborphan that you need it.

Release Notes: 4.2.2. Disabling APT pinning

The release notes don't give an example, but here is what they meant:

Package: *
Pin: release a=stable
Pin-Priority: 1001

If you don't have /etc/apt/preferences, consider adding it with this for its sole contents. Comment it out after the upgrade for next time. The comment character is "Explanation:" :-).

Release Notes: 4.4. Preparing sources for APT

Here is the /etc/apt/sources.list file that I used for the upgrade.

deb lenny main non-free contrib
deb-src lenny main non-free contrib
deb lenny/updates main contrib non-free
deb-src lenny/updates main contrib non-free

Please don't use this as is; rather, use it as an example of what is described in the release notes. In particular, your mirror might be different. Note that I changed etch to lenny and commented out the non-critical sources to make the upgrade go as smoothly as possible. I'll leave the other sources commented out until needed since lenny has the stuff I was getting from backports and sid.

Release Notes: 4.5.6. Minimal system upgrade

The sudo aptitude safe-upgrade command issued the following errors:

The following packages have unmet dependencies:
libgl1-mesa-swx11: Conflicts: libgl1 which is a virtual package.
libgl1-mesa-glx: Conflicts: libgl1 which is a virtual package.

I resolved this by running:

sudo aptitude purge libgl1-mesa-swx11 libgl1-mesa-glx xbase-clients

I reinstalled xbase-clients after the upgrade.

Release Notes: 4.5.7. Upgrading the rest of the system

This went swimmingly!

After running sudo aptitude dist-upgrade, I ran the following (mentioned at the end of this section) to ensure things were clean.

sudo aptitude -f install

This removed the now-unused libio-zlib-perl package.

Release Notes: 4.8.1. How to avoid the problem before upgrading

I didn't want to take the chance of a hanging reboot. Since UUIDs are the wave of the future, I opted to follow the instructions in the section "To implement the UUID approach."

The instructions aren't correct. I got an error when I ran update-grub after updating menu.lst. After replacing /dev/hda* with UUIDs for / and /boot in /etc/fstab, update-grub ran fine.

In addition, I ran the following commands to do the same thing with swap (where /dev/hda5 is the device associated with swap in /etc/fstab). First I disabled some low-priority processes to avoid using swap.

sudo swapoff -a
sudo mkswap /dev/hda5

The output of the mkswap command provided a UUID that I could use for the swap entry in /etc/fstab. I then ran the following to turn swap on:

sudo swapon -a
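For reference, the swap line in /etc/fstab ends up in this shape (the UUID shown is a placeholder, not mine):

```text
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none swap sw 0 0
```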

I then pulled the trigger and rebooted. It worked!

After the reboot, my devices were still /dev/hda{1,5,6}, so I would have been OK had I not done this. However, if you have newer hardware, the problem described might be an issue for you. Or not. It's possible that it was only a problem for someone upgrading from, say, a 2.4 kernel.


Creating PDFs from scans permalink

This can be done completely in XSane. In XSane, select the Multipage mode. Select a working directory and press Create project (the PDF will be written in the directory containing this working directory). Use XSane as usual, including using Acquire Preview to select your scan area and pressing Scan to initiate the scan(s). Press the Save multipage file button in the multipage project dialog when you're done to create the PDF.

Setting the Scansource choice to Automatic Document Feeder can be very helpful. In this case, setting the ADF-Pages item to the number of pages in your hopper (or larger) makes XSane scan the pages automatically. Once, XSane stopped when it was done scanning; another time it honored my ADF-Pages setting (and dutifully scanned dozens of blank images), so it is prudent not to make this number too large.

The links below describe some pretty fancy ways to turn scans into PDF documents, which are useful for bulk scanning of books and magazines. However, if you aren't scanning that much, creating PNGs (for text) or JPGs (for images) from the scans and running the following command is sufficient:

convert *.png *.jpg document.pdf

I'd suggest using 300 DPI if you plan to print the document. If you're only going to view the document on the screen, you can create smaller documents by scanning at 150 DPI and/or by choosing a Scanmode of Gray.

Creating multi-page PDF documents from scanned images in Linux
How to create ebooks with Linux
How to scan printed papers and to create Metadata


Editing videos II permalink

I recently tried to put together a few videos taken on an iPhone in cinelerra, but the sound in the exported video was corrupted. Kino, however, exported useful video and audio.

And today, I wanted to remove some bits of a video. Again, Kino made it easy. I loaded the video, selected the edit mode, used the cursor and arrow keys to pick frames, pressed the Split scene button at the beginning and end of each scene that I wanted to delete, and then pressed the Cut scene button. Note that the Split scene button creates the split before the selected frame. I then exported the video using H.264 MP4 Dual Pass Tool (Export button, Other tab). According to Optimizing your video uploads, this is a preferred YouTube format.

Please refer to Scott Hanselman's How to rotate an AVI or MPEG file taken in Portrait to see how to rotate a video.


Creating OpenOffice envelope data from Google Contacts permalink

  1. In the Google Contact Manager, click on the Export link.
  2. Export the group of your choosing, or Everyone, select the Outlook CSV format (note that the format is specific to LookOut/OutBreak), and press Export.
  3. Save the file in, for example, contacts.csv.
  4. Run ooffice contacts.csv, run File -> Save, and press the No button to save the spreadsheet in OpenDocument file format (contacts.ods).
  5. Run ooffice envelope.odt (previously created using Insert -> Envelope as described in the references).
  6. Run the Edit Exchange Database command and press the Browse button.
  7. Select the contacts.ods file and press the Close button.
  8. Run the View -> Data Sources command, select the contacts -> Tables node, and double-check that the data source's headings match those in your envelope. Drag and drop data source headings to the envelope if necessary.
  9. Run File -> Print, press the Yes button (print a form letter), and press the OK button.
  10. To test, select the Print to file checkbox, or just push the OK button to print.

My references include Robbie Ferguson's YouTube video and Kevin Andle's page on creating an envelope in OpenOffice.


GNOME Terminal Select-by-word characters permalink

The following update to GNOME's select-by-word characters setting in the Edit profile command lets you select an entire email address or URL (including Subversion svn+ssh URL) by simply double-clicking on it.
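The setting itself was elided above; purely for illustration (an assumption, not necessarily the value the author used), a commonly used select-by-word character set looks like:

```text
-A-Za-z0-9,./?%&#:_=+@~
```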



svnadmin: Unable to parse unordered revision ranges permalink

I got the following message when I tried to load a Subversion dump:

svnadmin: Unable to parse unordered revision ranges '9414-9445' and '7044-8971'

I discovered that I had some uncommitted transactions from years past using the svnadmin lstxns command. I cleaned them up with the following command, but this did not fix the problem.

sudo svnadmin rmtxns /repos/main $(svnadmin lstxns /repos/main)

I then performed svn info file:///var/tmp/foo on the temporary repository I was loading and observed that the last revision was 9547. I then went into the dump file and looked for the next revision, 9548, to see what may have caused the problem. I found the smoking gun!

K 13
V 177
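The property dump here was an svn:mergeinfo value; K 13 is the length of the key name (13 characters, i.e. "svn:mergeinfo") and V 177 the length of the value. Its general shape, using the unordered ranges from the error message (the branch path and exact value are illustrative, not the real 177-character value):

```text
K 13
svn:mergeinfo
V 177
/branches/tassie:9414-9445,7044-8971,...
```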

Ah ha! Note how the revisions for the tassie branch are swapped. I fixed this by editing the Subversion database. Note that I use FSFS. I'm not sure if this would be possible with BDB.

$ diff db/revs/9548.orig db/revs/9548
--- db/revs/9548.orig	2008-11-15 14:03:54.000000000 -0800
+++ db/revs/9548	2009-04-25 12:08:43.000000000 -0700
@@ -181,7 +181,7 @@
 id: 3zl.x0.r9548/2049

I grepped for that same string and found it in revision 9590 as well. I performed a similar fix.

I verified the fix with the following:

$ svnadmin create foo
$ svnadmin dump --quiet /REPO | svnadmin load --quiet foo
$ svn log -v file:///REPO > repository.log
$ svn log -v file:///var/tmp/foo > temp.log
$ diff repository.log temp.log

I vaguely remember having similar troubles in the past after I converted from the svnmerge properties to the built-in Subversion 1.5 svn:mergeinfo property. At the time, I think I made a similar change to the svn:mergeinfo property manually and checked in the change. I am not sure if the problem was caused by the svnmerge conversion script or by Subversion 1.5 itself.

It's also possible that the svn:mergeinfo property was munged by the bug described in svn: Working copy path 'foo' does not exist in repository.

The moral of the story is that after you convert from svnmerge to Subversion's built-in merge tracking, do inspect the svn:mergeinfo property after the svnmerge conversion. If you see that the revision numbers are not in order, fix them as described above. Also inspect the svn:mergeinfo property after subsequent merges--particularly if you hit any Subversion bugs--before you commit the changes. If you see that the revision numbers are not in order, simply edit the svn:mergeinfo property before you check it in.


Editing videos I permalink

My Canon G9 can shoot AVI movies. I recently shot a video that I wanted to edit. I just needed to chop bits out at the beginning and end. However, it might be useful to dub in audio in the future. A quick Google search and apt-cache search turned up the following:


Getting GoogleEarth to run on a 64-bit Debian system permalink

I was able to run GoogleEarth, a 32-bit binary, on my system by first performing the actions described in Running 32-bit binaries on a 64-bit Debian system. I then ran the following commands as described in Debian bug #514122. Note that it isn't necessary to link to the installed copy.

$ cd /usr/lib/googleearth/
$ sudo mv

If GoogleEarth freezes up your system, as it did to mine, ensure that lib32nss-mdns is installed. Then remove .googleearth and rerun make-googleearth-package --force (like many others, I got lots of warnings about shared libraries not being available when I ran this command; I ignored them). Move out of the way as described above and try again.

Since then GoogleEarth has frozen up my system once or twice so you might just try restarting GoogleEarth without bothering to follow the instructions in the previous paragraph. I've since found that GoogleEarth works fine for a period of time after a reboot (without repeating the above steps). I'm hoping that a real 64-bit version will clear this up for good.

Not sure why GoogleEarth continues to complain about not being able to create .googleearth after it creates it!


Running 32-bit binaries on a 64-bit Debian system permalink

I was able to run GoogleEarth, a 32-bit binary, on my 64-bit system after running the following.

$ sudo aptitude install ia32-libs ia32-libs-gtk lib32nss-mdns

I've also read that you have to install Skype as follows (although I have not yet done this).

$ sudo dpkg -i --force-architecture skype-debian_2.0.0.72-1_i386.deb


GnuCash Price Editor: Unable to retrieve quotes for these items permalink

This error is described in Debian bug #490395. Until the patch contained within is applied to Debian, apply the patch yourself. It's really easy.


Enabling Emacs keybindings in Iceweasel (Firefox) permalink

There are two ways to enable Emacs keybindings in Iceweasel (Firefox). The first is the GNOME way. Run gconf-editor, edit the /desktop/gnome/interface/gtk_key_theme key, and change it from Default to Emacs.

The GTK way is to append the following to ~/.gtkrc-2.0 and restart Iceweasel.

include "/usr/share/themes/Emacs/gtk-2.0-key/gtkrc"

gtk-key-theme-name = "Emacs"


svn: Working copy path 'foo' does not exist in repository permalink

The error listed in the title is due to a Subversion bug, which has been fixed in 1.6. The bug was triggered for me when I moved a file in the trunk, merged the renamed file into a branch, and then later tried to merge the branch back into the trunk.

I found a workaround that can be used if you still have 1.5. For me, the renamed file was merged into the branch in revision 9739. Normally, I'd issue the following command to merge the branch back into the trunk:

$ svn merge branch-URL

However, this doesn't work in this scenario. The workaround is to avoid including the revision of when the trunk was merged into the branch. This is fine because the trunk already has the changes. In the example, revision 9688 is the last revision merged into the trunk, and 9750 is the HEAD version on the branch (or simply the largest revision in your repository).

$ svn merge -c9739 --record-only branch-URL
$ svn merge -r9688:9738 -r9739:9750 branch-URL


Eclipse crashes on 64-bit machine with Java 6 permalink

Although Eclipse seemed to be well-behaved on a 64-bit etch system with Java 6, after upgrading to lenny, it started crashing all the time. I found that the crashes went away after uninstalling Subclipse. But I found a better workaround. My crashes had the following signature:

#  SIGSEGV (0xb) at pc=0x00007f762d5c225a, pid=2534, tid=1091451216
# Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b23 mixed mode
# Problematic frame:
# V  []
Current CompileTask:
      (469 bytes)

I was able to work around this problem by not compiling the method listed above. I launch eclipse from a script, so I added -XX:CompileCommandFile to eclipse's args as follows:

exec ./eclipse "$@" -vmargs -Xmx1500M -XX:MaxPermSize=256M \
    -XX:CompileCommandFile=/usr/local/etc/hotspot

The file /usr/local/etc/hotspot contains:

exclude org/eclipse/core/internal/dtree/DataTreeNode forwardDeltaWith

This file is handy if you have several methods to list. If you only have one, you can pass the exclusion directly instead:

-XX:CompileCommand=exclude,org/eclipse/core/internal/dtree/DataTreeNode,forwardDeltaWith

Finally, you can add either of these arguments to your eclipse.ini in Eclipse's installation directory.


Naughty nameservers permalink

Earthlink, and some other knaves who maintain root nameservers, have a really annoying "feature" in which they return the IP address of one of their search pages if you enter a bogus hostname in your browser's location bar.

The problem is that this breaks the feature in browsers like Firefox which try prepending a www if the host lookup fails. This is also annoying to the user since instead of fixing a simple typo, the user has to clear the bogus URL provided by Earthlink and re-enter the URL.

I found a way to subvert Earthlink's subversion: the dnsmasq program is a caching-only nameserver which has a feature which translates bogus IP addresses to NXDOMAIN DNS records. I configured /etc/dnsmasq.conf as follows (note that I do not have any programs that update /etc/resolv.conf):
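The configuration itself was elided; the relevant dnsmasq directive is bogus-nxdomain, which takes the address of the provider's search page (the IP below is a documentation placeholder, not Earthlink's actual address):

```text
# Return NXDOMAIN instead of an answer whenever the upstream
# nameserver returns this address:
bogus-nxdomain=192.0.2.1
```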


I then edited /etc/resolv.conf as follows:

nameserver 127.0.0.1

Here is what the host command on a bogus host returned before I made this change:

$ host [bogus host]
[bogus host] has address [Earthlink's search-page address]
;; connection timed out; no servers could be reached

And here is what it looks like now, and how it should have looked in the first place!

$ host [bogus host]
Host not found: 3(NXDOMAIN)

As a nice side-effect, the dnsmasq server also insulates me from the transient DNS outages at Earthlink, as seen in the previous example.


Firefox rewrites host part of URL permalink

I had a problem whereby the host in the URL would randomly be rewritten with my domain. I suspected that Firefox wasn't getting to DNS and had some sort of "feature" for rewriting the host part. I was partially right.

I suspect that when there was a transient DNS error, the search directive in /etc/resolv.conf would cause the host in the URL to be rewritten with my domain appended. Because of the wildcard entry in my zone file, my web server's address would be returned. My web server then rewrote the address with as the host.

I deleted the wildcard entry and the problem disappeared as soon as the change propagated down to my local nameservers.


Bluetooth stopped working! permalink

After a recent upgrade or something, I noticed that the Bluetooth light on my ThinkPad was out and Fn-F5 didn't turn it on. I was able to enable Bluetooth manually with the following command (which should have been executed by a script in /etc/acpi/):

sudo sh -c 'echo "enabled" >| /proc/acpi/ibm/bluetooth'

So, why didn't this script run when Fn-F5 was pressed?


Dual-head Nvidia configuration permalink

With the help of HOWTO Dual Monitors, I was able to simply add the three Option lines to my xorg.conf as shown below, restart my X server, and be on my way.

Section "Device"
        Identifier      "nVidia Corporation NV44 [Quadro NVS 285]"
        Driver          "nvidia"
        Option          "TwinView"
        Option          "MetaModes" "1920x1200,1680x1050; 1920x1200,1280x1024; \
                        1600x1200,1600x1200; 1280x1024,1280x1024; 1152x864,1152x864; \
                        1024x768,1024x768; 800x600,800x600; 640x480,640x480"
        Option          "TwinViewOrientation" "RightOf"
EndSection

The MetaModes line is actually all on a single line.


Fixing scanning with HP printers permalink

Although I could print, I could no longer scan, and the HP Device Manager from the system tray couldn't communicate with my printer either. I was seeing the following error message in syslog:

python: hp-toolbox(UI)[6561]: error: Unable to communicate with device
(code=12): hp:/usb/OfficeJet_G85?serial=SGG16E0ZRVVL
python: hp-toolbox(UI)[6561]: warning: Device not found

As of version 2.8.2 of hplip, all communication with the hp: device is confined to members of the scanner group. Therefore, the fix was to run sudo adduser wohler scanner, log out, and log back in.


Fixing escapes in man output permalink

After installing a new system, man started emitting these ugly <80><90> escapes all over the place. I finally found the cause. I had LC_ALL set to en_US.utf8 but LESSCHARSET was still set to latin1. The fix was to change LESSCHARSET to utf-8.
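In shell terms, the fix amounts to the following (values from the text above):

```shell
# Make the less character set agree with the UTF-8 locale:
export LC_ALL=en_US.utf8
export LESSCHARSET=utf-8
```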


Configuring postfix to use SMTP AUTH permalink

I finally got around to configuring SMTP AUTH (SASL) in postfix.

On the Server

  1. Create /etc/postfix/sasl/smtpd.conf and add the following to it:
    pwcheck_method: saslauthd
    mech_list: plain login
  2. Add the following to /etc/postfix/
    # TLS parameters.
    smtpd_tls_security_level = may
    smtpd_tls_auth_only = yes
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
    # SMTP AUTH parameters.
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_security_options = noanonymous
  3. Modify /etc/default/saslauthd as follows:
    OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
  4. Run the following commands:
    # aptitude install sasl2-bin libsasl2-modules
    # dpkg-statoverride --add root sasl 710 /var/spool/postfix/var/run/saslauthd
    # adduser postfix sasl
  5. Restart saslauthd and postfix.

On the Client

  1. Create /etc/postfix/sasl/sasl_passwd and add one or both of the following lines to it, as appropriate. Make sure the mode of this file and the directory that contains it are 600 and 700, respectively.
    []:smtp your-login:your-password
    []:submission your-login:your-password
  2. Add the following to /etc/postfix/
    # SASL
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd
    smtp_sasl_security_options = noanonymous
    # TLS
    smtp_tls_security_level = encrypt
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
  3. Run postmap /etc/postfix/sasl/sasl_passwd and restart postfix.
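To spot-check the server half, one can open a STARTTLS session by hand and confirm that AUTH is offered only after TLS is negotiated; a hedged sketch (the hostnames are placeholders, and the 250 line reflects the plain and login mechanisms configured above):

```text
$ openssl s_client -starttls smtp -connect mail.example.com:587
...
EHLO client.example.com
250-AUTH PLAIN LOGIN
```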

See the following references for the whys and wherefores:
James Turnbull, Hardening Linux, 2005, p. 395-400.
Luca Gibelli <nervous at>,
Fabian Fagerholm <fabbe at>, /usr/share/doc/sasl2-bin/README.Debian.


Iceweasel gets no respect permalink

I was trying to use a feature at Bank of America's Homebanking (SafePass) and it didn't work for me. First I had to install Flash 9. But I discovered that the site was also not recognizing my Iceweasel browser. I was able to fix this and enable SafePass by navigating to the URL about:config, filtering on "agent", and changing the general.useragent.extra.firefox setting from Iceweasel/ to Firefox/2.0.


Atheros ath5k wireless driver permalink

The ath5k driver for the Atheros wireless chipset is built into kernel 2.6.24. Remove madwifi-tools, or blacklist the madwifi modules in /etc/modprobe.d/madwifi; then run modprobe -r on the ath* and wlan* modules, and finally run modprobe ath5k.


Aptitude equivalent of dpkg --get-selections permalink

I wanted a list of packages on my current system so that, in case I ever needed to recreate my entire system, I could just say

aptitude install $(cat packages)

Of course, I don't want automatically installed packages in packages. Thanks to Scott Wegner on the Ubuntu forums, here's how I created the packages file:

aptitude search '!(!~i|~M)' -F %p > packages


Kernel 2.6.25 upgrade permalink

I installed 2.6.25 from sid since it didn't pull in anything else.


Kernel 2.6.24 upgrade permalink

A while ago I installed 2.6.24. While it fixed the problem where gpsbabel could not talk to the usb: device, I found that my wireless (Atheros) connection would drop after a while. The network with 2.6.22 was fine. A second 2.6.24 update fixed the network problem, but a third 2.6.24 update broke it again.


Recovering from an interrupted aptitude permalink

Against my better judgment, I remotely interrupted an aptitude session so that I could continue with the installation at my current location. This put the package that was being installed into the half-installed state. After this, aptitude responded with:

Writing extended state information... Error!
E: I wasn't able to locate a file for the sun-java6-bin package.
   This might mean you need to manually fix this package. (due to missing arch)
E: Couldn't lock list directory..are you root?

After a bit of investigation, I discovered how to fix the dpkg database:

sudo dpkg --force-remove-reinstreq --remove sun-java6-bin


Resizing filesystems with LVM permalink

When I last installed lenny, I opted for the encrypted LVM filesystems. I recently ran out of room in /usr, so I now had the opportunity to use LVM! I was apprehensive that the LUKS (Linux Unified Key Setup) encryption might get in the way, but since I wasn't dealing with the root filesystem, it wasn't an issue: I was able to work with the running system.

I learned that changes must be done in 4 MB increments, the size of a physical extent. In my ignorance and inexperience, I was nervous that the size given to resize2fs might round up while the same size given to lvreduce rounded down, which would mean that the end of the filesystem would get guillotined. I picked even gigabyte values, mostly because resize2fs doesn't accept fractional values, but also because a gigabyte is divisible by the 4 MB extent size and the 512-byte disk sector size, as well as any other unit the system might throw at me. If you choose megabytes as your unit, ensure the value is divisible by four. At any rate, I threw in an extra fsck at the end of each operation for paranoia, and all seemed to go well.
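The divisibility worry is easy to check with trivial arithmetic; for instance, a 44 GB size is an exact number of 4 MB extents:

```shell
# 44 GiB expressed in MiB, divided by the 4 MiB physical extent size:
echo $(( 44 * 1024 / 4 ))   # prints 11264 (extents)
# The remainder must be 0 for an exact fit:
echo $(( 44 * 1024 % 4 ))   # prints 0
```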

I first had to decide how much space to transfer. I'm running out of space on my laptop, so I didn't want to steal much from /home. I ran lvs, df, and df -h to get some numbers. I decided that 500 MB would be enough, so I first needed to reduce /home from 44.5 GB to 44 GB.

# lsof
# kill [any processes still running out of /home]
# umount /home
# fsck -f /dev/mapper/olgas-home
# resize2fs -p /dev/mapper/olgas-home 44G
# lvreduce [--test] -L 44G olgas/home
# fsck -f /dev/mapper/olgas-home
# mount /home

Note the --test lvreduce argument above. I used that first to see what lvreduce would do. It's more useful when you aren't using gigabytes as a unit. You'll see what I mean when you run lvextend in the next example.

I then ran vgs (and vgdisplay) to see the Free Size which should now be around 500 MB. It was 788 MB in this case and that's the number I used to grow /usr in the lvextend command below.

# shutdown now "Resizing filesystems"
# lsof /usr
# kill [any /usr processes still hanging around]
# umount /usr
# fsck -f /dev/mapper/olgas-usr
# lvextend [--test] -L +788M olgas/usr
# resize2fs -p /dev/mapper/olgas-usr
# fsck -f /dev/mapper/olgas-usr
# mount /usr

I then opted for a quick reboot so that if that caused trouble, it would be now rather than when I least expected it. When the system returned, df showed that I once again had breathing room in /usr. While it took a while this time around for me to think I knew what I was doing, the next time, it'll go quickly. Unless it's the root filesystem, in which case I'll have to learn how to turn on LUKS when running with a Live CD.


AJ Lewis, LVM HOWTO.
Bodhi Zazen, How to Resize a LUKS Encrypted File System.
Martti Kuparinen, Hard Drive Encryption in My Ubuntu Installation.


/dev/random versus /dev/urandom permalink

I just learned the difference between /dev/random and /dev/urandom. Use the former when you need strong randomness for keys; use the latter when you need speed and don't expect the bits to be broken (like when scattering random bits on a cleaned disk partition or when preparing the partition for encryption).
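A minimal sketch of the fast case, filling a scratch file (rather than a real partition, so it's safe to try) with urandom bits:

```shell
# /dev/urandom is fine here: speed matters, cryptographic strength does not.
dd if=/dev/urandom of=scratch.img bs=1M count=1 2>/dev/null
ls -l scratch.img
```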


gnome-keyring versus ssh-agent permalink

This morning, ssh worked without having to run ssh-add, which is strange because I expire my passphrase. I then ran ssh-add and got an SSH_AGENT_FAILURE message. Apparently, gnome-keyring usurped ssh-agent, as reported in BTS #473864.

Until I learn more about gnome-keyring, I've disabled the ssh component as Josh Triplett suggested by unsetting the gconf key /apps/gnome-keyring/daemon-components/ssh. You can do this in gconf-editor, or run the following command:

gconftool-2 --set /apps/gnome-keyring/daemon-components/ssh false --type=bool

2010-12-05 update: It appears that this bug is fixed in squeeze. This workaround is no longer necessary.


Syncing the Treo over USB permalink

I was spurred on by Tommy Trussell to enable syncing over USB so that I could take advantage of the sync button on the cradle and because it's much, much faster than using net: over Bluetooth.

When I plugged in the Treo and hit the button on the cradle, there wasn't a single message in the syslog and lsusb didn't list the device either. I found that if you unload ehci_hcd, then the system recognizes the Treo. However, after a reboot, I found that my system recognized the Treo (under uhci_hcd) even though the ehci_hcd module was still loaded, so all is well.

I also found that pilot-xfer -l -p usb: didn't connect initially. It seems that the first time you HotSync, you need to run the pilot-xfer command before starting HotSync on the Treo. After that first time, the order doesn't matter.

I've updated Using the Palm Treo 650 with Debian GNU/Linux accordingly.


Talking to a Garmin GPS permalink

In order to get the usb: filename to work with gpsbabel, follow the directions in Hotplug vs. Garmin USB on Linux, namely, add the following to /etc/modprobe.d/local:

blacklist garmin_gps

And add the following to /etc/udev/rules.d/51-garmin.rules:

SYSFS{idVendor}=="091e", SYSFS{idProduct}=="0003", MODE="0666"

However, while this worked for kernel 2.6.18, later kernel versions broke it! It is still not working as of 2.6.22.

Newsflash! I inserted the garmin_gps module and tried using /dev/ttyUSB0 instead of usb: and I was able to back up the Garmin! It appears that this driver has been repaired--somewhat--along the way. I still had some errors uploading routes, although with persistence, they eventually all arrived. I wasn't brave (or stupid) enough to try uploading large tracks or waypoint files though. So, I'll probably still try the usb: file again once 2.6.24 is installed.


Fixed blank DHCP host name permalink

My router's DHCP table was showing a blank where my laptop's hostname should be. I fixed this by uncommenting the send host-name line in /etc/dhcp3/.


Donated to the Software Freedom Law Center permalink

I just made a donation to the Software Freedom Law Center. Consider making a donation yourself.


Bluetooth woes permalink

I was getting errors like dund[31782]: Failed to connect to the local SDP server. Connection refused(111) in my syslog and HotSyncs that were failing with Faulty modem. I worked around this problem by running the following commands:

$ sudo killall dund
$ sudo /usr/bin/dund --listen --persist --auth call treo

I've reported the bug as BTS #452869.


Building AIDE from source permalink

The AIDE that comes with etch is very hard to keep quiet. Marc Huber suggested that the lenny version might be a bit quieter, so I ran the following to get the latest and greatest on my etch system:

apt-get source aide
aptitude install dpatch libmhash-dev flex libgcrypt-dev
(cd aide-0.13.1 && fakeroot dpkg-buildpackage -b -uc)
sudo dpkg -i aide_0.13.1-8_i386.deb aide-common_0.13.1-8_all.deb

These commands are listed here mostly so that I can clean up if aide 0.13.1-8 hits backports.


Rhythmbox and sound-juicer don't see CD permalink

I could mount data CDs, play DVDs with totem, and play audio CDs with gnome-cd. However, I was not getting the usual CD icon in rhythmbox when an audio CD was inserted, and sound-juicer produced a No CD-ROM drives found--Sound Juicer could not find any CD-ROM drives to read message and exited.

Both rhythmbox and sound-juicer played CDs just fine a week before my disk crashed and I reinstalled lenny from scratch.

I found that rebooting cleared this problem.


Truncated PDFs from Gnucash permalink

My disk crashed on Friday so I bought a new one and installed lenny from scratch. One problem I encountered is that the top of the PDF printed from Gnucash was truncated. It seems that this was observed by others in the gmane.linux.debian.user thread entitled Text on printed pages truncated with Message-ID

Interestingly, after I configured my printer in CUPS, this problem went away. This was confirmed by one of the installation gurus:

Jim Paris <> wrote:

> Interestingly, the top of Gnucash reports printed to PDF were truncated
> until I installed a printer in CUPS, and then the problem disappeared.
> Is a CUPS installation default suboptimal?

Maybe it was a paper size issue, and installing a printer changed your
default papersize?  You can change the current setting with
"dpkg-reconfigure libpaper1".  I noticed in your system information:

> Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=ANSI_X3.4-1968) (ignored: LC_ALL set to C)

If LC_ALL was set to C during installation, I think libpaper1 would
have defaulted to A4 (because "locale width" and "locale height"
return A4 size in that case)

2010-03-27 update: Another problem that can cause margins to go away is the new "borderless" media size provided by HPLIP. To see if this is the source of your problem, go to System -> Administration -> Printing, bring up the context menu for your printer, view the Properties, go to the Printer Options dialog, and view the Media Size field. If this field is set to one of the borderless flavors, use either the vanilla or AutoDuplex (if you have a duplexer) flavor instead.


Apt's Dynamic MMap ran out of room error permalink

On one of my etch systems, I got this error message after a recent upgrade. (I did not get it on my etch server, nor my lenny laptop.) I fixed this by adding the following to /etc/apt/apt.conf:

APT::Cache-Limit "20000000";


Kernel 2.6.22 upgrade permalink

I installed Linux kernel version 2.6.22 and found that the ibm_acpi module was renamed to thinkpad_acpi. Unfortunately, this broke the Fn-F4 hotkey combination used to suspend my laptop. I'm assuming that Debian Bug report #434845: acpi-support: ibm_acpi module renamed thinkpad_acpi in kernel 2.6.22 is related, but the suggested fix didn't work for me. The GNOME Shut Down menu item does work however.

Gpsbabel still doesn't work--I'm keeping a version of 2.6.18 around for it. This might be related to the CONFIG_USB_SUSPEND problems that have been reported. But then, it could be CONFIG_USB_SUSPEND which fixed suspending under ACPI on my ThinkPad.


When ISPs block port 25 permalink

If your ISP (such as Earthlink) blocks port 25, and someone else in your household controls the authentication credentials and understandably does not want to share them with you, how do you send mail?

I got my hosting company to poke a hole in port 587 (submission) and then updated postfix on my laptop and on the server as follows.

On the server (in master.cf):

submission inet n - - - - smtpd

On the client (in main.cf):

relayhost = []:587

Note that I use pop-before-smtp for authentication.


Hibernate permalink

Thanks to a post on gmane.linux.debian.user.laptop from Stefan Monnier, I installed the hibernate package and created a file called /etc/hibernate/scriptlets.d/local containing the following code, which turns off the Ultrabay LED. If you want to use it, replace my initials (BW) with your own, since the hibernate namespace is global.

# -*- sh -*-
# vim:ft=sh:ts=8:sw=4:noet

# Ideas from /usr/share/hibernate/scriptlets.d/hardware_tweaks.

# ibm_acpi proc directory.
BW_IBM_ACPI_PROC=/proc/acpi/ibm

BwIbmAcpiStartSuspend() {
    # Turn off Ultrabay LED.
    IbmAcpiLed 4 off
    return 0 # this shouldn't stop suspending
}

BwIbmAcpiEndResume() {
    # Turn on Ultrabay LED.
    IbmAcpiLed 4 on
    return 0
}

BwIbmAcpiOptions() {
    if [ -d "$BW_IBM_ACPI_PROC" -a -z "$BW_IBM_ACPI_HOOKED" ]; then
        BW_IBM_ACPI_HOOKED=1
        AddSuspendHook 12 BwIbmAcpiStartSuspend
        AddResumeHook 12 BwIbmAcpiEndResume
    fi
    return 0
}



Power permalink

I had found that with lenny and the 2.6.21 kernel, ACPI suspend was finally working. Yay! Further, I felt that the built-in power management stuff might be working as well and that I could remove the acpid package and dispense with the /etc/acpi scripts, since I was seeing some gnome-power-management warnings in the syslog.

When I pressed Fn-F4 however, I got the message:

      gnome-power-manager: (wohler) A security policy in place
      prevents this sender from sending this message to this
      recipient, see message bus configuration file (rejected message
      had interface "org.freedesktop.Hal.Device.
      SystemPowerManagement" member "Suspend" error name "(unset)"
      destination ":1.22") code='9' quark='dbus-glib-error-quark'

After a little digging, I discovered that I had to add myself to the powerdev group. Then I got this message:

      gnome-power-manager: (wohler) Doing nothing because the suspend
      button has been pressed

This was fixed by going into the gconf-editor and changing the value for /apps/gnome-power-manager/action_button_suspend to suspend.
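The same change can be scripted with gconftool-2 rather than clicking through gconf-editor; a sketch (I am assuming the key is of type string, given its value):

```text
$ gconftool-2 --type string --set /apps/gnome-power-manager/action_button_suspend suspend
```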
