I wanted to write some completion scripts, and I was reminded of how complicated it is to do this. Here are some sources for helping out.
Usually, you can just bring up the context menu on a GNOME icon and select the Add to favorites or Pin command. If you don't have access to the icon, I learned that you could add a .desktop file to favorites on the command line by reading the current favorites and then setting the variable using a modified array.
$ dconf read /org/gnome/shell/favorite-apps
$ dconf write /org/gnome/shell/favorite-apps <array including new .desktop file>
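The write step boils down to splicing a new id into the array string that dconf returns. Here is a minimal sketch of that string edit; the app id firefox.desktop and the starting list are made-up examples, and the result is what you would feed to dconf write as above:

```shell
# Starting point, as returned by `dconf read /org/gnome/shell/favorite-apps`
# (contents are a made-up example):
current="['org.gnome.Terminal.desktop']"

# Splice the new .desktop id in before the closing bracket.
new="${current%]}, 'firefox.desktop']"

echo "$new"   # pass this string to `dconf write /org/gnome/shell/favorite-apps`
```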
Like darktable, the installed version (1.17) of the PasswordSafe (pwsafe) command had an annoying bug. The flatpak version was 1.19, so I gave it a go:
$ flatpak --user install flathub org.pwsafe.pwsafe
$ flatpak run --filesystem=<my pwsafe directory> --nosocket=wayland org.pwsafe.pwsafe
Note that the --filesystem option is required so that the flatpak sandbox can see the directory that includes the database. I copied ~/.local/share/flatpak/exports/share/applications/org.pwsafe.pwsafe.desktop into ~/.local/share/applications/org.pwsafe.pwsafe.desktop and added the --filesystem option. Without the --nosocket=wayland option, autotype will fail with "cannot open X display" errors.
As I mentioned in my previous entry, I was running darktable 4.2 but wanted to take advantage of features introduced in darktable 4.4, 4.6, and 4.8. Once flatpak was installed, I installed darktable with the following command:
$ flatpak --user install flathub org.darktable.Darktable
I forgot the chain of events to install the desktop icon, but org.darktable.darktable.desktop wound up in my .local/share/applications/ directory and provided a runnable icon from the dock.

I then copied data.db, library.db, styles, and user.css from my Debian location ~/.config/darktable to the flatpak location of ~/.var/app/org.darktable.Darktable/config/darktable.
I fired up darktable via the icon, accepted the database upgrades, and just like that I was running darktable 4.8 with my photo library!
I didn't copy darktablerc along with the other darktable files. Since the original darktablerc had been created with an ancient version of darktable, I wanted to start off with the 4.8 defaults. After launching darktable, a new darktablerc was created. I diffed this new version against the old version and updated any settings (in darktable) I wanted to preserve.
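That diff step can be sketched like this; in practice the two paths would be the old ~/.config/darktable/darktablerc and the new flatpak one, while the file contents below are invented stand-ins for the demo:

```shell
# Stand-ins for the old and new darktablerc files (contents are invented):
old=$(mktemp); new=$(mktemp)
printf 'plugins/lighttable/layout=1\ncache_memory=536870912\n' > "$old"
printf 'plugins/lighttable/layout=2\ncache_memory=536870912\n' > "$new"

# Show what changed; diff exits 1 when the files differ, so mask that in scripts.
changes=$(diff -u "$old" "$new" || true)
echo "$changes"
rm -f "$old" "$new"
```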
The version of darktable in Debian GNU/Linux 12 (bookworm) is 4.2 and darktable 4.8 was just released. There were several features I wanted to take advantage of and there was already a darktable 4.8 package in flatpak, so it was time to give flatpak a go. I had been introduced to flatpak at work, so this wasn't the first time I used it, but it was the first time I used it on Debian.
Referring to the Debian flatpak installation instructions, I ran the following (note that I set up flatpak using my home directory, for better or for worse, with the --user option):
$ sudo aptitude install flatpak gnome-software-plugin-flatpak
$ flatpak remote-add --if-not-exists --user flathub https://flathub.org/repo/flathub.flatpakrepo
I then rebooted to get the flatpak directories into the XDG_DATA_DIRS variable to quiet the warning about this variable. See /usr/lib/systemd/user-environment-generators/60-flatpak.
Maintenance includes the weekly running of flatpak update.
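That weekly run could be automated with a crontab entry along these lines (a sketch; add it with crontab -e, and note that --noninteractive assumes a reasonably recent flatpak):

```
@weekly flatpak --user update --noninteractive
```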
The ddclient daemon has been kvetching all week. There is a Debian bug report that it is no longer maintained upstream. A quick search reveals that inadyn is good and well-maintained. In five minutes I had uninstalled ddclient, installed inadyn, set /etc/default/inadyn:RUN_DAEMON to yes, run sudo service inadyn start, and had an updated DNS entry for my laptop at home.
You can run diff -r a b, but you can also run the much faster rsync -van --delete local-directory remote-directory, which will only look at file times and sizes rather than contents. Make sure not to forget the -n, and ensure that you care less about remote-directory than local-directory.
The rsync program no longer seems to be supported on newer versions of Android. However, if I plug the tablet into my computer with a USB cable, and go to Settings > Connected Devices > USB and select File transfer, the tablet is mounted with gio. I can then use rsync to back up the tablet. See Enable Developer Options to update the USB default.
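A sketch of that backup follows. The device URI mtp://Pixel_Tablet/ and folder names are assumptions (list the real mount with gio mount -l); the mounted share shows up under the per-user gvfs directory built below:

```shell
# Where gvfs exposes MTP mounts for the current user.
gvfs="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/gvfs"
echo "$gvfs"

# With the tablet set to File transfer, the steps would be (device name assumed):
#   gio mount mtp://Pixel_Tablet/
#   rsync -rv --no-perms --no-owner --no-group \
#       "$gvfs/mtp:host=Pixel_Tablet/Internal storage/" ~/backup/tablet/
```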
I gave up on rsync to push files since it didn't play well with MTP. The best I could come up with to avoid all errors and to avoid leaving temporary files around was rsync -vrlgo --inplace --temp-dir=tmp SOURCE DEST, but it failed to update the file. I resorted to \cp --no-preserve=timestamps SOURCE DEST.
You can unmount the filesystem with gio mount -u mtp://FILE, although I read it isn't necessary to unmount mtp filesystems. Unmounting takes almost a minute.
It appeared that the gpg program hung when I ran it over an ssh connection. The problem was that gpg prompted for the passphrase with a GUI dialog on the remote computer. Here's a way to get the prompt in the terminal for the entire system, which is fine for me.
sudo apt install pinentry-tty
sudo update-alternatives --config pinentry
It's been a while since I had mounted a shared folder. The instructions have since changed. Here is what I used today:
$ vmhgfs-fuse .host:<share> <mountpoint> -o subtype=vmhgfs-fuse
However, it is possible that the shared folder is mounted automatically under /mnt/hgfs.
enscript -p - -B < file.txt | ps2pdf - file.pdf
I was an inline/bottom poster for decades, but have discovered that top posting is better. Much better.
From the reader's perspective, you see the reply right away rather than having to scroll through old text first--I find I now don't bother reading bottom postings if they are below the fold. You have the entire thread for reference as it was written rather than being spliced together like some horrific Frankenstein's monster, or worse, discarded entirely, which can be maddening if you need the context. You also don't have a bunch of >>>> characters on every line to lower the readability and can more easily see who said what rather than try to match up the row of >>>> characters with the attribution.
From the author's perspective, top posting is faster. You can start typing your reply right away. You don't have to scroll to the bottom, insert lines at just the right place, or discard precious text to make your email shorter.
A while ago, bridged networking stopped working on my Debian guest after the Big Sur upgrade on the Mac, which forced the version 12 upgrade of VMware.
In the fullness of time, a solution emerged, and that was to adjust the MTU. It's like back to the future. I haven't had to mess with the MTU since the 80s! But the following command made the bridged network functional again. For me, reducing the value from 1500 to 1491 was sufficient.
$ sudo ip link set eth0 mtu 1491
I also updated the network settings in the GUI so the interface would be initialized properly (GNOME 3 menu in upper right > Wired Connection > Wired Settings > Interface Gear Icon > Identity > MTU).
2021-10-25 update: macOS 12.0.1 (Monterey) just dropped, and this update fixed the MTU issue. I was able to reset my MTU to automatic (erase or set to 0 in GUI) and get my nine bytes back. Note that the existing version of VMware Fusion at the time, 12.1.2, worked fine. By coincidence, I was offered an upgrade to Fusion 12.2.0 that "supported Monterey", and thankfully, it continued to work. It provided a hardware version upgrade to version 19.
I've been getting the following warnings on most upgrades for a while now.
Warning: package evince listed in /etc/mailcap.order does not have mailcap entries.
I found a solution to this problem, which is this:
echo 'application/pdf; evince %s; test=test -n "$DISPLAY"' | sudo tee /usr/lib/mime/packages/evince
sudo update-mime
I followed the instructions given in How to Increase space on Linux vmware as follows to take advantage of the increased disk space on my new system.
Going full screen in eog locked up my screen for the second time. Holding down F11 shows the other windows behind briefly. Logging into the system via ssh revealed that the eog process was gone.
I found this page that provided a couple of solutions. I opted for the less elegant but more expedient one.
$ sudo aptitude purge xserver-xorg-video-intel
After rebooting to get my system back, F11 worked normally again.
The current command to send a file to the trash, as well as inspect and empty the trash can, is this:
$ gio trash file...
$ gio list trash://
$ gio trash --empty
Just added my notes regarding unattended upgrades here. I need to wrap the commands below with some commentary. My goal is to keep the system from rebooting automatically, yet get notified if a reboot is required.
$ sudo unattended-upgrades --dry-run --debug
$ less /var/log/unattended-upgrades/unattended-upgrades.log /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
$ less /usr/bin/unattended-upgrade /etc/apt/listchanges.conf /etc/apt/apt.conf.d/50unattended-upgrades
Create a cron job that runs to send email if a reboot is required.
$ cat /var/run/reboot-required /var/run/reboot-required.pkgs
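The cron job I had in mind can be sketched as follows; the script name reboot-notify and the choice of mailing root are my assumptions:

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/reboot-notify: mail root when unattended-upgrades
# has left the reboot-required flag behind.
flag=/var/run/reboot-required
if [ -f "$flag" ]; then
    cat "$flag" /var/run/reboot-required.pkgs 2>/dev/null \
        | mail -s "$(hostname): reboot required" root
    status=notified
else
    status=no-reboot-needed
fi
echo "$status"
```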
I had a lot of photos in my DiveMate app that mysteriously started disappearing. I discovered that Google Smart Storage deletes "photos or movies that are backed up", so I'm guessing that was the source of the problem and turned it off. If so, it wasn't smart enough to tell DiveMate where to find the deleted photos.
Fortunately, I back up my phone with rsync. What follows are the one-liners I ran to identify and recover the deleted photos. Whitsunday is my backup drive, and olgas is my laptop. I used Dropbox to get the deleted photos back to the proper location on my phone.
$ sudo find <path-to-backup>/app -name '*.jpg' > app.out
$ cat app.out | while read line; do basename "$line"; done | sort -u > app.all
$ find * -name '*.jpg' | while read line; do basename "$line"; done | sort > ~/tmp/app.now
$ comm -13 app.now app.all > app.deleted
$ cp $(grep -f ~/tmp/app.deleted ~/tmp/app.out | grep <backup>) <path-to-Dropbox>
I noticed that I could no longer access olgas from the Internet this week. I thought Comcast routing tables were messed up. Later, I discovered that olgas was using a DHCP address, not my static IP address.
Chapter 5. Network setup of the Debian documentation indicates that my configuration should be in /etc/systemd/network/static.network. However, that directory only contains the file 99-default.link that says:
# This machine is most likely a virtualized guest, where the old persistent
# network interface mechanism (75-persistent-net-generator.rules) did not work.
# This file disables /lib/systemd/network/99-default.link to avoid
# changing network interface names on upgrade. Please read
# /usr/share/doc/udev/README.Debian.gz about how to migrate to the currently
# supported mechanism.
Maybe I should do that someday.
That documentation also pointed to /etc/network/interfaces, but my file didn't contain any of my details.
So I went to the menu in the top-right corner and chose the Ethernet item and then the settings button in the dialog that appeared. I chose the IPv4 tab and saw that this had been switched from Manual to Automatic. I set it to manual, filled out the bits, and pressed Apply. I had to turn off the interface in the Network dialog and turn it back on for the change to take.
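For reference, the same switch back to a static address can be done from the command line with NetworkManager's nmcli. Everything below (connection name, addresses) is an assumption to adapt, which is why the mutating commands are left commented:

```shell
# Assumed connection name -- find yours with `nmcli con show`.
conn="Wired connection 1"

# The actual changes (addresses are placeholders):
#   nmcli con mod "$conn" ipv4.method manual \
#       ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
#   nmcli con down "$conn" && nmcli con up "$conn"   # bounce for the change to take
echo "$conn"
```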
I used to be able to navigate to a Windows share and find it in /run/user/1000/gvfs. Not so since my upgrade to bullseye. I learned I can now mount those shares with
$ gio mount smb://server/share
and that they should be mounted in $XDG_RUNTIME_DIR.
I lodged a bug report (Bug#956009). The solution was to install gvfs-fuse and reboot.
So, I tried to load my raw photos into darktable (2.6), and the lens correction module did not identify them. However, I could manually look up my camera and lens successfully, which was strange. Looking at the ChangeLogs, I figured I could upgrade to bullseye and have a recent enough version of exiv2 and lensfun to handle my camera/lens (Canon PowerShot G7 X Mark II), but I was mistaken.
After a bunch of Googling, I came across the following steps.
$ sudo aptitude install liblensfun-bin $ sudo lensfun-update-data
By default, the lensfun database is in /usr/share/lensfun/version_1. Running lensfun-update-data updated /var/lib/lensfun-updates/version_1 instead. Sure enough, there are differences associated with the G7X in the new file. After restarting darktable (restarting turned out to be very necessary), importing photos was enough to trigger the proper lens correction. All that was left was to create a style that turned on the lens correction module so that I could select all of the new photos and apply it in one go.
While investigating why my fan was running, I found that logcheck was working very hard because /var/log/auth.log and /var/log/fail2ban.log had gotten really large over the past couple of weeks. This was probably caused by fail2ban not banning attackers, which in turn was probably caused by persistent errors in the log, including "Failed to execute ban jail 'sshd' action". I adapted the recipe found in [1] and [2] to perform a clean reinstall as follows:
In the meantime, I wanted to add the worst offenders to my Shorewall blrules file. I found that my old recipe no longer worked because fail2ban hasn't been sending me email for a couple of years. Here's what it was:
$ grep -h inetnum $MAIL/fail2ban/* | sort | uniq -c | sort -n
Here's my new recipe for finding the worst offenders.
$ for i in $(sudo lastb | awk '{print $3}' | sort | uniq -c | sort -n | awk '{if ($1 > 500) print $2}'); \
  do whois $i | grep ^inetnum: | awk '{print $2"-"$4}'; \
  done | sort
Since I restarted fail2ban, I am no longer receiving a constant barrage of failed logins.
2019-12-12 update. I noticed the errors again. This time, I just did the following:
Hours have passed, and I haven't seen a recurrence of the errors. I added /var/log/fail2ban.log to /etc/logcheck/logcheck.logfiles in order to catch the errors faster.
My logs started getting spammed with these:
Feb  3 17:36:07 olgas kernel: [39888.256540] [drm:vmw_cmdbuf_work_func [vmwgfx]] *ERROR* Command buffer error.
Feb  7 21:36:59 olgas google-chrome.desktop[68050]: context mismatch in svga_sampler_view_destroy
This created log files that were tens of megabytes in size, causing logcheck to run hot and long.
Two suggestions to fix these two problems both start with shutting down the virtual machine and adding:
svga.maxWidth = X
svga.maxHeight = Y
svga.vramSize = "X * Y * 4"
where X and Y match your screen's dimensions, to the VMware .vmx file, as well as disabling the "Accelerate 3D Graphics" option in the VM settings.
The following sites provided this help:
It was also suggested to add "export SVGA_VGPU10=0" to .bashrc for the latter problem.
However, I found that the former problem resolved itself, perhaps with some reboots in both the guest and host, so I removed the .vmx settings. I also found that adding SVGA_VGPU10=0 to .bashrc/.dashrc caused display issues in my text windows. When focus moved, the text would become lighter, sometimes in the window with the focus.
In the end, I created a script in $HOME/bin/google-chrome that set SVGA_VGPU10=0 and called /usr/bin/google-chrome and pointed $HOME/.local/share/applications/google-chrome.desktop to it.
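The wrapper amounts to a two-line script; here's a sketch that writes it (paths match the entry, and the Exec= line of the copied .desktop file is then pointed at ~/bin/google-chrome):

```shell
# Create the wrapper that forces the pre-VGPU10 SVGA code path for Chrome only,
# leaving the rest of the session unaffected.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/google-chrome" <<'EOF'
#!/bin/sh
SVGA_VGPU10=0 exec /usr/bin/google-chrome "$@"
EOF
chmod +x "$HOME/bin/google-chrome"
```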
Obviously, I have to start another blog!
Synergy isn't letting me start the screen saver in the usual corner, and there doesn't seem to be a separate command to start the screen saver! However, a screen saver service can be created and that service can be bound to a key. I followed the instructions in How to Start the Mac Screen Saver with a Keyboard Shortcut in OS X. For some reason, I could not bind Super-L (the current GNOME binding), so I bound C-M-L (the old GNOME binding) to the screen saver.
Well, I switched to iMovie to compose videos. It's so much easier to create video in iMovie than anything I've tried before on Linux, and it's much less buggy.
Yahoo changed their API in November 2017. The package libfinance-quote-perl was updated accordingly in version 1.41. However, this version is not in stretch, so you have to get the version from buster using your favorite method. Fortunately, the updated version from buster installs without fanfare into stretch.
Once libfinance-quote-perl is updated and GnuCash is restarted, change your source of quotes to Yahoo as JSON.
After installing stretch from scratch recently, I had to relearn a few things. Here are a couple of items that I hadn't already covered here.
gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:\ $(gsettings get org.gnome.Terminal.ProfilesList default | tr -d \')/ cursor-blink-mode off
Or at least, it seems you can't. I like to set C-M-Down and -Up to lower and raise windows respectively (using Settings -> Keyboard -> Windows -> Raise/Lower).
However, changing the lower command to C-M-Down didn't seem to work. Using dconf-editor on org.gnome.desktop.wm.keybindings, I found that switch-to-workspace-down (and -up) add aliases for C-M-Down and -Up that were interfering with my shortcuts. I deleted these aliases and my shortcuts worked!
I upgraded to stretch today and couldn't boot. Using journalctl -xb in the emergency system, I found that my shared folder wouldn't mount. I was able to comment out that mount and boot. I then found that the host filesystem is now mounted with FUSE in stretch. Here's my new fstab entry:
.host:/doc /mnt/doc fuse.vmhgfs-fuse allow_other,uid=wohler,gid=wohler,auto_unmount,defaults 0 0
I maintain a handful of Debian machines. My laptop is my master machine and the servers and workstations are considered remote. I like to keep a copy of the remote machines' configuration on my master machine.
After installing etckeeper on all of the machines, I ran the following on each remote so that the master branch on the remote would push to a separate branch on the master machine.
$ git config push.default upstream
$ git remote add origin ssh://master-machine/etc
$ git push -u origin master:branch-name
I use the host's name as the branch-name.
Too bad etckeeper doesn't support a pristine branch so that upstream changes can be easily merged into modified files.
The default LibreOffice paste function inherits the format from the source. This is never what you want, so you always end up using the paint tool to fix the formatting. Then you learn about Edit -> Paste Special -> Unformatted text. Then you learn to rebind C-v to Paste Unformatted text. Here are the steps with thanks to Robert Reese.
I used to use mkfs.ext3 -cc to scrub disks, but I just read that this does a non-destructive read and write. Whoops.
I found a good resource that referenced a simple utility called shred that comes stock with Debian. Here is the command I used. It wrote random characters over the disk three times and took 8.5 hours to scrub a 700 GB disk over USB 3.0.
$ time sudo shred -v /dev/sdb
I wanted to install libreoffice 5 from backports, but I couldn't seem to coax aptitude to do the right thing. I found that the following command got pretty close. The first conflict resolution solution wasn't good, but the second one was perfect, suggesting a few upgrades to make it work.
$ sudo aptitude -t jessie-backports install libreoffice
While it may be true that GNOME 3 has less chrome than in the past, the titlebars are still unnecessarily massive if buttons are not employed.
The key is to reduce the excessive padding as suggested by [1]. I preferred the solution of copying the theme file into my home directory [2]. I saved a copy of the original so that I could merge future updates into my local copy.
$ cd .local/share/
$ mkdir -p themes/Adwaita/metacity-1
$ cd themes/Adwaita/metacity-1/
$ cp /usr/share/themes/Adwaita/metacity-1/metacity-theme-3.xml .
$ cp metacity-theme-3.xml metacity-theme-3.xml.orig
The proposed solutions simply made every title_vertical_pad value 0. I was mostly concerned with the window title bars so I just modified the "normal" and "max" frame_geometry elements. I also found the aesthetics of the proposed padding of 0 lacking, so I used values of "5" and "4" respectively. Next, install the changes by restarting the GNOME shell:
Press Alt+F2
Type restart
Press Enter
The Terminal tabs also suffer from too much padding. I have not yet found a solution.
stretch update, 2017-12-28: The file metacity-theme-3.xml doesn't exist on stretch. Instead, add the following to ~/.config/gtk-3.0/gtk.css:
window.ssd headerbar.titlebar,
window.ssd headerbar.titlebar button.titlebutton {
    padding: 0;
}
Juniper network-connect stopped working for me recently. It was on a new system, and in retrospect, I'm surprised it worked at all. The error message in ncsvc.log was:
rmon.error Unauthorized new route to 123.456.789.0/0.0.0.0 has been added (conflicts with our route to 0.0.0.0), disconnecting (routemon.cpp:478)
At any rate, Google helped me find the solution, which was to add the following to /etc/NetworkManager/NetworkManager.conf:
[keyfile]
unmanaged-devices=interface-name:tun0
XSane won't automatically find your network scanner. You have to add an entry to the appropriate file in /etc/sane.d. For example, for my Canon printer, I added something like bjnp://192.168.1.100/ to pixma.conf.
VMware/Debian created a /dev/video0 device, so XSane found it and presented it first in the devices menu along with the Canon scanner. By commenting out /dev/video0 in v4l.conf so that only the Canon scanner was left, XSane now skips the device prompt, resulting in a faster startup.
Here is a list of tips I gathered while running Debian under VMware Fusion.
usb.quirks.device0 = "0x091e:0x2619 skip-reset"
pciSound.playBuffer = "30"
With thanks to Rockwell.NSS and Bryan Smart.
The problems I still have include:
Underscores are hidden when filenames are underlined as links or when highlighted in a selection. They are also harder to type.
Another problem with underscores is that they lower your Search Engine Optimization (SEO), mainly because they are not considered a word separator like the dash (-). See Of Spaces, Underscores and Dashes for the details.
With recent updates to GNOME, a suite of tracker processes appeared. They pegged the CPU at 100%, and their databases filled up my disk. I didn't seem to be able to discern any benefit. I found that to make the pain go away, I first had to run dconf-editor and make the following changes to keep tracker from starting.
$ gsettings set org.freedesktop.Tracker.Miner.Files crawling-interval -2
$ gsettings set org.freedesktop.Tracker.Miner.Files enable-monitors false
I then ran the following to stop the processes and remove the (large) database in ~/.cache/tracker.
$ tracker-control -r
Add the following to /etc/apt/sources.list on jessie:
deb https://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
Then run the following:
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
$ sudo update-java-alternatives -s java-8-oracle
See How To Install The Oracle Java 8 on Debian Wheezy And Debian Jessie, Via Repository.
The Heartbleed bug compromised our certificates. I needed to update my certificates anyway. I do it so rarely that a recipe will be nice the next time. Here it is.
The -des3 argument requires a passphrase, which could be inconvenient when rebooting a remote system, so it is omitted in the recipe below.
Here are the actual commands.
1. # openssl genrsa -out newt.com.key 2048
2. # openssl req -new -key newt.com.key -out newt.com.csr
4. # cat > newt.com.crt
   <Paste certificate here>
5. # cat newt.com.key newt.com.crt > newt.com.pem
The .key and .csr files are created per Certificate Signing Request (CSR) Generation Instructions for Apache SSL.
While not related to this recipe, I also regenerated my dovecot keys and certificates with dpkg-reconfigure dovecot-core (not dovecot-common; the documentation is in error) after removing /etc/dovecot/dovecot.pem and /etc/dovecot/private/dovecot.pem. I then updated the fetchmail fingerprint per: Fixing "Server certificate verification error" error in fetchmail.
After DynDNS removed their free accounts, I moved to afraid.org. Initially, I tried inadyn to update my address, but did not care much for it. Perhaps if it had an entry in init.d, I'd still be using it, but having to add an entry to rc.local is annoying. Forget running inadyn out of cron every few minutes! If it fails to connect to an IP address service (which is often), it continues to try, which leads to dozens of inadyn processes filling up your logs.
I switched to ddclient, which was not without its problems either. However, it has an entry in /etc/init.d so it starts automatically at boottime. Since the current version of ddclient in wheezy doesn't support the freedns protocol, I installed 3.8.1 from source as follows:
# mkdir ddclient && cd ddclient
# sudo aptitude build-dep ddclient
# sudo apt-get source ddclient=3.8.1-1.1
# sudo dpkg --install ddclient_3.8.1-1.1_all.deb
# sudo aptitude install cpanminus
# cpanm --sudo Digest::SHA1
# vi /etc/default/ddclient /etc/ddclient.conf
# service ddclient start
I found that I couldn't log into Empathy any more once I switched to Google's two-factor authentication or 2-step verification. I discovered that I had to create an application password at Google. Others found that they had to edit the Passwords and Keys settings so that their authentication survived reboots. Here are the steps:
The article, GNOME 3.6: GNOME Online Accounts and Google two-factor authentication has additional information.
Create your icon, make it 16x16, and export as a PNG. If you have ImageMagick installed, it's as easy as:
$ convert favicon.png favicon.ico
Alternatively, use the ConvertICO web site to convert your image to a favicon.ico file.
See Gabriel Saldaña's blog for instructions to get an older version of Google Music Manager that works with Debian wheezy.
This error is described in Debian bug #739142. Although the bug says it's for Yahoo, the same patch referenced within works for Vanguard too. It's easy to apply, it just updates the URL in USA.pm.
If you need to get at the information hidden under GNOME 3 modal dialogs, run the following to detach the dialog from its parent window.
gsettings set org.gnome.shell.overrides attach-modal-dialogs false
You then need to restart the GNOME Shell to use this new setting. Press M-F2 r RET.
If the Java plugin does not exist, of course it won't work. In addition, if it is too old, Chrome will complain or some sites that use a Java applet won't work. Chrome will provide a button that says "Update plug-in." In the case of Java, it will only let you download a tarball from Oracle. You can either use that tarball, or install the Debian package for the OpenJDK plugin.
The Debian OpenJDK package is called IcedTea. The appropriate package is icedtea-plugin (or its older variant such as icedtea6-plugin). Thus, all you have to do is install this package and restart Chrome:
$ sudo aptitude install icedtea-plugin
If you downloaded the Oracle tarball, install it and make Chrome aware of it with something like the following. The link to jre1.7 is created so that the link in ~/.mozilla/plugins doesn't have to be changed if you update the installation. If you want to make the plugin available to all users of the system, the appropriate directory is /opt/google/chrome/plugins (or /usr/lib/mozilla/plugins for Iceweasel).
$ cd /usr/local/lib
$ sudo tar xzf /tmp/jre-7u13-linux-x64.tar.gz
$ sudo ln -s jre1.7.0_13 jre1.7
$ sudo mkdir ~/.mozilla/plugins
$ sudo ln -s /usr/local/lib/jre1.7/lib/amd64/libnpjp2.so ~/.mozilla/plugins
Once this is done, restart Chrome. If you installed more than one plugin, you can control the plugin that is enabled by visiting chrome://plugins/. Then, test the plugin.
This entry supersedes this entry.
I learned that my current video editor of choice, kino (see Editing videos I and Editing videos II), died in 2009. Since it was using obsolete arguments to ffmpeg in wheezy, it could no longer export edited video!
The editors avidemux, OpenShot, and PiTiVi surfaced after a brief survey. The editor avidemux fared poorly during my first survey, so I considered OpenShot and PiTiVi. Since PiTiVi seems to be very integrated with GNOME, is under very heavy development, has favorable reviews, and—best of all— is found in the wheezy distribution rather than in deb-multimedia.org, I was definitely leaning towards it.
I first gave OpenShot a brief try. I found it difficult to split a clip at the desired frame. You have to use the context menu to remove clips, and you need to drag the remaining clip to the beginning, and the orange vertical bar to the end of the remaining clip, to avoid exporting black space. My export settings were: Export: Profile: Web, Target: YouTube HD, default video profile (HD 720p, 25 fps) and quality (med). A nice .mp4 video was exported. However, there was a segmentation fault upon exit!
I found it very easy to edit video in PiTiVi. The manual is worthwhile to scan as it points out a couple of things that might not be obvious, but on the whole, it took less time to learn how to splice a video together than the rest. I first rendered my video in N800/MP4, but the export hung. However, rendering to Web (.webm) worked. While totem was able to play this video, the quality was not that good. I then chose a Container format of MP4, a Frame rate of 25 fps, and Codec of x264enc in the Video tab, and produced a nice .mp4 video at less than half the size of OpenShot and YouTube-friendly. I added it as a preset. Like OpenShot, you first need to drag the timeline to 0:00 to avoid rendering blackness at the beginning of your movie. However, you don't have to worry about trimming blackness from the end.
To add text to your video, create a transparent PNG image in the GIMP that is the same size as your video, add your text to that image, and then import it into PiTiVi. If you update the image, I found that the quickest way for PiTiVi to reread it was to replace it on the timeline, that is, to delete the clip from the timeline and to drag it back from the clip library.
I followed the release notes and things went rather smoothly. It took a little playing with aptitude afterward to complete the full update of the packages and work out the i386 multiarch kinks.
GNOME 3 has changed a bit since I first talked about it. Here are a couple of changes and additional pointers that go with GNOME 3.4.
$ gsettings set org.gnome.shell.overrides workspaces-only-on-primary false
$ gsettings set org.gnome.settings-daemon.peripherals.mouse middle-button-enabled true
$ gsettings set org.gnome.desktop.interface can-change-accels true
$ gsettings set org.gnome.desktop.wm.preferences raise-on-click false
The computertemp applet appears to be gone as well and I learned that /proc/acpi/ibm is being deprecated in favor of the files in /sys. I found a good alternative to controlling my laptop's fan in the thinkfan package. I followed the German instructions for configuring thinkfan as well as a translation and embellishment using the sensors in /sys instead of in /proc/acpi/ibm. I referred to a page on my T500's sensors. The ThinkPad ACPI Extras Driver document contains additional good information.
With the upgrade, Picasa lost the ability to upload photos to Picasaweb. The file ~/.google/picasa/3.0/picasa.log held the clue: Picasa couldn't find libssl.so. Linking the new versions of libssl to libssl.so didn't work, but I found a compatible version 0.98 in /usr/local/lib/emul/ia32-linux that had been installed by somebody. In order to provide this library to Picasa, start Picasa as follows:
LD_LIBRARY_PATH=/usr/local/lib/emul/ia32-linux/usr/lib picasa
When I rsynced a directory onto my new Patriot 64 GB USB drive (60 MB/sec VFAT), the symbolic links failed to copy. When considering an ext filesystem for this stick, in which I didn't care about compatibility with others, I found that you want to use ext2 rather than ext4 to avoid extra writes to the SSD (which has a limited lifetime). I also found an interesting article called Increase USB Flash Drive Write Speed.
Running the same commands shown in his blog, here are my results with this drive.
$ sudo hdparm -t /dev/sdb1
/dev/sdb1:
 Timing buffered disk reads:  90 MB in 3.07 seconds = 29.32 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/PATRIOT/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 4.62097 s, 22.7 MB/s
$ sudo fdisk -H 224 -S 56 /dev/sdb
$ sudo mke2fs -t ext4 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
/dev/sdb1:
 Timing buffered disk reads:  90 MB in 3.02 seconds = 29.82 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/891aeafd-24cd-426e-b37e-24738e324fdd/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.12317 s, 851 MB/s
$ sudo mke2fs -t ext2 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
/dev/sdb1:
 Timing buffered disk reads:  90 MB in 3.01 seconds = 29.90 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/2032f5d7-f4d0-4853-89a1-d6c7129e11cb/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.116094 s, 903 MB/s
$ sudo fdisk -H 32 -S 8 /dev/sdb
$ sudo mke2fs -t ext4 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
/dev/sdb1:
 Timing buffered disk reads:  94 MB in 3.06 seconds = 30.71 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/093eda14-da74-4446-ac35-0106cd5d644f/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.124107 s, 845 MB/s
$ sudo mke2fs -t ext2 -E stripe-width=32 -m 0 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
/dev/sdb1:
 Timing buffered disk reads:  96 MB in 3.05 seconds = 31.52 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/9007be40-0a89-480f-a0d6-ebdad491f4cb/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.118107 s, 888 MB/s
The first set of fdisk arguments follows the blog, while the second follows a suggestion in the comments. In my case, the original suggestion seems faster on the writes, and ext2 seems faster than ext4. Since ext2 is also easier on the drive, I opted for the original fdisk suggestion.
But first, for fun, I tried the default fdisk and ext commands.
$ sudo fdisk /dev/sdb
$ sudo mke2fs -t ext2 /dev/sdb1
$ sudo hdparm -t /dev/sdb1
/dev/sdb1:
 Timing buffered disk reads:  96 MB in 3.02 seconds = 31.75 MB/sec
$ dd count=100 bs=1M if=/dev/zero of=/media/adde8bde-13f5-4b1d-bf14-c3682d109715/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.119677 s, 876 MB/s
The proposed commands are about 3 percent faster than using the defaults, which isn't enough for me to eschew the defaults. So I went with the default fdisk and ext2 commands.
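One caveat about the dd numbers above: without a sync, dd times the write into the page cache, which is how a USB stick can appear to do 850-900 MB/s. Here is a sketch of the same write test using dd's conv=fsync flag, which makes the timing include the flush to the device; TARGET is a placeholder for a file on the filesystem under test.

```shell
# Time a 100 MB sequential write. conv=fsync forces the data to the
# device before dd reports its timing, avoiding page-cache-inflated
# numbers. TARGET is hypothetical; point it at the mounted stick.
TARGET=${TARGET:-/tmp/ddtest}
dd count=100 bs=1M if=/dev/zero of="$TARGET" conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET"
```

Writing to /tmp, as in the default, measures your main disk rather than the stick, so set TARGET to a file under the stick's mount point.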
I thought my microphone on my ThinkPad T500 running squeeze was working, but I hadn't used it in a really long time. When I really needed it recently, it didn't work. Googling didn't turn up an answer, but I eventually stumbled across one.
$ sudo alsactl init
$ sudo shutdown -r now

The alsactl program complained about unknown hardware, but this was OK in my case. From what I read (and experienced), you do need to reboot after this step.
This entry has been superseded by this entry.
If the default version of the Java plugin is too old, Chrome will complain or some sites that use a Java applet won't work. Chrome will provide a button that says "Update plug-in." In the case of Java, it will only let you download a tarball from Oracle. Here are the steps I took to tell Chrome about it. Your precise locations may vary.
$ cd /usr/local/lib
$ sudo tar xzf /tmp/jre-7u13-linux-x64.tar.gz
$ sudo ln -s jre1.7.0_13 jre1.7
$ sudo mkdir /opt/google/chrome/plugins
$ sudo ln -s /usr/local/lib/jre1.7/lib/amd64/libnpjp2.so /opt/google/chrome/plugins
After restarting Chrome, test the plugin.
I was hoping to be able to link to this library from my ~/.config/google-chrome directory. Please let me know if you know the directory Chrome is looking for.
By the way, the directory that Iceweasel uses is /usr/lib/mozilla/plugins.
The Debian pam-abl maintainer Alex Mestiashvili suggested that I run the following to test the database:
$ sudo db5.1_verify -h /var/lib/abl users.db
$ sudo db5.1_verify -h /var/lib/abl hosts.db
This revealed a problem with the database, so Alex proposed I perform the following:
$ sudo db5.1_recover -v -h /var/lib/abl
This took a long time to run, but it completed successfully and pam_abl returned to blocking access from script kiddies. In addition, I found that you can clean up the thousands of log files in /var/lib/abl with the following command:
$ sudo db5.1_archive -d -h /var/lib/abl
While this does prevent recovery from a catastrophic database corruption, in the case of pam-abl, there isn't much harm in starting from scratch. Thus, this cleanup does not present much risk. To really clean things up, there is nothing better than this:
$ cd /var/lib/abl
$ sudo mkdir t
$ sudo mv _* log* *.db t
$ sudo pam_abl
$ sudo rm -r t
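The move-aside-then-remove dance above generalizes to a small helper: quarantine the files into a scratch directory, make sure the service still behaves, and only then delete. This is a sketch under my own naming, not part of pam-abl.

```shell
# Move files matching the given glob patterns out of dir into a fresh
# quarantine subdirectory and print its path. Nothing is deleted; after
# verifying that things still work, remove the printed directory.
quarantine() {
    local dir=$1; shift
    local t f pat
    t=$(mktemp -d "$dir/quarantine.XXXXXX") || return 1
    for pat in "$@"; do
        for f in "$dir"/$pat; do
            [ -e "$f" ] && mv "$f" "$t/"
        done
    done
    echo "$t"
}
```

For the case above (run as root), something like t=$(quarantine /var/lib/abl '_*' 'log*' '*.db'), then rm -r "$t" once pam_abl checks out.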
We began having issues streaming our Netflix movies; they broke up more and more. I also started noticing slow network speeds and dropouts on my Linux laptop and the following errors in my log:
kernel: [96404.491351] iwlagn 0000:03:00.0: Microcode SW error detected. Restarting 0x2000000.
wpa_supplicant[1910]: Failed to initiate AP scan.
kernel: [107123.472075] wlan0: direct probe to AP XX:XX:XX:XX:XX:XX timed out
wpa_supplicant[1910]: Authentication with XX:XX:XX:XX:XX:XX timed out.
After rebooting every box in the house to no avail, I stumbled across an amazing Amazon review of my wireless router that suggested using a slower rate if there is wifi congestion in the neighborhood (does 13 APs count as a lot?).
After dialing the network speed down to 217 Mb/sec from 450 Mb/sec, I have not seen any of the above errors in the past week, and Netflix streaming is working perfectly.
My logs are full of failed login attempts. I wanted to reduce the output in the logs, so I installed libpam-abl and configured it per the instructions in /usr/share/doc/libpam-abl/README.Debian. In addition, I replaced !root with * in the user rule since I don't allow remote root logins anyway.

Also, there is a bug in version 0.4.3 of libpam-abl that allows logins with correct passwords that would otherwise be blocked. The symptom is an "Operation not permitted" error in auth.log. The workaround is to set MaxAuthTries to 1 in /etc/ssh/sshd_config.
I then tested per the instructions in Jonathan Gardner's wiki by uncommenting the debug line in the configuration file and logging in with:
$ ssh -o "PubkeyAuthentication=no" you@yourhost
As Bob Cromwell states on his How to Set Up and Use SSH page, access controls actually result in more logging, not less. However, now I'll feel a little better about suppressing failed logins from my logcheck messages.
The fetchmail program started throwing a segmentation violation last week, which seemed correlated with my changing my password at work. Because changing my password again is such a pain, and doing so isn't guaranteed to fix the problem (I've had longer passwords, and passwords with similar characters, before), I switched to getmail. I hope this lasts only until the next time I have to change my password, or until I upgrade to wheezy and get fresh bits.
First, here's my simple fetchmail configuration.
set daemon 60
set bouncemail
set properties ""
poll host protocol IMAP user keep ssl idle
Here is my .getmail/getmailrc.
[retriever]
type = SimpleIMAPSSLRetriever
server = host
username = user
password = password

[destination]
type = MDA_external
path = /usr/bin/procmail
unixfrom = true

[options]
read_all = false
delete_after = 30
delivered_to = false
In order to use it, you have to create a cron entry.
* * * * * getmail --quiet
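One caveat, offered as a suggestion rather than something from the getmail docs: if a large fetch ever takes longer than a minute, this entry will start a second getmail on top of the first. Wrapping the command in flock(1) from util-linux skips a run while the previous one is still going (the lock-file path is arbitrary):

```
* * * * * flock -n /tmp/getmail.lock getmail --quiet
```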
Here is why getmail sucks and why I'm looking forward to switching back to fetchmail.
It sets the Return-Path header field to unknown instead of the address of the sender. This makes all of my procmail log entries start with From unknown, making the log useless for checking on mail from a particular person. Lame, lame, lame!
The xdg-open program was opening gnumeric instead of libreoffice for .xlsx files. I found a few tools to help diagnose the problem:
$ xdg-mime query filetype foo.xlsx
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
$ xdg-mime query default application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
openoffice.org-calc.desktop
$ grep -r gnumeric ~/.local
~/.local/share/applications/mimeapps.list:application/vnd.openxmlformats-officedocument.spreadsheetml.sheet=openoffice.org-calc.desktop;gnumeric.desktop;libreoffice-calc.desktop;file-roller.desktop;
$ locate openoffice.org-calc.desktop
/usr/share/app-install/desktop/openoffice.org-calc.desktop
$ locate gnumeric.desktop
/usr/share/app-install/desktop/gnumeric.desktop
/usr/share/applications/gnumeric.desktop
/var/lib/menu-xdg/applications/menu-xdg/X-Debian-Applications-Office-gnumeric.desktop
$ locate libreoffice-calc.desktop
/usr/share/applications/libreoffice-calc.desktop
I think what is happening is that /usr/share/app-install/desktop is not in the search path and /usr/share/applications is, so gnumeric.desktop is being used in favor of openoffice.org-calc.desktop since the former is found and the latter is not. The correct application, libreoffice-calc.desktop, doesn't even get to play.
I removed the offending line from .local/share/applications/mimeapps.list and then ran:
$ xdg-mime default libreoffice-calc.desktop application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
Now when I run xdg-open on a .xlsx file, I get libreoffice as desired.
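The manual edit of mimeapps.list can also be scripted. Here is a sed-based sketch; the helper name is mine, and it's worth trying on a copy of the file first.

```shell
# Remove gnumeric.desktop from every association list in the given
# mimeapps.list so the next handler in each list wins. sed -i edits
# the file in place (GNU sed).
fix_assoc() {
    sed -i 's/gnumeric\.desktop;//g' "$1"
}
```

For example: fix_assoc ~/.local/share/applications/mimeapps.list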
Recent versions of fetchmail seem to be getting picky about their SSL certificates. I only noticed after it seemed that fetchmail was taking its time getting mail. Running fetchmail -vN, I saw the following:
fetchmail: Server certificate verification error: self signed certificate
fetchmail: This means that the root signing certificate (issued for
/O=Dovecot mail server/OU=tassie.newt.com/CN=tassie.newt.com/emailAddress=root@newt.com)
is not in the trusted CA certificate locations, or that c_rehash needs
to be run on the certificate directory. For details, please see the
documentation of --sslcertpath and --sslcertfile in the manual page.
I tried running c_rehash on my server to no avail. After a bit of Googling, I learned that I could give fetchmail my server's certificate fingerprint to satisfy its security stringency. To obtain your server's fingerprint, run:
sudo openssl x509 -md5 -subject -dates -fingerprint -in /etc/dovecot/dovecot.pem
To tell fetchmail about it, add the sslfingerprint keyword to your .fetchmailrc. For example:
poll <host> ssl sslfingerprint "<fingerprint obtained above in quotes>" <other parameters>
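The openssl command prints subject and date lines along with the fingerprint, but .fetchmailrc only wants the hex digest itself. A small sketch that extracts just that part; pipe the openssl output from above into it (the helper name is mine):

```shell
# Print only what follows "Fingerprint=" on the fingerprint line,
# i.e. the colon-separated hex digest that fetchmail expects.
extract_fp() {
    sed -n 's/^.*Fingerprint=//p'
}
```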
Here are my notes on configuring a new disk for use. Update device names, volume group, and logical volume names to taste.
This time, instead of running fdisk or cfdisk, I tried System Tools -> Disk Utility. I had used it previously to label external drives, since the label is used as the name of the directory in /media. It was certainly nicer than working with cfdisk. I created a single partition that spanned the entire disk.
I then made this partition ready for use with the following commands:
# pvcreate /dev/sdc1
# vgcreate lmc2 /dev/sdc1
# vgdisplay lmc2 | grep "Total PE"
# lvcreate -l 238466 lmc2 -n backup
# mkfs -t ext4 /dev/mapper/lmc2-backup
# emacs /etc/fstab
/dev/mapper/lmc2-backup /var/local/backup ext4 errors=remount-ro 0 3
# mount /var/local/backup
The vgdisplay command was used to get the number of extents in the volume group, which was used in the lvcreate command. Here, I used all of them when creating the logical volume. I used LVM so that if I ever have to add another logical volume, I can simply resize the existing volume and add another.
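Copying the extent count out of vgdisplay by hand can be scripted away. Here is a sketch of a helper that pulls out the Total PE figure (pipe the vgdisplay output into it); note too that LVM2's lvcreate accepts -l 100%FREE, which avoids the counting entirely.

```shell
# Extract the "Total PE" count from vgdisplay output on stdin, for use
# as the -l argument to lvcreate.
total_pe() {
    awk '/Total PE/ {print $3}'
}
```

For example: lvcreate -l "$(vgdisplay lmc2 | total_pe)" lmc2 -n backup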
I have a pair of external backup drives that I use alternately for backups. For ages, I just let Nautilus mount them wherever, and my backup script, which used tar under the covers, looked for a special file called BACKUPS in all of the directories in /media. However, I just switched to rsnapshot and needed to specify the root of the backup directory in rsnapshot.conf. The question was thus: How can I ensure that my two backup drives always have the same mount point?
One way is to use /etc/fstab. For example:
UUID=11111111-1111-1111-1111-111111111111 /media/backups ext3 rw,nosuid,nodev,user 0 0
UUID=22222222-2222-2222-2222-222222222222 /media/backups ext3 rw,nosuid,nodev,user 0 0
Note that there is currently a bug in Nautilus that duplicates the backups name in its bookmarks using the UUID format. A workaround for that is to use the link in /dev/disk/by-uuid as follows:
/dev/disk/by-uuid/11111111-1111-1111-1111-111111111111 /media/backups ext3 rw,nosuid,nodev,user 0 0
/dev/disk/by-uuid/22222222-2222-2222-2222-222222222222 /media/backups ext3 rw,nosuid,nodev,user 0 0
Since Nautilus and gvfs-mount create and remove mount points on the fly for removable media, it seems a shame to have to modify /etc/fstab. More important, I have set the no_create_root variable to 1 in rsnapshot.conf, so I don't want to have to create the /media/backups mount point: otherwise, my backup would go into my root partition if my external drive wasn't mounted.
I learned that Nautilus will use the partition's label as a mount point if it exists; otherwise, it uses the UUID as the mount point, which is what I was seeing. I then changed the label of my external drives to backups via the System Tools -> Disk Utility command. So, without any local configuration, my external drives are mounted on /media/backups, and that mount point is dynamically created and removed as needed. Case closed.
Charles Arthur was kind enough to write Goodbye Delicious, hello Pinboard: why we'll pay for internet plumbing and I invite you to start with that article for my motivation to leave Delicious. Look to the Delicious forum to see that this truly is the winter of Delicious discontent.
I briefly checked out Google Bookmarks. I was disappointed to see that their tags are comma-separated. Adding bookmarks seemed to be pretty cumbersome, at least in Chrome. There is a bookmarklet available, but I don't want to waste screen real estate for a bookmark toolbar. Chrome extensions claim to be able to access Google Bookmarks but I couldn't figure out how to add bookmarks with them. Why can't one just use the existing Chrome bookmarking star to access a different bookmark tool?
I wasn't able to import my Delicious bookmarks (the feature exists, but Delicious responded with "access denied"--mmmmm), and there isn't a means to import an HTML file with bookmarks. I therefore didn't actually try its searching and browsing capabilities.
There is a limited sharing capability, but that is to be retired tomorrow.
In short, Google Bookmarks looks promising if you're willing to put in a little work to discover an easy way to add bookmarks and don't use the social aspect of bookmarking.
The Wikipedia page for "Social bookmarking" indicates a number of related sites. Another Delicious user mentioned Pinboard and I also read Charles Arthur's article Goodbye Delicious, hello Pinboard: why we'll pay for internet plumbing. I therefore thought Pinboard might be worth trying. At the moment, it has a nominal one-time fee of about $10.
To switch, export your Delicous bookmarks to an HTML file and import that file into Pinboard.
Pinboard has the look and feel of the original Delicious. It's fast. It's clean. I created a Chrome search engine using the query https://pinboard.in/search/?query=%s&mine=Search+Mine so that I can efficiently navigate my bookmarks using the Omnibox. You can browse bookmarks by adding and subtracting tags from your set of tags. URLs are a bit annoying: a URL in Delicious such as delicious.com/user/tag+tag would be pinboard.in/u:user/t:tag/t:tag.
You can add a bookmark with a bookmarklet, but I don't want to waste real estate with a bookmark toolbar. There are quite a few Pinboard Chrome extensions for adding bookmarks available. Ideally, the extension would use the existing star to add bookmarks and highlight it in yellow if the current page is bookmarked. Next best would be an icon; a context menu item would also be helpful. I don't need a browse or search capability since I use the Omnibox for that. There is an official Pinboard extension called Pinboard Tools. However, Pinboard Plus is the best in my opinion: it changes color when the page is bookmarked and provides a delete button; Enter confirms an autocompleted tag as well as submits and dismisses the dialog; and selected text goes into the description. The only issue, which should be easily fixed, is that the focus does not start in the Tags field.
I sent a few ideas to the author, Maciej, and he has already responded graciously.
To summarize, Pinboard is what Delicious used to be. Maciej is planning to add the good stuff that Delicious had (breadcrumbs, tag bundles).
This weekend, I said goodbye to Delicious and have said hello to Pinboard.
Apparently, I'm not alone in my dislike of link underlines. Links are already colored to indicate that they are links and the underlines can be very distracting on pages with many links. Many forums talk about ways to get rid of the underlines, including installing plugins, or writing JavaScript or CSS. Here is the simplest way to remove the underlines in Chrome.
Add the following text to ~/.config/google-chrome/Default/User StyleSheets/Custom.css and either refresh each page or restart Chrome. The comments describe what each section does.
/* Don't underline links by default. */
:link, :visited {
    text-decoration: none;
}

/* Underline links when hovering above them. */
:link:hover, :visited:hover {
    text-decoration: underline;
}
I have an old ThinkPad which gets hot and shuts off even though the fan is on full speed. Maybe some fresh thermal grease is all it needs. In the meantime, having a CPU temperature applet is mandatory.
The package gnome-shell-extension-cpu-temperature seems to be the ticket. After installing this package (on my Fedora system; I don't see it on Debian at this time) and restarting the GNOME Shell (Alt-F2 r RET), the temperature appeared on the status bar. I still need to figure out how to run a command when the temperature exceeds a certain threshold.
I've also found it helpful to turn down the CPU speed at times. This can be done with:
$ sudo cpufreq-set --cpu 0 --max 1.6GHz
$ sudo cpufreq-set --cpu 1 --max 1.6GHz
Use cpufreq-info to inquire what values of clock rate you can use. The rate 1.6 GHz isn't my slowest, but it's enough to cool down the engines.
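On a machine with more than two CPUs, the per-CPU commands get tedious. Here is a sketch that generates one cpufreq-set invocation per CPU reported by nproc; it echoes the commands rather than running them (the function name is mine; drop the echo to apply).

```shell
# Print the cpufreq-set command for every CPU. Remove the echo to
# actually cap the clock rate.
set_all_cpu_max() {
    local max=$1 cpu
    for cpu in $(seq 0 $(( $(nproc) - 1 ))); do
        echo sudo cpufreq-set --cpu "$cpu" --max "$max"
    done
}

set_all_cpu_max 1.6GHz
```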
The Finding and Reminding document discusses the pertinent design considerations going into the GNOME Shell, but does not discuss what specific elements of the GNOME Shell address the desktop metaphor.
The original GNOME Shell design paper says, "In the Shell design, the `desktop' folder should no longer be presented as if it resides behind all open windows. We should have another way of representing ephemeral and working set objects."
And that is?
I could not find a concise answer to this question, so I'll attempt to do so. I agree with the designers that displaying the desktop folder on the background has a couple of major problems. The icons will either be occluded, or distracting. I think the desktop icons can be replaced in the following ways:
OK, so now I'm using GNOME 3.0. And I'm loving it.
After using GNOME 3 for just half a day, coming home to my GNOME 2 desktop on my laptop seemed really ugly and clunky. I've removed my window list and workspace widget to clear the clutter somewhat, and moved my calendar to the middle so there is some semblance of consistency between home and work.
I have to agree, even after using GNOME since its infancy in 1999, and customizing it thoroughly, that hanging on to Window 95 roots isn't a good thing. I enjoy letting go and not worrying about customizing the desktop.
What is GNOME 3.0? It is a huge redesign, and it works on touchscreens and netbooks as well as on desktops. The GNOME Shell and Mutter (a metacity fork) replace the GNOME Panel and the metacity window manager. Written in JavaScript and using CSS, the GNOME Shell uses the same tools employed by HP's (Palm) WebOS. It uses the Clutter graphics and scene-graph library. Unity, as it turns out, is Ubuntu's offering of a new UI.
I would strongly suggest getting acquainted with GNOME 3.0 before upgrading your system and being shocked by the experience. To start, the GNOME 3 overview page contains some 45-second videos that highlight the new features and might even tempt you into trying the GNOME Shell. That, and the GNOME Shell Design paper, the GNOME Shell Tour, and especially the GNOME Shell Cheat Sheet are great introductions. The GNOME 3.0 release notes also provide a quick overview. Had I read these before the update, it would have helped to reduce the initial shock I received upon logging in.
After a few hours, I no longer miss the window list, fixed workspaces, application menus, and Desktop icons. The Activities window and dynamic workspaces, with an improved Alt-TAB/Alt-` command, make it easy to switch between applications and application windows. The application and recent document search function replaces the application menu and desktop icons just dandy. The notification area has replaced my widgets on the taskbar. I absolutely love the window tiling gestures.
But, while I look forward to GNOME 3 coming to Debian stable, if GNOME 3 is still not for you, please read my comments in Why I hate GNOME 3 to see how to go back to GNOME 2 (mostly).
Here are some additional articles on GNOME 3 that may interest you.
These three contain ways you can customize the GNOME shell. Heck, you may as well browse all of Finnbarr P. Murphy's GNOME articles tagged with GNOME or GNOME Shell (he wrote the first two articles). Thanks, Finnbarr!
This email describes why the minimize button disappeared, and how it has been replaced with hiding, the overview, and workspaces. An interesting quote from this email is:
The real form of feedback that we need going from GNOME 3.0 to 3.2 is careful observation of how users are using GNOME 3 - are they figuring out how to use the overview and workspaces and message tray as we expect them to use them, or are they doing cumbersome workarounds because we took away essential features.
This article describes some of the new gestures, which I mentioned above.
Here's a list of bugs or "features" that I discovered along the way along with fixes or workarounds when possible.
gconftool-2 --set /desktop/gnome/shell/windows/workspaces_only_on_primary false --type boolean
gconftool-2 --set /apps/metacity/general/focus_mode click --type string
These settings can also be made with gsettings (like gconftool-2) or dconf-editor (like gconf-editor).
Evince is broken; the fix involves PackageKit-gtk3-module.
gnome-tweak-tool is like dconf-editor but gives you a few more options to tweak your UI.
There are a few items in my last post that are applicable to GNOME 3 as well (keyboard rate, middle mouse button, etc.).
License: You agree to read Why I Love GNOME 3 before reading this blog posting.
We upgraded to Fedora 15 at work last night. This came with GNOME 3.0. It is likely that you will have a WTF moment when GNOME 3.0 comes to Debian. More likely, you'll have a WTF day. I did. Your first reaction may be to get back to GNOME 2. This post describes how you can (mostly) do that. I hope you find it helpful. (I also hope that you read Why I Love GNOME 3 and find that GNOME 3 works for you too.)
The Fedora 15 upgrade brought some surprising changes. What follows is a list of my questions, and answers that I discovered.
/tmp/xorg.conf. Then move it to /etc/X11.
gsettings set org.gnome.desktop.background show-desktop-icons true

With this setting, my desktop icons were back, AND the context menu was back too.
gsettings set org.gnome.settings-daemon.peripherals.keyboard delay 200
gsettings set org.gnome.settings-daemon.peripherals.keyboard repeat-interval 20
gsettings set org.gnome.settings-daemon.peripherals.mouse middle-button-enabled true
~/.gnomerc is no longer run.
You select the Pictures folder option in the Screensaver Preferences, and you'd expect to see a simple Configure button in the screensaver preferences where you can tell Screensaver where to get your pictures. Nope. Instead, you have to JFGI and discover that you have to edit /usr/share/applications/screensavers/personal-slideshow.desktop and change the Exec line. For example, I have:
Exec=/usr/lib/gnome-screensaver/gnome-screensaver/slideshow --location /home/wohler/doc/photos/apod
The gnome-screensaver program appears to ignore your ~/.local/share/applications folder, which is why you have to edit the system file. GNOME 3 doesn't even have a screensaver (yet).

Note that the path on Fedora 15 is /usr/libexec/gnome-screensaver/slideshow.
After firing up the Juniper VPN, I get martian logging when the other hosts on my local network send out broadcasts. For example:
Apr 26 17:54:33 olgas kernel: [838375.198780] martian source 255.255.255.255 from 192.168.0.104, on dev wlan0
Apr 26 17:54:33 olgas kernel: [838375.198787] ll header: ff:ff:ff:ff:ff:ff:90:27:e4:e9:26:8b:08:00
Apr 26 17:54:33 olgas kernel: [838375.200480] martian source 192.168.0.255 from 192.168.0.104, on dev wlan0
Apr 26 17:54:33 olgas kernel: [838375.200485] ll header: ff:ff:ff:ff:ff:ff:90:27:e4:e9:26:8b:08:00
I discovered that you can turn off the logging of those packets via /proc/sys/net/ipv4/conf/*interface*/log_martians. In my case, this is done with the following:
sudo sh -c "echo 0 > /proc/sys/net/ipv4/conf/wlan0/log_martians"
The proc filesystem is documented in the kernel source tarball's Documentation subdirectory.
You don't want to always ignore martian logging since it helps to identify IP address spoofing. See the second post in this blog warning against it. I therefore clear this setting manually after closing the VPN.
sudo sh -c "echo 1 > /proc/sys/net/ipv4/conf/wlan0/log_martians"
Another reader in the SUSE blog came up with a possible solution, included below. I have not yet tried to play with the routing as he suggested.
Well, seeing as no one else has come up with a solution for me, I found it myself, and I thought I would post it here for the benefit of future readers. It comes down to simple routing on the Linux machine. Even though the default gateway (192.168.2.x) is set for the normal subnet (let's say 192.168.2.0/24), it doesn't want to work for a different subnet (let's say 192.168.0.0/24). So what you have to do is add in a route like so:

Destination: 192.168.0.0/24
Gateway: 192.168.2.x (same as default gateway)
Netmask: 255.255.255.0
Device: (whatever device the communication is coming in on)
After upgrading to squeeze, my Java programs stopped networking. They all complained with, "SocketException: Network is unreachable." While there is some disagreement as to whether this is a Debian bug or a Java bug, the problem extends to both Sun/Oracle's version of Java as well as the OpenJDK.
There are two workarounds. On a per-program scale, pass in the -Djava.net.preferIPv4Stack Java option. On a global scale, edit /etc/sysctl.d/bindv6only.conf, set net.ipv6.bindv6only to 0, and run service procps restart.
I went with the latter workaround since I had to affect a suite of programs. However, there may be consequences of this setting.
I installed squeeze this weekend, but could not start an X session. From a console window, I could see the syntax errors in ~/.xsession-errors from when my .gnomerc was run. It contains the line . ~/.bashrc so that my X session has all the environment variables that it needs, especially PATH.
It turns out that the problem was caused by a symlink from sh to dash. While it might make a new squeeze system run faster, it certainly broke my X session. Perhaps the X session should use bash now, at least where it reads ~/.gnomerc.

The workaround for this is to run dpkg-reconfigure dash and say no when it asks to link sh to dash. See Bug #595906.
2010-09-26 update: The flashplugin-nonfree package in sid installs the new 64-bit version of Flash from Adobe. The instructions in the wiki page below have been updated accordingly.
The Debian wiki came to the rescue. I simply followed the first four steps in section Debian Testing 'Squeeze' amd64 in the FlashPlayer wiki page.
I took some notes where I had to deviate from the release notes which might be helpful to both me and you.
General

I accepted the default for any prompts that were given.
Note that the upgrade created new configuration files. After the upgrade, I replaced configuration files with the new ones and merged my changes into them. Here's what I ran to identify them:
sudo find /etc -name '*.dpkg-*' -o -name '*.ucf-*'
In the spirit of keeping a minimal server, the last step after the reboot, configuration file cleanup, and performing the tasks in section 4.10, Obsolete packages, was to remove any packages that had been installed as part of the upgrade that aren't necessary.
One way to do that is to run one or both of the following commands. The first lists the packages you have explicitly installed, while the second lists all packages that have been installed. Do this before and after the upgrade, diff the output, and remove any packages you don't want.
aptitude search '!(!~i|~M)' -F %p
dpkg --get-selections | grep install
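The before-and-after comparison can be done with comm(1) on sorted lists. A bash sketch (file names are placeholders): save the aptitude output to one file before the upgrade and another afterward, then print only the packages that are new.

```shell
# Print lines present in the "after" list but not the "before" list.
# Uses bash process substitution to sort both inputs first.
new_packages() {
    comm -13 <(sort "$1") <(sort "$2")
}
```

For example: new_packages packages.before packages.after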
Since I maintain my system files with Subversion, I also ran svn diff / (which might not be useful on your system). This identified new files that seemed suspicious. For example, /etc/cups appeared, and I don't print from my server :-). I used dpkg --search file to identify the package that referred to these suspicious files.
Finally, use deborphan --guess-all to identify additional packages that can be removed. If it lists packages you installed, use deborphan -A package to tell deborphan that you need it.
The release notes don't give an example, but here is what they meant:
Package: *
Pin: release a=stable
Pin-Priority: 1001
If you don't have /etc/apt/preferences, consider adding it with this as its sole contents. Comment it out after the upgrade for next time. The comment character is "Explanation:" :-).
Here is the /etc/apt/sources.list file that I used for the upgrade.
deb https://mirrors.xmission.com/debian/ lenny main non-free contrib
deb-src https://mirrors.xmission.com/debian/ lenny main non-free contrib
deb https://security.debian.org/ lenny/updates main contrib non-free
deb-src https://security.debian.org/ lenny/updates main contrib non-free
Please don't use this as is; rather, treat it as an example of what is described in the release notes. In particular, your mirror might be different. Note that I changed etch to lenny and commented out the non-critical sources to make the upgrade go as smoothly as possible. I'll leave the other sources commented out until needed, since lenny has the stuff I was getting from backports and sid.
Release Notes: 4.5.6. Minimal system upgrade
The sudo aptitude safe-upgrade command issued the following errors:
The following packages have unmet dependencies:
  libgl1-mesa-swx11: Conflicts: libgl1 which is a virtual package.
  libgl1-mesa-glx: Conflicts: libgl1 which is a virtual package.
I resolved this by running:
sudo aptitude purge libgl1-mesa-swx11 libgl1-mesa-glx xbase-clients
I reinstalled xbase-clients after the upgrade.
Release Notes: 4.5.7. Upgrading the rest of the system

This went swimmingly!
After running sudo aptitude dist-upgrade, I ran the following (mentioned at the end of this section) to ensure things were clean.
sudo aptitude -f install
This removed the now-unused libio-zlib-perl package.
Release Notes: 4.8.1. How to avoid the problem before upgrading

I didn't want to take the chance of a hanging reboot. Since UUIDs are the wave of the future, I opted to follow the instructions in the section "To implement the UUID approach".
The instructions aren't correct. I got an error when I ran update-grub after updating menu.lst. After replacing /dev/hda* with UUIDs for / and /boot in /etc/fstab, update-grub ran fine.
In addition, I ran the following commands to do the same thing with swap (where /dev/hda5 is the device associated with swap in /etc/fstab). First, I disabled some low-priority processes to avoid using swap.
sudo swapoff -a
sudo mkswap /dev/hda5
The output of the mkswap command provided a UUID that I could use for the swap entry in /etc/fstab. I then ran the following to turn swap on:
sudo swapon -a
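Building the new fstab entry from the UUID that mkswap prints can be done with a little sed. A sketch, operating on sample output with a made-up UUID (real output comes from the `sudo mkswap /dev/hda5` step above):

```shell
# Sample mkswap output; the UUID here is invented for illustration.
sample='Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=3d9c5c1a-2a1b-4c0d-9e8f-112233445566'

# Extract the UUID and print the swap line to paste into /etc/fstab.
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID=\([0-9a-f-]*\).*/\1/p')
printf 'UUID=%s none swap sw 0 0\n' "$uuid"
```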
I then pulled the trigger and rebooted. It worked!
After the reboot, my devices were still /dev/hda{1,5,6}, so I would have been OK had I not done this. However, you may have newer hardware, so perhaps the problem described might be an issue for you. Or not. It's possible that the problem described was only an issue for someone upgrading from, say, a 2.4 kernel.
This can be done completely in XSane. In XSane, select the Multipage mode. Select a working directory and press Create project (the PDF will be written in the directory containing this working directory). Use XSane as usual, including using Acquire Preview to select your scan area and pressing Scan to initiate the scan(s). Press the Save multipage file button in the multipage project dialog when you're done to create the PDF.
Setting the Scansource choice to Automatic Document Feeder can be very helpful. In this case, setting the ADF-Pages item to the number of pages in your hopper (or larger) makes XSane scan the pages automatically. Once, XSane stopped when it was done scanning; another time, it honored my ADF-Pages setting (and dutifully scanned dozens of blank images), so it is prudent not to make this number too large.
The links below describe some pretty fancy ways to turn scans into PDF documents, which are useful for bulk scanning of books and magazines. However, if you aren't scanning that much, creating PNGs (for text) or JPGs (for images) from the scans and running the following command is sufficient:
convert *.png *.jpg document.pdf
I'd suggest using 300 DPI if you plan to print the document. If you're only going to view the document on the screen, you can create smaller documents by scanning at 150 DPI and/or by choosing a Scanmode of Gray.
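One gotcha with the convert command above: it assembles pages in glob order, which is lexical, so page-10.png would sort before page-2.png. This sketch zero-pads the numbers in scan filenames first (the page-N.png naming scheme is a made-up example):

```shell
# Create some sample scan files out of lexical order.
rm -rf /tmp/scans && mkdir -p /tmp/scans && cd /tmp/scans
touch page-1.png page-2.png page-10.png

for f in page-*.png; do
    n=${f#page-}          # strip the prefix...
    n=${n%.png}           # ...and the extension, leaving the page number
    mv "$f" "$(printf 'page-%03d.png' "$n")"
done

ls    # page-001.png page-002.png page-010.png
```

After the rename, `convert page-*.png document.pdf` emits the pages in the right order.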
References:
Creating multi-page PDF documents from scanned images in
Linux
How to create ebooks with Linux
How to scan printed papers and to create Metadata
I recently tried to put together a few videos taken on an iPhone in cinelerra, but the sound in the exported video was corrupted. Kino, however, exported useful video and audio.
And today, I wanted to remove some bits of a video. Again, Kino made it easy. I loaded the video, selected the edit mode, used the cursor and arrow keys to pick frames, pressed the Split scene button at the beginning and end of each scene that I wanted to delete, and then pressed the Cut scene button. Note that the Split scene button creates the split before the selected frame. I then exported the video using H.264 MP4 Dual Pass Tool (Export button, Other tab). According to Optimizing your video uploads, this is a preferred YouTube format.
Please refer to Scott Hanselman's How to rotate an AVI or MPEG file taken in Portrait to see how to rotate a video.
My references include Robbie Ferguson's YouTube video and Kevin Andle's page on creating an envelope in OpenOffice.
The following update to GNOME's select-by-word characters setting in the Edit profile command lets you select an entire email address or URL (including Subversion svn+ssh URL) by simply double-clicking on it.
-A-Za-z0-9,./?%:_@+
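You can check what that character class treats as one double-clickable "word" by using it as a regular expression; a quick sketch (assumes GNU grep's -o option):

```shell
# The select-by-word character class, used as an ERE. Each line of output
# is one token a double-click would select.
cls='[-A-Za-z0-9,./?%:_@+]'
echo "mail me at user@example.com today" | grep -oE "${cls}+"
echo "see svn+ssh://host.example.com/repos/main for the code" | grep -oE "${cls}+"
```

The email address and the whole svn+ssh URL each come out as a single token.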
I got the following message when I tried to load a Subversion dump:
svnadmin: Unable to parse unordered revision ranges '9414-9445' and '7044-8971'
I discovered that I had some uncommitted transactions from years
past using the svnadmin lstxns
command. I cleaned
them up with the following command, but this did not fix the
problem.
sudo svnadmin rmtxns /repos/main $(svnadmin lstxns /repos/main)
I then performed svn info file:///var/tmp/foo
on
the temporary repository I was loading and observed that the
last revision was 9547. I then went into the dump file and
looked for the next revision, 9548, to see what may have caused
the problem. I found the smoking gun!
K 13
svn:mergeinfo
V 177
/home/wohler/branches/common:6912-7042
/home/wohler/branches/tassie:6913-7043
/home/wohler/common:7043-9259
/home/wohler/sydney:8972-9413
/home/wohler/tassie:9414-9445,7044-8971
PROPS-END
Ah ha! Note how the revisions for the tassie branch are swapped. I fixed this by editing the Subversion database. Note that I use FSFS. I'm not sure if this would be possible with BDB.
$ diff db/revs/9548.orig db/revs/9548
--- db/revs/9548.orig	2008-11-15 14:03:54.000000000 -0800
+++ db/revs/9548	2009-04-25 12:08:43.000000000 -0700
@@ -181,7 +181,7 @@
 /home/wohler/branches/tassie:6913-7043
 /home/wohler/common:7043-9259
 /home/wohler/sydney:8972-9413
-/home/wohler/tassie:9414-9445,7044-8971
+/home/wohler/tassie:7044-8971,9414-9445
 END
 ENDREP
 id: 3zl.x0.r9548/2049
I grepped for that same string and found it in revision 9590 as well. I performed a similar fix.
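Spotting unordered ranges by eye gets tedious if the property is long. A sketch of an awk filter that flags any path:ranges line whose range starts are not ascending (it assumes one path:ranges pair per line, fed in as text):

```shell
# Print "unordered: <line>" for each svn:mergeinfo entry whose
# comma-separated revision ranges are out of ascending order.
check_ranges() {
    awk -F: '{
        n = split($NF, ranges, ",")
        prev = 0
        for (i = 1; i <= n; i++) {
            split(ranges[i], rev, "-")
            if (rev[1] + 0 < prev) { print "unordered: " $0; next }
            prev = rev[1] + 0
        }
    }'
}

# The broken entry from the dump above is flagged; the fixed one is not.
printf '%s\n' '/home/wohler/tassie:9414-9445,7044-8971' | check_ranges
```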
I verified the fix with the following:
$ svnadmin create foo
$ svnadmin dump --quiet /REPO | svnadmin load --quiet foo
$ svn log -v file:///REPO > repository.log
$ svn log -v file:///var/tmp/foo > temp.log
$ diff repository.log temp.log
$
I vaguely remember having similar troubles in the past after I converted from the svnmerge properties to the built-in Subversion 1.5 svn:mergeinfo property. At the time, I think I made a similar change to the svn:mergeinfo property manually and checked in the change. I am not sure if the problem was caused by the svnmerge conversion script or by Subversion 1.5 itself.
It's also possible that the svn:mergeinfo property was munged by the bug described in svn: Working copy path 'foo' does not exist in repository.
The moral of the story is that after you convert from svnmerge to Subversion's built-in merge tracking, do inspect the svn:mergeinfo property after the svnmerge conversion. If you see that the revision numbers are not in order, fix them as described above. Also inspect the svn:mergeinfo property after subsequent merges--particularly if you hit any Subversion bugs--before you commit the changes. If you see that the revision numbers are not in order, simply edit the svn:mergeinfo property before you check it in.
My Canon G9 can shoot AVI movies. I recently shot a video that I wanted to edit. I just needed to chop bits out at the beginning and end. However, it might be useful to dub in audio in the future. A quick Google search and apt-cache search turned up the following:
I was able to run GoogleEarth, a 32-bit binary, on my system by
first performing the actions described
in Running 32-bit binaries on a
64-bit Debian system. I then ran the following commands as
described
in Debian
bug #514122. Note that it isn't necessary to link
libcrypto.so.0.9.8
to the installed copy.
$ cd /usr/lib/googleearth/
$ sudo mv libcrypto.so.0.9.8 libcrypto.so.0.9.8-
If GoogleEarth freezes up your system, as it did to mine, ensure
that lib32nss-mdns is installed. Then
remove .googleearth
and rerun
make-googleearth-package --force
(like many others,
I got lots of warnings about shared libraries not being
available when I ran this command; I ignored them). Move
libcrypto.so out of the way as described above and try again.
Since then GoogleEarth has frozen up my system once or twice so you might just try restarting GoogleEarth without bothering to follow the instructions in the previous paragraph. I've since found that GoogleEarth works fine for a period of time after a reboot (without repeating the above steps). I'm hoping that a real 64-bit version will clear this up for good.
Not sure why GoogleEarth continues to complain about not being
able to create .googleearth
after it creates it!
I was able to run GoogleEarth, a 32-bit binary, on my 64-bit system after running the following.
$ sudo aptitude install ia32-libs ia32-libs-gtk lib32nss-mdns
I've also read that you have to install Skype as follows (although I have not yet done this).
$ sudo dpkg -i --force-architecture skype-debian_2.0.0.72-1_i386.deb
This error is described in Debian bug #490395. Until the patch contained within is applied to Debian, apply the patch yourself. It's really easy.
There are two ways to enable Emacs keybindings in Iceweasel (Firefox). The first is the GNOME way: run gconf-editor, edit the /desktop/gnome/interface/gtk_key_theme key, and change it from Default to Emacs.
The GTK way is to append the following
to ~/.gtkrc-2.0
and restart Iceweasel.
include "/usr/share/themes/Emacs/gtk-2.0-key/gtkrc"
gtk-key-theme-name = "Emacs"
The error listed in the title is due to a Subversion bug, which has been fixed in 1.6. The bug was triggered for me when I moved a file in the trunk, merged the renamed file into a branch, and then later tried to merge the branch back into the trunk.
I found a workaround that can be used if you still have 1.5. For me, the renamed file was merged into the branch in revision 9739. Normally, I'd issue the following command to merge the branch back into the trunk:
$ svn merge branch-URL
However, this doesn't work in this scenario. The workaround is to avoid including the revision of when the trunk was merged into the branch. This is fine because the trunk already has the changes. In the example, revision 9688 is the last revision merged into the trunk, and 9750 is the HEAD version on the branch (or simply the largest revision in your repository).
$ svn merge -c9739 --record-only branch-URL
$ svn merge -r9688:9738 -r9739:9750
Although Eclipse seemed to be well-behaved on a 64-bit etch system with Java 6, after upgrading to lenny, it started crashing all the time. I found that the crashes went away after uninstalling Subclipse. But I found a better workaround. My crashes had the following signature:
# SIGSEGV (0xb) at pc=0x00007f762d5c225a, pid=2534, tid=1091451216
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b23 mixed mode linux-amd64)
# Problematic frame:
# V  [libjvm.so+0x1f125a]
...
Current CompileTask:
C2:715  org.eclipse.core.internal.dtree.DataTreeNode.forwardDeltaWith([Lorg/eclipse/core/internal/dtree/AbstractDataTreeNode;[Lorg/eclipse/core/internal/dtree/AbstractDataTreeNode;Lorg/eclipse/core/internal/dtree/IComparator;)[Lorg/eclipse/core/internal/dtree/AbstractDataTreeNode; (469 bytes)
I was able to work around this problem by not compiling the
method listed above. I launch eclipse from a script, so I added
-XX:CompileCommandFile
to eclipse's args as follows:
exec ./eclipse "$@" -vmargs -Xmx1500M -XX:MaxPermSize=256M \
    -XX:CompileCommandFile=/usr/local/etc/hotspot
The file /usr/local/etc/hotspot
contains:
exclude org/eclipse/core/internal/dtree/DataTreeNode forwardDeltaWith
This file is handy if you have several methods to list. If you only have one, you can do this instead:
-XX:CompileCommand=exclude,org/eclipse/core/internal/dtree/DataTreeNode,forwardDeltaWith
Finally, you can add either of these arguments to
your eclipse.ini
in Eclipse's installation
directory.
Earthlink, and some other knaves who maintain nameservers, have a really annoying "feature" in which they return the IP address of one of their search pages if you enter a bogus hostname in your browser's location bar.
The problem is that this breaks the feature in browsers like Firefox which try prepending a www if the host lookup fails. This is also annoying to the user since instead of fixing a simple typo, the user has to clear the bogus URL provided by Earthlink and re-enter the URL.
I found a way to subvert Earthlink's subversion:
the dnsmasq program is a caching-only nameserver which
has a feature which translates bogus IP addresses to NXDOMAIN
DNS records. I configured /etc/dnsmasq.conf as follows (note that I do not have any programs that update /etc/resolv.conf):
no-resolv
server=192.168.1.1
bogus-nxdomain=207.69.131.10
bogus-nxdomain=207.69.131.9
I then edited /etc/resolv.conf
as follows:
search newt.com
#nameserver 192.168.1.1
nameserver 127.0.0.1
Here is what the host
command on a bogus host
returned before I made this change:
$ host lskjdflkdsjf.com
lskjdflkdsjf.com has address 207.69.131.10
lskjdflkdsjf.com has address 207.69.131.9
Host lskjdflkdsjf.com not found: 3(NXDOMAIN)
;; connection timed out; no servers could be reached
And here is what it looks like now, and how it should have looked in the first place!
$ host lskjdflkdsjf.com Host lskjdflkdsjf.com not found: 3(NXDOMAIN)
As a nice side effect, the dnsmasq server also insulates me from transient DNS outages at Earthlink, such as the one seen in the previous example.
I had a problem whereby the host in the URL would randomly be rewritten with www.newt.com, my domain. I suspected that Firefox wasn't getting to DNS and had some sort of "feature" for rewriting the host part. I was partially right.
I suspect that when there was a transient DNS error,
the search newt.com
directive
in /etc/resolv.conf
would cause the host in the URL
to be rewritten with my domain appended. Because of the wildcard
entry in my zone file, my web server's address would be
returned. My web server then rewrote the address with
www.newt.com as the host.
I deleted the wildcard entry and the problem disappeared as soon as the change propagated down to my local nameservers.
After a recent upgrade or something, I noticed that my Bluetooth
light on my ThinkPad was out and Fn-F5 didn't turn it on. I was
able to enable Bluetooth manually with the following command
(which should have been executed
by /etc/acpi/ibm-wireless.sh
):
sudo sh -c 'echo "enabled" >| /proc/acpi/ibm/bluetooth'
So, why didn't this script run when Fn-F5 was pressed?
With the help of HOWTO Dual Monitors, I was able to simply add the three Option lines to my xorg.conf as shown below, restart my X server, and be on my way.
Section "Device"
    Identifier "nVidia Corporation NV44 [Quadro NVS 285]"
    Driver     "nvidia"
    Option     "TwinView"
    Option     "MetaModes" "1920x1200,1680x1050; 1920x1200,1280x1024; \
        1600x1200,1600x1200; 1280x1024,1280x1024; 1152x864,1152x864; \
        1024x768,1024x768; 800x600,800x600; 640x480,640x480"
    Option     "TwinViewOrientation" "RightOf"
EndSection
The MetaModes line is actually all on a single line.
Although I could print, I could no longer scan, and the HP Device Manager from the system tray couldn't communicate with my printer either. I was seeing the following error messages in syslog:
python: hp-toolbox(UI)[6561]: error: Unable to communicate with device (code=12): hp:/usb/OfficeJet_G85?serial=SGG16E0ZRVVL
python: hp-toolbox(UI)[6561]: warning: Device not found
As of version 2.8.2 of hplip, all communications to the hp: device are confined to members of the scanner group. Therefore, the fix was to run sudo adduser wohler scanner, log out, and log back in.
After installing a new system, man started emitting these ugly <80><90> escapes all over the place. I finally found the cause: I had LC_ALL set to en_US.utf8, but LESSCHARSET was still set to latin1. The fix was to change LESSCHARSET to utf-8.
I finally got around to configuring SMTP AUTH (SASL) in postfix.
pwcheck_method: saslauthd
mech_list: plain login
# TLS parameters.
smtpd_tls_security_level = may
smtpd_tls_auth_only = yes
smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache

# SMTP AUTH parameters.
smtpd_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
START=yes
OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
# aptitude install sasl2-bin libsasl2-modules
# dpkg-statoverride --add root sasl 710 /var/spool/postfix/var/run/saslauthd
# adduser postfix sasl
[mail.your-domain.com]:smtp your-login:your-password
[mail.your-domain.com]:submission your-login:your-password
# SASL
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd
smtp_sasl_security_options = noanonymous

# TLS
smtp_tls_security_level = encrypt
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
Run postmap /etc/postfix/sasl/sasl_passwd and restart postfix.
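To test the setup by hand, you need the AUTH PLAIN token, which is base64("\0username\0password"). A sketch with throwaway credentials:

```shell
# AUTH PLAIN expects the NUL-separated username and password, base64
# encoded; printf's \0 escape emits the NUL bytes. "user" and "pass"
# are throwaway examples.
printf '\0user\0pass' | base64
```

You can then open a session with something like openssl s_client -starttls smtp -connect mail.your-domain.com:587 and issue AUTH PLAIN followed by that token.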
See the following references for the whys and wherefores:
James Turnbull, Hardening Linux, 2005, p. 395-400.
Luca Gibelli <nervous at nervous.it>, https://www.nervous.it/txt/Postfix-SMTP-AUTH-4-DUMMIES.html.
Fabian Fagerholm <fabbe at debian.org>, /usr/share/doc/sasl2-bin/README.Debian.
https://www.postfix.org/SASL_README.html.
I was trying to use a feature at Bank of America's Homebanking
(SafePass) and it didn't work for me. First I had to install
Flash 9. But I also discovered that the site was also not
recognizing my Iceweasel browser. I was able to fix this and
enable SafePass by navigating to the
URL about:config, filtering on
"agent", and changing the general.useragent.extra.firefox setting from Iceweasel/2.0.0.14 to Firefox/2.0.
The ath5k driver for the Atheros wireless chipset is built into kernel 2.6.24. Remove madwifi-tools, or blacklist the madwifi modules in /etc/modprobe.d/madwifi. Then run modprobe -r ath* wlan* followed by modprobe ath5k. See https://linuxwireless.org/en/users/Drivers/ath5k and https://madwifi.org/wiki/About/ath5k.
I wanted a list of packages on my current system so that, just in case I needed to recreate my entire system, I could just say:

aptitude install $(cat packages)

Of course, I don't want automatically installed packages in packages. Thanks to Scott Wegner on the Ubuntu forums, here's how I created the packages file:
aptitude search '!(!~i|~M)' -F %p > packages
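When the time comes to restore, comm can show which saved packages are not yet on the new system (the package lists below are made-up samples; comm needs both files sorted):

```shell
# Saved list vs. what's currently installed; comm -23 prints lines only
# in the first file, i.e. packages still missing.
printf '%s\n' aide bash postfix > /tmp/packages.saved
printf '%s\n' bash coreutils    > /tmp/packages.now
comm -23 /tmp/packages.saved /tmp/packages.now
```

The output can be fed straight to aptitude install.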
I installed 2.6.25 from sid since it didn't pull in anything else.
A while ago I installed 2.6.24. While it fixed the problem where gpsbabel could not talk to the usb: device, I found that my wireless (Atheros) connection would drop after a while. The network with 2.6.22 was fine. A second 2.6.24 update fixed the network problem, but a third 2.6.24 update broke it again.
Against my better judgment, I remotely interrupted an aptitude
session so that I could continue with the installation at my
current location. This put the package that was being installed
into the half-installed
state. After this, aptitude
responded with:
Writing extended state information... Error!
E: I wasn't able to locate a file for the sun-java6-bin package. This might mean you need to manually fix this package. (due to missing arch)
E: Couldn't lock list directory..are you root?
After a bit of investigation, I discovered how to fix the dpkg database:
sudo dpkg --force-remove-reinstreq --remove sun-java6-bin
When I last installed lenny, I opted for the encrypted LVM filesystems. I recently ran out of room in /usr, so I now had the opportunity to use LVM! I was apprehensive that the LUKS (Linux Unified Key Setup) encryption might get in the way, but since I wasn't dealing with the root filesystem, I was able to work with the running system and it wasn't an issue.
I learned that changes must be made in 4 MB increments, the size of the physical extent. In my ignorance and inexperience, I was nervous that the size given to resize2fs might round up while the same size given to lvreduce rounded down, which would mean that the end of the filesystem would get guillotined. I picked even gigabyte values, mostly because resize2fs doesn't accept fractional values, but also because a gigabyte is divisible by the 4 MB extent size and the 512-byte disk sector size, as well as any other unit the system might throw at me. If you choose megabytes as your unit, ensure the value is divisible by four. At any rate, I threw in an extra fsck at the end of each operation out of paranoia, and all seemed to go well.
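The divisibility claim is easy to sanity-check in the shell before committing to a size; a trivial sketch:

```shell
# 44 GB expressed in MB, checked against the 4 MB extent size. A zero
# remainder means resize2fs and lvreduce land on the same boundary.
size_mb=$((44 * 1024))
echo "$size_mb MB, remainder $((size_mb % 4)) MB past the last extent"
```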
I first had to decide how much space to transfer. I'm running out of space on my laptop, so I didn't want to steal much from /home. I ran lvs, df, and df -h to get some numbers. I decided that 500 MB would be enough, so I first needed to reduce /home from 44.5 GB to 44 GB.
# lsof
# kill [any processes still running out of /home]
# umount /home
# fsck -f /dev/mapper/olgas-home
# resize2fs -p /dev/mapper/olgas-home 44G
# lvreduce [--test] -L 44G olgas/home
# fsck -f /dev/mapper/olgas-home
# mount /home
Note the --test
lvreduce argument above. I used
that first to see what lvreduce would do. It's more useful when
you aren't using gigabytes as a unit. You'll see what I mean
when you run lvextend in the next example.
I then ran vgs
(and vgdisplay
) to see
the Free Size which should now be around 500 MB. It was 788 MB
in this case and that's the number I used to
grow /usr
in the lvextend command below.
# shutdown now "Resizing filesystems"
# lsof /usr
# kill [any /usr processes still hanging around]
# umount /usr
# fsck -f /dev/mapper/olgas-usr
# lvextend [--test] -L +788M olgas/usr
# resize2fs -p /dev/mapper/olgas-usr
# fsck -f /dev/mapper/olgas-usr
# mount /usr
I then opted for a quick reboot so that if that caused trouble,
it would be now rather than when I least expected it. When the
system returned, df showed that I once again had breathing room
in /usr
. While it took a while this time around for
me to think I knew what I was doing, the next time, it'll go
quickly. Unless it's the root filesystem, in which case I'll
have to learn how to turn on LUKS when running with a Live CD.
References:
AJ Lewis, LVM HOWTO, https://www.tldp.org/HOWTO/LVM-HOWTO/.
Bodhi Zazen, How to Resize a LUKS Encrypted File System, https://ubuntuforums.org/showthread.php?t=726724.
Martti Kuparinen, Hard Drive Encryption in My Ubuntu Installation, https://users.piuha.net/martti/comp/ubuntu/en/cryptolvm.html.
I just learned the difference between /dev/random
and /dev/urandom
. Use the former when you need
strong randomness for keys; use the latter when you need speed
and don't expect the bits to be broken (like when scattering
random bits on a cleaned disk partition or when preparing the
partition for encryption).
This morning, ssh worked without having to run ssh-add, which is
strange because I expire my passphrase. I then ran ssh-add and
got a SSH_AGENT_FAILURE
message. Apparently,
gnome-keyring usurped ssh-agent as reported
in BTS
#473864.
Until I learn more about gnome-keyring, I've disabled the ssh component as Josh Triplett suggested by unsetting the gconf key /apps/gnome-keyring/daemon-components/ssh. You can do this in gconf-editor, or run the following command:
gconftool-2 --set /apps/gnome-keyring/daemon-components/ssh false --type=bool
2010-12-05 update: It appears that this bug is fixed in squeeze. This workaround is no longer necessary.
I was spurred on by Tommy Trussell to enable syncing over USB so that I could take advantage of the sync button on the cradle and because it's much, much faster than using net: over Bluetooth.
When I plugged in the Treo and hit the button on the cradle,
there wasn't a single message in the syslog and lsusb didn't
list the device either. I found that if you
unload ehci_hcd
, then the system recognizes the
Treo. However, after a reboot, I found that my system recognized
the Treo (under uhci_hcd
) even though
the ehci_hcd
module was still loaded, so all is
well.
I also found that pilot-xfer -l -p usb:
didn't
connect initially. It seems that the first time you HotSync, you
need to run the pilot-xfer command before starting HotSync on
the Treo. After that first time, the order doesn't matter.
I've updated Using the Palm Treo 650 with Debian GNU/Linux accordingly.
In order to get the usb: filename to work with gpsbabel,
follow the directions
in Hotplug
vs. Garmin USB on Linux, namely, add the following
to /etc/modprobe.d/local
:
blacklist garmin_gps
And add the following
to /etc/udev/rules.d/51-garmin.rules
:
SYSFS{idVendor}=="091e", SYSFS{idProduct}=="0003", MODE="0666"
However, while this worked for kernel 2.6.18, later kernel versions broke it! It is still not working as of 2.6.22.
Newsflash! I inserted the garmin_gps
module and
tried using /dev/ttyUSB0
instead
of usb:
and I was able to back up the Garmin! It
appears that this driver has been repaired--somewhat--along the
way. I still had some errors uploading routes, although with
persistence, they eventually all arrived. I wasn't brave (or
stupid) enough to try uploading large tracks or waypoint files
though. So, I'll probably still try the usb:
file
again once 2.6.24 is installed.
My router's DHCP table was showing a blank where my laptop's hostname should be. I fixed this by uncommenting the send host-name line in /etc/dhcp3/.
I just made a donation to the Software Freedom Law Center. Consider making a donation yourself.
I was getting errors like dund[31782]: Failed to connect to the local SDP server. Connection refused(111) in my syslog and HotSyncs that were failing with Faulty modem. I worked around this problem by running the following commands:
$ sudo killall dund
$ sudo /usr/bin/dund --listen --persist --auth call treo
I've reported the bug as BTS #452869.
The AIDE that comes with etch is very hard to keep quiet. Marc Huber suggested that the lenny version might be a bit quieter, so I ran the following to get the latest and greatest on my etch system:
apt-get source aide
aptitude install dpatch libmhash-dev flex libgcrypt-dev
(cd aide-0.13.1 && fakeroot dpkg-buildpackage -b -uc)
sudo dpkg -i aide_0.13.1-8_i386.deb aide-common_0.13.1-8_all.deb
These commands are listed here mostly so that I can clean up if aide 0.13.1-8 hits backports.
I could mount data CDs, play DVDs with totem, and play audio CDs with gnome-cd. However, I was not getting the usual CD icon in rhythmbox when an audio CD was inserted, and sound-juicer produced a No CD-ROM drives found--Sound Juicer could not find any CD-ROM drives to read message and exited.
Both rhythmbox and sound-juicer played CDs just fine a week before my disk crashed and I reinstalled lenny from scratch.
I found that rebooting cleared this problem.
My disk crashed on Friday so I bought a new one and installed lenny from scratch. One problem I encountered is that the top of the PDF printed from Gnucash was truncated. It seems that this was observed by others in the gmane.linux.debian.user thread entitled Text on printed pages truncated with Message-ID 45A2C319.8020400@heard.name.
Interestingly, after I configured my printer in CUPS, this problem went away. This was confirmed by one of the installation gurus:
Jim Paris <jim@jtan.com> wrote:

> Interestingly, the top of Gnucash reports printed to PDF were truncated
> until I installed a printer in CUPS, and then the problem disappeared.
> Is a CUPS installation default suboptimal?

Maybe it was a paper size issue, and installing a printer changed your default papersize? You can change the current setting with "dpkg-reconfigure libpaper1". I noticed in your system information:

> Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=ANSI_X3.4-1968) (ignored: LC_ALL set to C)

If LC_ALL was set to C during installation, I think libpaper1 would have defaulted to A4 (because "locale width" and "locale height" return A4 size in that case).
2010-03-27 update: Another problem that can cause margins to go away is the new "borderless" media size provided by HPLIP. To see if this is the source of your problem, go to System -> Administration -> Printing, bring up the context menu for your printer, view the Properties, go to the Printer Options dialog, and view the Media Size field. If this field is set to one of the borderless flavors, use either the vanilla or AutoDuplex (if you have a duplexer) flavor instead.
On one of my etch systems, I got this error message after a
recent upgrade. (I did not get it on my etch server, nor my
lenny laptop.) I fixed this by adding the following
to /etc/apt/apt.conf
:
APT::Cache-Limit "20000000";
I installed Linux kernel version 2.6.22 and found that
the ibm_acpi
module was renamed
to thinkpad_acpi
. Unfortunately, this broke the
Fn-F4 hotkey combination used to suspend my laptop. I'm assuming
that Debian
Bug report #434845: acpi-support: ibm_acpi module renamed
thinkpad_acpi in kernel 2.6.22 is related, but the suggested
fix didn't work for me. The GNOME Shut Down menu item does work
however.
Gpsbabel still doesn't work--I'm keeping a version of 2.6.18 around for it. This might be related to the CONFIG_USB_SUSPEND problems that have been reported. But then, it could be CONFIG_USB_SUSPEND which fixed suspending under ACPI on my ThinkPad.
If your ISP (such as Earthlink) blocks port 25, and someone else in your household controls the authentication credentials and understandably does not want to share them with you, how do you send mail?
I got my hosting company to poke a hole in port 587 (submission) and then updated postfix on my laptop and on the server as follows:
master.cf (on server my.relayhost.com):

submission inet n - - - - smtpd

main.cf (client):
relayhost = [my.relayhost.com]:587
Note that I use pop-before-smtp for authentication.
Thanks to a post on gmane.linux.debian.user.laptop from Stefan
Monnier, I installed the hibernate package, and created a file
called
/etc/hibernate/scriptlets.d/local
which contains
the following code which turns off the Ultrabay LED. If you want
to use it, replace my initials (BW) with your own since the
hibernate namespace is global.
# -*- sh -*-
# vim:ft=sh:ts=8:sw=4:noet
# Ideas from /usr/share/hibernate/scriptlets.d/hardware_tweaks.

# ibm_acpi proc directory
BW_IBM_ACPI_PROC=/proc/acpi/ibm

BwIbmAcpiStartSuspend() {
    # Turn off Ultrabay LED.
    IbmAcpiLed 4 off
    return 0 # this shouldn't stop suspending
}

BwIbmAcpiEndResume() {
    # Turn on Ultrabay LED.
    IbmAcpiLed 4 on
    return 0
}

BwIbmAcpiOptions() {
    if [ -d "$BW_IBM_ACPI_PROC" -a -z "$BW_IBM_ACPI_HOOKED" ]; then
        AddSuspendHook 12 BwIbmAcpiStartSuspend
        AddResumeHook 12 BwIbmAcpiEndResume
        BW_IBM_ACPI_HOOKED=1
    fi
    return 0
}

BwIbmAcpiOptions
I had found that with lenny and the 2.6.21 kernel, ACPI suspend was finally working. Yay! Further, I felt that the built-in power management stuff might be working as well, and that I could remove the acpid package and dispense with the /etc/acpi scripts, since I was seeing some gnome-power-management warnings in the syslog.
When I pressed Fn-F4 however, I got the message:
gnome-power-manager: (wohler) A security policy in place prevents this sender from sending this message to this recipient, see message bus configuration file (rejected message had interface "org.freedesktop.Hal.Device.SystemPowerManagement" member "Suspend" error name "(unset)" destination ":1.22") code='9' quark='dbus-glib-error-quark'
After a little digging, I discovered that I had to add myself to
the powerdev
group. Then I got this message:
gnome-power-manager: (wohler) Doing nothing because the suspend button has been pressed
This was fixed by going into gconf-editor and changing the value of /apps/gnome-power-manager/action_button_suspend to suspend.
Copyright © 2007-2024 Bill Wohler. Last modified: Sat Jul 6 11:30:04 AM PDT 2024.