Monday 8 December 2014

BTRFS - One Year Later

It was a year ago that I decided to give BTRFS a try.  Previously, I had been using primarily EXT4 partitions, with a few XFS partitions on really slow hard drives (4200rpm).

It was also about a year ago that I started using LXC (Linux Containers).  I needed a way to maintain multiple LXC instances while reducing disk storage overhead.  Initially, this is what caught my attention about BTRFS.  BTRFS provides a typical journaled Linux filesystem akin to EXT4, but adds "snapshot" and "cloning" functionality akin to QCOW2 with KVM.  I could maintain multiple LXC containers and let them share common files, reducing disk consumption.  With BTRFS, I could clone or snapshot LXC instances and avoid duplication -- only worrying about storing the changes between instances.  Not many filesystems provide this ability while still maintaining the properties of a typical EXT filesystem and avoiding a file container like QCOW2.  I immediately started using a BTRFS filesystem for storing all my LXC instances.

Since my LXC usage was recreational and non-critical, I experimented with subvolumes, snapshots (including read-only), compression and a few other features of BTRFS.  Having encountered no stability or reliability issues, in the summer of 2014 I decided to give BTRFS a trial run as the primary filesystem for a new Linux Thinkpad.  BTRFS had proven beneficial for LXC, and the snapshot functionality could equally let me effortlessly back up and roll back Linux upgrades in the future.  Previously, whenever I moved to a new update pack (distro upgrade) in LMDE (Linux Mint Debian Edition), I would create a Clonezilla backup -- obviously an offline process that required me to boot a Clonezilla image to perform the backup.  I've had to roll back upgrades before, for all kinds of reasons.

Like I typically do, I cloned a Linux EXT4 image of a working system (using Clonezilla) to the new system.  I used the BTRFS tools to convert the EXT4 image to a BTRFS filesystem.  Since the conversion process doesn't modify the data itself (only the inodes and metadata), it is a relatively quick process (mere minutes) and provides an equally quick way to roll the filesystem back to EXT4.  I even (successfully) tested the rollback.
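For reference, the conversion and rollback are each a single command; a minimal sketch, assuming the partition (here /dev/sda1, as an example) is unmounted and fsck'd clean first, and that the saved ext2 image subvolume hasn't been deleted before you attempt the rollback:

# convert an existing EXT4 filesystem to BTRFS in place
btrfs-convert /dev/sda1

# roll back to EXT4 (only possible while the saved original image subvolume still exists)
btrfs-convert -r /dev/sda1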

It was around this time that I had my first BTRFS data disaster.  Ironically, it had nothing to do with the new system, but with an existing system where I was using BTRFS only for LXC storage.  I had previously encountered situations where I would find my LXC storage mounted read-only.  A reboot would cure the issue, but I wasn't sure what was triggering it.  I was convinced that hibernating the system while LXC instances were running might be causing the issue, so I modified my hibernation scripts to explicitly freeze any running LXC instances prior to hibernating.

One day when the problem had surfaced, I decided to resolve the issue by remounting the BTRFS subvolumes as read-write.  This proved disastrous.

When simply unmounting and remounting read-write failed, I decided to try the BTRFS fsck equivalent to examine the filesystem and fix any errors.  It reported errors with the BTRFS structure, which I later determined to be false, but I attempted to fix those issues anyway.  That was my first mistake.  I was not aware that the BTRFS filesystem repair utilities are wildly experimental and can themselves corrupt the filesystem and cause further problems.  Warnings throughout the net reiterate not to use btrfsck to try to repair filesystems.  Why the tool exists, I cannot be certain, but one day it may provide some kind of meaningful function.  My use of the tool led to a corrupt BTRFS root filesystem, which could no longer be mounted.  Because the root filesystem was corrupt, all subvolumes were inaccessible and considered lost.

I later determined that my BTRFS filesystem was never initially damaged or corrupt (at least, prior to running btrfsck).  I discovered that the BTRFS filesystem only ever ended up mounted or remounted read-only after shutting down an LXC.  I quickly realized that the mount scripts installed in the init.d routines of a number of my LXC instances were "successfully" causing my BTRFS subvolumes (or the root filesystem itself) to unmount and remount read-only.  I had not seen this behaviour with EXT4.  And because I don't usually shut down my LXCs without rebooting the host itself, the issue was rarely triggered.  So the issue was never caused or contributed to by hibernating the host.  By simply not shutting down the LXCs (use halt, or leave them running and let them stop when the host shuts down), or by removing/disabling these mount scripts, the problem was resolved.  I have not encountered the issue since.

I was very lucky in this disastrous scenario.  I had lost the entire filesystem hosting all my LXCs on one system, but because I had used the send/receive functionality in BTRFS to "clone" or back up my LXCs from that system to a central hard drive, I was able to simply "receive" the backups of the filesystems back onto the drive.  My LXCs don't contain data themselves, so the instances are fairly static.  The disaster at least let me validate recovery of the filesystems from backups.

But from the disaster and my other day-to-day experience with BTRFS on various systems for LXC storage, and on the newly set up laptop using BTRFS as the primary drive, here is my list of cautions when using BTRFS:

1) BTRFS is considered "stable" but is still being heavily developed.
For this reason, use the most current "stable" kernel that you can.  I would not consider using anything less than 3.13 for day-to-day use, and definitely nothing less than 3.10.  If you don't have other issues or reasons not to use 3.15, I would highly recommend 3.15 as your day-to-day kernel release.  The most current BTRFS changes are in 3.15, and there has been a considerable amount of enhancements and fixes that might prevent unnecessary issues.

2) "Stable" but not necessarily "production" ready.
Considering the repair tools themselves are destructive, I would not use this filesystem for "data" storage.  For example, I would not use these filesystems to store primary or sole backups of data.  I continue to use EXT3 and EXT4 with it's journaling and widely reliable and stable filesystem tools (such as fsck).  Generally, my "data drives" are mirrored across different hardware to account for hardware failure.  When proper processes are followed (performing clean dismounts and syncs), I haven't encountered a filesystem corruption issue that journaling could not resolve.  When that day occurs, the fact I maintain mirror devices will hopefully assist with recovery.  Accounting for the fact that these tools can be considered not production ready for BTRFS, you have no recourse to address filesystem corruption caused by software error, human error or hardware failure.  Even with mirroing, it is plausible that if your only recourse is to restore your data, there is a higher probability of a double-failure, where both mirrors fail, when using a filesystem that doesn't provide for reliable recovery.   Thusly, on systems where there is constant turn of data, I would also not recommend BTRFS.  On systems such as laptops for the primary operating system filesystem, as long as you have backups of the filesystem on other devices to restore from, I would recommend you do included BTRFS for your consideration.  However, if these systems are critical, where downtime is not tolerable, I would approach with caution.  If you are using the filesystem on a primary laptop while traveling, I would make sure a copy of the filesystem or alternative filesystem be accessible either on a memory key or flash device.  If you are traveling (where obviously you may not have your backup drives with you) and unfortunately encounter an issue with your filesystem for which you can't boot or mount your system, you'll need that alternative boot device.

3) Don't treat subvolumes within the same device as your sole backups.
If you create subvolume snapshots and store them on the same root filesystem, then don't consider them backups.  They'll prove worthwhile for rolling back changes on your primary filesystem, but if you encounter a hardware failure or corrupt your root filesystem, your subvolumes will serve no purpose.

4) Mount subvolumes only, wherever possible.
Instead of mounting your root filesystem to access your subvolumes, consider mounting only the subvolumes.  A human error such as an "rm -rf *" on your root filesystem will take out all your subvolumes, but the same error performed on a subvolume will only affect that subvolume.  Likewise, if you encounter a bug or other filesystem corruption while only a subvolume is mounted, the root filesystem (and thus other independent subvolumes) should be unaffected.

5) Use send/receive to store mirror copies of your subvolumes on other devices.
When backing up, generate a read-only snapshot (snapshots are essentially free in terms of disk storage), and send these snapshots to a suitable BTRFS filesystem stored on a separate disk.  The space-saving properties are maintained when transmitting your backups.  Use these read-only snapshots as a reference point, and create your read-write filesystems as snapshots of them.
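A rough sketch of that backup cycle (subvolume names and mount points here are examples, not my exact layout):

# take a read-only snapshot of the subvolume to back up
btrfs subvolume snapshot -r /mnt/btrfs/lxc-web /mnt/btrfs/lxc-web_2014-12-08

# send it to a BTRFS filesystem on a separate backup disk
btrfs send /mnt/btrfs/lxc-web_2014-12-08 | btrfs receive /mnt/backup

# later backups can be sent incrementally against a snapshot both sides already have
btrfs send -p /mnt/btrfs/lxc-web_2014-11-08 /mnt/btrfs/lxc-web_2014-12-08 | btrfs receive /mnt/backup

# on the backup disk, create a read-write subvolume from the read-only copy if needed
btrfs subvolume snapshot /mnt/backup/lxc-web_2014-12-08 /mnt/backup/lxc-web_restore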

6) Track your snapshots and subvolumes by using logical naming, and track hierarchies in a text file.
If you are creating a lot of subvolumes, performing a lot of snapshots and sending/receiving (cloning) your filesystems across multiple systems, it is smarter to ensure the subvolume and snapshot names include references such as source and date.  I also recommend tracking this in a text file, recording which subvolume originated from which snapshot and which systems it is shared with.  As you start having branches distributed among multiple targets, it becomes harder to trace the origin of a particular subvolume.  I store this information in a text file kept in the root filesystem, so that I don't rely on memory, or on tracing through tool output, to confirm the origin of any given subvolume.

7) Use compression with lzo wherever possible.
When comparing performance with EXT4, BTRFS can sometimes perform on par, but generally can be up to 3x slower in some real-world situations.  With the filesystem's built-in compression, coupled with the "lighter" lzo compression algorithm, a BTRFS drive can perform better than EXT4.  Your individual situation needs to be evaluated to determine whether the benefits of BTRFS will outweigh the performance risks, and understanding in which situations BTRFS performs well is critical.  Generally, in random read-write scenarios (most prominently on filesystems housing databases), BTRFS will underperform EXT4 by up to 3x.  Compression in this case provides little or no benefit, because the data tends not to be in a compressible state.  In cases where data is primarily read, BTRFS with lzo compression will outperform EXT4.  In situations such as the primary filesystem for a Linux distro on a laptop, you are generally reading and loading programs, so the benefits of BTRFS will outweigh the costs.
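Enabling it is just a mount option; a sketch (the device and mount point are examples):

# mount with lzo compression; only data written after mounting gets compressed
mount -o compress=lzo /dev/sdb1 /mnt/btrfs

# or make it permanent in /etc/fstab:
# /dev/sdb1  /mnt/btrfs  btrfs  defaults,noatime,compress=lzo  0  0

# optionally recompress existing files in place
btrfs filesystem defragment -r -clzo /mnt/btrfs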

8) Understand how the benefits will actually help you.
For example, one of the benefits of BTRFS that I haven't touched on is built-in filesystem checksum tracking.  This means the filesystem has a means of checking for corruption from bit rot, etc.  Unlike on an EXT4 filesystem, bits that have changed, or files that have become corrupt for reasons other than journal errors or outright hardware failures, are detectable, because the filesystem tracks checksums of the data.  Although bit rot is not new, it is becoming more common as society moves to cloud storage (where bits are stored elsewhere and can be lost or distorted in transmission) and to SSDs or other flash devices, where data corruption is not caused by traditional hardware failures such as heads smashing into platters or loss of magnetic polarity / magnetic interference, but by memory chips failing, electrical interference, wear-and-tear on the chips themselves, or software errors/bugs in the controllers managing the data on the SSD devices.  But you have to understand what the benefit really is.  Bit rot on an EXT4 filesystem will most likely go unnoticed until you access the data file and realize it is corrupt (if you even realize it).  It could be as simple as a number being incorrect on a spreadsheet or an "artifact" in a media file.  You may not realize the data is corrupt (and if you back up the corrupt data to another device, your backup will become corrupt too).  You would have to exert ongoing effort to track checksum values on files and determine which changes are due to valid file changes and which were introduced by bit rot.  However, just as on EXT4, identifying bit rot is only part of the solution.  On an EXT4 filesystem, you would replace the corruption with a "known good" version of the file stored elsewhere.  The checksum benefit in BTRFS is the same: it does not provide a means to fix corruption, only a means of detecting it.  Detecting bit rot does not mean the filesystem will automatically repair it, so don't throw away your backups.  There is a difference between perceived benefit and actual benefit.
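If you want to exercise those checksums proactively, the scrub command reads everything back and verifies it; a minimal sketch (the mount point is an example), keeping in mind that on a single-device filesystem a scrub can generally only report data errors, not repair them:

# read all data and metadata and verify checksums
btrfs scrub start /mnt/btrfs

# check progress and results
btrfs scrub status /mnt/btrfs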

My year-long experiment with BTRFS has been successful, and for reasons already explored in previous posts, or to be explored in future posts, I'll continue to expand my use of BTRFS in the years to come.

Wednesday 27 August 2014

Linux Mint Debian Edition (LMDE) UP8 on Thinkpad X131e


I recently acquired a Thinkpad X131e.  It's the 3372-3FU, which is the AMD model -- meaning AMD CPU, ATI Radeon HD 7340 graphics, and Broadcom wifi.  I'm used to Intel-based systems and Nvidia Optimus systems (with Intel as fallback).

Having previously used a system with a Broadcom wifi chipset, I knew I would need to load some non-free, closed-source wifi drivers on this new system.

Getting Started

I replaced a Thinkpad X61t (tablet) with this system.  The Thinkpad X61t had my standard LMDE install with Update Pack 8.  It had an Intel CPU (Core 2 Duo), Intel Centrino wireless and an OCZ Vertex 3 60GB SSD.

I thought I would at least be able to transplant the SSD and boot into my install (perhaps with no wifi, and falling back to default video drivers).

My first challenge was with the SSD.

Low Profile Drive

I had completely neglected the fact that the industry created a new, shorter 2.5" hard drive standard.  The drives are 7mm tall versus the standard 9.5mm.  My two-year-old OCZ Vertex 3 drive would not fit in this Thinkpad X131e, which apparently uses the "low profile" 7mm drives.

Looking at the SSD case, I realized that most of the case was likely just "filler".  What stopped me from finding out was a "warranty void" sticker.  With still a year of warranty on the drive, I decided to go for it and void my warranty -- if the drive dies, I'll be eager to replace it with a larger drive anyway.  Opening up the drive, I realized my intuition was correct -- the SSD consisted of a slim board (no more than 5mm in depth).  I kept the bottom of the case to protect the board from ever making contact with the metal bottom of the laptop, and placed a plastic insulator sheet, adapted from an old 2.5" HD caddy, on the top side of the drive (to keep the top portion from making contact with the laptop case door).  Warranty issue aside, the SSD issue was resolved.

Wifi causes Linux to hang at startup

I quickly realized upon bootup that I couldn't make it past the init.d startup scripts without the laptop completely hanging.  I did some trial-and-error, disabling startup services (by starting up in recovery mode, which drops me to a root prompt before the services start).  In the /etc/init.d directory, you can simply remove the execute permission on startup services that you suspect are causing the issue, and repeat until you find the offending service.  I quickly realized that it was the networking service causing issues, and shortly after that, the wifi in particular.  To allow me to at least boot up the system with ethernet, I needed to completely disable wifi.  It was not sufficient to go into the BIOS and disable the Wireless LAN antenna; I found I needed to go to IO Port Access and disable Wireless LAN.  Upon doing this, I was able to boot up with a network connection (over ethernet).
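The trial-and-error looks something like this (the networking service is used here only as an example of a suspect):

# temporarily prevent a suspect service from starting
chmod -x /etc/init.d/networking

# reboot and test; once the culprit is identified, restore the others
chmod +x /etc/init.d/networking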

Installing broadcom wifi drivers


Fairly simple to do.  With a working internet connection over ethernet, I was able to run:

sudo apt-get install broadcom-sta-dkms

This downloaded and installed the kernel modules for the broadcom wifi.

Broadcom STA is a binary-only device driver to support the following IEEE
802.11a/b/g/n wireless network cards: BCM4311-, BCM4312-, BCM4313-,
BCM4321-, BCM4322-, BCM43224-, BCM43225-, BCM43227-, and BCM43228-based
hardware.

After re-enabling Wireless LAN in IO Port Access, I was able to boot up with a wifi connection.

Xserver fails

With xorg not understanding the AMD/ATI graphics on a regular install (you get the error "No Screens Found" on startup of xorg/xserver), I realized I needed to install some working graphics drivers.

Avoiding the debate over installing the latest open-source drivers or the closed-source proprietary drivers, I decided to opt for the proprietary which happen to be fairly up-to-date.

I made my first mistake by trying to install the ATI Catalyst 14.10 and 14.20 beta drivers from the Linux section of the ATI website.  The installer reports that it installs the drivers successfully, but it isn't actually doing anything.  Trying to run the buildpkg manual method with the software fails, and you end up going in an endless loop of troubleshooting.  I wasted a day before realizing I simply wouldn't be able to use the installer method, as it's designed to install only to a 32-bit system, whereas I'm using a 64-bit install.

I decided to use the Debian repository instead.  I already had the non-free directive in my apt source.list file.  I simply followed the instructions found on the Debian Wiki for ATI (https://wiki.debian.org/ATIProprietary).  

I made sure I already had the linux-headers for my kernel installed (uname -a showed 3.11-5.dmz.1-liquorix-amd64, so I ensured the headers were installed by running sudo apt-get install linux-headers-3.11-5.dmz.1-liquorix-amd64).  Next I installed the ATI drivers by running:

sudo apt-get install fglrx-driver

After the kernel driver installs, you need to have the installer configure xorg to understand how to work with ATI.  You would run:

sudo aticonfig --initial

Apparently, if the kernel module fails to build/install/load, the aticonfig changes to xorg are sufficient to get X windows to load with a fallback graphics driver.  It will be a display with a slow refresh rate and other performance issues, but it will load.  I realized that the version included with Update Pack 8 doesn't compile the module properly for the Linux kernel 3.13 that I had installed.  Apparently ATI announced it does not yet support Linux kernels > 3.11.  I did find references to patches people have created to get the kernel modules to build on 3.14, etc., but rather than deal with that now, I decided to just use Linux kernel 3.11.  With that kernel, I was able to successfully build and load the ATI Catalyst 13 beta drivers from the Linux Mint Debian Edition Update Pack 8 repository.  Backlight adjustment via the function keys works properly with the proper module loaded -- on the fallback, the backlight function keys don't work, a sign you are running on the fallback driver.

Now, if you boot up the system, you should have wifi and proper working X windows.

With the proper driver loaded, 3D acceleration is supported (and the ATI control panel will work).  The command fglrxinfo will return a GL version number; when the fallback driver is active, fglrxinfo won't return a GL number.  Further, with the proper driver loaded, lsmod will show an fglrx module loaded.


Additional Notes

After some reading, I decided to add the directive "nomodeset" to my grub.cfg.
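For reference, on a Debian-based system the persistent way to do this is usually through /etc/default/grub rather than editing grub.cfg directly (which gets regenerated); a sketch:

# in /etc/default/grub, append nomodeset to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

# then regenerate grub.cfg
sudo update-grub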

Compiz wouldn't load correctly (the window manager crashed, windows displayed without a window manager, etc.).  I expected compiz to work with a near-latest ATI proprietary driver installed and loaded (with the 3D kernel module compiled and loaded).  I did read that the latest open-source ATI driver SHOULD also have working compiz support when the Mesa GL lib is loaded.  Naturally you cannot have both the open and closed source drivers installed on the system at the same time.  Once I have things stable, I will troubleshoot further.

Hibernation/Suspend

I use pm-suspend and pm-hibernate, set up with laptop tools, on all my systems.  Both fail to suspend or hibernate this system.  I can suspend and resume successfully by running, as root:

echo "mem" >  /sys/power/state

The same trick for hibernation, 

echo "disk" >  /sys/power/state

results in a system that cannot thaw from hibernation properly.  Running the hibernate command does work though.

Therefore, I use the following two commands

to suspend:
echo "mem" >  /sys/power/state

to hibernate:
hibernate


Tuesday 24 June 2014

Adding exFAT support for Linux

I recently stumbled across a need to store files larger than 4GB on my 32GB microSD card.  It naturally came formatted as FAT32, and I needed to keep compatibility with a device that requires FAT32 or exFAT.

exFAT does support file sizes greater than 4GB, so I needed to reformat the card, but I noticed that Linux Mint Debian Edition (LMDE) and Debian don't support exFAT natively out-of-the-box.

To gain support, install exfat-fuse and exfat-utils:

sudo apt-get install exfat-fuse exfat-utils


Then, to format your microSD card (or other device):

sudo mkfs.exfat /dev/mmcblk0p1
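If your desktop doesn't auto-mount the card afterwards, it can be mounted manually (the mount point is an example):

sudo mkdir -p /mnt/sdcard
sudo mount -t exfat /dev/mmcblk0p1 /mnt/sdcard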

Friday 20 June 2014

Rejuvenating the DLINK DNS323 NAS with Alt-F


I've purchased three DLINK DNS323 units over the years.  I retired them a number of years ago because of the issues discussed below.

Problematic support for 4k sector sizes

The user needs to format drives that use 4k sector sizes with a firmware image that supports them (the last official release, 1.10), but the community of users has pointed out various bugs and issues with 1.10, suggesting users not run this firmware on an ongoing basis.  Therefore, for the purposes of formatting (and properly aligning the sectors on these drives), it is recommended the user upgrade their firmware to 1.10 and immediately downgrade to 1.08 after formatting.  You just need to set up the drives on a supported firmware -- you don't have to actually keep using that firmware.

No support for EXT4

No sense staying with EXT3 when journaling is significantly better in EXT4.

No support for hard drives > 2.2TB (such as 3TB drives)

With the official firmware updates retired, no new hard drives will ever be supported officially.

No GPT partition tables (only MBR)

Along the same lines as not supporting newer, larger hard drives.

Formatting the wrong drive bug

Although DLINK recognized the problem in earlier firmware images and says it was corrected, I personally encountered the issue when a pre-existing drive was formatted by the firmware instead of the selected new drive, despite both drives having different sizes, different manufacturers and different models -- so this was not a case of mistaking the identity of the correct drive to format.


Last year I came across Alt-F, a free alternative firmware for these NAS units.  I flashed it on one of the NAS units and never looked back.  It really breathes life into these old units, and I've been making use of my obsolete and retired NAS units ever since.

The primary features that stand out for me are:

  • Support for EXT4
  • Support for GPT
  • Support for larger/newer hard drives (3TB, 4TB, etc)
  • Use pre-existing hard drives without reformatting or modifying the partition tables
  • Support for NTFS drives in the left and right bay (in addition to USB)
  • RAID that includes support for drives over the USB port
  • ffp package support builtin

I've put together a video walkthrough of the Alt-F firmware below.



Thursday 19 June 2014

USB Logitech microphone support in Kazam on Linux

I had an issue where my USB Logitech microphone would not show up in the Kazam screencast application.  The Linux sound recorder could see the microphone, and I could record audio from it through the sound recorder, but Kazam could not.

I ended up creating a loopback Pulse Audio device using:

pactl load-module module-null-sink sink_name=Virtual1    
pactl load-module module-loopback latency_msec=1 sink=Virtual1 

In Kazam, I would select Sound from speakers (not sound from microphone):



In preferences, for Speakers, select the microphone.



Now the sound from the microphone will be relayed through the loopback to Kazam.  Since there is no speaker output, there will be no feedback.
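To undo the loopback when you're done (the module index is whatever pactl reports on your system):

# find the module indexes that were loaded
pactl list short modules | grep -E 'module-(null-sink|loopback)'

# unload them by index
pactl unload-module <index>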


Wednesday 4 June 2014

Which filesystems I use and why

I've been frequently asked which filesystems I prefer.  I decided to put together a post discussing my filesystems of choice and general analysis of the criteria.

Windows VMs/KVMs etc

The obvious choice is NTFS.  In KVM, the NTFS filesystem is stored in a QCOW2 container (allowing the filesystem to be easily resized/extended, while snapshots/clones reduce data storage consumption).

Raspberry Pi / Beagle Bone Black / Cubieboard

read-only EXT4 with journaling off

I generally turn journaling off (by running tune2fs -O ^has_journal /dev/sda) to preserve the write health of my flash and SD media.  I have literally killed genuine SanDisk premium SD cards with "active" Raspberry Pi systems that had journaling on.

I prefer to make my partitions read-only, using tmpfs to store writes.  If I want writes to be persistent, I store the data on a secondary, writable EXT4 partition.  The root and boot partitions, however, I keep read-only unless I'm performing an upgrade on the system.  By keeping the filesystem read-only, I avoid corrupting the SD card with bad writes caused by bad power sources, overclocking or improper shutdown.  I found that even with reliable power sources, no overclocking and always ensuring safe shutdowns, my SD cards would become corrupt over time (just generally less often -- say over 6 months instead of 6 weeks, but corrupt nevertheless).  I have seen several exhaustive tests done by other Raspberry Pi users that reach the same conclusion -- that only read-only filesystems on SD cards will prevent corruption.
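A rough sketch of what this looks like in /etc/fstab (the partition layout and sizes here are examples, not my exact setup):

# read-only root and boot, a writable data partition, and tmpfs for volatile writes
/dev/mmcblk0p1  /boot     vfat   ro,noatime              0  2
/dev/mmcblk0p2  /         ext4   ro,noatime              0  1
/dev/mmcblk0p3  /data     ext4   defaults,noatime        0  2
tmpfs           /tmp      tmpfs  nosuid,nodev,size=30m   0  0
tmpfs           /var/log  tmpfs  nosuid,nodev,size=20m   0  0

# temporarily remount read-write when upgrading:
#   mount -o remount,rw /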

Debian-based Linux Systems on SSD

EXT4 

I use the mount parameters rw,errors=remount-ro,user_xattr,noatime,discard; noatime reduces unnecessary updates of inodes when accessing files.  The discard option enables TRIM support and should be enabled as long as your drive supports it (hdparm -I /dev/sda | grep TRIM).  If you enable discard and your drive doesn't support it, you risk data loss.
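As an /etc/fstab sketch (the device is an example; substitute your own root device or UUID):

/dev/sda1  /  ext4  rw,errors=remount-ro,user_xattr,noatime,discard  0  1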

EXT4 has better support than previous versions (EXT2, EXT3), even though there is increased journaling in EXT4 (which can reduce the lifespan of SSD drives).  If you have frequent writes and a means to mitigate the risk of data loss from system crashes, then consider turning journaling off (tune2fs -O ^has_journal /dev/sda).

I reduce the amount of writes to the SSD by using various tmpfs mounts (stored in RAM) for logging, tmp and caching.  I use the find -mtime command to identify files that are frequently written to (in most cases, for unnecessary reasons).  I then schedule startup, suspend/hibernate and shutdown routines that sync these tmpfs mounts to disk.  I store everything from system logs to the Google Chrome / Mozilla configuration and cache directories in RAM.

Debian-based Linux Systems on slow hard drives

XFS

I have one Sony VAIO system that uses a 1.8" 4200rpm SATA hard drive.  A really slow hard drive benefits from the speed advantages of XFS.  One should note that data loss can become an issue with system crashes, etc.  You can consider mounting the XFS filesystem read-only, and use tmpfs to store writes to be synced to disk in a safe fashion (similar to the Raspberry Pi setup).


Linux Containers / LXC and Linux Based VMs/KVMs

BTRFS 


For any system where you use kernel 3.9 or greater, you should consider using BTRFS to store your operating system.  BTRFS allows you to "snapshot" and clone your LXC/VM with minimal data duplication.  Similar to QCOW2 (in KVM) and snapshots in VMware, BTRFS will help you reduce your LXC/VM disk consumption.
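As a sketch, assuming the base container's rootfs is itself a BTRFS subvolume (paths and container names here are examples):

# clone a container by snapshotting its rootfs subvolume
mkdir -p /var/lib/lxc/web01
btrfs subvolume snapshot /var/lib/lxc/base/rootfs /var/lib/lxc/web01/rootfs

# the container config still needs to be copied and adjusted for the clone
cp /var/lib/lxc/base/config /var/lib/lxc/web01/config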

NAS units (such as DNS 323)

EXT4 with SAMBA and NFS

I use Alt-F firmware on my DNS 323 units, so my NAS units do support EXT4.  A lot of older NAS units, or DNS 323 units running the official firmware, may not support EXT4.  In those cases I use EXT3, and as I upgrade the firmware on the NAS or replace it with a more capable NAS, I convert to EXT4.  The journaling of EXT4 provides the best means of preventing data loss, short of moving to a filesystem such as BTRFS or ZFS.  My NAS units, even with modern open-source and community-driven firmware like Alt-F, do not support BTRFS or ZFS due to hardware limitations (the memory and CPU required to support "tomorrow's filesystems").  I mitigate the risk of data loss due to corruption by maintaining offline mirror drives for each NAS drive.

Samba shares are enabled for sharing with Windows and XBMC on the Raspberry Pi, and NFS for mounting the drives on Linux systems.

Comments about BTRFS and ZFS

I really believe that BTRFS and ZFS will be the filesystems of the future.  BTRFS is under heavy development and is meant to offer a lot of the benefits and features of ZFS, but without the licensing restrictions.  If you have situations/applications where you can't run newer versions of the Linux kernel, then you should avoid BTRFS -- it's always best to only use BTRFS on systems with at least kernel 3.9.

ZFS is an Oracle-created filesystem that has great data integrity (to guard against bit rot and data corruption), built-in encryption and compression, and data deduplication (great for snapshotting and cloning).  ZFS has been ported to Linux using either native kernel modules or FUSE.  There are licensing issues, as ZFS isn't licensed under the GPL, and therefore the Linux implementation has stripped-out features and is not as stable as the original Sun Solaris implementation.  As the original implementation is 64-bit, the Linux native kernel modules are only compiled for amd64, so if you need to support 32-bit systems, you should not consider ZFS.  Further, ZFS needs a fair amount of RAM (4GB) to manage its caching.  Also, if you often run the latest Linux kernel versions, you may have issues, as you have to wait for the native kernel modules to be updated for your kernel release -- due to the licensing restrictions, they don't come bundled with the kernel, so you need to wait for the support community to update the modules afterwards.  I've been using kernel 3.13 for 6+ months, and I still don't have ZFS kernel modules that work on my kernel version.

ZFS also has embedded support for Samba and NFS -- that is, a filesystem can be shared automatically, without any additional configuration, simply by enabling sharesmb or sharenfs.
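For instance (the pool and dataset names are examples):

zfs set sharenfs=on tank/media
zfs set sharesmb=on tank/media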

BTRFS is a good compromise compared to ZFS -- a way to gain some of the features without dealing with some of the problems.  Both would be considered "non-production" at this point.  Don't store critical data on them without good mechanisms in place to safeguard against the bugs that often pop up in bleeding-edge development work.


Tuesday 3 June 2014

Map Suspend & Hibernate to shortcut keys

One of the issues I recently encountered when upgrading to Linux Mint Debian Edition (LMDE) Update Pack 8 (UP8) was the loss of functionality of the suspend and hibernate keys.  At first I was under the impression that it was just an issue on my Thinkpad, but I discovered the issue existed on all the systems that I upgraded.  It's a known issue and was identified in this bug: https://bugs.launchpad.net/linuxmint/+bug/1276678

There might be other workarounds to resolve the issue, but I took this as a good opportunity to finally map out a new suspend/hibernate key combination that I could use across all my systems.  FN-F4 for suspend and FN-F12 for hibernate may be universal on Thinkpads, but my Vaio and Ultrabook use different suspend and hibernate keys, so my finger-memory sometimes slips up.

To tackle both problems, I decided to create a common shortcut key for each function across all my systems.  Because the FN keys tend to be tied to hardware routines, I settled on SHIFT+F4 for suspend and SHIFT+F12 for hibernate.

The first step was writing suspend and hibernate scripts that could be invoked by my user ID without having to provide a sudo password.

I added the following two lines to my /etc/sudoers file.

durdle t420-ssd = (root) NOPASSWD: /usr/sbin/pm-hibernate
durdle t420-ssd = (root) NOPASSWD: /usr/sbin/pm-suspend

Substitute durdle with your user ID, and t420-ssd with your hostname.

You may find that you need to change the permissions of /etc/sudoers to allow writing before you can edit it (a chmod 740 /etc/sudoers before updating the file, and a chmod 440 /etc/sudoers after updating the file, will do the trick).

Next, I decided to create shortcut scripts in my ~/bin/ folder.  This will allow me to quickly suspend and hibernate from the command line and allow me to easily tie the two routines to the Autokey program.

My two scripts look like this:

~/bin/hibernate.sh
#!/bin/sh
sudo /usr/sbin/pm-hibernate

~/bin/suspend.sh
#!/bin/sh
sudo /usr/sbin/pm-suspend

If you can't use the pm-hibernate or pm-suspend programs to hibernate or suspend, you can substitute your equivalent calls.  Or if you need to add wrappers around them, these shell scripts are the place to put them.  Prior to kernel 3.13, my Vaio needed some additional system calls before calling pm-hibernate to ensure a proper hibernate; I place those calls in my hibernate.sh.
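One detail worth noting: the scripts need to be marked executable so Autokey (or the shell) can run them directly:

chmod +x ~/bin/suspend.sh ~/bin/hibernate.sh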

I use a program called Autokey for various shortcuts and keyboard macros.  If you don't normally use Autokey, you will need to ensure that it starts up on boot up.  You can do this by calling it in your "Startup Applications" list.


The program is fairly self-explanatory.  You will create two new scripts, enter the following (substituting your username), and then assign the Hotkey <shift>+<f4> and the Hotkey <shift>+<f12>:

for suspend:
import os
os.system('/home/durdle/bin/suspend.sh')

for hibernate:
import os
os.system('/home/durdle/bin/hibernate.sh')

That's it!  Now when you press SHIFT+F4, your system should suspend and when you press SHIFT+F12, your system should hibernate, all without entering a password.

Tuesday 8 April 2014

Fixing ipod touch / iphone support in LMDE after Update Pack 8

Since upgrading to Update Pack 8 (UP8) in Linux Mint Debian Edition (LMDE), I'm no longer able to mount my iPod Touch 2G (3.1.3) or iPhone 3GS (6.1.4).  Both devices used to appear as mounted devices when plugged in, and I could use gtkpod to sync to them.  Since the upgrade, when I plug in either device, it appears as unmounted, and when I try to mount it via caja, I get a DBUS timeout error.

dmesg has the following output:

usb 6-1: new high-speed USB device number 3 using ehci-pci
usb 6-1: New USB device found, idVendor=05ac, idProduct=1294
usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 6-1: Product: iPhone
usb 6-1: Manufacturer: Apple Inc.
usb 6-1: SerialNumber: <<>>
ipheth 6-1:4.2: Apple iPhone USB Ethernet device attached
usbcore: registered new interface driver ipheth
IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
pool[5534]: segfault at 300000003 ip 00007fc287ff8cd6 sp 00007fc283ffbc50 error 6 in libimobiledevice.so.4.0.1[7fc287fed000+1a000]
pool[5558]: segfault at 300000003 ip 00007f61bd848cd6 sp 00007f61b984bc50 error 6 in libimobiledevice.so.4.0.1[7f61bd83d000+1a000]


I found a solution posted at http://beautifulajax.dip.jp/?p=536.

In a nutshell, I downloaded the following 4 files:

libusbmuxd1_1.0.7-2_amd64.deb
libimobiledevice2_1.1.1-4_amd64.deb
fuse-utils_2.9.0-2+deb7u1_all.deb
ifuse_1.0.0-1+b1_amd64.deb 
 from:
https://packages.debian.org/en/wheezy/ifuse
https://packages.debian.org/en/wheezy/fuse-utils
https://packages.debian.org/en/wheezy/libimobiledevice2
https://packages.debian.org/en/wheezy/libusbmuxd1
Install each of the 4 by running dpkg:

sudo dpkg -i libusbmuxd1_1.0.7-2_amd64.deb
sudo dpkg -i libimobiledevice2_1.1.1-4_amd64.deb
sudo dpkg -i fuse-utils_2.9.0-2+deb7u1_all.deb
sudo dpkg -i ifuse_1.0.0-1+b1_amd64.deb

Create a mount point, such as /mnt/iphone:
sudo mkdir /mnt/iphone
sudo chmod 777 /mnt/iphone/

To mount the iphone / ipod touch, connect the device and run:

ifuse /mnt/iphone/

To unmount the iphone / ipod touch, run:

umount /mnt/iphone/






Fixing Disappearing Network Manager from LMDE after Update Pack 8

One of the issues I witnessed on one of my Linux Mint Debian Edition (LMDE) systems, after performing Update Pack 8, was that the network icon would disappear from the notification bar.  This made it extremely difficult to manually switch between access points (automatic switching was fine and transparent), such as when you need to connect to a new access point or select a particular hot spot among multiple known/saved access points.

The notification icon is actually a program called nm-applet.

When I started it manually, it would exit with the following error:

** (nm-applet:27520): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Rejected send message, 2 matched rules; type="method_call", sender=":1.92" (uid=1000 pid=27520 comm="nm-applet ") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination="org.freedesktop.NetworkManager" (uid=0 pid=4075 comm="/usr/sbin/NetworkManager ")
** Message: applet now removed from the notification area
** Message: applet now embedded in the notification area
** Message: applet now removed from the notification area

The issue is that /etc/dbus-1/system.d/org.freedesktop.NetworkManager.conf gets reset with Update Pack 8.

To resolve the issue, make a backup of the file /etc/dbus-1/system.d/org.freedesktop.NetworkManager.conf and modify the file by replacing all of the deny entries with allow.  The easiest way is to open it in vi and enter :%s/deny/allow/g to automatically replace all the occurrences.  The next time you reboot, nm-applet will stay alive.  You'll also be able to manually start nm-applet in the meantime.
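Equivalently, from the command line (a sketch that keeps the backup as a .bak copy):

sudo cp /etc/dbus-1/system.d/org.freedesktop.NetworkManager.conf /etc/dbus-1/system.d/org.freedesktop.NetworkManager.conf.bak
sudo sed -i 's/deny/allow/g' /etc/dbus-1/system.d/org.freedesktop.NetworkManager.conf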

Friday 7 March 2014

Debugging XBMC Plugins

I've been learning XBMC plugin development over the past few months.  I thought it would be a good time to review my remote debugging configuration.  When I had started setting up a debugging environment, I discovered the information available was fragmented and obsolete.

A helpful source is HOW-TO:Debug Python Scripts with Eclipse.  I will refer to it throughout.

Development Environment (IDE)


You can do a lot with just gedit (with its Python syntax highlighting).  But when things get more complicated, you'll need a place to debug.  An IDE that I've used before, on Java and Perl projects, is Eclipse.  Eclipse has a PyDev plugin available, which can be easily installed through Updates (Help...Install New Software) and is required to add integrated Python functionality to Eclipse.  Because I didn't have Eclipse installed on my current system, I decided to opt for downloading LiClipse instead.  It is "lightweight" and has PyDev already installed.  You'll need a Python interpreter installed -- not an issue on most Linux systems.  XBMC also uses your native Linux Python interpreter.

Refer to section 2 in HOW-TO:Debug Python Scripts with Eclipse for setting up PyDev within XBMC.

Setup Workspace

I have all my XBMC plugins located in a development folder named development/xbmc, and I simply set my workspace to this folder.  For each plugin, represented by a single subfolder (such as XBMC-pluginname), I go to File -> New, PyDev Project.  Give the project the appropriate name (XBMC-pluginname) and select Python for the project type.  Select Grammar Version 2.7.  For Interpreter, select Default, then click "Click here to configure an interpreter not listed".  In the upper right pane, click Quick Auto-Config, which will configure an interpreter for you.  If you have multiple Python interpreters installed, repeat until the required version is created.

Before clicking the Finish button, change the radio button to "Don't configure PYTHONPATH (to be done manually later on)".


Refer to section 3 in HOW-TO:Debug Python Scripts with Eclipse for more assistance.  This part of the Wiki is out-of-date, however.

Setup Pysrc (for remote-debugging)

If you want to remote-debug your plugin (that is, step through your code while it is running), you will need to do some further setup.  First, you need to locate the pysrc library files that came with your PyDev install.  If you do a (find . | grep pysrc) from within your Eclipse install, you should find them located in the plugins org.python.pydev_3... folder.  Mine were located at ./plugins/org.python.pydev_3.3.3.201401272005/pysrc/.

Next, you'll need to find the global library location for your Python interpreter.  You can do so by running the following:

python -c "from distutils.sysconfig import *; print(get_python_lib())"
My path was /usr/lib/python2.7/dist-packages.  Copy the pysrc folder into this noted folder (you'll need to be root).  

You'll also need to create an empty file called __init__.py within this pysrc folder (/usr/lib/python2.7/dist-packages/pysrc).  This will allow XBMC to traverse into it.
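Putting those two steps together (the PyDev version folder is whatever your find command reported -- the path below is just mine as an example):

sudo cp -r ./plugins/org.python.pydev_3.3.3.201401272005/pysrc /usr/lib/python2.7/dist-packages/
sudo touch /usr/lib/python2.7/dist-packages/pysrc/__init__.py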

Modify Your XBMC Plugin

To enable remote-debugging in your application, you will need to add some code to your plugin (say into your default.py, below the import statements).  The code I've put in mine resembles this:

try:
    remote_debugger = addon.getSetting('remote_debugger')
    remote_debugger_host = addon.getSetting('remote_debugger_host')

    # append pydev remote debugger
    if remote_debugger == 'true':
        # Make pydev debugger works for auto reload.
        # Note pydevd module need to be copied in XBMC\system\python\Lib\pysrc
        import pysrc.pydevd as pydevd
        # stdoutToServer and stderrToServer redirect stdout and stderr to eclipse console
        pydevd.settrace(remote_debugger_host, stdoutToServer=True, stderrToServer=True)
except ImportError:
    sys.stderr.write("Error: " + "You must add org.python.pydev.debug.pysrc to your PYTHONPATH.")
    sys.exit(1)
except :
    pass

The code will look for the parameters "remote_debugger" and "remote_debugger_host" in your settings.xml file.  If they are not found, an exception will be thrown and debugging will be disabled.  This allows you to maintain one version of your code.  You could use a constant variable instead, but then you'd need to set it somewhere.  I see others implementing a variable like REMOTE_DEBUG and then setting it to True or False in the same class, but if you forget to switch it to False before pushing your code out, it'll fail when someone deploys it.  I find my approach elegant, as the debugger code stays inactive unless the user adds the following two lines to their settings.xml:

    <setting id="remote_debugger" value="true" />
    <setting id="remote_debugger_host" value="localhost" />

The remote_debugger_host setting allows you to use a remote system for debugging, rather than the same machine.  This allows you to run your plugin on a Raspberry Pi and debug it from your laptop.  If you are debugging on the same device (a laptop running XBMC), then you can leave this as localhost.  Otherwise, change it to the IP of your debugging machine.

The above code was adapted from  HOW-TO:Debug Python Scripts with Eclipse .


Raspberry Pi for Debugging


Just like you set up pysrc earlier on your computer, if you want to run your plugin from a device such as a Raspberry Pi and remote-debug it, you'll need to copy the pysrc folder, found earlier, to your Raspberry Pi.  Follow the same steps under Setup Pysrc, copying the source folder from your computer (with Eclipse or LiClipse installed) to your device.  In my case, my Raspberry Pi used the same dist-packages folder (so I copied to /usr/lib/python2.7/dist-packages/pysrc).

You may need to add port 5678 to your computer's firewall to allow inbound connections from your debugging device.
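If you use iptables, a rule along these lines would do it (the source subnet is an example -- restrict it to wherever your XBMC device lives):

sudo iptables -A INPUT -p tcp --dport 5678 -s 192.168.1.0/24 -j ACCEPT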


Adding PyDev Debugger to your View


There are several ways of accomplishing this.  I prefer going to Window -> Customize Perspective, selecting Command Groups Availability and ticking "PyDev Debug".  This adds a PyDev menu with "Start Debug Server" and "Stop Debug Server" buttons to my Eclipse menu.


Actually Doing the Remote-Debugging...


To start the debugger, I flip to the Debug perspective and click Start Debug Server from the PyDev menu.  In the Debug Server window, you'll see "Debug Server at port: 5678".

Now you can open XBMC.  If you have the remote-debugger code added to your plugin AND have remote_debugger and remote_debugger_host set in settings.xml, then when you load the plugin in XBMC, it should try to communicate with the Eclipse debugger.  If it fails to connect to the debugger, it'll push a 110 Connection Failed to the xbmc.log file.  If you see a "You must add org.python.pydev.debug.pysrc to your PYTHONPATH" error, that means pysrc isn't accessible on your XBMC device (did you add the pysrc folder to the dist-packages folder?  did you include an empty __init__.py file within that directory?).

If it is working, XBMC should "halt" while your debugger is passed control.  In the Debug panel, you should see an unknown entry with MainThread show up.  This represents the session running on your XBMC device.  You can now step through the code using standard debugging techniques.  You can also simply "Resume" (F8) to let the plugin run to completion.



Refer to HOW-TO:Debug Python Scripts with Eclipse for further advice on debugging.

Have fun :) 

Tuesday 21 January 2014

Re-purposing Old SD Cards for Booting a Raspberry Pi off USB

My server Pi machine has a SanDisk Ultra 32GB card which has a standard Raspbian install.  Shortly after putting the system together, I found it more stable to run Linux directly off the attached USB hard drive (which I always have mounted).  I created a Linux OS-partition on the USB drive (16GB in size) and rsynced the OS from the SD card to the USB OS-partition.

The design of the Raspberry Pi insists on booting off the SD card.  Once the system is bootstrapped, it can boot off USB or the SD card.  The bootstrap partition is very small (59MB).  There is no sense using a full 32GB card just for bootstrapping.

I found an old 32MB (yes, megabytes) micro-SD card and an even smaller 16MB SD card.  The bootstrap partition on the standard-image 32GB card is actually 18MB.  However, on inspection, I noticed there is a 9MB kernel_emergency boot image contained on it.  I would hazard a guess that this boot image is booted as a "safe-mode" option if an upgrade (or other activity) causes the default kernel boot image to become inoperable (the default kernel boot image is only 2.8MB).  If I exclude the kernel_emergency boot image, I'll be able to fit the contents on the smaller SD card without an issue.

Backup the Data off the 32GB SD Card


Looking at the data from the 32GB SD card:

Disk /dev/mmcblk0: 31.9 GB, 31914983424 bytes
4 heads, 16 sectors/track, 973968 cylinders, total 62333952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00014d34

        Device Boot      Start         End      Blocks   Id  System
/dev/mmcblk0p1            8192      122879       57344    c  W95 FAT32 (LBA)
/dev/mmcblk0p2          122880    62333951    31105536   83  Linux

The MBR is stored at /dev/mmcblk0, the bootstrap partition is a FAT partition (/dev/mmcblk0p1), and the linux OS-partition is /dev/mmcblk0p2.

A reminder that the dd commands will most likely require root access -- so run them as root or use sudo.  When copying an OS-partition, as I've done below, run the rsync as root or with sudo, to ensure all the root-restricted files are transferred.

Backup the MBR


The first step is to backup the MBR off the 32GB SD card.

This can be accomplished by running dd:

dd if=/dev/mmcblk0 of=/u01/sd_pi_mbr.iso bs=512 count=1
The MBR is the first 512-byte sector of /dev/mmcblk0.  This is where the boot code is, along with the partition table.  Depending on the card reader you are using, the card will either appear as the preceding device, or it could appear as /dev/sda, etc.  Run fdisk -l to verify the device [and ensure that you are not selecting the wrong device].

A little background information on the MBR.  The bootstrap (boot code) part of the MBR is contained in the first 446 bytes (this explains one of the restore commands that follows).  The partition table is stored in the following 64 bytes (unless the destination SD card is identical in size and partition layout, we won't restore this part of the MBR).  The MBR signature is the last 2 bytes (again, unless the destination SD card is identical in size and partition layout, we won't restore this part either).

Backup the Bootstrap Partition


The second step is to backup the bootstrap partition off the 32GB SD card.

This can be accomplished by running dd:

dd if=/dev/mmcblk0p1 of=/u01/sd_pi_boot.iso
In my case, I'll be modifying the existing bootstrap to exclude the kernel_emergency.img.  Therefore, I ended up mounting /dev/mmcblk0p1 and copying the contents of the filesystem to a folder called sd_pi_boot, excluding kernel_emergency.img.  The result is a folder less than 10MB in size, which will clearly fit on my 16MB SD card.

Backup the OS-Partition

You'll eventually need to copy the OS-partition over to the destination device.  In my case, it'll be a USB hard drive, and not the SD card.  You can either use dd to take a backup of the image (dd if=/dev/mmcblk0p2 of=/u01/sd_pi_os.iso) or you can copy the files.  I copied it to the destination location using rsync.  I could also make an image backup, restore the filesystem using dd, and then resize the partition to occupy the entire destination partition, but I find it faster, with fewer steps, to just mount the filesystem on the SD card and use the rsync -avix command to create a mirror copy of the filesystem [reminder to use root or sudo].
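A sketch of the rsync approach (the mount points are examples; /dev/sda3 is the destination partition in my case):

sudo mount /dev/mmcblk0p2 /mnt/sdroot
sudo mount /dev/sda3 /mnt/usbroot
sudo rsync -avix /mnt/sdroot/ /mnt/usbroot/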


Format and Partition the Destination SD Card


The first step is to ready your SD card.  Write a new MBR to the card using a disk utility or fdisk.  Then create a FAT partition on the card using the same tool (in my case, with the 16MB card, I created a partition the full size of the card).
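As a sketch with fdisk (the device name is an example -- double-check it with fdisk -l first):

sudo fdisk /dev/mmcblk0
#   o   create a new empty MBR (DOS) partition table
#   n   create a new primary partition spanning the card
#   t   set the partition type to c (W95 FAT32 LBA)
#   w   write the changes and exit
sudo mkfs.vfat /dev/mmcblk0p1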

Restore the MBR


The next step is to restore the MBR onto the new SD card.

This can be accomplished by running dd:

dd if=/u01/sd_pi_mbr.iso of=/dev/mmcblk0 bs=446 count=1
Take note of the bs parameter of 446.  Because the new card is a different size (with different partition sizes) than the original, we restore only the "bootstrap" (boot code) part of the MBR -- we don't want to overwrite the partition table or MBR signature that we created in the preceding step.

Restore the OS-Partition to your USB device [or other destination]

You'll need to copy the OS-partition over to the eventual destination device.  In my case, it'll be a USB hard drive, and not the SD card.  Either use dd to restore the filesystem image or use the rsync -avix command to create a mirror copy of the filesystem from your backup [reminder to use root or sudo].

Restore the Bootstrap Partition


The next step is to restore the bootstrap partition.

This can be accomplished by running dd:

dd if=/u01/sd_pi_boot.iso of=/dev/mmcblk0p1
In my case, since I modified the existing bootstrap to exclude kernel_emergency.img, I copy over the files from my backup folder sd_pi_boot instead of using dd.

I end up with a bootstrap partition containing the following files:

bootcode.bin  cmdline.txt  config.txt  fixup_cd.dat  fixup.dat  fixup_x.dat  issue.txt   kernel.img  start_cd.elf  start.elf  start_x.elf

Modify the OS-Partition Boot Parameter



On the resulting SD card, I need to instruct the bootstrap where to find the OS-partition on boot up.  These startup details are contained in the file cmdline.txt.

The original file contains the following:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

The OS-partition, more formally known as the root partition, is noted as /dev/mmcblk0p2, which was its original location.  This parameter needs to be updated with the new destination.  In my situation, I restored it to a USB hard drive.  It was not the primary partition on the drive (as it was added after the drive was originally partitioned); therefore, in my case, the partition is /dev/sda3.  If it is the primary partition on the first hard drive connected to the Raspberry Pi, then it'll most likely be /dev/sda1 instead.

My resulting cmdline.txt is as follows:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/sda3 rootfstype=ext4 elevator=deadline rootwait



That's it!  Now perform a sync command (to ensure the write buffers are flushed to the SD card), then safely unmount the partitions on the SD card and eject it from your computer.  Plug it into the Raspberry Pi along with your USB hard drive [or other device containing your new OS-partition].  When you power on the Pi, it should start accessing the partition containing your OS within about 3 seconds of receiving power.  In my case, my system is headless (no screen), so I use the LEDs to confirm what is happening.  I notice the hard drive LED starts blinking red (as opposed to green) within 2 seconds of the Pi being powered on, validating that I've done everything properly, and that the Raspberry Pi was able to read the new SD card, boot using the MBR, bootstrap using the bootstrap partition, and begin booting off the OS-partition.  If you don't see the same result, first validate by examining the screen for any errors.  Most likely your cmdline.txt isn't pointing to the OS-partition properly.  If you receive SD-card errors, first validate that the structure of the SD card is correct, and that you didn't eject/unmount before the changes were completely written to the card.