Experimenting with 64k pages for AArch32 code

Someone asked me about 64K pages and the AArch32 ABI again recently. It’s a question that has passed across my desk multiple times and even followed me through multiple companies. Given that long history, and the changes made to the Arm toolchains to ensure freshly built ELF binaries can be loaded, I was interested to see whether a Debian 9 (Stretch) armhf userspace would run on a machine with 64K pages. I also had access to a Developerbox to help me indulge my curiosity. The short answer is that it is *not* possible to boot a Debian Stretch armhf container on a machine with 64K pages because the kernel cannot map the init process… but that is only half the story; it really was very close to working!



Getting started with GStreamer 1.0 and Python 3.x

Way back in the mists of time (or a little over nine years ago if you prefer), Jono Bacon wrote a very detailed blog post describing how to use GStreamer with Python.

Getting started with GStreamer with Python

Mr. Bacon went into a lot of detail, so much so that now, almost ten years later, it is still widely credited in other blog posts and remains highly ranked by search engines.

However, both Python and GStreamer have moved on a bit over the last decade. The bindings have moved on even more: they now use the almost unspeakably awesome PyGObject to generate most of themselves automatically by introspection.

In short, Jono’s code doesn’t work any more. However it doesn’t take much work to massage the first example until it does.

#!/usr/bin/env python3

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstBase', '1.0')
gi.require_version('Gtk', '3.0')
from gi.repository import GObject, Gst, GstBase, Gtk

Gst.init(None)

class Main:
    def __init__(self):
        self.pipeline = Gst.Pipeline.new("mypipeline")

        self.audiotestsrc = Gst.ElementFactory.make("audiotestsrc", "audio")
        self.pipeline.add(self.audiotestsrc)

        self.sink = Gst.ElementFactory.make("autoaudiosink", "sink")
        self.pipeline.add(self.sink)

        self.audiotestsrc.link(self.sink)

        self.pipeline.set_state(Gst.State.PLAYING)

start = Main()
Gtk.main()

Roughly speaking, “all” we had to do to update this example was:

  1. Update the imports to gather everything we need from the gi module.
  2. Add Gst.init(None). (This should probably be Gst.init(sys.argv), but that’s not how the original code behaves so it’s not how this port behaves either.)
  3. Replace the lowercase g with an uppercase G in both Gst and Gtk.
  4. Tweak the Gst.ElementFactory and Gst.State calls; these were in a flatter namespace in the older PyGst bindings.
  5. Replace alsasink with autoaudiosink. Strictly speaking this is not required; alsasink still works just fine. However autoaudiosink can adopt PulseAudio when available, something else that has changed since this code was originally written.

… and that’s it. Not much to it really. Hopefully it’s enough to set you on your way if you want to pull ideas from old tutorials and blog posts into your own shiny new GStreamer application.

Happy hacking!


Debugging ARM kernels using fast interrupts [LWN.net]

I suspect that there are relatively few regular readers of this blog. However, if you are one of them and are fed up with hastily written articles and inadequate proof reading, may I recommend you take a look at my recent article for lwn.net describing some of my recent Linux kernel work:

Debugging ARM kernels using fast interrupts

Not only did I proof read it, proof read it and proof read it again, but the terrific folks over at lwn.net did the same, resulting in an article I’m really proud of.

PS: if you are not an lwn.net subscriber then you’ll have to wait until next week to read it…


Use “#!/usr/bin/env hbcxx” to make C++ source code executable

I normally write some kind of personal toy during the holiday season. For example, last year I wrote a toy fibre scheduler to go with a microcontroller project I was working on. This year, however, I’ve cooked up something and can’t quite decide if it’s a great idea, a pointless idea or a stupid idea. One thing is clear: to find out which of the three possibilities it is, this bit of code needed to be packaged up properly as a product and shared with the wider world. Basically hbcxx uses the Unix #!/path/to/interpreter technique to make C++ source code directly executable.

I’ve been taking a new look at C++. There is a palpable sense of “buzz” in the C++ community as they realize that, with C++11, they are sitting on something pretty special. The advocacy from the presenters at Going Native this year was remarkably effective (although if you take my advice you won’t watch Scott Meyers’ brilliant Effective C++14 Sampler until you know what std::move is for).

Quoting Bjarne Stroustrup: “Surprisingly, C++11 feels like a new language.” Considering its source it is not at all surprising that this quote is absolutely on the money: modern C++, meaning C++11 or later, does feel like another language. This is not because the language has been changed massively but because the new features encourage a different, and slightly higher level, way to think about writing C++. It’s faster and more fun, supports lambdas, has tools to simplify memory management and includes regular expressions out-of-the-box.

I was actually pretty amazed to see regular expressions in the standard C++ libraries, so that, coupled with humane memory management (albeit humanity you have to explicitly opt in to) and the auto keyword, really got me thinking differently about writing C++. auto even encouraged me to write a template (generic programming is so much easier when you don’t have to explicitly declare the type of every expression). All this without losing type safety.

So my great/pointless/stupid idea (delete whichever is inappropriate) is a tool to keep things fast and fun by putting off the moment you have to write a build system and install script. For simple programs, especially for quick and dirty personal toys and scripts, the day you have to write a proper build system may never come. You no longer need the distraction of making a separate directory and a Makefile, and you’ll find that pkg-config just works. Instead you just copy your C++ source code into $HOME/bin. Try it. It works.

Features include:

  • Automatically uses ccache to reduce program startup times (for build avoidance).
  • Enables -std=c++11 by default.
  • Parses #include directives to automatically discover and compile other source code files.
  • Recognises the inclusion of boost header files and, where needed, automatically links the relevant boost library.
  • pkg-config integration.
  • Direct access to underlying compiler flags (-O3, -fsanitize=address, -g).
  • Honours the CXX environment variable to ensure clean integration with tools such as clang-analyzer’s scan-build.

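As a taster, here is how the shebang trick looks in practice. This is a sketch: the file name is arbitrary and it assumes hbcxx is installed and on your PATH.

```shell
# Create a C++ source file whose first line asks hbcxx to "interpret" it.
cat > hello.cxx << 'EOF'
#!/usr/bin/env hbcxx
#include <iostream>
int main() { std::cout << "Hello, world\n"; }
EOF
chmod +x hello.cxx

# With hbcxx installed this now runs directly (compiling behind the
# scenes, with ccache keeping subsequent startups quick):
#   ./hello.cxx
```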
To learn more about hbcxx take a look at:

Then have fun.



Faking try/catch/finally in bourne shell (and jenkins)

When the Bourne shell was first released in 1977 it turned out that, for several very good reasons, Steven Bourne had designed a nice simple language with no need for exception handling. That is, it did not need exception handling until Jenkins, also for very good reasons, started running it with the -ex flags that cause the shell to bail out on the first error it encounters.

Normally Jenkins’ behaviour is exactly what you need: scripts stop as soon as something goes wrong. However a typical glue script to run a test suite overnight on a shared development board might look like the following pseudo-code:

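A minimal, self-contained sketch of the shape of the problem. Here lock and unlock are hypothetical echo stubs standing in for the real commands, and false plays the part of a failing make test run:

```shell
# The glue script, reduced to its essentials:
script='
lock()   { echo "locked"; }
unlock() { echo "unlocked"; }
lock
false        # the failing test suite
unlock
'
# Run it the way Jenkins does, i.e. with -e, and capture the output:
out=$(sh -e -c "$script") || true
echo "$out"  # prints only "locked": unlock was never reached
```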

The problem with the above code is that, if it is run using Jenkins or any other tool that runs the shell with the -ex arguments, the unlock command is not run when the test suite fails: make returns the error to us and the board is never unlocked. A simple fix might be:


If you favour compactness (and only having to type out the unlock command once) then perhaps:

lock
make test TARGET=$MY_DEVELOPMENT_BOARD && res=$? || res=$?
unlock
[ 0 -ne "$res" ] && false

However, what about the following? Note that the “unlock” command plays a role similar to a “finally” operation in some languages, but will be executed before the catch statement:

try make test TARGET=$MY_DEVELOPMENT_BOARD
unlock
catch echo "System tests failed! Please see logs"

The above can readily be implemented aided by a couple of simple shell functions:

try () {
        if [ -z "$exception_has_been_thrown" ]
        then
                "$@" || exception_has_been_thrown=1
        fi
}

catch () {
        if [ ! -z "$exception_has_been_thrown" ]
        then
                unset exception_has_been_thrown
                "$@"
                false   # If "sh -ex" then exit at this point
        fi
}

These scripts don’t make a big difference for the simple script above. However what if you are running multiple test suites sequentially under lock?

try make smoke_test TARGET=$MY_DEVELOPMENT_BOARD
try make heavy_regression_test TARGET=$MY_DEVELOPMENT_BOARD
unlock
catch echo "System tests failed! Please see logs"

Now at last the benefits of these wrapper functions really make sense. Because all the test suites are run using the try function, they will be skipped if a previous try block has reported an error. This gives us behaviour similar to -e but delays the reporting of the error until the unlock has been performed.
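The whole scheme can be demonstrated with a self-contained script. The helper functions are as above; the test suites and the unlock command are hypothetical stubs so that it runs anywhere:

```shell
try () {
        if [ -z "$exception_has_been_thrown" ]
        then
                "$@" || exception_has_been_thrown=1
        fi
}

catch () {
        if [ ! -z "$exception_has_been_thrown" ]
        then
                unset exception_has_been_thrown
                "$@"
                false   # if "sh -ex" then exit at this point
        fi
}

# Stubs standing in for the real suites and the unlock command:
smoke_test()   { return 1; }        # simulate a failing suite
heavy_test()   { heavy_ran=yes; }
unlock_board() { unlocked=yes; }

try smoke_test
try heavy_test                      # skipped: an exception is pending
unlock_board                        # the "finally": always runs
catch echo "System tests failed! Please see logs"
echo "heavy suite ran: ${heavy_ran:-no}, unlocked: ${unlocked:-no}"
# prints "heavy suite ran: no, unlocked: yes"
```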

To close, and for the historians among you: despite my starting this post with a reference to 1977, the code presented wouldn’t have run in the original Bourne shell because it uses features that were added later. However I think that by 1986 (SVR3) all the features used here would have been available. If you know different then please let me know… I’d be interested.


Bootable Fedora USB stick with encrypted home partition – part 1

In this tutorial we will repartition a USB stick and install Fedora on it, allowing it to be used:

  • As encrypted storage with any modern Linux system
  • As a bootable USB stick running Fedora and using an encrypted home partition
  • To copy files to/from other computers, including those running non-Linux operating systems (this bit uses an unencrypted partition).

The basic idea is to split the disc into two partitions, Boot and Vault.

Boot is a FAT partition that interoperates well with non-Linux operating systems. The FAT partition will also contain, as files, the bootloader, the read-only compressed file system image, and the “overlay” image that allows us to amend the main filesystem. It is the compression that makes this scheme attractive: a very rich development workstation (including eclipse and lots of header packages) weighs in at less than 2GB. The other big advantage of basing things on the live images is that all the logic to stop temporary (and log) files writing out to the USB media is ready and working out of the box. This keeps down the wear on the media.

Note: The read-only compressed file system comes from the Fedora “Live” media. The images most easily available are the live CD and the live DVD published by the Fedora project; however it is possible to use the Fedora tools to roll your own custom live media.

The Vault is an encrypted home partition where the user files (including audio/video streams) can be stored. It is also automounted, subject to password, on any modern Linux system allowing it to be used for encrypted file exchange.

Recommended partition sizes

This is just a rough guide since it’s up to you to decide what you’ll be using the bootable stick for.

For a 4GB USB stick a 3GB FAT partition leaving a 1GB encrypted partition would be fairly flexible and allow big files to be transferred to a non-Linux operating system. Consider using a CD sized live image and a relatively small overlay partition (300MB or so).

For an 8GB USB stick, either a 4GB/4GB or a 5GB/3GB division would make sense. With a 5GB/3GB split the DVD sized live image is possible together with a generous home area and the capacity to transfer large files.

For 16GB media I like to have a very big encrypted area so I can keep lots of audio/video material on the encrypted partition. For me a 6GB/10GB split gives exactly what I want: a 2GB live image together with a generous overlay partition (1GB) so I can easily install extra software whilst travelling if I need to.

I seldom use non-Linux operating systems these days so these recommendations assume I can use the encrypted partition for file transfer. If the primary thing you use the USB stick for is file transfer to non-Linux operating systems then perhaps you want to just pick a relatively small size for the encrypted partition (say 1GB) and give all the rest to the boot partition.

Putting it into practice

After inserting the USB media it is likely to be auto-mounted by the OS, so the first thing we need to do is identify the media and unmount it. I recommend using the command line for this. Many GUI “eject” commands do more than just unmount the file system; they also do a USB shutdown that makes it impossible to use the media until you unplug and replug it (at which point it auto-mounts again). Here we use mount to list the mounted devices, hunt for the device mounted on either /media or /run/media/<username>/, and then use the device name on the left to do the unmount. Remember the device name (below it is /dev/sdb1) since we’ll need it later.

[root@lobster ~]# mount
 proc on /proc type proc (rw,relatime)
 sysfs on /sys type sysfs (rw,relatime)
 /dev/sda1 on /boot type ext3 (rw,relatime,data=ordered)
 /dev/sdb1 on /run/media/drt/9A63-9772 type vfat
 [root@lobster ~]# umount /dev/sdb1

Now we need to repartition the USB media to create separate Boot and Vault partitions. THIS WILL ERASE EVERYTHING ON THE DISC. Here we use parted; the argument is the device name from above (/dev/sdb1) with the numeric part at the end shaved off (/dev/sdb).

Note: The following examples are taken from my own system where I’m setting up a 16GB USB stick with a 6GB/10GB split.

[root@lobster ~]# parted /dev/sdb
 GNU Parted 3.0
 Using /dev/sdb
 Welcome to GNU Parted! Type 'help' to view a list of commands.
 (parted) p
 Model: SanDisk Cruzer Fit (scsi)
 Disk /dev/sdb: 16.0GB
 Sector size (logical/physical): 512B/512B
 Partition Table: msdos
 Disk Flags:
Number Start End Size Type File system Flags
 1 16.4kB 16.0GB 16.0GB primary fat32 lba

Remove the original partition:

(parted) rm 1

Make a 6GB FAT partition to act as the boot partition, a 10GB encrypted partition and double check things by printing the partition table:

 (parted) mkpart primary fat32 16.4kB 6.0GB
 Warning: The resulting partition is not properly aligned for best performance.
 Ignore/Cancel? i
 (parted) mkpart primary ext2 6.0GB 16GB
 (parted) print
 Model: SanDisk Cruzer Fit (scsi)
 Disk /dev/sdb: 16.0GB
 Sector size (logical/physical): 512B/512B
 Partition Table: msdos
 Disk Flags:
Number Start End Size Type File system Flags
 1 16.4kB 6000MB 6000MB primary fat32 lba
 2 6001MB 16.0GB 10.0GB primary
(parted) quit
 Information: You may need to update /etc/fstab.

Now is a good time to unplug and replug the media, just to make sure that the kernel adopts the new partition table. This is paranoid but, hey, unplugging a USB stick isn’t so hard now is it?

Having done that, the automounter might end up deciding to mount the old filesystem (not caring that half of it is now missing). However, because the file system has changed size, we must make a new one in order to be safe.

Firstly we format the boot partition:

[root@lobster ~]# umount /dev/sdb1
 [root@lobster ~]# mkfs.vfat -F 32 -n LIVE /dev/sdb1
 mkfs.vfat 3.0.12 (29 Oct 2011)
 [root@lobster ~]#

Having done that we now need to create an encrypted ext4 partition ready to use as the home area (and for Linux to Linux file transfers):

[root@lobster ~]# cryptsetup --verify-passphrase luksFormat /dev/sdb2
 This will overwrite data on /dev/sdb2 irrevocably.
Are you sure? (Type uppercase yes): YES
 Enter LUKS passphrase:
 Verify passphrase:
 [root@lobster ~]# cryptsetup luksOpen /dev/sdb2 tmp
 Enter passphrase for /dev/sdb2:
 [root@lobster ~]# mkfs.ext4 -L Vault -m 0 /dev/mapper/tmp
 mke2fs 1.42.3 (14-May-2012)
 Filesystem label=Vault
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 Stride=0 blocks, Stripe width=0 blocks
 610800 inodes, 2442752 blocks
 0 blocks (0.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=2503999488
 75 block groups
 32768 blocks per group, 32768 fragments per group
 8144 inodes per group
 Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
 Writing inode tables: done
 Creating journal (32768 blocks): done
 Writing superblocks and filesystem accounting information: done
[root@lobster ~]# cryptsetup luksClose tmp
[root@lobster ~]#

Again this is paranoia but just to make sure everything writes out before we unplug I like to run a:

[root@lobster ~]# sync

That’s it. The USB stick is ready. You can confirm this by hot-plugging it one last time; the automounter should prompt you for your password.

We’re now half way there. The disk is ready for liveusb-creator to install the bootable operating system. After that there’s one last trick to get the live operating system to mount the encrypted home partition automatically, and then we’re all set.

I’ll tell you about all that in another post!


Fedora preupgrade with a tiny /boot partition

This post is for people with, well, mature, installations of Fedora. The installers of yesteryear defaulted to a very small 250MB /boot partition. That’s so small it really gets in the way of using Fedora’s preupgrade feature.

These are the tricks I use whenever I’m upgrading one of these mature installations.

Firstly you must remove every kernel except the one you are currently using to run your system. That should clear out enough space for preupgrade to get things ready for you.

Even with the kernels removed preupgrade still won’t have enough space to store the stage2 installer image; that means it will have to be downloaded during the install instead. When preupgrade completes you can reboot, select “Upgrade” via grub (if it is not selected by default) and try to do the upgrade.

Round about now you will discover the second problem. Even with a wired connection you can’t download the stage2 installer. Why not? Well, because preupgrade has incorrectly set up the kernel boot line, causing the stage1 installer to try to download the image from the wrong place. You can fix up the kernel boot line using grub’s editing tools. Have a look for the parameter that tells the stage1 installer where to download stage2 and remove /LiveOS/squashfs.img from the end (stage1 automatically appends this).

With this obstacle knocked down you’ll encounter the third and final issuette. When anaconda scans the system to check there is enough disc space to complete the install it can’t find enough space in /boot. Now, by far the biggest thing in /boot right now is the stage1 installer image, which has already been copied to RAM. In other words, if you can delete it from /boot before anaconda checks there’s enough space then the upgrade process will finally work! If you have an encrypted root filesystem this is no problem because you have to enter a password before the space check. If you don’t have any encrypted partitions then you’ll have to be the world’s fastest typist to beat the space check. Good luck!

These are the commands needed to delete the preupgrade stage1 installer:

mkdir /boot     # temporary mount point (the rmdir below implies one was made)
mount LABEL=/boot /boot
rm /boot/upgrade/initrd.img
rm /boot/upgrade/vmlinuz
umount /boot
rmdir /boot

Note that you may have to tailor the initial mount command if your /boot partition is not labeled /boot.

Finally, don’t worry about the wanton destruction of the stage1 files. As I say, they are already loaded into RAM, and if for some reason the upgrade still doesn’t work and you need to reload them and try again then you can just re-run preupgrade.

Have fun…


Extracting text from the memory image of a running process

I found a really nasty problem with the bug tracker we use at work last week. If someone else posts to it whilst you are composing your comment it refuses to accept it. It doesn’t offer a “post comment anyway” feature and advises instead that you:

  • Press Back
  • Select the comment you have just written
  • Copy it to the clipboard
  • Reload the page in your browser
  • Paste the comment back into the text field

Other than the obvious epic fail regarding usability there is one additional problem in the instructions above. When you press Back the rich text editing widget no longer has your comment in it! That’s right. Forty-five minutes expressing my highly insightful viewpoint as clearly as possible… gone.


I could tell the text was still there because I could press Forward and refresh but I just couldn’t see it.

At this point I fired up wireshark to try to capture my work as it went out over the network. This was when I realized that the bug tracker was using SSL and that trying to launch a man-in-the-middle attack on myself was likely a waste of time.

So, the last resort of the desperate(ly lazy) is to grab the data from the memory image itself. That should be dead easy. I’ve been running GNU/Linux at home for almost fifteen years; I must have learnt my way around by now. Surely I can just attach to a running process and dump its memory.

Having got the PID of the firefox process I fired up gdb:

 butch$ gdb
 GNU gdb (GDB) Fedora (
 Copyright (C) 2011 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law. Type "show
 copying" and "show warranty" for details.
 This GDB was configured as "i686-redhat-linux-gnu".
 For bug reporting instructions, please see:
 (gdb) attach 23639
 Attaching to process 23639
 /usr/lib/firefox/firefox (deleted): No such file or directory.

At this point I tried the gcore command. No luck there either: gdb couldn’t figure out the memory ranges it needed to dump. Still, I’m not one to give up. After trying and failing to scan /proc/23639/mem I decided to scan the list of memory mappings and dump each one. When I discovered that firefox had over 700 blocks of mapped memory I decided to generate a gdb script to dump the memory automatically:

cat /proc/23639/maps \
| cut -d' ' -f1 \
| tr '-' ' ' \
| awk '{ print "dump memory core-" $1 "-" $2 " 0x" $1 " 0x" $2 }' \
> dumpmem.gdb
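To see what the pipeline does, here is a single sample line from /proc/PID/maps (the address range is made up for illustration) pushed through the same commands:

```shell
# One sample maps line...
line="08048000-08123000 r-xp 00000000 08:01 1234 /usr/bin/example"

# ...becomes one gdb "dump memory" command:
cmd=$(echo "$line" \
      | cut -d' ' -f1 \
      | tr '-' ' ' \
      | awk '{ print "dump memory core-" $1 "-" $2 " 0x" $1 " 0x" $2 }')
echo "$cmd"
# -> dump memory core-08048000-08123000 0x08048000 0x08123000
```

The generated script can then be run from the attached gdb session with source dumpmem.gdb.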

It worked. I have the memory in files. From here things get much easier:

cat core-* | strings | grep -C 40 BD-ROM | less



Fixing problems with encrypted removable media

If you plug encrypted removable media into a recent GNU/Linux distribution it will probably try to automount it for you.

So far, so hoopy.

However a recurring class of bugs in the hot plug logic is failure to tear down the encrypted device mapper when the media is removed without unmounting it first.

It results in a message something like this:

Error unlocking device: cryptsetup exited with exit code 5: Device udisks-luks-uuid-d9fb9d0d-74e6-49b1-94d3-7edc083f04c0-uid80377 already exists.

Naturally this is a bug in your distribution, but it is one that tends to regress as the desktop stack is developed, so knowing how to work around it will do you no harm at all.

I generally use:

sudo cryptsetup luksClose udisks-luks-uuid-d9fb9d0d-74e6-49b1-94d3-7edc083f04c0-uid80377

Note: gnome-shell-3.2 will prompt you for a password but doesn’t issue an error message if the automount fails. If you want to see the error message (and hence the name of the mapping) open the file manager and try to mount the encrypted partition from there instead.

Although I couldn’t really be bothered, you could easily write a script to automatically identify encrypted device mappings that aren’t being used and undo them, by looking at the output of ls -al /dev/mapper. Probably it would be best to look for devices that have a mapping but are not mounted.
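Such a script might start life as the sketch below. The udisks-luks-uuid-* naming pattern is taken from the error message above; adjust it to match what your distribution creates:

```shell
# List device-mapper nodes that look like udisks LUKS mappings but are
# not currently mounted anywhere; these are candidates for luksClose.
stale=""
for m in /dev/mapper/udisks-luks-uuid-*; do
        [ -e "$m" ] || continue          # glob did not match anything
        if ! mount | grep -q "$m"; then
                stale="$stale $m"
                echo "stale mapping: $m"
        fi
done
```

Each name it prints could then be passed to cryptsetup luksClose.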