Fedora 21 (or later) not resuming from Suspend to Disk

Consider this a memo to myself (or a useful hint in case you have run into a similar issue recently 😉):

Some days ago, my Asus Zenbook stopped resuming properly from the Suspend to Disk state. I was running Fedora 21 (and upgraded to the 22 beta while the problem persisted, hoping it had been caused by some bad kernel update). As that did not help either, I had to dig further and eventually found the issue:

It seems that some recent update or some stupid configuration issue on my side (or an unlucky combination of both xD) caused the Grub2 configuration to omit the resume kernel command line option. This option tells the kernel on which device the disk image that preserves the system state is stored. To fix the issue, first find out which partition is your swap partition, e.g. using blkid:

martin@zenbook:~$ blkid | grep swap
/dev/mapper/vg_zenbook-lv_swap: UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="swap"

So in my case, the swap device is /dev/mapper/vg_zenbook-lv_swap. With that information, you can now edit /etc/default/grub and append the resume argument to the kernel command line:

GRUB_CMDLINE_LINUX="[...] resume=/dev/mapper/vg_zenbook-lv_swap"

Note: I left out the existing parameters from my configuration — whatever you do, do not remove anything but just append the resume=/path/to/your/swap/device argument. Finally, you can regenerate the grub configuration. In my case (as I am using EFI), I had to run

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
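If you want to verify after the next reboot that the option actually made it onto the kernel command line, a quick look at /proc/cmdline is enough (a minimal check, nothing Fedora specific about it):

grep -o 'resume=[^ ]*' /proc/cmdline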

That’s it. At least in my case, Suspend to Disk is working properly again after this change :)

Vacation Retrospective

After not taking too many of my vacation days in 2014, I started into 2015 with as many as 17 remaining vacation days. Unfortunately, at least in my company, I had to take these days by the end of March, so this meant 4 weeks off :)

Now, this is a long time, but I think whoever has had this experience before me knows that such a “long” time is over quickly. Of course, March had some nice parts, which we used to go out hiking (and we have beautiful places for that quite nearby).

Another good thing is that during such a long time you usually manage to get stuff done that you normally keep pushing ahead of you (such as tidying up your apartment, paying the dentist a visit and so on 😉). So that point on my vacation todo list has been taken care of as well.

Finally, thanks to the rather April-like weather especially at the end of March, my private projects also got some attention. After OpenTodoList received some love at the beginning of the year, I also managed to update our (the RPdev) website, and at the very end I even had a chance to work on a (for me) long awaited new project: OpenTodoList for ownCloud :) When I initially started working on OpenTodoList, I already had in mind some extension to ownCloud to be able to store todos there and share them with other people.

First of all, kudos to the guys over at ownCloud! Creating new apps for ownCloud is really easy, and I have a good feeling about the code so far, thanks to the well thought-through frameworks created and/or used by ownCloud. After a first review of the existing tasks app for ownCloud (which makes a good impression on me) I decided to start over on my own. For one, I was not sure whether the existing app would be fit to be integrated into OpenTodoList the way I had in mind (well, it definitely could be integrated, but I wanted the app running in ownCloud to be more of a backend to OpenTodoList that supports any potential feature of the app), and second: the learning factor 😉 Reading through the documentation, I couldn’t resist creating a complete app on my own (especially since I have not yet had the chance to work a lot with web applications, so this was a welcome change from the otherwise rather desktop-centric world I am working in).

Long story short: during some rather windy and cold March/April days, I put some effort into bringing up the initial OpenTodoList for ownCloud app :) It is still far away from being finished (or really usable); however, you could already use it to store your todos and access them across devices. Sharing is not yet implemented, and (of course) a new storage backend for the OpenTodoList app is missing as well, but given the short time, I am quite pleased with the progress so far. For the sake of completeness, here’s a little demo of the current state:

Mac OS like gestures on Linux

Recently, I switched from a Windows based PC at work to a MacBook Air. That’s really great, mostly because Mac OS provides a lot of the features I love on Linux and that make working with a lot of open application windows quite easy.

Now, Mac OS also comes with excellent gesture support. I never really “missed” that feature (this is the first time I have come in contact with Mac OS), but I quickly became accustomed to it, too. Being able to show all open windows or get an overview of your virtual desktops with just a swipe on your trackpad is great. So the next weekend project was clear: getting something like that for Linux, too 😉

In fact, Canonical has already implemented such gesture handling for Ubuntu. Unfortunately, the required packages (namely, their utouch framework) are not easily available for Fedora, which I am using. So, some more manual work was required. The good news: it is actually quite straightforward to install everything as long as you know what you need 😉 So, here we go:

Preparing your environment

By default, the installation procedure for source builds will put the utouch framework into /usr/local. That is okay (this way the files won’t interfere with anything typically installed via the package manager of your distribution). However, at least in my case I needed to set up some environment variables so that the subsequent build commands would work and applications would find the libraries at runtime. Without further ado:

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib64:$LD_LIBRARY_PATH

Execute this once in a terminal (and keep it open for the subsequent builds). If you plan to install utouch to a different location, adjust the paths accordingly.

Build Tools

You will of course also need the typical development tools in order to get everything done. As I had a lot of stuff already installed, I cannot tell which exact packages you need, however, typically the errors produced by the configure scripts will be informative enough for you to know. In addition, you have to install Qt4 and the accompanying development package (we will need it for the actual gesture recognition application later on).
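On Fedora, something along these lines should cover the usual suspects (just a rough starting point, not an exact list; add whatever configure still complains about):

sudo yum groupinstall "Development Tools"
sudo yum install gcc gcc-c++ make automake autoconf libtool pkgconfig qt-devel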

Install utouch

First of all, we have to install the utouch framework. You can find it on Launchpad. We need to install four components: utouch-evemu, utouch-frame, utouch-grail and utouch-geis. You can either get the sources via Bazaar or just download the latest release as a gzipped tarball. I went for the latter. Below you find the links to the Launchpad projects, together with the tarballs I used. Just fetch and install the packages in the order given below:

  • utouch-evemu – project: https://launchpad.net/evemu – tarball used: https://launchpad.net/evemu/trunk/evemu-1.0.10/+download/evemu-1.0.10.tar.gz
  • utouch-frame – project: https://launchpad.net/frame – tarball used: https://launchpad.net/frame/trunk/v2.5.0/+download/frame-2.5.0.tar.gz
  • utouch-grail – project: https://launchpad.net/grail – tarball used: https://launchpad.net/grail/trunk/3.1.0/+download/grail-3.1.0.tar.gz
  • utouch-geis – project: https://launchpad.net/geis – tarball used: https://launchpad.net/geis/trunk/2.2.16/+download/geis-2.2.16.tar.gz

(Check the project pages for newer versions.)

For each of these packages:

  1. Download it, either by getting the latest release tarball or by checking out the code via Bazaar.
  2. Change into the extracted directory and execute the usual build and installation steps (a combined sketch for all four packages follows below the list):
    ./configure && make && sudo make install
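Putting the downloads and the build steps together, a small loop like the following does all four in one go. This is just a sketch: it assumes the tarball versions listed above, that each archive extracts into a directory named after itself, and that you run it in the terminal where you exported the variables from above.

cd /tmp
wget https://launchpad.net/evemu/trunk/evemu-1.0.10/+download/evemu-1.0.10.tar.gz \
     https://launchpad.net/frame/trunk/v2.5.0/+download/frame-2.5.0.tar.gz \
     https://launchpad.net/grail/trunk/3.1.0/+download/grail-3.1.0.tar.gz \
     https://launchpad.net/geis/trunk/2.2.16/+download/geis-2.2.16.tar.gz
# build and install in dependency order: evemu -> frame -> grail -> geis
for pkg in evemu-1.0.10 frame-2.5.0 grail-3.1.0 geis-2.2.16; do
    tar xzf "$pkg.tar.gz"
    (cd "$pkg" && ./configure && make && sudo make install) || break
done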

Installation of touchegg and touchegg-gce

Now that we have utouch available, the next step is to install an application that recognizes the actual gestures and triggers the corresponding actions. For that, I installed touchegg and (for graphical configuration) touchegg-gce. Both are Qt based applications, so make sure you have Qt4 and the Qt development packages installed.

First, you want to install touchegg. You can find it as a project on Launchpad. When I checked, there were no release tarballs, so I used

bzr branch lp:touchegg

to get the code via Bazaar. Change into the directory and issue

qmake && make && sudo make install

to build and install touchegg. Note that this will install it into /usr (instead of /usr/local). Now you can just execute touchegg to run it. This will create a configuration file in $HOME/.config/touchegg (if it does not already exist). You can edit this file to change the recognized gestures and their associated actions.

If you prefer a GUI for editing this file, you can use Touchegg-gce. This application allows you to load, modify and save the touchegg configuration. As it is hosted on GitHub, use git to get it before building:

git clone https://github.com/Raffarti/Touchegg-gce.git
cd Touchegg-gce
qmake && make

Note that Touchegg-gce does not come with an installation procedure. Instead, just start it from the directory where you built it.

Some last steps…

Finally, you might want to do some configuration depending on your system and desktop environment. First of all, touchegg might interfere with the synaptics input driver for some gestures. To circumvent this, create a script $HOME/bin/prepare-touchpad-for-touchegg.sh. In this script we’ll use synclient to set up the synaptics driver appropriately. In my case, I want synaptics to handle one and two finger events and touchegg the 3 and 4 finger ones. For this to work, one has to disable the 3 finger gestures in synaptics:

#!/bin/bash

# If you want Touchegg to handle 2 finger gestures, deactivate
# 2 finger gestures in synaptics:
#synclient TapButton2=0

# Same for 3 finger gestures:
synclient TapButton3=0

# Same for 2 finger clicks:
#synclient ClickFinger2=0

# And for 3 finger clicks:
synclient ClickFinger3=0

# If Touchegg shall take care of scrolling,
# deactivate it in synaptics:
#synclient HorizTwoFingerScroll=0
#synclient VertTwoFingerScroll=0

Make it executable (

chmod +x $HOME/bin/prepare-touchpad-for-touchegg.sh

) and ensure it is started when your desktop starts. For example, when using KDE, fire up systemsettings and go to Startup and Shutdown and add your script via Add Program.
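If you prefer doing this from a terminal, an XDG autostart entry works as well with most desktop environments. This is just a sketch; the Exec path has to match wherever you actually put the script:

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/prepare-touchpad-for-touchegg.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Prepare touchpad for touchegg
Exec=/home/martin/bin/prepare-touchpad-for-touchegg.sh
EOF

The same approach works for the touchegg start script described next.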

Next, you want to ensure that touchegg itself is started on desktop startup. For this, create a second script $HOME/bin/touchegg.sh with the following content:

#!/bin/bash
export LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib64:$LD_LIBRARY_PATH
touchegg &

Make this one executable, too (

chmod +x $HOME/bin/touchegg.sh

) and also add it to the startup procedure of your desktop environment. That script does nothing other than start touchegg; however, as at least in my case the utouch libraries were not found automatically, I had to modify LD_LIBRARY_PATH to point to /usr/local.

Last but not least, in order to have Touchegg-gce easily available, I created a similar script for it as well:

#!/bin/bash
export LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib64:$LD_LIBRARY_PATH
/path/to/source/of/Touchegg-gce/touchegg-gce

Note that you want to adjust the path to point to the correct location where you stored Touchegg-gce πŸ˜‰

That’s it!

On Unknown Roads: Raspberries@Home

First of all (as a kind of disclaimer): if you came here expecting some cool project for the Raspberry Pi… forget about it. This post is just about “I did it” (yes, I bought and set up two Raspberry Pis). No fancy stuff yet (I hope that’ll change soon enough). For now, I just want to describe one possible basic use case for the Pis, so if you happen to have the same use case as I do… have fun reading on 😉

Before…

For some time we have had a little home server running. Nothing spectacular – an Intel Atom powered box running OpenSuse. Stable and easy to maintain so far. That box runs an ownCloud instance, used for hosting and sharing files within our home network (such as music collections or photos). The only drawback: the server itself was quite noisy. Apart from that, it did its job quite well. However: while ownCloud provides all you need to easily upload and manage your files on the server, consuming e.g. the music stored there was a bit difficult. The same holds true for videos: viewing them usually meant using a laptop and connecting it to the TV via HDMI. Nothing too bad so far, but the two cats that we have make it more difficult to enjoy a full movie (either they try to “type in” something (when you don’t nearly close the lid) or they close the laptop (if you do)). Besides, the server is also used as a backup target (via rsync from Linux and netatalk from Mac OS).

Step 1: A little streaming helper

So, to improve the way we access our data, I decided to buy one of those Raspberry Pis. In the end, they are cheap enough that you cannot do much wrong 😉 A few days later…


As promised: nothing special so far. I decided to go for an OpenELEC installation (via NOOBS). OpenELEC is basically a lightweight Linux that boots directly into XBMC. So far, that seems to work quite fine. XBMC can be controlled either “directly” (assuming you have a mouse connected to the streaming device), via a remote control (which we did not go for as of now) or via apps you can install on your Android/iOS device (which is what we’re currently using, as it also allows browsing e.g. the media collection on the server in a quite comfortable way).

Now, the more interesting question is: how to best access the files on the server? In our case, we’re using ownCloud for uploading and sharing. That was okay so far – ownCloud is a great service when it comes to managing and sharing your files in your home network. The first idea for connecting our new room mate with the server thus was to simply use the WebDAV interface that ownCloud provides. XBMC has built-in support for WebDAV, so why not use that? However, it turned out that playback (both music and video) is quite bad that way – it takes ages for XBMC to start playing songs, and directory listings also need minutes to finish. So, a better solution was required. On the other hand, dropping ownCloud was not preferred either, as it makes sharing quite easy and we have other services running on it (such as contact and event management and, in the future, also storing and sharing of todo lists).

The good news is: it seems there is nothing the ownCloud developers have not thought about already :) Since ownCloud 4.0, you can mount external locations into ownCloud’s virtual file system. So, in our case we decided on the following setup: for each user, a dedicated directory is created on the server where they can upload files via Samba. Each user can also decide to mount that directory into their ownCloud account. That way, the sharing features can be used for files uploaded via Samba, too. Last but not least, there is an additional “streaming” user that also has access to the shares. The “streaming Pi” uses this account to access the media files uploaded to the server: XBMC has built-in support for the Samba protocol, too. And using this approach, streaming really works fine :)

Step 2: Server goes Pi

So, the client side of the streaming project works fine. What remains is to review what happens on the server side. Actually, everything works. But (yeah, I have to admit I’m not a fan of “never touch a running system”) there is room for improvement. First of all: the server hardware used so far is a bit noisy. The good thing is that it is located outside of the normal living areas, so that isn’t too bad. But still… Second: power consumption. While the Atom used in the server has quite some power (two cores with hyper threading 😉), it also has a higher power consumption. That would be okay if we actually used the capabilities of the processor, which was not the case. Indeed, most of the tasks our home server had to do were more or less I/O centric. So, the next step was to replace the previous installation with – yeah – another Raspberry Pi. This time, of course, the software selection is a little bit different: instead of OpenELEC, I decided on Raspbian – a Debian based distribution for the Raspi. Actually, I first tried Pidora; however, I ran into some problems there (and as I currently had no time to look further into fixing them, I went with Raspbian, which promised to just run fine, given it seems to be the most used OS on the Raspberry). So, two evenings of installation and configuration and we’re done: a second Raspberry Pi is now doing its job as a home server, running ownCloud, netatalk and the usual other stuff you need :)

What’s next

As the first paragraph already warned you: that “project” is not yet what you’d call exciting. Indeed, so far it has only been about bringing together some pieces of hardware and software and trying out what works well together. However, that’s hopefully not the end 😉 First of all, the Raspberry Pi provides some interesting GPIO pins. So, why not make use of them 😉 In particular, I have two things in mind: some kind of ambient light when streaming movies via the Pi would be one of them. That seems to be somewhat easy, as I’m obviously not the only one who finds this interesting. A second thing (and here I’ve not yet found anything ready-made) would be to build something similar for our music collection. Let me only say Moodbar for now.

Apart from that, given that I’m currently experimenting a bit with QML, doing some home grown media center solution of my own also sounds kind of fascinating 😉 However, as so often, that would probably require much more time than I can currently afford. But let’s see 😉 Maybe everything “basic” is already in place.

Why Reading Helps or When Android tells you it cannot find a library, it really can’t

Recently, some pre-compiled beta packages of the upcoming Qt 5.2 have been released. As that release again adds more support for Android (the target I most want to see OpenTodoList properly running on), I didn’t hesitate long, downloaded and installed it and started porting OpenTodoList to 5.2.

First of all: I indeed had to do some “porting” (which, however, did not come too unexpectedly). If I understood the recent news correctly, Qt has dropped V8 in favor of its own JavaScript engine, which deals much better with QML. So far so good. However, there seem to be some minor “differences” between these engines which indeed caused OpenTodoList to crash. Basically, what I had in my code before was something like

// Utils.js
var PriorityColors = [ "red", "yellow", "green" ];
PriorityColors[ -1 ] = "black";
 
// Somewhere in QML:
import "Utils.js" as Utils
 
Rectangle {
  property QtObject todo: null
  color: todo ? Utils.PriorityColors[todo.priority] : "black"
}

It seems that there were some problems with that approach, so I just exchanged the global array for a function and voila. Interestingly, when I later tried to reproduce the problem, I was not able to do so. So it seems that something else in my code (which I must have changed in the meantime as well) actually caused the crashes. Anyway, that problem was solved rather quickly.

Another one did not resolve so easily: when trying to deploy and run OpenTodoList on Android, I encountered the next problem. And this one indeed took me several days to fix (well… better let’s call it “find”).

As I already said, support for Android has again been improved a lot in Qt 5.2. One of the (in my opinion) most useful changes is that building and deploying for Android no longer requires an additional “android” folder to be created in your source directory. That directory contained a variety of files, most notably:

  • The built binaries that later get deployed to your Android (virtual) device
  • The AndroidManifest.xml file which contains some important information about your application

Starting with Qt 5.2 (well, or rather Qt Creator 3.0, as that is where the build process is actually implemented) the android directory is created in the shadow build directory. All files are basically copied from the Qt directory. If you want to provide some overrides, you can also keep a stripped down “android” directory somewhere in your source tree (which e.g. could contain only the AndroidManifest.xml and nothing else) and instruct the build process to take your files instead of the default ones by adding the following line to your project file:

ANDROID_PACKAGE_SOURCE_DIR = $$PWD/path/to/android

So I created such a directory and then proceeded to generate a new AndroidManifest.xml. Qt Creator nowadays comes with some support for doing so: from Projects -> Build&Run -> [Your Target Configuration] -> Run -> Deploy Configurations -> Details one can simply click the Create AndroidManifest.xml button, select a file name and that’s it. Next, you can open that file from the project explorer and Qt Creator will show you a neat form where you can enter the most important stuff (access to the XML source is provided via an additional tab in the editor). I entered all the important things there, like the package name to use, minimum and maximum Android SDK versions and so on. In the “Application” section I specified the application name and the icons to use, and for the “Run” option I entered “libOpenTodoList.so” (as this is what the application gets compiled to in the end).

After these preparations I built, deployed and… well, I got this nice little “Unfortunately Open Todo List has crashed” dialog shortly after the application tried to start up. That was a bit unexpected, but okay. I dug into the debug output provided and found a line telling me about an UnsatisfiedLinkError. In the description of that (Java) exception, I furthermore got that “findLibrary returned null” when the Java based “starter” that loads the Qt application on Android tried to load the “executable library” (in my case libOpenTodoList.so). Uff… why that? My first thought was that some dependencies were not fulfilled. Checking the logs further, I found that the libraries that get loaded by the startup procedure are logged as well. I skimmed through the list of loaded libraries and actually came to the conclusion that everything was there: all the required Qt libraries as well as the OpenTodoListCore library (which in my case provides the basic class infrastructure).
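If you want to watch for this kind of error yourself, filtering the device log is the quickest way; a sketch using plain adb (the grep pattern is just a suggestion, and Qt Creator shows the same output in its Application Output pane):

adb logcat | grep -iE 'UnsatisfiedLinkError|findLibrary|OpenTodoList'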

So what else could have gone wrong? Asking Google did not reveal anything that immediately led me to a solution. In fact, I spent several evenings randomly changing some stuff in my QML code as well as on the C++ side, in the hope that I might accidentally stumble upon something that would help me understand what was going wrong.

As this, too, did not help, I went on to try to debug the problem on a virtual device. Until then, I had always used my SGS3 for that, as QML GUIs did not work in the emulator with Qt 5.1 (by the way: this seems to be solved now, too, so running and testing your QML apps in an AVD is possible with the new version). Unfortunately, that also did not immediately reveal any cure for my issue. I checked the log output of the runs on the AVD, which seems to be a bit more verbose than what I got from my physical device. This morning, it finally struck me:

(screenshot of the AVD log output)

Together with the exception name and the (to me, at first, not very meaningful) “findLibrary returned null”, the AVD also prints out the search paths where it actually looks for native libraries. And looking at these paths, I finally realized what really went wrong: my app was looking for libraries in the system library location and (oops…) in the root directory of OpenTodoList. And this is indeed unlikely to work, as the libraries are stored in the lib/ sub-directory. So finally I had a hint where to look. And then I was back at the very beginning of my “porting” work: the AndroidManifest.xml. Going back to the editor for that file, I noticed that the Run option is actually implemented as a pull down menu. And that menu offered me the option “OpenTodoList” (which is the target name in the *.pro file) instead of the name of the generated library. Very well… one compile & deploy later, OpenTodoList was starting up fine on Android again :) Hurray!

So, the bottom line for me: learn to read (and understand). The “findLibrary returned null” does not mean that dependencies are not fulfilled. It really just means that the library you are trying to load cannot be found anywhere (and thus, a null value is returned).

Fedora on Lenovo IdeaPad Y560

So after sticking with my Asus laptop for quite a long time, I recently decided to get myself a new, shiny toy. As you can see from the title, my choice this time is Lenovo, with an IdeaPad Y560.

As I have been a Linux user for several years now, one of the first things to do after the purchase was to install my favorite operating system on it. In the following, I want to collect some experiences and maybe hacks required to successfully use Linux on that device – as information for myself and maybe others who also want to install something different than Windows on that laptop 😉

The Situation

The IdeaPad Y560 has an Intel i7 quad core and comes with 6 GB RAM preinstalled. According to the spec, it can be upgraded to up to 8 GB.

My operating system choice is Fedora (currently version 14), 64 bit with KDE as default desktop.

Installation

At least my laptop had the following initial partitions:

  • Windows Boot Partition (200 MB)
  • Windows System Partition (around 580 GB)
  • Some “driver” partition (around 30 GB); this contained only some Windows drivers and programs
  • OEM Partition

The driver partition is set up as a logical drive inside an extended partition, so when using Windows, you actually might see 5 partitions reported.

Although I usually don’t use Windows anymore, I decided to keep it installed in case some of the built-in devices weren’t going to work with Linux. So, what I did was:

  • Making a backup of the driver partition; I assume one can get these drivers from the Lenovo website as well, but I just wanted to keep the files in case something goes horribly wrong 😉
  • Next, I deleted the driver partition and the extended partition. Note that it is currently a rather bad idea to delete the (hidden) OEM partition, as it is required to restore the laptop to factory settings (unless your model is delivered with a backup DVD, but mine was not)
  • In case you don’t trust the Linux installer, you might optionally want to shrink the Windows partition from inside Windows; however, note that in this case you can only shrink that partition down to the first non-movable sectors. What might be a good idea, however, is to defragment the partition before proceeding with the Linux installation, at least if the system has already been used for some time

Now the actual installation can begin. Insert the install CD/DVD/USB stick and reboot. Make sure booting from the appropriate device is enabled and that the device’s boot priority is higher than that of the hard disk.

At least for me, the installation procedure that followed was straightforward: you basically just need to follow the instructions. I decided to shrink the Windows partition (to 100 GB, so Linux has a total of 480 GB in my setup). The installer will do the rest for you (usually, it will suggest creating an extended partition, in which it will create a boot partition and an LVM volume group with root and swap volumes). I advise using 3 LVM volumes – root, swap and home. For root, I used 20 GB (which is sufficient in most cases; if you are unsure, you can set it to 50 GB, which is enough in any case).
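In case you prefer setting up the volumes by hand from a terminal instead of via the installer, the layout boils down to something like the following sketch. The volume group name vg_ideapad and the sizes are placeholders; adjust them to your disk and to whatever the installer created:

# create root, swap and home logical volumes in an existing volume group
lvcreate -L 20G -n lv_root vg_ideapad
lvcreate -L 6G -n lv_swap vg_ideapad
lvcreate -l 100%FREE -n lv_home vg_ideapad
mkswap /dev/vg_ideapad/lv_swap
mkfs.ext4 /dev/vg_ideapad/lv_root
mkfs.ext4 /dev/vg_ideapad/lv_home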

After the live image has been copied to the hard disk, just reboot into the new system and complete the installation.

First Impression

After having had quite some trouble with both my tower PC and my Asus laptop in the beginning, I was really impressed this time. Linux works really well on this laptop, and most things seem to work out of the box. For example, the volume control keys are indeed usable (I especially like the mute button 8) ). WLAN does not need any additional work this time (which still wasn’t the case with my Asus laptop, where I needed to install additional kernel modules manually), and graphics also do fine with the open source radeon driver.

What might need some work

Graphics

The radeon driver, which is used by default on Fedora, works quite well. Sometimes there are some rendering glitches, but these are negligible. However, if you want to run heavy-weight 3D applications (games, modelers, etc.) or just like to use your desktop’s shiny 3D effects, you might want to use the (proprietary) display drivers.
Update:
The pre-installed radeon driver now works well (currently using F15) from a graphical point of view (no glitches, desktop effects working flawlessly, and multiple monitors are no problem either). However, sound via HDMI currently seems not to be possible, so if you require it, consider installing the proprietary driver, too.
I recommend using the drivers from RPM Fusion. After enabling their repositories, issue

su -c "yum install akmod-catalyst xorg-x11-drv-catalyst xorg-x11-drv-catalyst-libs.i686 xorg-x11-drv-catalyst-libs.x86_64"

If you installed the 32 bit version of Fedora, you of course don’t need both the 64 bit and the 32 bit xorg-x11-drv-catalyst-libs packages – just the one matching your architecture.

Also note that you might consider installing the “kmod” package instead of the “akmod”. The akmods are good for the case when a new kernel is installed and the matching kmod is not yet available (which might otherwise result in a blank screen as soon as you boot with the new kernel the next time): the kernel module will then be built when the system starts. However, this increases boot time, so if you want to keep boot times absolutely minimal, better go for the simple kmod variant.
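For the plain kmod variant, the install line changes accordingly (a sketch; package names are the ones provided by RPM Fusion at the time of writing, so double-check against their repository):

su -c "yum install kmod-catalyst xorg-x11-drv-catalyst xorg-x11-drv-catalyst-libs.i686 xorg-x11-drv-catalyst-libs.x86_64"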

After the installation, you should rebuild your initramfs (and make a backup of the old one, in case you need to revert):

su -
mv /boot/initramfs-`uname -r`.img /boot/initramfs-`uname -r`.img-backup
dracut -v /boot/initramfs-`uname -r`.img `uname -r`
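Should anything go wrong and you need the old initramfs back, reverting is just the reverse move (run as root, like the commands above):

mv /boot/initramfs-`uname -r`.img-backup /boot/initramfs-`uname -r`.img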

You should also disable KMS. Edit your /etc/grub.conf and add radeon.modeset=0 to the kernel line. A complete section should then look like this:

title Fedora (2.6.35.11-83.fc14.x86_64)
        root (hd0,4)
        kernel /vmlinuz-2.6.35.11-83.fc14.x86_64 [...] radeon.modeset=0
        initrd /initramfs-2.6.35.11-83.fc14.x86_64.img

Last but not least: if you have manually created or changed the file /etc/X11/xorg.conf, create a backup of it as well. If this file does not exist in your installation, you don’t need to do anything here (the file is not required anymore).

su -
cd /etc/X11/
cp xorg.conf xorg.conf.bak

For further information and some hacks, there is a blog post which I found to be quite useful.

Setting up dual-head

I usually use an additional monitor attached to my laptop. While configuration with the open source drivers worked flawlessly (just set up what you want in System Settings -> Display and Monitor), the proprietary driver had some problems. More specifically:
I want to set up one X screen that spans both monitors. Usually, I have the laptop monitor configured as the primary one and the external monitor to the right of the laptop. One is able to set this configuration up via the Catalyst Control Center (start it via su -c amdcccle); however, the changes were not permanent, i.e.:

  • the control center instructed me to restart X (which I did)
  • after that restart, everything was set up as instructed
  • however, after a system reboot, the external monitor was always set to be a clone of the first

After playing around with the X configuration for a bit, I found this setup to be what I needed:

Section "ServerLayout"
	Identifier     "Default Layout"
	Screen         0  "Screen0" 0 0
EndSection

Section "Files"
	ModulePath   "/usr/lib64/xorg/modules/extensions/catalyst"
	ModulePath   "/usr/lib64/xorg/modules"
EndSection

Section "ServerFlags"
	Option	    "AIGLX" "on"
EndSection

Section "Monitor"
	Identifier   "0-LVDS"
	Option	    "VendorName" "ATI Proprietary Driver"
	Option	    "ModelName" "Generic Autodetecting Monitor"
	Option	    "DPMS" "true"
	Option	    "PreferredMode" "1366x768"
	Option	    "TargetRefresh" "60"
	Option	    "Position" "0 0"
	Option	    "Rotate" "normal"
	Option	    "Disable" "false"
EndSection

Section "Monitor"
	Identifier   "0-DFP1"
	Option	    "VendorName" "ATI Proprietary Driver"
	Option	    "ModelName" "Generic Autodetecting Monitor"
	Option	    "DPMS" "true"
	Option	    "PreferredMode" "1920x1080"
	Option	    "TargetRefresh" "60"
	Option	    "Position" "1366 0"
	Option	    "Rotate" "normal"
	Option	    "Disable" "false"
EndSection

Section "Device"
	Identifier  "Videocard0"
	Driver      "fglrx"
	Option	    "OpenGLOverlay" "off"
	Option	    "Monitor-LVDS" "0-LVDS"
	Option	    "Monitor-DFP1" "0-DFP1"
	BusID       "PCI:1:0:0"
EndSection

Section "Screen"
	Identifier "Screen0"
	Device     "Videocard0"
	DefaultDepth     24
	SubSection "Display"
		Viewport   0 0
		Virtual    3286 1920
		Depth      24
	EndSubSection
EndSection

Section "Extensions"
	Option	    "Composite" "Enable"
EndSection

Sound

In case you don’t hear any sound in KDE, don’t panic. At least for me, KDE picked the HDMI output as the default output. So just go to the KDE System Settings (hit Alt+F2, enter “systemsettings” and hit Enter to start it). There, navigate to Multimedia and select Phonon in the left sidebar. Now you can set up the default output devices for the defined categories. Make sure the entry “Internal Audio Analog Stereo” is at the very top of the list of output devices. You can use Apply Device List To… to apply the list to all categories. Then, hit Apply to save the changes.

Ambient Light Sensor

The laptop comes with an integrated Ambient Light Sensor (ALS), which is used to automatically adjust the backlight brightness depending on the ambient light level. Up to Fedora 14, this sensor apparently was not detected (and therefore automatic adjustment was off). However, starting with Fedora 15, the sensor is detected and used. In case you don’t want/need the ALS: I had to boot into Windows and disable the automatic brightness changes there (Lenovo preinstalls a tool for this; just tap the special button with the battery icon and it should show up, unless you wiped everything away). There, you can disable the ALS (use the button with the gear icon and then there should be an option to disable the ALS). This seems to deactivate the sensor hardware-wise (after rebooting, the screen brightness remains constant in Fedora, too).

By the way: in case somebody knows either how to enable/disable the ALS from Linux or the name/manufacturer of the ALS in this laptop… it would be nice if you could drop me a message 😉

Digital Pen&Paper: A Summary

Well, it’s been a while since I last wrote. Meanwhile I moved (my new apartment is awful) and by now nearly everything is tidied up here. Nevertheless, I also did some development work 😉

So first I am glad to mention that a new version of KEmu will soon be officially released. This new version rewrites a lot of the existing code, making it (hopefully) more stable and the GUI more usable and clearer to the user. However, some parts are not yet ported, so you will have to wait a bit longer.

Additionally, I also had a project at university: The digital pen&paper project.

The Problem

Take some form, e.g. one you fill out when you are at the doctor. Whatever is written on it basically has to be transferred to the computer manually (until now, by hand), meaning: someone spends a lot of time reading the form and typing its contents into a computer program once again. This could easily be solved by using e.g. tablet computers; however, these are not yet accepted by the broader masses, so writing on a form which is printed on a sheet of paper will remain the default for some time. The digital pen&paper approach can help here: one can use the pen (and the accompanying receiver unit) like a normal pen, and the unit can later be connected to a computer to obtain the handwriting data in digital form. This data can then be fed to handwriting recognition software to convert it to text, which in turn can be stored in a database (or wherever it is needed in the actual use case).

Now, one problem remains: the files you receive from the unit connected to the pen are like writing on a transparency that you use with e.g. overhead projectors. Imagine you have a sheet with a form that you put on the projector. Then, you put another sheet on top of the form sheet and write on it. As long as the two sheets remain in that position, the resulting projection looks like what you expect (a form with filled fields). However, if you remove the form…

This basic problem extends to the following cases (there might be more, but these are the main ones we considered):

  • What if either the form sheet or the receiver unit is misplaced (e.g. rotated by some degrees)?
  • What if you have multiple forms (or one form that spreads over several pages of paper)?

These problems show one thing: having a representation of the bare form and the digital writing (also called “digital ink”) is not sufficient to automatically convert the information held by the writing into a computer representation that can be stored in a structured system.

The Solution

We considered one solution: we use not only the digital ink gathered from the device, but also a scanned image of the filled out form. Before you wonder: the image of the form alone is not sufficient to gather all the information. While it is perfectly possible to recognize the form elements printed on the sheet and even to separate the writing from the background, you currently cannot convert this information to text automatically. This is due to the fact that handwriting recognition requires not only the actual shape of an object to convert it to text (or a single character), but also information about how that shape has been created, i.e. the separate strokes. These cannot be reconstructed from the scanned image.

So our approach is: from a set of ink files and scanned images, find the pairs that belong together. Next, find a projection (i.e. calculate a matrix that is applied to the ink data) such that the transparency you have written on once again lines up correctly with the transparency holding the form (to stay with the overhead projector example).

If you are interested, please have a look at https://gitlab.com/groups/kpki2010. There you’ll find what we developed over the last months. To give you a short overview:

General Structure

The project consists of several (standalone) programs. Each program does one step in the processing chain. This design is pretty useful: first, it allowed us to split up the work. Additionally, every part of the chain can be replaced by another program, so rewriting a program or using an existing alternative is no problem. The sub-projects we worked on are:

imagefilter

This program is a command-line image filter (well, as the name suggests…). It is used to create a version of the scanned image that contains only the text data (as black strokes) and background. However, it is pretty generic; internally, it provides several filters. Each filter is a plugin (available as a separate library), so the program can be extended and used in other contexts as well (not only for our project).

The program uses an interesting approach: it can be seen as a “scripting language for image processing”. Each filter can be seen as a command. There are commands for file interaction (i.e. loading and saving) and commands that actually do something with the images in memory. This allows using several filters in sequence to create interesting effects, or generating multiple output images from the same source file in a single program call.

As I already said: In our toolchain, the image filter program is used for preparing the scanned images, but it has the potential to be used in other situations as well πŸ˜‰

img2roi

This tool is used to create an XML representation of the filtered images and of rendered versions of the ink files. “roi” is short for region of interest; in our case this is a contiguous region in the image/ink file (i.e. a letter or a number, or a whole word if it is handwritten and there is no space between two letters).

Each region is written to the resulting XML file together with some “meta” information (bounding box/circle, center of gravity, …). These values can later be used to compare two such XML files (where you usually compare an XML file generated from a scanned image with one generated from a rendered ink file).

roimatcher

This tool is used to calculate a matrix which maps all regions from one input region file onto the other. In our processing chain, it maps the ink files’ regions onto the regions in the scanned files. Additionally, it produces a “difference” value that indicates how much the two input region files differ from each other. This value is required to find the correct ink/scan file pairs. More on this later.

inktransformer

This tool is used to apply transformations to ink files (or rather: Ink XML files, an intermediate format, as the original ink format is a proprietary, binary one :\ )

Summing it all up: formextract

This program is a “meta” program: as input, it takes a list of ink files and scanned images. It then uses the other programs to filter and process these files. After calling roimatcher on each ink/scan file pair, it runs an optimization algorithm (namely simulated annealing) to find an optimal assignment of ink/scan pairs. Then it goes on calling the other programs to generate transformed ink files and some XML files that contain the texts read from the ink files.

So, did it work?

Basically: yes. The results we have achieved so far are satisfying. When the scanned images are of sufficient quality (i.e. scanned with enough DPI and not spilled over and over with coffee), the transformed ink files look pretty good and the ink/scan pairs that are found are mostly correct, too.

Nevertheless, there is room for improvement. For example, the imagefilter part could use some shape recognition for its form detection: currently, it searches for the boundary of the form by looking for anything red (the color we used for the forms). However, we basically know what the form delimiters look like (dashes and plus signs), so the filter could check whether what it found actually looks like such a sign and, if not, continue searching.

Additionally, image filtering is currently rather slow. Some improvements would not hurt in this area.

Can I test it myself?

Unfortunately: no. While the programs we wrote are all released under the terms of the GNU GPL v3, the program chain currently requires two additional programs (namely InkManager and InkRecognizer) that we do not own. If you are interested in testing and want to re-write these missing parts, just e-mail me (or have a look at the sources to learn how the missing parts are assumed to work).


Hardly a better place….

 

(photo: picnic)

… for a picnic!

Well, actually, these people probably did not pick that place because it is the ideal spot for something like that. When I saw the first of them arrive today, I thought they just wanted to have some fun. In fact, they seem to be protesting against what the city is currently planning for its most famous excavation pit.

The “Wiener Loch”

… that is the name of that amazing hole in the ground in Dresden’s very heart. I guess it is hardly more than 50 m from the main station to what remains of some earlier plans for making the city more attractive (to whomever). Even Wikipedia knows about it (the article is in German). I cannot say too much about its history myself, as it was already there when I moved to Dresden. For nearly 3 years it has been one of the first things I see when I step onto my balcony in the morning.

In the past, there have been several plans for how to proceed with the hole. The option probably most attractive to the city would be to sell the area and let someone build something there. Yeah, that would at least bring in some money for the city. However, that has failed several times in the past. Currently, the city is talking to an investor from Berlin who (still) has an interest in putting up a commercial building there.

However, that is not the only possibility for the Wiener Loch. An (in my opinion) better solution would be to simply fill up the hole and plant it over. Dresden is already a city with a lot of parks and green spots here and there; but especially around the main station the city looks like every other one: gray.

A park, even a small one, would be great here, especially as the “Prager Straße” is nearby, where a lot of people pass by every day. It would surely be a nice spot to calm down after shopping or on the way back home after work. However, that solution would not earn the city any money.

However, whatever happens, the situation cannot remain as it is. The city pays a lot of money to keep the hole free of water; additionally, the whole area is only surrounded by some fences. I have seen children playing around the hole several times. In my opinion, the risk that something might happen is too big.

Well, I am very curious what will happen to that hole someday. However, I cannot report “live” anymore, as I will be moving this summer. Too bad, because as you can see, some trees and bushes have started growing around and in the hole over the last years, and it is starting to look attractive and calming 😉


Digital Pen&Paper, AI and a new Interesting Project

Just so you don’t get the feeling that we might have nothing to present here, I’ll blog a bit between work, university and (later this evening) homework.

A new semester began in early April, and like the last one, it comes with a new, really interesting project for me :) But let me explain.

Maybe you have already heard of a neat little toy called digital pen&paper. If not: the idea is that you have a pen. And you write with it. Yes, ordinary writing on a sheet of paper. However, somewhere the “digital” part has to come in 😉 The pen sends information to a device that you can attach to your computer. This device records the things you have written on the paper and saves them (you might think of it as working with a pen in a tool like GIMP). The saved strokes can then either be stored as an image (so you get a “scanned” version of your handwritten stuff, though without scanning) or as a “strokes” page (i.e. the lines you drew are saved in a fitting format). Cool, isn’t it? You can even set the whole thing up to work as an HID device (you can then use it similarly to a graphics tablet). This way, you can use handwriting recognition software to directly digitize your writing.

Now, if you think about it: does this really make sense? For someone like me, a student, probably not. If I want a digital copy of my handwritten stuff, I can scan it. If I need it digitized right away, I take my netbook with me and write things there. But there are indeed cases where such a setup makes sense! Even today, a lot of things are still written on paper; for example in a hospital (where the doctor has to fill out forms about you) or at a trade fair (for example a contact form). This is due to human nature (and habits, of course). But this information is actually needed in digital form (e.g. the data from the doctor’s form might be needed by others in the hospital, so having it available on the local network is essential; similarly, the contact data from the fair is further processed and stored on computers anyway). So, after taking the notes using (analog) pen&paper, in a second step the forms must be digitized by hand; a time consuming and boring job (unless the persons have really bad handwriting, so that “decoding” their writing becomes kind of a game).

This is where the digital pen&paper idea comes in: using such a device, the conversion step can be omitted. The data, even though written in the usual way, is simultaneously available in digital form and can easily be transferred into the local network, and is thus accessible almost immediately. Well… at least it could be.

Consider a form as the doctor fills it out. There, we have several problems:

Even if we have the stroke data from the pen, we don’t know exactly which page (assuming a form with several pages) a digital page belongs to. We can make assumptions (“the doc will always fill out the forms in the same order”), but as you know, a lot of assumptions tend not to hold exactly at the point where you need them. This means for us: we need to find out which digitized page belongs to which page of the form.

This would maybe even be easy, but we have another problem: what if the receiver device is badly positioned and thus the recorded strokes are transformed compared to how the form actually looks (e.g. moved a bit to the left/bottom, rotated by some degrees and so forth)? This makes mapping strokes to pages of the form even more difficult, especially as there is a cyclic dependency: to know which page of strokes belongs to which page of the form, we would need the strokes to be positioned correctly inside their page. On the other hand, to get the correct transformation, we need to know which page of the form to target.

This cyclic dependency can be broken up a bit: by scanning the pages of the form afterwards and evaluating the scanned images, it is possible to map the stroke pages to the pages of the form. Of course this decreases the comfort factor a bit (the point of digital pen&paper is to avoid an additional step); however, we assume that scanners scan faster than humans are able to type in the information by hand (and especially with progress in technology, this assumption is likely to hold).

Now, this is where AI comes in. Our (yep, this time it is a two-man project 😉) task is to use algorithms and ideas from AI to extract the information, do the mapping and thus ease digitizing handwritten forms even further. I think this will be really fun, especially as we were asked to realize the project using C#/.NET. So… hey Mono, we meet again :)
