reading from named pipes in xmobar

I recently noticed that xmobar has a plugin to read data from Unix named pipes. Named pipes can be used for inter-process communication (IPC): two different applications can send and receive data through them. A named pipe operates much like the normal (unnamed) pipes you use in the shell. The difference is that named pipes must be explicitly created/deleted and they are accessed through the filesystem. You create named pipes using the mknod or mkfifo commands and delete them with rm.

In this tutorial we will use named pipes to read the current volume levels and display them in xmobar. Xmobar also has an alsa plugin, but distributions do not always compile xmobar with alsa support. An alternative approach is to write a small script that parses the output of the amixer command and have xmobar call it at regular intervals to show the current volume levels. This however requires xmobar to frequently invoke an external command/script. Also, if you happen to use a long polling interval you might notice some “lag” between the time you change the volume and the time the change is reflected in xmobar. Using named pipes the change is instant and you also avoid redundant calls.

First, we need to create our named pipe. All my startup logic is inside a script which is called from ~/.xinitrc. For simplicity let’s create the pipe inside the ~/.xinitrc before launching the window manager.

_volume_pipe=/tmp/.volume-pipe
[[ -p $_volume_pipe ]] || mkfifo $_volume_pipe

We create the pipe only if it does not exist. That way we can log out/log in without getting any “File exists” errors. My /tmp is mounted as tmpfs so the pipes do not survive a reboot.

Next we need to configure xmobar to read from this pipe. Edit your xmobar.hs and put in the command list:

xmobar.hs
, Run PipeReader "/tmp/.volume-pipe" "vol_pipe"

This registers the PipeReader plugin to read from /tmp/.volume-pipe. It also makes the vol_pipe alias available in the output template. The template is used to describe how to display information gathered from the plugins. I use something like this to show the volume levels: ♫ <fc=#b4cdcd>%vol_pipe%</fc>.
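
For context, a full template entry could look roughly like this (a sketch: %StdinReader% and %date% are placeholders from my own setup, not something this tutorial requires):

, template = "%StdinReader% }{ ♫ <fc=#b4cdcd>%vol_pipe%</fc> | %date% "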

Basically, now you can pretty much feed anything to the xmobar through the pipe and it will display it without hesitation. Just write something to the pipe:

echo "something" > /tmp/.volume-pipe

In order to display the volume information we will create a small script which can be used to increase/decrease/mute/show the volume levels. After each operation it will send the output to the named pipe. I already have a script I use for this purpose, but let’s write something simpler.

volume.sh
#!/usr/bin/bash

get_volume() {
  # return volume levels (0-100)
  vol=$(amixer sget Master | grep -o -m 1 '[[:digit:]]*%' | tr -d '%')
  echo ${vol}% | tee /tmp/.volume-pipe
}

case $1 in
  "")
    ;;
  "up")
    amixer set Master 5+ >/dev/null
    ;;
  "down")
    amixer set Master 5- > /dev/null
    ;;
  "toggle")
    amixer set Master "toggle" >/dev/null
    ;;
  *)
    echo "unknown command"
    exit 1
    ;;
esac
get_volume

The key point here is to send the output to the pipe after each operation (line 6); a simple redirection does the trick. We use tee to send the volume output both to stdout and to the named pipe.
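
To get instant updates, call the script from wherever you handle your multimedia keys. A minimal sketch using xbindkeys (an assumption on my part; any hotkey daemon or window manager binding works just as well):

# ~/.xbindkeysrc (hypothetical paths, adjust to your setup)
"/path/to/script/volume.sh up"
    XF86AudioRaiseVolume
"/path/to/script/volume.sh down"
    XF86AudioLowerVolume
"/path/to/script/volume.sh toggle"
    XF86AudioMute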

The only problem is that when xmobar first starts, it has no data to read from the pipe, so it displays the annoying “Updating…” message. To fix that, we can run our script once after creating the pipe inside the .xinitrc.

_volume_pipe=/tmp/.volume-pipe
[[ -p $_volume_pipe ]] || mkfifo $_volume_pipe
/path/to/script/volume.sh



using git rebase to remove duplicate cherry-picked commits

Git cherry-pick is a great tool: it allows you to select individual commits from a branch and apply them to another. However, if the branch you cherry-picked from is eventually merged into the branch where the individual commits landed, you end up with duplicate commits.

To demonstrate this behaviour, assume we are working on a project and at some point we branch off to a dev branch to work on some exciting new feature. As we work, we find some other important issues, or maybe some minor issues that are not relevant to the new feature, and decide to fix them too. There are two approaches we can take in such a situation. The first is to commit in our current branch and then use git cherry-pick to introduce these changes to the master (or some other) branch. The second is to stash any uncommitted changes, create another branch and commit the fixes there. I personally prefer the second approach, as it makes your life way easier if you are contributing to a project and need to open a pull request on GitHub or create a patch. For our example, we will take the first approach and commit anyway.

So, after some work on the dev branch, the tree will look like this:

dev branch

I have used silly commit messages like “cherry-pick this commit” to indicate which commits we will cherry-pick into the master branch. Let’s do that now:

$ git checkout master
$ git cherry-pick e22c44f
$ git cherry-pick 1a2929a

If we use git log on master, we will notice that we have our new commits, but with different SHA-1 hashes. This happens because cherry-picking does not copy commits: git creates brand new commits on the master branch that introduce the same changes. At this point, the state of the repo is the following:

repo state after cherry-pick

At some point, we are happy with the work on the dev branch and decide to merge dev into master. However, master has changed since the time we initially branched off to work on our feature. This means that we cannot use a fast-forward merge. If we switch to the master branch and run git merge dev, a new merge commit will be introduced. This is the standard behaviour of git for non-fast-forward merges. Sometimes this may be desirable, but other times we prefer a linear history. This will be the repository state after a merge:

repo state after merge

And a git log --oneline will look like this:

b05bcc3 Merge branch 'dev'
b47ebea 2nd commit to cherry-pick
9d190e9 cherry pick this commit
8c0a87e some more work before merging
d1c9568 2nd commit to cherry-pick
9d4575e even more commits
790c30a some more commits!
ac60758 more commits on dev branch
03725e0 cherry pick this commit
2e106d4 1st commit at dev branch
ee38cc6 second commit at master
444357a initial commit

Notice that apart from the (ugly) new merge commit on top, we also ended up with a duplicate for each cherry-picked commit. Also, if you care enough to use the git show command, you will notice that these duplicate commits contain exactly the same changes.

This is where the rebase command comes into play. Rebase can be used to rewrite the git history. It allows us to extract the changes of one or more commits and re-apply them on top of a branch.

We will use it to introduce the new commits of the master branch to our dev branch and then replay all the work of the dev branch on top of them. In our case, the “new” commits of the master branch are the cherry-picked changes. When rebase starts re-applying our work, it is smart enough not to apply the same changes a second time, thus removing the duplicate commits. Running:

git checkout dev
git rebase master

will bring our repository in the following state:

repo state after rebase

As you can see, the cherry-picked commits look like they never existed on the dev branch. Also, now we can use a fast-forward merge:

git checkout master
git merge --ff dev
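
To verify that the duplicates are gone and the history is now linear, a quick look at the log helps (the exact hashes will of course differ in your repository):

$ git log --oneline --graph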

You can find more information about rebasing and cherry-pick at the git-book and the relevant man pages.


preview sphinx documentation

I recently released a very small utility called sphinx-serve which helps me preview sphinx documentation locally. It spawns a simple http server on the documentation folder (html or singlehtml).

Installation is very simple: in the same virtual environment as your sphinx documentation, issue pip install sphinx-serve. This will make the sphinx-serve command available.

When you issue the above command, the script tries to detect the build folder of the documentation. It searches the current working directory and then navigates backwards through parent directories in case you are in a deeper level (much like the bundler utility). When it finds the build folder (the folder name is configurable through the --build argument), it spawns a simple http server. It serves files from the html folder (default) or from the singlehtml folder if --single is supplied.
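
For example, a plain run from somewhere inside the project could look like this (a sketch; the port and build folder name are assumptions, use whatever matches your setup):

$ sphinx-serve --build _build --port 8000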

It is handy to add the sphinx-serve command with any required arguments to the sphinx Makefile:

preview:
    sphinx-serve --single --port 4000

Now you can easily preview your documentation by issuing make preview.


installing Arch Linux ARM in Raspberry Pi

This tutorial guides you through the process of installing the ARM flavour of Arch Linux on a Raspberry Pi. This is a headless installation procedure; no monitor/TV or keyboard is required.

Get Arch Linux ARM

You can download the image using a direct download link or torrent. The latest version available (at this time) is archlinux-hf-2013-02-11. After downloading the zip archive containing the image, verify the SHA-1 checksum. First download the archlinux-hf-2013-02-11.zip.sha1 file in the same path as the zip archive and then run:

$ sha1sum -c archlinux-hf-2013-02-11.zip.sha1

Install on SD card

The installation process is pretty straightforward. Insert the SD card in your computer, extract the image from the zip archive and transfer it to the card using the dd command as root. As always, pay attention to the supplied destination device. Make sure you use the one corresponding to the SD card or you could trash all your data on other devices. Also make sure that the destination device is not mounted; you can use the lsblk command to verify that. We will just use sdX as an example.

# dd bs=1M if=archlinux-hf-2013-02-11.img of=/dev/sdX

Expanding the root partition

After the image transfer is complete, two partitions will be present. The sdX1 partition will be mounted at /boot and its filesystem is VFAT. The other partition, sdX2, holds the root filesystem and is formatted as ext4.

The root filesystem is a little smaller than 2GB. If the SD card is bigger (it should be), we need to expand this partition to fill the remaining space. Alternatively, we could create a second filesystem and mount it at a mount point of our choice. The creation of a new filesystem is really simple, so it is not covered here.

In order to utilize the remaining free space, first we expand the partition and then we expand the filesystem. As always make sure that the partitions are not mounted and that you are altering the proper ones. The following examples assume that the SD card is the device /dev/sdb and the root partition the /dev/sdb2.

Start fdisk. The p command lists the partitions in the device /dev/sdb.

# fdisk -u=sectors /dev/sdb
   Command: p

   Disk /dev/sdb: 7822 MB, 7822376960 bytes
   241 heads, 62 sectors/track, 1022 cylinders, total 15278080 sectors
   Units = sectors of 1 * 512 = 512 bytes
   Sector size (logical/physical): 512 bytes / 512 bytes
   I/O size (minimum/optimal): 512 bytes / 512 bytes
   Disk identifier: 0x000c21e5

   Device Boot      Start         End      Blocks   Id  System
   /dev/sdb1   *        2048      194559       96256    c  W95 FAT32 (LBA)
   /dev/sdb2          194560     3862527     1833984   83  Linux

Then, we delete the second partition (the root):

Command (m for help): d 
Partition number (1-4): 2
Partition 2 is deleted

and re-create it. We create the new partition as primary and we set the last sector at the end of the SD card. The default settings should be fine here.

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (1-4, default 2): 2
First sector (186368-15278079, default 186368): 
Using default value 186368
Last sector, +sectors or +size{K,M,G} (186368-15278079, default 15278079): 
Using default value 15278079
Partition 2 of type Linux and of size 7.2 GiB is set

The partition is already set to Linux (83) so there is no need to alter it. Finally, save the changes:

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

In order to expand the filesystem, first we run e2fsck to detect and fix any inconsistencies and then use resize2fs:

# e2fsck -f /dev/sdb2
# resize2fs /dev/sdb2

IP discovery and remote login

The Arch Linux ARM image has the ssh daemon enabled by default. In order to log in, we must first determine which IP the rpi has acquired from the local network. This assumes there is a working DHCP server on the LAN. One way to get the IP address is to check the DHCP logs, or the router logs if DHCP runs on a router. Alternatively, we can run a ping scan on the LAN using nmap. Assuming that the LAN subnet is 192.168.0.0 with mask 255.255.255.0, run:

# nmap -PE -sn -n 192.168.0.0/24

After finding the IP, log in with username root and password root.

Post-install configuration

After you log in, you should change the root password using passwd. Also, it is a good idea to re-generate the rsa and dsa keys.

# ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N "" -t dsa
# ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N "" -t rsa

Answer “yes” at the prompt to overwrite. Then run a full system upgrade:

# pacman -Syu

Memory allocation

Depending on the usage you are planning for the rpi, it might be a good idea to adjust the RAM split between the CPU and the GPU. Edit the /boot/config.txt file and change the value of the gpu_mem_256 or gpu_mem_512 variable, depending on the rpi model you have. This post shows the valid memory values. Note that you must also take into consideration the cma_lwm and cma_hwm variables, which allow dynamic memory management at runtime. Make sure that the gpu_mem_256 (or 512) value is higher than the high water mark cma_hwm.
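
For illustration, the relevant part of /boot/config.txt could look like this (the values are made-up examples, not recommendations; check the valid values first):

# RAM reserved for the GPU on a 256MB model
gpu_mem_256=128
# dynamic memory management water marks; cma_hwm must stay below gpu_mem_256
cma_lwm=16
cma_hwm=32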



adjusting HDD spindown with systemd service files

The preferred way to run one-shot commands after boot when using Arch Linux + initscripts is placing them in the rc.local file. initscripts also provide a handy helper that creates a systemd service file to run these commands on boot. If you want to completely move away from initscripts to a pure systemd-based system, you need to create service file(s) for these commands yourself.

The following is an example service file that adjusts the HDD spindown behaviour.

[Unit]
Description=Fix excessive HDD parking frequency

[Service]
Type=oneshot
ExecStart=/sbin/hdparm -B 220 /dev/sda
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

The most notable options here are the oneshot service type and the RemainAfterExit option. The oneshot service type indicates that the command must exit before any follow-up units are started. We also set RemainAfterExit to yes so that the service is considered active even after the command exits. Finally, keep in mind that you need the full command path in ExecStart.
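
Assuming you save the unit as /etc/systemd/system/hdd-spindown.service (the filename is arbitrary), enable and start it with:

# systemctl enable hdd-spindown.service
# systemctl start hdd-spindown.service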


migrate disqus comments for jekyll’s pretty urls

I recently enabled the pretty-url feature in Jekyll, which removes the redundant .html suffix in URLs. You can enable it by adding permalink: pretty in your _config.yml. This of course broke Disqus comments on all existing pages.

Fortunately, this can be easily fixed using the migration tools Disqus provides. The concept is to write a text file which maps the old URLs to the new ones. The file format should be like the following snippet:

old_url1, new_url1
old_url2, new_url2

Disqus provides a CSV file which contains all your site’s URLs. To download this file, navigate to your site’s admin panel and start the URL mapper tool under tools → migrate threads → start URL mapper. Then download the CSV by clicking the download CSV link.

First we need to get rid of the Windows CRLF line endings in the CSV file. A little perl magic helps here:

perl -pi -e 's/\r\n/\n/g' disqus-comments-old.csv

Then, we need to format the file accordingly. The new URL is the old one with the .html suffix replaced by a trailing slash, which is how Jekyll’s pretty URLs look:

http://example.com/post.html, http://example.com/post/

Again using perl, we can cook up a simple script to automate the process:

#!/usr/bin/env perl
use strict;
use warnings;

open(my $fin, '<', 'disqus-comments-old.csv') or die $!;
open(my $fout, '>>', 'disqus-comments-new.csv') or die $!;

my $old; # keep the original line

while (<$fin>) {
    chomp;
    $old = $_;
    s/\.html$/\//;  # replace the trailing .html with a slash
    print $fout "$old, $_\n";
}

close($fin);
close($fout);

I didn’t bother to allow passing the filename as a script argument, so I just ‘hardcoded’ the filenames in the script. Edit at will and don’t forget to run it from the proper path.

Finally, upload the new CSV file to the URL migration tool and you will have your old comments back.


view your public IP from pentadactyl/vimperator

The Pentadactyl and Vimperator extensions allow you to write handy functions in JavaScript. These functions can later be called from command mode or from keybindings.

Some time ago, I wrote a simple function that makes a request to a page which returns your public IP address. That way you don’t have to actually visit that page and interrupt your flow of work. I find this extremely useful as I frequently use VPNs and SOCKS proxies.

The js code is the following:

function ip() {
    var req = new XMLHttpRequest();
    req.open('GET', 'http://ipz.herokuapp.com/', true);

    req.onreadystatechange = function (ev) {
        if (req.readyState == 4) {
            try {
                dactyl.echo(req.responseText);
            }
            catch (err) {
                dactyl.echo(err);
            }
        }
    }
    req.send(null);
}

I used ipz, a very simple service I wrote that is currently deployed on Heroku. You can use any of the popular services instead, like icanhazip or the checkip service from DynDNS.

In order for this to work, you need to put the function in your ~/.pentadactylrc enclosed in a here-document block:

javascript << EOF
 function-here
EOF

I also set a command so I can call this function by typing :ip. The command is:

command ip -js ip()

In order for this to work in vimperator, we need some adjustments. Put the following code in your ~/.vimperatorrc.

    ip = function() {
        var req = new XMLHttpRequest();
        req.open('GET', 'http://ipz.herokuapp.com/', true);

        req.onreadystatechange = function (ev) {
            if (req.readyState == 4) {
                try {
                    liberator.echo(req.responseText);
                }
                catch (err) {
                    liberator.echoerr(err);
                }
            }
        }
        req.send(null);
    }

Again, don’t forget the here-document block and the command, which are a bit different:

:js << EOF
 function-here
EOF
command! ip js ip()

sorting null values last in CakePHP

Assume your data contain a field with some NULL and some non-NULL values. If you sort your entries against that field in descending order, NULL is interpreted as 0 and entries with NULL in that field go to the bottom. This is perfectly desirable. But what if you want to sort them in ascending order? Entries with the NULL field would show up first. In some cases, you might want to sort entries with actual data in that field first, and let the rest show up last.

To accomplish that in CakePHP, you need two entries for the same field in the order array. Assume our model is House (which obviously describes some houses) and we want to sort against the field geo_distance, which contains the distance between a particular house and some other place. As this is an optional field it might contain NULL values, but we want to promote the houses that contain distance information in the search results.

$order = array('geo_distance' => 'IS NULL ASC', 'House.geo_distance' => 'ASC');

Note that you need the model name for the second array entry in order to trick CakePHP into taking it into account.
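
For reference, the order array above roughly translates to an ORDER BY clause like the following (a sketch of the SQL that ends up being generated):

ORDER BY geo_distance IS NULL ASC, House.geo_distance ASC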


generate jekyll front-matter with bash

Creating a post in a jekyll-powered blog is pretty straightforward. Just create a file in markdown format and put your post in. Every post has a special YAML Front Matter block with some variables. Using a bash script we can easily generate this file with proper values for the YAML variables. For example, for this post, the filename contains the post date and title.

2011-08-31-generate-jekyll-front-matter-with-bash.markdown

The same values (date, title) are contained in the YAML block inside the file, slightly more formatted. A bash script can take care of the formatting for us; we only need to supply the post title as the script argument. This assumes that the date (and optionally the time) should be set to the moment we run the script.

First we need to grab the date and optionally the time:

_date=$(date +'%Y-%m-%d')
_datetime=$(date +'%Y-%m-%d %H:%M:%S')

_date will be used for the filename and _datetime for the date variable.

We also need to replace whitespace with dashes in the post title, for use in the filename.

_post=$(echo "$1" | tr ' ' '-')

The post title is supplied as an argument, so we want to bail out if the user does not supply one:

if [[ -z $1 ]]; then
    echo "A post title is required. Bye.."
    exit 1
fi

Also, another safety check would be to verify that the file does not already exist, to prevent accidental overwrites.

To do that we need the absolute path for our post file. Putting all these together we have:

_title="${_date}-${_post}.markdown"
_cwd=$(pwd)
_post_file="${_cwd}/${_title}"

if [[ -f ${_post_file} ]]; then
    echo "File already exists. Bye.."
    exit 1
fi

Finally, the magic code that creates our post file uses here documents:

cat << EOF >| ${_post_file}
---
layout: post
title: $1
date: $_datetime
---
EOF

Make sure not to quote the EOF delimiter, as bash variable substitution won’t work otherwise. Also, note that I used the >| notation because I have the bash noclobber option set.
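
Putting it all together, a run looks like this (assuming the script is saved as jekyll-post; the date will reflect the moment you run it):

$ ./jekyll-post "generate jekyll front-matter with bash"
$ cat 2011-08-31-generate-jekyll-front-matter-with-bash.markdown
---
layout: post
title: generate jekyll front-matter with bash
date: 2011-08-31 10:15:00
---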

The jekyll-post script is on GitHub; I also added some code that prompts the user to launch a text editor.


weechat horizontal buffers bar

To get a nice clean buffer bar at the bottom of the screen you need the buffers.pl script.

First install using weeget:

/weeget install buffers.pl

Weeget will also load the script automatically for you.

Finally, set its position to the bottom:

/set weechat.bar.buffers.position bottom

datetime to timestamp in python

Say you got a file’s last change time using os.stat:

>>> import os
>>> disk_ts = os.stat('filename').st_ctime
>>> disk_ts
1303113637.0

and you want to compare it with a datetime object:

>>> import datetime
>>> dt = datetime.datetime(2011, 2, 23, 13, 48, 0, 499000)
>>> dt
datetime.datetime(2011, 2, 23, 13, 48, 0, 499000)

Convert the datetime object using the time module:

>>> import time
>>> dtime_ts = time.mktime(dt.timetuple())
>>> dtime_ts
1298461680.0

Now both values are plain floats, so compare and rejoice! Continuing the session above:
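
>>> dtime_ts < disk_ts
True

Also, if you want to display timestamps in a human-friendly format, use time.ctime: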

>>> time.ctime(disk_ts)
'Mon Apr 18 11:00:37 2011'
>>> time.ctime(dtime_ts)
'Wed Feb 23 13:48:00 2011'



xorg - proper greek keyboard layout

I was recently getting a lot of random crashes with xfce4-xkb-plugin. Also, when I opened the plugin properties, the greek layout variant text was all scrambled and illegible. Having searched the internet and bug trackers, I couldn’t find anything similar, so I started suspecting that my setup was broken. A look in /usr/share/X11/xkb/rules/xorg.lst showed me that the greek layout code was ‘gr’ and not ‘el’ as I had specified it.

You can get the greek layout code with:

grep -i 'greece' /usr/share/X11/xkb/rules/xorg.lst

and the keyboard variants with:

grep 'gr:' /usr/share/X11/xkb/rules/xorg.lst

so this is how my (correct) keyboard settings look:

Section "InputClass"
  Identifier       "Keyboard Defaults"
  MatchIsKeyboard  "yes"
  Option           "XkbLayout" "us, gr"
  Option           "XkbVariant" ",extended"
  Option           "XkbOptions" "grp:alt_shift_toggle, terminate:ctrl_alt_bksp, lv3:ralt_switch_multikey, eurosign:e"
EndSection

(The lv3 multikey option is used to type ligatures; see: http://shtrom.ssji.net/skb/xorg-ligatures.html)


weechat howto

Weechat (Wee Enhanced Environment for Chat) is a lightweight, extensible, console-based irc client. It is written in C and licensed under the GNU GPLv3.

Launch weechat from console with the weechat-curses command.

One of the most important commands is the /help command, so when in doubt use /help. Also, the command /set config.section.option will print the value of config.section.option, and if you want to set a value for an option you type /set config.section.option value. Anyway, let’s start by building our configuration to connect to freenode.

Start by printing all irc.server values.

/set irc.server.

you will get the response:

[server]
irc.server.freenode.addresses  = "chat.freenode.net/6667"
irc.server.freenode.autoconnect
irc.server.freenode.autojoin
irc.server.freenode.autoreconnect
irc.server.freenode.autoreconnect_delay
irc.server.freenode.autorejoin
irc.server.freenode.autorejoin_delay
irc.server.freenode.command
irc.server.freenode.command_delay
irc.server.freenode.ipv6
irc.server.freenode.local_hostname
irc.server.freenode.nicks
irc.server.freenode.password
irc.server.freenode.proxy
irc.server.freenode.realname
irc.server.freenode.sasl_mechanism
irc.server.freenode.sasl_password
irc.server.freenode.sasl_timeout
irc.server.freenode.sasl_username
irc.server.freenode.ssl
irc.server.freenode.ssl_cert
irc.server.freenode.ssl_dhkey_size
irc.server.freenode.ssl_verify
irc.server.freenode.username

As you can see freenode is already in the configuration by default (irc.server.freenode.addresses = “chat.freenode.net/6667”). If you need to add another server use e.g.:

/server add oftc irc.oftc.net/6667

Now, all we need to do is set the rest of the values for the freenode server. To get help for a particular option (including the list of expected values), type e.g.:

/help irc.server.freenode.autoconnect

So, we want to connect to freenode by default and reconnect automatically in case of connection problems:

/set irc.server.freenode.autoconnect on
/set irc.server.freenode.autoreconnect on

Next we need to automatically join some channels; for multiple channels use commas as separators, without spaces:

/set irc.server.freenode.autojoin "#archlinux,#archlinux-offtopic,#archlinux-greece"

After that we need to set our nicknames (also comma-separated), username and realname (these are optional):

/set irc.server.freenode.nicks "nick1,nick2,nick3"
/set irc.server.freenode.username "my-user-name"
/set irc.server.freenode.realname "my-real-name"

Last but not least, set the irc command used to identify with the freenode server:

/set irc.server.freenode.command "/msg NickServ identify <your-password-goes-here>"

Save your configuration with the /save command and restart weechat (/quit exits weechat). Now you should be connected to freenode!

Time for some keybindings!

* F5 / F6 : cycle through the buffers
* PageUp / PageDown : scroll the main chat area up/down
* F11 / F12 : scroll the nickname list up/down

If you are running terminator and have problems with F11 being interpreted as “go to full screen”, add this to your ~/.config/terminator/config:

[keybindings]
  full_screen = Disabled

If you have the same problem with xfce Terminal go to Edit -> Preferences -> Shortcuts and kill the nasty fullscreen shortcut!

Lastly, the cool stuff! Weechat allows you to split the screen and have multiple channels open at the same time. Use /window splitv to split the screen vertically and /window splith to split it horizontally. You can also provide a percentage to split the screen unevenly, and use /window merge all to merge all windows back into one. When you are happy with your window layout, save it:

/layout save

To automatically save window layout on exit use :

/set weechat.look.save_layout_on_exit all

When in split mode, use F7 / F8 to cycle through the active windows/buffers.




running eagle on arch linux using libjpeg6

Eagle 5.6 depends on libjpeg6 to run. In case you have upgraded to libjpeg7, there are a few tricks to help you run eagle.

One trick you should not use, though, is creating a libjpeg6 symlink that points to libjpeg7. This could possibly mess up other applications on your system.

Install libjpeg6 from AUR.

If you don’t want to install libjpeg6 system-wide, there is a more elegant way to fix it:

1. grab a copy of libjpeg6
2. put it somewhere inside your home directory, for example inside /opt/eagle/lib
3. edit the launcher script (/usr/bin/eagle) and add the following before the exec ./eagle command:

export LD_LIBRARY_PATH=/opt/eagle/lib

Grab eagle from AUR.

Update: as of version 5.6.0-2 in AUR, you do not need to do the above steps as the PKGBUILD takes care of everything for you.


set max resolution of guest os in virtualbox

There is a possibility that, even if you have installed the guest additions in virtualbox, the resolution of the guest OS won’t exceed a certain value. If that’s the case, you can use the VBoxManage command to set the max resolution like this:

VBoxManage setextradata global GUI/MaxGuestResolution X,Y

where X,Y is the desired resolution, e.g. 1680,1050.
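
For example, to allow resolutions up to 1680x1050:

VBoxManage setextradata global GUI/MaxGuestResolution 1680,1050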


change mysql root password

If you have forgotten your mysql root password, all you need is the root password of the system to set a new one.

First, stop mysql and then start it in the background with the skip-grant-tables parameter:

mysqld_safe --skip-grant-tables &

Then log in to mysql as root (no password is needed anymore):

mysql -uroot

and type the following:

mysql> use mysql;
mysql> update user set password=PASSWORD('your-new-root-pass') where User='root';
mysql> flush privileges;

Don’t forget to kill the mysql process when you are done and restart it properly.


set mysql encoding to utf8

This one troubled me for a long time and it also resulted in a corrupted database at some point, because I changed the collation to utf8 while the database encoding was latin1. (So always make backups and make sure you don’t erase them. :P)

Before doing the following, make sure you back up your databases, as this could possibly corrupt them. If you need to change the encoding of your database data, one possible solution (not tested) would be to use mysqldump to dump the database with the latin (or whatever) encoding, then use the iconv utility to convert that encoding to utf8. After that, change the mysql encoding to utf8 (as follows) and import the database with the new encoding.

Another solution (which worked for me) is to let mysql “autodetect” the database encoding when you dump (mysqldump) and later import the databases.

So, to set everything to utf8 encoding in mysql, add the following lines to the [mysqld] section of my.cnf:

init-connect = 'SET NAMES utf8'
character-set-server = utf8
collation-server = utf8_general_ci
default-character-set = utf8
default-collation = utf8_general_ci

You also need the following line under the [client] section of my.cnf:

default-character-set = utf8

Then restart mysql. You can verify the encodings in use by logging in to mysql and running the following commands:

mysql> show variables like '%server%';
mysql> show variables like '%character%';
mysql> show variables like '%collation%';