Quite often it is useful to log detailed information or metrics about the API requests your backend performs while serving an HTTP request. There is a logger middleware which provides plenty of configuration options, such as logging request headers, the request body, filtering sensitive information, or customizing the log format. However, one of the most important metrics you would usually need is the duration of the request, and this is not supported by the logger middleware. Enter the instrumentation middleware.
The instrumentation middleware allows us to use the excellent Active Support Instrumentation to instrument our requests. Active Support includes an instrumentation API which allows us to hook into various parts of our Rails application and take measurements. It is built on a pub/sub mechanism: we define events to be broadcast, and we define subscribers which listen for these events. Rails itself provides a wide range of predefined events that we can subscribe to, but it also allows us to create custom events.
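As a quick illustration of the pub/sub flow (the event name and payload here are arbitrary, not something Rails defines), broadcasting and consuming a custom event looks roughly like this:
ActiveSupport::Notifications.subscribe("something.my_app") do |name, start, finish, id, payload|
  Rails.logger.info("#{name} took #{((finish - start) * 1000.0).round(1)}ms #{payload.inspect}")
end

ActiveSupport::Notifications.instrument("something.my_app", key: "value") do
  # the work we want to measure goes here
end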
For this article, we have a minimal sample Rails application which stores some primitive information about movies. Let's assume that it stores the movie name, release year and a rating. We also have access to an external API, which we can query using the same movie id to retrieve the movie's cast. The idea is to hit the /movies/:id endpoint, retrieve all the information we have about this movie from our database, then query the external API for the extra information (the cast), and pass this info to our view. We will show how we can use Faraday to query the external API and use the instrumentation middleware to log the API call duration.
First, we need to install faraday.
bundle add faraday
Our Rails application has the following parts:
# db/migrate/20210220175430_create_movies.rb
class CreateMovies < ActiveRecord::Migration[6.1]
  def change
    create_table :movies do |t|
      t.string :title
      t.string :year
      t.float :rating
      t.timestamps
    end
  end
end

# config/routes.rb
resources :movies

# app/controllers/movies_controller.rb
class MoviesController < ApplicationController
  def show
    @movie = Movie.find(params[:id])
  end
end

# app/models/movie.rb
class Movie < ApplicationRecord
end
Pretty basic. The db table could definitely be better, but for our use case it is fine.
We have also defined app/views/movies/show.html.erb
to just render our controller
instance variables (omitted for brevity).
Now we will add a new model which will use our external API to retrieve the extra movie information. This won't be an ActiveRecord model; we will use a simple PORO.
# app/models/movie_info.rb
class MovieInfo
  HOST = 'bf74dc7a-7b56-47f6-9fcb-0881f7a36ff9.mock.pstmn.io'.freeze

  class << self
    def find(id)
      conn = Faraday.new(url: "https://#{HOST}/movies/#{id}")
      res = conn.get
      JSON.parse(res.body)
    end
  end
end
This class defines a find class method which accepts an id, constructs a new Faraday connection object, and performs an API call to the configured endpoint. Finally, it returns the response body as a Ruby hash. Faraday uses Net::HTTP by default; for this example we will not be configuring another adapter.
Defining a class method named find
is inspired by the Rails ActiveRecord API,
so updating our controller to use this model will look like this:
def show
  @movie = Movie.find(params[:id])
  @extra_movie_info = MovieInfo.find(params[:id])
end
With the current setup, if we access our application through http://localhost:3000/movies/1
we will get the following log output:
Started GET "/movies/1" for 127.0.0.1 at 2021-02-25 13:18:21 +0200
Processing by MoviesController#show as HTML
Parameters: {"id"=>"1"}
Movie Load (0.2ms) SELECT "movies".* FROM "movies" WHERE "movies"."id" = ? LIMIT ? [["id", 1], ["LIMIT", 1]]
↳ app/controllers/movies_controller.rb:3:in `show'
Rendering layout layouts/application.html.erb
Rendering movies/show.html.erb within layouts/application
Rendered movies/show.html.erb within layouts/application (Duration: 0.4ms | Allocations: 139)
[Webpacker] Everything's up-to-date. Nothing to do
Rendered layout layouts/application.html.erb (Duration: 9.8ms | Allocations: 3583)
Completed 200 OK in 1109ms (Views: 10.6ms | ActiveRecord: 0.2ms | Allocations: 5164)
Notice that by default we don't get any info about the performed API request whatsoever. If we enable the logger middleware that we briefly mentioned at the beginning of the article, we will get some basic output on what is happening. Let's enable it in our Faraday configuration:
def find(id)
  conn = Faraday.new(url: "https://#{HOST}/movies/#{id}") do |faraday|
    faraday.response :logger, nil, { headers: false, bodies: false }
  end
  res = conn.get
  JSON.parse(res.body)
end
Then, the output will look like this:
Started GET "/movies/1" for 127.0.0.1 at 2021-02-20 20:25:07 +0200
Processing by MoviesController#show as HTML
Parameters: {"id"=>"1"}
(0.1ms) SELECT sqlite_version(*)
↳ app/controllers/movies_controller.rb:11:in `show'
Movie Load (0.1ms) SELECT "movies".* FROM "movies" WHERE "movies"."id" = ? LIMIT ? [["id", 1], ["LIMIT", 1]]
↳ app/controllers/movies_controller.rb:11:in `show'
I, [2021-02-20T20:25:07.573589 #39486] INFO -- request: GET https://bf74dc7a-7b56-47f6-9fcb-0881f7a36ff9.mock.pstmn.io/movies/1
I, [2021-02-20T20:25:08.647338 #39486] INFO -- response: Status 200
Rendering layout layouts/application.html.erb
Rendering movies/show.html.erb within layouts/application
Rendered movies/show.html.erb within layouts/application (Duration: 1.2ms | Allocations: 139)
[Webpacker] Everything's up-to-date. Nothing to do
Rendered layout layouts/application.html.erb (Duration: 47.3ms | Allocations: 3549)
Completed 200 OK in 1140ms (Views: 52.1ms | ActiveRecord: 1.9ms | Allocations: 8848)
Notice the two INFO log lines starting with I. Definitely better than before, but what we are aiming for here is a line which includes the call duration, much like the lines which show the view rendering duration or the time ActiveRecord took to retrieve the data from our database.
Let’s swap the logger with the instrumentation middleware:
def find(id)
  conn = Faraday.new(url: "https://#{HOST}/movies/#{id}") do |faraday|
    faraday.request :instrumentation, name: "movies.faraday"
  end
  res = conn.get
  JSON.parse(res.body)
end
The name parameter defines the name of the event that will be broadcast. The Rails convention for event names is event.library, and Faraday sets this to request.faraday by default. The next step is to subscribe to those events. This is done by defining an ActiveSupport::Notifications.subscribe block, usually in an initializer or under /lib.
# /lib/faraday_subscriber.rb
ActiveSupport::Notifications.subscribe("movies.faraday") do |name, starts, ends, _, env|
  url = env[:url]
  http_method = env[:method].to_s.upcase
  duration = ((ends - starts) * 1000.0).round(1)
  log_prefix = name.split(".").last.camelize
  output = "[%s] %s %s %s (Duration: %sms)" % [log_prefix, url.host, http_method, url.request_uri, duration]
  Rails.logger.info(output)
end
Quite a few things are happening here. The name argument is set to the broadcast event name, in our case movies.faraday. The starts and ends arguments are Time objects representing the time our event started and ended respectively. The env argument is a Faraday::Env object which holds various information about our request. In the Rails instrumentation documentation, we can see more details about the subscribe block arguments.
The env argument, which holds the Faraday request information, can be used to extract the API request host and HTTP method, as well as the headers and the response body. In our example we use the host, HTTP method and request URI to construct our log line. The starts and ends values are used to calculate the duration. According to the Time documentation, subtracting two Time objects gives a Float representing the difference in seconds, so we go one step further and convert it to milliseconds.
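Note that files under /lib are not loaded automatically in a default Rails setup. One way to load the subscriber (assuming the file path shown above) is to require it from an initializer:
# config/initializers/instrumentation.rb
require Rails.root.join("lib", "faraday_subscriber").to_s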
After restarting our application, the log output looks like this:
Started GET "/movies/1" for 127.0.0.1 at 2021-02-25 20:20:00 +0200
(0.9ms) SELECT sqlite_version(*)
(0.1ms) SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
Processing by MoviesController#show as HTML
Parameters: {"id"=>"1"}
Movie Load (0.2ms) SELECT "movies".* FROM "movies" WHERE "movies"."id" = ? LIMIT ? [["id", 1], ["LIMIT", 1]]
↳ app/controllers/movies_controller.rb:3:in `show'
[Faraday] bf74dc7a-7b56-47f6-9fcb-0881f7a36ff9.mock.pstmn.io GET /movies/1 (Duration: 1262.7ms)
Rendering layout layouts/application.html.erb
Rendering movies/show.html.erb within layouts/application
Rendered movies/show.html.erb within layouts/application (Duration: 1.2ms | Allocations: 411)
[Webpacker] Everything's up-to-date. Nothing to do
Rendered layout layouts/application.html.erb (Duration: 12.9ms | Allocations: 5527)
Completed 200 OK in 1299ms (Views: 16.5ms | ActiveRecord: 1.1ms | Allocations: 13035)
Our log is prepended with [Faraday]
and includes the host, the HTTP method, the API endpoint and the total request
duration. Success!
The subscribe block can be simplified a bit more. We can construct an ActiveSupport::Notifications::Event object from the arguments. This will give us an object-oriented interface to the event data.
ActiveSupport::Notifications.subscribe("movies.faraday") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  url = event.payload[:url]
  http_method = event.payload[:method].to_s.upcase
  log_prefix = event.name.split(".").last.camelize
  output = "[%s] %s %s %s (Duration: %sms)" % [log_prefix, url.host, http_method, url.request_uri, event.duration]
  Rails.logger.info(output)
end
We even get the duration calculation for free (in ms), neat!
The AbstractServlet class allows us to respond to GET, HEAD and OPTIONS requests. We can use it to encapsulate our logic of receiving a request, forwarding it to the server (possibly changing it) and responding back.
Let's write a simple proxy that receives a request, appends a custom query param and forwards it to the API server https://api-server.com. We will append the realm query parameter with a hardcoded value of qa-realm.
First, let's create our proxy as a subclass of AbstractServlet. We will implement the do_GET method as we only care about GET requests. The incoming request object is stored in the request parameter; we will manipulate this later in order to add our query parameter. In order to respond, we need to set the content_type and body values on the response object.
require "webrick"
class MyProxy < WEBrick::HTTPServlet::AbstractServlet
def do_GET(request, response)
response.content_type = "text/plain"
response.body = "It works!"
end
end
Next, we will add the realm parameter. We need to parse the request URL to make sure we correctly handle any existing query parameters. We will use URI for this.
require "webrick"
require "uri"
class MyProxy < WEBrick::HTTPServlet::AbstractServlet
REALM = "qa-realm"
def do_GET(request, response)
uri = forwarded_uri(request.unparsed_uri)
response.content_type = "text/plain"
response.body = "It works! New URI is #{uri}"
end
private
def forwarded_uri(unparsed_uri)
uri = URI(unparsed_uri)
params = URI.decode_www_form(uri.query || "") << ["realm", REALM]
uri.query = URI.encode_www_form(params)
uri.to_s
end
end
Right now we manipulate the request URI, but we don't forward anything to the intended endpoint. We will use Net::HTTP to forward the request and pass the response back, reusing the body and the content type we receive from the proxied server.
require "webrick"
require "net/http"
require "uri"
class MyProxy < WEBrick::HTTPServlet::AbstractServlet
HOST = "api-server.com"
REALM = "qa-realm"
def do_GET(request, response)
uri = forwarded_uri(request.unparsed_uri)
http = Net::HTTP.new(HOST, 443)
http.use_ssl = true
resp = http.request(Net::HTTP::Get.new(uri))
body = resp.body
response.content_type = resp["content-type"]
response.body = body
end
private
def forwarded_uri(unparsed_uri)
uri = URI(unparsed_uri)
params = URI.decode_www_form(uri.query || "") << ["realm", REALM]
uri.query = URI.encode_www_form(params)
uri.to_s
end
end
In order to run our proxy, we need a few more missing pieces. First we need to create a new WEBrick::HTTPServer:
server = WEBrick::HTTPServer.new(:Port => ENV["PORT"] || 8080)
Then we need to mount our proxy under an endpoint; we can use the root endpoint or any other we want.
server.mount "/", MyProxy
Finally, let’s allow stopping the server using Ctrl+C and then start the server.
trap("INT"){ server.shutdown }
server.start
If we run our proxy with ruby myproxy.rb
it will start serving using port 8080 under /.
[2021-01-24 21:36:00] INFO WEBrick 1.6.0
[2021-01-24 21:36:00] INFO ruby 2.7.1 (2020-03-31) [x86_64-darwin18]
[2021-01-24 21:36:00] INFO WEBrick::HTTPServer#start: pid=47104 port=8080
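To quickly check that the proxy forwards requests and appends the parameter as expected, we can hit it with curl (the path and query string are just an example):
curl "http://localhost:8080/movies?year=2020"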
The final code is the following:
#!/usr/bin/env ruby
# frozen_string_literal: true

require "webrick"
require "net/http"
require "uri"

class MyProxy < WEBrick::HTTPServlet::AbstractServlet
  HOST = "api-server.com"
  REALM = "qa-realm"

  def do_GET(request, response)
    uri = forwarded_uri(request.unparsed_uri)
    http = Net::HTTP.new(HOST, 443)
    http.use_ssl = true
    resp = http.request(Net::HTTP::Get.new(uri))
    body = resp.body
    response.content_type = resp["content-type"]
    response.body = body
  end

  private

  def forwarded_uri(unparsed_uri)
    uri = URI(unparsed_uri)
    params = URI.decode_www_form(uri.query || "") << ["realm", REALM]
    uri.query = URI.encode_www_form(params)
    uri.to_s
  end
end

server = WEBrick::HTTPServer.new(:Port => ENV["PORT"] || 8080)
server.mount "/", MyProxy
trap("INT") { server.shutdown }
server.start
Docker can be very helpful when creating virtual environments, because containers are spawned way faster than virtual machines and, additionally, you don't have the virtual machine overhead.
Vagrant has the notion of base boxes. Base boxes are OS images with the bare minimum packages installed. Vagrant can use these to run any supplied provisioner such as Puppet, Chef, SaltStack etc. to create the desired virtual environment. However, Vagrant has some requirements in order to be able to communicate with and run commands in the virtual machines, such as properly configured ssh and sudo access.
If we have our Puppet/Chef/whatever code already in place and want to switch from, let's say, VirtualBox to docker, we need a docker image which supports these Vagrant requirements. We can use a Dockerfile to define the Vagrant requirements and then build our image. We will create a Vagrant-ready docker image starting from the debian base image.
First, we need to create a file named Dockerfile
. A Dockerfile is a text file which describes all
the instructions/commands that docker needs to run in order to build a docker image.
For starters we add the following lines:
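# the base image and tag we build from
FROM debian:7.8
# informative only; the maintainer value here is just an example
MAINTAINER John Doe <john@example.com>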
The first line defines which debian base image we are going to use to build ours.
We will be using the image tagged as 7.8
. The maintainer line is informative
showing the author of the generated images.
Next we are going to create the vagrant
user and set the password to “vagrant”:
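# a sketch: create the vagrant user with a home directory and set its password
RUN useradd --create-home --shell /bin/bash vagrant
RUN echo "vagrant:vagrant" | chpasswd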
Also, setting the root password to “vagrant” is considered a good practice if we are going to distribute our image/Dockerfile, so let’s do that too:
Next we need to configure proper ssh access for the vagrant user. First we add the vagrant public key to the authorized_keys file and then apply the appropriate permissions.
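A sketch: the key is the well-known Vagrant insecure public key hosted in the Vagrant GitHub repository (adjust the URL if it has moved):
RUN mkdir -p /home/vagrant/.ssh
ADD https://raw.githubusercontent.com/hashicorp/vagrant/main/keys/vagrant.pub /home/vagrant/.ssh/authorized_keys
RUN chown -R vagrant:vagrant /home/vagrant/.ssh && \
    chmod 0700 /home/vagrant/.ssh && \
    chmod 0600 /home/vagrant/.ssh/authorized_keys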
The ADD instruction will retrieve the vagrant.pub key from GitHub and add it to the authorized_keys file of the vagrant user.
Next we are going to install the minimum required packages. This is a good time to also install puppet if you are going to use it to provision the container. Feel free to ignore it if you don't need it, or install chef or whatever tool you need. We also run apt-get clean to free some space from the resulting image.
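Roughly (the exact package list is an assumption based on the requirements above):
RUN apt-get update && \
    apt-get install -y openssh-server sudo puppet && \
    apt-get clean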
The vagrant user should have root access to the system. We are going
to take advantage of the sudoers.d
folder. Sudo parses files in the /etc/sudoers.d
folder so that we don’t have to edit /etc/sudoers directly. The format for the files in the
sudoers.d folder is the same as for /etc/sudoers.
Create a folder next to the Dockerfile named sudoers.d and place the file 01_vagrant inside, with the following contents:
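vagrant ALL=(ALL) NOPASSWD:ALL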
Then we use the ADD
instruction to add it to the system and also ensure it has the proper permissions.
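For example:
ADD sudoers.d/01_vagrant /etc/sudoers.d/01_vagrant
RUN chmod 0440 /etc/sudoers.d/01_vagrant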
Finally, we need to have the ssh server running in order for Vagrant to ssh into the machine. First we create the /var/run/sshd folder, because we are going to skip running the ssh server init script and instead run sshd using the CMD instruction. We also expose port 22 using the EXPOSE instruction.
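Putting that into the Dockerfile looks roughly like this:
RUN mkdir -p /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]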
Building the docker image is pretty straightforward, running:
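# run from the folder containing the Dockerfile
docker build -t vagrant-debian .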
will create the image with name vagrant-debian
. In order to spawn a docker container from that
image we run the docker run
command. We can see that the default action of this container
is to run an ssh server.
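For example (the container name is arbitrary):
docker run -d --name vagrant-test vagrant-debian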
However the whole point is to run this through Vagrant. First stop the running container using the container id or the container name:
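docker stop vagrant-test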
Then let’s create a Vagrantfile
with the following contents:
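A minimal sketch using the Docker provider and the image we just built:
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "vagrant-debian"
    d.has_ssh = true
  end
end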
and run vagrant up --provider=docker
.
Now we can ssh into the running container through Vagrant using vagrant ssh. We can use this base image to create other images that work with Vagrant (use the image name in the FROM instruction). Or we can use this image together with Vagrant + puppet, or any other configuration management tool, using the --provision-with flag. If we just wanted to run a specific service/application, we might be better off putting the service installation/setup in the Dockerfile in the first place and skipping the provisioning part. However, this setup is super helpful if we already have a working puppet configuration and use Vagrant with VirtualBox et al. You can find my Vagrant-ready debian images in my dockerhub repo and the Dockerfiles in my github repo.
Named pipes can be created with the mknod or mkfifo commands and deleted with rm.
In this tutorial we will use named pipes to read the current volume levels and display them in xmobar. Xmobar also has an alsa plugin, but distributions do not always compile xmobar with alsa support. An alternative approach is to write a small script that parses the output of the amixer command and call it from xmobar at regular intervals to show the current volume levels. This however requires xmobar to frequently call an external command/script. Also, if you use a long polling interval, you might notice some "lag" from the time you change the volume levels to the time this change is reflected in xmobar. Using named pipes the change is instant and you also avoid redundant calls.
First, we need to create our named pipe. All my startup logic is inside a script which is called from ~/.xinitrc
. For simplicity let’s create the pipe inside the ~/.xinitrc
before launching the window manager.
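A minimal sketch of the relevant ~/.xinitrc line:
[ -p /tmp/.volume-pipe ] || mkfifo /tmp/.volume-pipe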
We create the pipe only if it does not exist. That way we can logout/login without getting any "File exists" errors. My /tmp is mounted as tmpfs, so the pipes do not survive a reboot.
Next we need to configure xmobar to read from this pipe. Edit your xmobar.hs
and put in the command
list:
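-- a rough sketch of the entry inside the commands list of xmobar.hs
Run PipeReader "/tmp/.volume-pipe" "vol_pipe"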
This registers the PipeReader plugin to read from /tmp/.volume-pipe
. It also makes the vol_pipe
alias available in the output template. The template is used to describe how to display information gathered from the plugins. I use something like this to show the volume levels: ♫ <fc=#b4cdcd>%vol_pipe%</fc>
.
Basically, now you can pretty much feed anything to the xmobar through the pipe and it will display it without hesitation. Just write something to the pipe:
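echo "hello xmobar" > /tmp/.volume-pipe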
In order to display the volume information we will create a small script which can be used to increase/decrease/mute/show the volume levels. After each operation it will send the output to the named pipe. I already have a script I use for this purpose, but let's write something simpler.
The key point here is to send the output of each operation to the pipe (the tee redirection at the end of the sketch below); a simple redirection will do the trick. We use tee to send the volume output both to stdout and to the named pipe.
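A rough sketch (the mixer control and step size are assumptions; adjust to taste):
#!/bin/bash
PIPE=/tmp/.volume-pipe

case "$1" in
  up)   amixer -q set Master 5%+ ;;
  down) amixer -q set Master 5%- ;;
  mute) amixer -q set Master toggle ;;
esac

# print the current volume and also send it to the named pipe
amixer get Master | grep -o '[0-9]*%' | head -1 | tee "$PIPE"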
The only problem is that when xmobar first starts, it does not have any data to read from the pipe, so it displays the annoying "Updating…" message. In order to fix that, we can run our script once after we create the pipe inside the .xinitrc.
To demonstrate this behaviour, we assume that we are working on a project
and at some point, we branched off to a dev branch to work on some exciting
new feature. As we are working we find some other important issues, or maybe
some minor issues that are not relevant to the new feature and decide to fix
them too. Now, there are two approaches we can take in such a situation. The first is to commit to our current branch and then use git cherry-pick to introduce these changes to the master (or some other) branch. The second approach is to stash any unsaved changes, create another branch and commit these changes there. I personally prefer the second approach, as it makes your life way easier if you are contributing to a project and need to make a pull request on Github or create a patch.
For our example, we will take the first approach and commit anyway.
So, after some work on the dev branch, the tree will look like this:
I have used silly commit messages like “cherry-pick this commit” to indicate which commits we will cherry-pick into the master branch. Let’s do that now:
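Something like the following, where the SHAs are placeholders for the actual commits shown by git log on the dev branch:
git checkout master
git cherry-pick <sha-of-first-commit>
git cherry-pick <sha-of-second-commit>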
If we use git log on master, we will notice that we have our new commits, but with different SHA-1 hashes. This happens because when we cherry-pick commits, git creates new commits on the master branch which introduce the changes of the picked commits. At this point, the state of the repo is the following:
At some point, we are happy with the work on the dev branch and decide to merge dev into master. However, the master branch has new commits since the time we initially branched off to work on our features. This means that we cannot use a fast-forward merge.
If we switch to the master branch and run git merge dev, then a new merge commit will be introduced. This is the standard behaviour of git when doing non-fast-forward merges. Sometimes this may be desirable, but other times we prefer a linear history. This will be the repository state after the merge:
And a git log --oneline
will look like this:
Notice that apart from the (ugly) new merge commit on top, we also ended up with a duplicate commit for each cherry-picked commit. Also, if you care enough to use the git show command, you will notice that these duplicate commits contain the same changes.
This is where the rebase command comes into play. Rebase can be used to rewrite the git history. It allows us to extract the changes of one or more commits and re-apply them on top of a branch.
We will use it to introduce the new commits of the master branch to our dev branch and then replay all the work of the dev branch on top of these commits. In our use case, the "new" commits of the master branch are the cherry-picked changes. When rebase starts to re-apply our work, it is smart enough not to apply the same changes a second time, thus removing the duplicate commits. Running:
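# from the dev branch
git checkout dev
git rebase master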
will bring our repository into the following state:
As you can see, the cherry-picked commits look like they never existed on the dev branch. Also, now we can use a fast-forward merge:
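git checkout master
git merge dev    # fast-forwards, no merge commit is created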
You can find more information about rebasing and cherry-picking in the git book and the relevant man pages.
Installation is very simple: in the same virtual environment as your sphinx documentation, issue pip install sphinx-serve. This will make the sphinx-serve command available.
When you issue the above command, the script tries to detect the build folder of the documentation. It searches the current working directory and also navigates backwards up the directory tree in case you are in a deeper level (much like the bundler utility). When it finds the build folder (the folder name is configurable through the --build argument), it spawns a simple HTTP server. It serves files from the html folder (the default) or from the singlehtml folder if --single is supplied.
It is handy to add the sphinx-serve command with any required arguments to the sphinx Makefile:
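A sketch of such a target (remember that make requires a tab before the command):
preview:
	sphinx-serve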
Now you can easily preview your documentation by issuing make preview.
You can download the image using a direct download link or torrent. The latest version
available (at this time) is the archlinux-hf-2013-02-11
. After downloading the zip archive containing the
image verify the SHA-1 checksum. First download the archlinux-hf-2013-02-11.zip.sha1
file
in the same path as the zip archive and then run:
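sha1sum -c archlinux-hf-2013-02-11.zip.sha1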
The installation process is pretty straightforward. Insert the SD card in your computer and transfer the image to the card using the dd command as root. As always, pay attention to the supplied destination device: make sure you use the one corresponding to the SD card, or you could trash all your data on other devices. Also make sure that the destination device is not mounted. You can use the lsblk command to verify that. We will just use sdX as an example.
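# a sketch; the .img filename inside the zip is an assumption, and this overwrites /dev/sdX entirely
unzip archlinux-hf-2013-02-11.zip
dd if=archlinux-hf-2013-02-11.img of=/dev/sdX bs=1M
sync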
After the image transfer is complete two partitions are created. The sdX1
partition will be mounted
at /boot
and the filesystem is VFAT
. The other partition, sdX2
holds the root filesystem and is
formatted as ext4
.
The root filesystem is a little smaller than 2GB. If the SD card is bigger (it should be), we need to expand this partition to fill the remaining space. Alternatively, we could create a second filesystem and mount it at a mount point of our choice. The creation of a new filesystem is really simple, so it is not covered here.
In order to utilize the remaining free space, first we expand the partition and then we expand the filesystem.
As always make sure that the partitions are not mounted and that you are altering the proper ones. The following
examples assume that the SD card is the device /dev/sdb
and the root partition the /dev/sdb2
.
Start fdisk
. The p
command lists the partitions in the device /dev/sdb
.
Then, we delete the second partition (the root):
and re-create it. We create the new partition as primary and we set the last sector at the end of the SD card. The default settings should be fine here.
The partition type is already set to Linux (83), so there is no need to alter it. Finally, save the changes:
In order to expand the filesystem, first we run the checkdisk program to detect and fix inconsistencies and then use the resize program.
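With the standard ext4 tools this looks roughly like:
e2fsck -f /dev/sdb2
resize2fs /dev/sdb2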
The Arch Linux ARM image has the ssh daemon enabled by default. In order to login we must first determine which IP the rpi has acquired from the local network. This assumes that there is a working DHCP server on the LAN. One way to get the IP address is to check the DHCP logs, or the router logs if DHCP is running from a router. As an alternative, we can run a ping scan on the LAN using nmap. Assuming that the LAN subnet is 192.168.0.0 with mask 255.255.255.0 run:
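# older nmap releases use -sP for the ping scan
nmap -sn 192.168.0.0/24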
After finding the IP, log in using root as both the username and the password. After you log in, you should change the root password using passwd. Also, it is a good idea to re-generate the rsa and dsa ssh host keys.
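A sketch, regenerating the host keys in place:
ssh-keygen -t rsa -N "" -f /etc/ssh/ssh_host_rsa_key
ssh-keygen -t dsa -N "" -f /etc/ssh/ssh_host_dsa_key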
Answer "yes" at the prompt to overwrite. To run a full system upgrade run:
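pacman -Syu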
Depending on the usage you are planning for the rpi, it might be a good idea to adjust the RAM split between the CPU and the GPU. Edit the /boot/config.txt file and change the value of the gpu_mem_256 or gpu_mem_512 variable, depending on the rpi model you have. This post shows the valid memory values. Note that you must also take into consideration the cma_lwm and cma_hwm variables. These variables allow dynamic memory management at runtime. Make sure that the gpu_mem_256 (or 512) value is higher than the high water mark cma_hwm. More info at the links at the bottom of this post.
With initscripts, commands that should run once at boot would typically go into the rc.local file. Also, initscripts provide a handy helper which creates a service file that runs these commands on boot when using systemd. If you want to completely move away from initscripts to a pure systemd-based system, you need to create service file(s) for these commands.
The following is an example service file that adjusts the HDD spindown levels.
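A sketch of such a unit (the unit name, device and spindown value are assumptions; adjust them for your setup):
# /etc/systemd/system/hdd-spindown.service
[Unit]
Description=Adjust HDD spindown levels

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/bin/hdparm -S 120 /dev/sda

[Install]
WantedBy=multi-user.target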
The most notable options here are the oneshot
service type and the RemainAfterExit
option.
The oneshot
service type is used to indicate that the command should exit before starting any
follow-up units. We also set the RemainAfterExit
option to true
to indicate that this service
should be considered active after it exits. Finally, keep in mind that you need the full command
path in ExecStart
.
Jekyll's pretty permalinks drop the .html suffix from URLs. You can enable them by adding permalink: pretty to your _config.yml. This of course broke Disqus comments on all existing pages.
Hopefully, this can be easily fixed using the cool migration tools Disqus provides. The concept is to write a text file which maps the old URLs to the new ones. The file format should be like the following snippet.
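(the domain and path here are just an example)
http://example.com/2013/01/some-post.html, http://example.com/2013/01/some-post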
Disqus provides a CSV file which contains all your site’s URLs. To download this file, navigate
to your site’s admin panel and start the URL mapper tool under tools → migrate threads →
start URL mapper. Then download the CSV by clicking the download CSV
link.
First we need to get rid of the silly windows CRLF line endings from the CSV file. A little perl magic can help here.
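A one-liner does the trick (the CSV filename is a placeholder for the one you downloaded):
perl -pi -e 's/\r//g' disqus-urls.csv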
Then, we need to format the file accordingly. The new URL is the old one without the .html
suffix.
Again using perl, we can cook a simple script to automate the process.
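A rough sketch; as mentioned below, the input and output filenames are hardcoded:
open(my $in,  '<', 'disqus-urls.csv')     or die $!;
open(my $out, '>', 'disqus-urls-new.csv') or die $!;
while (my $url = <$in>) {
    chomp $url;
    (my $new_url = $url) =~ s/\.html$//;
    print $out "$url, $new_url\n";
}
close($in);
close($out);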
I didn’t bother to allow passing the filename as a script argument, so I just ‘hardcoded’ the filenames in the script. Edit at will and don’t forget to run it from the proper path.
Finally, upload the new CSV file to the URL migration tool and you will have your old comments back.
Some time ago, I wrote a simple function that makes a request to a page which returns your public IP address. That way you don't have to actually visit that page and interrupt your flow of work. I find this extremely useful as I frequently use VPNs and SOCKS proxies.
The js code is the following:
I used a very simple service I wrote, which is currently deployed at heroku, ipz. You can use any of the popular services like icanhazip or the checkip service from dyndns.
In order for this to work, you need to put the function in your ~/.pentadactylrc
enclosed
in a here-document block:
I also set a command to call this function by typing :ip; the command is:
In order for this to work in vimperator, we need some adjustments. Put the following code in your
~/.vimperatorrc
.
Again don’t forget the here-document block and the command, which are a bit different.
]]>To accomplish that in CakePHP you need two entries of the same field in the order array. Assume our model is House (which obviously describes some houses) and we want to sort against the field geo_distance which contains the distance between a particular house and some other place. As this is an optional field, it might contain NULL values, but we want to promote the houses which contain distance information in the search results.
Note that you need the model name for the second array entry in order to trick CakePHP into taking it into account.
The same values (date, title) are contained in the YAML block inside the file, but a bit more formatted. A Bash script can take care of the formatting for us, with only the post title supplied as the script argument. This assumes that the date (and optionally the time) is set to the current time we run the bash script.
First we need to grab the date and optionally the time.
_date will be used for the filename and _datetime for the date variable.
We also need to replace whitespace with dashes in the post title, for use in the filename.
The title is supplied as an argument, so we might want to bail if the user does not supply one.
Also, another safety check would be to verify that the file does not already exist, to prevent accidental overwrites.
To do that we need the absolute path of our post file. Putting all these together we have:
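#!/bin/bash
# a rough sketch; the _posts path is an assumption, adjust it to your setup
_date=$(date +%Y-%m-%d)
_datetime=$(date "+%Y-%m-%d %H:%M")

if [ -z "$1" ]; then
  echo "usage: $(basename "$0") \"post title\"" >&2
  exit 1
fi

_title="$1"
_slug=$(echo "$_title" | tr ' ' '-')
_post="$HOME/blog/_posts/${_date}-${_slug}.markdown"

if [ -e "$_post" ]; then
  echo "$_post already exists" >&2
  exit 1
fi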
Finally, the magic code that creates our post file uses here documents
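# unquoted EOF so the variables are substituted; >| overrides the noclobber option
cat >| "$_post" <<EOF
---
layout: post
title: "$_title"
date: $_datetime
---
EOF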
Make sure to avoid quoting the EOF, as bash variable substitution won't work otherwise. Also, note that I used the >| notation because I have the bash noclobber option set.
The jekyll-post script is on github; I also added some code to prompt the user about launching a text editor.
First install using weeget:
Weeget will also load the script automatically for you.
Finally, set its position at the bottom:
Say you have a Unix timestamp and you want to compare it with a datetime object. Convert the datetime object using the time module:
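A rough sketch:
import time
from datetime import datetime

dt = datetime(2013, 5, 1, 12, 30)              # an example datetime object
dt_as_timestamp = time.mktime(dt.timetuple())  # seconds since the epoch, comparable to a timestamp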
Now compare and rejoice! Also, if you want to display timestamps in a human-friendly format, use time.ctime.
I took this script and polished it a bit. It now features:
You can get the script from here: alsavol.
You can get the greek layout code with:
and the keyboard variants with:
so this is how my (correct) keyboard settings look:
(The lv3 multikey option is used to type ligatures see: http://shtrom.ssji.net/skb/xorg-ligatures.html)
Launch weechat from console with the weechat-curses command.
One of the most important commands is the /help command, so when in doubt use /help. The command /set config.section.option will print the value of config.section.option, and if you want to set a value for an option you type /set config.section.option value. Anyway, let's start by building our configuration to connect to freenode.
Start by printing all irc.server values.
you will get the response:
As you can see freenode is already in the configuration by default (irc.server.freenode.addresses = “chat.freenode.net/6667”). If you need to add another server use e.g.:
Now, all we need to do is set the rest of the values for the freenode server. To get help for a particular option -including the list of expected values- type e.g.:
So, we want to connect to freenode by default and reconnect automatically in case of connection problems:
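/set irc.server.freenode.autoconnect on
/set irc.server.freenode.autoreconnect on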
Next we need to automatically join some channels, for multiple channels use commas as separators without spaces:
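/set irc.server.freenode.autojoin "#archlinux,#python"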
After that we need to set our nicknames (also comma separated), username and realname (these are optional):
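/set irc.server.freenode.nicks "mynick,mynick_,mynick__"
/set irc.server.freenode.username "myusername"
/set irc.server.freenode.realname "My Real Name"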
Last but not least set the irc command to identify with the freenode server with :
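/set irc.server.freenode.command "/msg NickServ identify mypassword"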
Save your configuration with the /save command and restart weechat (/EXIT exits weechat). Now you should be connected to freenode!
Time for some keybindings!!
If you are running terminator and have problems with F11 interpreted as “go to full screen” add this to your ~/.config/terminator/config :
If you have the same problem with xfce Terminal go to Edit -> Preferences -> Shortcuts and kill the nasty fullscreen shortcut!
Lastly the cool stuff! Weechat allows you to split the screen and have multiple channels open at the same time. Use the /window splitv to vertically split the screen and /window splith to horizontally split the screen. You can also provide a percentage to unevenly split the screen and use /window merge all to unify all buffers to default. When you are happy with your window layout save it:
To automatically save window layout on exit use :
When in split mode use F7 / F8 to cycle through the active windows-buffers.
Add the terminate:ctrl_alt_bksp XKB option to your Keyboard InputClass to enable Ctrl+Alt+Backspace to kill your X server.
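A sketch of the relevant xorg.conf snippet (the identifier is arbitrary):
Section "InputClass"
    Identifier "keyboard defaults"
    MatchIsKeyboard "on"
    Option "XkbOptions" "terminate:ctrl_alt_bksp"
EndSection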
One trick you should not use, though, is to make a libjpeg6 link pointing to libjpeg7. This could possibly mess up other applications on your system.
Install libjpeg6 from AUR.
If you don't want to install libjpeg6 system-wide, there is a more elegant way to fix it:
Grab eagle from AUR.
Update: as of the 5.6.0-2 version in AUR, you do not need to do the above steps, as the PKGBUILD will take care of everything for you.