Downloading torrent files with aria2

aria2 is a nifty command-line download manager, similar to, but much more powerful than, wget.

This post is not a review of aria2 (maybe I'll do one some day); rather, it focuses on one of its features that has bugged me for a while.

aria2 can — among other things — be used to download torrents. You just pass a torrent file, or a URL pointing to one, to aria2c (the actual name of the binary), and it starts downloading the torrent's contents.
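For instance, either of the following (the file name and URL are made up for illustration) makes aria2 start downloading the torrent's contents, not the .torrent file itself:

```shell
# A local .torrent file: aria2 reads it and downloads the contents
aria2c ubuntu.torrent

# A URL to a .torrent file: aria2 fetches it, then downloads the contents
aria2c http://example.com/ubuntu.torrent
```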

However, I use rtorrent for my torrenting and only want to download the .torrent file itself with aria2. Since aria2 simply starts downloading the torrent contents instead of saving the file, I was forced to fall back to wget.

Even after searching all over the internet, I couldn't find a way to do so with aria2.

Continue Reading...

Viewing and Dealing with Hard Tabs in Vim

While running cat on some files, I noticed that some of the text wasn't aligned the way it appears in Vim.

After thinking for a while, I realized that it was due to the <Tab> characters present in those files. I used to use hard tabs instead of spaces (also called soft tabs) a long time back. Later, I started using soft tabs by putting set expandtab in my ~/.vimrc.
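For reference, here's a minimal ~/.vimrc sketch for soft tabs; the widths are my assumption, adjust to taste:

```vim
" Insert spaces instead of a hard <Tab>
set expandtab
" Assumed widths: how wide a tab displays and how many spaces it expands to
set tabstop=4
set shiftwidth=4
set softtabstop=4
```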

The problematic files contained a mixture of both hard, as well as soft tabs, resulting in misalignment of text in some places.

List the hard tabs

To fix this, i.e., remove the hard tabs, the first step is to list them. This can be done using:

:set list

on the Vim command line.
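By default, :set list shows hard tabs as ^I and line endings as $. The markers can be customized through the 'listchars' option; for example, in your ~/.vimrc:

```vim
" Show hard tabs as '>---' and line endings as '$'
set list
set listchars=tab:>-,eol:$
```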

Continue Reading...

Fix for sudo's NOPASSWD directive not working in Arch Linux

For the past few days, I was having trouble with sudo on my Arch Linux machine.

The problem was that sudo was asking me to enter my password even for the commands I was using with the NOPASSWD directive.
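For reference, a NOPASSWD entry in /etc/sudoers looks something like this (the username and command here are placeholders):

```
# Allow myuser to run 'pacman -Syu' without entering a password
myuser ALL=(ALL) NOPASSWD: /usr/bin/pacman -Syu
```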

Searching the Arch forums, bug tracker, mailing list etc., yielded no hint about this problem.

Then I checked whether the sudo or pam packages had been updated recently. Nope; the problem started occurring well after they were updated.

Scratching my head, I fired up:

# visudo

as well as:

# man sudoers

and started verifying the entries in my /etc/sudoers file line-by-line. I couldn't find any problem with any of the entries and no sudo options had been changed upstream.
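Incidentally, visudo can also check the file for syntax errors without opening an editor:

```shell
# Parse /etc/sudoers and report any syntax errors
visudo -c
```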

Continue Reading...

Interesting Links - May 2013, Batch 2

Here's the second batch of Interesting Links for the month of May.

Things concerning Linux:

  • Tails 0.18 released with more persistence related additions.
  • Debian GNU/Hurd 2013 released. Did you know that the kernel is named GNU Mach while Hurd refers to a collection of servers that run atop the Mach kernel? I used to think that Hurd was the kernel ...
  • Ubuntu planning a new package format, initially for the Ubuntu Phone and Tablet.
  • Tired of long freeze periods for Debian? Who isn't? Well, these guys have a proposal to fix that.
  • Here's why I feel that systemd is overkill.

Internet and Browsers:

  • Firefox 21 released with more useless additions to Do Not Track (DNT). An interesting addition is the Firefox Health Report feature. Hopefully, its constant data collection won't have a negative effect on Firefox's performance.

Continue Reading...

Loading updated nvidia module when X is started through inittab

For a while, I have been using /etc/inittab to start X. I have enabled autologin on one getty, and X is started from it automatically through ~/.bash_profile.
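A sketch of such a setup; the tty, username, and runlevels below are assumptions:

```text
# /etc/inittab: respawn a getty with autologin on tty1
c1:2345:respawn:/sbin/agetty --autologin myuser --noclear tty1 linux

# ~/.bash_profile: start X automatically on that tty
if [[ -z $DISPLAY && $(tty) == /dev/tty1 ]]; then
    exec startx
fi
```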

But starting X through inittab has a limitation: if you try to kill X, it keeps respawning! It is a known limitation, but I had never needed to kill X, so I never ran into this problem until today.

What happened was that I tried playing a video using mplayer2, but it failed to play the file. Its output included the following:

API mismatch: the client has the version 319.23, but this kernel module has the version 319.17. Please make sure that this kernel module and all NVIDIA driver components have the same version.

This was because I had updated my Arch Linux system a few hours earlier but had not restarted it. It turns out that the nvidia package had been updated too.
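Once X (and anything else using the GPU) is stopped, the stale kernel module can usually be swapped for the new one without a full reboot. A sketch, run as root:

```shell
# Unload the old nvidia module, then load the freshly installed one
modprobe -r nvidia
modprobe nvidia
```

Of course, with X started from inittab, actually stopping X is the tricky part, given the respawning described above.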

Continue Reading...

Excluding yourself from your Piwik Logs

What is Piwik?

Piwik is web analytics software similar to the omnipresent Google Analytics. I chose Piwik over Google Analytics as I wanted a self-hosted solution instead of yet another third-party service. As a bonus, Piwik is licensed under the GPLv3. It is always heartening to see high-quality free software.

So, while Piwik works great for its intended task of providing analytics, I found that it was tracking my own visits to my website too. This is not good, as it pollutes the actually useful data, such as how many people visit the website, how much time visitors spend on it, etc.

Excluding yourself from being tracked

Thankfully, Piwik provides multiple ways to exclude yourself from being tracked.

First log in to your Piwik account and go to Settings -> Websites. There are two options available on this page.

  • Global list of Excluded IPs

    The first option is to add your IP to the "Global list of Excluded IPs". Unfortunately, this only works with static IPs, and most home users, including me, are assigned dynamic IPs these days. So, this option was ruled out.

Continue Reading...

Preventing reStructuredText sources from being Indexed by Search Engines

While doing a search for my website on Google, I discovered that Google had indexed the reStructuredText (reST) sources for my posts as well.

What is reStructuredText?

reStructuredText is a markup format used to write text in a relatively simple format, which can then be converted to other formats such as HTML, LaTeX, PDF etc. It uses a filename extension of .rst. As you can see from the footer of my blog, I use Nikola for generating this blog. I write the posts in reStructuredText, and Nikola converts them to HTML. Pretty neat, huh?

Back on topic: the issue is that Google is indexing those .rst files as well, which pollutes the search results. I am pretty sure people won't be searching for the reST sources of the content posted here.

So, I started looking for ways to prevent web crawlers from indexing the .rst files. I found that it can be done by adding the following to your site's robots.txt file:

User-agent: *
Disallow: /*.rst$
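Note that wildcards and the $ anchor in robots.txt are a de-facto extension honored by major crawlers such as Googlebot, not part of the original robots.txt standard. A crawler that ignores them can instead be told not to index these files via an X-Robots-Tag response header; here's a sketch for Apache, assuming mod_headers is enabled:

```text
# Send "X-Robots-Tag: noindex" for every .rst file served
<FilesMatch "\.rst$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>
```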

Continue Reading...