Well, it’s been a while since my last post and I think it’s about time I started posting some new stuff here. During all these months I spent my free time on various things. As far as tech is concerned, I mostly spent time on Linux servers and a little bit on software development. Digging into the Linux Desktop is no longer a top priority for me, since I only use it on secondary desktop boxes or virtual machines, so there isn’t much to write about it. So, I’ll be getting my notes together and I’ll try to post some cool and interesting guides in the coming months.
Back to blogging, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
There has been much controversy about Copyright and file-sharing on the internet during the last decade. Admittedly, the All Rights Reserved statement is incompatible with the nature of communications in the Digital Age, and thus a new, more flexible content and media licensing scheme is required. A permission system would be the natural solution to the All-Rights-Reserved problem (it could even be extended to Patents, but this is outside the scope of this post). Such a permission system is implemented by Free Software licenses, Creative Commons licenses and others. Although it’s not perfect, it does provide an acceptable and realistic solution for content, media and software publishing in the Digital Era.
Surprisingly though, even though such a permission system has not yet been fully adopted by the industry, there are several activist groups worldwide which go several steps further, suggesting that the current copyright and patent systems should be thrown into the trash bin without second thought. OK. But what is the alternative plan, and how can it keep us humans motivated to advance our civilization? I’m afraid a proper alternative plan does not currently exist. It’s all about thoughts and beliefs combined with highly subjective estimates.
I generally try to keep an open mind to any suggested change no matter how radical it might be, but in this particular case I think it is quite moronic to believe that a transition from point A (copyright) to point C (no copyright), without at least trying to go through B (a permission system), is possible without severe negative effects on technological progress. Such a transition does not make sense. Consequently, fighting for such a transition does not make sense either.
And when activism does not make sense in terms of serving a greater good, then the only explanation I can give is that such activism is driven by each individual activist’s own benefit. This might be just the fun involved in being different (IMHO the vast majority of activists), public relations driven activism, promotion of services etc. The kind of activism which does not serve the greater good has so many faces.
Nevertheless, the problem still remains. The industry has missed the train of the digital world. So, what can we do about that? In my opinion, the very first thing that needs to be done is that both the industry and all those activist and especially hacktivist groups stop acting like morons. Conflict is not the right solution as it leads with mathematical precision to the end of the internet as we know it today. On the other hand, a permission system seems like a good possible solution for most of the existing problems. Let’s stop being part of the problem. That’s a good start.
Some thoughts about Copyright related activism, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Software has become part of our lives. Businesses, homes and even individuals rely more and more on software to meet their goals and serve their needs. Recently, I
tried to start a discussion with people who are active in the FLOSS ecosystem about whether and how the development process of free software could be improved in order to increase its quality and efficiency. As usual, the conservative minds within the community did not let the discussion get far. This was neither the first time nor the first place I had tried to start a discussion like that. Nevertheless, the outcome has always been the same.
A few minutes ago I ran across a news item about the ReactOS community seeking to raise funds in order to hire developers who will work full-time on the project. The community realizes the fact that dedication to the project is the determinant factor in the process of creating a high quality product. In their announcement we read:
This year we want to do something different, something even grander. ReactOS is quite close to transitioning to beta testing and we are constantly improving the development process itself. However for many core developers ReactOS remains a hobby in which they participate in their spare time as all have other real life obligations to meet. All of the developers are extremely skilled and every contribution they make helps significantly improve ReactOS’ quality.
For the first time ever, the ReactOS Foundation seeks to go beyond the usual small fundraising campaigns aimed at paying infrastructure expenses. We wish to raise money to formally hire as many core developers as possible, to work on the project they believe in, the project they’ve been working on, to transform a hobby into a job so they can dedicate all of their time to the ReactOS project.
In light of the significant advances the project enjoyed thanks to work done as part of Google’s Summer of Code 2011, it became even more obvious that the fastest way to accelerate the development of ReactOS is by directly funding developers to contribute to ReactOS. As such, the project is reaching out to our many fans and believers to help make this happen. Together, we can make ReactOS into a true competitor and alternative for computer users worldwide.
I could not agree more! In my opinion, paid development is a key step in the evolution of the development of free software. The benefits are pretty obvious:
- The current development model mainly involves people working on FLOSS projects in their spare time. As a result, it has become customary to release incomplete and buggy “stable” releases for us to debug. In contrast, there are several companies which pay developers to work on FLOSS projects or invest their own capital in the development of a FLOSS project. Comparing the quality of the two kinds of projects, it is quite obvious that paid development results in a higher quality product. Money alone cannot produce quality. But money can greatly help create the right environment for the right people to produce a high quality product.
- Usually, a Do-It-Yourself mentality reigns in the FLOSS ecosystem. No matter how important the “freedom to customize” is, it is also extremely difficult for people who are not software engineers, or who are extremely busy with other things, to follow such practices. Donations currently do not work as they should. Contributing money to a FLOSS project should buy nothing more or less than dedication.
- Funding the development of FLOSS projects will preserve stability and help them survive in the long run. The users will have a greater assurance that a project won’t be suddenly abandoned or dramatically change its goals. This is very important, especially if you base your own work upon such a project.
Today, there is an abundance of free software out there. But in several cases quality is below par. Both users and developers can change things. In my humble opinion, “paid development” and micro-donations (in the form of a subscription) is the necessary next step in the evolution of the model of FLOSS development.
The ReactOS fundraising campaign’s goal for 2012 is set at 30000 EUR. This means that if 6000 people donated 5 EUR each, the goal would be met. Quite easy, I guess, for a community as vast as the FLOSS ecosystem. I just donated my 5 EUR. Now, it’s your turn. I’m sure this money won’t be wasted. Even if it is, hell, that’s just 5 euros. But… on the other hand, if this plan works out, it will act as a great example for other open source and free software projects of how they should go ahead. It will also be a great example for us users of how important a micro-donation can be and how much it can change things.
Why ReactOS leads the way with their decision to hire full-time developers, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
There are several ways to improve the performance of a web site. One of them is HTTP compression. Moreover, compressing the web server responses can save tons of bandwidth without adding any significant amount of extra CPU load on the server. Two of the most common compression algorithms used in HTTP are gzip and deflate. An article containing step-by-step instructions on how to configure Apache to compress web server responses using mod_deflate had been published on G-Loaded a long time ago. This post is about the web browser support for the gzip, deflate and raw deflate compression algorithms. The following page contains the results of several compression tests run by various modern and older web browsers. In case you had been looking for such information, that’s a good place to start.
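For reference, a minimal mod_deflate configuration along the lines of the one described in that older article might look like the following. This is a sketch; the MIME types and the BrowserMatch workarounds are the usual examples from the Apache documentation, so adjust them to your own content and clients:

<IfModule mod_deflate.c>
    # Compress common text-based responses
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript
    # Work around some very old browsers that mishandle compressed responses
    BrowserMatch ^Mozilla/4 gzip-only-text/html
    BrowserMatch ^Mozilla/4\.0[678] no-gzip
    BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
</IfModule>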
gzip and deflate compression support by web browsers, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
I tested the website using the default browser of a smartphone and realized that the theme needs improvement to make the content easier to read on mobile devices. If you think such a task is easy, you’re way off! From a quick web search I noticed that there are many things to take into consideration before making any changes. I’m currently gathering information that will help me decide on the best way to serve two versions of the content: one suitable for mobile devices (smartphones and tablets) and one for PCs (desktops, laptops). If you’ve gone through this procedure and care to provide some insight, feel free.
Update: A mobile friendly version of the current theme has been scheduled and will be applied to the web site soon. Stay tuned.
Mobile version of G-Loaded, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
From the time I set up my first server at home over a decade ago, I’ve performed numerous operating system upgrades. It used to take me several hours – if not days – to complete each upgrade and make sure that everything would work as expected. During all these years, I’ve been working hard, whenever time permitted, to make several pieces of software work flawlessly together while requiring the least possible time for manual maintenance. Despite the deployment of my services having reached a high level of automation, I recently spent almost a whole day upgrading CentOS on one of my remote boxes.
According to my initial plan this procedure shouldn’t have taken longer than 2-3 hours. I had simulated it in VirtualBox at home and I knew exactly what to expect. Unfortunately, I didn’t strictly follow the plan, but deviated from it twice, and this almost cost me the whole day.
The first thing that went wrong had to do with testing my backup, a step that was not in my original plan. I keep my server data in encrypted containers on Amazon S3 using duplicity. Although I have restored data from the backup numerous times and I was certain it worked OK, I had this strange idea to test the restoration of the data to a virtual machine at home, just to make sure. For that purpose I happened to use a VM whose state had been saved several days earlier, meaning that its clock was way out of sync. That was a detail I hadn’t taken into account. So, when I tried to restore the data on that box, I got a glorious exception from duplicity informing me that it could not find any signatures in the S3 bucket. That message was really unhelpful, and I wasted many hours trying to figure out what was wrong with my backup or with duplicity, until I finally realized that it was the box’s wrong time that had caused the exception. Once the time was updated, duplicity worked like a charm.
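For what it’s worth, the order of operations I should have followed on the scratch VM boils down to something like this (the bucket URL and target directory are placeholders, not my actual setup, and the AWS credentials are assumed to be exported in the environment):

# Sync the clock first; the skewed system time was what broke the signature lookup
ntpdate pool.ntp.org
# Then restore the latest backup from the S3 bucket into a local directory
duplicity restore s3+http://my-backup-bucket/server1 /mnt/restore-test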
The second thing that went wrong had to do with pvGRUB, which is based on the grub 0.97 code and is used to boot Xen DomUs (guests). Due to some limitations of the VPS provider regarding pvgrub, I have to use a very small partition that contains a GRUB configuration file, which eventually boots CentOS (root-on-LVM setup). This small partition was initially formatted with ext3. Again, I had a strange hunch to reformat that small partition to ext4! This would have absolutely no benefit, but at that moment I had just thought “why not?”. I was completely unaware that grub 0.97, and consequently pvgrub, did not support the ext4 filesystem. To make things even worse, pvgrub deceptively reported that it had recognized the partition as ext2, but could not locate the file I had configured it to load. Disaster. It was a few hours later, after having gone through several bug trackers and mailing lists, that I realized that pvgrub did not actually support reading from ext4. I reformatted the small partition back to ext3 and everything went on smoothly.
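Reverting was the trivial part; roughly the following, with the device name being an example rather than my actual partition, followed by copying the GRUB configuration file and boot images back into it:

# Recreate the small boot partition as ext3 (device name is an example)
mkfs.ext3 /dev/xvda1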
If I had stuck to the original plan, none of the incidents above would have taken place. No matter how much I trust free software, deciding to experiment with it while I should be doing a specific job is admittedly one of the worst decisions possible. Regardless of how popular a piece of free software might be, it can still have serious bugs and limitations hidden in the last place you’d ever look. Lesson learned: stay on your path and strictly follow the plan.
Lessons learned from a recent OS upgrade, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
It’s been a while since WordPress implemented revisions for posts and pages. Being able to revert a post or page to a previous state is a useful feature. However, I recently realized that WP creates a revision of the content every time it is saved, but there is no upper limit for the number of stored revisions. So, if you save your work quite often, it is very possible that the WordPress database is filled with numerous revisions of the content, which make the database grow in size quite aggressively. After thinking about it for a while, I realized that all those revisions are pretty useless to me. All I really care about is having a couple of recent revisions of each post or page available, so as to be able to get an idea of my recent changes.
It was only a matter of minutes to find out that it is possible to limit the number of stored revisions per post in WordPress. Open
wp-config.php in your favorite editor and add the following:
/**
 * Revision management. Store 5 (+1 autosave) revisions per post.
 */
define('WP_POST_REVISIONS', 5);
The above setting forces WordPress to store only the 5 most recent revisions of the content. One extra revision is also saved because of the autosave feature.
The acceptable values for
WP_POST_REVISIONS and their meanings are:
- -1: store every revision (this is the default)
- 0: do not store any revisions except for the one used for the autosave feature
- N (integer) > 0: store N revisions per post. Revisions are rotated so that only the N most recent ones are available.
After getting rid of all those useless revisions that had piled up over the last months, I’m now quite happy with the size of the database. Although the idea of automatic revisions is fine, in my case, on-demand revisions would have been more useful and would make more sense in the way I author these posts.
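If you also need to purge the revisions that have already piled up, one way to do it is a direct query against the database. This is a hedged sketch, not necessarily the method used here; it assumes the default wp_ table prefix, so take a database backup first:

-- Delete all stored post revisions (default wp_ table prefix assumed)
DELETE FROM wp_posts WHERE post_type = 'revision';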
Limit the number of stored revisions per post in WordPress, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
By default, when software is installed through the RPM Package Manager or through YUM, the configuration files included in the RPM do not replace the existing configuration files on the filesystem; if they differ from those that currently exist, they are saved with the .rpmnew extension instead. In case the rpm is already installed and is the latest version, the quickest way to get the original configuration file back is to uninstall and install the package again. Today, while on CentOS 6.2, I needed to restore the original
/etc/sysctl.conf file, which is part of the initscripts package. In this case, uninstalling initscripts was out of the question, as it would also remove half of the installed packages due to dependencies. So, I grabbed the chance to figure out and document the quickest and easiest way to restore
/etc/sysctl.conf, short of downloading the package itself and extracting its contents. Fortunately, as soon as I opened yum’s man page and spotted the new reinstall command, the solution was quite obvious.
For completeness, I hereby document the whole procedure that involves the verification and restoration of the original
/etc/sysctl.conf hoping that new users might find these notes helpful.
First of all, I needed to know whether the
/etc/sysctl.conf I had on my box differed from the original one. But, before doing that, I had to know which RPM package had installed that file. So, I used the rpm command to query this file:
# rpm -qf /etc/sysctl.conf
initscripts-9.03.27-1.el6.centos.1.i686
So, the initscripts package had installed /etc/sysctl.conf.
Then I verified the initscripts package:
# rpm -V initscripts
S.5....T.  c /etc/sysconfig/init
S.5....T.  c /etc/sysctl.conf
According to the following table:
S  file Size differs
M  Mode differs (includes permissions and file type)
5  MD5 sum differs
D  Device major/minor number mismatch
L  readLink(2) path mismatch
U  User ownership differs
G  Group ownership differs
T  mTime differs
P  caPabilities differ
the size, the MD5 checksum and the modification time of the
/etc/sysctl.conf that existed on my system differed from those of the original file.
Since I had no idea what exact changes I had made to that file at some earlier time, I needed to restore the original and re-apply my modifications from scratch. The new yum “reinstall” command could be used to do this quite easily.
First, I kept a copy of the current file:
# mv /etc/sysctl.conf /etc/sysctl.conf.modified
Then I reinstalled initscripts using YUM’s reinstall command:
# yum reinstall initscripts
Loaded plugins: downloadonly, fastestmirror, priorities
Setting up Reinstall Process
Loading mirror speeds from cached hostfile
 * base: ftp.ntua.gr
 * epel: ftp.ntua.gr
 * extras: ftp.ntua.gr
 * ius: mirror.rackspace.co.uk
 * updates: centosr3.centos.org
6 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package initscripts.i686 0:9.03.27-1.el6.centos.1 will be reinstalled
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch      Version                     Repository      Size
================================================================================
Reinstalling:
 initscripts        i686      9.03.27-1.el6.centos.1      updates        934 k

Transaction Summary
================================================================================
Reinstall     1 Package(s)

Total download size: 934 k
Installed size: 5.4 M
Is this ok [y/N]: y
Downloading Packages:
initscripts-9.03.27-1.el6.centos.1.i686.rpm                   | 934 kB   00:02
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : initscripts-9.03.27-1.el6.centos.1.i686                     1/1

Installed:
  initscripts.i686 0:9.03.27-1.el6.centos.1

Complete!
Verify the initscripts package again:
# rpm -V initscripts
S.5....T.  c /etc/sysconfig/init
No verification errors for
/etc/sysctl.conf. Note that reinstalling the package did not touch the
/etc/sysconfig/init file. It has been mentioned previously that rpm packages do not overwrite existing configuration files.
I had the original file back and I could then start customizing it from scratch.
Restore original configuration files from RPM packages, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
I cannot overstate how disappointed I am after having a discussion with people who tend to selectively mix the various declarations of Rights and the Law in order to make a point just valid enough to justify their actions. I am not really the one to tell whether such behavior derives from competence or incompetence. What I do know is that I will never again join any discussion which is not, at the very least, based on common sense. Ever.
The 1st Rule of Discussion, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
It is widely known that, if virtual hosts in Apache (httpd) are configured to permit vhost administrators to override specific configuration options at the directory level using htaccess files, the web server consumes valuable time checking whether an htaccess file exists in every directory included in the requested path and parsing it. On the other hand, many popular web applications utilize htaccess files, especially those residing in the DocumentRoot, in order to implement pretty URLs or HTTP redirections, which is extremely convenient since the virtual host owner does not have to edit httpd’s configuration directly. So, I had the idea of including the htaccess file of the DocumentRoot directory on the filesystem into the virtual host’s configuration.
Suppose we have the
/home/example.org/public_html/ directory on the filesystem, which serves as the document root of our virtualhost. The relevant httpd configuration for that vhost would look like this:
<VirtualHost 22.214.171.124:80>
    ServerName example.org:80
    ...
    DocumentRoot /home/example.org/public_html
    <Directory /home/example.org/public_html>
        AllowOverride All
        ...
    </Directory>
    ...
</VirtualHost>
In order to prevent the htaccess lookups on the filesystem without losing the htaccess functionality (at least at the DocumentRoot level), I transformed the configuration to the following:
<VirtualHost 126.96.36.199:80>
    ServerName example.org:80
    ...
    DocumentRoot /home/example.org/public_html
    <Directory /home/example.org/public_html>
        AllowOverride None
        Include /home/example.org/public_html/.htaccess
        ...
    </Directory>
    ...
</VirtualHost>
Let’s see what we have accomplished with this:
- httpd does not waste any time looking for and parsing htaccess files resulting in faster request processing,
- the virtual host administrator can still override the configuration options of the document root manually or through the web interface of the web application.
Seems like a win-win situation, performance- and functionality-wise.
But, as usual, there is no win-win situation without a downside. In this case, the above trick weakens the server’s security. Let’s see how.
Although the configuration of a directory can be set in both
httpd.conf and the directory’s htaccess file, not all directives can be used in both contexts. htaccess files support a subset of the directives that can be used in the
Directory context within
httpd.conf. By including the htaccess file in httpd’s configuration the vhost admin is no longer restricted to that subset of directives.
This means that by implementing the above configuration the virtual host administrator is granted more privileges regarding the configuration of the virtual host. It also means that a potential attacker who exploited a vulnerability in the web application would be granted the same privileges once he got write access to that htaccess file.
So, although this trick may seem like a good idea at first, it is in fact a rather bad idea and should never be used in production, unless you trust the virtual host administrator and the web application. I do not intend to use such a configuration and I do not recommend it. There are by far better ways to speed up Apache.
Your comments and suggestions are welcome.
Speed up Apache by including htaccess files into httpd.conf, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
I have been using Firefox since it was called Phoenix (version 0.5). I’ve witnessed all the effort that has been put into making this web browser a success. It is still the only web browser I can fully trust. Suddenly, earlier this year, the Mozilla Foundation decided to change the release strategy of the project for obvious marketing reasons and release several major versions within a short period of time. It was inevitable that such a change of release cycles would introduce numerous incompatibility issues with the available extensions. Such problems should have been solved before switching release strategies.
Today I happened to browse the tech section of Digg and stumbled upon this news item about the release of Firefox 7. Some of the comments pretty much summarize my feelings about the new release strategy:
- Wtf I just installed 6….
- I just installed 5.
- Once the version number is up to 50 in a short amount of time, it will become a joke, and future releases will be ignored.
- Firefox is killing themselves is what they’re doing. People use Firefox for the plugins, every new version installation kills all plugins. After I install this, there’s technically no reason for me to use Firefox over Chrome anymore. Why doesn’t Mozilla understand this???
- Pro tip: People aren’t switching from Firefox to Chrome because it’s got a “better” version number, guys.
I would add that it is not necessary to go through all the numbers from 5 to 14 to catch up with Chrome in terms of major version numbers. They could just skip them and go straight to 14!
PS: The Firefox extension of a piece of software I had paid for stopped working after the FF3 -> FF4 upgrade. A workaround was released by the company for FF4, but as soon as FF5 came out it stopped working again. I’ll be straight. I’d rather use Internet Explorer or Chrome or Opera than ask the company for a workaround every time the Mozilla folks roll out a new major release of Firefox.
The new amateuristic release strategy of Firefox, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
The first time I used WordPress, back in September 2005, I considered it to be the content publishing platform of choice as far as a personal or business website was concerned. It was easy to set up and publish content, and also easy to customize, even with ugly hacks. Today I still consider WordPress to be the overall best choice, despite the fact that it still does not excel in any particular sector. Although I am not a pro, I have enough experience to say it’s the engine with the most acceptable trade-off between ease of use, security, features, ease of customization and being a solid base to develop upon. It is also one of those open-source projects that can create business opportunities for software engineers, system administrators, web designers, internet advertisers and marketers. The huge and active community of users and developers has boosted this project over the years. It proves that, if an open-source project is surrounded by a big active community and has good marketing (like WordPress has had all these years), it can fly and create business opportunities in many sectors. During the last days I spent much time trying some of the features that have been implemented in the newer releases of WordPress, as well as various plugins. I must admit that today, using WordPress, it is just a matter of a few hours to put together a high-performance, good-looking and feature-rich small web site from scratch with minimal cost.
WordPress is getting better, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
It’s been a while since the details of an SSL/TLS vulnerability have been released to the public. Since then, security experts have worked on the issue and have released a whitepaper describing how to mitigate the attack, known as BEAST (Browser Exploit Against SSL/TLS).
From the security researchers’ article:
The problem lies in the way that block ciphers are used in SSL/TLS. Block ciphers are generally operated in one of several modes that define how encrypted blocks are manipulated to ensure complete confidentiality. Cipher Block Chaining, or CBC mode, is used in SSL for all block ciphers, including AES and Triple-DES. The BEAST attack relies on a weakness in the way CBC mode is used in SSL and TLS. Non-CBC cipher suites, such as those using the RC4 stream encryption algorithm, are not vulnerable.
There have been several suggested mitigations that can be put into play from the perspective of the client, such as reorganizing the way the data is sent in the encrypted stream. Servers can protect themselves by requiring a non-CBC cipher suite. One such cipher suite is rc4-sha, which is widely supported by clients and servers.
Researchers have concluded that the RC4 (Alleged RC4) based cipher suites are not vulnerable to the BEAST attack, while CBC (Cipher Block Chaining mode) based cipher suites are. This affects both the TLS 1.0 and the SSL 3.0 protocols. In contrast, TLS 1.1 and 1.2 have not been found to be vulnerable, but their use is very limited since they haven’t been adopted by the majority of HTTP clients and servers yet.
So, the use of RC4 based ciphers is all that is left for the moment. The security experts have released a list of cipher suites that is suitable for use in the configuration of the mod_ssl module for httpd:
SSLHonorCipherOrder on
SSLCipherSuite !aNULL:!eNULL:!EXPORT:!DSS:!DES:RC4-SHA:RC4-MD5:ALL
They have also released a one-liner list of ciphers suitable for use in the relevant fields of the Local Group Policy Editor in Windows Server boxes.
However, there is no info about configuring the mod_gnutls module for apache to use RC4 based ciphers, so, as a dedicated user of mod_gnutls, I decided to release this tip. All you have to do is set the preferred ciphers in the GnuTLSPriorities directive. In this example we use the TLS 1.0 protocol:
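Something along the following lines should do it. This is a sketch based on standard GnuTLS priority strings rather than a copy of a tested configuration, so verify it against the mod_gnutls documentation for your version and test it with your clients:

GnuTLSPriorities NONE:+VERS-TLS1.0:+ARCFOUR-128:+RSA:+SHA1:+MD5:+COMP-NULL:+CTYPE-X509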
When visiting a secure web site that has been configured using any of the methods described above, checking the information about the secure connection to that website should show that an RC4-based cipher (for example, RC4 with 128-bit keys) is in use.
This means that everything is working correctly.
As always, comments and suggestions are welcome and appreciated.
Mozilla Thunderbird is one of those pieces of software I could say I am a fan of, but since I upgraded from TB3 to TB5 and recently to TB6, I’ve been experiencing various problems with the application’s overall speed and responsiveness. Using Thunderbird almost felt as if it was reading its data from the internet. Working with it had become an unpleasant experience, until I found some tips about how to make it more responsive. It seems that versions 5 and 6 try to use hardware acceleration to render the application’s user interface and, apparently, this does not work very well with my hardware. Anyway, here is what you have to do in order to restore Thunderbird 6’s responsiveness to that of version 3.
Open TB’s Config Editor (
tools/options/advanced/general-tab/config-editor-button). Search for the following settings and double-click on them to set them to TRUE.
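The preferences involved are the ones that disable hardware acceleration. Assuming the standard Gecko preference names (verify they actually exist in your Config Editor before relying on this), they are:

gfx.direct2d.disabled = true
layers.acceleration.disabled = true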
Also, open the Add-on Manager (
tools/add-ons/plugins) and disable all plugins that do not need to be enabled in your email client. Note that plugins are different from extensions. I disabled them all in my TB.
Finally, restart Thunderbird.
This has worked for me. Thunderbird 6 now feels as responsive as TB3 was.
Today I realized my web site had been serving an empty HTML document for the last 2 days on every HTTP request, no matter what the path was. When I initially noticed the issue, I was a bit worried, but, after taking a closer look at the Apache error log, I found a PHP syntax error causing it. A couple of days ago I had edited a WordPress plugin on my live web site and, apparently, I made a typo which led to a syntax error and an empty document being served on every request. But I recall I had checked the website after modifying the plugin and hadn’t noticed any issues! Actually, that was because I had forgotten to clear the cache, so when I checked the site after the modification, it seemed OK. My modification didn’t change the visual representation of the web site, so, having forgotten to turn off caching, there was no way for me to find out about the typo. Lesson learned: never modify the code of a live web site, but, if you have to do it, always turn off caching before doing so.
More and more I realize that there is a misconception about free software. Many people tend to believe that free software actually means software that should not cost any money. They somehow find it natural and fair that some people work voluntarily to produce software, which others can then use to make money without any legal obligation to contribute either money or effort back upstream.
As I see it, free software should be free from cost for all to use and build upon, BUT using or building upon free software to make a profit should not be cost-free. That’s a straightforward and very fair model. Also, it seems to be the only realistic concept that could drive money back to those who invested their time and effort producing free software. I know that currently there is no free software license that makes a distinction between commercial and non-commercial use and could thus be the solid ground for such a software production ecosystem. But, who knows… maybe we’ll see one in the near future. Such a software license would make a difference in the way we perceive the “doing business with free software” concept that people talk about these days.
For content and media, there are the Creative Commons licenses, some of which make it possible for creators to provide their work for free, while at the same time they still reserve the right to selectively make their work available for commercial purposes under different terms. That’s the beauty of those licenses. They are made to solve real problems and that’s why I highly respect them.
It’s been a long time since the last time I checked the available software for managing long running processes. Software in this particular area has evolved and, after some research and testing on a virtual machine, I tried to install supervisord in a CentOS 5.6 box. Unfortunately, no RPM package exists for the latest 3.X version, so I decided to go through the procedure to rebuild the supervisor RPM from the Fedora development tree, since v3.X has some really cool features I cannot do without.
If you haven’t built an RPM package before, you will need to go through a simple procedure in order to prepare an RPM building environment. It is highly recommended that you build your packages on a separate box dedicated to this kind of work, and that you build as an unprivileged user, not as root. So, let’s get the source RPM of supervisord from a Fedora Rawhide mirror:
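The exact mirror does not matter; assuming the same Rawhide mirror that is used for python-meld3 further down, the download looks like this:

wget ftp://ftp.cc.uoc.gr/mirrors/linux/fedora/linux/development/rawhide/source/SRPMS/supervisor-3.0-0.4.a10.fc16.src.rpm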
The simplest way to build an RPM for our CentOS release is to use the
--rebuild option of rpmbuild:
rpmbuild --rebuild supervisor-3.0-0.4.a10.fc16.src.rpm
Wait a few seconds for it to finish and then your binary RPM package will be in the RPMS directory of your RPM build tree (typically ~/rpmbuild/RPMS/<arch>/).
To satisfy dependencies, you will also need the package python-meld3. So, download and rebuild it:
wget ftp://ftp.cc.uoc.gr/mirrors/linux/fedora/linux/development/rawhide/source/SRPMS/python-meld3-0.6.7-4.fc16.src.rpm
rpmbuild --rebuild python-meld3-0.6.7-4.fc16.src.rpm
Having built RPM packages for supervisor and python-meld3, you can now transfer them to a testing box and install them:
yum localinstall /path/to/supervisor.rpm /path/to/python-meld3
yum install python-elementtree
We also installed elementtree as this is a dependency missing from the supervisor spec file (note to self: file a bug report).
Finally, try to start supervisord:
Surprisingly, it complains about the missing elementtree!
Starting supervisord: Traceback (most recent call last):
  File "/usr/bin/supervisord", line 5, in ?
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2479, in ?
    working_set.require(__requires__)
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 585, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 483, in resolve
    raise DistributionNotFound(req)  # XXX put more info here
pkg_resources.DistributionNotFound: elementtree
Checking the requirements file at:
/usr/lib/python2.4/site-packages/supervisor-3.0a10-py2.4.egg-info/requires.txt there are no special version requirements:
meld3 >= 0.6.5
elementtree

[iterparse]
cElementTree >= 1.0.2
After some investigation on the supervisor-users mailing list, I found a message that indicated that setuptools.setup() should be used instead of distutils.core.setup() in the setup.py script.
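For reference, the change the mailing-list thread pointed at would look roughly like the following in setup.py. This is just the usual setuptools-fallback idiom with hypothetical package metadata, not supervisor’s actual code:

# Sketch only: prefer setuptools' setup() so that the install_requires metadata
# is recorded in a form that pkg_resources can resolve at runtime.
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

setup(
    name='example-package',          # hypothetical metadata
    version='1.0',
    install_requires=['meld3 >= 0.6.5', 'elementtree'],
)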
So, the dependency resolution depends on the way the package has been installed! I didn’t even bother to patch setup.py… Knowing that the dependencies are correctly installed, I commented out the
elementtree line and also the
meld3 line as this caused the same error.
Now the /usr/lib/python2.4/site-packages/supervisor-3.0a10-py2.4.egg-info/requires.txt file looks like this:
#meld3 >= 0.6.5
#elementtree

[iterparse]
cElementTree >= 1.0.2
Now supervisor starts without errors:
# /etc/init.d/supervisord start
Starting supervisord:                                      [  OK  ]
# cat /var/log/supervisor/supervisord.log
2011-05-12 02:56:08,221 CRIT Supervisor running as root (no user in config file)
2011-05-12 02:56:08,267 INFO RPC interface 'supervisor' initialized
2011-05-12 02:56:08,267 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2011-05-12 02:56:08,271 INFO daemonizing the supervisord process
2011-05-12 02:56:08,272 INFO supervisord started with pid 21337
This is what it takes to run supervisord 3 in CentOS 5 or RHEL 5. If you have any comments and suggestions, please let me know in the comments below.
The fact that the developers of two projects with common goals decided to join forces is news in the Free Software world. What we usually hear are announcements of new forks. I was very glad to read that the developers of ntfs-3g and ntfsprogs, two open source projects providing tools to access and maintain the NTFS filesystem, decided to combine their efforts and have made their first merged release. The merge has many benefits: no duplicate effort, easier code maintenance, a more focused and coordinated effort and, probably the most important one, a common and more confident user base. I hope more projects follow their example.
I just discovered the Read-It-Later addon for the Firefox browser. This is one of the most fantastic plugins I’ve seen in a while. From what I see, there have been about 4 million downloads already. This means I am too late, but as they say, “better late than never”! This extension makes it possible to maintain a queue of unread content locally or remotely. It is also possible to save the text locally and read it at a later time even if you are offline. Really cool.
Here is a list of the features:
- Save pages to a reading list to read when you have time.
- Offline reading mode lets you read the items you’ve saved for later on the plane, train, or anywhere without an internet connection.
- Sync your list to all of your computers, at work or home.
- Sync your list to Read It Later apps for iPhone, iPod, iPad, Android and more.
- After reading, bookmark pages on your preferred bookmarking service or share them with friends.
- Click to Save Mode lets you quickly batch a reading list just by clicking on interesting links.
- Text view strips away images, ads, and layout from articles and presents them in an easy to consume way.
Until today, I kept content that I wished to read later in the browser’s bookmark system, left yet another tab open, or stored the URL in a text file. It seems this plugin will provide a decent solution and eliminate the trouble. I haven’t used the remote service yet and I am not sure if I ever will. The local “read queue” works really well.
I recently read that the Free Software Foundation has given the Award for Projects of Social Benefit to the TOR Project. Congratulations! There are indeed cases in which the TOR network can be extremely useful to society. On the other hand, the fact that an organization like the FSF gives this award to the TOR project, combined with statements like “People like you and your family use Tor to protect themselves, their children, and their dignity while using the Internet” that can be found throughout the TOR project website, may lead the typical internet user into thinking that the TOR network, apart from providing anonymity, is also a secure way of communication, which is far from the truth. I don’t claim to be a network security expert or an authority on the TOR network, but I don’t think any expertise is required in order to state the obvious.
At this point, it is useful to roughly describe how TOR works. The TOR network consists of TOR clients, relays and exit nodes. A client connects to the network, which initiates the creation of a tunnel that starts at the user’s location and ends, after following a random route through the relays, at a random exit node. The user configures other software, like web browsers or instant messengers, to connect to the remote service through this tunnel. Once the request exits the tunnel at the exit node, it goes through the network of the ISP that provides internet access to the TOR exit node and finally reaches the remote service. The response from the remote service follows the inverse route to get back to the user’s software. This way, the user’s ISP has absolutely no idea which services the user communicates with, since all user traffic goes through the TOR network and the network of a 3rd party ISP.
So, TOR can provide anonymity as far as the user’s ISP is concerned, but is it a secure way to communicate with remote services? If no extra encryption is used, then it is quite obvious that using remote services through the TOR network is totally insecure. Here is why.
The TOR exit-node is a key point in the communication between the user and the remote service. This is where the user’s data exits the TOR tunnel and continues its way to the remote service through the 3rd party ISP’s network. It is also the place where data from the remote service leaves the 3rd party ISP’s network and enters the TOR tunnel in order to reach the end user. If no encryption is used, it is possible for the exit-node operator to sniff this network traffic. This means that it is technically possible for an evil exit-node operator to:
- know which web pages the user visits
- read the messages the user exchanges through unencrypted IM networks
- read the emails the user sends
- if the user authenticates to any services without encryption, the evil exit-node operator could for example find out his mailbox or FTP account password or the passwords the user uses for authentication to web sites
- even if the authentication to a web service has taken place through an encrypted SSL tunnel, if the rest of the communication with this specific web service is not encrypted, the evil operator could grab a copy of the user’s session cookie for this service and access it pretending to be him
These are some of the nasty things that can happen when you access remote services through a proxy server which you do not control.
Is there any guarantee that exit node operators do not sniff network traffic?
Even if the exit-node operators are cool, who can guarantee that the network traffic is not monitored within the 3rd party ISP‘s network? If the user accesses personalized services without encryption, then, even if the user’s real IP and thus his real name is not known, various pieces of collected data can be combined together and possibly reveal his real identity. This process is widely known as re-identification.
Is there any guarantee that the ISP providing internet access to a TOR exit node does not collect and sell information to “marketers and identity thieves”?
I consider the TOR project quite important. But, since typical internet users are urged to use the TOR network in order to browse the internet, the involved risks have to be explained in detail.
On the other hand, I’d like to urge internet users to spend some time to familiarize themselves with the basics of the HTTP protocol, the concepts of HTTP authentication and cookie based authentication and the importance of encrypted HTTP connections through SSL or TLS tunnels. Since the internet has become part of your lives, regardless of your profession, you need to be educated about these things, so as to be able to realize when your communication with the various internet services is vulnerable. You don’t have to be gurus, but rather get an idea of what is going on.
So far, it is quite clear that the only way to stay on the safe side while using anonymizing proxies over which you usually do not have full control, like TOR, is to connect to any remote services using encrypted connections only, usually through SSL or TLS tunnels. Personally, I never use anonymizing networks or third party proxies. This is because I never really had the need to hide my real location. Furthermore, I find it pointless, as I don’t believe that such a thing as anonymity is really feasible. If I had to use TOR, I would try to find a way to connect to the remote service over an encrypted connection. In general, whenever I need a secure SOCKS proxy, for example when I have to use a public network to access personalized internet services which do not offer full SSL access, I use the OpenSSH client’s -D switch while logging in to an SSH server which I own and fully control, and thus I have all the security I need.
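For reference, the dynamic forwarding I mention works along these lines (the host name and port are examples, not my actual setup):

# Open a local SOCKS proxy on port 1080, tunnelled through my own SSH server
ssh -D 1080 user@myserver.example.org
# Then point the browser's or IM client's SOCKS proxy setting at localhost:1080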