The big news for this version is that we included a new “apt” binary that combines the most commonly used commands from apt-get and apt-cache. The commands are the same as their apt-get/apt-cache counterparts but with slightly different configuration options.
Currently the apt binary supports the following commands:
- list: similar to dpkg --list; it can be used with flags like --installed or --upgradable.
- search: works just like apt-cache search, but the results are sorted alphabetically.
- show: works like apt-cache show but hides some details that people are less likely to care about (like the hashes). The full record is still available via apt-cache show, of course.
- update: just like the regular apt-get update with color output enabled.
- install,remove: adds progress output during the dpkg run.
- upgrade: the same as apt-get upgrade --with-new-pkgs.
- full-upgrade: a more meaningful name for dist-upgrade.
- edit-sources: edit sources.list using $EDITOR.
You can enable/disable the install progress via:
# echo 'Dpkg::Progress-Fancy "1";' > /etc/apt/apt.conf.d/99progressbar
If you have further suggestions or bug reports about APT, get in touch and, most importantly, have fun!
Recently the ansible apt module got fnmatch (shell) style wildcard support for installing packages. Apparently this broke the workflow for some users who passed a “*” via a variable to apt to get the candidate version installed.
A more descriptive way of achieving this is to use one of the special words “candidate”, “installed”, or “newest” in the version tag or in the release tag.
For example you can write:
# apt-get install ansible/newest
# apt-get install 2vcard=candidate
As in the ansible case, this can be a useful default for scripts that calculate a version and need to fall back to a default.
The recently released apt 0.9.12 contains a bunch of good stuff, bugfixes and cleanups. But there are two new features I particularly like.
The first is the new parameter “--with-new-pkgs” for the upgrade command:
# apt-get upgrade --with-new-pkgs
It will install new dependencies on upgrade but never remove packages. A typical use-case is a stable system that gets a kernel update with a new kernel ABI, which ships as a new package.
The second is “--show-progress” for install/remove/upgrade/dist-upgrade, which will show inline progress while dpkg is running to indicate the global progress.
# apt-get install --show-progress tea
...
Selecting previously unselected package tea-data.
(Reading database ... 380116 files and directories currently installed.)
Unpacking tea-data (from .../tea-data_33.1.0-1_all.deb) ...
Progress: [ 10%]
Progress: [ 20%]
Progress: [ 30%]
Selecting previously unselected package tea.
Unpacking tea (from .../tea_33.1.0-1_amd64.deb) ...
Progress: [ 40%]
Progress: [ 50%]
Progress: [ 60%]
Processing triggers for doc-base ...
Processing 2 added doc-base files...
Registering documents with scrollkeeper...
...
Processing triggers for man-db ...
Setting up tea-data (33.1.0-1) ...
Progress: [ 70%]
Progress: [ 80%]
Setting up tea (33.1.0-1) ...
Progress: [ 90%]
Progress: [100%]
For the install progress, there is also a new experimental option
“Dpkg::Progress-Fancy”. It will display a persistent progress status bar in the last line of the terminal. It works like this:
# apt-get -o Dpkg::Progress-Fancy=true install tea
This kind of information is obviously most useful on complex operations like big installs or (release) upgrades.
My friend Peter (Kiwinote) has a very interesting new project called AppGrid. It’s a replacement for the Ubuntu Software Center written from scratch. Peter contributed a lot to the original software-center, so he knows the problem domain quite well. You should give it a try; it can be added via:
$ sudo add-apt-repository -y ppa:appgrid/stable
$ sudo apt-get update && sudo apt-get install -y appgrid
Then it can be found in the dash as “App Grid”. I hope you like it!
I like django and the more I work with it, the more I like it :)
For a unittest I needed to simulate requests coming from different remote addresses. And the django.test.client.Client makes this pretty easy:
import random

from django.test import TestCase
from django.test.client import Client

class DistributedTestClient(Client):
    def request(self, **request):
        request["REMOTE_ADDR"] = "192.168.%i.%i" % (
            random.randint(1, 254), random.randint(1, 254))
        return super(DistributedTestClient, self).request(**request)

class DistributedClientTestCase(TestCase):
    client_class = DistributedTestClient

    def test_distributed_meep(self):
        test_stuff()
$ ./demo.js pass salt
$6$salt$3aEJgflnzWuw1O3tr0IYSmhUY0cZ7iBQeBP392T7RXjLP3TKKu3ddIapQaCpbD4p9ioeGaVIjOHaym7HvCuUm0
$ python -c 'import crypt; print(crypt.crypt("pass", "$6$salt"))'
$6$salt$3aEJgflnzWuw1O3tr0IYSmhUY0cZ7iBQeBP392T7RXjLP3TKKu3ddIapQaCpbD4p9ioeGaVIjOHaym7HvCuUm0
With that, I plan to extend the PassHash firefox plugin to use that as the default algorithm for the password generation.
One of the projects I created a while ago is called “rapt (restricted apt)”. As I was asked about it on IRC recently, I thought I should mention it here as well :)
It is a python-apt app that allows regular users to install/update software or install build-depends via sudo without giving them full root access. rapt ensures that there is no interaction (like conffile prompts or debconf) that might allow the user to get a root shell. It supports blacklisting, and with a suitable sources.list it is an easy way to give limited access to more trusted users. One use-case is to allow developers to install build dependencies on a staging machine.
You can install it via
$ bzr branch lp:rapt
and just run the binary via sudo (with a sudoers file that allows running it). All it needs is python and python-apt (which is installed on most systems anyway).
When using ansible and its “setup” module to gather ad-hoc facts data about multiple hosts, remember that it runs the jobs in parallel, which may result in out-of-order output. With “ansible -f1” the number of parallel processes can be limited to one to ensure this won’t happen. E.g.:
$ ansible all -f1 -m setup -a filter=ansible_mounts
(the filter argument for the facts module is also a nice feature).
I recently started using ansible to automate some server administration tasks.
It’s very cool and easy to learn/extend. One nice feature is the “facts” gathering. It collects information about the host(s) and stores it in its internal variables. This is useful for conditional execution of tasks (see below) but also as an ad-hoc way to gather information like DMI data or the running kernel.
To see all “facts” known to ansible about the hosts, run:
$ ansible all -m setup
To execute tasks conditionally you can do something like this:
- name: install vmware packages
  action: apt pkg=open-vm-tools
  only_if: "'$ansible_virtualization_type' == 'VMware'"
Note that ansible 1.2+ has a different (and simpler) conditional called “when”.
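With the newer syntax, the same task would look something like this (a sketch; the fact name is unchanged):

```yaml
- name: install vmware packages
  apt: pkg=open-vm-tools
  when: ansible_virtualization_type == "VMware"
```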
Ansible is available in Ubuntu 12.04+ via:
$ sudo apt-get install ansible
It is also available in Debian unstable and testing.
Due to popular demand I moved debian apt and python-apt from bzr to git today. Moving was pretty painless:
$ git init
$ bzr fast-export --export-marks=marks.bzr -b debian/sid /path/to/debian-sid | git fast-import --export-marks=marks.git
And then a fast-import for the debian-wheezy and debian-experimental branches too. Then a
$ git gc --aggressive
(thanks to Guillem Jover for pointing this out) and that was it.
The branches are available at:
For a project of mine I created a small app based on webkitgtk that talks to an SSL server.
And I almost forgot about the libsoup default behavior for SSL certificate checking. By default, libsoup, and therefore webkitgtk, will not do any SSL certificate checks. You need to put something like the following snippet into your code (adjust for your language of choice):
from gi.repository import WebKit
session = WebKit.get_default_session()
session.set_property("ssl-use-system-ca-file", True)
If you don’t do this it will accept any certificate (including self-signed ones).
This is documented behavior in libsoup, and they don’t want to change it there for compatibility reasons. But for webkit it’s unexpected behavior (at least to me), and I hope the webkitgtk developers will consider changing this default. I filed a bug about it. So if you use webkitgtk and SSL, remember to set the above property.
I use the PassHash firefox extension to generate site-specific strong passwords. The idea behind the extension is that a master password and a siteTag (e.g. the domain name) are used to generate a SHA1-based HMAC. This hash is used as the password for the website. In python it’s essentially this code:
h = hmac.new(master_pass, site_tag, hashlib.sha1)
print(b64encode(h.digest())[:hash_len])
I wanted a commandline utility that outputs PassHash compatible hashes for when I use w3m (or in case the extension stops working for some reason).
To my delight I discovered that the upstream git repo of PassHash already has a python helper to generate PassHash compatible passwords. I added some tweaks using Python’s argparse and now I’m really happy with it:
$ ./tools/passhash.py --hash-size 14 slashdot.org
Please enter the master key:
KPXveo7bq7j1%X
Hard to brute-force and matches what the extension generates.
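The whole scheme fits in a few lines of Python; here is a self-contained sketch (the function and parameter names are mine, not the tool’s):

```python
import hashlib
import hmac
from base64 import b64encode

def passhash(master_pass: bytes, site_tag: bytes, hash_len: int = 14) -> str:
    # HMAC-SHA1 keyed with the master password over the site tag,
    # base64-encoded and truncated -- the scheme described above
    digest = hmac.new(master_pass, site_tag, hashlib.sha1).digest()
    return b64encode(digest)[:hash_len].decode("ascii")

# deterministic: the same inputs always yield the same site password
print(passhash(b"master-secret", b"slashdot.org"))
```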
I uploaded squid-deb-proxy into Debian unstable today and it’s in the NEW queue. I created it back in the days of Ubuntu 10.04, and some people voiced interest in having it in Debian as well, so I spent a bit of time getting it customized for Debian.
Squid-deb-proxy uses the well-known squid proxy with a custom configuration to cache deb packages and index files (like Packages.gz). It allows caching from the default archives and mirrors and rejects anything else by default.
The basic philosophy is that “it just works”. You run on your server:
root@server# apt-get install squid-deb-proxy
and on your clients:
root@client# apt-get install squid-deb-proxy-client
and that’s it. It does not require any fiddling with configuration (unless you want to). The default will let you connect to .debian.org and nothing else.
The server announces itself via avahi as _apt_proxy._tcp, and the client hooks into apt via Acquire::http::ProxyAutoDetect. The client is also useful with other servers that announce themselves via avahi.
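On the apt side this boils down to a one-line configuration snippet along these lines (a sketch; the helper path shown here is illustrative of what the client package ships):

```
Acquire::http::ProxyAutoDetect "/usr/share/squid-deb-proxy-client/apt-avahi-discover";
```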
Packaging was a bit more work than anticipated because there is a bit of setup and teardown work in the initscript. For Debian a sysvinit script was needed; Ubuntu uses upstart, so it took a bit of refactoring to extract the code into a common helper.
If you want to try it now, it’s available via:
$ bzr branch lp:squid-deb-proxy
$ cd squid-deb-proxy
$ bzr-buildpackage
and in unstable once it leaves the NEW queue.
A while ago I played with sqlite. It’s pretty awesome. When using the full text search (fts) extension it also provides super fast full text searching. One of the things I was missing (compared to other engines) is similar-text suggestion (“Did you mean?”) support. Fortunately this is relatively easy to add via the fts4aux virtual table that sqlite supports.
I pushed a full example to https://github.com/mvo5/sqlite-fts-did-you-mean. The way it works is that you build a set of similar words and use it to query the “term” values in the fts4aux table.
Here is the output from the example:
$ ./fts_did_you_mean.py aptx
Did you mean:
apex (rank: 2)
apt (rank: 1)
time 0.024138927459716797
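The idea can be sketched with Python’s sqlite3 module. In this sketch difflib stands in for the repo’s similar-word set, and the schema is made up for the example; it needs an SQLite built with FTS4, which most distributions enable:

```python
import difflib
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts4(body)")
con.executemany("INSERT INTO docs(body) VALUES (?)",
                [("apt is the debian package manager",),
                 ("the apex of the arch",)])
# fts4aux exposes every term of the full-text index as a row;
# col = '*' selects the per-term aggregate rows
con.execute("CREATE VIRTUAL TABLE docs_terms USING fts4aux(docs)")

def did_you_mean(word):
    terms = [row[0] for row in con.execute(
        "SELECT DISTINCT term FROM docs_terms WHERE col = '*'")]
    if word in terms:
        return [word]
    return difflib.get_close_matches(word, terms, n=3)

print(did_you_mean("aptx"))  # e.g. ['apt', 'apex']
```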
I wrote gdebi a long time ago to make it really easy to install .deb packages with proper dependency resolution, from the commandline and via a gtk (and kde) UI. But another neat (but not very well known) feature of the gdebi-core cli tool is installing the build-dependencies of a debian source package. If you run:
$ gdebi debian/control
in an unpacked debian source package, it will check for missing build-dependencies and offer to install them.
There is a new 0.80~exp2 version of unattended-upgrades available. This would normally go to debian/sid, but because of the freeze I decided to put it into experimental. It brings some nice features, like a “--verbose” mode that shows the actual dpkg output while running and codename-based matching, plus some nice fixes from Brian Murray (thanks!).
The codename-based matching is interesting as it allows writing a matcher like “n=wheezy” (or, more verbosely, “codename=wheezy”) in the config file. You can use “apt-cache policy” (without further arguments) to see what origins are available.
Enjoy, and let me know if you find any issues!
I got a Music Shield from Seeedstudio a while ago. It’s fun to play with, but the example source did not quite work for me. So I ported it over to the new SD library from arduino 1.0 and published a google plus post about it. Some people asked me about the source, so I pushed the result to launchpad and github. I also filed an upstream request to include it.
If you tried to reach me via my ubuntu.com mail in the last couple of days you got a 550 error. There seems to be some misconfiguration on the ubuntu.com mailserver; I hope it gets fixed soon. You can use my mvo at debian.org address in the meantime.
I created a small extension for sqlite3 today to allow ordering by debian versions easily. This allows writing:
SELECT * FROM packages ORDER BY version COLLATE debversion_compare;
And it will do the right ordering. It’s available on launchpad.
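The way such a collation plugs in can be sketched with Python’s sqlite3 module. The comparison function below is a toy stand-in (real Debian version ordering also handles epochs, “~”, and mixed digit/letter runs), but it shows where the COLLATE name gets wired up:

```python
import re
import sqlite3

def _cmp(a, b):
    return (a > b) - (a < b)

def debversion_compare(v1, v2):
    # toy comparison: split on '.' and '-', compare numerically where
    # both components are numbers (real Debian ordering is subtler)
    p1 = re.split(r"[.\-]", v1)
    p2 = re.split(r"[.\-]", v2)
    for a, b in zip(p1, p2):
        c = _cmp(int(a), int(b)) if a.isdigit() and b.isdigit() else _cmp(a, b)
        if c:
            return c
    return _cmp(len(p1), len(p2))

con = sqlite3.connect(":memory:")
con.create_collation("debversion_compare", debversion_compare)
con.execute("CREATE TABLE packages (name TEXT, version TEXT)")
con.executemany("INSERT INTO packages VALUES (?, ?)",
                [("tea", "1.10-1"), ("tea", "1.9-1"), ("tea", "1.2-3")])
versions = [r[0] for r in con.execute(
    "SELECT version FROM packages"
    " ORDER BY version COLLATE debversion_compare")]
print(versions)  # ['1.2-3', '1.9-1', '1.10-1'] -- a plain text sort would put 1.10 first
```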