Date: Wednesday, 23 Apr 2014 17:29
A simple, easy, fast and useful app! Change the color of your folders in Nautilus in a really easy way, so that you can get a better visual layout!

Folder Color in Ubuntu

How to install? Just enter this command into a Terminal, logout and enjoy it!
sudo add-apt-repository ppa:costales/folder-color ; sudo apt-get update ; sudo apt-get install folder-color -y

More info.
Author: "Marcos Costales"
Date: Wednesday, 23 Apr 2014 17:16

This upstirring undertaking Ubuntu is, as my colleague MPT explains, performance art. Not only must it be art, it must also perform, and that on a deadline. So many thanks and much credit to the teams and individuals who made our most recent release, the Trusty Tahr, into the gem of 14.04 LTS. And after the uproarious ululation and post-release respite, it’s time to open the floodgates to umpteen pent-up changes and begin shaping our next show.

The discipline of an LTS constrains our creativity – our users appreciate the results of a focused effort on performance and stability and maintainability, and we appreciate the spring cleaning that comes with a focus on technical debt. But the point of spring cleaning is to make room for fresh ideas and new art, and our next release has to raise the roof in that regard. And what a spectacular time to be unleashing creativity in Ubuntu. We have the foundations of convergence so beautifully demonstrated by our core apps teams – with examples that shine on phone and tablet and PC. And we have equally interesting innovation landed in the foundational LXC 1.0, the fastest, lightest virtual machines on the planet, born and raised on Ubuntu. With an LTS hot off the press, now is the time to refresh the foundations of the next generation of Linux: faster, smaller, better scaled and better maintained. We’re in a unique position to bring useful change to the ubiquitary Ubuntu developer, that hardy and precise pioneer of frontiers new and potent.

That future Ubuntu developer wants to deliver app updates instantly to users everywhere; we can make that possible. They want to deploy distributed brilliance instantly on all the clouds and all the hardware. We’ll make that possible. They want PAAS and SAAS and an Internet of Things that Don’t Bite; let’s make that possible. If free software is to fulfil its true promise, it needs to be useful for people putting precious parts into production, and we’ll stand by our commitment that Ubuntu be the most useful platform for free software developers who carry the responsibilities of Dev and Ops.

It’s a good time to shine a light on umbrageous if understandably imminent undulations in the landscape we love – time to bring systemd to the centre of Ubuntu, time to untwist ourselves from Python 2.x and time to walk a little uphill and, thereby, upstream. Time to purge the ugsome and prune the unusable. We’ve all got our ucky code, and now’s a good time to stand united in favour of the useful over the uncolike and the utile over the uncous. It’s not a time to become unhinged or ultrafidian, just a time for careful review and consideration of business as usual.

So bring your upstanding best to the table – or the forum – or the mailing list – and let’s make something amazing. Something unified and upright, something about which we can be universally proud. And since we’re getting that once-every-two-years chance to make fresh starts and dream unconstrained dreams about what the future should look like, we may as well go all out and give it a dreamlike name. Let’s get going on the utopic unicorn. Give it stick. See you at vUDS.

Author: "mark"
Date: Wednesday, 23 Apr 2014 14:10

On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I didn’t go through /etc/init/*.conf with a fine-toothed comb yet to check which upstart jobs need to be ported; that’s now part of the TODO list.
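
To give an idea of what the missing pieces look like, here is a bare-bones sketch of a display manager unit; this is purely illustrative and not the actual unit shipped in the PPA packages:

  # illustrative sketch only, not the unit from the PPA
  [Unit]
  Description=Light Display Manager
  After=systemd-user-sessions.service

  [Service]
  ExecStart=/usr/sbin/lightdm

  [Install]
  Alias=display-manager.service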

So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

  sudo add-apt-repository ppa:pitti/systemd
  sudo apt-get update
  sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. For now, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.
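
For example, the edited line then ends up looking something like this (the kernel version and root= arguments will differ on your system):

  linux /boot/vmlinuz-3.13.0-24-generic root=UUID=... ro quiet splash init=/lib/systemd/systemd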

For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

  sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
  sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly remove the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and run sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again.
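
For reference, the relevant part of /etc/default/grub then looks something like this (a sketch with the menu still visible; keep whatever other options you already have, and uncomment the GRUB_HIDDEN_TIMEOUT lines again if you want to hide the menu):

  #GRUB_HIDDEN_TIMEOUT=0
  #GRUB_HIDDEN_TIMEOUT_QUIET=true
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"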

I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now. :-)

Update: As the comments pointed out, this broke /etc/resolv.conf. I now uploaded a resolvconf package to the PPA which provides the missing unit (counterpart to the /etc/init/resolvconf.conf upstart job), and this now works fine. If you are in that situation, please boot with upstart, and do the following to clean up:

  sudo rm /etc/resolv.conf
  sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf

Then you can boot back to systemd.

Author: "pitti"
Date: Wednesday, 23 Apr 2014 13:54

I’m thinking of doing a vBlog about Ubuntu and other things:


Author: "belkinsa"
Date: Wednesday, 23 Apr 2014 05:52

Juju sos is my entryway into Go code and the juju internals. This plugin will execute and pull sosreports from all machines known to juju or a specific machine of your choice and copy them locally on your machine.

An example of what this plugin does, first, some output of juju status to give you an idea of the machines I have:

┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.18.1.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
  "1":
    agent-state: started
    agent-version: 1.18.1.1
    dns-name: 10.0.3.27
    instance-id: poe-local-machine-1
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
  "2":
    agent-state: started
    agent-version: 1.18.1.1
    dns-name: 10.0.3.19
    instance-id: poe-local-machine-2
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
services:
  keystone:
    charm: cs:trusty/keystone-2
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - openstack-dashboard
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.18.1.1
        machine: "2"
        public-address: 10.0.3.19
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard-0
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
      identity-service:
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: started
        agent-version: 1.18.1.1
        machine: "1"
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.3.27

Basically, what we are looking at is two machines running various services, in my case OpenStack Horizon and Keystone. Now suppose I have some issues with my juju machines and OpenStack, and I need a quick way to gather a bunch of data on those machines and send it to someone who can help. With my juju-sos plugin, I can quickly gather sosreports on each of the machines I care about with as little typing as possible.

Here is the output from juju sos querying all machines known to juju:

┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> juju sos -d ~/scratch
2014-04-23 05:30:47 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:30:47 INFO juju.state open.go:81 opening state, mongo addresses: ["10.0.3.1:37017"]; entity ""
2014-04-23 05:30:47 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:53 Querying all machines
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(1)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(2)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 1
2014-04-23 05:30:55 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
2014-04-23 05:30:56 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:31:08 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> ls $HOME/scratch
sosreport-ubuntu-20140423040507.tar.xz  sosreport-ubuntu-20140423052125.tar.xz  sosreport-ubuntu-20140423052545.tar.xz
sosreport-ubuntu-20140423050401.tar.xz  sosreport-ubuntu-20140423052223.tar.xz  sosreport-ubuntu-20140423052600.tar.xz
sosreport-ubuntu-20140423050727.tar.xz  sosreport-ubuntu-20140423052330.tar.xz  sosreport-ubuntu-20140423052610.tar.xz
sosreport-ubuntu-20140423051436.tar.xz  sosreport-ubuntu-20140423052348.tar.xz  sosreport-ubuntu-20140423053052.tar.xz
sosreport-ubuntu-20140423051635.tar.xz  sosreport-ubuntu-20140423052450.tar.xz  sosreport-ubuntu-20140423053101.tar.xz
sosreport-ubuntu-20140423052006.tar.xz  sosreport-ubuntu-20140423052532.tar.xz

Another example of juju sos just capturing a sosreport from one machine:

┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> juju sos -d ~/scratch -m 2
2014-04-23 05:41:59 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:42:00 INFO juju.state open.go:81 opening state, mongo addresses: ["10.0.3.1:37017"]; entity ""
2014-04-23 05:42:00 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:70 Querying one machine(2)
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:42:08 INFO juju.sos main.go:99 Copying archive to "/home/poe/scratch"

Fancy, fancy :)

Of course this is a work in progress and I have a few ideas of what else to add here, some of those being:

  • Rename the sosreports to match the dns-name of the juju machine
  • Filter sosreport captures based on services
  • Optionally pass arguments to sosreport command in order to filter out specific plugins I want to run, ie

    $ juju sos -d ~/sosreport -- -b -o juju,maas,nova-compute

As usual, contributions are welcome, and installation instructions are located in the README.

Author: "Adam Stokes"
Date: Wednesday, 23 Apr 2014 05:48
After working with my ambilight clone for a few days, I discovered the biggest annoyance was that it wouldn't turn off after turning off the TV.  I had some ideas on how I could remotely trigger it from the phone or from an external HTPC but I really wanted a self contained solution in case I decided to swap the HTPC for a FireTV or a Chromecast.

This brought me to trying to do it directly via my remote.  My HTPC uses an mceusb receiver, so I was tempted to just get another mceusb for the pi.  This would have been overkill though; the pi has tons of unused GPIOs, so it can be done far more simply (and cheaper).

I looked into it and discovered that someone actually already wrote a kernel module that directly controls an IR sensor on a GPIO.  The kernel module is based off the existing lirc_serial module, but adapted specifically for the raspberry pi.  (See http://aron.ws/projects/lirc_rpi/ for more information)

Hardware

All that's necessary is a 38 kHz IR sensor.  You'll spend under $5 on one of them on Amazon (plus some shipping) or you can get one from radio shack if you want something quick and local.  I spent $4.87 on one at my local radio shack.

The sensor is really simple, 3 pins.  All 3 pins are available in the pi's header.  One goes to the 3.3V rail, one to ground, and one to a spare GPIO.  There are a few places on the header that you can use for each.  Just make sure you match up the pinout to the sensor you get.  I chose to use GPIO 22 as it's most convenient for my lego case.  The lirc_rpi module defaults to GPIO 18.
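
To summarize the connections described above (double-check the pin order against your particular sensor's datasheet, since it varies between parts):

  sensor OUT (data)  ->  GPIO 22 (header pin 15)   # lirc_rpi gpio_in_pin=22
  sensor GND         ->  Ground  (header pin 6)
  sensor VCC         ->  3.3V    (header pin 1)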

Some notes to keep in mind:

  1. While soldering it, be cognizant of which way you want the sensor to face so that it can be accessed from the remote.  
  2. Remember that you are connecting to 3.3V and Ground from the Pi header.  The ground connection won't be the same as your rail that was used to power the pi if you are powering via USB.  
  3. The GPIO pins are not rated for 5V, so be sure to connect to the 3.3V.



Software


LIRC is available directly in the raspbian repositories.  Install it like this:

# sudo apt-get install lirc

Manually load the module so that you can test it.

# sudo modprobe lirc_rpi gpio_in_pin=22

Now use mode2 to test that it's working.  Once you run the command, press some buttons on your remote.  You should see output about space, pulse and other stuff.  Once you're satisfied, press ctrl-c to exit.

# mode2 -d /dev/lirc0

Now, add the modules that need to be loaded to /etc/modules.  If you are using a different GPIO than 18, specify it here again.  This will make sure that lirc_rpi loads on boot.

/etc/modules

lirc_dev
lirc_rpi gpio_in_pin=22


Now modify /etc/lirc/hardware.conf to match this configuration to make it work for the rpi:

/etc/lirc/hardware.conf

# /etc/lirc/hardware.conf
#
# Arguments which will be used when launching lircd
LIRCD_ARGS="--uinput"

#Don't start lircmd even if there seems to be a good config file
#START_LIRCMD=false

#Don't start irexec, even if a good config file seems to exist.
#START_IREXEC=false

#Try to load appropriate kernel modules
LOAD_MODULES=true

# Run "lircd --driver=help" for a list of supported drivers.
DRIVER="default"
# usually /dev/lirc0 is the correct setting for systems using udev 
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"

# Default configuration files for your hardware if any
LIRCD_CONF=""
LIRCMD_CONF=""

Next, we'll record the buttons that you want the pi to trigger the backlight toggle on.  I chose to do it on the event of turning the TV on or off.  For me I actually have a harmony remote that has separate events for "Power On" and "Power Off" available.  So I chose to program KEY_POWER and KEY_POWER2.  If you don't have the codes available for both "Power On" and "Power Off" then you can just program "Power Toggle" to KEY_POWER.

# irrecord -d /dev/lirc0 ~/lircd.conf

Once you have the lircd.conf recorded, move it into /etc/lirc to overwrite /etc/lirc/lircd.conf and start lirc:

# sudo mv /home/pi/lircd.conf /etc/lirc/lircd.conf
# sudo /etc/init.d/lirc start

With lirc running, you can verify that it's properly recognizing your key events using the irw command.  Once irw is running, press the button on the remote and make sure your pi recognizes it.  Once you're done press ctrl-c to exit.

# irw

Now that you've validated the pi can recognize the command, it's time to tie it to an actual script.  Create /home/pi/.lircrc with contents like this:

/home/pi/.lircrc

begin
     button = KEY_POWER
     prog = irexec
     repeat = 0
     config = /home/pi/toggle_backlight.sh off
end

begin
     button = KEY_POWER2
     prog = irexec
     repeat = 0
     config = /home/pi/toggle_backlight.sh on
end

My toggle_backlight.sh looks like this:

/home/pi/toggle_backlight.sh

#!/bin/sh
ARG=toggle
if [ -n "$1" ]; then
ARG=$1
fi
RUNNING=$(pgrep hyperion-v4l2)
if [ -n "$RUNNING" ]; then
if [ "$ARG" = "on" ]; then
exit 0
fi
pkill hyperion-v4l2
hyperion-remote --color black
exit 0
fi
if [ "$ARG" = "off" ]; then
hyperion-remote --color black
exit 0
fi
#spawn hyperion remote before actually clearing channels to prevent extra flickers
hyperion-v4l2 --crop-height 30 --crop-width 10 --size-decimator 8 --frame-decimator 2 --skip-reply --signal-threshold 0.08&
hyperion-remote --clearall


To test, run irexec and then press your remote button.  With any luck irexec will launch the toggle script and change your LED status.

# irexec

Lastly, you need to add irexec to your /etc/rc.local so that it starts when the pi boots.  Make sure you put the execution before the exit 0 line:

/etc/rc.local

su pi -c "irexec -d"
su pi -c "/home/pi/toggle_backlight.sh off"

Reboot your pi, and make sure everything works together.  

# sudo reboot


Author: "Mario Limonciello"
Date: Wednesday, 23 Apr 2014 03:10

I just completed upgrading four computers to Ubuntu 14.04 tonight. My testing machine has been running 14.04 since the early alpha phase, but in the last two days I upgraded my work Lenovo W520, my personal Lenovo T530 and the self-assembled desktop with a Core 2 Duo and Nvidia 8800 GTS that I handed down to my son.

Confidence In Ubuntu
On Friday of this week I will be involved in delivering training to a group of Boy Scout leaders at a Wood Badge course. I will utilize my primary laptop, the T530, to give a presentation and produce the Gilwell Gazette. I completed a great deal of prep work on Ubuntu 13.10, and if I did not have complete confidence in Ubuntu 14.04 I would have waited until after the weekend to upgrade. I needed to be confident that the multi-monitor functionality would work, and that documents produced in an earlier version of LibreOffice would not suddenly change their page layouts. In short, I was depending on Ubuntu being dependable and solid more than I usually do.

Subtle Changes Add Flexibility and Polish
Ubuntu added some very small tweaks that truly add to the overall user experience. The borderless windows, new lock screen, and smaller minimum size of launcher icons all add up to slight, but pleasant changes.

Here is a screen shot of the 14.04 desktop on the Lenovo T530.

14.04 desktop


Author: "Charles Profitt"
Date: Wednesday, 23 Apr 2014 01:12




This article is cross-posted on Docker's blog as well.

There is a design pattern, occasionally found in nature, in which some of the most elegant and impressive solutions seem so intuitive, in retrospect.



For me, Docker is just that sort of game-changing, hyper-innovative technology that, at its core, somehow seems straightforward, beautiful, and obvious.



Linux containers, repositories of popular base images, snapshots using modern copy-on-write filesystem features.  Brilliant, yet so simple.  Docker.io for the win!


I clearly recall nine long months ago, intrigued by a fervor of HackerNews excitement pulsing around a nascent Docker technology.  I followed a set of instructions on a very well designed and tastefully manicured web page, in order to launch my first Docker container.  Something like: start with Ubuntu 13.04, downgrade the kernel, reboot, add an out-of-band package repository, install an oddly named package, import some images, perhaps debug or ignore some errors, and then launch.  In a few moments, I could clearly see the beginnings of a brave new world of lightning fast, cleanly managed, incrementally saved, highly dense, operating system containers.

Ubuntu inside of Ubuntu, Inception style.  So.  Much.  Potential.



Fast forward to today -- April 18, 2014 -- and the combination of Docker and Ubuntu 14.04 LTS has raised the bar, introducing a new echelon of usability and convenience, and coupled with the trust and track record of enterprise grade Long Term Support from Canonical and the Ubuntu community.
Big thanks, by the way, to Paul Tagliamonte, upstream Debian packager of Docker.io, as well as all of the early testers and users of Docker during the Ubuntu development cycle.
Docker is now officially in Ubuntu.  That makes Ubuntu 14.04 LTS the first enterprise grade Linux distribution to ship with Docker natively packaged, continuously tested, and instantly installable.  Millions of Ubuntu servers are now never more than three commands away from launching or managing Linux container sandboxes, thanks to Docker.


sudo apt-get install docker.io
sudo docker.io pull ubuntu
sudo docker.io run -i -t ubuntu /bin/bash


And after that last command, Ubuntu is now running within Docker, inside of a Linux container.

Brilliant.

Simple.

Elegant.

User friendly.

Just the way we've been doing things in Ubuntu for nearly a decade. Thanks to our friends at Docker.io!


Cheers,
:-Dustin
Author: "Dustin Kirkland"
Date: Tuesday, 22 Apr 2014 20:34

One of the less exciting points in the day of a Debian Developer is the moment they realize they have to create a repackaged upstream source tarball.

This is often a process that they have to repeat on each new upstream release too.

Wouldn't it be useful to:

  • Scan all the existing repackaged upstream source tarballs and diff them against the real tarballs to catalog the things that have to be removed and spot patterns? (A rough sketch of such a diff follows this list.)
  • Operate a system that automatically produces repackaged upstream source tarballs for all tags in the upstream source repository or all new tarballs in the upstream download directory? Then the DD can take any of them and package them when he wants to with less manual effort.
  • Apply any insights from this process to detect non-free content in the rest of the Debian archive and when somebody is early in the process of evaluating a new upstream project?
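
For the first idea, a rough shell sketch (with hypothetical file names) of cataloguing what a repackaged tarball drops could look like this:

  mkdir orig dfsg
  tar -xf foo_1.0.orig.tar.gz      -C orig --strip-components=1
  tar -xf foo_1.0+dfsg.orig.tar.gz -C dfsg --strip-components=1
  # files present upstream but removed from the repackaged tarball
  diff -rq orig dfsg | grep '^Only in orig'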

Google Summer of Code is back

One of the Google Summer of Code projects this year involves recursively building Java projects from their source. Some parts of the project, such as repackaged upstream tarballs, can be generalized for things other than Java. Web projects including minified JavaScript are a common example.

Andrew Schurman, based near Vancouver, is the student selected for this project. Over the next couple of weeks, I'll be starting to discuss the ideas in more depth with him. I keep on stumbling on situations where repackaged upstream tarballs are necessary and so I'm hoping that this is one area the community will be keen to collaborate on.

Author: "Daniel.Pocock"
Date: Tuesday, 22 Apr 2014 12:11

KDE Project:

There's only 1 tool to deal with an unsupported Windows XP...

Author: "jriddell"
Date: Tuesday, 22 Apr 2014 12:00

Recently my friend Joël Franusic was stressing out about sending postcards to his Kickstarter backers and asked me to help him out. He pointed me to the excellent service Lob.com, which is a very developer-friendly API around printing and mailing. We quickly had some code up and running that could take a CSV export of Kickstarter campaign backers, verify addresses, and trigger the sending of customizable, actual physical postcards to the backers.


We wanted to share the project such that it could help out other Kickstarter campaigns, so we put it on Github: https://github.com/mrooney/kickstarter-lob.

Below I explain how to install and use this script to use Lob to send postcards to your Kickstarter backers. The section after that explains how the script works in detail.

Using the script to mail postcards to your backers

First, you’ll need to sign up for a free account at Lob.com, then grab your “Test API Key” from the “Account Details” section of your account page. At this point you can use your sandbox API key to test away free of charge and view images of any resulting postcards. Once you are happy with everything, you can plug in credit card details and start using your “Live” API key. Second, you’ll need an export from Kickstarter for the backers you wish to send postcards to.

Now you’ll want to grab the kickstarter-lob code and get it set.

These instructions assume that you’re using a POSIX compatible operating system like Mac OS X or Linux. If you’re using Mac OS X, open the “Terminal” program and type the commands below into it to get started:

git clone https://github.com/mrooney/kickstarter-lob.git
cd kickstarter-lob
sudo easy_install pip # (if you don’t have pip installed already)
pip install -r requirements.txt
cp config_example.json config.json
open config.json

At this point, you should have a text editor open with the configuration information. Plug in the correct details, making sure to maintain quotes around the values. You’ll need to provide a few things besides an API key:

  • A URL of an image or PDF to be used for the front of the postcard.
    This means that you need to have your PDF available online somewhere. I suggest using Amazon’s S3 service to host your PDF.

  • A message to be printed on the back of the postcard (the address of the receiver will automatically show up here as well).

  • Your return address.
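
For reference, a filled-in config.json ends up looking roughly like this. The keys are the ones the script reads, the values are placeholders, and the nested from-address simply reuses the same field names the script uses for recipient addresses; config_example.json in the repository remains the authoritative template:

  {
      "api_key": "YOUR_TEST_API_KEY",
      "postcard_name": "Kickstarter thank-you",
      "postcard_front": "https://s3.amazonaws.com/your-bucket/postcard_front.pdf",
      "postcard_message": "Thanks so much for backing our project!",
      "postcard_from_address": {
          "name": "Your Name",
          "address_line1": "123 Main St",
          "address_line2": "",
          "address_city": "Portland",
          "address_state": "OR",
          "address_zip": "97201",
          "address_country": "US"
      }
  }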

Now you are ready to give it a whirl. Run it like so. Make sure you include the filename for your Kickstarter export:

$ python kslob.py ~/Downloads/your-kickstarter-backer-report.csv
Fetching list of any postcards already sent...
Verifying addresses of backers...
warning: address verification failed for jsmith@example.com, cannot send to this backer.
Already sent postcards to 0 of 161 backers
Send to 160 unsent backers now? [y/N]: y
Postcard sent to Jeff Bezos! (psc_45df20c2ade155a9)
Postcard sent to Tim Cook! (psc_dcbf89cd1e46c488)
...
Successfully sent to 160 backers with 0 failures

The script will verify all addresses, and importantly, only send to addresses not already sent to. The script queries Lob to keep track of who you’ve already sent a postcard to; this important feature allows you to download new Kickstarter exports as people fill in or update their addresses. After downloading a new export from Kickstarter, just run the script against the new export, and the script will only send postcards to the new addresses.

Before anything actually happens, you’ll notice that you’re informed of how many addresses have not yet received postcards and prompted to send them or not, so you can feel assured it is sending only as many postcards as you expect.

If you were to run it again immediately, you’d see something like this:

$ python kslob.py ~/Downloads/your-kickstarter-backer-report.csv
 Fetching list of any postcards already sent...
 Verifying addresses of backers...
 warning: address verification failed for jsmith@example.com, cannot send to this backer.
 Already sent postcards to 160 of 161 backers
 SUCCESS: All backers with verified addresses have been processed, you're done!

After previewing your sandbox postcards on Lob’s website, you can plug in your live API key in the config.json file and send real postcards at reasonable rates.

How the script works

This section explains how the script actually works. If all you wanted to do is send postcards to your Kickstarter backers, then you can stop reading now. Otherwise, read on!

Before you get started, take a quick look at the “kslob.py” file on GitHub: https://github.com/mrooney/kickstarter-lob/blob/master/kslob.py

We start by importing four Python libraries: “csv”, “json”, “lob”, and “sys”. Of those four libraries, “lob” is the only one that isn’t part of Python’s standard library. The “lob” library is installed by using the “pip install -r requirements.txt” command I suggest using above. You can also install “lob-python” using pip or easy_install.

#!/usr/bin/env python
import csv
import json
import lob
import sys

Next we define one class named “ParseKickstarterAddresses” and two functions, “addr_identifier” and “kickstarter_dict_to_lob_dict”.

“ParseKickstarterAddresses” is the code that reads in the backer report from Kickstarter and turns it into an array of Python dictionaries.

class ParseKickstarterAddresses:
   def __init__(self, filename):
       self.items = []
       with open(filename, 'r') as csvfile:
           reader = csv.DictReader(csvfile)
           for row in reader:
               self.items.append(row)

The “addr_identifier” function takes an address and turns it into a unique identifier, allowing us to avoid sending duplicate postcards to backers.

def addr_identifier(addr):
   return u"{name}|{address_line1}|{address_line2}|{address_city}|{address_state}|{address_zip}|{address_country}".format(**addr).upper()

The “kickstarter_dict_to_lob_dict” function takes a Python dictionary and turns it into a dictionary we can give to Lob as an argument.

def kickstarter_dict_to_lob_dict(dictionary):
   ks_to_lob = {'Shipping Name': 'name',
                'Shipping Address 1': 'address_line1',
                'Shipping Address 2': 'address_line2',
                'Shipping City': 'address_city',
                'Shipping State': 'address_state',
                'Shipping Postal Code': 'address_zip',
                'Shipping Country': 'address_country'}
   address_dict = {}
   for key in ks_to_lob.keys():
       address_dict[ks_to_lob[key]] = dictionary[key]
   return address_dict

The “main” function is where the majority of the logic for our script resides. Let’s cover that in more detail.

We start by reading in the name of the Kickstarter backer export file, loading our configuration file (“config.json”), and then configuring Lob with the Lob API key from the configuration file:

def main():
   filename = sys.argv[1]
   config = json.load(open("config.json"))
   lob.api_key = config['api_key']

Next we query Lob for the list of postcards that have already been sent. You’ll notice that the “processed_addrs” variable is a Python “set”; if you haven’t used a set in Python before, a set is sort of like an array that doesn’t allow duplicates. We only fetch 100 results from Lob at a time, and use a “while” loop to make sure that we get all of the results.

print("Fetching list of any postcards already sent...")
processed_addrs = set()
postcards = []
postcards_result = lob.Postcard.list(count=100)
while len(postcards_result):
    postcards.extend(postcards_result)
    postcards_result = lob.Postcard.list(count=100, offset=len(postcards))

Once we fetch all of the postcards, we print out how many were found:

print("...found {} previously sent postcards.".format(len(postcards)))

Then we iterate through all of our results and add them to the “processed_addrs” set. Note the use of the “addr_identifier” function, which turns each address dictionary into a string that uniquely identifies that address.

for processed in postcards:
    identifier = addr_identifier(processed.to.to_dict())
    processed_addrs.add(identifier)

Next we set up a bunch of variables that will be used later on, variables with configuration information for the postcards that Lob will send, the addresses from the Kickstarter backers export file, and variables to keep track of who we’ve sent postcards to and who we still need to send postcards to.

postcard_from_address = config['postcard_from_address']
postcard_message = config['postcard_message']
postcard_front = config['postcard_front']
postcard_name = config['postcard_name']
addresses = ParseKickstarterAddresses(filename)
to_send = []
already_sent = []

At this point, we’re ready to start validating addresses. The code below loops over every line in the Kickstarter backers export file and uses Lob to see if the address is valid.

print("Verifying addresses of backers...")
for line in addresses.items:
    to_person = line['Shipping Name']
    to_address = kickstarter_dict_to_lob_dict(line)
    try:
        to_name = to_address['name']
        to_address = lob.AddressVerify.verify(**to_address).to_dict()['address']
        to_address['name'] = to_name
    except lob.exceptions.LobError:
        msg = 'warning: address verification failed for {}, cannot send to this backer.'
        print(msg.format(line['Email']))
        continue

If the address is indeed valid, we check to see if we’ve already sent a postcard to that address. If so, the address is added to the list of addresses we’ve “already_sent” postcards to. Otherwise, it’s added to the list of addresses we still need “to_send” postcards to.

if addr_identifier(to_address) in processed_addrs:
    already_sent.append(to_address)
else:
    to_send.append(to_address)

Next we print out the number of backers we’ve already sent postcards to and check to see if we need to send postcards to anybody, exiting if we don’t need to send postcards to anybody.

nbackers = len(addresses.items)
print("Already sent postcards to {} of {} backers".format(len(already_sent), nbackers))
if not to_send:
    print("SUCCESS: All backers with verified addresses have been processed, you're done!")
    return

Finally, if we do need to send one or more postcards, we tell the user how many postcards will be mailed and then ask them to confirm that those postcards should be mailed:

query = "Send to {} unsent backers now? [y/N]: ".format(len(to_send), nbackers)
if raw_input(query).lower() == "y":
    successes = failures = 0

If the user enters “Y” or “y”, then we start sending postcards. The call to Lob is wrapped in a “try/except” block. We handle calls to the Lob library that raise a “LobError” exception, counting those calls as a “failure”. Other exceptions are not handled and will result in the script exiting with that exception.

for to_address in to_send:
    try:
        rv = lob.Postcard.create(to=to_address, name=postcard_name, from_address=postcard_from_address, front=postcard_front, message=postcard_message)
        print("Postcard sent to {}! ({})".format(to_address['name'], rv.id))
        successes += 1
    except lob.exceptions.LobError:
        msg = 'Error: Failed to send postcard to Lob.com'
        print("{} for {}".format(msg, to_address['name']))
        failures += 1

Lastly, we print a message indicating how many messages were sent and how many failures we had.

    print("Successfully sent to {} backers with {} failures".format(successes, failures))
else:

(If the user pressed a key other than “Y” or “y”, this is the message that they’ll see)

print("Okay, not sending to unsent backers.")

And there you have it, a short script that uses Lob to send postcards to your Kickstarter backers, with code to only send one postcard per address, that gracefully handles errors from Lob.

I hope that you’ve found this useful! Please let us know of any issues you encounter on Github, or send pull requests adding exciting new features. Most importantly, enjoy easily bringing smiles to your backers!

Author: "Mike Rooney"
Date: Monday, 21 Apr 2014 21:30

Welcome to the Ubuntu Weekly Newsletter. This is issue #364 for the week April 14 – 20, 2014, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth Krumbach Joseph
  • Paul White
  • Emily Gonyer
  • Tiago Carrondo
  • Jose Antonio Rey
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Author: "lyz"
Date: Monday, 21 Apr 2014 20:44
Recently I came across www.androidpolice.com/2014/04/07/new-app-huey-synchronizes-philips-hue-lights-with-your-movies-and-tv-shows-for-awesome-ambient-lighting-effects/ and thought it was pretty neat.  The lights were expensive, however, and it required your phone or tablet to be in use every time you wanted to use it, which seemed sub-optimal.

I've been hunting for a useful project to do with my Raspberry Pi, and found out that there were two major projects centered around getting something similar set up.

Ambi-TV: https://github.com/gkaindl/ambi-tv
Hyperion: https://github.com/tvdzwan/hyperion/wiki

With both software projects, you take an HDMI signal, convert it to analog, and then capture the analog signal to analyze.  Once the signal is analyzed, a string of addressable LEDs is programmed to match the colors along the borders of the picture.
I did my initial setup using both software packages, but in the end preferred Hyperion for its ease of configuration and its results.

Necessary Hardware

I purchased the following (links point to where I purchased):
Other stuff I already had on hand that was needed:
  • Soldering tools
  • Spare prototyping board
  • Raspberry pi w/ case
  • Extra HDMI cables
  • Analog Composite cable 
  • Spare wires

Electronics Setup

Once everything arrived, I soldered a handful of wires to a prototyping board so that I could house more of the pieces in the raspberry pi case.  I used a cut up micro USB cord to provide power from the 5V rail and ground to the pi itself and then also to one end of the 4 pin JST adapter.

Prototyping board; probably this size is overkill, but I have flexibility for future projects to add on now.
The power comes into the board and is used to power both the LEDs and the raspberry pi from a single power source.  The clock and data lines for the LED string are connected to some header cable to plug into the raspberry pi.

GPIO connectors

The clock and data lines on the LPD8806 strip (DI/CI) matched up to these pins on the raspberry pi:

  Pin 19 (MOSI)  ->  LPD8806 DAT pin
  Pin 23 (SCLK)  ->  LPD8806 CLK pin
Although it's possible to power the raspberry pi from the 5V and ground rails in the GPIO connector on the pi instead of micro USB, there is no over current protection on those rails.  In case of any problems with a current spike the pi would be toast.

Case

Once I got everything put into the pi properly, I double checked all the connections and closed up the case.
My pi case with the top removed and an inset put in for holding the proto board

Whole thing assembled

TV mounted LEDs

I proceeded to do the TV.  I have a 46" set, which works out to 18 LEDs on either side and 30 LEDs on the top and bottom.  I cut up the LED strips and used double sided tape to affix them to the TV.  Once the LED strips are cut up you have to solder 4 pins from the out end of one strip to the in end of another strip.  I'd recommend looking for some of the prebuilt L corner strips if you do this.  I didn't use them and it was a pain to strip and hold such small wires in place to solder in the small corners.  All of the pins that are marked "out" on one end of the LED strip get connected to the "in" end on the next strip.

Back of TV w/ LEDs attached

Corner with wires soldered on from out to in

External hardware Setup

From the output of my receiver that would be going to my TV, I connect it to the input of the HDMI splitter.
The HDMI splitter's primary output goes to the TV.
The secondary output goes to the HDMI2AV adapter.
The HDMI2AV adapter's composite video output gets connected to the video input of the USB grabber.
The USB grabber is plugged directly into the raspberry pi.


Software Setup

Once all the HW was together I proceeded to get the software set up.  I originally had an up to date version of raspbian wheezy installed.  It included an updated kernel (version 3.10).  I managed to set everything up using it except the grabber, but then discovered that there were problems with the USB grabber I purchased.
Plugging it in causes the box to kernel panic.  The driver for the USB grabber has made it upstream in kernel version 3.11, so I expected it should be usable in 3.10 with some simple backporting tweaks, but didn't narrow it down entirely.

I did find out that kernel 3.6.11 did work with an earlier version of the driver however, so I re-did my install using an older snapshot of raspbian.  I managed to get things working there, but would like to iron out the problems causing a kernel panic at some point.

USB Grabber instructions

The USB grabber I got is dirt cheap but not based off the really common chipsets already supported in the kernel with the versions in raspbian, so it requires some extra work.
  1. Install Raspbian snapshot from 2013-07-26.  Configure as desired.
  2. git clone https://github.com/gkaindl/ambi-tv.git ambi-tv
  3. cd ambi-tv/misc && sudo sh ./get-kernel-source.sh
  4. cd usbtv-driver && make
  5. sudo mkdir /lib/modules/3.6.11+/extra
  6. sudo cp usbtv.ko /lib/modules/3.6.11+/extra/
  7. sudo depmod -a
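
At this point a quick sanity check is possible (these are just generic commands, not part of the original steps):

  sudo modprobe usbtv     # load the freshly built module
  dmesg | tail            # look for the usbtv driver claiming the grabber
  ls /dev/video*          # a video device node should now exist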

Hyperiond Instructions

After getting the grabber working, installing hyperion is a piece of cake.  This will set up hyperiond to start on boot.
  1. wget -N https://raw.github.com/tvdzwan/hyperion/master/bin/install_hyperion.sh
  2. sudo sh ./install_hyperion.sh
  3. Edit /etc/modprobe.d/raspi-blacklist.conf using nano.  Comment out the line with blacklist spi-bcm2708 (a one-liner for this step is sketched after this list).
  4. sudo reboot
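
If you prefer to do step 3 from the command line, a one-liner along these lines should work (it just comments out the blacklist entry; double-check the file afterwards):

  sudo sed -i '/blacklist spi-bcm2708/ s/^/#/' /etc/modprobe.d/raspi-blacklist.conf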

Hyperion configuration file

From another PC that has java (OpenJDK 7 works on Ubuntu 14.04)
  1. Visit https://github.com/tvdzwan/hyperion/wiki/configuration and fetch the jar file.
  2. Run it to configure your LEDs.
  3. From the defaults, I particularly had to change the LED type and the number of LEDs around the TV (see the snippet after this list).
  4. My LEDs were originally listed as RGB, but I later discovered that they are GRB.  If you encounter problems later with the wrong colors showing up, you can change them here too.
  5. Save the conf file and scp it into the /etc directory on your pi
  6. sudo /etc/init.d/hyperiond restart
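
For reference, the part of the generated config that these steps touch looks roughly like this. Treat it as an approximation; the exact keys and values come from the file the configuration tool writes:

  "device" :
  {
      "name"       : "MyPi",
      "type"       : "lpd8806",
      "output"     : "/dev/spidev0.0",
      "rate"       : 1000000,
      "colorOrder" : "grb"
  },
  "leds" : [ ... one entry per LED, 96 in total for the 2x18 + 2x30 layout ... ]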

Test the LEDs

  1. Plug in the LEDs and install the test application at https://github.com/tvdzwan/hyperion/wiki/android-remote
  2. Try out some of the patterns and color wheel to make sure that everything is working properly.  It will save you problems later diagnosing grabber problems if you know things are sound here (this is where I found my RGB/GRB problem).
Test pattern

Set up things for Hyperion-V4L2

I created a script in ~ called toggle_backlight.sh.  It runs the V4L2 capture application (hyperion-v4l2) and sets the LEDs accordingly.  I can invoke it again to turn off the LEDs.  As a future modification I intend to control this with my harmony remote or some other method.  If someone comes up with something cool, please share.

#!/bin/sh
ARG=toggle
if [ -n "$1" ]; then
        ARG=$1
fi
RUNNING=$(pgrep hyperion-v4l2)
if [ -n "$RUNNING" ]; then
        if [ "$ARG" = "on" ]; then
                exit 0
        fi
        pkill hyperion-v4l2
        exit 0
fi
hyperion-v4l2 --crop-height 30 --crop-width 10 --size-decimator 8 --frame-decimator 2 --skip-reply --signal-threshold 0.08&

That's the exact script I use to run things.  I had to modify the crop height from the defaults that were in the directions elsewhere to avoid flicker on the top.  To diagnose problems here, I'd recommend using the --screenshot argument of hyperion-v4l2 and examining the output.

Once you've got it good, add it to /etc/rc.local to start up on boot:

su pi -c /home/pi/toggle_backlight.sh

Test It all together

Everything should now be working.

Here's my working setup:

https://www.youtube.com/watch?v=nSrGfh8asgg
Author: "Mario Limonciello"
Date: Monday, 21 Apr 2014 18:54

Bicentennial Man Poster

Ever since we started building the Ubuntu SDK, we’ve been trying to find ways of bringing the vast number of Android apps that exist over to Ubuntu. As with any new platform, there’s a chasm between Android apps and native apps that can only be crossed through the effort of porting.

There are simple solutions, of course, like providing an Android runtime on Ubuntu. On other platforms, those have been shown to present Android apps as second-class citizens that can’t benefit from a new platform’s unique features. Worse, they don’t provide a way for apps to gradually become first-class citizens, so the chasm between Android and native still exists, which means the vast majority of apps supported this way will never improve.

There are also complicated solutions, like code conversion, that try to translate Android/Java code into the native platform’s language and toolkit, preserving logic and structure along the way. But doing this right becomes such a monumental task that making a tool to do it is virtually impossible, and the amount of cleanup and checking needed to be done by an actual developer quickly rises to the same level of effort as a manual port would have required. This approach also fails to take advantage of differences in the platforms, and will re-create the old way of doing things even when it doesn’t make sense on the new platform.

NDR takes a different approach to these: it doesn’t let you run your Android code on Ubuntu, nor does it try to convert your Android code to native code. Instead NDR will re-create the general framework of your Android app as a native Ubuntu app, converting Activities to Pages, for example, to give you a skeleton project on which you can build your port. It won’t get you over the chasm, but it’ll show you the path to take and give you a head start on it. You will just need to fill it in with the logic code to make it behave like your Android app. NDR won’t provide any of the logic for you, and chances are you’ll want to do it slightly differently than you did in Android anyway, due to the differences between the two platforms.

To test NDR during development, I chose the Telegram app because it was open source, popular, and largely used Android’s layout definitions and components. NDR will be less useful against apps such as games, which use their own UI components and draw directly to a canvas, but it’s pretty good at converting apps that use Android’s components and UI builder.

After only a couple days of hacking I was able to get NDR to generate enough of an Ubuntu SDK application that, with a little bit of manual cleanup, it was recognizably similar to the Android app’s.

This proves, in my opinion, that bootstrapping an Ubuntu port based on Android source code is not only possible, but is a viable way of supporting Android app developers who want to cross that chasm and target their apps for Ubuntu as well. I hope it will open the door for high-quality, native Ubuntu app ports from the Android ecosystem.  There is still much more NDR can do to make this easier, and having people with more Android experience than me (that would be none) would certainly make it a more powerful tool, so I’m making it a public, open source project on Launchpad and am inviting anybody who has an interest in this to help me improve it.

Author: "Michael Hall"
Date: Monday, 21 Apr 2014 15:30
On Friday I started my app "GetThereDC". I started by adding the locations of all of the Bikeshare stations in DC to a map. Knowing where the stations are is great, but it's a bummer when you go to a station and there are no bikes, or there are no empty parking spots. Fortunately, that exact information is in the XML feed, so I just need a way to display it.  
The way I decided to do it is to make the POIs (the little icons for each station on the map) clickable, and when the user clicks a POI, to use the Popover feature in the Ubuntu Components toolkit to display the data.

Make the POI Clickable

When you want to make anything "clickable" in QML, you just use a MouseArea component. Remember that each POI is constructed as a delegate in the MapItemView as an Image component. So all I have to do is add a MouseArea inside the Image and respond to the Click event. So, now my image looks like this:
sourceItem: Image
{
    id: poiImage
    width: units.gu(2)
    height: units.gu(2)
    source: "images/bike_poi.png"
    MouseArea
    {
        anchors.fill: parent
        onClicked:
        {
            print("The POI was clicked! ")
        }
    }
}
This can be used anywhere in QML to make an image respond to a click. MouseArea, of course, has other useful events as well, for things like onPressed, onPressAndHold, etc...

Add the Roles to the XmlListModel

I already know that I'll want something to use for a title for each station, the address, as well as the number of bikes and the number of parking slots. Looking at the XML I can see that the "name" property is the address, so that's a bonus. Additionally, I can see the other properties I want are called "nbBikes" and "nbEmptyDocks". So, all I do is add those three new roles to the XmlListModel that I constructed before:
XmlListModel
{
    id: bikeStationModel
    source: "https://www.capitalbikeshare.com/data/stations/bikeStations.xml"
    query: "/stations/station"
    XmlRole { name: "lat"; query: "lat/string()"; isKey: true }
    XmlRole { name: "lng"; query: "long/string()"; isKey: true }
    XmlRole { name: "name"; query: "name/string()"; isKey: true }
    XmlRole { name: "available"; query: "nbBikes/string()"; isKey: true }
    XmlRole { name: "freeSlots"; query: "nbEmptyDocks/string()"; isKey: true }
}

Make a Popover Component

The Ubuntu SDK offers some options for displaying additional information. In old school applications these might be dialog boxes, or message boxes. For the purposes of this app, Popover looks like the best bet. I suspect that over time the popover code might get a little complex, so I don't want it to be too deeply nested inside the MapItemView, as the code will become unwieldy. So, instead I decided to add a file called BikeShareStationPopover.qml to the components sub-directory. Then I copied and pasted the sample code in the documentation to get started.

To make a popover, you start with a Component tag, and then add a popover tag inside that. Then, you can put pretty much whatever you want into that Popover. I am going to go with a Column and use ListItem components because I think it will look nice, and it's the easiest way to get started. Since I already added the XmlRoles I'll just use those roles in the construction of each popover. 

Since I know that I will be adding other kinds of POI, I decided to add a Capital Bike Share logo to the top of the list so users will know what kind of POI they clicked. I also added a close button just to be certain that users don't get confused about how to go back to the map. So, at the end of the day, I just have a column with ListItems:
import QtQuick 2.0
import Ubuntu.Components 0.1
import Ubuntu.Components.ListItems 0.1 as ListItem
import Ubuntu.Components.Popups 0.1

Component
{
    id: popoverComponent
    Popover
    {
        id: popover
        Column
        {
            id: containerLayout
            anchors
            {
                left: parent.left
                top: parent.top
                right: parent.right
            }
            ListItem.SingleControl
            {
                control: Image
                {
                    source: "../images/CapitalBikeshare_Logo.jpg"
                    height: units.gu(5)
                    width: units.gu(5)
                }
            }
            ListItem.Header { text: name }
            ListItem.Standard { text: available + " bikes available" }
            ListItem.Standard { text: freeSlots + " parking spots available" }
            ListItem.SingleControl
            {
                highlightWhenPressed: false
                control: Button
                {
                    text: "Close"
                    onClicked: PopupUtils.close(popover)
                }
            }
        }
    }
}

Make the Popover Component Appear on Click

So, now that I made the component code, I just need to add it to the MapItemView and make it appear on click. So, I add the tag to the MapQuickItem delegate and give it an id, and change the onClicked handler for the MouseArea to open the popover:
delegate: MapQuickItem
{
    id: poiItem
    coordinate: QtPositioning.coordinate(lat,lng)
    anchorPoint.x: poiImage.width * 0.5
    anchorPoint.y: poiImage.height
    z: 9
    sourceItem: Image
    {
        id: poiImage
        width: units.gu(2)
        height: units.gu(2)
        source: "images/bike_poi.png"
        MouseArea
        {
            anchors.fill: parent
            onClicked:
            {
                PopupUtils.open(bikeSharePopover)
            }
        }
    }
    BikeShareStationPopover
    {
        id: bikeSharePopover
    }
}
And when I run the app, I can click on any POI and see the info I want! Easy!

Code is here
Author: "Rick Spencer"
Date: Sunday, 20 Apr 2014 19:52
First of all, my apologies for disappearing due to personal reasons (I went out for a few days). But it's here: Lubuntu 14.04, codename Trusty Tahr. The missing links for PowerPC machines have been recovered. Feel free to go to the Downloads section and grab it. If you need more info check the release page.
Author: "Rafael Laguna"
Date: Sunday, 20 Apr 2014 03:15


And the Ubuntu Open Week for this cycle is just around the corner! This will be three days full of excitement, where you will be able to learn what different teams and people in the community do. Whether you are a developer, a designer, a tester or a community member, this is the right event if you want to get involved with the community and are looking for a starting point.

The event will take place from April 22nd to April 24th 2014, from 15:00 to 19:00 UTC each day. During these three days we will have people from various teams, such as the Server, Documentation, and Juju teams. There are twelve different sessions scheduled, so make sure to find which ones interest you and write the times down in your calendars! The full schedule can be found at the Open Week Wiki page.

All sessions will take place at #ubuntu-classroom and #ubuntu-classroom-chat on irc.freenode.net (click here to join from your web browser). There are three sessions in the schedule which are labeled with the [ON AIR!] tag, which means the session will be streamed live at the Ubuntu on Air! webpage.

In case you cannot attend the event, logs will be linked in the schedule as soon as they become available. For On Air! sessions, they will be available at the Ubuntu on Air! YouTube channel. Hope to see you all there!


Author: "José Antonio Rey"
Date: Saturday, 19 Apr 2014 22:35
The past week has been exhilarating and exhausting for our Kubuntu crew. I'm sure the other *buntu teams were working just as hard. Not just packaging, because that goes on all the time, though not at this intense pace. But the attention to detail, the testing, polishing, patching, discussion with developers to get those patches upstream, coordination with Debian, cleaning up copyright files, man pages and other documentation, making screen shots, our user docs and new website, more testing, more polish.... it was truly an amazing effort.

I used `ubuntu-bug` from the cli more than I ever have before, testing out the betas. It was an amazing experience to file the bug, and then see it fixed within the day! This happened again and again. The entire Ubuntu ecosystem really works well together. My thanks to those developers who read and respond to those bug reports.

What I love about Kubuntu is how everyone pitches in. All of us try to maintain balance in our lives, so that there is time for leisure and enrichment, along with work. Also, the work is fun, because the team enjoys one another, posting fun links, joking around, but continuing to work away on our todo lists. Even those who didn't have time for packaging, often stopped by the devel channel to find out what needed testing. It all helped!

Since I'm not a devel, all this was inspiring rather than exhausting. So I had the time and energy to spend time helping out folks with questions and trouble in #kubuntu and #kde. That felt great! We were able to answer most of the questions, and overcome most of the difficulties.

One issue that came up quite a few times in the last couple of days, was PPAs. On a clean install, of course all old PPAs are blown away. On an upgrade, however, they can linger and cause lots of perplexing problems. Official PPAs like backports are fine, but specialty ones should be removed before upgrading. If you need them, you can always re-add after the upgrade. For the same reason, unpin any packages you have pinned.

It is really fabulous to be able to present the latest KDE software into our Kubuntu LTS. This will give us the freedom to try out the newest stuff from KDE based on the sparkly new Frameworks, Plasma Next and so forth, in our next release. So, our users will be able to use software supported for five years if they want, while also having the option to install 14.10 (if all goes well) and check out the newest.
Author: "Valorie Zimmerman"
Date: Saturday, 19 Apr 2014 17:57

UbuconLA is happy to announce that Canonical and the Ubuntu Community will be sponsors of the 2014 edition.

Also, the Ubuntu community will be present at the event, with members from Spain, India, Uruguay, Venezuela, Mexico, Peru, and Colombia giving talks and workshops: the perfect place to learn a lot about Linux, Ubuntu, and community, all in this amazing event.

 

 

You can find more information about UbuconLA on the official site and wiki page.


Author: "sergiomeneses"
Date: Saturday, 19 Apr 2014 17:31

Today, tweaking the Bootstrap_Walker class used by Melany for another project, I discovered an interesting issue with the title attribute.

Melany allows you to prepend a menu item with an icon coming from the Glyphicon set included in Twitter Bootstrap in a very easy way: just put the glyphicon name in the menu item’s title attribute field and let Melany do the rest. See an example in the following image:

How to prepend icons to menu items
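
For example, if you put glyphicon-home in a menu item's title attribute field, the markup Melany generates looks roughly like this (simplified):

<a href="[menu_item_url]"><span class="glyphicon glyphicon-home"></span>&nbsp;[navigation_label]</a>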

So, what’s the problem? Well, if you try to define a true title attribute, it won’t work, because the Bootstrap_Walker handles this attribute as if it were an icon. Let me give an example. If you want to set the title attribute to “This link opens in a new tab”, the resulting markup is:

<a href="[menu_item_url]"><span class="glyphicon This link opens in a new tab"></span>&nbsp;[navigation_label]</a>

Of course, you wanted something like this:

<a href="[menu_item_url]" title="This link opens in a new tab">[navigation_label]</a>

I solved this issue with a simple check to see if the word glyphicon is in the title attribute, so you can now use this real attribute without problems. The fix is already in the 1.1.0-dev version, but will soon be released in the 1.0.5 series too.

I hope this has not caused you too many hassles.

Oh, do you know Melany 1.1.0 Alpha2 has been released? Check it out!

Author: "Mattia “deshack” Migliorini"