Date: Friday, 10 Oct 2014 10:59

We’re back with Season Seven, Episode Twenty-Eight of the Ubuntu Podcast, with Alan Pope, Laura Cowen, Mark Johnson, and Tony Whitmore! We ate this carrot cake from the Co-op. It’s very tasty.

In this week’s show:

  • We also discuss:
  • We share some Command Line Lurve which saves you valuable keystrokes:
    tar xvf archive.tar.bz2
    tar xvf foo.tar.gz
    

    Tar now auto-detects the compression algorithm used! (See the quick sketch after this list.)

  • And we read your feedback. Thanks for sending it in!
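
A quick before/after sketch of that tip, assuming GNU tar (the single-letter flags are the classic ones):

  # Before: the option had to match the compression format
  tar xjvf archive.tar.bz2   # -j for bzip2
  tar xzvf foo.tar.gz        # -z for gzip
  # Now: plain xvf works, tar detects the format on extraction
  tar xvf archive.tar.bz2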

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Author: "admin"
Date: Friday, 10 Oct 2014 07:25

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
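
For illustration, setting up such a controller could look like this (package names as above; the OpenStack credential values are placeholders for whatever your cloud provides):

  $ sudo apt-get install autopkgtest python-novaclient
  $ export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
  $ export OS_TENANT_NAME=mytenant
  $ export OS_USERNAME=myuser
  $ export OS_PASSWORD=mypassword
  $ export OS_REGION_NAME=myregion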

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored all the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; its only parameter is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.
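
As a sketch, the manual flow that create-nova-adt-image.sh automates looks roughly like this (the instance and key names are made up for illustration, and your cloud may additionally need network arguments):

  $ nova boot --image ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img \
        --flavor m1.small --key-name mykey adt-image-build
  $ ssh ubuntu@<instance IP> sudo /usr/share/autopkgtest/adt-setup-vm
  $ nova image-create --poll adt-image-build adt-utopic-amd64
  $ nova delete adt-image-build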

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.
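
To actually run this daily you could put it into the controller’s crontab, e.g. (times and paths are illustrative):

  0 6 * * *  /home/ubuntu/create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  30 6 * * * /home/ubuntu/create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img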

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.
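
If you are unsure which values apply to your cloud, the nova client can list them (the output of course depends on your tenant):

  $ nova flavor-list   # pick a flavor with enough RAM/disk for your tests
  $ nova net-list      # lists the available net-id values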

Finally, let's run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, which are due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Author: "pitti"
Date: Friday, 10 Oct 2014 05:53

Somewhere roundabout 10 years ago Ryan Troy started what we all know and love – Ubuntu Forums.

The look and feel has gone through several iterations, matching the Ubuntu colour scheme evolutions. Each look & feel has had its own crowd saying the previous one was better, but the one constant has been the members who in their thousands log on to share knowledge.


As part of the forum-wide celebration there’s a custom avatar you can use if you wish; and if you have fewer than 10 posts, you too can use it, now that we’ve changed the 10-post rule to allow custom avatars. We’re also making some changes to how we deal with other operating systems – soon.

For now we’ll be checking the posts that get made roundabout midnight of the 9th – if we pick you – expect a PM from one of us.

Thanks for your participation in helping the forum become what it is – as we passed through 2 million threads and rapidly approach 2 million users.


Author: "forumscouncil"
Date: Friday, 10 Oct 2014 01:07

Frameworks 5.3.0 has finished uploading to the archive! apt-get update is all that is required to upgrade.
We are currently finishing up Plasma 5.1.0! The problems encountered during beta have been resolved :)

Author: "Scarlett Clark"
Date: Thursday, 09 Oct 2014 10:30

Top open source news website TheMukt has an article headlined KDE’s Plasma used in Hobbit movies.  Long-time fans of Kubuntu will know of previous boasts that they use a 35,000-core Ubuntu farm to render films like Avatar and The Hobbit, supported by a Kubuntu desktop.  Great to see they’re making use of Plasma 4 and its desktop capabilities.

Author: "Jonathan Riddell"
Date: Wednesday, 08 Oct 2014 17:55

I do not remember exactly when I started working on ARMv8 stuff. I checked old emails from my Linaro times and found that we discussed an AArch64 bootstrap using OpenEmbedded during Linaro Connect Asia (June 2012). But it had to wait a bit…

First we took OpenEmbedded and created all the tasks/images we needed, but built them for 32-bit ARM. By September we had all the toolchain parts available: binutils was public, gcc was public, and glibc was on its way to release. I remember the moment when I built my first “helloworld” — probably as one of the first people outside ARM and their hardware partners.

In the first week of October we had an ARMv8 sprint in Cambridge, UK (in the Linaro and ARM offices). When I arrived and took a seat I was told that glibc had just gone public. I fetched it, rebased my OpenEmbedded tree to drop traces of “private” patches, and started a build. Once it finished, everything went public in a git.linaro.org repository.

But we still lacked hardware… The only thing available was the Versatile Express emulator, which required a license from ARM Ltd. But then a free (though proprietary) emulator was released, so everyone was able to boot our images. OMG it was so slow…

Then the fun porting work started. I patched this and that, sent patches to OpenEmbedded and to upstream projects, and time went on. And on.

In January 2013 I got X11 running on emulated AArch64. I had to wait a few months before other distributions reached that point.

February 2013 was a nice moment, as the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development than Debian’s. All the work was merged back, so some time later Debian also had an AArch64 port.

It was March or April when OpenSUSE did a mass build of the whole distribution for AArch64. They had the largest number of packages built for quite a long time, but I did not track their progress too closely.

And then 31st May came, the day I left Linaro. But I had already had a call with Red Hat, so the future looked quite bright ;D

June was the month when the first silicon was publicly presented. I do not know what Jon Masters was showing, but it was probably some prototype from Applied Micro.

On 1st August I was officially hired by Red Hat and started a month later. My wife joked that the next step would be Retired Software Engineer ;D

So I moved from OpenEmbedded to Fedora with my AArch64 work. A lot of work here had already been done, as Fedora developers had been planning a 64-bit ARM port a few years before — when it was still at the design phase. So when Fedora 15 was bootstrapped for “armhf”, it was done as preparation for AArch64. The 64-bit ARM port was started in October 2012 with Fedora 17 packages (and switched to Fedora 19 during the work).

My first task at Red Hat was getting Qt4 working properly. That beast took a few days in the foundation model… Luckily we got our first hardware around then, so it went faster. 1-2 months later I had a remote APM Mustang available for my porting work.

In January 2014 QEMU got AArch64 system emulation, and people started migrating from the foundation model.

The next months were full of hardware announcements: AMD, Cavium, Freescale, Marvell, MediaTek, NVidia, Qualcomm, and others.

In the meantime I decided to try a crazy experiment with OpenEmbedded. I was the first to use it to build for AArch64, so why not be the first to build OE on 64-bit ARM?

And then June came, with an APM Mustang for use at home. Finally X11 forwarding started to be useful. One of the first things I did was run Firefox on AArch64, just for the fun of running the software whose porting/upstreaming took me the most time.

It did not take me long to get the idea of transforming the APM Mustang (which I named “pinkiepie”, as all machines at my home are named after cartoon characters) into an ARMv8 desktop. I am still waiting for a PCI Express riser and USB host support.

Now it is October. Soon it will be 2 years since the foundation model became available to everyone. And there are rumors of AArch64 development boards in production with prices below 100 USD. I will do whatever is needed to get one of them on my desk ;)


All rights reserved © Marcin Juszkiewicz
2 years of AArch64 work was originally posted on Marcin Juszkiewicz website

Author: "Marcin “hrw” Juszkiewicz"
Date: Wednesday, 08 Oct 2014 15:49

Some of you might recall seeing this insights article about Ubuntu and the City of Munich.  What you may not know is that the desktop in question is the Kubuntu flavor of Ubuntu.  This wasn’t clear in the original article and I really appreciate Canonical being willing to change it to make that clear.

Author: "skitterman"
Date: Tuesday, 07 Oct 2014 17:15

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141007 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the v3.16.4 upstream stable
kernel. This is available for testing as of the 3.16.0-21.28 upload to
the archive. Please test and let us know your results.
Also, Utopic Kernel Freeze is this Thurs Oct 9. Any patches submitted
after kernel freeze are subject to our Ubuntu kernel SRU policy. I sent
a friendly reminder about this to the Ubuntu kernel-team mailing list
yesterday as well.
-----
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~2 days away)
Thurs Oct 16 – Utopic Final Freeze (~1 week away)
Thurs Oct 23 – Utopic 14.10 Release (~2 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Testing
  • Precise – Testing
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 08-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Author: "Joseph Salisbury"
Date: Tuesday, 07 Oct 2014 17:09

The full team assemble (that’s Laura Cowen, Mark Johnson and Alan Pope, joined by a returning Tony Whitmore) in Studio L for Season Seven, Episode Twenty-Seven of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll have some mystery content and your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Author: "admin"
Date: Tuesday, 07 Oct 2014 16:25

But it isn’t perfect.  And that, in my opinion, is okay.  I’m not perfect, and neither are you, but you are still wonderful too.

I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.

You reap what you sow

People will usually respond in kind to the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive kinds. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute: nice people will have some unhappy followers, and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.

Don’t get even, get better

An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead use that negativity as fuel to drive you forward in a positive way, respond with coolness, thoughtfulness and introspection and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience

Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ berating of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier and in my opinion better to have a large number of positive people doing it.

Monoculture is dangerous

I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, “monoculture is bad; embrace diversity or die along with everyone else.” But it’s not just dangerous for software and agriculture, it’s dangerous to communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed, because they didn’t affect anyone else, but a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen a community, and adding a minority contributor will ultimately benefit a project more than adding another to the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you, but instead treat it like you would a normal bug report from somebody on different hardware than you.

Good people are human too

The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens to, for whatever reason, explode and try to kill the rest of the body. People in a community do this too.  I’ve seen a number of people that were good or even great contributors who, for whatever reason, had to explode and they threatened to take down anything they were a part of when it happened. But these people were no more malevolent than your appendix is, they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts

When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.

Author: "Michael Hall"
Date: Tuesday, 07 Oct 2014 09:21

As preannounced here, the GNOME Infrastructure has switched to a new Account Management System, which is reachable at https://account.gnome.org. All the details follow.

Introduction

It’s been a while since someone actually touched the underlying authentication infrastructure that powers the GNOME machines. The very first setup was originally configured by Jonathan Blandford (jrb), who set up an OpenLDAP instance with several customized schemas. (pServer fields in the old CVS days, pubAuthorizedKeys and GNOME modules related fields in recent times)

While the OpenLDAP server was living on the GNOME machine called clipboard (aka ldap.gnome.org), the clients were configured to synchronize users, groups and passwords through the nslcd daemon. After several years Jeff Schroeder joined the Sysadmin Team and during one cold evening (the date is Tue, February 1st 2011) spent some time configuring SSSD to replace the nslcd daemon, which was missing one of the most important SSSD features: caching. What convinced Jeff to adopt SSSD (a very new but promising piece of software at that time, as its first release happened right before 2010’s Christmas) was its caching feature, as the commit log also states: “New sssd module for ldap information caching”.

It was enough for a user to log in once: the ‘/var/lib/sss/db’ directory was then populated with their login information, saving the daemon in charge of picking up login details from querying the LDAP server every single time a request was made. This feature has definitely helped on many occasions, especially when the LDAP server was down for some reason and sysadmins needed to access a specific machine or service: without SSSD this was never going to work, and sysadmins would probably have been locked out of the machines they were supposed to manage (except where ‘/etc/passwd’, ‘/etc/group’ and ‘/etc/shadow’ entries remained as a fallback).

Things were working just fine except for a few downsides that appeared later on:

  1. the web interface (view) onto our LDAP user database was Mango, an outdated tool which many wanted to rewrite in Django and which slowly became a huge dinosaur nobody ever wanted to look into again
  2. the Foundation membership information was managed through a MySQL database: two databases, with two sets of users unrelated to each other
  3. users were not able to modify their own account information; even a single e-mail change required them to mail the GNOME Accounts Team, which would then authenticate their request and finally update the account.

Today’s infrastructure changes finally fix the issues outlined in (1), (2) and (3).

What has changed?

The GNOME Infrastructure is now powered by Red Hat’s FreeIPA, which bundles several FOSS components into one big “bundle”, all surrounded by an easy and intuitive web UI that will help users update their account information on their own without the need of the Accounts Team or any other administrative entity. Users will also find two custom fields on their “Overview” page, these being “Foundation Member since” and “Last Renewed on date”. As you may have understood already, we finally managed to migrate the Foundation membership database into LDAP itself to store the information we want once and for all. As a side note, it is possible that some users who were Foundation members in the past won’t find any detail stored in the Foundation fields outlined above. That is expected, as we were only able to migrate the current and old Foundation members who had an LDAP account registered at the time of the migration. If that’s your case and you would still like the information to be stored in the new setup, please get in contact with the Membership Committee stating so.

Where can I get my first login credentials?

Let’s make a little distinction between users that previously had access to Mango (usually maintainers) and users that didn’t. If you used to access Mango, you should be able to log in to the new Account Management System by entering your GNOME username and the password you used for logging in to Mango. (After logging in for the very first time you will be prompted to update your password; please choose a strong one, as this account will be unique across the whole GNOME Infrastructure.)

If you never had access to Mango, you lost your password, or the first time you read the word Mango in this post you thought “why is he talking about a fruit now?”, you should be able to reset it by using the following command:

ssh -l yourgnomeuserid account.gnome.org

The command will start an SSH connection between you and account.gnome.org; once authenticated (with the SSH key you previously registered on our Infrastructure) it will trigger a command that directly sends your brand new password to the e-mail address registered for your account. From my tests it seems GMail flags the e-mail as a phishing attempt, probably because the body contains the word “password” twice. So if the e-mail does not appear in your INBOX, please double-check your Spam folder.

Now that Mango is gone how can I request a new account?

With Mango we used to have a form that automatically e-mailed the maintainer of the selected GNOME module, who would then approve or reject the request. From there, in the case of a positive vote from the maintainer, the Accounts Team would create the account itself.

With the recent introduction of a commit robot directly on l10n.gnome.org, the number of account requests has dropped. In addition, users will now be able to perform pretty much all the needed maintenance on their accounts themselves. That said, and while we will probably build a form in the future, we feel that accounts can be requested by directly mailing the Accounts Team itself, which will mail the maintainer of the respective module and create the account. As just said, the number of account creations has become very low and the queue is currently clear. The documentation has been updated to reflect these changes at:

https://wiki.gnome.org/AccountsTeam
https://wiki.gnome.org/AccountsTeam/NewAccounts

I used to have access to a specific service but I don’t anymore, what should I do?

The migration of all the user data and ACLs has been massive, and I’ve spent a lot of time reviewing the existing HBAC rules trying to spot possible errors or misconfigurations. If you happen to be unable to access a certain service that you could reach in the past, please get in contact with the Sysadmin Team. All the possible ways to contact us are available at https://wiki.gnome.org/Sysadmin/Contact.

What is missing still?

Now that the Foundation membership information has been moved to LDAP, I’ll be looking at porting some of the existing membership scripts to it. What I have already ported are the welcome e-mails for new or existing members (renewals).

The next step will be generating a membership page from LDAP (to populate http://www.gnome.org/foundation/membership) and porting all the your-membership-is-going-to-lapse e-mails that were being sent until today.

Other news – /home/users mount on master.gnome.org

You will notice that logging in to master.gnome.org now results in your home directory being empty. Don’t worry, you did not lose any of your files; master.gnome.org now hosts its home directories itself. As you may have been aware, adding files to the public_html directory on master resulted in them appearing on your people.gnome.org/~userid space. That was unfortunate but expected, as both master and webapps2 (the machine serving people.gnome.org’s webspaces) were mounting the same GlusterFS share.

We wanted to stop that behaviour, as we want to know who has access to which resource and where. From today, master’s home directories are there just as a temporary spot for your tarballs: scp them over and run ftpadmin against them; that should be all you need from master. If you are interested in getting or keeping your people.gnome.org webspace, please mail <accounts AT gnome DOT org> stating so.
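
For example, a release upload would now look like this (module name and version are hypothetical):

ssh yourgnomeuserid@master.gnome.org
scp foo-3.14.0.tar.xz master.gnome.org:
ssh master.gnome.org ftpadmin install foo-3.14.0.tar.xz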

Other news – a shiny and new error 500 page has been deployed

Thanks to Magdalen Berns (magpie), a new error 500 web page has been deployed on all the Apache instances we host. The page contains an iframe of status.gnome.org and will appear whenever the web server behind the service you are trying to reach is unavailable for maintenance or other reasons. While I hope you won’t see the page that often, you can still enjoy it at https://static.gnome.org/error-500/500.html. Make sure to whitelist status.gnome.org in your browser, as the page currently loads it without https (the service is hosted on OpenShift, which provides us with a *.rhcloud.com wildcard certificate that differs from the CN the browser would expect).

Author: "Andrea Veri"
Date: Tuesday, 07 Oct 2014 08:33

Are you an Ubuntu Member? Have you ever wanted to get a technical certification?

My buddy Jorge Castro has an offer for you! Please take a look at this page over on Ubuntu Discourse:

http://discourse.ubuntu.com/t/100-off-linux-foundation-certification-for-ubuntu-members/1915

In Jorge's words, "Go rock that exam!"

Author: "randall"
Date: Tuesday, 07 Oct 2014 08:05

Greetings Planet! First, I'd like to apologize for not posting in a long while. Life has been, shall we say, interesting!

Up until the end of August, my focus has been on (non-Ubuntu-related) client work as part of my IT cyber-security consulting practice. This has meant that I've been traveling back and forth between San Francisco and Vancouver BC, living and working in both of these beautiful cities. This has also meant that I've been somewhat time-starved to do some of the things I've historically enjoyed doing in the Ubuntu world, blogging being one of those things.

So, what happened at the end of August? That's a bit of *great* news that I'll save that for an upcoming post. ;)

Author: "randall"
Date: Monday, 06 Oct 2014 23:04

Welcome to the Ubuntu Weekly Newsletter. This is issue #386 for the week September 29 – October 5, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Aaron Honeycutt
  • John Mahoney
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Author: "lyz"
Date: Monday, 06 Oct 2014 20:53

As was stated some months ago, the next Ubuntu Online Summit (UOS) is (almost) a month away, on November 12th to 14th. There will be five (5) tracks: app development, cloud development, community, Ubuntu development, and users. I will be one of the Community track leads. Since UOS is (again, almost) a month away, we should start planning sessions. For sessions that don’t need a blueprint, you are welcome to use the “propose a session” button on the UOS homepage. For the ones that require a blueprint, please use this Google Spreadsheet to add your session idea. Once we know how to name our UOS blueprints, it will be easy to keep track of which sessions still need to be proposed.


Author: "Svetlana Belkin"
Date: Monday, 06 Oct 2014 18:01

Today, I received my Acer Chromebook 13, in the glorious FullHD variant with 4GB RAM. For those of you who don’t know it, the Acer Chromebook 13 is a 13.3-inch Chromebook powered by a Tegra K1 CPU.


This version cannot currently be ordered; only pre-orders were shipped yesterday (at least here in Germany). I cannot even review it on Amazon (despite having bought it there), as they have not enabled reviews for it yet.

The device feels solidly built, and looks good. It comes in all-white matte plastic and is slightly reminiscent of the old white MacBooks. The keyboard is horrible; there’s no well-defined pressure point. It feels like you’re typing on a pillow. The display is OK, though an IPS would be a lot nicer to work with. Oh, and it could be brighter; I do not think that using it outside on a sunny day would be a good idea. The speakers are loud and clear compared to my ThinkPad X230.

The performance of the device is about acceptable (unfortunately, I do not have any comparison in this device class). Even when typing this blog post in the visual WordPress editor, I notice some sluggishness. Opening the app launcher or loading the new tab page while music is playing makes the music stop or skip for a few ms (20-50 ms if I had to guess). Running a benchmark in parallel or browsing does not usually cause this stuttering, though.

There are still some bugs in Chrome OS:  Loading the Play Books library the first time resulted in some rendering issues. The “Browser” process always consumes at least 10% CPU, even when idling, with no page open; this might cause some of the sluggishness I mentioned above. Also watching Flash videos used more CPU than I expected given that it is hardware accelerated.

Finally, Netflix did not work out of the box, despite the Chromebook shipping with a special Netflix plugin. I always get some unexpected issue-type page. Setting the user agent to Chrome 38 from Windows, thus forcing the use of the EME video player instead of the Netflix plugin, makes it work.

I reported these software issues to Google via Alt+Shift+I. The issues appeared on the current version of the stable channel, 37.0.2062.120.

What’s next? I don’t know.


Filed under: Chromebook
Author: "Julian Andres Klode"
Date: Monday, 06 Oct 2014 18:00

I spent the weekend using almost exclusively my Chromebook 13, on a single charge Saturday and Sunday.

Keyboard

I think I like the keyboard better now than when I first tried it. It gets nowhere near the ThinkPad X230 one, though; apart from the coating, which my (backlit) X230 unfortunately does not have.

Screen

While the screen appeared very grainy to me at first sight, having only used IPS screens in the past year, I got used to it over the weekend. I now do not notice much graininess anymore. The contrast still seems extremely poor, the colors are not vivid, and the vertical viewing angles are still a disaster, though.

Battery life

I think the battery life is awesome. I have 30% remaining now while I am writing this blog post, and Chrome OS tells me I still have 3 hours and 19 minutes remaining. It could probably still be improved, though; I notice that Chrome OS normally uses 7-14% CPU at idle (and up to 20% in exceptional cases).

The maximum power usage I measured using the battery’s internal sensor was about 9.2W, with 5 Big Buck Bunny 1080p videos playing in parallel. Average power consumption is around 3-5W (up to 6.5W with a single video playing), depending on brightness and use.
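
For the curious, here is a minimal sketch of reading such a sensor from a (dev-mode) shell; the sysfs path varies by device, and power_now is typically reported in microwatts:

awk '{printf "%.2f W\n", $1/1e6}' /sys/class/power_supply/BAT0/power_now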

Performance

While I do notice a performance difference to my much more high-end Ivy Bridge Core i5 laptop, it turns out to be usable enough to not make me want to throw it at a wall. Things take a bit longer than I am used to, but it is still acceptable.

Input: Software Part

The user interface is great. There are a lot of gestures available for navigating between windows, tabs, and in the history. For example, horizontally swiping with two fingers moves in history, three fingers moves between tabs; and swiping down (or up, for Australian scrolling) gives an overview of all windows (like Exposé on Mac, GNOME’s activities, or the multi-tasking thing Maemo used to have).

What I miss is a keyboard shortcut like Meta + Left/Right on GNOME, which moves the active window to the left/right side of the screen. That would be very useful for multi-tasking situations.

Issues

I noticed some performance issues. For example, I can easily get the Chromebook to use 85% of a CPU by scrolling on a page with the touchpad or 70% for scrolling by keeping a key pressed (crbug.com/420452).

While watching Big Buck Bunny on YouTube, I noticed some (micro) stuttering in the beginning of the film, as well as each time I move in or out of the video area when not in full-screen mode (crbug.com/420582). It also increases CPU usage to about 70%.

Running a “proper” Linux?

Today, I tried to play around a bit with Debian wheezy and Ubuntu trusty systems, in a chroot for now. I was trying to find out if I could get an accelerated X server with the standard ChromeOS kernel. The short answer is: no. I tried two things:

  1. Debian wheezy with the binaries from ChromeOS (they have the same xserver version)
  2. Ubuntu trusty with the Nvidia drivers

Unfortunately, neither worked. Option 1 failed because ChromeOS uses glibc 2.15 whereas wheezy uses 2.13. Option 2 failed because the sysfs interface differs between the ChromeOS and Linux4Tegra kernels.
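
For reference, a chroot like the ones above can be created with debootstrap; a minimal sketch, run on the device itself (so the native architecture is used), with the target directory and mirror as placeholders:

sudo debootstrap wheezy /local/chroots/wheezy http://ftp.debian.org/debian
sudo chroot /local/chroots/wheezy /bin/bash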

I guess I’ll have to wait.

I also tried booting a custom kernel from USB, but given that the u-boot always sets console= and there is no non-verified u-boot available yet, I could not see any output on the screen :( Maybe I should build a u-boot myself?


Filed under: Chromebook
Author: "Julian Andres Klode"
Date: Monday, 06 Oct 2014 14:18
As part of the 1.4.3 release of GStreamer, I helped the team by making the OS X and iOS builds. The process is easy but has a long sequence of steps, so it is worth sharing here in case you want to build your own GStreamer for either of these platforms.

1. First, you need to download CMake
http://www.cmake.org/files/v3.0/cmake-3.0.2-Darwin-universal.dmg

2. Add CMake to your PATH
$ export PATH=$PATH:/Applications/CMake.app/Contents/bin

3. Prepare the destination (as root)
$ mkdir /Library/Frameworks/GStreamer.framework
$ chown user:user /Library/Frameworks/GStreamer.framework


4. Check out the GStreamer release code
$ git clone git://anongit.freedesktop.org/gstreamer/sdk/cerbero
$ cd cerbero
$ git checkout -b 1.4 origin/1.4


5. Pin the commits to build
Edit config/osx-universal.cbc to contain the following:

prefix='/Library/Frameworks/GStreamer.framework/Versions/1.0'

recipes_commits = {
'gstreamer-1.0' : '1.4.3',
'gstreamer-1.0-static' : '1.4.3',
'gst-plugins-base-1.0' : '1.4.3',
'gst-plugins-base-1.0-static' : '1.4.3',
'gst-plugins-good-1.0' : '1.4.3',
'gst-plugins-good-1.0-static' : '1.4.3',
'gst-plugins-bad-1.0' : '1.4.3',
'gst-plugins-bad-1.0-static' : '1.4.3',
'gst-plugins-ugly-1.0' : '1.4.3',
'gst-plugins-ugly-1.0-static' : '1.4.3',
'gst-libav-1.0' : '1.4.3',
'gst-libav-1.0-static' : '1.4.3',
'gnonlin-1.0' : '1.2.1',
'gnonlin-1.0-static' : '1.2.1',
'gst-editing-services-1.0' : '1.2.1',
'gst-rtsp-server-1.0' : '1.4.3',
}


6. Run the bootstrap
$ ./cerbero-uninstalled bootstrap
$ echo "allow_parallel_build = True" > ~/.cerbero/cerbero.cbc


7. Run the build for OS X. Patience, it needs to build ~80 modules.
$ ./cerbero-uninstalled -c config/osx-universal.cbc package gstreamer-1.0

8. Run the build for iOS. Some extra steps are necessary for this build.
$ ./cerbero-uninstalled -c config/cross-ios-universal.cbc buildone gettext libiconv
$ ./cerbero-uninstalled -c config/cross-ios-universal.cbc package gstreamer-1.0
$ ./cerbero-uninstalled -c config/cross-ios-universal.cbc buildone gstreamer-ios-templates
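
9. Optionally, sanity-check the OS X build. This sketch assumes the .pkg produced by the package step has been installed, and that the framework exposes its command-line tools under Commands/ (the exact layout may differ):
$ /Library/Frameworks/GStreamer.framework/Commands/gst-launch-1.0 --version
$ /Library/Frameworks/GStreamer.framework/Commands/gst-launch-1.0 videotestsrc num-buffers=100 ! autovideosink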
Author: "--"
Date: Monday, 06 Oct 2014 14:11

FACIL, pour l’appropriation collective de l’informatique libre (FACIL), a Quebec-based non-profit, has decided to crowd-fund the development and production of a 16 GB USB key bearing the FACIL logo and capable of booting a selection of free-software operating systems (such as GNU/Linux and BSD) on a large set of target computers, specifically those using UEFI.

Modern systems often won’t boot Live USB keys created by traditional methods, especially when several systems are combined on one large-capacity USB key. This is a very useful item to add to your advocacy/testing toolkit.

The FACIL key will serve to propagate free software on the computers of ordinary Quebecers all the while providing FACIL with a better source of financing than only selling T-shirts and stickers. This can also be used by any other organization producing their own keys once the project has completed.

The project first consists in developing the prototype of a 16 GB USB key capable of booting different free-software operating systems. The key will be developed using MultiSystem, an excellent free-software application designed to do just that. MultiSystem will have to be modified to allow booting on computers with either a classic BIOS or the more recent UEFI.

The resulting improvements to the MultiSystem source code will be integrated into the project itself, meaning any other organization or individual using it will also be able to produce their own custom USB keys and benefit from this.

The next step will be to mass duplicate the USB key image on good quality devices that will bear the FACIL logo.

For more details on the funding needs and how the money will be used if this succeeds, see the project page at Goteo.

We need your support! Please consider donating any amount you can, and share this information with anyone interested in GNU/Linux and in general free open source software advocacy.

I am posting this as the acting president of FACIL; I can answer any questions about the project or relay them to those directly involved.

Author: "MagicFab"
Date: Monday, 06 Oct 2014 08:24
A final lovely quote from Creativity, Inc. by Ed Catmull. Please get the book for yourself if you want to know how to foster creativity in a community or company.
In the very early days of Pixar, John, Andrew, Pete, Lee, and Joe made a promise to one another. No matter what happened, they would always tell each other the truth. They did this because they recognized how important and rare candid feedback is and how, without it, our films would suffer. Then and now, the term we use to describe this kind of constructive criticism is "good notes." 
A good note says what is wrong, what is missing, what isn't clear, what makes no sense. A good note is offered in a timely moment, not too late to fix the problem. A good note doesn't make demands; it doesn't even have to include a proposed fix. But if it does, that fix is offered only to illustrate a potential solution, not to prescribe an answer. Most of all, though, a good note is specific. "I'm writhing with boredom," is not a good note.
Catmull quotes Andrew Stanton at length explaining the difference between criticism and constructive criticism, ending with: It's more of a challenge. "Isn't this what you want? I want that too!" [103]

I think this bit is the key: good criticism focuses on the common goal: a great product. It inspires, rather than creating defensiveness.

I read Reviewboard feedback in a sort of random way, and see a lot of "good note" behavior. But that timely part is sometimes missing. We have some Reviewboard requests languishing, along with patches in bug reports. Fortunately, the Gardening project has sprung up to improve this part of the community. Help out if you have time! https://mail.kde.org/mailman/listinfo/kde-gardening and https://community.kde.org/Gardening.
Author: "Valorie Zimmerman"