


Date: Wednesday, 01 Oct 2014 01:30

Over the last week, I started to think about how to improve collaboration between the Open Science groups and researchers, and also between the groups themselves. One of the ideas I thought about was using the simple tools that are already around in other Open * places (mainly Open Source/Linux distros). These tools are forums (Discourse and others), Planet feeds, and wikis. Using them creates a meta community where members can start out and then get themselves involved in one or more groups. Open Science seems to lack this meta community.

Even though that meta community is not present yet, I do think there is one group that could maintain it, and that group is the Open Knowledge Foundation Network (OKFN), which has a working group for Open Science. If they put in the time and the resources, it could happen; otherwise, some other group could be created for this.

What this meta community needs, tool-wise:

Planet Feeds

Since I’m an official Ubuntu Member, I’m allowed to add my blog’s feed to Planet Ubuntu. Planet Ubuntu lets anyone read blog posts from many Ubuntu Members because it’s one giant feed reader. Something like this is badly needed for Open Science, as Reddit doesn’t work for academia. I asked on the Open Science OKFN mailing list, and five people e-mailed me saying that they are interested in seeing one. My next goal is to ask the folks of Open Science OKFN for help with building a Planet for Open Science.

Forums

I can only think of one forum, the Mozilla Science Lab one, which I wrote about a few hours ago. Having a general forum lets users discuss everything from various projects to job postings for their groups. I don’t know if Discourse would be the right platform for the forums; to me, its dynamic nature is a bit too much at times.

Wiki

I have no idea if a wiki would work for this meta Open Science community, but having a guide that introduces newcomers to the groups would at least be worthwhile. There is a plan for such a guide.

I hope these ideas can be used by some group within the Open Science community and allow it to grow.


Author: "Svetlana Belkin"
Date: Tuesday, 30 Sep 2014 22:08

I am pleased to announce that the Mozilla Science Lab now has a forum that anyone can use.  Anyone can introduce themselves in this topic or the category.


Author: "Svetlana Belkin"
Date: Tuesday, 30 Sep 2014 19:24

Agenda

  • Review ACTION points from previous meeting
  • rbasak to review mysql-5.6 transition plans with ABI breaks with infinity
  • blueprint updating
  • U Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair
  • ACTION: meeting chair (of this meeting, not the next one) to carry out post-meeting procedure (minutes, etc) documented at https://wiki.ubuntu.com/ServerTeam/KnowledgeBase

Minutes

  • REVIEW ACTION POINTS FROM PREVIOUS MEETING
    • Regarding the mysql-5.6 transition / ABI action with infinity, rbasak noted that we decided to defer the 5.6 move for this cycle, as we felt it was too late given the ABI concerns.
  • UTOPIC DEVELOPMENT
    • LINK: https://wiki.ubuntu.com/UtopicUnicorn/ReleaseSchedule
    • LINK: http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-u-tracking-bug-tasks.html#ubuntu-server
    • LINK: http://status.ubuntu.com/ubuntu-u/group/topic-u-server.html
    • LINK: https://blueprints.launchpad.net/ubuntu/+spec/topic-u-server
  • SERVER & CLOUD BUGS (CARIBOU)
    • Nothing to report.
  • WEEKLY UPDATES & QUESTIONS FOR THE QA TEAM (PSIVAA)
    • Nothing to report.
  • WEEKLY UPDATES & QUESTIONS FOR THE KERNEL TEAM (SMB, SFORSHEE)
    • smb reports that he is digging into a potential race between libvirt and xen init
  • UBUNTU SERVER TEAM EVENTS
    • None to report.
  • OPEN DISCUSSION
    • Pretty quiet. Not even any bad jokes. Back to crunch time!
  • ANNOUNCE NEXT MEETING DATE AND TIME
    • next meeting will be Tue Oct 7 16:00:00 UTC 2014; the chair will be lutostag
  • MEETING ACTIONS
    • ACTION: all to review blueprint work items before next week's meeting.

People present (lines said)

  • beisner (54)
  • smb (8)
  • meetingology (4)
  • smoser (3)
  • rbasak (3)
  • kickinz1 (3)
  • caribou (2)
  • gnuoy (1)
  • matsubara (1)
  • jamespage (1)
  • arges (1)
  • hallyn (1)

IRC Log

Author: "Ryan Beisner"
Date: Tuesday, 30 Sep 2014 17:55

The sos team is pleased to announce the release of sos-3.2. This release includes a large number of enhancements and fixes, including:

  • Profiles for plugin selection
  • Improved log size limiting
  • File archiving enhancements and robustness improvements
  • Global plugin options:
    • --verify, --log-size, --all-logs
  • Better plugin descriptions
  • Improved journalctl log capture
  • PEP8 compliant code base
  • oVirt support improvements
  • New and updated plugins: hpasm, ctdb, dbus, oVirt engine hosted, MongoDB, ActiveMQ, OpenShift 2.0, MegaCLI, FCoEm, NUMA, Team network driver, Juju, MAAS, Openstack
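
As a rough illustration of the global plugin options listed above, an invocation might look like the following. This is a hedged sketch: the profile name is made up, the log-size value is arbitrary, and the exact flags should be checked against sosreport --help for the version you have installed.

# Collect all logs, cap individual logs at 20 MiB, and enable plugin verification
sudo sosreport --all-logs --log-size 20 --verify

# Select plugins by profile (assuming the -p/--profile switch that came with the new profiles feature)
sudo sosreport -p webserver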

References:

Author: "Adam Stokes"
Date: Tuesday, 30 Sep 2014 17:15

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140930 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel remains rebased on the v3.16.3 upstream stable
kernel. The latest upload to the archive is 3.16.0-19.26. Please
test and let us know your results.
Also, Utopic Kernel Freeze is next week on Thurs Oct 9. Any patches
submitted after kernel freeze are subject to our Ubuntu kernel SRU
policy.
—–
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~1 week away)
Thurs Oct 16 – Utopic Final Freeze (~2 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~3 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Trusty – Verification and Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 11-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Author: "Joseph Salisbury"
Date: Tuesday, 30 Sep 2014 14:24

“The Internet sees censorship as damage and routes around it” was a very motivating tagline during my early forays into the internet. Having grown up in Apartheid-era South Africa, where government control suppressed the free flow of ideas and information, I was inspired by the idea of connecting with people all over the world to explore the cutting edge of science and technology. Today, people connect with peers and fellow explorers all over the world not just for science but also for arts, culture, friendship, relationships and more. The Internet is the glue that is turning us into a super-organism, for better or worse. And yes, there are dark sides to that easy exchange – internet comments alone will make you cry. But we should remember that the brain is smart even if individual brain cells are dumb, and negative, nasty elements on the Internet are just part of a healthy whole. There’s no Department of Morals I would trust to weed ‘em out or protect me or mine from them.

Today, the pendulum is swinging back to government control of speech, most notably on the net. First, it became clear that total surveillance is the norm even amongst Western democratic governments (the “total information act” reborn).  Now we hear the UK government wants to be able to ban organisations without any evidence of involvement in illegal activities because they might “poison young minds”. Well, nonsense. Frustrated young minds will go off to Syria precisely BECAUSE they feel their avenues for discourse and debate are being shut down by an unfair and unrepresentative government – you couldn’t ask for a more compelling motivation for the next generation of home-grown anti-Western jihadists than to clamp down on discussion without recourse to due process. And yet, at the same time this is happening in the UK, protesters in Hong Kong are moving to peer-to-peer mechanisms to organise their protests precisely because of central control of the flow of information.

One of the reasons I picked the certificate and security business back in the 1990s was because I wanted to be part of letting people communicate privately and securely, for business and pleasure. I’m saddened now at the extent to which the promise of that security has been undermined by state pressure and bad actors in the business of trust.

So I think it’s time that those of us who invest time, effort and money in the underpinnings of technology focus attention on the defensibility of the core freedoms at the heart of the internet.

There are many efforts to fix this under way. The IETF is slowly become more conscious of the ways in which ideals can be undermined and the central role it can play in setting standards which are robust in the face of such inevitable pressure. But we can do more, and I’m writing now to invite applications for Fellowships at the Shuttleworth Foundation by leaders that are focused on these problems. TSF already has Fellows working on privacy in personal communications; we are interested in generalising that to the foundations of all communications. We already have a range of applications in this regard, I would welcome more. And I’d like to call attention to the Edgenet effort (distributing network capabilities, based on zero-mq) which is holding a sprint in Brussels October 30-31.

20 years ago, “Clipper” (a proposed mandatory US government back door, supported by the NSA) died on the vine thanks to a concerted effort by industry to show the risks inherent to such schemes. For two decades we’ve had the tide on the side of those who believe it’s more important for individuals and companies to be able to protect information than it is for security agencies to be able to monitor it. I’m glad that today, you are more likely to get into trouble if you don’t encrypt sensitive information in transit on your laptop than if you do. I believe that’s the right side to fight for and the right side for all of our security in the long term, too. But with mandatory back doors back on the table we can take nothing for granted – regulatory regimes can and do change, as often for the worse as for the better. If you care about these issues, please take action of one form or another.

Law enforcement is important. There are huge dividends to a society in which people can make long-term plans, which depends on their confidence in security and safety as much as their confidence in economic fairness and opportunity. But the agencies in whom we place this authority are human and tend over time, like any institution, to be more forceful in defending their own existence and privileges than they are in providing for the needs of others. There has never been an institution in history which has managed to avoid this cycle. For that reason, it’s important to ensure that law enforcement is done by due process; there are no short cuts which will not be abused sooner rather than later. Checks and balances are more important than knee-jerk responses to the last attack. Every society, even today’s modern Western society, is prone to abusive governance. We should fear our own darknesses more than we fear others.

A fair society is one where laws are clear and crimes are punished in a way that is deemed fair. It is not one where thinking about crime is criminal, or one where talking about things that are unpalatable is criminal, or one where everybody is notionally protected from the arbitrary and the capricious. Over the past 20 years life has become safer, not more risky, for people living in an Internet-connected West. That’s no thanks to the listeners; it’s thanks to living in a period when the youth (the source of most trouble in the world) feel they have access to opportunity and ideas on a world-wide basis. We are pretty much certain to have hard challenges ahead in that regard. So for all the scaremongering about Chinese cyber-espionage and Russian cyber-warfare and criminal activity in darknets, we are better off keeping the Internet as a free-flowing and confidential medium than we are entrusting an agency with the job of monitoring us for inappropriate and dangerous ideas. And that’s something we’ll have to work for.

Author: "mark"
Date: Tuesday, 30 Sep 2014 13:44
A StackExchange question back in February of this year inspired a new feature in Byobu that I had been thinking about for quite some time:
Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?
This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password, if you don't already have root or sudo access.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade

Here's a video to demonstrate!


On a related note, another user-requested feature has been added, to simultaneously synchronize this behavior among all splits. You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic). Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...




Cheers,
Dustin
Author: "Dustin Kirkland"
Date: Tuesday, 30 Sep 2014 13:24

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

CVE triaging

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said that it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should not be marked as unsupported security-wise. It looks like we’re going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple Java packages.
  • I reviewed many CVEs on dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we don’t consider that a security update is needed, usually because the problem is minor and/or fixing it has more chance of introducing a regression than of helping).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVEs concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look for a clear explanation of the security issue, a list of vulnerable versions, and patches for the versions we have in Debian in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on cve.mitre.org and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading source RPM from CentOS to extract the patch they used).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.
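
In practice, that last step boils down to something like the shell session below. This is a hedged sketch: the package name, unpacked directory and search pattern are purely illustrative (not tied to any CVE above), and apt-get source needs a matching deb-src line for squeeze in your sources.list.

# Fetch and unpack the squeeze source package
apt-get source apache2/squeeze
cd apache2-*

# Look for the function or code path touched by the upstream patch for the newer version
grep -rn "name_of_patched_function" .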

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).

Once I was over that first pass of triaging, I had already spent more than the 11 hours paid but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.


Author: "Raphaël Hertzog"
Date: Tuesday, 30 Sep 2014 03:16

Welcome to the Ubuntu Weekly Newsletter. This is issue #385 for the week September 22 – 28, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Author: "lyz"
Date: Monday, 29 Sep 2014 23:41

The way you beat an incumbent is by coming up with a thing that people want, that you do, and that your competitors can’t do.

Not won’t. Can’t.

How did Apple beat Microsoft? Not by making a better desktop OS. They did it by shifting the goalposts. By creating a whole new field of competition where Microsoft’s massive entrenched advantage didn’t exist: mobile. How did Microsoft beat Digital and the mainframe pushers? By inventing the idea that every desktop should have a real computer on it, not a terminal.

How do you beat Google and Facebook? By inventing a thing that they can’t compete against. By making privacy your core goal. Because companies who have built their whole business model on monetising your personal information cannot compete against that. They’d have to give up on everything that they are, which they can’t do. Facebook altering itself to ensure privacy for its users… wouldn’t exist. Can’t exist. That’s how you win.

If you ask actual people whether they want privacy, they say, yes. Always. But if you then ask, are they, are we, prepared to give that privacy up to get things? They say yes again. They, we, want privacy, but not as much as we want stuff. Not as much as we want to talk to one another. Giving up our personal data to enable that, that’s a reasonable cost to pay, because we don’t value our personal data. Some of that’s because there’s no alternative, and some of that’s because nobody’s properly articulated the alternative.

Privacy will define the next major change in computing.

We saw the change to mobile. The change to social. These things fundamentally redefined the way technology looked to the mainstream. The next thing will be privacy. The issue here is that nobody has worked out a way of articulating the importance of privacy which convinces actual ordinary people. There are products and firms trying to do that right now. Look at Blackphone. Look at the recent fertile ground for instant messaging with privacy included from Telegram and Threema and Whisper Systems’ TextSecure. They’re all currently basically for geeks. They’re doing the right thing, but they haven’t worked out how to convince real people that they are the right thing.

The company who work out how to convince people that privacy is important will define the next five years of technology.

Privacy, historically the concern of super-geeks, is beginning to poke its head above the parapet. Tim Berners-Lee calls for a “digital Magna Carta”. The EFF tries to fix it and gets their app banned because it’s threatening Google’s business model to have people defend their own data. The desire for privacy is becoming mainstream enough that the Daily Mash are prepared to make jokes about it. Apple declare to the world that they can’t unlock your iPhone, and Google are at pains to insist that they’re the same. We’re seeing the birth of a movement; the early days before the concern of the geeks becomes the concern of the populace.

So what about the ind.ie project?

The ind.ie project will tell you that this is what they’re for, and so you need to get on board with them right now. That’s what they’ll tell you.

The ind.ie project is to open source as Brewdog are to CAMRA. Those of you who are not English may not follow this analogy.

CAMRA is the Campaign for Real Ale: a British society created in the 1970s and still existing today who fight to preserve traditionally made beer in the UK, which they name “real ale” and have a detailed description of what “real ale” is. Brewdog are a brewer of real ale who were founded in 2007. You’d think that Brewdog were exactly what CAMRA want, but it is not so. Brewdog, and a bunch of similar modern breweries, have discovered the same hatred that new approaches in other fields also discovered. In particular, Brewdog have done a superb job at bringing a formerly exclusive insular community into the mainstream. But that insular community feel resentful because people are making the right decisions, but not because they’ve embraced the insular community. That is: people drink Brewdog beer because they like it, and Brewdog themselves have put that beer into the market in such a way that it’s now trendy to drink real ale again. But those drinking it are not doing it because they’ve bought into CAMRA’s reasoning. They like real ale, but they don’t like it for the same reasons that CAMRA do. As Daniel Davies said, every subculture has this complicated relationship with its “trendy” element. From the point of view of CAMRA nerds, who believe that beer isn’t real unless it has moss floating in it, there is a risk that many new joiners are fair-weather friends just jumping on a trendy bandwagon and the Brewdog popularity may be a flash in the pan. The important point here is that the new people are honestly committed to the underlying goals of the old guard (real ale is good!) but not the old guard’s way of articulating that message. And while that should get applause, what it gets is resentment.

Ind.ie is the same. They have, rather excellently, found a way of describing the underlying message of open source software without bringing along the existing open source community. That is, they’ve articulated the value of being open, and of your data being yours without it being sold to others or kept as commercial advantage, but have not done so by pushing the existing open source message, which is full of people who start petty fights over precisely which OS you use and what distribution A did to distribution B back in the mists of prehistory. This is a deft and smart move; people in general tend to agree with the open source movement’s goals, but are hugely turned off by interacting with that existing open source movement, and ind.ie have found a way to have that cake and eat it.

Complaints from open source people about ind.ie are at least partially justified, though. It is not reasonable to sneer at existing open source projects for knowing nothing about users and at the same time take advantage of their work. It is not at all clear how ind.ie will handle a bunch of essential features — reading an SD card, reformatting a drive, categorising applications, storing images, sandboxing apps from one another, connecting to a computer, talking to the cloud — without using existing open source software. The ind.ie project seem confident that they can overlay a user experience on this essential substrate and make that user experience relevant to real people rather than techies; but it is at best disingenuous and at worst frankly offensive to simultaneously mock open source projects for knowing nothing about users and then also depend on their work to make your own project successful. Worse, it ignores the time and effort that companies such as Canonical have put in to user testing with actual people. It’s blackboard economics of the worst sort, and it will have serious repercussions down the line when the ind.ie project approaches one of its underlying open source projects and says “we need this change made because users care” and the project says “but you called us morons who don’t care about users” and so ignores the request. Canonical have suffered this problem with upstream projects, and they were nowhere near as smugly, sneeringly dismissive as ind.ie have been of the open source substrate on which they vitally depend.

However, they, ind.ie, are doing the right thing. The company who work out how to convince people that privacy is important will define the next five years of technology. This is not an idle prediction. The next big wave in technology will be privacy.

There are plenty of companies right now who would say that they’re already all over that. As mentioned above, there’s Blackphone and Threema and Telegram and ello and diaspora. All of them are contributors and that’s it. They’re not the herald who usher in the next big wave. They’re ICQ, or Friends Reunited: when someone writes the History Of Tech In The Late 2010s, Blackphone and ello and Diaspora will be footnotes, with the remark that they were early adopters of privacy-based technology. There were mp3 players before the iPod. There were social networks before Facebook. All the existing players who are pushing privacy as their raison d’etre and writing manifestos are creating an environment which is ripe for someone to do it right, but they aren’t themselves the agent of change; they’re the Diamond Rio who come before the iPod, the ICQ who come before WhatsApp. Privacy hasn’t yet found its Facebook. When it does, that Facebook of privacy will change the world so that we hardly understand that there was a time when we didn’t care about it. They’ll take over and destroy all the old business models and make a new tech universe which is better for us and better for them too.

I hope it comes soon.

Author: "sil"
Date: Monday, 29 Sep 2014 17:45

Cloud Images and Bash Vulnerabilities

The Ubuntu Cloud Image team has been monitoring the bash vulnerabilities. Due to the scope, impact and high-profile nature of these vulnerabilities, we have published new images. New cloud images addressing the latest bash USN-2364-1 [1, 8, 9] are being released with a build serial of 20140927. These images include code to address all prior CVEs, including CVE-2014-6271 [6] and CVE-2014-7169 [7], and supersede images published in the past week which addressed those CVEs.

Please note: Securing Ubuntu Cloud Images requires users to regularly apply updates[5]; using the latest Cloud Images alone is insufficient.

Addressing the full scope of the Bash vulnerability has been an iterative process. The security team has worked with the upstream bash community to address multiple aspects of the bash issue. As these fixes have become available, the Cloud Image team has published new daily images[2]. New released images[3] have been made available at the request of the Ubuntu Security team.

Canonical has been in contact with our public Cloud Partners to make these new builds available as soon as possible.

Cloud image update timeline

Daily image builds are automatically triggered when new package versions become available in the public archives. New releases for Cloud Images are triggered automatically when a new kernel becomes available. The Cloud Image team will manually trigger new released images when requested by the Ubuntu Security team or when a significant defect requires it.

Please note: Securing Ubuntu cloud images requires that security updates be applied regularly [5]; using the latest available cloud image is not sufficient in itself. Cloud Images are built only after updated packages are made available in the public archives. Since it takes time to build the images, test/QA and finally promote them, there is a delay (sometimes considerable) between public availability of the package and updated Cloud Images. Users should consider this timing in their update strategy.
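
For instance, inside a running instance the standard apt workflow below applies pending updates, and the widely circulated one-liner is a quick (though not exhaustive) way to confirm that the original CVE-2014-6271 issue is closed; a patched bash prints only the echoed text, with no "vulnerable" line. This is an illustrative sketch, not part of the official guidance above.

# Apply pending updates, including the bash security fixes
sudo apt-get update && sudo apt-get dist-upgrade

# Commonly circulated check for CVE-2014-6271; a patched bash should NOT print "vulnerable"
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'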

[1] http://www.ubuntu.com/usn/usn-2364-1/
[2] http://cloud-images.ubuntu.com/daily/server/
[3] http://cloud-images.ubuntu.com/releases/
[4] https://help.ubuntu.com/community/Repositories/Ubuntu/
[5] https://wiki.ubuntu.com/Security/Upgrades/
[6] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-6271.html
[7] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7169.html
[8] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7187.html
[9] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7186.html
Author: "Ben Howard"
Date: Monday, 29 Sep 2014 15:25

We’re back with Season Seven, Episode Twenty-Six of the Ubuntu Podcast! Just Alan Pope and Laura Cowen with a set of interviews from Mark Johnson this week.

In this week’s show:

python 2.x:
python -m SimpleHTTPServer [port]

python 3.x:
python -m http.server [port]
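
As a quick usage sketch (not from the show notes; the port, address and file name are placeholders), serving the current directory and fetching a file from another machine on the same network looks roughly like this:

# On the machine sharing files (Python 3; the port defaults to 8000 if omitted)
python -m http.server 8080

# On another machine on the LAN
curl http://192.168.1.10:8080/somefile.txt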

  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Author: "admin"
Date: Monday, 29 Sep 2014 15:17

Quick question – we have Cloud Foundry in private beta now, is there anyone in the Ubuntu community who would like to use a Cloud Foundry instance if we were to operate that for Ubuntu members?

Author: "mark"
Date: Monday, 29 Sep 2014 13:36

On September 26th I undertook the rather daunting task of trialing something I strongly believe in. It really took me out of my comfort zone and put me front and center of an audience's attention, not only for my talents, but also for the technical implementation of their experience.

The back story

I've been amateur DJ'ing on Secondlife for about the last 7 months, and recently left the metaverse to pursue a podcast format of my show(s). What I found was I really missed the live interaction with people during the recording of the set. It was great to get feedback, audience participation, and I could really gauge the flow of energy that I'm broadcasting. To some this may sound strange, but when your primary interaction is over text, and you see a feed erupt with actions as you put on more high energy music, it just 'clicks' and makes sense.

The second aspect to this was that I wanted to showcase how you can get moving with Juju in less than a week to bring a production-ready app online and ready for scale (depending on the complexity of the app, of course). It's been a short while since I've pushed a charm from scratch into the charm store - and this will definitely get me re-acquainted with the process our new users go through on their Juju journey.

So, I've got a habit of mixing my passions in life. If you know me very well, you know that I am deeply passionate about what I'm working on, my hobbies, and the people that I surround myself with that I consider my support network. How can I leverage this to showcase and run a 'Juju lab' study?

The Shoutcast charm is born

I spent a sleepless night hacking away at a charm for a SHOUTcast DNAS server. They offer several PaaS scaling solutions that might work for people who are making money off of their hobby - but I myself prefer to remain an enthusiast and not turn a profit from my hobby. Juju is a perfect fit for deploying pretty much anything, and making sure that all the components work together in a distributed service environment. It's getting better every day - proof of this is the Juju GUI's just-announced machine view, where you can easily do co-location of services on the same server and get a deep-dive look at how your deployment is comprised of machines vs services.

Observations & Lessons

Testing what you expect never yields the unexpected

Some definite changes to just the shoutcast charm itself are in order.

  • Change the default stream MIME type from AAC to MP3 so it's cross-compatible on every OS without installing QuickTime.
  • Test EVERY OS before you jam out to production - skipping this may seem like a rookie mistake. I tested on Mac OS X and Ubuntu Linux (the default configuration for 14.04) and everything was in order. Windows users, however, who are not savvy with tech that stems from back in the 90's, were left out in the cold and prompted to install QuickTime when they connected. This is not ideal.
  • The 'automatic' failover that I touted in the readme depends on the client consuming the playlist. If the client doesn't support multiple streams in the playlist, it's not really automatic forwarding load balancing, but polling of failure cases against the listed resources.

Machine Metrics tell most of the story

I deployed this setup on Digital Ocean to run my 'lab test' - as the machines are cheap, performant, and you get 1TB of unmetered transfer before you have to jump up a pricing tier. This is a great mixture for testing the setup. But how well did the VPS perform?

I consumed 2 of the 'tiny' VPS servers for this. And the metrics of the transcoders were light enough that they barely touched the CPU. As a matter of fact, I saw more activity out of supporting infra services such as LogStash than I did out of the SHOUTcast charm. Excellent work on the implementation, Shoutcast devs. This was a pleasant surprise!

Pre-scaling was the winner

Having a relay setup out of the gate really helped to mitigate issues as I saw people get temporary hiccups in their network. I saw several go from the primary stream to the relay and finish out the duration of the broadcast connected there.

The fact that the clients supported this tells me that any time I do this live, I need to have at a bare minimum 2 hosts online transmitting the broadcast.

Had this been a single host, every blip in the network would have yielded dead airspace before anyone realized something had gone wrong.

[Image: Juju scaled SHOUTcast service]

Supportive people are amazing, and make what you do worthwhile

Those that tuned in genuinely enjoyed that I had the foresight to pre-record segments of the show to interact with them. This was more so I could investigate the server(s), watch htop metrics, refresh shoutcast, etc. However the fan interaction was genuinely empowering. I found myself wanting to turn around and see what was said next during the live-mixing segments.

The Future for Radio Bundle Development

Putting the auto in automation

I've found a GREAT service that I want to consume and deploy to handle the station automation side of this deployment. SourceFabric produces Airtime, which makes setting up radio automation very simple and supports such advanced configurations as mixing live DJs into your lineup on a schedule. How awesome is this? It's open-source to boot!

I'm also well on my way to having revision 1 of this bundle completed: I started this blog post on Friday, hacked on the bundle through the weekend, and landed here on Monday.

I'll be talking more about this after it's officially unveiled in Brussels.

Where to find the 'goods'

The Shoutcast Juju Charm can be found on Launchpad: lp:~lazypower/charms/trusty/shoutcast/trunk or github
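
If you want to try it yourself, a deployment might look roughly like the sketch below. This assumes the Juju 1.x local-repository syntax and an already bootstrapped environment; the directory layout is illustrative, and whether a second unit actually acts as a stream relay depends on the charm's own relation logic rather than on anything shown here.

# Branch the charm into a local repository layout that Juju 1.x understands
mkdir -p ~/charms/trusty
bzr branch lp:~lazypower/charms/trusty/shoutcast/trunk ~/charms/trusty/shoutcast

# Deploy one unit, then add a second so a single network blip need not mean dead air
juju deploy --repository=~/charms local:trusty/shoutcast
juju add-unit shoutcast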

The up-coming Airtime Radio Automation Charm can be found on github

Actual metrics and charts to be uploaded at a later date, once I've sussed out how I want to parse these and present them.

Author: "Charles Butler"
Date: Sunday, 28 Sep 2014 18:09

A few weeks ago, I decided to make an experiment and completely rework the global shortcuts of my KDE desktop. I wanted them to make a bit more sense instead of being the agglomerated result of inspirations from other systems, and was ready to pay the cost of brain retraining.

My current shortcut setup relies on a few "design" decisions:

  • All workspace-level shortcuts must use the Windows (also known as Meta) modifier key; application shortcuts are not allowed to use this modifier.

  • There is a logical link between a shortcut and its meaning. For example, the shortcut to maximize a window is Win + M.

  • The Shift modifier is used to provide a variant of a shortcut. For example the shortcut to minimize a window is Win + Shift + M.

I am still playing with it, but it is stabilizing these days, so I thought I'd write a summary of what I came up with:

Window management

  • Maximize: Win + M.

  • Minimize: Win + Shift + M.

  • Close: Win + Escape. This is somehow consistent with the current Win + Shift + Escape to kill a window.

  • Always on top: Win + T.

  • Shade: Win + S.

  • Switch between windows: Win + Tab and Win + Shift + Tab (yes, this took some work to retrain myself, and yes, it means I no longer have shortcuts to switch between activities).

  • Maximize left, Maximize right: Win + :, Win + !. This is very localized: ':' and '!' are the keys under 'M' on my French keyboard. Definitely not a reusable solution. I used to use Win + '(' and Win + ')' but it made more sense to me to have the maximize variants close to the full Maximize shortcut.

  • Inner window modifier key: Win. I actually changed this from Alt a long time ago: it is necessary to be able to use Inkscape, as it uses Alt + Click to select shapes under others.

Virtual desktop

  • Win + Left, Win + Right: Go to previous desktop, go to next desktop.

  • Win + Shift + Left, Win + Shift + Right: Bring the current window to the previous desktop, bring the current window to the next desktop.

  • Win + F1, Win + F2, Win + F3: Switch to desktop 1, 2 or 3.

Application launch

  • Win + Space: KRunner.

  • Win + Shift + Space: Homerun.

Misc

  • Win + L: Lock desktop.

How does it feel?

I was a bit worried about the muscle-memory retraining, but it went quite well. Of course I am a bit lost nowadays whenever I use another computer, but that was to be expected.

One nice side-effect I did not foresee is that this change turned the Win modifier into a sort of quasimode: all global workspace operations are done by holding the Win key. I said "sort of" because some operations require you to release the Win key before they are done. For example, when switching from one window to another, no shortcuts work as long as the window switcher is visible, so one needs to release the Win key after switching and press it again to do something else. I notice this most often when maximizing left or right.

Another good point of this approach is that almost no shortcuts use function keys. This is a good thing because: a) it can be quite a stretch for small hands to hold both the Win or Alt modifier together with a function key, and b) many laptops these days come with the function keys mapped to multimedia controls and need another modifier to be held to become real function keys; some other laptops do not even come with any function keys at all! (Heresy, I know, but such is the world we live in...)

What about you, do you have unusual shortcut setups?


Author: "Aurélien Gâteau"
Date: Sunday, 28 Sep 2014 10:30

If you follow FCM on Google+, Facebook, or Twitter (if not, why not?) then you’ll have seen the post I (Ronnie) made showing our current Google Play stats.

This time I’d like to share with you our Issuu stats:

(this is as of Saturday 27th Sept 2014 – click image to enlarge)

[Image: FCM Issuu statistics]

Author: "Ronnie"
Date: Sunday, 28 Sep 2014 05:02

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. With overwhelming interest, we were ultimately constrained by the number of mentors who volunteered. Unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criteria, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online:

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

Author: "Benjamin Mako Hill"
Date: Saturday, 27 Sep 2014 09:54
I heard a really interesting little show on the radio tonight, about the man who explained 'bands of nothing.' "Astronomer Daniel Kirkwood... is best known for explaining gaps in the asteroid belt and the rings of Saturn — zones that are clear of the normal debris." http://stardate.org/radio/program/daniel-kirkwood. He taught himself algebra, and used his math background to analyze the work of others, rather than making his own observations. The segment is only 5 minutes; give it a listen.

This reminded me of how much progress I used to make when I did genealogy research by looking over the documents I had gotten long ago in light of facts I had more recently uncovered. All of a sudden, I made new discoveries in those old docs. So that has become part of my regular research routine.

And perhaps all of these thoughts were triggered by the BASH bug which I keep hearing about on the news in very vague terms, and in quite specific discussion in IRC and mail lists. Old, stable code can yield new, interesting bugs! Maybe even dangerous vulnerabilities. So it's always worth raking over old ground, to see what comes to the surface.
Author: "Valorie Zimmerman"
Date: Saturday, 27 Sep 2014 00:10

I know the program ended almost a month ago, but I haven't had the opportunity to share my thoughts on GSoC 2014. This summer, I coded for the BeagleBoard.org organization. It was a great experience. It was my third time trying to be part of GSoC, and I was finally accepted.

The main idea of the project is a platform for viewing and creating tutorials. You can see it here. Right now I'm working on migrating this to Jekyll. This is the next step the BeagleBoard community is taking.

After the program finished, I convinced Jason Kridner, co-founder of BeagleBoard.org, to give a small hangout about what BeagleBoard.org is, talk about the BeagleBone Black, and share his view of the organization.

Why did I ask Jason to give a talk? To motivate more Honduran students to get involved in the open source movement. I was the first Honduran student to be part of the Google Summer of Code.

Hope this motivates more Honduran students.
Author: "Diego Turcios"
Date: Friday, 26 Sep 2014 22:38

In my previous post, I explained my personal need for a Utopic environment for running the test suites, since they require the latest ubuntu-ui-toolkit, which is not available for Trusty 14.04, which my main laptop runs. For quite some time I used VirtualBox VMs to get around this issue. But anyone who has used VirtualBox VMs will agree when I say they are too resource-intensive and slow, making them a bit frustrating to use.

I am thankful to Sergio Schvezov for introducing me to the cool concept of Linux Containers (LXC). It took me some time to get acquainted with the concept and use it on a daily basis. I can now share my experiences and also show how I got it set up to provide an Ubuntu Touch app development environment.

Brief Introduction to LXC

Linux Containers (LXC) is a novel concept developed by Stéphane Graber and Serge Hallyn. One could describe Linux containers as,

LXC is a lightweight virtualization technology. It is more akin to an enhanced chroot than to full virtualization like VMware or QEMU. Containers do not emulate hardware and share the same operating system as the host.

I think to fully appreciate LXC, it would best to compare it with Virtualbox VMs as shown below.

[Image: comparison of LXC containers and VirtualBox virtual machines]

An LXC container uses the host machine's kernel and sits somewhere between a schroot and a full-fledged virtual machine. Each of course has its advantages and disadvantages. For instance, since LXC containers use the host machine's kernel, they are limited to Linux and cannot be used to create Windows or other OS containers. However, they have barely any overhead since they only run the most essential services needed for your use case.

They perfectly fit my use case of providing a Utopic environment inside which I can run my test suites. In fact, in this post I will show you some tricks that I learnt which provide seamless integration between LXC and your host machine, to the point where you would be unable to tell the difference between a native app and a container app.

Getting Started with LXC

Stephane Graber's original post provides an excellent tutorial on getting started with LXC. If you are stuck at any step, I highly recommend talking to Stephane Graber on IRC at #ubuntu-devel. His nick is stgraber. The instructions below are a quick way of getting started with LXC containers for Ubuntu Touch app development, and as such I have avoided detailed explanations of why we run each command.

Without further ado, let's get started!

Installing LXC

LXC is available to install directly from the Ubuntu Archives. You can install it by,

sudo apt-get install lxc systemd-services uidmap

Prerequisite configuration (One-Time Configuration)

Linux containers are run as root by default. However, this can be a little inconvenient for our use case, since our containers will essentially be used to launch common applications like Qt Creator, a terminal, etc. So we will first perform some prerequisite steps for creating unprivileged containers (run by a normal user).

Note: The steps below are required only if you want to create unprivileged containers (required for our use case).

sudo usermod --add-subuids 100000-165536 $USER
sudo usermod --add-subgids 100000-165536 $USER
sudo chmod +x $HOME

Create ~/.config/lxc/default.conf with the following contents,

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

And then run,

echo "$USER veth lxcbr0 10" | sudo tee -a /etc/lxc/lxc-usernet

Unprivileged Containers

Now, with the prerequisite steps complete, we can proceed with creating the Linux container itself. We are going to create a generic Utopic container by running,

lxc-create --template download --name qmldevel -- --dist ubuntu --release utopic --arch amd64

This should create an LXC container with an Ubuntu Utopic amd64 environment. On the other hand, if you want to see a list of the supported distros, releases and architectures and choose one interactively, you should run,

lxc-create -t download -n qmldevel

Once the container has finished downloading, you should be provided with a default user "ubuntu" with password "ubuntu". You will be able to find the container files at ~/.local/share/lxc/qmldevel.

Next, give the container's "ubuntu" home directory to host uid/gid 1000, so that it lines up with the id mapping we will add to the config below:

sudo chown -R 1000:1000 ~/.local/share/lxc/qmldevel/rootfs/home/ubuntu

Add the following to your container config file found at ~/.local/share/lxc/qmldevel/config,

# Container specific configuration
lxc.id_map = u 0 100000 1000
lxc.id_map = g 0 100000 1000
lxc.id_map = u 1000 1000 1
lxc.id_map = g 1000 1000 1
lxc.id_map = u 1001 101001 64535
lxc.id_map = g 1001 101001 64535

# Custom Mounts
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file
lxc.mount.entry = /home/krnekhelesh/Documents/Ubuntu-Projects home/ubuntu none bind,create=dir

Notice the line lxc.mount.entry = /home/krnekhelesh/Documents/Ubuntu-Projects home/ubuntu none bind,create=dir which basically maps (mounts) your host machine's folder to a location in the container. So if you go to /home/ubuntu inside the container, you will see the contents of /home/krnekhelesh/.../Ubuntu-Projects. Isn't that nifty? We are seamlessly sharing data between the host and the container.

Shelling into our container

So yay we created this awesome container. How about accessing it and installing some of the applications we want to have? That's quite easy.

lxc-start -n qmldevel -d
lxc-attach -n qmldevel

At this point, your command-line prompt should show that you are inside the container. Here you can run any command you wish. We are going to install the ubuntu-sdk and also terminator, which is now my favourite terminal.

sudo apt-get install ubuntu-sdk terminator

Type exit to exit out of the container.
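
When you are done working, you can check what is running and shut the container down again. This is a small hedged sketch using standard LXC 1.x commands rather than anything specific to this setup:

# List containers along with their state and IP addresses
lxc-ls --fancy

# Cleanly stop the container (add -t <seconds> for a shutdown timeout)
lxc-stop -n qmldevel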

At this point our container configuration is complete. This was the hardest and longest part. If you are past this, then we have one final step left, which is to create shortcuts for the applications that we would like to launch from within our container. Onward to the next section!

Application shortcuts

So basically here we create a few scripts and .desktop files to launch the applications we just installed in the previous section. First let's create the scripts; I will explain in a moment why we need them.

Create a script called start-qtcreator with the following contents,

#!/bin/sh
CONTAINER=qmldevel
CMD_LINE="qtcreator $*"

STARTED=false

if ! lxc-wait -n $CONTAINER -s RUNNING -t 0; then
    lxc-start -n $CONTAINER -d
    lxc-wait -n $CONTAINER -s RUNNING
    STARTED=true
fi

lxc-attach --clear-env -n $CONTAINER -- sudo -u ubuntu -i \
    env DISPLAY=$DISPLAY $CMD_LINE

if [ "$STARTED" = "true" ]; then
    lxc-stop -n $CONTAINER -t 10
fi

Make the script executable with chmod +x start-qtcreator. What the script essentially does is start the container (if it is not running already) and then launch qtcreator while ensuring the proper environment variables are set.

We are going to create a similar script for launching terminator as well called start-terminator and make it executable.

#!/bin/sh
CONTAINER=qmldevel
CMD_LINE="terminator $*"

STARTED=false

if ! lxc-wait -n $CONTAINER -s RUNNING -t 0; then
    lxc-start -n $CONTAINER -d
    lxc-wait -n $CONTAINER -s RUNNING
    STARTED=true
fi

lxc-attach --clear-env -n $CONTAINER -- sudo -u ubuntu -i \
    env DISPLAY=$DISPLAY $CMD_LINE

if [ "$STARTED" = "true" ]; then
    lxc-stop -n $CONTAINER -t 10
fi

Now for the very last bit, which is the .desktop files. For Qt Creator and Terminator, I created the following .desktop files,

[Desktop Entry]
Exec=/home/krnekhelesh/.local/share/lxc/qmldevel/start-qtcreator %F
Icon=ubuntu-qtcreator
Type=Application
Terminal=false
Name=Ubuntu SDK (LXC)
GenericName=Integrated Development Environment
MimeType=text/x-c++src;text/x-c++hdr;text/x-xsrc;application/x-designer;application/vnd.nokia.qt.qmakeprofile;application/vnd.nokia.xml.qt.resource;application/x-qmlproject;
Categories=Qt;Development;IDE;
InitialPreference=9
Keywords=Ubuntu SDK;SDK;Ubuntu Touch;Qt Creator;Qt

Make sure to replace the Exec path with your path. Save the .desktop file as ubuntusdklxc.desktop in ~/.local/share/applications. Do the same for the terminal desktop file,

[Desktop Entry]
Name=Terminator (LXC)
Comment=Multiple terminals in one window
Exec=/home/krnekhelesh/.local/share/lxc/qmldevel/start-terminator
Icon=terminator
Type=Application
Categories=GNOME;GTK;Utility;TerminalEmulator;System;
StartupNotify=true
X-Ubuntu-Gettext-Domain=terminator
X-Ayatana-Desktop-Shortcuts=NewWindow;
Keywords=terminal;shell;prompt;command;commandline;
[NewWindow Shortcut Group]
Name=Open a New Window
Exec=terminator
TargetEnvironment=Unity

That's it! When you go to the Unity Dash and search for "Terminator", you should see the entry "Terminator (LXC)" appear. When you launch it, it will seamlessly start the Linux container and then launch terminator from within it. The best part is that you won't even notice the difference between a native app and the container app.

Check out the screenshot below as proof!

[Image: screenshot]

What I usually do is keep my clock app files in /home/krnekhelesh/Documents/Ubuntu-Projects. I do my coding and testing on the host machine. Then, when running the test suite, I quickly open Terminator (LXC) and run the tests, since it already points at the correct folder.

I hope you found this useful.

Author: "Nekhelesh"