Date: Wednesday, 30 Jul 2014 16:33
An hour and 15 minutes later, the platters look really frozen... and the heads are leaving watery trails on the hard drive, which clicks. Ok, this is not looking good.

I should not have let it run with water on board -- the outside tracks are physically destroyed.

Next candidate: WD Caviar, WD200, 20GB.

This one is actually pretty impressive. It clearly has room for four (or so) platters, and only one is populated. And this one actually requires its cover for operation; otherwise it produces "interesting sounds" (and no data).

It went into the refrigerator for a few hours, but then I let it thaw before continuing operation. The disk still works, with a few bad sectors. I overwrote the disk with zeros, and that recovered the bad sectors.

I put a fingerprint on the surface. Bad idea; that killed the disk.

Ok, so we have two claims from the advertising confirmed: freezing can and will kill a disk, and some hard drives need their screws for operation.
Date: Wednesday, 30 Jul 2014 16:27
...and did not like what I saw. I installed Debian/testing. Now I know why everyone hates systemd: it turned a minor error (missing firmware for the wlan card) into a message storm (of increasing speed) followed by a forkbomb. Only the OOM killer stopped the madness.

Now, I've seen Gnome3 before, and it is unusable -- at least on X60 hardware. So I went directly into Mate, hoping to see a friendly Gnome2-like desktop. Well, it looked familiar but slightly different. After a while I discovered I was actually in Xfce. So log out, log in, and yes, this looks slightly more familiar. Unfortunately, the theme is still different, the window buttons are smaller, and Terminal windows can no longer be resized using the lower-right corner. I also tried to restore my settings (cp -a /oldhome/.[a-z]* .) and it did not have the desired effect.
Date: Wednesday, 30 Jul 2014 01:47
or why publishing code is STEP ZERO.

If you've been developing code internally for a kernel contribution, you've probably got a lot of reasons not to work in the open from the start: you probably don't work for Red Hat or other companies with default-to-open policies, or perhaps you are scared of the scary kernel community and want to present a polished gem.

If your company is a pain about legal reviews etc., you have probably spent (wasted) months of engineering time on internal reviews and the like, so you think all of this will matter later, because why wouldn't it: you just spent (wasted) a lot of time on it, so it must matter.

So you have your polished codebase; why wouldn't those kernel maintainers love to merge it?

Then you publish the source code.

Oh look, you just left your house. The merging of your code is many, many miles distant, and you just started walking that road just now: not when you started writing it, not when you started legal review, not when you rewrote it internally the 4th time. You just started this moment.

You might have to rewrite it externally 6 times, you might never get it merged, it might be something your competitors are also working on, and the kernel maintainers would rather you cooperated with people your management would lose their minds over. That is the kernel development process.

step zero: publish the code. leave the house.

(Lately I've been seeing this problem more and more, so I decided to write it up. It really isn't directed at anyone in particular; I think a lot of vendors are guilty of this.)
Date: Tuesday, 29 Jul 2014 07:53

As with all software, it took longer than I expected, but today I tagged the first version of pettycoin.  Now, lots more polish and features to come, but at least there’s something more than the git repo for others to look at!

Date: Thursday, 24 Jul 2014 13:00
CHMI changed their web pages, so old.chmi.cz no longer worked and I had to adapt nowcast. My first idea was to use radareu.cz, which has nice coverage of the whole of Europe, but the pictures are too big and interpolated... and handling them takes time. So I updated it once more; now it supports the new format of CHMI pictures. It also means that if you are within the EU and want to play with weather nowcasting, you now can... just be warned it is slightly slow... but very useful, especially in the rainy/stormy weather these days.

Now, I don't know about you, but I always forget something when travelling internationally. Like... power converters, or the fact that the destination is in a different time zone. Is there some tool to warn you about differences between your home and destination countries? (I'd prefer it offline, for privacy reasons, but...) I started a country script, with some data from Wikipedia, but it is quite incomplete and would need a lot of help.
Date: Thursday, 24 Jul 2014 12:51
So I took an old 4GB (IBM) drive for a test. Oops, it sounds wrong while spinning up. Perhaps I need to use two USB cables to get enough power?

Let's take a 60GB drive... that one works well. Back to the 4GB one. Bad, clicking sounds.

IBM actually used two different kinds of screws, so I cannot non-destructively open this one... and they actually made the platters out of glass. No one is going to recover data from this one... and I have about 1000 little pieces of glass to collect.

Next candidate: Seagate Barracuda ATA III ST320414A, 20GB.

Nice, circa 17MB/sec transfer; the disk is now full of photos. Data recovery firms say that screw torque matters. I made all of them very loose, then removed them altogether, then found the second hidden screw, and then ran the drive open. It worked ok.

The air filter is not actually secured in any way, and I guess I touched the platters with the cover while opening it. Interestingly, these heads do not stick to the surface, even when moved manually.

Friends do not let friends freeze their hard drives, but this one went into two plastic bags and into the refrigerator. Have you noticed how the data-recovery firms placed the drive there without humidity protection?

So, any bets on whether it will be operational after I remove it from the freezer?
Date: Wednesday, 23 Jul 2014 05:49
I've scheduled a further 5-day Linux/UNIX System Programming course to take place in Munich, Germany, for the week of 6-10 October 2014.

The course is intended for programmers developing system-level, embedded, or network applications for Linux and UNIX systems, or programmers porting such applications from other operating systems (e.g., Windows) to Linux or UNIX. The course is based on my book, The Linux Programming Interface (TLPI), and covers topics such as low-level file I/O; signals and timers; creating processes and executing programs; POSIX threads programming; interprocess communication (pipes, FIFOs, message queues, semaphores, shared memory); network programming (sockets); and server design.
     
The course has a lecture+lab format, and devotes substantial time to working on some carefully chosen programming exercises that put the "theory" into practice. Students receive a copy of TLPI, along with a 600-page course book containing the more than 1000 slides that are used in the course. A reading knowledge of C is assumed; no previous system programming experience is needed.

Some useful links for anyone interested in the course:
Questions about the course? Email me via training@man7.org.
Date: Thursday, 17 Jul 2014 03:31

A “non-blocking” IPv6 connect() call was, in fact, blocking.  Tracking that down made me realize the IPv6 address was mostly random garbage, which was caused by this function:

bool get_fd_addr(int fd, struct protocol_net_address *addr)
{
   union {
      struct sockaddr sa;
      struct sockaddr_in in;
      struct sockaddr_in6 in6;
   } u;
   socklen_t len = sizeof(len);
   if (getsockname(fd, &u.sa, &len) != 0)
      return false;
   ...
}

The bug: “sizeof(len)” should be “sizeof(u)”.  But when presented with a too-short length, getsockname() truncates, and otherwise “succeeds”; you have to check the resulting len value to see what you should have passed.
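
For illustration, here is a minimal sketch of the corrected call; struct sockaddr_storage stands in for the pettycoin-specific struct protocol_net_address, and the helper name is illustrative, not the project's:

#include <stdbool.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Sketch only: pass the size of the buffer, then check the length the
 * kernel wrote back to catch the silent-truncation case described above. */
static bool get_fd_addr_fixed(int fd, struct sockaddr_storage *out)
{
   union {
      struct sockaddr sa;
      struct sockaddr_in in;
      struct sockaddr_in6 in6;
   } u;
   socklen_t len = sizeof(u);   /* was sizeof(len), i.e. only 4 bytes */

   if (getsockname(fd, &u.sa, &len) != 0)
      return false;
   if (len > sizeof(u))         /* address did not fit: result was truncated */
      return false;
   memcpy(out, &u, len);
   return true;
}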

Obviously an error return would be better here, but the writable len arg is pretty useless: I don’t know of any callers who check the length return and do anything useful with it.  Provide getsocklen() for those who do care, and have getsockname() take a size_t as its third arg.

Oh, and the blocking?  That was because I was calling “fcntl(fd, F_SETFD, …)” instead of “F_SETFL”!
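
And for reference, a minimal sketch of the intended call: F_SETFD only touches descriptor flags like FD_CLOEXEC, while O_NONBLOCK is a file status flag and needs F_GETFL/F_SETFL (the helper name is illustrative):

#include <fcntl.h>

/* Sketch: set O_NONBLOCK via the file status flags.  Calling
 * fcntl(fd, F_SETFD, ...) instead only touches descriptor flags
 * and leaves the socket blocking, as described above. */
static int set_nonblock(int fd)
{
   int flags = fcntl(fd, F_GETFL);

   if (flags < 0)
      return -1;
   return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}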

Date: Wednesday, 16 Jul 2014 03:10

Closure on some old bugs. is a post from: codemonkey.org.uk

Date: Tuesday, 15 Jul 2014 23:00

The schedule for the 2014 Linux Security Summit (LSS2014) is now published.

The event will be held over two days (18th & 19th August), starting with James Bottomley as the keynote speaker.  The keynote will be followed by refereed talks, group discussions, kernel security subsystem updates, and break-out sessions.

The refereed talks are:

  • Verified Component Firmware – Kees Cook, Google
  • Protecting the Android TCB with SELinux – Stephen Smalley, NSA
  • Tizen, Security and the Internet of Things – Casey Schaufler, Intel
  • Capsicum on Linux – David Drysdale, Google
  • Quantifying and Reducing the Kernel Attack Surface – Anil Kurmus, IBM
  • Extending the Linux Integrity Subsystem for TCB Protection – David Safford & Mimi Zohar, IBM
  • Application Confinement with User Namespaces – Serge Hallyn & Stéphane Graber, Canonical

Discussion session topics include Trusted Kernel Lock-down Patch Series, led by Kees Cook; and EXT4 Encryption, led by Michael Halcrow & Ted Ts’o.   There’ll be kernel security subsystem updates from the SELinux, AppArmor, Smack, and Integrity maintainers.  The break-out sessions are open format and a good opportunity to collaborate face-to-face on outstanding or emerging issues.

See the schedule for more details.

LSS2014 is open to all registered attendees of LinuxCon.  Note that discounted registration is available until the 18th of July (end of this week).

See you in Chicago!

Date: Tuesday, 15 Jul 2014 15:26

Yikes, almost a month since I last posted.
In that time, I’ve spent pretty much all my time heads down chasing memory corruption bugs in Trinity, and whacking a bunch of smaller issues as I came across them. Some of the bugs I’ve been chasing have taken a while to reproduce, so I’ve deliberately held off on changing too much at once these last few weeks, choosing instead to dribble changes in a few commits at a time, just to be sure things weren’t actually getting worse. Every time I thought I’d finally killed the last bug, I’d do another run for a few hours, and then see the same corrupted structures. Incredibly frustrating. After a process of elimination (I found a hundred places where the bug wasn’t), I think I’ve finally zeroed in on the problematic code, in the functions that generate random filenames.
I pretty much gutted that code today, which should remove both the bug and a bunch of unnecessary operations that never found any kernel bugs anyway. I’m glad I spent the time to chase this down, because the next bunch of features I plan to implement leverage this code quite heavily, and it would only have caused even more headaches later on.

The one plus side of chasing this bug the last month or so has been all the added debugging code I’ve come up with. Some of it has been committed for re-use later, while some of the more intrusive debug features (like storing backtraces for every locking operation) I opted not to commit, but will keep the diff around in case it comes in handy again sometime.

Spent the afternoon clearing out my working tree by committing all the clean-up patches I’ve done while doing this work. Some of them were a little tangled and needed separating into multiple commits. Next, removing some lingering code that hasn’t really done anything useful for a while.

I’ve been hesitant to start on the newer features until things calmed down, but that should hopefully be pretty close.

catch-up after a brief hiatus. is a post from: codemonkey.org.uk

Date: Monday, 14 Jul 2014 22:00
I got a hard drive that would not spin up, to attempt recovery. Getting the necessary screwdrivers was not easy, but eventually I managed to open the drive (after hitting it a few times in an attempt to unstick the heads). Now, this tutorial does not sound that bad, and yes, I managed to un-stick the heads. The drive now spins up... and keeps seeking, never getting ready. I tried to run the drive open, and the heads only go over the near half of the platters... I assume something is wrong there? I tried various torques on the screws, as some advertising video suggested.

(Also, the heads immediately stick to the platters when I move them manually. I guess that's normal?)

The drive is now in the freezer, and probably beyond repair... but if you have some ideas, or some fun uses for a dead hard drive, I guess I can try them. The data on the disk are not important enough for a platter transplantation.
Date: Wednesday, 09 Jul 2014 11:24
I've released man-pages-3.70. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This is a relatively small release. As well as many smaller fixes to various pages, the more notable changes in man-pages-3.70 are the following:
  • A new sprof(1) page documents the sprof command provided by glibc.
  • The epoll_ctl(2) and epoll(7) pages add documentation of the EPOLLWAKEUP flag that appeared in Linux 3.5.
  • Various parts of the syslog(2) page were reworked and improved.
  • A number of details were added or improved in the inotify(7) page.
Date: Friday, 04 Jul 2014 22:10
The security model on the Google Nexus devices is pretty straightforward. The OS is (nominally) secure and prevents anything from accessing the raw MTD devices. The bootloader will only allow the user to write to partitions if it's unlocked. The recovery image will only permit you to install images that are signed with a trusted key. In combination, these facts mean that it's impossible for an attacker to modify the OS image without unlocking the bootloader[1], and unlocking the bootloader wipes all your data. You'll probably notice that.

The problem comes when you want to run something other than the stock Google images. Step number one for basically all of these is "Unlock your bootloader", which is fair enough. Step number two is "Install a new recovery image", which is also reasonable in that the key database is stored in the recovery image and so there's no way to update it without doing so. Except, unfortunately, basically every third party Android image is either unsigned or is signed with the (publicly available) Android test keys, so this new recovery image will flash anything. Feel free to relock your bootloader - the recovery image will still happily overwrite your OS.

This is unfortunate. Even if you've encrypted your phone, anyone with physical access can simply reboot into recovery and reflash /system with something that'll stash your encryption key and mail your data to the NSA. Surely there's a better way of doing this?

Thankfully, there is. Kind of. It's annoying and involves a bunch of manual processes and you'll need to re-sign every update yourself. But it is possible to configure Nexus devices in such a way that you retain the same level of security you had when you were using the Google keys without losing the freedom to run whatever you want. Here's how.

Note: This is not straightforward. If you're not an experienced developer, you shouldn't attempt this. I'm documenting this so people can create more user-friendly approaches.

First: Unlock your bootloader. /data will be wiped.
Second: Get a copy of the stock recovery.img for your device. You can get it from the factory images available here.
Third: Grab mkbootimg from here and build it. Run unpackbootimg against recovery.img.
Fourth: Generate some keys. Get this script and run it.
Fifth: zcat recovery.img-ramdisk.gz | cpio -id to extract your recovery image ramdisk. Do this in an otherwise empty directory.
Sixth: Get DumpPublicKey.java from here and run it against the .x509.pem file generated in step 4. Replace /res/keys from the recovery image ramdisk with the output. Include the "v2" bit at the beginning.
Seventh: Repack the ramdisk image (find . | cpio -o -H newc | gzip > ../recovery.img-ramdisk.gz) and rebuild recovery.img with mkbootimg.
Eighth: Write the new recovery image to your device
Ninth: Get signapk from here and build it. Run it against the ROM you want to sign, using the keys you generated earlier. Make sure you use the -w option to sign the whole zip rather than signing individual files.
Tenth: Relock your bootloader
Eleventh: Boot into recovery mode and sideload your newly signed image.

At this point you'll want to set a reasonable security policy on the image (eg, if it grants root access, ensure that it requires a PIN or something), but otherwise you're set - the recovery image can't be overwritten without unlocking the bootloader and wiping all your data, and the recovery image will only write images that are signed with your key. For obvious reasons, keep the key safe.

This, well. It's obviously an excessively convoluted workflow. A *lot* of it could be avoided by providing a standardised mechanism for key management. One approach would be to add a new fastboot command for modifying the key database, and only permit this to be run when the bootloader is unlocked. The workflow would then be something like
  • Unlock bootloader
  • Generate keys
  • Install new key
  • Lock bootloader
  • Sign image
  • Install image
which seems more straightforward. Long term, individual projects could do the signing themselves and distribute their public keys, resulting in the install process becoming as easy as
  • Unlock bootloader
  • Install ROM key
  • Lock bootloader
  • Install ROM
which is actually easier than the current requirement to install an entirely new recovery image.

I'd actually previously criticised Google on the grounds that using custom keys wasn't possible on Android devices. I was wrong. It is, it's just that (as far as I can tell) nobody's actually documented it before. It's important that users not be forced into treating security and freedom as mutually exclusive, and it's great that Google have made that possible.

[1] This model fails if it's possible to gain root on the device. Thankfully this would never hold on what's that over there is that a distraction?

Date: Thursday, 03 Jul 2014 10:29
It seems that the web browser is the limit when it comes to low-powered machines. Chromium is pretty much unusable with 512MB, usable with 1GB, and nice with 2GB. Firefox is actually usable with 512MB -- it does not seem to have such a big per-tab overhead -- but seems to be less responsive.

Anyway, it seems I'll keep using x86 for desktops for now.
Date: Monday, 30 Jun 2014 22:36
We have all suffered from changing requirements. We are almost done with the implementation, maybe even most of the way done testing, and then a change in requirements invalidates all of our hard work. This can of course be extremely irritating.

It turns out that changing requirements are not specific to software, nor are they anything new. Here is an example from my father's area of expertise, which is custom-built large-scale food processing equipment.

My father's company was designing a new-to-them piece of equipment, namely a spinach chopper able to chop 15 tons of spinach per hour. Given that this was a new area, they built a small-scale prototype with which they ran a number of experiments on “small” batches of spinach (that is, less than a ton of spinach). The customer provided feedback on the results of these experiments, which fed into the design of the full-scale production model.

After the resulting spinach chopper had been in production for a short while, there was a routine follow-up inspection. The purpose of the inspection was to check for unexpected wear and other unanticipated problems with the design and fabrication. While the inspector was there, the chopper kicked a large rock into the air. It turned out that spinach from the field can contain rocks up to six inches (15cm) in diameter, and this requirement had not been communicated during development. This situation of course inspired a quick engineering change, installing a cage that captured the rocks.

Of course, it is only fair to point out that this requirement didn't actually change. After all, spinach from the field has always contained rocks up to six inches in diameter. There had been a failure to communicate an existing requirement, not a change in the actual requirement.

However, from the viewpoint of the engineers, the effect is the same. Regardless of whether there was an actual change in the requirements or the exposure of a previously hidden requirement, redesign and rework will very likely be required.

One other point of interest to those who know me... The spinach chopper was re-inspected some five years after it started operation. Its blades had never been sharpened, and despite encountering the occasional rock, still did not need sharpening. So to those of you who occasionally accuse me of over-engineering, all I can say is that I come by it honestly! ;–)
Date: Monday, 30 Jun 2014 20:03
As Andi found, this should be fixed in the newest -rcs, but I just did

root@amd:~# mkfs.ext4 -c /dev/mapper/usbhdd

(yes, obscure 4GB bug, how could it hit me?)

And now I have

root@amd:/# dumpe2fs -b /dev/mapper/usbhdd
dumpe2fs 1.41.12 (17-May-2010)
1011347
2059923
3108499
4157075

and

>>> (2059923-1011347)/1024.
1024.0
>>> (3108499-1011347)/1024.
2048.0
>>>

. Yes, badblocks detected an error every 4GB.
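
For what it's worth, that spacing is exactly what a 32-bit byte-offset wrap would produce. This is only a guess at the shape of the bug, not the actual fix, but the arithmetic is easy to check (assuming 4 KiB filesystem blocks):

#include <stdio.h>
#include <stdint.h>

/* Guesswork illustration only: the four bad blocks reported above are
 * exactly 4 GiB apart, so a byte offset truncated to 32 bits somewhere
 * in the I/O path would alias them all to the same location. */
int main(void)
{
   uint64_t bad[] = { 1011347, 2059923, 3108499, 4157075 };

   for (int i = 0; i < 4; i++) {
      uint64_t off = bad[i] * 4096;   /* byte offset, 4 KiB blocks assumed */
      printf("block %7llu -> offset %11llu, truncated to 32 bits: %u\n",
             (unsigned long long)bad[i],
             (unsigned long long)off,
             (uint32_t)off);
   }
   return 0;
}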

I'll update now, and I believe the disk errors will mysteriously disappear.
Date: Sunday, 22 Jun 2014 14:59
The Thinkpad X60 is an old notebook: Core Duo@1.8GHz, 2GB RAM. But it is still a pretty usable desktop machine, as long as Gnome2 is used, the number of Chromium tabs does not grow "unreasonable", and development is not attempted there. But it eats a bit too much power.

The OLPC 1.75 is ARM v7@0.8GHz, 0.5GB RAM. According to my tests, it should be equivalent to a Core Solo@0.43GHz. Would that make a usable desktop?

Socrates is a dual ARM v7@1.5GHz, 1GB RAM. It should be equivalent to a Core Duo@0.67GHz. Oh, and I'd have to connect the display over USB. Would that be usable?

Ok, let's try. "nosmp mem=512M" makes the Thinkpad not boot; "nosmp mem=512M@1G" works a bit better. 26 Chromium tabs make the machine unusable: the mouse lags, and the system is so overloaded that X fails to interpret keystrokes correctly. (Or maybe the X and input subsystems suck so much that they fail to interpret input correctly under moderate system load?)

I limited the CPU clock to 1GHz; that's as low as the Thinkpad can go:
/sys/devices/system/cpu/cpu0/cpufreq# echo 1000000 > scaling_max_freq

The machine feels slightly slower, but is usable as long as Chromium is stopped. Even video playback is usable at SD resolution.

With a limited number of tabs (7), the situation is better, but a single complex tab (Facebook) still makes the machine swap and become unusable. And... the slow CPU makes "unresponsive tab" pop-ups appear way too often.

Impressions so far: the Socrates CPU might be enough for a marginally usable desktop. 512MB RAM definitely is not. Will retry with 1GB one day.
Date: Saturday, 21 Jun 2014 00:14

I decided to use github for pettycoin, and tested out their blogging integration (summary: it’s not very integrated, but once set up, Jekyll is nice).  I’m keeping a blow-by-blow development blog over there.

Date: Friday, 20 Jun 2014 18:34

On Mon Feb 26 2001, I committed the first version of x86info. 13 years later I’m pretty much done with it.
Despite a surge of activity back in February, when I reached a new record monthly commit count, it’s been a project I’ve had little time for over the last few years. Luckily, someone has stepped up to take over. Going forward, Nick Black will be maintaining it here.

I might still manage the occasional commit, but I won’t be feeling guilty about neglecting it any more.

An unfinished splinter project that I started a while back, x86utils, was intended to be a replacement for x86info of sorts, moving to multiple small utilities rather than the one monolithic blob that x86info is. I’m undecided whether I’ll continue to work on this, especially given my time commitments to Trinity and related projects.

Transferring maintainership of x86info. is a post from: codemonkey.org.uk
