
Date: Monday, 09 Apr 2012 08:26

Originally posted on: http://geekswithblogs.net/kobush/archive/2012/04/09/149258.aspx

This is the first in a series of articles about a project I've been building in my spare time since last summer. It all began when I was researching how to model human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool-looking open source robotic arm:


It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since I had been hooked for some time on toying with microcontrollers, robots, and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm, and in later posts I will be writing about the software to control it.

Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great, but both cost around USD $300 (actually there is one cheap arm available, but it looks more like a toy to me). In comparison this design is quite cheap. It uses seven hobby-grade servos, and even the cheapest ones should work fine. The structure is built from a set of laser cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you'd only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don't like this one there are a few more robotic arm projects at Thingiverse (including one by oomlout).

Laser cut parts

Some time ago I built another robot using laser cut parts, so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Note that the design is split into a second project for the mini servo gripper (there is also a standard servo version available, but it won't fit this arm). I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I looked at some free 2D CAD programs, and in the end did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD but it didn't work that well).

All parts are cut from 4 mm thick material. Because I was worried that acrylic would be too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). By the way, I found a great laser cutting service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood).

Metal parts

I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper, because otherwise it unscrews and comes apart quickly. I couldn't find a local store with metal spacers and had to order them online (you'd need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all metal parts.


This arm uses five standard-size servos to drive the arm itself, and two micro servos for the gripper. The author of the project used the Modelcraft RS-2 and Modelcraft ES-05 HT servos. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard-size servos and TowerPro SG90 micro servos. However, it turned out that the SG90 won't fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. Later it also turned out that the Futaba servos make a strange noise while working, so I swapped one with a TowerPro SG-5010, which has higher torque (8 kg/cm). I also bought three servo extension cables. All servos cost me USD $45.


The build process is not difficult, but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to install the first one with one piece of the lower arm already connected before you put in the second servo. Then you connect the upper arm and finally attach the second piece of the lower arm to hold it together. The gripper and base require some gluing, so think those through too. Make sure to look closely at all the photos on Thingiverse (including other people's copies) and read the additional posts on jjshortcut's blog:

My assembled arm looks like this – I think it turned out really nice:

Servo controller board

The last piece of hardware I needed was an electronic board that would take commands from the PC and drive all seven servos. I could probably use an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics). However, one problem is that most support only up to six servos, and another is that their accuracy is limited by the Arduino's timer frequency. So instead I looked for a dedicated servo controller and found the series of Maestro boards from Pololu.

I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features, including a native USB connection, high resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either a general input or output. So far I'm using seven channels, so I still have five available to connect some sensors (for example, a distance sensor mounted on the gripper might be useful). The last but important factor was that they have an SDK for .NET – what more could I wish for! The board itself is very small – half the size of a Tic-Tac box. I bought one for about USD $35 from this store.

Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30.

The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you immediately configure the board. For each servo I first figured out its range of motion and set the min/max limits. I played with the speed and acceleration settings as well.

A big issue for me was that two servos control the position of the lower arm (the shoulder joint), and both have to be moved at the same time. This is where the scripting feature of the Pololu board turned out to be very helpful. I wrote a script that synchronizes the position of the second servo with the first one – so now I only need to move one servo and the other follows automatically. This turned out to be tricky because I couldn't find a simple offset mapping between the move ranges of the two servos – I had to divide the range into several sub-ranges and map each individually. The scripting language is a bit assembler-like, but it gets the job done. There is even runtime debugging and a stack view available.
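To illustrate the sub-range idea outside the Maestro's own scripting language, here is a rough C# sketch of the same piecewise-linear mapping; the breakpoint values below are made up for illustration, since each arm needs its own calibration:

```csharp
// Hypothetical sub-range mapping: given the master servo target
// (in microseconds), compute the slave servo target. The breakpoint
// arrays are placeholders - measure your own servos to fill them in.
static readonly int[] MasterPoints = { 1000, 1300, 1700, 2000 };
static readonly int[] SlavePoints  = { 2000, 1650, 1350, 1000 };

static int MapMasterToSlave(int masterUs)
{
    // Find the sub-range the master position falls into,
    // then interpolate linearly within that sub-range only.
    for (int i = 0; i < MasterPoints.Length - 1; i++)
    {
        if (masterUs <= MasterPoints[i + 1])
        {
            int m0 = MasterPoints[i], m1 = MasterPoints[i + 1];
            int s0 = SlavePoints[i],  s1 = SlavePoints[i + 1];
            return s0 + (masterUs - m0) * (s1 - s0) / (m1 - m0);
        }
    }
    return SlavePoints[SlavePoints.Length - 1];
}
```

On the board itself this same logic is expressed in the Maestro's stack-based script language, with one branch per sub-range.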

Altogether I'm very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control Center program.


The total cost of my robotic arm was:

  • $10 laser cut parts
  • $10 metal parts
  • $45 servos
  • $35 servo controller
  • -----------------------
  • $100 total

So here you have all the information about the hardware. In the next post I'll start talking about the software, which I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

Author: "Szymon Kobalczyk" Tags: "Robotics, Electronics, Robotic Arm"
Date: Friday, 06 Apr 2012 07:57

Originally posted on: http://geekswithblogs.net/kobush/archive/2012/04/06/149226.aspx


Last week I had the pleasure of speaking at Microsoft's Developer Days 2012 in Sofia, Bulgaria. It was a great conference and I met lots of cool people there.

I did a session about Kinect hacking. My goal was to give a good understanding of the Kinect's inner workings and how it can be used to develop Windows applications. Later I showed examples of interesting projects utilizing the full potential of the Kinect sensor. Below you can find my slides and the source code of one of the demos (the one where “Szymon went to the Moon”).

But I wasn't the only one to talk about Kinect. On the 2nd day Rob Miles also did a fun session titled “Kinect Mayhem: Psychedelic Ghost Cameras, Virtual Mallets, a Kiss Detector and a Head Tapping Game” (you can watch a recording of this session from TechDays Netherlands on Channel9). Later that day Yishai Galatzer made a big surprise during his session about extending WebMatrix, and showed a plugin enabling control of WebMatrix with Kinect gestures. The best thing was that he wrote it during the conference, with no previous experience with the Kinect SDK (I might have helped him a bit to get started).

Thanks for the invitation and I hope to see you soon!

Author: "Szymon Kobalczyk"
Date: Sunday, 05 Sep 2010 16:27

Originally posted on: http://geekswithblogs.net/kobush/archive/2010/09/05/netmf_liquid_crystal.aspx

.NET Micro Framework includes rich graphics capabilities with WPF-like libraries, and quite a few high end development boards (Tahoe II, ChipworkX, or FEZ Cobra to name a few) include fancy graphic TFT screens, often with touch input thrown in as well. But this comes at a significantly higher cost, and requires a speedy CPU. Thus it might seem that if you are using a much cheaper board such as the Netduino or one of the FEZ family, you are doomed to rely on blinking LEDs only. Not quite so. In most scenarios an alphanumeric LCD can be a cheap alternative (or if you have a higher budget you might consider using an OLED screen, which also works with the Netduino).

For a larger project I'm working on, I plan to use an alphanumeric LCD to display all sorts of status information. Practically all popular LCDs are controlled by an HD44780-compatible parallel interface chipset. There is a wide range of these screens available in various combinations of backlight and text color. The common size is 16x2 characters (16 columns in 2 rows). Take a look at the wide selection of LCDs available from SparkFun. I got mine from a local electronics store; it has a very cool-looking combination of white text on a blue background.

When you buy the LCD, take note of whether you get the 5V or 3V3 version. I'm sure both work fine with the Netduino (which uses 3V3 logic), but make sure to wire them to the correct power source pin.

These LCDs are quite easy to wire and program. If you look at the above picture, notice the 16-pin header on top of the display. The HD44780 chipset uses a parallel interface for data communication and supports 8-bit and 4-bit modes. In the former case it will use at least 10 digital output pins from the microcontroller, in the latter at least 6. The R/W (read/write) pin can be permanently wired to ground. You can find the wiring instructions in the examples for the Arduino LiquidCrystal library – it will work the same way on the Netduino. To avoid soldering you can also try the DFRobot LCD Keypad Shield for Arduino, and I learned via Twitter that Mr. Kogoro Kotobuki (@kotobuki) is designing a similar LCD shield that appears to work great with the Netduino. I was told it's a prototype and not yet sold, but it looks very nice (see one more photo and the schematics).

However, if we take the direct approach, the LCD will use at least six of the available digital pins. If you use a microcontroller that exposes a massive number of IO ports (like the FEZ Rhino or the upcoming FEZ Panda) this might be okay, but in the case of the Netduino or a similar board that has only a handful of digital pins, it might not be suitable. In this case you might want to multiplex the output over fewer lines. In my previous post I already talked about using shift registers to work around such situations, and in this case we can reduce the number of outputs to only three.

Another alternative is to buy a serial enabled LCD. These usually include a special daughter board on the back (hence called a “backpack”) with an MCU responsible for translating UART communication to the interface of the LCD. So if you can sacrifice one of the UART ports on the Netduino this might be a good option, but they usually cost about $10 more than regular LCDs. You can also buy the serial backpack alone, for example the one from SparkFun or the LCD117 Serial LCD Kit from Modern Device.

For my project I will be using the same 74HC595 shift register that I used in my earlier demo. Pavel Bánský has already demonstrated how to do this on .NET Micro Framework devices. In his first post he shows how to interface an LCD with 3 wires using the 4094 shift register, and in the second post he adds support for an I2C 8-bit port extender. I have modified and extended his library to include the functionality available in the Arduino LiquidCrystal library.

Pavel introduced a nice abstraction in his implementation, separating the high level LCD functions from the underlying transport interface into separate classes. Thus the constructor of the top level LiquidCrystal class requires you to provide an instance of a class implementing the ILiquidCrystalTransferProvider interface. This makes it possible to use the library with all the methods of communication described above. In the source code you will already find a GpioLiquidCrystalTransferProvider, and a Shifter74Hc595LiquidCrystalTransferProvider for the 74HC595 shift register. Later I'm going to add support for the I2C port expander. I got a nice LCD backpack from the JeeLabs shop that uses the MCP23008 chip, but I don't know yet how much it differs from Pavel's PCF8574P. In preparation, some common bit handling code is already abstracted into the BaseShifterLiquidCrystalTransferProvider class.
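A sketch of how the two providers plug into the same LiquidCrystal class; note that the constructor signatures and pin assignments below are my own assumptions for illustration, not the wiring from this post:

```csharp
// Option 1: drive the LCD directly from GPIO pins (4-bit mode).
// Pin choices below are placeholders.
ILiquidCrystalTransferProvider gpioProvider =
    new GpioLiquidCrystalTransferProvider(
        Pins.GPIO_PIN_D2,                    // RS
        Pins.GPIO_PIN_D3,                    // Enable
        Pins.GPIO_PIN_D4, Pins.GPIO_PIN_D5,  // D4, D5
        Pins.GPIO_PIN_D6, Pins.GPIO_PIN_D7); // D6, D7

// Option 2: drive it through the 74HC595 shift register over SPI,
// with the latch on digital pin 10.
ILiquidCrystalTransferProvider shiftProvider =
    new Shifter74Hc595LiquidCrystalTransferProvider(
        SPI_Devices.SPI1, Pins.GPIO_PIN_D10);

// The high-level class doesn't care which transport sits underneath.
var lcd = new LiquidCrystal(shiftProvider);
```

This is the payoff of the abstraction: swapping the transport (GPIO, shift register, or later I2C) requires no change to the code that writes to the display.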

Here you can see the wiring I used:

Wiring 16x2 LCD to Netduino via shift register

Notice that because this time I'm going to use the SPI interface, the serial data is connected to pin 11 (MOSI) and the clock is connected to pin 13 (SPCK). Pin 10 is connected to the latch pin, and in the code it is passed as the Slave Select pin to the SPIConfiguration object. Since we are going to use seven outputs from the shift register, the one remaining output is connected via a 2N2222A transistor to control the LCD backlight. This allows us to turn the backlight on or off from code. Finally, a 10K potentiometer is used to control the display contrast.
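For reference, this is roughly how such a connection is described to .NET Micro Framework's SPI class; the timing values here are illustrative, not taken from the library:

```csharp
// SPI.Configuration comes from Microsoft.SPOT.Hardware.
// Pin 10 acts as the 74HC595 latch and is passed as the chip
// (slave) select; MOSI and SPCK are routed by the framework.
var config = new SPI.Configuration(
    Pins.GPIO_PIN_D10,     // ChipSelect_Port: the latch pin
    false,                 // ChipSelect_ActiveState: held low while shifting,
                           // so the rising edge at the end latches the outputs
    0, 0,                  // ChipSelect setup/hold times (ignored here)
    false,                 // Clock_IdleState
    true,                  // Clock_Edge: sample on the rising edge
    1000,                  // Clock_RateKHz: 1 MHz is plenty for a 74HC595
    SPI.SPI_module.SPI1);  // bus 1 (MOSI = pin 11, SPCK = pin 13)

var spi = new SPI(config);
spi.Write(new byte[] { 0x55 }); // shift one test byte out to the register
```

The transfer provider wraps exactly this kind of setup, so application code never touches the SPI object directly.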

The photo on the right shows the backpack I created for my LCD (I'll cut the board later along the marked lines). Notice that because the board will be mounted “upside down”, the outputs from the shift register to the LCD are reversed, but it's easy to correct this in code (the Shifter74Hc595LiquidCrystalTransferProvider constructor has an optional BitOrder option).

Below you can find a list of methods exposed by the LiquidCrystal class:

  • Begin(columns, lines) - Use this method to initialize the LCD. Specifies the dimensions (width and height) of the display.
  • Clear() - Clears the LCD screen and positions the cursor in the upper-left corner.
  • Home() - Positions the cursor in the upper-left of the LCD.
  • SetCursorPosition(column, row) - Position the LCD cursor; that is, set the location at which subsequent text written to the LCD will be displayed.
  • ScrollDisplayLeft() - Scrolls the contents of the display (text and cursor) one space to the left.
  • ScrollDisplayRight() - Scrolls the contents of the display (text and cursor) one space to the right.
  • Write(text) - Writes text to the LCD.
  • Write(buffer, offset, count) - Writes a specified number of bytes to the display using data from a buffer.
  • WriteByte(data) - Sends one data byte to the display.
  • CreateChar(location, charmap) - Stores a custom character (glyph) defined by charmap at one of the available character locations.

And its properties:

  • Backlight - Turns the LCD backlight on or off.
  • BlinkCursor - Display or hide the blinking block cursor at the position to which the next character will be written.
  • ShowCursor - Display or hide the LCD cursor: an underscore (line) at the position to which the next character will be written.
  • Visible - Turns the LCD display on or off. This will restore the text (and cursor) that was on the display.
  • Encoding - Gets or sets the encoding used to map strings into the byte codes that are sent to the LCD. UTF8 is used by default.
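As an example of CreateChar: the HD44780 stores custom 5x8 glyphs as 8 bytes, where the low 5 bits of each byte form one pixel row. This sketch defines a small heart symbol (the bit pattern is my own, and it assumes an initialized LiquidCrystal instance named lcd):

```csharp
// Custom 5x8 glyph: one byte per row, lowest 5 bits used.
byte[] heart = new byte[]
{
    0x00, // .....
    0x0A, // .#.#.
    0x1F, // #####
    0x1F, // #####
    0x0E, // .###.
    0x04, // ..#..
    0x00, // .....
    0x00  // .....
};

lcd.CreateChar(0, heart);     // store the glyph in slot 0 (of 8)
lcd.SetCursorPosition(0, 0);
lcd.WriteByte(0);             // printing byte 0 shows the custom character
```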

Finally let’s look at the sample code. Here is a simple “hello world” demo:

public static void Main()
{
    // create the transfer provider
    // (latch on digital pin 10, passed as the SPI slave select - see the wiring above)
    var lcdProvider = new Shifter74Hc595LiquidCrystalTransferProvider(
        SPI_Devices.SPI1, Pins.GPIO_PIN_D10);

    // create the LCD interface
    var lcd = new LiquidCrystal(lcdProvider);

    // set up the LCD's number of columns and rows:
    lcd.Begin(16, 2);

    // Print a message to the LCD.
    lcd.Write("hello, world!");

    while (true)
    {
        // set the cursor to column 0, line 1
        lcd.SetCursorPosition(0, 1);

        // print the number of milliseconds since reset:
        lcd.Write((Utility.GetMachineTime().Ticks / 10000).ToString());
    }
}


You can download the source code for this project below. Please keep in mind that not all functions are implemented yet, and some others are not tested.

Update 2010-09-06: Eric D. Burdo figured out how to use this library with the LCD from the Seeedstudio Electronic Brick Starter Kit. The connections were a little tricky, but he shows the proper pin assignment for the GpioLiquidCrystalTransferProvider in his blog post. Thanks for sharing!

Update 2010-09-22: The library is now available on CodePlex at http://microliquidcrystal.codeplex.com/

Author: "Szymon Kobalczyk" Tags: "Development, Electronics, Robotics, .NET..."
Date: Tuesday, 31 Aug 2010 17:24

Originally posted on: http://geekswithblogs.net/kobush/archive/2010/08/31/netduino-controlled-servo-robot.aspx

For my next project I decided to upgrade my SERB robot, which I showed here last year, to use the Netduino. This robot was designed by the great inventors at oomlout, and you can buy it as a nicely packaged kit from them. However, because the design is open source, you can also download the project files and, if you have access to a laser cutter, have all the acrylic pieces cut yourself. It is also very easy to build by following the instructable (you can find a few more photos from my build here).

Netduino Controlled Servo Robot

The original SERB robot uses an Arduino as its brain, so it was a quick upgrade to the Netduino thanks to the same form factor and pin layout. In the above photo you can see the Netduino mounted on the back below the prototyping shield. Oomlout's instructable suggests using a separate 9V battery for the board, but I found that it works just fine using only 4xAA batteries to power the whole setup (Netduino, servos, and XBee).

Robot drive system

The robot is driven by a pair of continuous rotation servos. You can buy such servos from various hobby electronics and robotics shops, but most ordinary servos can be modified this way. Some require removing the potentiometer and replacing it with a resistor divider, while with others you can leave the potentiometer in place but decouple it from the rotating shaft and hold it in the centered position. You can do it even with tiny servos. Either way you get a very cheap, high torque, reversible DC gearmotor. And the best part is that you can control it just like any other servo using PWM, with no need for additional motor driver circuits. Thus I could use the Netduino Servo class already written by Chris Seto (I only adjusted the pulse range so it matches the default values from the Arduino Servo library).

With all the good said above, the one drawback of using a servo is that we don't have much control over its speed. For most values it will jump right to maximum speed. I made a few tests measuring how far my robot would drive at various power levels (by setting the servo from 0 to 90 degrees). As you can see on the graph below, for values above 20 it always drives at maximum speed. However, for values from 0 to 16 the speed increase is almost linear, so in my code I'm scaling the speed values to this range. I'm also using a timer to ramp up the speed in small increments so the robot doesn't accelerate or stop too fast. Please keep in mind that this works well with the particular model of servos I'm using (Futaba S3001), but you might need to adjust it for your robot.
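A rough sketch of the scaling and ramping described above; the constants match my measurements for the Futaba servos, so treat them as placeholders, and the servo call itself is a hypothetical API:

```csharp
// Map an abstract speed (0..100%) into the 0..16 degree range where
// the modified servo responds roughly linearly; values above ~20
// just saturate at full speed, so there is no point using them.
// e.g. SpeedToServoAngle(80) == 12 (integer division).
const int MaxLinearAngle = 16;
static int _current, _target;

static int SpeedToServoAngle(int speedPercent)
{
    if (speedPercent < 0) speedPercent = 0;
    if (speedPercent > 100) speedPercent = 100;
    return speedPercent * MaxLinearAngle / 100;
}

// Called periodically from a timer: step one unit toward the target
// angle so the robot never accelerates or stops too abruptly.
static void RampStep()
{
    if (_current < _target) _current++;
    else if (_current > _target) _current--;
    // _servo.SetAngle(_current);  // hypothetical servo API call
}
```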


I encapsulated the code used to drive the robot in the Serb class located in the NetduinoSerbDemo project. It controls both motors and provides helper classes for various commands. The class RandomTest implements a simple random movement pattern using these commands.


Now that we know the basic drive operations work, we can build some applications on top of them. Before we make the robot go around in fully autonomous mode (which requires adding a few more sensors so the robot can navigate its small world), let's drive it manually with a remote control. It would be nice to have some kind of joystick for it, wouldn't it? Sure, you could use SparkFun's joystick shield (which you'd need to modify a little bit). But since I don't have one at hand, for this project I'm going to use a Nintendo Wii Nunchuck (besides, oomlout uses the same in another instructable).

You see, smart people found out that Nintendo accessories communicate with Wiimotes using the I2C bus, so it's quite easy to interface with them from a microcontroller once you know the protocol. There is also a handy adapter available called the WiiChuck that plugs directly into the Arduino. In the end it's a cheap way to get a good quality device with an analog stick, two buttons, and an accelerometer. I bought mine for under $10 – it's not original but works just fine.

The I2C pins on the Netduino are in the same place as on the Arduino (SDA on analog pin 4, and SCL on analog pin 5), but the Arduino can also be set to provide the required power on neighboring pins (analog pin 2 for ground, and analog pin 3 for +5V). It seems this can't be done on the Netduino, so I had to plug the adapter into a breadboard and wire it properly instead.


The second project, called NetduinoSerbRemote, contains the WiiChuck class with my implementation of a .NET MF driver for the Wii Nunchuck. It is based on the original code by Tod E. Kurt, but I extended it with information found on WiiBrew, Arduino, and other forums. For example, it implements motion calculations to calibrate the accelerometer output (found in the oomlout version). It also has a new I2C device initialization procedure that should disable encryption (this might improve compatibility with some wireless Nunchucks).

After creating the WiiChuck instance, you simply call the GetData method, and it will return true if it succeeded, or false otherwise. Then you can read the extracted values from properties representing the position of the analog stick, the buttons, and the accelerometer. For the robot, I'm simply mapping the joystick position directly to speed and direction. Another mode is activated when you press and hold the Z button: then it uses values from the accelerometer instead of the joystick.
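Conceptually, the joystick-to-drive mapping is just mixing the Y axis (throttle) with the X axis (turn). A hedged sketch of the idea; the property names on WiiChuck and the helper methods on Serb are my assumptions, not the actual API:

```csharp
// Assumes the stick values are centered around 0 after calibration
// (roughly -100..100 on each axis).
static void UpdateDrive(WiiChuck nunchuck, Serb serb)
{
    if (!nunchuck.GetData())
        return; // read failed - keep the last command

    int throttle = nunchuck.AnalogStickY; // forward/backward (assumed name)
    int turn     = nunchuck.AnalogStickX; // left/right (assumed name)

    // Classic differential mix: turning slows one wheel
    // and speeds up the other.
    int left  = Clamp(throttle + turn, -100, 100);
    int right = Clamp(throttle - turn, -100, 100);

    serb.SetSpeedLeft(left);   // hypothetical Serb helper methods
    serb.SetSpeedRight(right);
}

static int Clamp(int v, int min, int max)
{
    return v < min ? min : (v > max ? max : v);
}
```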

Remote control

Looking at the previous photos, some of you might have already guessed that I'm going to use XBee radios for communication between my remote and the robot. In fact this is the easiest way to add wireless communication between two microcontrollers. It uses serial ports, so you only need to connect the corresponding UART pins to the radios to get it working.

Note that there are several models of XBee modules on the market. Some provide many advanced capabilities, including mesh networking. But if you are going to use them only to transmit data from one device to another, you should get the older version, now called XBee 802.15.4 (Series 1), which can be easily configured for point-to-point networking. I got mine from Adafruit with corresponding adapters that make it easy to plug the module into a breadboard or directly into a USB FTDI-TTL-232 cable. You can learn much more from Lady Ada's website and from Tom Igoe's book Making Things Talk.

In this project I decided to use UART1 on both Netduinos, so I put a wire from Netduino digital pin 0 to the TX pin on the XBee, and from pin 1 to RX (and of course connected it to +5V and ground as well). For communication I adapted the simple protocol used in yet another instructable from oomlout. Each message begins with the header 'AAA', followed by a single letter command code and an optional parameter. Thus messages are always 5 bytes long. Here are the currently implemented command codes:

private static class Command
{
    public const char Unknown = '?';
    public const char Forward = 'F';
    public const char Backward = 'B';
    public const char Left = 'L';
    public const char Right = 'R';
    public const char SetPower = 'P';
    public const char Stop = 'S';
    public const char SetSpeedLeft = 'X';
    public const char SetSpeedRight = 'Y';
}
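Framing and parsing such a message is then just five bytes of work; here is a sketch of how the two sides might handle it (the method names are illustrative, not taken from the project):

```csharp
// Frame layout: 'A' 'A' 'A' <command> <parameter> - always 5 bytes.
static byte[] BuildMessage(char command, byte parameter)
{
    return new byte[]
    {
        (byte)'A', (byte)'A', (byte)'A',
        (byte)command,
        parameter
    };
}

// Robot side: check for the 'AAA' header, then read the command
// letter and its single-byte parameter.
static bool TryParseMessage(byte[] buffer, out char command, out byte parameter)
{
    command = Command.Unknown;
    parameter = 0;

    if (buffer.Length < 5 ||
        buffer[0] != 'A' || buffer[1] != 'A' || buffer[2] != 'A')
        return false;

    command = (char)buffer[3];
    parameter = buffer[4];
    return true;
}
```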

On the Netduino acting as the remote, the values are read from the Wii Nunchuck, mapped to SetSpeedLeft and SetSpeedRight commands, and sent to the serial port. On the other side, the SerbRemoteClient class parses the incoming data and interprets these commands accordingly. I hope the code will be easy to follow for everyone.

The solution also contains a very simple WPF application that I used for testing. To use this app I connected one of the XBee modules to the PC with the FTDI USB cable. The name and baud rate of the serial port can be set in the app.config file.

Here is the source code for this project:

Author: "Szymon Kobalczyk" Tags: "Development, Robotics, Electronics, .NET..."
Date: Saturday, 21 Aug 2010 03:09

Originally posted on: http://geekswithblogs.net/kobush/archive/2010/08/21/netmf_more_blinking_leds.aspx

Some time ago I wrote about the FEZ Mini and FEZ Domino – the first affordable development boards for .NET Micro Framework. Today I'm excited to tell you about another device called the Netduino. Like the FEZ Domino, this board is pin compatible with the Arduino, and therefore most Arduino shields should work fine on the Netduino. This makes transitioning your projects quite easy. Care should only be taken to ensure that a shield can run at 3.3V logic levels (because the Arduino runs at 5V).

Of course the Netduino is much more powerful than the Arduino, thanks to an Atmel 32-bit microcontroller running at 48MHz, 128KB of flash for code, and 60KB of RAM. What's more, this is the first truly open source .NET MF device, meaning that you can modify and compile the source code of its firmware yourself. You will find more information and a rapidly growing community of Netduino users at http://www.netduino.com/. I got mine yesterday and it looks great. But enough said about the Netduino – let's put it to some good use.

You can find a few introductory videos at http://www.netduino.com/projects/ to get you up and running with the hardware and software configuration, so I won't repeat that here. The first tutorial shows you how to blink a single onboard LED, and it would be easy to do the same with more LEDs – you can simply wire them to all the other digital I/O pins of the board. But instead of single LEDs you might want to use some other LED components, such as a 10-segment LED bar graph or a 7-segment display, and they will work the same way.

The Netduino has 14 digital pins, and 6 analog pins that can be used as digital pins as well. But even with the simple displays shown above (a 7-segment display and a 10-segment bar graph), we would quickly run out of available pins. In this tutorial I will demonstrate how you can use shift registers to extend the number of digital output pins available on the device.

Shift registers are very simple ICs, and one popular chip is the 74HC595. It's referred to as an 8-bit serial to parallel converter. It requires only three lines connected to the microcontroller (data, clock, and latch) to get 8 outputs. What's cool is that you can connect more than one such chip in series in case you need more outputs. You can learn all about shift registers from this Arduino tutorial, so I won't repeat it here; I'll only show how to use them with .NET Micro Framework devices.

Hardware setup

I've built a simple circuit with two 7-segment displays, so I'm also using two shift registers. You can build it on a breadboard as shown below, but I soldered it onto a prototyping board to avoid messing with lots of wires. There is also an Arduino shield available, called the EZ-Expander Shield, that you can use in your projects.

Here is the schematic of the circuit. I made it using the Fritzing tool that I presented here some time ago. The Fritzing file is included with the source code, and I would like to thank Stanislav "CW2" Šimícek for creating a custom Netduino part for Fritzing:


A seven-segment display is made of 8 LEDs arranged in a matrix so that it can display decimal digits. It has 5 connectors on each side, where the central pin is used either as common ground or power (depending on whether you have a common cathode or common anode version). The remaining pins are used to control the individual segments. This diagram shows how the connections correspond to the LED segments:

You will notice that in my circuit I have the common anode version. Also make sure to put a 220 Ohm resistor on the other pins, just as you do to protect regular LEDs. These pins are connected to the corresponding eight outputs of the shift register.

Only three pins are needed to connect the shift register to the Netduino. In my circuit I connected these digital pins:

  • pin 11 to serial data input pin (DS)
  • pin 12 to shift register clock pin (SH_CP)
  • pin 8 to storage register latch pin (ST_CP)

Only the first shift register is connected directly to the Netduino, because we can wire more shift registers in series (cascade). The latch and clock pins are wired to the same inputs on both chips, but the data pin of the second register must be connected to the serial out pin (Q7') of the first register. When we send data signals to the first register, after the 8th bit it overflows and forwards the signals through the serial out pin to the 2nd register. You can find a detailed explanation of how shift registers work in the article Controlling Your Festive Lights with the .NET Micro Framework at Coding4Fun.

You can download the source code for this project here. While it references Netduino assemblies, it should work with little change on other .NET Micro Framework devices like the FEZ Domino, Meridian/P, and others.

74HC595 and 7-segment display drivers

Now let's look at the code for controlling the shift register. I have separated this code into a class called LED_74HC595_IOProvider. It implements a simple interface, ILED_IOProvider, so it can be reused with different types of LED display components; in the project you will find classes for the 7-segment display and the 10-segment bar graph.

The constructor of LED_74HC595_IOProvider stores the number of connected registers and initializes the output ports assigned to the corresponding shift register pins. Optionally you can set whether data should be sent in MSB (most significant bit first) or LSB (least significant bit first) order:

public LED_74HC595_IOProvider(short numRegisters, Cpu.Pin latchPin, 
    Cpu.Pin clockPin, Cpu.Pin dataPin, BitOrder bitOrder)
{
    _numRegisters = numRegisters;
    _bitOrder = bitOrder;

    _latchPort = new OutputPort(latchPin, false);
    _clockPort = new OutputPort(clockPin, false);
    _dataPort = new OutputPort(dataPin, false);
}

The Write method takes a buffer of bytes and sends them to the shift registers (each byte represents the outputs of one shift register):

public void Write(params byte[] buffer)
{
    // Ground latchPin and hold low for as long as you are transmitting
    _latchPort.Write(false);

    for (int i = 0; i < buffer.Length; i++)
        ShiftOut(buffer[i]);

    // Pulse the latch pin high to signal chip that it 
    // no longer needs to listen for information
    _latchPort.Write(true);
}

But all the heavy lifting is done inside the ShiftOut helper method, which clocks out the bits of an individual byte:

private void ShiftOut(byte value)
{
    // Lower Clock
    _clockPort.Write(false);

    byte mask;
    for (int i = 0; i < 8; i++)
    {
        if (_bitOrder == BitOrder.LSBFirst)
            mask = (byte)(1 << i);
        else
            mask = (byte)(1 << (7 - i));

        _dataPort.Write((value & mask) != 0);

        // Raise Clock
        _clockPort.Write(true);

        // Raise Data to prevent IO conflict 
        _dataPort.Write(true);

        // Lower Clock
        _clockPort.Write(false);
    }
}
Now let’s look at the SevenSegmentDisplay class. The ShowValue method displays the provided decimal number. It uses a static lookup table to map digits to the corresponding patterns of LED segments; I simply mapped the patterns from Wikipedia. Currently only decimal digits are supported, but one extension to this class might be to allow hexadecimal output formatting as well.
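For reference, here is a sketch of what such a lookup table can look like, assuming the common layout where bit 0 drives segment a through bit 6 driving segment g; the actual values depend on how the display segments are wired to the shift register outputs:

```csharp
// Segment patterns for digits 0-9, assuming the "gfedcba" bit layout
// (bit 0 = segment a ... bit 6 = segment g) from the Wikipedia table.
// Adjust the values if your segments are wired differently.
private static readonly byte[] digits =
{
    0x3F, // 0: a b c d e f
    0x06, // 1:   b c
    0x5B, // 2: a b   d e   g
    0x4F, // 3: a b c d     g
    0x66, // 4:   b c     f g
    0x6D, // 5: a   c d   f g
    0x7D, // 6: a   c d e f g
    0x07, // 7: a b c
    0x7F, // 8: a b c d e f g
    0x6F, // 9: a b c d   f g
};
```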

public void ShowValue(int value)
{
    // Map digits to output patterns
    var buffer = new byte[_numDigits];
    for (int i = _numDigits - 1; i >= 0; i--)
    {
        byte digit = digits[value % 10];

        // Negate pattern if it's a common anode display
        if (!_isCommonCathode)
            digit = (byte)(~digit);

        buffer[i] = digit;
        value = value / 10;
    }

    // Write output buffer
    _ioProvider.Write(buffer);  // the ILED_IOProvider passed to the constructor
}

Example 1 – using a potentiometer

For a simple test we will display the current reading of a potentiometer connected to one of the analog inputs on the Netduino. Don’t forget to connect the Aref pin to 3.3V, otherwise the analog readings will be undefined. This is an important fact to keep in mind because the Arduino, in contrast, has Aref connected internally by default.

This example is implemented in the SevenSegmentLedWithPotentiometer class, and the devices are set up in its constructor:

public SevenSegmentLedWithPotentiometer()
{
    // Create IO provider for shift registers
    display_IO = new LED_74HC595_IOProvider(2, LED_Latch_Pin, LED_Clock_Pin, LED_Data_Pin,
        BitOrder.MSBFirst);

    // Create 7-segment display interface
    display = new SevenSegmentDisplay(2, display_IO, false);

    // Initialize analog input for potentiometer
    analogPort = new AnalogInput(Analog_Pin);

    // Set value range to display in 2 digits
    analogPort.SetRange(0, 99);
}

The main loop is simple as well. Each iteration it reads the current value and sends it to the display:

public void Run()
{
    while (true)
    {
        // Read analog value 
        var value = analogPort.Read();

        // Show the value
        display.ShowValue(value);

        // Wait a little
        Thread.Sleep(100);
    }
}

Example 2 – using SHT15 temperature and humidity sensor

Now that we know the display works fine, let's use it to build a more interesting device, namely a digital temperature and humidity monitor. I’m going to use a Sensirion SHT15 sensor (the SHT10, SHT11 and SHT15 work exactly the same and differ only in accuracy). You can buy a simple-to-use SHT15 breakout board from SparkFun. The communication protocol for this sensor is a bit tricky, but fortunately Elze Kool has already written a .NET Micro Framework SHT1x driver, so I’m going to use it in my example.

This example is implemented in the SevenSegmentLedWithTemperature class. The constructor adds initialization of the SHT1x sensor:

private void Setup()
{
    // Set initial button state
    ButtonPressed = buttonPort.Read();

    // Soft-reset the SHT11
    if (SHT1x.SoftReset())
    {
        // SoftReset returns true on error
        throw new Exception("Error while resetting SHT11");
    }

    // Set temperature and humidity to the less accurate 12/8 bit resolution
    if (SHT1x.WriteStatusRegister(SensirionSHT11.SHT11Settings.LessAcurate))
    {
        // WriteStatusRegister returns true on error
        throw new Exception("Error while writing status register SHT11");
    }

    // Run timer for measurements
    sensorTimer = new Timer(MeasureTemperature, null, 200, 500);
}

In addition we want to set up the onboard button so that when it is pressed the display shows humidity instead of temperature. I'm using an InterruptPort that triggers an event each time the button is pressed or released. This way I don’t have to poll the button state in the main loop:

private void HandleButtonInterrupt(uint data1, uint data2, DateTime time)
{
    // Update button state
    ButtonPressed = (data2 != 0);
}
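Hooking this handler up to the onboard switch might look like the following sketch (the pin name and resistor mode are assumptions based on typical Netduino samples, not taken from the original download):

```csharp
// Assumed wiring: the Netduino onboard switch, with interrupts on both
// edges so the handler sees presses and releases.
buttonPort = new InterruptPort(Pins.ONBOARD_SW1,
    false,                                  // no glitch filter
    Port.ResistorMode.Disabled,             // onboard switch has its own pull-up
    Port.InterruptMode.InterruptEdgeBoth);  // fire on press and release
buttonPort.OnInterrupt += HandleButtonInterrupt;
```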

The temperature and humidity measurement takes a bit of time (a maximum of 20/80/320 ms for 8/12/14-bit resolution) and the microcontroller has to wait for the measurement to complete. We don’t want to block the main thread waiting for this, so the obvious choice is to put the measurement on another thread. Here I’m using the Timer class to take measurements every 500 ms.

private void MeasureTemperature(object state)
{
    Stopwatch sw = Stopwatch.StartNew();

    // Read temperature with SHT11 VDD = +/- 3.5V and in Celsius
    var temperature = SHT1x.ReadTemperature(SensirionSHT11.SHT11VDD_Voltages.VDD_3_5V, 
        SensirionSHT11.SHT11TemperatureUnits.Celcius);
    Debug.Print("Temperature Celsius: " + temperature);

    // Read humidity with SHT11 VDD = +/- 3.5V 
    var humidity = SHT1x.ReadRelativeHumidity(SensirionSHT11.SHT11VDD_Voltages.VDD_3_5V);
    Debug.Print("Humidity in percent: " + humidity);

    Debug.Print("Sensor measure time: " + sw.ElapsedMilliseconds);

    Temperature = temperature;
    Humidity = humidity;
}

After moving out the code for taking measurements and updating the button state, all that’s left in the main loop is updating the display when the state changes (I added a blinking LED to show how often the update is required):

private void Loop()
{
    if (_needRefreshDisplay)
    {
        _needRefreshDisplay = false;

        // Display current value
        if (ButtonPressed)
            led.ShowValue((int)Math.Round(Temperature));
        else
            led.ShowValue((int)Math.Round(Humidity));
    }
}

That’s all. The video below shows both completed exercises. Enjoy!

Author: "Szymon Kobalczyk" Tags: "Development, Electronics, .NET Micro Fra..."
Date: Friday, 26 Feb 2010 00:31

Originally posted on: http://geekswithblogs.net/kobush/archive/2010/02/26/138195.aspx

After seeing that some of my friends got fancy customized pictures for their Facebook and Twitter profiles I thought it would be cool to get one too. But not for me – for my wife. You see, she currently works as an IT Security Officer (the best I know, btw.), but earlier, when she started as a security auditor working as part of a “tiger team”, she often said she wanted her picture done like the movie poster from "Men in Black”. So this is what I had in mind:


Given the upcoming Valentine’s Day it occurred to me this could be the perfect gift (you see, over time in a marriage it gets hard to come up with something unique). But instead of going to websites that offer such a service, I posted a job opening on oDesk.com (a freelance job exchange – highly recommended). A couple of days later I had a handful of candidates to choose from. In the end I chose a graphics artist from Thailand, Mr. Wanchana Intrasombat (or Vic for short). After seeing Vic’s profile and gallery (and this picture in particular) I was sure this was the style I was looking for and that he was the perfect guy for the job.

In the end I can say that you can’t go wrong with Vic. He did a truly exceptional job, and working with him was a fascinating experience on its own. He was very open to my ideas and quick to accommodate them (even when I changed my mind several times in the process).

So here is the final picture:

Avatar of Asia -final version

And here is another sketch he made just for the fun of it:

Asia standing

Go check out Vic’s gallery – he has tons of good stuff there. If you need a great graphics artist or just want to get your own avatar, feel free to contact him: vic__unforce (at) hotmail (dot) com

And tell us what you think about the pictures!

Author: "Szymon Kobalczyk" Tags: "Personal"
Date: Thursday, 25 Feb 2010 01:49

Originally posted on: http://geekswithblogs.net/kobush/archive/2010/02/25/138163.aspx

Yesterday I picked up a nice package from TinyCLR.com. Yes, it is yet another robot (the third, in fact), but this time it runs .NET! How freaking cool is that!

FEZ Mini Robot Kit

This robot is controlled by the FEZ Mini board, which runs the .NET Micro Framework. What’s interesting is that this board has a pin-out compatible with the Basic Stamp from Parallax. You can also use it easily on a breadboard for prototyping (just like the Boarduino). TinyCLR.com also offers a larger board called FEZ Domino that has a pin-out compatible with the Arduino, so you can use your existing shields. It is more expensive, but notice that this board already includes a microSD card reader and a USB host connector. Both boards are based on the USBizi chipset from GHI Electronics, which in turn runs on a 72 MHz NXP ARM processor.

FEZ Mini and FEZ Domino

But the huge thing for me is that I can program this robot in Visual Studio and benefit from all the debugging goodies (breakpoints, variable inspection, stepping, etc.).

The robot kit itself comes from INEX Robotics – a company that makes many interesting educational kits for robotics and electronics. Over there the robot kit is called POP-BOT, and the only difference is that it is controlled by an Arduino-compatible board. But you can also download the POP-BOT Manual, which has 130 pages of cool project ideas (line tracking, object avoiding, controlling a servo motor, and a serial LCD). I’m sure it will be lots of fun converting these projects to the .NET Micro Framework.

FEZ Mini Robot

Two reflective sensors are included in the kit (useful for line following and edge detection projects), and you can order additional components from both TinyCLR.com and other robotics sites. Many construction parts are included in the kit, so it is very easy to attach additional sensors or other parts. As you can see in the picture above, I already added a Sharp IR distance sensor in front (so I can teach the robot not to bump into walls). I also added an XBee expansion board on the back so one day I can control the robot remotely (and my Holy Grail is to connect the robot to Microsoft Robotics Developer Studio).

Here is a video of the robot running the default program:

And here is my little robot builder crew at work:

Antos builds a robot

Of course both the FEZ Mini and FEZ Domino boards can be used not only for robotics, but also in many different applications. Go to the TinyCLR.com website and check out the Projects tab. For a quick introduction to .NET MF you should also download the Beginners Guide to C# and .NET Micro Framework, a free ebook from Gus Issa.

In my previous post I complained about how upset I was that there is no cheap .NET Micro Framework hardware for hobbyists. Now I can take it back. IMHO we finally have a very powerful alternative to the Arduino and similar platforms, at a price that won’t break the bank (especially with the FEZ Mini).

Author: "Szymon Kobalczyk" Tags: "Robotics, Development, Arduino, .NET Mic..."
Date: Monday, 04 Jan 2010 00:36

Originally posted on: http://geekswithblogs.net/kobush/archive/2010/01/04/137328.aspx

Just after finishing my multicolor RGB controller shield for Arduino, I came across Fritzing, a program that lets you convert your breadboard prototypes into a physical PCB. Since I had one working design at hand, I decided to give it a try. The process is very straightforward.

First you simply put all the components in the breadboard view. The parts library contains the most common parts, and you use wires to connect them – exactly as you would on your physical prototype:

Fritzing - Breadboard view

Next you can switch to the schematic view where you can check if all the connections are correct:

Fritzing - schematic view

Finally you go to the PCB view and lay out your parts on the board. Then you need to route all the traces, and while Fritzing does quite a good job with auto-routing, it is worth spending some time to review the design. In my case I was able to eliminate all the jumper wires – needed otherwise because currently Fritzing supports only single-sided PCBs.

Fritzing - PCB view

In my case I designed the board to fit as an Arduino shield. As you can see, it consists of a 12V voltage regulator circuit for an external power source and a ULN2003 to drive two RGB outputs.

After you are happy with the design you can save it in multiple output formats. For me the etchable PDF format was especially handy, because I could print it as a paper prototype and check that all the physical parts would fit properly.

Fritzing also offers a website where you can upload your projects along with any additional documentation, images, and source code. You can find my project page here: http://fritzing.org/projects/multicolor-rgb-led-controller-shield-for-arduino/

On top of that, starting in January 2010 they are launching the Fritzing Fab Service to produce real, physical PCBs. In preparation for this, they selected 24 projects and had them manufactured. I was very happy that my design was selected as one of these projects.

I got my board shortly after Christmas, and it looks very professional (at least to me :-)):

And here is the final product:


As a geek I always wanted to hack hardware like this, but at the same time I was intimidated by my limited knowledge of electronics. So it’s truly amazing how much can be done with such simple tools, and how much I learned in the past half year since my good friend Marcin Ksiazek introduced me to Arduino. For me it’s just the essence of the Arduino phenomenon – a perfect combination of open source hardware, software and the community.

On a side note, I’m a bit upset that the same didn’t happen with the .NET Micro Framework, because I think on the software side it has even greater potential (you can write and debug your code in the Visual Studio you know and love). And although I think making the platform open source was a great move, the hardware costs are the biggest barrier right now, especially if you consider using it for a few hobby projects (I don’t know what the pricing looks like for mass production).

I will be exploring .NET Micro Framework shortly so stay tuned.

Author: "Szymon Kobalczyk" Tags: "Arduino, Electronics"
Date: Saturday, 02 Jan 2010 05:20

Originally posted on: http://geekswithblogs.net/kobush/archive/2010/01/02/137309.aspx

For me, one of the coolest new features added in Silverlight 4 is support for capturing video and audio using webcams and microphones. As we’ve seen during Scott Guthrie’s keynote demonstration, it is now possible to capture image frames from the video stream and apply some interesting effects to them. On top of that we can even process the video and audio streams directly on the client (i.e. inside the browser).

Because the Silverlight 4 Beta was already available at PDC I could try the webcam support right away, so I jumped to refresh my computer vision projects. Unfortunately I never found the time to finish this, but I think it’s still worth publishing the code as it is.

One thing I was playing with before was the Touchless SDK. It is an open source .NET library for visual object tracking created by Mike Wasserman. In short, it identifies objects with specific color characteristics and sends you their position within the image scene, enabling you to use these objects as input devices. It means you can control your applications without touching the screen or keyboard (and thus it’s called touchless :-)).

Original project page is here: http://touchless.codeplex.com/

Touchless SL

Over a year ago I successfully used this library in WPF, so now it was quite easy to convert it again to Silverlight 4. You can check it out and download the source code on the demo page. Below I summarize some notes on things I learned about the Silverlight 4 API, in hopes that the Silverlight team can address these issues before the final release.

As a starting point I used this excellent summary by Mike Taulty. The main class that handles video processing is the CameraTracker. It extends VideoSink in order to get access to incoming video samples. Each sample frame is represented by a byte array whose format is specified by the VideoFormat structure. In order to get the current VideoFormat you have to override the OnFormatChange method (a call to videoSource.VideoCaptureDevice.DesiredFormat will throw an exception).

I learned that although VideoFormat specifies the PixelFormatType, currently it is an enum with only two values: Format32bppArgb and Unspecified. Its WPF counterpart – the PixelFormat structure – offers much more, including masks and bytes per pixel. And instead of an enum we get a static PixelFormats class with a number of common formats declared as properties. I hope this will be changed in Silverlight 4 RTM.

Another thing to be aware of is that the video frame buffer is flipped vertically – so you need to read the image lines from bottom to top. I handle this in the GetPixel and SetPixel methods of the RawBitmapAdapter helper class:

public RgbColor GetPixel(int x, int y)
{
    int p = _scan0 + (y * _stride) + (x * _bytesPerPixel);
    byte b = _imageData[p];
    byte g = _imageData[p + 1];
    byte r = _imageData[p + 2];
    return new RgbColor(r, g, b);
}

public void SetPixel(int x, int y, RgbColor color)
{
    int p = _scan0 + (y * _stride) + (x * _bytesPerPixel);
    _imageData[p] = color.Blu;
    _imageData[p + 1] = color.Grn;
    _imageData[p + 2] = color.Red;
}

In both cases I first calculate the offset of the pixel in the byte buffer using the scan0 and stride values. The stride is specified in VideoFormat but we have to guess the scan0 value (it specifies the offset of the first line):

if (_stride < 0)
    _scan0 = -_stride * (_height - 1);

It would be helpful to get this in the VideoFormat too. One last comment regarding VideoFormat: I think its constructor should be made public.
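The flipped-buffer arithmetic can be checked outside Silverlight with a tiny self-contained example (plain C# with one byte per pixel to keep it short; the names are made up for illustration): a 2x2 image stored bottom-up reads back in top-down order through the same scan0/stride math.

```csharp
using System;

class BottomUpDemo
{
    // Read a bottom-up single-channel buffer in top-down row order,
    // using the same scan0/stride arithmetic as the GetPixel method above.
    public static byte[] ReadTopDown(byte[] imageData, int width, int height)
    {
        int stride = -width;                   // one byte per pixel, lines stored upward
        int scan0 = -stride * (height - 1);    // offset of the top line
        var result = new byte[width * height];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                result[y * width + x] = imageData[scan0 + (y * stride) + x];
        return result;
    }

    static void Main()
    {
        // 2x2 "image" stored bottom-up: the bottom row {30, 40} comes first.
        byte[] bottomUp = { 30, 40, 10, 20 };
        Console.WriteLine(string.Join(" ", ReadTopDown(bottomUp, 2, 2)));
        // prints: 10 20 30 40
    }
}
```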

Before initializing the camera, the app displays a list of available devices. I tried to bind the collections from my ViewModel; however, it turns out that calls to GetAvailableVideoCaptureDevices and GetDefaultVideoCaptureDevice return different instances of CaptureDevice, so you can’t do something like this:

CaptureDevices = CaptureDeviceConfiguration.GetAvailableVideoCaptureDevices(); 
SelectedCaptureDevice = CaptureDeviceConfiguration.GetDefaultVideoCaptureDevice();

Also I noticed that none of the devices returned by GetAvailableVideoCaptureDevices has the IsDefaultDevice flag set, so I had to use FriendlyName to match it.

Because CaptureSource is used in a similar way to MediaElement, I think it should expose a similar API. In particular it would be helpful to get events when its state changes. We should also get some notification when a camera is connected or disconnected.

In order to show the markers recognized by Touchless I paint these areas on an overlay. In the current version I tried to implement this using a custom MediaStreamSource with double buffering, as demonstrated by Pete Brown. It works, but you can see the overlay lags significantly with respect to the camera video. So I’m going to switch back to WriteableBitmap instead, though I’m not sure how that will perform either. In particular I’m concerned that WriteableBitmap doesn’t have a way to indicate that we are going to begin updating the buffer. In the WPF version you first need to issue Lock to get access to the BackBuffer. I was a bit surprised that this isn’t required in the Silverlight version, and I’m concerned how efficient it is without it.

Altogether the camera API looks very promising and definitely opens many possibilities. However, I think the most common scenario people will try to implement will be some sort of online IM client, and for this we would also need some generic video codecs for streaming. Right now Silverlight doesn't even have codecs for JPEG/PNG images, so I think that should be added first. In the meantime you can try these as an alternative: http://imagetools.codeplex.com/ Update: please see this great post by Rene Shulte on saving webcam snapshots to JPEG using the FJCore library.

One last thing, unrelated to the camera API, regards the support for commands that was also added in Silverlight 4. I found that a command's CanExecute method won't be re-queried automatically on user input as happens in WPF, so you need to manually raise the CanExecuteChanged event for each command.
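A minimal command implementation that supports this pattern might look as follows (a common DelegateCommand sketch, not code from the sample download):

```csharp
using System;
using System.Windows.Input;

// Minimal relay command: since Silverlight has no CommandManager to re-query
// CanExecute automatically, the owning ViewModel calls RaiseCanExecuteChanged
// itself whenever the relevant state changes.
public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Func<object, bool> _canExecute;

    public DelegateCommand(Action<object> execute, Func<object, bool> canExecute = null)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Call this from the ViewModel when the CanExecute result may have changed.
    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
```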

Author: "Szymon Kobalczyk" Tags: "Development, Silverlight"
Date: Friday, 27 Nov 2009 17:42

Originally posted on: http://geekswithblogs.net/kobush/archive/2009/11/27/136572.aspx

I came across this interesting thread on the Arduino forum about using the IKEA DIODER and other RGB LED strips to build a mood/ambient light. This allows you to create the same effect that you can now find on some TVs. I thought it would be a fun little project to do, and here is a short clip showing the result:

In case you would like to build it yourself here is how I did it.

The Hardware

To get multicolor LEDs some people are hacking IKEA DIODERs, but that’s quite expensive, so I followed the advice found here and ordered two RGB Multicolored 1-Meter LED strips from DealExtreme. These are flexible plastic strips with 30 multicolor LEDs each (you can also get them as a rigid bar). Each has 4 wires: a +12V common anode and separate cathodes for the RGB channels.

To control the lights I adapted the circuit shown in Markus's post. I put it together on Adafruit’s Proto Shield for Arduino (which is nice because it has the IC pattern for the chip).


Instead of multiple transistors this circuit uses the ULN2003 chip: each PWM pin from the Arduino is connected to one side of the chip, and the RGB channels from the LEDs are connected to the other side. Because the LEDs require 12V there is a separate power connector (the Arduino is still powered from USB). Since I had only a 16V power supply, I’m also using a 12V voltage regulator. Next to it I have a push switch to turn the LEDs on/off quickly. The 10K potentiometer on the other side is used to select the color hue in one of the manual modes. One push button is wired to the Arduino reset pin, and the other is used to switch modes.

I know it could be made prettier, but I have yet to learn how to design and produce a custom PCB (please contact me if you’d like to help). If you don’t want to solder this yourself, here are some other alternatives that you might consider:

For initial testing I used the Arduino sketch written by Markus. It has three modes: a specific color selected by the potentiometer, pulsating the selected color, and smooth transitions between random colors. You can find more details in his post.

The Software

Of course it would be hard to get the ambient light working for every video source you feed to your TV, but it’s a lot easier when the signal goes through a PC. This summer I built a new HTPC based on a Zotac ION-ITX board (because it’s super small and quiet). Recently I ditched Windows 7 Media Center in favor of the open source XBMC, mostly to get hardware accelerated VDPAU playback. Here are the steps to run it on a minimal Ubuntu install (though I admit it took me three days to figure out how to do it properly).

Apart from the video player we also need an additional piece of software that will constantly sample the image visible on screen and produce an average color for each LED strip. Thankfully there are existing programs that can do this for us. One of them is called Boblight, and it turns out to work quite well both on Windows and Linux (but again you get a little speed boost on the latter).

In essence Boblight runs in the background, accepts color input from clients and converts it to commands that it sends to the devices. I have to say I was pleasantly surprised to see how flexible it is, and how easy it was to configure for my setup. You can find more details on the configuration on this page, and my config file is included in the download below.

The final piece is the code running on the Arduino that translates these commands into corresponding light levels for each channel. Boblight uses a simple serial protocol called LTBL (Let There Be Light) that is described here. In the download you will find my sketch that handles this communication.

Currently I have Boblight set up to start automatically before XBMC loads, so it turns on the lights on startup (some advice can be found here). This setup serves me well, but as I said, Boblight runs on Windows too. You can also try other software like Momolight, AtmoLight, VLC Media Player, and other projects found at Ambilight4PC.

Author: "Szymon Kobalczyk"
Date: Wednesday, 28 Oct 2009 02:02

Now that .NET Framework 4.0 Beta 2 is out, let’s look again at what is available for building multi-touch applications in WPF. In Beta 1 we got only a preview of the manipulation and inertia components. With Beta 2 we finally get access to the whole touch input system, and it looks very close to what was shown at PDC last year. Here is an overview from MSDN:

Elements in WPF now accept touch input. The UIElement, UIElement3D, and ContentElement classes expose events that occur when a user touches an element on a touch-enabled screen. The following events are defined on UIElement and respond to touch input. Note that these events are also defined on UIElement3D and ContentElement.

In addition to the touch events, the UIElement supports manipulation. A manipulation is interpreted to scale, rotate, or translate the UIElement. For example, a photo viewing application might allow users to move, zoom, resize, and rotate a photo by touching the computer screen over the photo. The following events have been added to UIElement:

To enable manipulation, set the IsManipulationEnabled property to true.

This is exactly what we were waiting for. I started playing with this new API and I will try to share some samples soon. But here is one problem I ran into already. When I tried to use touch in an ordinary WPF application, I noticed that all Buttons behave strangely: when I touched a Button it didn’t go into the pressed state immediately (I’m testing this on an HP TouchSmart). Instead it only fires the Click event when I lift my finger. Testing it more, I noticed that no UIElement fires the MouseDown event until I lift my finger or slide it quite a bit.

To help me diagnose this issue I created a simple test application shown here:


From left to right I have gray rectangles that react to input events from mouse, stylus and touch. Each rectangle changes color to orange when the mouse or stylus is over it, and to green when it is pressed. In addition, on press I capture the appropriate device, and during capture the border changes color to red.

This test confirms that when I use the touch panel the xxxDown events don’t arrive immediately, regardless of the device used. I spent quite a while trying to figure it out, trying various settings in both the Touch and Pen Control Panel and NextWindow’s USB Config utility.

Finally I noticed that somehow this works perfectly fine when I use manipulations. From this I quickly found that the problem goes away when you set the IsManipulationEnabled property on the element. It turns out that a side effect of setting this property to true is that it also changes the stylus properties to disable the PressAndHold and Flicks gestures on the element. This explains the problem, because the stylus engine has to postpone these events while trying to interpret the gestures. You can see the difference by selecting the appropriate checkbox in the window.
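If you want the immediate Down events without enabling manipulations, the same stylus properties can be set directly (a sketch using the standard WPF Stylus attached properties, for some UIElement named element):

```csharp
// Disable the gesture detection that forces the stylus engine to
// postpone the Down events (the same side effect IsManipulationEnabled has):
Stylus.SetIsPressAndHoldEnabled(element, false);
Stylus.SetIsFlicksEnabled(element, false);
```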

However, although we now get the xxxDown events immediately (which will be very useful for manipulations), this doesn’t fix the problem with the Button pressed state. You will notice that the MouseDown event is still deferred regardless of the IsManipulationEnabled setting. I believe that in this case it might be caused by the input logic in Windows itself. In fact, the only way I could affect this was to force the touch panel to report input as mouse events (using NextWindow’s USB Config tool).

In the end I believe the proper fix would require extending all WPF built-in controls so that they understand the touch events and react accordingly. At the same time some controls might get multi-touch specific behaviors as well. For example, it was already announced that ScrollViewer will be enhanced to support multi-touch panning. In the case of Button it was mentioned several times that when simultaneous touches occur, the correct behavior should be to fire the Click event only after the last touch is lifted. I hope we get some of these enhancements in the RTM timeframe.

You can download the sample code here.

Author: "Szymon Kobalczyk" Tags: "Multitouch, Development, WPF"
Date: Monday, 06 Jul 2009 09:27

Last Friday I received my new Futaba S3001 servos. It turns out they are much easier to modify than the Hitec ones, and at the same time they seem more reliable. I simply followed this Instructable, and you can also find some additional photos in my gallery. Most importantly, the servos are also easier to calibrate, so now my SERB will actually stop in place when I want it to.

Below is a short video of the completed SERB robot, where it just runs in random directions:

This weekend I also started looking at Microsoft Robotics Developer Studio 2008. This environment promises an easy entry into robotics. However, this is true only when you have a compatible robotics platform (like Lego Mindstorms NXT, fischertechnik, iRobot and a few others). There are pretty good tutorials on how to control these robots both from C# and from VPL. However, before I can apply them to my SERB robot, I first need to implement the hardware interface and generic services to control the robot. There is not much information on this, and the closest thing I found so far is Robotics Tutorial 6 – Remotely Connected Robots. It would be cool to get a more detailed example of writing such services, but fortunately the source code of services for other robots is included. In particular I’m trying to understand the code for iRobot and this additional service for VEX Robotics. If you know of any other good examples or tutorials please let me know.

Author: "Szymon Kobalczyk" Tags: "Development, Robotics"
Date: Thursday, 02 Jul 2009 07:05

A few weeks ago we went to an exhibition in one of Kraków’s malls celebrating the 100-year anniversary of robotics. Besides some nice exhibits there was also a place where kids could try to build a robot from Lego Mindstorms NXT and fischertechnik sets. We spent a few hours there with my older son Jaś and even managed to build a walking robot.

It was fun for both of us, so I started looking into whether we could continue playing with robots at home. Unfortunately most of the sets I found are pretty expensive (starting at $200). This is where my good pal Marcin Książek turned me on to the Arduino electronic prototyping platform.

Arduino is a really simple and cheap microcontroller kit that can be easily connected to a PC and programmed. What makes it fun is that there are also many extensions that come as “shields” and adapters for the board and add capabilities such as XBee and Ethernet communication, an accelerometer, DC motor control, GPS or even a touch LCD. The platform is open source, so there are tons of information and fun projects on the web. A good place to start would be the Arduino Experimentation Kit (ARDX) – you can buy the kit from oomlout, but it’s also open source so you can download all the guides and buy the parts at your local electronics store yourself. I also enjoyed reading the book “Getting Started with Arduino” by Massimo Banzi. And make sure to send the Arduino gift guide to your relatives. There are even Arduino user groups, such as Howduino in Liverpool.

[In Poland you can buy Arduino from Nettigo]

All of this makes Arduino a great basis for building a simple robot. And in fact there is already a project on Instructables that shows exactly How to build an Arduino Controlled Servo Robot. Again, this project is open source, so you can buy a kit from oomlout or download the blueprints and find all the parts yourself. The body of the robot is made from acrylic sheets, and I found a local shop with a laser cutter that could cut the parts for me. Most of the other parts can easily be found in your local DIY store.

One exception is the servo motors. Normal servos for RC models are constrained to a 180-degree range of movement, but what we need here is a “continuous rotation” servo. You can get a pre-modified one, such as the Parallax, GWS S35, or Hitec HSR-1425CR. Another option is to modify a servo yourself; there are many instructions on how to do it on the web. I found that it’s really not hard to do, but you have to work a bit on the calibration.
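A continuous-rotation servo takes the same pulse-width signal as a normal servo, but the pulse width controls speed rather than position: about 1.5 ms means stop, and shorter or longer pulses spin it in opposite directions. The calibration mentioned above amounts to finding a small trim offset at which the servo truly stands still. A toy sketch of that mapping (the 1000–2000 µs range and the trim parameter are common hobby-servo conventions, not the specs of any particular servo):

```python
def speed_to_pulse_us(speed, neutral_us=1500, span_us=500, trim_us=0):
    """Map a speed in [-1.0, 1.0] to a servo pulse width in microseconds.

    speed = 0 should stop a continuous-rotation servo; trim_us is the
    per-servo calibration offset you find experimentally.
    """
    if not -1.0 <= speed <= 1.0:
        raise ValueError("speed must be within [-1, 1]")
    return int(neutral_us + trim_us + speed * span_us)
```

For example, `speed_to_pulse_us(0)` gives the 1500 µs neutral pulse, and full speed in either direction lands on 1000 or 2000 µs.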

So, having all the parts, I could finally begin the assembly. Below is a photo gallery from building our first SERB robot. As you can see I’m one happy geek dad, because both kids were pretty interested in it.

Right now the robot can’t do much besides running around in random directions, but I have some plans to make it smarter.

  1. Add remote control via an Xbox pad or WiiMote. There is already a project showing how to use a netbook to control the robot via Skype. But being a .NET geek, I would like to use Microsoft Robotics Studio for this.
  2. Add a wireless connection with XBee so that the robot can roam freely without cables while still being controlled from the PC.
  3. Add some sensors so the robot gets more information about the world. I think first I would add bumpers to detect obstacles, as shown in this project. Later I might also add wheel encoders (such as the Nubotics WheelWatcher) for odometry.
  4. Play with some AI algorithms.
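The odometry mentioned in point 3 boils down to the well-known differential-drive math: convert each wheel’s encoder ticks to distance traveled, then integrate position and heading. A rough sketch (the tick count, wheel radius, and track width below are made-up example values, not from the SERB design):

```python
import math

def update_pose(x, y, theta, left_ticks, right_ticks,
                ticks_per_rev=128, wheel_radius=0.03, track_width=0.12):
    """Advance a differential-drive pose estimate from encoder tick deltas.

    Distances are in meters, theta in radians. Uses the midpoint heading
    for the position update, a common small-step approximation.
    """
    per_tick = 2 * math.pi * wheel_radius / ticks_per_rev
    d_left = left_ticks * per_tick
    d_right = right_ticks * per_tick
    d_center = (d_left + d_right) / 2          # distance of robot center
    d_theta = (d_right - d_left) / track_width # change in heading
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta
```

Equal tick counts move the robot straight ahead; equal and opposite counts rotate it in place.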

So it looks like I already have found myself a summer project :-)

Author: "Szymon Kobalczyk"
Date: Wednesday, 17 Jun 2009 10:22

Glenn Block Tour

Next week several .NET user groups in Poland will host a special guest: Glenn Block, Program Manager on the .NET FX team (and previously PM for p&p “Prism”). As you probably already know, Glenn is the man behind the Managed Extensibility Framework (MEF) project, so this will be the main topic of his talks:

Building openly extensible applications in .NET 4.0

Are you tired of building monolithic-style apps? Are you tired of hacking your app to bits to meet just one more requirement? Do you want to enable third parties to provide add-on value to your apps?

If the answer to any of these is yes, then come learn about the new Managed Extensibility Framework which ships in .NET 4.0. 

Applications built on MEF dynamically discover and compose available components at runtime. This makes MEF ideal for third-party extensibility scenarios, where the type and number of extensions are undefined. With MEF you can enable customers and third parties to take your apps where no man has gone before.

During the tour Glenn will visit the following cities:

You can register for the meeting in the city nearest you by clicking the links above. More information is available on the official website. Also visit Glenn’s blog and leave him a comment if you’d like him to cover any particular topic.

Author: "Szymon Kobalczyk" Tags: "People, User Groups"
Date: Monday, 27 Apr 2009 07:09

Registration for the Speaker Idol 2009 contest closed yesterday, so we now know the names of the seven contenders who will face off on Wednesday evening. Here they are:

  1. Marcin Kruszyński – Innovations in Silverlight 3
  2. Michał Brzozowski – The “SOLID” principles
  3. Wiesław Nizio – Microsoft Excel and Windows Forms: Ole Automation or OleDb?
  4. Jakub Malinowski – CRUD UI for data in .NET
  5. Krystian Czepiel – Using the MVC model to build ASP.NET applications
  6. Łukasz Podolak – Software Craftsmanship
  7. Arkadiusz Beer – Pex: automated test generation

You can already read their presentation topics, published on the contest page.

We warmly invite you to the meeting – the evening promises to be really interesting!

Author: "Szymon Kobalczyk" Tags: "People, User Groups"
Date: Thursday, 16 Apr 2009 07:41

Speaker Idol 2009

Want to test yourself as a presenter? Take part in the Speaker Idol contest and become the audience’s idol at CodeCamp 2009! All it takes is a 10-minute presentation. The rules are simple, and the prizes are attractive...

What do you need to do?

  1. Prepare a short presentation (7-10 minutes).
  2. Submit your entry by Sunday, April 26, 2009.
  3. Give your presentation at the KGD meeting on Wednesday, April 29, 2009, and wow the audience.
  4. Win the audience vote and collect your prize!

Send your entries to si@codecamp.pl

What can you win?

  • A speaking slot at CodeCamp 2009
  • Visual Studio 2008 Professional
  • Licenses for Telerik products
    Your choice of: OpenAccess for ORM, Sitefinity, Telerik Reporting, RadControls for WinForms, RadControls for ASP.NET Ajax, RadControls for WPF, or RadControls for Silverlight
  • Nevron Chart for .NET Lite edition
  • NDepend tools
  • Microsoft backpacks
  • A webcam and a mouse
  • and more...

More information at: http://codecamp.pl/pl/speakeridol.aspx

Meeting page: http://ms-groups.pl/kgd.net/47_spotkanie/default.aspx

Author: "Szymon Kobalczyk" Tags: "User Groups, People"
Date: Tuesday, 10 Mar 2009 21:35

A short while after I wrote about resources for multi-touch on Windows 7, Daniel D left a comment that got me very excited:

MultiTouchVista now has a driver that emulates multitouch hardware for Windows 7

I had to try it out myself! A bit later I got everything up and running, and I’m happy to report that it’s all true: you can now effectively emulate multi-touch devices under Windows 7. You can see it for yourself in this video. In fact it works so well that I was able to demonstrate it last Saturday at the 4Developers conference.

That being said, the process of getting it running involves a few steps, so I thought it would help to have a walkthrough for anyone who would like to try it. So here it goes.


Of course you need a PC. Note that computer vision puts the CPU under heavy load; I was able to get barely 15 FPS on my 1.6 GHz laptop, so the interaction wasn’t very smooth. In fact the Windows 7 specification requires multi-touch devices to report at least 50 Hz per finger.

The software we are going to use can work with a number of protocols. One option is to simply connect multiple USB mice to emulate multi-touch. But it is much more fun to build your own Surface-like table. To start, you can build the MTmini table designed by Seth Sandler – it takes about 20 minutes to build, and all you will need is:

  • Webcam – people try different kinds of webcams for this, but I think even something simple with decent video quality should work. I get great results with the Logitech QuickCam Pro 9000 because it has excellent video quality at 640x480 and, most importantly, you can set the focus and other parameters manually.
  • Tracing paper – its purpose is to diffuse the light coming into the camera so that only objects very close to the surface (i.e. your fingers) look sharp. Initially I used ordinary printer paper, but tracing paper gives a more even image.
  • Photo frame – no, I’m not talking about any fancy LCD frames. It’s just a plain wooden frame (word of advice: bad things will happen if you try to repurpose the frame from your wedding photo). The frame I use is A4-sized (210 mm × 297 mm) and fits nicely on top of a cardboard box from packs of A4 office paper.
  • Cardboard box, duct tape, and scissors.

When you have all the materials here is what you do:

  1. Put the camera at the bottom of the box facing up, and fix it in place with the duct tape. You might need to cut out a hole for the camera’s USB cord.
  2. Remove the backing board from the photo frame and put only the tracing paper on the glass.
  3. Put the frame on top of the box.

You can also watch this video for step by step instructions.

Here is my completed setup:

My MTmini setup


Obviously you need to have Windows 7 Beta installed, so hopefully you were able to download it while it lasted.

The next thing to download and configure is tBeta. Here is a description from the official website:

The Beta, tbeta for short, is an open source, cross-platform solution for computer vision and multi-touch sensing. It takes a video input stream and outputs tracking data (e.g. coordinates and blob size) and touch events (e.g. finger down, moved and released) that are used in building multi-touch applications. tbeta can interface with various web cameras and video devices as well as connect to various TUIO/OSC enabled applications and supports many multi-touch lighting techniques including: FTIR, DI, DSI, and LLP with expansion planned for the future (custom modules/filters).

Download the software from the website above and unzip it to a folder of your choice. Before you run it, you might want to change the configuration to use a higher resolution if your webcam supports it. Go to the \tbeta\data folder and open the config.xml file. Specify the correct values for WIDTH and HEIGHT in the CAMERA_0 node.
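For reference, based on the description above the fragment you edit should look roughly like this (only the CAMERA_0, WIDTH, and HEIGHT names come from the post; the surrounding structure of config.xml may differ in your tbeta version):

```xml
<!-- \tbeta\data\config.xml : set a capture resolution your webcam supports -->
<CAMERA_0>
    <WIDTH>640</WIDTH>
    <HEIGHT>480</HEIGHT>
</CAMERA_0>
```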

One more thing I did to get more predictable results was to turn off all automatic adjustments in the camera software. For the Logitech QuickCam this means switching to manual focus and turning off the automatic settings for RightLight, exposure, and gain. We are not going to move the camera around or dramatically change the lighting conditions, so it is best to keep all settings constant.

When you run tbeta.exe you should see something like this:


The tricky part is calibration, which means adjusting the sliders for all the filters at the bottom so that you get the best recognition accuracy. Unfortunately I don’t know any rules that would help you with this; you have to play with the settings a bit to get a feel for how it works. If you get it right, you should see an outline and a unique id associated with each of your fingers, as seen in the screenshot above. When all works well you can minimize this window (press the spacebar) so it won’t consume CPU for rendering.

The last piece of this puzzle is the MultiTouchVista project that I mentioned in a previous post. This framework, actively developed by Daniel D himself (aka nesher), adds multi-touch support to the current version of WPF. But for the purpose of this article the important thing is the recently added Windows 7 multi-touch driver. It’s not even officially released yet, so you need to go directly to the Source Code tab and download the latest bits (I ran changeset 18685).

You can follow the instructions in this video on how to install the driver and run the services. Here is a short summary:

  1. Compile all the projects in MultiTouchVista, following the instructions on this page.
  2. In Windows Explorer, go to the MultiTouchVista\Main Source folder and, while holding down Shift, right-click the Multitouch.Driver folder and select “Open Command Window Here”.
  3. In this window type: install driver.cmd, press Enter, and ignore any warnings about uncertified drivers.
  4. Open Device Manager and confirm that a new driver called “Universal Software HID device” is installed under “Human Interface Devices”. The video suggests disabling and then re-enabling the driver to ensure it is working correctly.
  5. To verify the driver is installed, you can open the “Pen and Touch” applet from Control Panel. It should now contain the Panning tab.
  6. Back in Explorer, go to the MultiTouchVista\Main Source\Output folder and run Multitouch.Service.Console.exe. By default it loads the MultipleMiceInputProvider, so you should see red dots indicating a “virtual cursor” for each USB mouse you have attached. But to use your MTmini table you need to change the configuration so it connects to tBeta through the TUIO interface.
  7. From MultiTouchVista\Main Source\Output, run Multitouch.Configuration.WPF.exe. From the list of available devices select Tuio and click the arrow button in the middle to make it the active device. Then click the “Restart service” button to apply the new configuration.
  8. Finally, go to MultiTouchVista\Main Source\Multitouch.Driver.Console\bin\debug and run Multitouch.Driver.Console.exe.
  9. Now if you put a finger on the surface, you should see input messages coming into the console window. And of course you should now be ready to use it as an input device for Windows.

Try it on the several touch-enabled applications that I listed in my last post (XPS Viewer and Paint are great for a quick demo). You can also find some C++/C# examples on MSDN Code Gallery.

Many thanks to Daniel D for leaving this inspiring comment, and even bigger compliments for developing such a great library. Please don’t hesitate to let me know if I can help with anything.

Author: "Szymon Kobalczyk" Tags: "Development, WPF, Smart Client"
Date: Wednesday, 18 Feb 2009 10:56

Last month I wrote about two conferences coming up soon in Poland. The organizers have been busy all this time lining up great speakers and technical content, and today we know the full agenda of both events.

4Developers, 7th March, Kraków

Here is the agenda for .NET track:

  • Ted Neward - Busy .NET Developer’s Guide to F#: Introduction
  • Marcin Celej - Charts with the .NET Charts library
  • Szymon Pobiega - Inversion of Control in systems built on an object domain model
  • Marcin Kruszyński - Model-driven development with “Oslo”
  • Barbara Fusińska - From aspects to Design by Contract: expressing your intentions
  • Mateusz Kierepka - BOKU, now KODU (Xbox 360): a novel visual programming language
  • Raymond Lewallen

I’m especially proud that we also have KGD speakers representing us at this conference.

You can find more information and registration form here: http://4developers.org.pl/

(There is a special discount available for KGD.NET members, so contact me before you register.)

C2C 2009, 14th March, Warsaw

I think C2C aims at a more sophisticated audience, providing great content from well-known international speakers as well as our best Polish speakers, selected through the Speaker Idol contest. Just take a look at the list of sessions for the .NET track:

  • Marek Byszewski - Tour de VSTS 2010
  • Julia Lerman - My Favorite Entity Framework Tips & Tricks
  • Piotr Leszczyński - Yet another Dependency Injection container? No, thank you! A few words on the meta-container concept
  • Ingo Rammer - Hardcore Production Debugging of .NET Applications
  • Udi Dahan - Avoid a Failed SOA - Business and Autonomous Components to the Rescue
  • Artur Paluszyński - Interactive 3D scenes in Windows Presentation Foundation

Also check out the agendas for the SQL and IT Pro tracks here: http://2009.c2c.org.pl/pl/sessions.aspx

Update: Registration ended very quickly because the number of tickets was very limited. I hope you made it, though, and that we can meet there.

Both conferences look very promising, and I will do my best to participate in both.

Author: "Szymon Kobalczyk" Tags: "User Groups, People"
Date: Sunday, 15 Feb 2009 21:45

Ever since I got my first digital camera I have been trying to create panoramic photos: taking multiple shots of the view around me and then stitching them together into one image to create a panorama.

At the time I used a tool called PTGui, and below you can see a couple of attempts from that period.

Long Beach, CA

Harbor in Long Beach, CA (3 photos)


Castle near Vienna, Austria (2 photos)

This tool is still around, but nowadays we have some easier options. 

Windows Live Photo Gallery

If you use Windows Live Photo Gallery you have probably already found out that it has an option to create panoramic photos. Simply select all the photos in your series and use the menu option “Create panoramic photo”. The application doesn’t ask any further questions and immediately starts the conversion. When it’s done, it only asks where to save the final image.

Here are the same shots stitched using this tool:



One of the problems with panoramic photos is that the algorithm needs to figure out the correct projection so that your scene has natural proportions throughout (a problem similar to projecting the Earth’s surface onto maps in cartography). In my opinion, even with default settings the photos from Windows Live Photo Gallery are much less deformed (especially at the sides) than the previous examples. I’m not saying that PTGui is a bad tool; you can probably get the same result with it if you figure out the correct settings.
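For the curious, the standard trick behind these projections is cylindrical mapping: each pixel is projected through the camera’s focal length onto a cylinder, which keeps horizontal proportions natural across a wide field of view. A simplified sketch (real stitchers also estimate focal length and lens distortion per photo; this is only the core formula):

```python
import math

def to_cylindrical(x, y, cx, cy, focal):
    """Project an image point (x, y) onto a cylindrical surface.

    (cx, cy) is the image center and focal is the focal length in pixels.
    Points far from the center are pulled inward, which removes the
    stretched look at a panorama's edges.
    """
    theta = math.atan((x - cx) / focal)        # horizontal angle on the cylinder
    h = (y - cy) / math.hypot(x - cx, focal)   # height on the cylinder
    return focal * theta + cx, focal * h + cy
```

The image center maps to itself, while an edge pixel lands noticeably closer to the center than in the flat (planar) projection.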

Microsoft ICE

But there is another tool that you might not know about, called Image Composite Editor (ICE), available from Microsoft Research. In fact it uses the same algorithm as Windows Live but gives you more control over the conversion process.


There are three groups of options that you can select:

  • Stitch – allows you to select the type of camera motion used to create the shots. You have a choice between three types of planar motion and rotating motion, but most of the time you can leave it on Automatic.
  • Crop – while in WLPG you have to crop your images manually, here you can do it automatically or adjust the crop before saving the output image.
  • Export – the output image can be saved not only as a single bitmap image (JPEG, TIFF, BMP or PNG), but also as multi-resolution tiled formats like HD View and Silverlight Deep Zoom.

But the best feature is that you can preview your scene in 3D. And with enough photos you can even create a 360-degree view all around you. This view also allows you to adjust the panorama’s center and curvature, and even change the type of projection from planar to cylindrical or spherical.


Tip: Notice that in the panorama above not all the photos are aligned on the same baseline. This is because I shot them quickly, handheld. To get the best results, always use a tripod and lock the camera to the same settings for each shot in the series.

Deep Zoom Composer

Because a panorama consists of multiple high-resolution photos, stitching them together easily produces some insane resolutions. Clearly, putting such large files directly on your webpage isn’t the best option, and scaling them down defeats the point because you lose all the fine detail. Here Silverlight Deep Zoom comes to the rescue, and it’s very convenient that Microsoft ICE includes it as one of the export options.
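A quick back-of-the-envelope shows how fast the numbers grow and how a Deep Zoom-style tile pyramid copes: resolution is halved level by level until the image fits in a single pixel. (The shot width and overlap fraction below are illustrative guesses, not measurements from my photos.)

```python
import math

def stitched_width(shot_width, shots, overlap=0.3):
    """Approximate panorama width when adjacent shots overlap by `overlap`."""
    return int(shot_width * (1 + (shots - 1) * (1 - overlap)))

def pyramid_levels(width, height):
    """Levels in a halve-until-1x1 tile pyramid: ceil(log2(max side)) + 1."""
    return math.ceil(math.log2(max(width, height))) + 1
```

For instance, eight 3264-pixel-wide shots with 30% overlap give a panorama roughly 19,000 pixels wide, which needs a 16-level pyramid.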

The same option was also added in recent versions of Deep Zoom Composer. Add your photos in Compose mode, select them all, right-click, and select “Create panoramic photo”. As in Windows Live Photo Gallery, you can’t adjust any parameters of the algorithm. Of course, you can combine the benefits of both tools: first stitch your images in ICE and save the result as a high-resolution image, then import it into Deep Zoom Composer.


However, the main difference is that Deep Zoom Composer allows you to easily publish your panorama to PhotoZoom, or, if you want more control, you can host it yourself on Silverlight Live Streaming; simply follow the instructions in this tutorial.

Below you can see two panoramas that I created this way from photos shot last week on our vacation in Egypt (click an image to open the Silverlight page).


Beach at the Sharm Grand Plaza Hotel (8 photos)


Desert outside Sharm El Sheik (11 photos)

As you can see, creating panoramic photos is quite easy nowadays, with all these tools available to do the hard work for you. I hope this encourages some of you to take great shots on your next trip. And don’t hesitate to leave a link to your panorama in the comments.

Author: "Szymon Kobalczyk"
Date: Friday, 13 Feb 2009 07:19

For the last couple of months I’ve been working on a very cool project that utilizes the new multi-touch features in Windows 7. Although I can’t talk about our product yet, I thought it would be good to start sharing my experience with multi-touch programming. I’ll begin with some general resources to get you started.


Of course the first thing you need is multi-touch capable hardware. As far as I know there are currently only three devices available on the market. Those lucky enough to be at PDC could see that most demos ran on the HP TouchSmart All-in-One PC or the Dell Latitude XT tablet. Later in December HP released the TouchSmart tx2z tablet, and this week Dell announced specs for the Latitude XT2. For my work I use the HP TouchSmart IQ504 PC.

The good news is that we have already seen announcements from other manufacturers, so we can expect the number of devices to increase closer to the Windows 7 release date.

In terms of touch-screen technology, both tablets use the same DuoSense capacitive digitizer from N-Trig, while the HP All-in-One PC uses an optical overlay developed by NextWindow. The main difference is that the NextWindow device supports only two touch points, while N-Trig’s can recognize more contacts.

Unfortunately multi-touch works only with dedicated hardware, so you can’t use other digitizers (like Wacom’s), touchpads, or Tablet PCs. There is also no way to emulate multi-touch on Windows 7, for example by attaching multiple mice, although this is supported in the Surface SDK emulator, as demonstrated by Scott Hanselman (see around 14:30).


In terms of software you need two things: Windows 7 and the proper multi-touch driver for your device. Currently I run Windows 7 Beta 1 (build 7000 for x86), and when I did a clean install today most of the drivers were available through Windows Update. The only driver I had to install manually was for the Ralink WLAN adapter. However, the TouchSmart version I have doesn’t have a TV tuner, so if yours does you might need to install some additional software. Fortunately Kurt Brockett published a detailed guide on how to set up Windows 7 on the HP TouchSmart.

If by any chance the multi-touch drivers don’t install automatically, you can download them directly from NextWindow’s or N-trig’s websites:

To verify that everything works, open any page in IE8; you should be able to use two fingers to zoom.

Here are some other areas in Windows 7 that were enhanced with multi-touch features:

  • Panning with inertia is enabled “everywhere” where scrollbars exist.
  • Paint
    • Choose a brush from the “brush gallery” and you can then multi touch finger paint.
  • Games
    • Hearts/Solitaire have been optimized for touch
  • Shell
    • Windows Snapping (Aero Snap) with Touch
    • Aero Peek with Touch
    • Taskbar Jump Lists with Increased spacing
  • Windows Media Player
  • Windows Photo Viewer
    • Zoom, Rotate, Panning and Flicks
  • XPS Viewer
    • Gestures (Zoom, Two-Finger Tap, Panning)
  • Media Center
    • Direct Panning in most Scrollable Views and Menus
  • Touch Keyboard / TIP
    • Multi-touch touch keyboard
  • Internet Explorer 8
    • Panning
    • Drag Menu (Address Bar) with Increased Spacing
    • Increased Spacing for Favorites with Touch
    • Gesture (Zoom, Panning, Flicks back and forth)
  • Windows Live Photo Gallery
    • Gesture support in the Viewer

You can also download IdentityMine’s Air Hockey game from: www.identitymine.com/airhockey

Tip: If you use touch on a regular basis, it is a good idea to increase the size of the Windows UI (fonts, icons, etc.) by changing your display settings to Medium (125%). This will make Windows much more “touchable”.

Multi-touch Programming

Most of what we know so far about the multi-touch APIs introduced in Windows 7 and WPF 4.0 comes from the PDC session by Reed Townsend and Anson Tsao.

There are also two later sessions from WinHEC 2008: Multi-Touch in Windows 7 Overview covers the basics and repeats information from PDC, while Multi-Touch Driver Development and Logo Compliance is aimed more at hardware developers.

The recently published Windows 7 SDK Beta includes header files for the WM_TOUCH and WM_GESTURE messages, along with the related functions and structures. It also includes some documentation and samples (also available on MSDN).

When you have all of the above installed, you can try the code samples from the PDC hands-on labs, available on MSDN Code Gallery. These samples are in both C++ and C#. But to get this working in WPF you will need a more advanced framework in place, and I suggest you first take a look at the Multi-touch Vista project on CodePlex. You can also take a look at the MIRIA SDK, which adds multi-touch support to Silverlight apps. Finally, for those of you who already work with TUIO libraries (like Touchlib), here is an applet that translates WM_TOUCH to TUIO messages.
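As background on that last bridge: TUIO transmits cursor state as OSC messages such as /tuio/2Dcur, with "set" messages carrying per-finger coordinates normalized to [0, 1] and "alive" messages listing the currently active session ids. A toy decoder over already-parsed OSC arguments might look like this (a simplification of the full TUIO profile, for illustration only):

```python
def decode_2dcur(args):
    """Decode a /tuio/2Dcur message given its already-parsed OSC arguments.

    Handles only the common 'set' (session id, x, y, velocities, accel)
    and 'alive' (list of active session ids) commands.
    """
    cmd = args[0]
    if cmd == "set":
        session_id, x, y = args[1], args[2], args[3]
        return {"cmd": "set", "id": session_id, "x": x, "y": y}
    if cmd == "alive":
        return {"cmd": "alive", "ids": list(args[1:])}
    return {"cmd": cmd}
```

A consumer like MultiTouchVista's TUIO provider maps each session id to a touch contact and scales x and y by the screen resolution.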

More information on multi-touch and related technology can be found on the great NUI Group forum, including everything you need to build a multi-touch device yourself.


I hope this information gets you started; I will try to publish more on multi-touch programming in C# in the next few days. If you have any questions or suggestions, please leave a comment.

Author: "Szymon Kobalczyk" Tags: "Development, Smart Client, WPF"