

Date: Tuesday, 13 Sep 2011 20:26

Here are my notes from the Steven Sinofsky keynote at BUILD.

Keynote started with a video of developers, designers etc. working on Windows 8 giving their favorite features in Win8.

  • ~450 million copies of Win7 sold (1500 non-security product changes seamlessly delivered)
  • Consumer usage higher than XP
  • 542 million Windows Live sign-ins every month

Lots of change in Windows

  • Form factors/UI models create new opportunities (touch)
    • "People who say touch is only for small or lightweight devices are wrong. As soon as you use touch on a tablet, you're going to want to touch on your desktop & laptop."
  • Mobility creates new usage models – e.g. use while reclining on a couch
  • Apps can't be silos - "customers want a web of applications"
    • Apps to interact easily
    • Services are intrinsic

What is Win8?

  • Makes Windows 7 even better - everything that runs on Win7 will run on Win8
  • Reimagines Windows from the chipset (ARM work) through the UI experience
    • All demos shown today are equally at home on ARM and x86

Performance / Fundamentals

Kernel Memory Usage

  • Win 7 RTM: 540 MB, 34 processes
  • Win 7 SP1: 404 MB, 32 processes
  • Win 8 Developer Preview: 281 MB, 29 processes

Demos

User Experience (Julie)

  1. Fast and fluid - everything's animated
  2. Apps are immersive and full screen
  3. Touch first - keyboard/mouse are first-class citizens ("you're going to want all three")
  4. Web of apps that work together - "when you get additional apps, the system just gets richer and richer"
  5. Experience this across devices and architectures
  6. Notes from Julie's demo
  • Picture password - poke at different places on an image (3 strokes) to login
  • Tiles on the home screen - each is an app - easily rearranged. Pinch to zoom in/out
  • On screen keyboard pops up
  • Swipe from right side to bring up Start screen - swipe up from bottom to get app menus ("app bar") - relevant system settings (e.g. sound volume/mute) also appear
  • Select text in a browser - drag from right side to see "charms" - these are exposed by apps. One is "Share" - shows all apps that support the "Share contract".
    • Think of sharing as a very semantically rich clipboard.
    • Target app can implement its own panel for information (e.g. login, tags, etc.) for sharing when it's the target.
  • Search
    • Can search applications and files - apps can also expose a search contract to make it easy for search to find app-specific data (a rough sketch of what that looks like in code follows this list).
  • Inserting a picture
    • Shows pix on computer
    • Social networking sites can add content right into picture file picker
  • Showed settings syncing from one machine to another machine she's logged in on - and that second machine is an ARM machine.
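A rough idea of what the search contract registration looks like in app code - this is my own C# sketch, not from the keynote, using the WinRT Windows.ApplicationModel.Search API; the SearchResultsPage class and LookUpNotes method are hypothetical:

    using Windows.ApplicationModel.Search;
    using Windows.UI.Xaml.Controls;

    // Hypothetical page that participates in the Search contract. The app must also
    // declare the Search contract in its package manifest.
    public sealed partial class SearchResultsPage : Page
    {
        public SearchResultsPage()
        {
            InitializeComponent();

            // Windows raises QuerySubmitted when the user searches this app
            // from the Search charm.
            SearchPane.GetForCurrentView().QuerySubmitted +=
                (sender, args) => LookUpNotes(args.QueryText);
        }

        // Placeholder for the app-specific search over its own data.
        private void LookUpNotes(string queryText) { /* ... */ }
    }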

Metro-style Platform/Tools (Antoine)

  • Current platform is a mixed bag - silos of HTML/JavaScript on top of IE, C#/VB on top of .NET & Silverlight, and C/C++ on top of Win32
  • Metro apps can be built in any language
  • Reimagined the Windows APIs - the "Windows Runtime" (WinRT).
    • 1800 objects natively built into Windows - not a layer.
    • Reflect those in C#/VB.Net/C++/C/JavaScript
    • Build your UI in XAML or HTML/CSS
  • Launch Visual Studio 11 Express - new app to build Metro apps.
    • Pick the language you want - pick the app template you want.
  • Enable millions of web developers to build these apps for Windows.
  • Code you write can run either locally or in a browser from a web server - just JavaScript and HTML 5.
  • New format - App Package - that encapsulates everything needed to deploy the app
  • Use mouse or touch seamlessly - no special code.
  • Modify button to bring up file picker dialog…
    • Also allows connecting to Facebook: if the app that connects FB photos to the local pictures library is installed, every app gets access to FB photos through the picker.
  • Adding support for the "Share" contract is 4 lines of JS (a rough C# sketch of the same idea follows at the end of this list)
  • Use Expression Blend to edit not just XAML but HTML/CSS.
    • Add an App Bar - just a <div> on the HTML page.
    • Drag button into there to get Metro style where commands are in the app bar
  • Uses the new HTML5 CSS Grid layout. Allows for rotation, scaling, etc. Centers the canvas within the grid.
  • Expression lets you look at snapped view, docked view, portrait, landscape.
  • 58 lines of code total
  • Post app to the Windows Store
    • In VS Store / Upload Package…
    • Licensing model built into app package format. Allows trials.
    • Submit to Certification
      • Part of the promise of the store to Windows users is the apps are safe and high quality.
      • Processes can be a bit bureaucratic.
      • Does compliance, security testing, content compliance.
      • Will give Developers all the technical compliance tools to run themselves.
    • The Store is a Windows app. Built using HTML/JavaScript
  • Win32 Apps
    • Not going to require people to rewrite those to be in the store.
    • Don't have to use Win8 licensing model.
    • Give the Win32 apps a free listing service.
  • XAML / Silverlight
    • Using ScottGu's sample Silverlight 2 app.
    • Not a Metro app - input stack doesn't give touch access.
    • How to make it a Metro app?
      • Runtime environments between SL and Win8 are different.
      • Had to change some using statements and the networking layer.
      • Reused all the XAML and data binding code - it just came across.
      • Declare it supports "Search" and add a couple of lines of code.
    • Also can use same code on the Windows Phone.
    • "All of your knowledge around Silverlight, XAML just carries across."
  • If you write your app in HTML5/CSS/XAML, it will run on x86/x64/ARM. If you want to write native code, we'll help make it cross-compile to these platforms.
  • IE 10 is the same rendering engine as for the Metro apps.
  • Can roam all settings across your Win8 machines - including your app settings if you want.
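For comparison, here is roughly what that Share-source registration looks like. This is my C# sketch of the idea (the demo used JavaScript), built on the WinRT DataTransferManager API; the ArticlePage class and GetSelectedText helper are hypothetical, not from the keynote:

    using Windows.ApplicationModel.DataTransfer;
    using Windows.UI.Xaml.Controls;

    // Hypothetical page acting as a Share source.
    public sealed partial class ArticlePage : Page
    {
        public ArticlePage()
        {
            InitializeComponent();

            // Windows raises DataRequested when the user shares from this app
            // via the Share charm.
            DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
        }

        private void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
        {
            args.Request.Data.Properties.Title = "Selected text";
            args.Request.Data.SetText(GetSelectedText());   // placeholder for the app's own selection
        }

        private string GetSelectedText() { return "..."; }
    }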

Hardware Platform (MikeAng)

  • 8-second boot time on a Win7-era PC.
  • UEFI
  • New power state called "Connected Standby"
    • Windows coalesces all the timer and network requests, turns the radio on periodically to satisfy them, then goes back to very low power consumption.
    • But because app requests are getting satisfied they are up to date as soon as you press "ON"
  • USB 3 ~4x faster at copying a 1 GB file than USB 2
  • Can boot Win8 from up to 256 TB drive.
  • Direct Compute API - can offload compute loads to GPU
  • Every Metro app gets hardware-accelerated UI baked in.
  • Doing work with OEMs on testing sensitivity of touch hardware
    • Windows reserves only one pixel on each side for the Windows UI, so touch sensitivity is important.
  • Metro apps run at resolutions down to 1024 x 768. At 1366 x 768 and above you get the full Windows UI, including side-by-side snap. Any form factor - it's about resolution.
  • Have a sensor fusion API - accelerometer, touch.
  • NFC - near field communication - business card can have a little antenna built in to send data to Win8.
  • Integrating device settings (web cam, HP printer, etc.) into Metro UI rather than as a third-party app.
  • Ultra Books
    • Full core powered processor in a super-thin and light package.
    • Some are thinner than legacy connectors - RJ45 and VGA - they are bumps.
    • These things are mostly battery.
  • Samsung PC giveaway - to all BUILD attendees
    • 64 GB SSD
    • 4 GB RAM (Steven: "so you can run Visual Studio")
    • AT&T 3G included for one year (2GB/mo)
    • Windows tablet + development platform.
    • 2nd generation core i5
    • 1366x768 display from Samsung - amazing
  • Refresh your PC without affecting your files
    • Files and personalization don't change.
    • PC settings are restored to default
    • All Metro apps are kept - others are removed.
    • Command-line tool for pros to establish the base image used for this.
  • Hyper-V in the Windows 8 client
  • ISOs get mounted as DVD drives.
  • Multi Mon -
    • Screen background extends
    • Task bar adapts to multi-monitor - it can be identical across the two monitors or per-monitor (showing only apps running on that monitor)
    • Ctrl/PgDn to switch Metro start screen between the two monitors - develop on one, test on another.
  • Keyboard works the same - type "cmd" from the Metro Start screen and you're immediately searching for CMD.

Cloud Services (ChrisJo)

  • Windows Live mail Metro client connects both Exchange and Hotmail.
    • Full power delivered by ActiveSync.
  • Windows Live Metro calendar app.
  • Brings together all your friends from LinkedIn, Facebook, and Windows Live.
  • Photos
    • Connected to Facebook, Flickr, local photos.
    • Written as a Metro app.
  • SkyDrive - 100 million people.
    • Every Win8 user, every Win Phone has a SkyDrive.
    • Also accessible to developers - access it the same way you would local storage.

Wrap

  • Used college interns to develop sample apps included in dev preview build.
  • 17 teams (2-3 devs per team).
  • 10 weeks.

Developer Preview (not Beta).

Learn more:

MSFT will let everyone download the Developer preview starting tonight.

http://dev.windows.com

  • X86 (32- and 64-bit)
  • With Tools + Apps or just Apps
  • No activation, self-support.
Author: "Mike J Kelly"
Date: Monday, 12 Sep 2011 23:00

I'm down in LA (OK, Anaheim actually...) for the BUILD Windows 8 conference - what was previously known as the "Professional Developers' Conference" (PDC).  I'm not sure whether the name change means they want broader appeal beyond professional developers, but we'll see.  At $1600-$2400 to attend (plus travel), I doubt too many hobbyists are coming.

I'll be posting here my impressions from the keynotes and from the sessions I attend.  There is a rumor that a new tablet with Win8 is going to be distributed to attendees - we'll see.  Microsoft gave an Acer tablet to all attendees at the last PDC I attended in 2009.  Looking forward to getting my hands on a beta of Win8 and especially comparing it to my current favorite tech device, the iPad2 I bought in June.  There are some promising indicators - a friend sent me a link to this video showing an absolutely amazing bootup time for Win8.  We'll see...

 

Author: "Mike J Kelly" Tags: "build, windows 8"
Date: Tuesday, 08 Feb 2011 00:56

For the auction at my kid’s school, I volunteered to set up a simple web site to be able to review items available; there is no online bidding at present.  I decided to use the BidNow Azure sample as a base for this, figuring it will also give me a chance to play around a bit with Identity Foundation to enable creating an account on the auction site.

I downloaded the BidNow sample and followed the directions under Getting Started on the home page to install the various required components – this required me to update my Azure tools to the release from the October 2010 PDC (it’s been a while since I’ve played with Azure) and the AppFabric SDK as well as the Identity Foundation SDK and runtime.

There is a set of automated scripts the developers of BidNow have provided to configure things, which is good and bad.  It’s good in that the configuration is somewhat complex and the PowerShell scripts take you through setting it up by asking a set of questions, TurboTax-style.  It’s bad in that error handling in the scripts isn’t very robust, so a number of times I encountered stack traces and exception messages from PowerShell which quickly disappeared from my console as the scripts blithely continued running, clearing the screen of previous errors before I could even read them.  It’s too bad that PowerShell doesn’t have an easy way of comprehensively logging everything that happened so I could go back, review which steps had failed, and try to do those manually.  For instance, here is one error I managed to capture:

Error Setting up - 1-28-11

It’s unclear whether the error is expected since a Yahoo provider didn’t exist, or whether the error occurred trying to set up the ACS.  It’s also unclear whether the scripts are re-runnable or not, and afraid of further screwing up my configuration, I didn’t try that.

Instead, I decided to work out whatever steps the scripts had missed and just do those manually.  So I built the sample, fired it up and got a home screen – so not all is bad; it could at least connect to the database under the local dev fabric and display items.  But when I clicked “log in” the problems started.

The code apparently uses AppFabric Labs identity providers to allow for sign-in across a number of services – Windows Live, Google, Facebook, etc.  That’s great and one reason I picked it.  However, it does this by using some JavaScript to retrieve the identity providers as JSON and then populate the login screen – so debugging why that isn’t working isn’t too straightforward.  The symptom is that I just see a “Please Wait” screen when I click login; this is the default HTML for that div on the page, which is supposed to be replaced by a JavaScript callback when the JSON for the identity providers is delivered.  So apparently the callback is never invoked to repopulate the content of that div.

I figured I’d just try the URL the page issues to retrieve the AppFabric Labs identity providers and see what I got.  The URL is something like: (spacing added for readability):

https://bidnow-sample.accesscontrol.appfabriclabs.com:443/v2/metadata/IdentityProviders.js?
                 protocol=wsfederation&realm=http%3a%2f%2flocalhost%3a8080%2f&
                 reply_to=http%3a%2f%2flocalhost%3a8080%2fLogOn.aspx&version=1.0

It seemed wrong to me that it was referring to the bidnow-sample namespace within AppFabric labs, not the namespace I had created, so I figured that was probably one of the things that didn’t get updated by the scripts.  However, plugging this URL into a web browser, I get this:

[Screenshot: error response from ACS complaining about the realm]

Hmm, so it’s not complaining about an invalid namespace, as I’d expect; it’s complaining about the realm (whatever the hell that is), which appears to point to the Azure local dev fabric.  Looking a bit into what a “realm” is, I find that this is the URI that the issued token from ACS will be valid for.  Searching a bit on the web, I found this page which explains how to set up ACS with Windows Azure.  It explains that when registering the “relying party application” with ACS, you have to specify the URI – I didn’t do that when I set up my AppFabric Labs info.  (This page also has more in-depth information about “relying party applications” and the realm.)  One of the challenging things about learning any new technology like this is that there are a bunch of new terms you have to learn first; a good resource here is the December 2010 MSDN article “reintroducing” :) ACS – I guess you have to write an article re-introducing something when the first documentation just led to ho-hums and head scratching.  But that article actually guides you through the necessary steps of configuring the ACS namespace and realm on the AppFabric Labs web site reasonably well.

Now the MSDN article is concerned with a generic ASP.Net web site, not the Azure BidNow sample, so there’s a bit of reading between the lines here.  After getting you to configure ACS, the MSDN article has you add an STS Reference to your web site.  BidNow already has this configured, so you just have to change the configuration of this.  Open the web.config file in the BidNow.Web project and change the places where it refers to bidnow-sample.accesscontrol… to <namespace>.accesscontrol… where <namespace> is the name of your ACS namespace, created as the MSDN article describes.

You can do this manually, or you can just follow the steps in the article by clicking “Add STS Reference…” to the BidNow.Web project.  The directions in the article are a bit out of date – it seems that some of the UI has changed – but basically, you have to click through, paste in the information from the ACS web site as described (to the “WS-Federation Metadata” URL).  You also have to tell the wizard to generate a certificate for your application – the default of “no encryption” didn’t work for me.

This got me past the first step – when I build and run the app and go to the BidNow home page and click “Login”, I now get a list of providers:

[Screenshot: login page showing the list of identity providers]

So far, so good.  Clicking on Google takes me to the Google federated login and I am able to login using my GMail account – I am then prompted to share my GMail email address with the ACS – this is as expected and is covered in the article.  However, when I click Approve here, I get this error:

[Screenshot: error page from ACS]

Hmm…  Searching for this error on the web, I find a helpful explanation on acs.codeplex.com:

The rule group(s) associated with the chosen relying party has no rules that are applicable to the claims generated by your identity provider. Configure some rules in a rule group associated with your relying party, or generate passthrough rules using the rule group editor.

OK, so where in the article it told me “just use default rules”, I guess it wasn’t quite right.  Actually, it was – but I missed a step in my haste.  I forgot to click the “Generate Rules” for the default rules group for the providers selected on the ACS portal:

[Screenshot: ACS portal rule group page with the “Generate Rules” button]

Now you would think that default rules would be, oh, I don’t know, defaulted but apparently not unless you click the button to generate them.  Yup, I’m understanding more and more why they had to write an article re-introducing this service.

After generating these, going back to my dev fabric hosted BidNow, I am able to login!

Author: "MikeKelly"
Date: Sunday, 09 Jan 2011 19:52

For a client engagement, I was provided VMWare images.  I don’t have VMWare, but have a server running Windows Server 2008 R2 with Hyper-V.  So I needed a conversion from the VMWare image to a Hyper-V image.

As is often the case, I figured I must not be the first person to want to do this, so there’s probably info on the web about it.   Back when I worked as a dev on Office, we had a strategy of being a universal receiver – no matter what format you had, you could open it in Office.  This was in the days when WordPerfect 4.1 and Lotus were the market leaders, so it makes sense to make it super easy for people to move from what they have to your format.  I figured the same must be true for Hyper-V, the kind of underdog in the VM space.

Sure enough, I found a good blog post from John Robbins describing how he did this.  John is doing a more complex migration than I need – he’s moving his whole environment, including an Active Directory domain controller, from VMWare to Hyper-V.  I just have a couple of virtual disk images I need to be able to run under Hyper-V.  But John’s post links to just what I needed - a tool which does a sector-by-sector conversion from the VMWare .VMDK format for virtual hard disks to Hyper-V’s VHD format.

But that gives me a virtual hard drive with the image – it doesn’t give me a virtual machine.  Here are the steps to convert that from the VMWare virtual machine information provided:

  1. Download the VMDK to VHD Converter from VMToolkit.
  2. Use it to convert the VMWare VMDK (virtual disk image) to a Hyper-V VHD (virtual hard disk).  This creates a new file that is a sector-by-sector copy of the original virtual hard disk.
  3. Start Hyper-V Manager and click on your server name in the tree control on the left.
  4. Click New / Virtual Machine… and name it and configure memory/networking.
  5. When you get to step 4 (Connect Virtual Hard Disk), click the second option “Use an existing virtual hard disk” and point it at the VHD you created from the VMDK.
  6. Start the Virtual Machine.  If the virtual hardware configuration is significantly different from the VMWare image you received, Windows may need to configure hardware and restart the VM – this happens automatically.

That’s about it – pretty easy migration from VMWare to Hyper-V!

Author: "MikeKelly"
Date: Friday, 19 Nov 2010 19:32

I have a large spreadsheet I’ve gotten from a colleague.  To help categorize it for a pivot table, I added a column which is “group”.  I then added a table in a different sheet that mapped people to groups – and used the VLOOKUP function in the “Group” column to lookup the group from the mapping table sheet. 

I then went to pivot the data by clicking “Summarize with Pivot Table” in the “Table Tools” ribbon section, but the pivot table field list doesn’t contain my group column.

At first I thought that maybe for some reason calculated fields wouldn’t be included in the pivot table – but this made no sense, and there are other calculated fields in my source data.

After poking around a bit on the web, I found this post which describes the problem and how to fix it.  Basically, Excel has a “pivot table” cache which needs to get refreshed.  Since other pivot tables had been created in the workbook based on my source data by the person who gave it to me, Excel “knew” what the source data looked like – and in its view, it didn’t have a “Group” column.  Simply going to the pivot table sheet, selecting the “Pivot Table Options” tab and clicking “Refresh” made the “Group” column appear in my pivot table field list.

Author: "MikeKelly"
Date: Thursday, 18 Nov 2010 00:58

I’ve just been listening to Ed Norton interviewing Bruce Springsteen at the Toronto Film Festival earlier this fall at the premiere of the movie “The Promise: The Making of ‘Darkness on the Edge of Town’”.  One of the comments Springsteen made in the interview reminded me of a point that Fred Brooks made at the Construx Software Executive Summit last week (which was a great event).

Springsteen put it this way:

And I said man, there’s other guys that play guitar well. There's other guys that really front well. There’s other rocking bands out there. But the writing and the imagining of a world, that's a particular thing, you know, that's a single fingerprint. All the filmmakers we love, all the writers we love, all the songwriters we love, they have they put their fingerprint on your imagination and then on - in your heart and on your soul. That was something that I'd felt, you know, felt touched by. And I said well, I want to do that.

Fred Brooks (author of “The Design of Design”, and of course, famously, “The Mythical Man-Month”) put it a bit differently:

“Great design does not come from great processes; it comes from great designers.    Choose a chief designer separate from the manager and give him authority over the design and your trust.”

Both are really saying the same thing – great, consistent, beautiful designs always come from a single mind expressing himself or herself.

I had lunch with a friend who now works at Facebook and he told me he’s working alone on his project – if he gets to the point where he needs others, he will enlist them, but so far he hasn’t.  He said this is how most things happen at Facebook and the processes would make an experienced software manager like me aghast.  No specs.  No schedules.  No testing beyond developer-driven testing to a percentage of users on the live site.  The ability for any developer to deploy to the live site.  But it’s hard to argue with the results.  This may be how to scale out – have a consistent design (Zuckerberg still oversees the overall design of the site, according to David Kirkpatrick, author of the Facebook Effect) but then allow for autonomy within that design.

Author: "MikeKelly"
Date: Friday, 01 Oct 2010 17:50

For a consulting project, I recently had to join my laptop to an Active Directory domain at a client's workplace.  Suddenly, my home computers can no longer see the laptop.  I found that the laptop could still see shared items on my home network, though.

I went to the "Network and Sharing Center" to see if there was some setting I had to tweak and found this:

Hmmm... so I'm kinda/sorta still part of a homegroup, it sounds like.  Searching for the highlighted message, I found this Windows Online help topic that explains it:

http://windows.microsoft.com/en-US/windows7/Switching-between-your-home-and-workplace-networks

So it turns out that yes, you can see other computers from the domain-joined laptop, but the other computers on the homegroup can't see the domain-joined laptop any longer.  I suspect this is a security thing.

 

Author: "MikeKelly" Tags: "windows, homegroup"
Date: Friday, 17 Sep 2010 21:13

I noticed a while ago that my Outlook 2010 – connected over IMAP to my company email and via the Outlook Connector for MSN to my personal Hotmail account – was always in the process of trying to send messages, even when I hadn’t written a message.  This started causing more problems as things would sit in my Outbox rather than going out, so I decided yesterday to get to the bottom of it.

I found this post which explained that it could be an old read receipt.  I downloaded the tool referenced in there and ran it but didn’t find any read receipts – I did, however, find two old email messages sitting in the Outbox of a Personal Folders file (not the default PST as the post mentions – that one was empty, but there was another one that had messages sitting in it).  Deleting those as described in the post (admittedly a rather geeky process that involves using this tool to call specific MAPI APIs, but the blog post outlines how to do it well) fixed the problem.

There is a $50 tool called OutlookSpy (with a  free 30-day trial) that was also mentioned and might help some folks who find the tool I used a bit too geeky.

Author: "MikeKelly" Tags: "outlook"
Date: Tuesday, 03 Aug 2010 21:45

I’m doing some performance testing on Azure tracing to figure out the overhead of trace statements in code, especially when the tracing level is such that the statement does nothing – but clearly it has to do something to figure out to do nothing.  So I have a simple Worker Role that calculates the 10,000th number in the Fibonacci sequence.  I run this with tracing every 100th time through the loop (but the trace level set such that the trace statement won’t emit the output) and with no tracing code. 

To gather the data, I use a timer which I put into a simple value class:

    [Serializable]
    public struct TimerInstance
    {
        public TimerInstance(string str, Int32 nTicks)
        {
            _name = str;
            _ticks = nTicks;
        }

        private readonly string _name;
        private readonly Int32 _ticks;

        public string Name { get { return _name == null ? String.Empty : _name; } }
        public Int32 Ticks { get { return _ticks; } }

        public override string ToString()
        {
            return string.Format("{0,20}: {1,10:N}", Name, Ticks);
        }
    }

Then I just have a list of these which I add to as I run each test:

    static private List<TimerInstance> _Timers = new List<TimerInstance>();
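(For context, here is a minimal sketch – not in the original code – of how a run gets timed and recorded into that list; RunFibonacciTest is a placeholder for the actual Fibonacci workload.)

    using System.Diagnostics;

    // Time one test iteration with Stopwatch and record it, assuming the
    // _Timers list and TimerInstance struct shown above.
    private static void RecordTimedRun(string timerName)
    {
        Stopwatch sw = Stopwatch.StartNew();
        RunFibonacciTest();                                    // hypothetical workload
        sw.Stop();
        _Timers.Add(new TimerInstance(timerName, (int)sw.ElapsedTicks));
    }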

At the end of a batch of tests, I want to persist out all the timer values into XML. I decided it was easiest to just store those in Azure blob storage. I can then import these into Excel where I can do some analysis to figure out the average overhead, minimum, etc.

This seemed super easy, but I found that serializing to XML is not as straightforward as I thought it would be.

First, I added the [Serializable] attribute to the struct (as shown above).  But what I ended up with was something like this:

<?xml version="1.0"?>

<ArrayOfTimerInstance xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <TimerInstance />
    <TimerInstance />
    <TimerInstance />
    <TimerInstance />
</ArrayOfTimerInstance>

I couldn’t figure out why the public properties (Name and Ticks) of the items in the list weren’t getting serialized.

After poking around a bit, I realized two things:

  • The Serialize attribute has nothing to do, really, with XML Serialization – it marks the object as binary serializable which I wasn’t interested in for this.
  • There is an interface, IXmlSerializable, you can implement on a struct or a class to control how XML serialization happens.

So I modified my struct as follows:

    public struct TimerInstance : IXmlSerializable
    {
        public TimerInstance(string str, Int32 nTicks)
        {
            _name = str;
            _ticks = nTicks;
        }

        private readonly string _name;
        private readonly Int32 _ticks;

        public string Name { get { return _name == null ? String.Empty : _name; } }
        public Int32 Ticks { get { return _ticks; } }

        public override string ToString()
        {
            return string.Format("{0,20}: {1,10:N}", Name, Ticks);
        }

        #region IXMLSerializable Members

        public System.Xml.Schema.XmlSchema GetSchema()
        {
            throw new ApplicationException("This method is not implemented");
        }

        public void ReadXml(System.Xml.XmlReader reader)
        {
            throw new ApplicationException("XML deserialization of TimerInstance not supported");
        }

        public void WriteXml(System.Xml.XmlWriter writer)
        {
            writer.WriteStartAttribute("Name");
            writer.WriteString(Name);
            writer.WriteEndAttribute();

            writer.WriteStartAttribute("Ticks");
            writer.WriteString(Ticks.ToString());
            writer.WriteEndAttribute();
        }

        #endregion
    }

and used this code to serialize it into a MemoryStream:

        // Return items that should be persisted.  By convention, we are eliminating the "outlier"
        // values which I've defined as the top and bottom 5% of timer values.
        private static IEnumerable<TimerInstance> ItemsToPersist()
        {
            // Eliminate top and bottom 5% of timers from the enumeration.  Figure out how many items
            // to skip on both ends.  Assuming more than 10 timers in the list, otherwise just take
            // all of them
            int nTimers = (_Timers.Count > 10 ? _Timers.Count : 0);
            int iFivePercentOfTimers = nTimers / 20;
            int iNinetyPercentOfTimers = _Timers.Count - iFivePercentOfTimers * 2;

            return (from x in _Timers
                    orderby x.Ticks descending
                    select x).Skip(iFivePercentOfTimers).Take(iNinetyPercentOfTimers);
        }

        // Serialize the timer list as XML to a stream - for storing in an Azure Blob
        public static void SerializeTimersToStream(Stream s)
        {
            if (_Timers.Count > 0)
            {
                System.Xml.Serialization.XmlSerializer x = new System.Xml.Serialization.XmlSerializer(typeof(List<TimerInstance>));
                x.Serialize(s, ItemsToPersist().ToList());
             }
         }

Notice that I have a LINQ query which sorts the list and eliminates the outliers - which I define as the bottom and top 5% of timer values. Probably something weird was going on when these happened, so I don't want the outliers to skew the mean of my overhead calculation.

Once I have this, it's a pretty straightforward thing to store the memory stream in an Azure Blob using UploadFromStream:

        // Serialize the timers into XML and store in a blob in Azure
        private void SerializeTimersToBlob(string strTimerGroupName)
        {
            // add the binary to blob storage
            try
            {
                // Create container if it doesn't exist.  Make it publicly accessible.
                if (_blobContainer == null)
                {
                    CloudBlobClient storage = _storageAccount.CreateCloudBlobClient();
                    storage.GetContainerReference("timers").CreateIfNotExist();
                    if ((_blobContainer = storage.GetContainerReference("timers")) == null)
                        throw new ApplicationException("Failed to create Azure Blob Container for timers");
                    _blobContainer.SetPermissions(
                                      new BlobContainerPermissions()
                                                 { PublicAccess = BlobContainerPublicAccessType.Blob });
                }

                // Remove invalid characters from timer group name and add XML extension to create Blob name.
                string  strBlobName = strTimerGroupName.Replace(':', '.') + ".xml";
                var blob = _blobContainer.GetBlobReference(strBlobName);
                MemoryStream ms = new MemoryStream();

                // Serialize all the recorded timers in XML into the stream
                TraceTimer.SerializeTimersToStream(ms);

                // Seek back to the beginning of the stream.
                ms.Seek(0, SeekOrigin.Begin);

                // Save the stream as the content of the Azure blob
                blob.Properties.ContentType = "text/xml";
                blob.UploadFromStream(ms);
                Diagnostics.WriteDiagnosticInfo(Diagnostics.ConfigTrace,
                                                TraceEventType.Information,
                                                WorkerDiagnostics.TraceEventID.traceFlow, "Added " + strBlobName);
            }
            catch (Exception exp)
            {
                Diagnostics.WriteExceptionInfo(Diagnostics.ConfigTrace,
                                               "Saving Timer Information to Azure Blob", exp);
            }
        }

(Diagnostics is my own class to provide TraceSource-based logging to Azure). The only additional trick I needed here was to Seek on the MemoryStream to the beginning - I was getting empty blobs before I did that.

The final result:

<?xml version="1.0"?>
<ArrayOfTimerInstance xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <TimerInstance Name="Fibonacci Generation (Trace) 1" Ticks="468294" />
  <TimerInstance Name="Fibonacci Generation (Trace) 9" Ticks="410877" />
  <TimerInstance Name="Fibonacci Generation (Trace) 3" Ticks="402439" />
  <TimerInstance Name="Fibonacci Generation (Trace) 5" Ticks="388574" />
  <TimerInstance Name="Fibonacci Generation (Trace) 8" Ticks="385690" />
  <TimerInstance Name="Fibonacci Generation (Trace) 2" Ticks="385261" />
  <TimerInstance Name="Fibonacci Generation (Trace) 10" Ticks="376425" />
  <TimerInstance Name="Fibonacci Generation (Trace) 6" Ticks="345973" />
  <TimerInstance Name="Fibonacci Generation (Trace) 7" Ticks="339461" />
  <TimerInstance Name="Fibonacci Generation (Trace) 4" Ticks="331053" />
</ArrayOfTimerInstance>

 

Author: "MikeKelly"
Date: Friday, 16 Jul 2010 22:02

There’s an interesting article about performance of server apps in the July 2010 Communications of the ACM somewhat provocatively titled “You’re Doing It Wrong”.  In it, Poul-Henning Kamp, the architect of an HTTP cache called Varnish, describes the “ah ha!” moment (on a night train to Amsterdam, no less) where he realized that traditional data structures “ignore the fact that memory is virtual”.

The problem is that very large address spaces encourage very large “in-memory” data structures (“The user paid for 64 bits of address space,” Kamp writes, “and I am not afraid to use it”) but those structures are not really in (physical) memory.  Depending on the virtual memory “pressure” (i.e. how many logical VM pages are actually on disk) memory accesses can vary by a factor of ten (if a page has to be retrieved from disk, even a very fast disk obviously cannot match the performance of RAM).  So algorithms that pretend that all memory accesses are equal can be quite inefficient compared to the algorithm for the Varnish cache that Kamp describes which "actively exploits” the fact that it is running in a VM environment.

There is a bigger point here, which is that all the layers of abstraction we’ve built (virtual machines, IL, class libraries, etc.) can be quite seductive. I made an entire class serializable into XML yesterday by adding one line of code (“[Serializable]” to the class declaration).  But part of me thought, hmmm, I wonder what the compiler actually generates now to do that?  It’s important that especially in hotspots we are aware of what the cost of abstraction is and actively look for ways to reduce it. 

Many of the comments about this article on the ACM site can be summed up as “ho hum – you think you’ve discovered something new?”  But others are more in agreement with my perspective: I have been a practicing computer scientist (i.e. I write and design computer software for a living) for thirty years and I have not seen something as clearly written as this article about how to write code to exploit a VM environment.  The fact that some can point to theoretical pieces that have been written just shows how removed from practice that theory is.  But it is also a sign of how much the CACM has changed that it is running articles like this that are interesting to practitioners.

Author: "MikeKelly" Tags: "development, Azure"
Date: Monday, 12 Jul 2010 23:04

To fix a problem with a corrupted Outlook profile, I created a new profile which caused all my Contacts to be initialized from my Windows Live account.  Pretty much everything came in OK, but one of the annoying things is that the “default mailing address” isn’t set right.  For whatever reason, Outlook doesn’t just do the logical thing here and say, well, the mailing address is home if there’s only a home address, work if there’s only a work address and if there’s both, I don’t know.  For 80+% of my contacts, I have only a work or home address but not both, and this would work great.  But instead, Outlook requires you to identify which address is the mailing address.  I had done this manually in my old contacts, but apparently this isn’t one of the Outlook Contact fields that is saved up to Windows Live – so it existed only in the local Outlook file.  When I blew that away and created a new profile, all that was lost.

Why does this matter?  When you do a mail merge, the mailing address is what’s used.  Plus I have a favorite view that shows name, mailing address, email, home and work phone in a list.  It’s a handy way to look things up quickly.  For all my contacts, mailing address is blank.

After living with this for a bit, I pulled out my old “VBA for Microsoft Office 2000 Unleashed” (yup, it’s been a while…) and wrote a little Outlook macro to do the right thing here:

Sub FixUpContacts()

On Error GoTo ExitFunc

Set olns = Application.GetNamespace("MAPI")
Set MyFolder = olns.GetDefaultFolder(olFolderContacts)
' Set MyItems to the collection of items in the folder.
Set MyItems = MyFolder.Items
For Each SpecificItem In MyItems
    Dim itemChanged As Boolean
    itemChanged = False
    
    If SpecificItem.SelectedMailingAddress = olNone Then
        If SpecificItem.HomeAddress <> "" And SpecificItem.BusinessAddress = "" Then
            SpecificItem.SelectedMailingAddress = olHome
            itemChanged = True
        ElseIf SpecificItem.HomeAddress = "" And SpecificItem.BusinessAddress <> "" Then
            SpecificItem.SelectedMailingAddress = olBusiness
            itemChanged = True
        End If
        If itemChanged = True Then
            SpecificItem.Save
        End If
    End If
Next

ExitFunc:

End Sub
Author: "MikeKelly" Tags: "development, outlook"
Date: Wednesday, 30 Jun 2010 22:06

I use the Outlook Connector for MSN to access my MSN mail, calendar and contacts in Outlook 2010.  I also have a separate SMTP/POP account for my business which I read in Outlook.  It’s nice to have all the email in one place so I don’t miss things.  Windows Live Mail is a good program and lets you do something similar, but it segregates the accounts so you have to remember to read two different Inboxes which I don’t like as much.

However, for a while I had been getting intermittent errors from Outlook that it couldn’t sync with my MSN mail, and the error code is the always oh-so-helpful 80004005 (which, if you track it down, has the symbolic code E_FAIL and means in English “tough luck – something went wrong; can’t say anything more”).  Fortunately, Connector reports some sort of secondary error code, and that was usually 4350, although I’ve also seen 4401.

Outlook 2010 Sync Error 2


Outlook 2010 Sync Error

The advice in the error message – “please verify your account is configured correctly by first accessing your mail on the web” – probably applies in some number of cases, but not in mine.  I was of course able to read mail through the web interface to MSN/Hotmail just fine.  What’s weirder is that this error didn’t seem to actually cause any badness – i.e. MSN mail seemed to still come in and go out just fine.  That made it possible to do the irresponsible thing and ignore this error for longer than I should have – one of the screen shots above was taken a month ago, and I never bothered to make the time to deal with it then.

Today, I finally contacted Microsoft support.  It’s a bit of a winding road to get there, but it was a free online chat and I have to say the guy I talked with (Ashutosh) knew what he was doing.  His approach (which makes total sense in retrospect) was to do the software equivalent of a reinstall – he created a new Outlook profile, connected it to my MSN Hotmail account, and booted Outlook with that profile.  He also advised that my approach of having the SMTP account deliver mail to the same Inbox as the MAPI MSN account was not a good idea – apparently these protocols don’t play well together when they’re in the same Inbox folder, especially when there are a lot of mail items – is 27,450 a lot? :)  There is some advice on the web that some Outlook Connector errors can be caused by corrupted items in the Safe Senders or Blocked Senders list.  The advice there is to use the web interface to Hotmail to delete all of those, but that seems overkill to me – instead, I logged on to Hotmail using the web interface and just reviewed the list, clearing out entries I no longer needed and deleting a couple of really long email addresses on the hunch that those could be causing a problem.

So for anyone else who runs into this,

a. You can contact support at http://support.microsoft.com – for me at least it was no charge and pretty quick.

b. You can try to create a new profile yourself - http://support.microsoft.com/kb/829918 describes how.  Don’t delete the old profile until all the mail items, calendar, etc. are visible when you boot Outlook with the new profile.  Then you can use the Mail control panel to delete the old profile and remove the associated local storage since it’s now duplicated in the new profile.  Be aware that downloading a lot of Hotmail items can take a very long time – hours.

c. If you add a second account to the profile (e.g. an SMTP account) have it deliver to somewhere other than the same Inbox as the MSN/Hotmail account (e.g. “Work Inbox”).  You can then create an Outlook Search Folder to consolidate mail from the two separate delivery folders.  By isolating the SMTP inbox from the MAPI inbox that MSN/Hotmail is using, you apparently work around a problem that can lead to this problem.

Good luck!

 

UPDATE – June 30, 2010

After doing this and having it work for a few days, I tried to sync with my Windows Mobile phone (a Samsung Blackjack II).  I found that Calendar and Contact items weren’t syncing – no errors, but updates on the phone weren’t reflected in Outlook and updates in Outlook weren’t showing up on the phone.  After contacting MS support again, they suggested ending the “partnership” in Windows Mobile Device Center and re-establishing it – apparently there is some link to the Outlook profile in there.  I did that, and synced again – and now all my Contacts and Calendar items were gone from the phone! 

After contacting MS support again and this time talking with Dinker, we figured it out.  When creating a new profile, Outlook has a setting that (in Outlook 2010) you access through File / Account Settings:

Outlook 2010 Account Settings - 1


There you’ll find a tab for “Data Files”.  Windows Mobile Device Center syncs items in the default data file; below I have the dialog as it appears after I set the MSN data file as the default one, not the Outlook Data File:

Outlook 2010 Account Settings - 2


Before I did this, “Outlook Data File” was set as the default data file so WMDC was syncing calendar and contact items from there.  Note that this shows up in Outlook as separate calendars (“My Calendar” is the one syncing to Windows Live Calendar):

Outlook 2010 Calendars

and as separate Contacts lists as well (obviously, “Contacts – mikekelly@msn.com” is the one synced with Windows Live):

Outlook 2010 Contacts


Changing the default caused the WMDC to correctly sync.

Author: "MikeKelly" Tags: "outlook"
Date: Tuesday, 29 Jun 2010 21:01

I’ve come across a good new blog, Coding Horror, written by Jeff Atwood, one of the founders of one of my favorite coding sites, Stack Overflow.  In reading through some of the older posts, I came across one that is close to my heart which is on working remotely, or what we at Microsoft called “distributed development” (since we didn’t like the assumption that one side is “remote” and the other “central”).  During my last few years at Microsoft I worked on a team that had members in different geographies, and also did a bunch of work on collecting best practices for Microsoft development teams new to the practice.

At Microsoft, I used to say, “Redmond is Rome” – in other words, the heart and soul of the company is still centralized in Redmond, Washington.  Since Microsoft is obviously a global company, it has always had locations all around the world – but most of the offices are sales, marketing and consulting, not core product development.  That’s starting to change, though, with the rise of large development centers in Hyderabad, India; Beijing and Shanghai, China; Haifa, Israel; Dublin, Ireland; and here in the United States in Silicon Valley, Boston and North Carolina; there also is a center in Vancouver, British Columbia.  Due to acquisitions, there are also a number of smaller sites doing product development all around the world, including in Portugal, France, Singapore, Germany, Norway, and Switzerland.

All of this places the Redmond-based teams in the midst of the distributed development challenge – how do you effectively manage software projects when you can’t gather all the people in a room for a quick meeting?  Software, far more than other engineering disciplines, has long relied on informal communication methods (like hallway encounters) to solve problems, generate new ideas and communicate quick status.

Jeff’s post offers a number of good solutions, as does my former colleague, Eric Brechner.  While I was part of Engineering Excellence at Microsoft (a team focused on improving the internal engineering tools, processes and people), I collected best practices and case studies on an internal site we called “http://distdev”.  In this way, we helped teams new to distributed development learn the tips and tricks from their colleagues, as well as avoid common mistakes (e.g. always scheduling meetings at times convenient for the Redmond participants, and not bothering to add a dial-in number or Live Meeting ID to allow people not in Redmond to attend).  Sometimes just moving a 4 PM meeting to 10 AM makes a huge difference for the remote participants – while not really inconveniencing the Redmond participants at all.  Another trick I’ve seen used where the time zone differences are particularly challenging is alternating who is inconvenienced – so during daylight-savings time in the U.S. we arrange meetings at times that are convenient for U.S. participants and inconvenient for our colleagues in India; during standard time in the U.S., Redmond participants would be inconvenienced as meetings would be held during normal working hours in Hyderabad.  This also has the advantage of getting both sides (not just one) to experience the pain.

Ultimately, though, the goal isn’t pain but gain.  Jeff’s and Eric’s posts referenced above have a number of good suggestions on how to achieve that.  These come down to just a few basic rules:

  • Communicate, communicate, communicate.  Use all the tools available to you – IM, video chat, Skype, email, etc.  Remember that it is much easier for things to go off track for a longer time when people don’t informally run into each other all the time and have the opportunity to say, “You’re doing what??? I’m doing that!”  or “You’re doing what??  We cut that last week!”
  • Virtual tools are no replacement for real relationships.  A friend once wrote a book on software development with a memorable rule: “Don’t flip the Bozo bit”, by which he meant, don’t assume your colleagues are fools.  It’s easier to do that when you’ve never met the colleagues face-to-face.  Travel has to be part of any successful distributed development effort and it should be both ways – so both sides develop the informal relationships that grease the wheels of getting things done and act as a brake on assumptions that the other person “doesn’t get it”.
  • Follow the basic rules of distributed systems.   In technical solutions, we’ve learned that those things that need high-bandwidth communications should be physically adjacent (ideally in the same box), while those that can tolerate more latency can be further distributed physically.  The same rules apply to development teams.  So for instance it’s less ideal if the PM/designer and developer for a feature are physically remote – it can of course work, and there are many examples of it working, but it’s better if you have in each location a self-contained team of PM, dev and test (the typical Microsoft product development triad) working on a feature, while in another location you have a similar team working on a different feature.  Of course there is still communications required between these teams – but the higher-bandwidth communication is contained within one physical location.
Author: "MikeKelly"
Date: Tuesday, 22 Jun 2010 22:28

Windows makes available a wide variety of performance counters which of course are available to your Azure roles and can be accessed using the Azure Diagnostics APIs as I described in my recent MSDN article on Windows Azure Diagnostics.

However, it can be useful to create custom performance counters for things specific to your role workload.  For instance, you might count the number of images or orders processed, or total types of different data entities stored in blob storage, etc.

I wasn’t actually sure that I would have permissions to create custom performance counters, and it turns out that right now you do not – all the code described below works in the development fabric (where you are running with rights not available to you in the Cloud).  Note as well that the symptom when deploying to the Cloud with this code in place is that the role will continually be “Initializing” – it never starts.

Custom Performance Counters aren’t yet supported in the Azure Cloud fabric due to restricted permissions on Cloud-based Azure roles (see this thread for details) so this post describes how it will work when it is supported.

Create the Custom Performance Counters

  1. You create your counters within a custom category.  You cannot add counters to an existing category, so you have to create a new performance counter category using PerformanceCounterCategory.Create in System.Diagnostics. The code below shows how to create a few different types of counters. Note that there are different types of counters: you can count occurrences of something (PerformanceCounterType.NumberOfItems32); you can count the rate of something occurring (PerformanceCounterType.RateOfCountsPerSecond32); or you can count the average time to perform an operation (PerformanceCounterType.AverageTimer32). This last one requires a "base counter" which provides the denominator used to calculate the average.
                // Create a new category of perf counters for monitoring performance in the worker role.
                try
                {
                    if (!PerformanceCounterCategory.Exists("WorkerThreadCategory"))
                    {
                        CounterCreationDataCollection counters = new CounterCreationDataCollection();
    
                        // 1. counter for counting totals: PerformanceCounterType.NumberOfItems32
                        CounterCreationData totalOps = new CounterCreationData();
                        totalOps.CounterName = "# worker thread operations executed";
                        totalOps.CounterHelp = "Total number of worker thread operations executed";
                        totalOps.CounterType = PerformanceCounterType.NumberOfItems32;
                        counters.Add(totalOps);
    
                        // 2. counter for counting operations per second:
                        //        PerformanceCounterType.RateOfCountsPerSecond32
                        CounterCreationData opsPerSecond = new CounterCreationData();
                        opsPerSecond.CounterName = "# worker thread operations per sec";
                        opsPerSecond.CounterHelp = "Number of operations executed per second";
                        opsPerSecond.CounterType = PerformanceCounterType.RateOfCountsPerSecond32;
                        counters.Add(opsPerSecond);
    
                        // 3. counter for counting average time per operation:
                        //                 PerformanceCounterType.AverageTimer32
                        CounterCreationData avgDuration = new CounterCreationData();
                        avgDuration.CounterName = "average time per worker thread operation";
                        avgDuration.CounterHelp = "Average duration per operation execution";
                        avgDuration.CounterType = PerformanceCounterType.AverageTimer32;
                        counters.Add(avgDuration);
    
                        // 4. base counter for counting average time
                        //         per operation: PerformanceCounterType.AverageBase
                        // NOTE: BASE counter MUST come after the counter for which it is the base!
                        CounterCreationData avgDurationBase = new CounterCreationData();
                        avgDurationBase.CounterName = "average time per worker thread operation base";
                        avgDurationBase.CounterHelp = "Average duration per operation execution base";
                        avgDurationBase.CounterType = PerformanceCounterType.AverageBase;
                        counters.Add(avgDurationBase);
    
    
                        // create new category with the counters above
                        PerformanceCounterCategory.Create("WorkerThreadCategory",
                                "Counters related to Azure Worker Thread", PerformanceCounterCategoryType.MultiInstance, counters);
                    }
                }
                catch (Exception exp)
                {
                    Trace.TraceError("Exception creating performance counters " + exp.ToString());
                }
  2. When you create a performance counter category, you need to specify whether it is single-instance or multi-instance. MSDN helpfully explains that you should choose single-instance if you want a single instance of this category, and multi-instance if you want multiple instances. :) However, I found a blog entry from the WMI team that actually explains the difference - it is whether there is a single-instance of the counter on the machine (for Azure, virtual machine) or multiple instances; for example, anything per-process or per-thread is multi-instance since there is more than one on a single machine. As you can see, since I expect multiple worker thread role instances, I made my counters multi-instance.
  3. After creating the counters, you need to access them in your code. I created private members of my worker thread role instance:
        public class WorkerRole : RoleEntryPoint
        {
            ...

            // Performance Counters
            PerformanceCounter _TotalOperations = null;
            PerformanceCounter _OperationsPerSecond = null;
            PerformanceCounter _AverageDuration = null;
            PerformanceCounter _AverageDurationBase = null;

            ...
  4. Then, after the code that creates the counter category if it doesn't exist, I create the members:
                try
                {
    
                    // create counters to work with
                    _TotalOperations = new PerformanceCounter();
                    _TotalOperations.CategoryName = "WorkerThreadCategory";
                    _TotalOperations.CounterName = "# worker thread operations executed";
                    _TotalOperations.MachineName = ".";
                    _TotalOperations.InstanceName = RoleEnvironment.CurrentRoleInstance.Id;
                    _TotalOperations.ReadOnly = false;
    
                    _OperationsPerSecond = new PerformanceCounter();
                    _OperationsPerSecond.CategoryName = "WorkerThreadCategory";
                    _OperationsPerSecond.CounterName = "# worker thread operations per sec";
                    _OperationsPerSecond.MachineName = ".";
                    _OperationsPerSecond.InstanceName = RoleEnvironment.CurrentRoleInstance.Id;
                    _OperationsPerSecond.ReadOnly = false;
    
                    _AverageDuration = new PerformanceCounter();
                    _AverageDuration.CategoryName = "WorkerThreadCategory";
                    _AverageDuration.CounterName = "average time per worker thread operation";
                    _AverageDuration.MachineName = ".";
                    _AverageDuration.InstanceName = RoleEnvironment.CurrentRoleInstance.Id;
                    _AverageDuration.ReadOnly = false;
    
                    _AverageDurationBase = new PerformanceCounter();
                    _AverageDurationBase.CategoryName = "WorkerThreadCategory";
                    _AverageDurationBase.CounterName = "average time per worker thread operation base";
                    _AverageDurationBase.MachineName = ".";
                    _AverageDurationBase.InstanceName = RoleEnvironment.CurrentRoleInstance.Id;
                    _AverageDurationBase.ReadOnly = false;
                }
                catch (Exception exp)
                {
                    Trace.TraceError("Exception creating performance counters " + exp.ToString());
                }
  5. Note that I use "." for the machine name, which just means the current machine, and I use CurrentRoleInstance.Id from the RoleEnvironment to distinguish the counter instance for each role instance. If you instead wanted to aggregate these across the role rather than per role instance, you could use the role name (RoleEnvironment.CurrentRoleInstance.Role.Name), which is the same for every instance of the role.
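In other words, the choice is just which string you assign to InstanceName; a minimal sketch:

        // Per-instance counters: each role instance writes to its own counter instance.
        _TotalOperations.InstanceName = RoleEnvironment.CurrentRoleInstance.Id;

        // Aggregated across the role: every instance writes to the same counter instance,
        // since the role name is identical for all of them.
        _TotalOperations.InstanceName = RoleEnvironment.CurrentRoleInstance.Role.Name;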

Using the Custom Performance Counters

Once you've created the performance counters, you'll want to update them in your code - for instance, in the part of your worker role that actually does the work, to track the time it takes to process a unit of work. Azure calls the Run method of your role to do the work, and this method should not return - instead it runs an infinite loop, waiting for work to do and then doing it. As a reminder, the private members created earlier map to these counters:

  • _TotalOperations - "# worker thread operations executed"
  • _OperationsPerSecond - "# worker thread operations per sec"
  • _AverageDuration - "average time per worker thread operation"
  • _AverageDurationBase - "average time per worker thread operation base"
Here's the code for Run that uses these; note that it uses the System.Diagnostics Stopwatch class, which provides access to a higher-accuracy timer than simply reading DateTime.Now.Ticks (see this blog post for more information).

        public override void Run()
        {
            while (true)
            {
                Stopwatch watch = new Stopwatch();
                // Test querying the perf counter.  Take a sample of the number of worker thread operations at this point.
                CounterSample operSample = _TotalOperations.NextSample();

                // Increment worker thread operations and operations / second.
                _TotalOperations.Increment();
                _OperationsPerSecond.Increment();

                // Start a stop watch on the worker thread work.
                watch.Start();

                // Do the work for this worker
                DoWork();

                // Capture the stop point.
                watch.Stop();

                // Figure out average duration based on the stop watch, then increment the base counter.
                _AverageDuration.IncrementBy(watch.ElapsedTicks);
                _AverageDurationBase.Increment();

                // Here's how you calculate the difference between a sample taken at the beginning and a sample at this point.
                // Note that since this is # of operations which is monotonically increasing, this is kind of a boring use of
                // a performance counter sample.
                Trace.WriteLine("Worker Thread operations performance counter: " + 
				CounterSample.Calculate(operSample, _TotalOperations.NextSample()).ToString());
            }
        }
This code also shows how to take two samples of a counter and calculate the difference - the counter I'm showing is rather uninteresting, but it illustrates the approach.
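A slightly more interesting use of CounterSample.Calculate is against the rate counter, where the two samples yield the operations per second over the sampled interval. Here's a minimal sketch using the members defined above (not code from the worker role itself):

        // Sample the rate counter, let some work happen, then sample again.
        CounterSample before = _OperationsPerSecond.NextSample();
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(10));
        CounterSample after = _OperationsPerSecond.NextSample();

        // For a RateOfCountsPerSecond32 counter, Calculate returns the average
        // number of operations per second between the two samples.
        float opsPerSec = CounterSample.Calculate(before, after);
        Trace.WriteLine("Operations/sec over the last 10 seconds: " + opsPerSec);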

Monitoring the Performance Counters

OK, so you've added code to update the performance counters, but now you want to be able to see the values while your Azure process is running. The Azure Diagnostics Monitor collects this information in a local cache, but by default it does not persist the data anywhere unless you ask it to. You can do this by adjusting the DiagnosticMonitorConfiguration with code like this (note this is from my role's OnStart method):

            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
            TimeSpan tsOneMinute = TimeSpan.FromMinutes(1);
            TimeSpan tsTenMinutes = TimeSpan.FromMinutes(10);

            // Set up a performance counter for CPU time
            PerformanceCounterConfiguration pccCPU = new PerformanceCounterConfiguration();
            pccCPU.CounterSpecifier = @"\Processor(_Total)\% Processor Time";
            pccCPU.SampleRate = TimeSpan.FromSeconds(5);

            dmc.PerformanceCounters.DataSources.Add(pccCPU);

            // Add the custom counters to the transfer as well.
            PerformanceCounterConfiguration pccCustom1 = new PerformanceCounterConfiguration();
            pccCustom1.CounterSpecifier = @"\WorkerThreadCategory(*)\# worker thread operations executed";
            pccCustom1.SampleRate = TimeSpan.FromSeconds(20);
            dmc.PerformanceCounters.DataSources.Add(pccCustom1);

            PerformanceCounterConfiguration pccCustom2 = new PerformanceCounterConfiguration();
            pccCustom2.CounterSpecifier = @"\WorkerThreadCategory(*)\# worker thread operations per sec";
            pccCustom2.SampleRate = TimeSpan.FromSeconds(20);
            dmc.PerformanceCounters.DataSources.Add(pccCustom2);

            PerformanceCounterConfiguration pccCustom3 = new PerformanceCounterConfiguration();
            pccCustom3.CounterSpecifier = @"\WorkerThreadCategory(*)\average time per worker thread operation";
            pccCustom3.SampleRate = TimeSpan.FromSeconds(20);
            dmc.PerformanceCounters.DataSources.Add(pccCustom3);

            // Transfer perf counters every 10 minutes. (NOTE: EVERY ONE MINUTE FOR TESTING)
            dmc.PerformanceCounters.ScheduledTransferPeriod = /* tsTenMinutes */ tsOneMinute;

            // Start up the diagnostic manager with the given configuration.
            try
            {
                DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);
            }
            catch (Exception exp)
            {
                Trace.TraceError(""DiagnosticsManager.Start threw an exception " + exp.ToString());
            }

This code also adds a standard system performance counter for % CPU usage.

Note that you can either pass the DiagnosticMonitorConfiguration to DiagnosticMonitor.Start or you can change the configuration after the DiagnosticMonitor has been started:

           try
           {
               var diagManager = new DeploymentDiagnosticManager(
                               RoleEnvironment.GetConfigurationSettingValue("DiagnosticsConnectionString"),
                               RoleEnvironment.DeploymentId);
               var roleInstDiagMgr = diagManager.GetRoleInstanceDiagnosticManager(
                               RoleEnvironment.CurrentRoleInstance.Role.Name,
                               RoleEnvironment.CurrentRoleInstance.Id);
               DiagnosticMonitorConfiguration dmc = roleInstDiagMgr.GetCurrentConfiguration();

               // ... Make changes as needed to dmc, e.g. adding PerformanceCounters as above ...

               roleInstDiagMgr.SetCurrentConfiguration(dmc);
           }
           catch (Exception exp)
           {
               Trace.TraceError("Exception updating diagnostic configuration " + exp.ToString());
           }
Once the performance counter data has been transferred, it goes into Azure table storage for the storage account you've specified (i.e. the account given by DiagnosticsConnectionString in ServiceConfiguration.cscfg). You can then either use a table storage browser to look at the WADPerformanceCountersTable, or use the very helpful Windows Azure Diagnostics Manager from Cerebrata Software (there is a free 30-day trial). This tool reads the raw data from Azure table storage and lets you download it, graph it, etc.
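If you would rather read the table programmatically, here is a rough sketch using the StorageClient library from this SDK generation; the PerfCounterRow entity and DumpCounters method are hypothetical, but the properties mirror the columns the Diagnostics Monitor writes to WADPerformanceCountersTable:

        // Hypothetical entity shaped like a row in WADPerformanceCountersTable.
        public class PerfCounterRow : TableServiceEntity
        {
            public long EventTickCount { get; set; }
            public string DeploymentId { get; set; }
            public string Role { get; set; }
            public string RoleInstance { get; set; }
            public string CounterName { get; set; }
            public double CounterValue { get; set; }
        }

        // Dump up to 100 rows (pass the same account as DiagnosticsConnectionString).
        // Requires System.Linq for Take().
        public void DumpCounters(CloudStorageAccount account)
        {
            TableServiceContext ctx = account.CreateCloudTableClient().GetDataServiceContext();
            foreach (PerfCounterRow row in ctx.CreateQuery<PerfCounterRow>("WADPerformanceCountersTable").Take(100))
            {
                Trace.WriteLine(row.RoleInstance + " " + row.CounterName + " = " + row.CounterValue);
            }
        }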

References

Thanks to the following which provided invaluable information along the way to figuring this out:

Author: "MikeKelly" Tags: "development, Azure"
Date: Thursday, 17 Jun 2010 19:50

One of the practices I advise with Windows Azure services (and really any service) is self-monitoring to find problems that may not be fatal but are indications of serious problems developing.

I happened to run across a good example of how to do this on the Azure Miscellany blog.

In this case, the writer, Joe, had a web role that was spiking CPU usage in circumstances that happened only in production. Of course, the problem occurred only intermittently and couldn't be reproduced in the development fabric (all the really gnarly problems manifest like this).

In the post, Joe writes about how he added code to his web role to monitor its CPU usage and when it went above 90%, his monitoring code triggered a crash dump of the role which he was then able to analyze offline to identify where the CPU usage spike was happening.

This general approach can be used in a number of situations.  Basically, if you are able to write code to detect when your role (whether a worker role or web role) is in a bad state, you are then able to trigger a crash dump to capture the state - or you could at that point tweak some of the SourceSwitch values to increase the verbosity of logging to try to see what's going on.  In some cases, no run-time action is required - you just need to log for operations staff that something odd is happening with enough information for them to resolve the problem (e.g. it could be a configuration setting not set correctly during a deployment.)
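As a very rough sketch of the idea (not the code from Joe's post; workerTrace here is a hypothetical TraceSource defined elsewhere in the role), the self-check can be as simple as sampling a CPU counter and turning up a SourceSwitch when it crosses a threshold:

        // Sample overall CPU; the first NextValue() always returns 0, so sample twice.
        PerformanceCounter cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();
        System.Threading.Thread.Sleep(1000);
        float cpuPercent = cpu.NextValue();

        if (cpuPercent > 90)
        {
            // Turn up logging (or trigger a dump) while the problem is live.
            // workerTrace is a hypothetical TraceSource created elsewhere in the role.
            workerTrace.Switch.Level = SourceLevels.Verbose;
            Trace.TraceWarning("CPU at " + cpuPercent + "% - verbose tracing enabled for diagnosis.");
        }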

You could even have a separate role monitoring the role and then working through the Azure Service Management API to tweak the role that is experiencing problems.

While at Microsoft, I worked with a Microsoft Research developer on a prototype tool called HiLighter that identified patterns in log data to alert operational staff to problems.  Because this analysis process runs in real time alongside the production services, and it learns from past problems, it is able to more quickly identify a developing problem than a human is able to do looking at reams of real-time log and performance data.

Think of Diagnostics broadly – not just as logging, but also as active monitoring of problems in your roles.

Author: "MikeKelly"
Date: Monday, 07 Jun 2010 20:04

My MSDN Magazine article on diagnostics and logging in Windows Azure is now available in the June 2010 issue and online.

There was some material that I didn’t have space to include or that I’ve learned since submitting the final revision of this to MSDN in March.   I’ll use this blog entry to capture some of that.

First, Rory Primrose’s blog has a few good posts on the performance of tracing.  Rory talks about thread-safety issues and getting better performance even in the face of non-thread-safe listeners, and also the fact that the default configuration adds a default listener that puts tracing output in the Debug console in Visual Studio (i.e. effectively passing everything that comes through the tracing pipeline to Debugger.Log).

Second, I recommend using a logging framework - either an existing one or one you roll yourself. You'll get more flexibility in formatting and a single place to extend logging as you discover things that make sense to put in the framework. I've been playing with a framework from the Microsoft Application Development Consulting (ADC) group in the U.K. called simply UKADC.Diagnostics. Josh Twist from that group has done a nice job of providing a good overview of using the standard System.Diagnostics features (on which Azure Diagnostics is based) and then showing how to incorporate their open-source library, UKADC.Diagnostics, into that.

I wrote a custom listener for UKADC.Diagnostics which directs the output to the Azure logging listener. There is more information on creating a custom listener in this MSDN article by Krzysztof Cwalina, although UKADC.Diagnostics provides a base class, CustomTraceListener, that simplifies this quite a bit. The approach I've taken - writing a UKADC listener which redirects to the Azure listener - may not be the most efficient, since it requires an extra level of indirection.

One of the questions I plan to research further, and will cover in a later blog post, is the performance of tracing, including perhaps the most basic question: how much overhead does tracing add to my application if I've disabled at run time the level of tracing being invoked? That is, if I have a Verbose trace call in a method and I'm only logging errors, how much does it cost to simply get far enough through the trace stack to realize that Verbose traces are disabled? I've not found good information on this elsewhere.
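While I don't have numbers for that yet, the early-out itself is easy to see in code. Here's a small sketch (mySource and BuildExpensiveDetails are hypothetical names): the guard lets you avoid even building an expensive message when the level is disabled.

        // TraceEvent checks the switch internally, but guarding explicitly avoids the cost
        // of building the message arguments when Verbose is turned off.
        if (mySource.Switch.ShouldTrace(TraceEventType.Verbose))
        {
            mySource.TraceEvent(TraceEventType.Verbose, 0, "details: {0}", BuildExpensiveDetails());
        }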

While Azure provides the logging and tracing information you write through the TraceSources in your app, it also provides access to a bunch of system logs (e.g. IIS logs, system event logs, etc.).  There is a good post from Mix on getting IIS logs out of Azure.

Hope you find the article helpful – post any questions here and I’ll try to respond, and I’ll also be posting more as I get the UKADC.Diagnostics working with Azure.

Author: "MikeKelly"
Date: Wednesday, 28 Apr 2010 22:13

For a consulting project, I’ve been playing around with getting the Skype public API to work with Microsoft Robotics Developer Studio (RDS) by building a DSS service that communicates with Skype.

I’ve gotten this working and wanted to use the Visual Programming Language to wire up a simple test of sending datagrams through Skype.  So I just set up Skype on two machines, and one of them is running my RDS app to drive Skype through the API to send a datagram.  I figured I didn’t really care what the other side did with the datagram, I just wanted to see if I could send it.

I found, though, a problem that is alluded to but not really explicitly stated in the Skype API documentation.

For an app-to-app connection to work, both sides have to establish the Application (in Skype terminology).  In other words, both sides of the conversation have to have done a

CREATE APPLICATION Xyzzy

using the Skype API to create the “Xyzzy” application (note: application is Skype’s name for an app-to-app channel). 

Once both sides have created the channel, one side (typically the originating caller) has to do a:

ALTER APPLICATION Xyzzy CONNECT otheruser

where Xyzzy is the application created and otheruser is the user you’re calling.

If one side creates the channel and the other (e.g. the recipient of the call) hasn't, the channel doesn't work.  That's because the side that did create the channel will never get the response from the Skype API giving it the stream to use, i.e. you'll never get:

OnReply to Command 0(): APPLICATION Xyzzy STREAMS otheruser:1

which tells you that “otheruser” can now receive communications through the app-to-app channel.

Like a lot of things in programming, this is obvious once you realize it – you can’t send through a channel unless both sides have established the channel.  I was hoping I could test one side, but no go.

So I just had to write a bit of code to use the Skype public API on the other side (the machine I was calling) to create the channel.  Note that the side receiving the call doesn't appear to have to do the ALTER APPLICATION … CONNECT part - it happens automatically when the other side does.
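To summarize the handshake that has to happen (using the same placeholder application and user names as above):

        Caller:     CREATE APPLICATION Xyzzy
        Recipient:  CREATE APPLICATION Xyzzy                    (both sides must do this)
        Caller:     ALTER APPLICATION Xyzzy CONNECT otheruser
        Caller:     <- APPLICATION Xyzzy STREAMS otheruser:1    (arrives only once both sides have created the application)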

Author: "MikeKelly" Tags: "development, Robotics, Skype"
Date: Friday, 09 Apr 2010 18:41

As part of a project I’m doing for a Seattle startup, I’m playing with Microsoft Robotics Developer Studio (RDS).  I downloaded the 2008 release and generated a simple program to use a desktop joystick (basically a UI element displayed on the screen) to control an iRobot Create.  That worked fine – it was cool to direct the robot around.  But of course, what I really want is to have my code take the place of the joystick and control the robot.

I used a feature of VPL (the Visual Programming Language RDS uses to prototype these things) to generate the whole shebang as a “service” so I could look at the code being generated to hook the joystick up to the iRobot, and then shim in my own code in its place.  The problem is that when I did this, I got this error from VS:

The type or namespace name 'Robotics' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)  

pointing at this line generated by the RDS code:

using drive = Microsoft.Robotics.Services.Drive.Proxy;

Sure enough, trying to add a reference to anything under Microsoft.Robotics fails, since that namespace doesn’t exist on my machine.

Hmm…

I guessed that I was missing an install of something, perhaps the “CCR and DSS toolkit” – but talking to someone on the Robotics dev team at MS, I verified that I had the same assemblies as he did. 

I then realized I was using Visual Studio 2010 Release Candidate – could that be the problem?

Sure enough, rebuilding this under Visual Studio 2008 fixed the problem. So for some reason, VS 2008 is able to find the Microsoft.Robotics assemblies – but VS 2010 is not.  A friend on the Robotics dev team confirms that there are a few tweaks needed to make the toolkit work with Visual Studio 2010.

That then took me on to the next problem - getting this to run.  When I run it in VS 2008, I get an error:

The thread 0x670 has exited with code 0 (0x0).
*** Initialization failure: Could not start HTTP Listener.
The two most common causes for this are:
1) You already have another program listening on the specified port
2) You dont have permissions to listen to http requests. Use the httpreserve command line utility to run using a non-administrator account.
Exception message: Access is denied

 

Trying to use the httpreserve tool (which is installed as part of RDS), I get an error that the port can’t be reserved.  But if I run VS 2008 as an administrator (Start / Visual Studio 2008 / Microsoft Visual Studio 2008 (right click / Run as Administrator)) and then run the compiled RDS service – it works.  So there definitely is a problem with the non-admin user accessing the http port 50000 (for some reason, I don’t have this problem on my second machine which also has RDS installed).  I figured there is some permission group I need to be part of to enable this – but which one?  I wish the Microsoft RDS tools didn’t “dumb down” their explanation of what was happening here and instead told me what precisely was failing.

I decided to just move this to another machine where DSS is working and where I had VS 2008 installed, so I copied the files over there. I ran into a couple more problems before I got this working:

The final two problems:

* In the Post Build step that is generated (to run DSSProxy), a “%” sign was translated at some point into the URL-encoded equivalent, %25, which caused the ReferencePaths to not be generated correctly and caused the cryptic DssProxy error: “Not a DSS Assembly”

* After fixing that, I still got the error and what I finally noticed is that I have both Robotics 2008 and Robotics 2008 R2 installed on my machine.  I am building this under the R2 release, but for some reason the assembly references are to the 2008 release – i.e. they point to

Reference Assemblies/Microsoft/Robotics/v2.0

instead of

Reference Assemblies/Microsoft/Robotics/v2.1.

This also confuses DssProxy. Deleting those references in VS and adding the correct ones from the v2.1 directory (references for Microsoft.Ccr.Core, Microsoft.Dss.Base, and Microsoft.Dss.Runtime) fixed the problem, and I am now running!

Author: "MikeKelly" Tags: "development"
Date: Wednesday, 07 Apr 2010 00:06

I am at the Azure Firestarter event in Redmond today which just finished with a panel discussion and Q&A.  Here are my notes.

Q: For Dallas, do I need to use C# to access the data?

A: No - it's pretty easy to use C# classes, but Dallas exposes an OData feed, so any language that can access a REST API will work.

Q: Is there pricing on the CDN?

A: Still preview. No pricing info.

Q: You mentioned you could run an EXE by hosting it in WCF. Will it recycle with the role or will it run persistently?

A: When I do this, I create a worker role that starts the process, waits for it to exit, and then throws an exception - which causes Azure to recycle the role and restart the EXE.
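A minimal sketch of that worker-role pattern (the EXE name is hypothetical):

        public override void Run()
        {
            // Start the hosted EXE and block while it runs.
            Process process = Process.Start("MyLegacyApp.exe");
            process.WaitForExit();

            // Throwing out of Run() makes Azure recycle the role instance,
            // which restarts the EXE.
            throw new InvalidOperationException("Hosted process exited; recycling role.");
        }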

Q: Is it possible to have a private cloud?

A: Not today. No offering for that today. But it's under consideration.

Q: If I have an app that can't be deployed via xcopy - it requires something to be installed or configured - can it be run in Azure?

A: If the MSI can run as a non-administrator, you could conceivably run that and wait for it to finish in your OnStart method in the web role. Admin mode is coming later.

Q: Can we change the affinity?

A: No - once it's set, it cannot be changed.

Q: Suppose you have a web role that needs to adjust affinity based on work week differences in US, Europe and Asia?

A: If there are times when you don't need it in the U.S., you could redeploy to a different data center. You can create three applications - one in each place - but all will have different domain names. You could set up a DNS record on top of them.

Q: What kind of techniques have you seen to secure the SQL Azure connection string?

A: Those configuration files are encrypted when uploaded - you can use code-based cryptography to encrypt and decrypt the connection string when reading it.

Q: Is there a plan to expand COUNT support in Azure Tables?

A: No.

Q: When you roll out changes to the OS, how do we make sure it doesn't break?

A: The guest OS in the virtual machine doesn't change unless you update the OS Version string. The base OS actually running on the machine is updated but not the guest OS.

Q: Is there a Patterns & Practices document for building on-premises apps?

A: There is - the Windows Azure guidance team is working on it. Eugenio Pace has been blogging on this at http://blogs.msdn.com/eugeniop/

Q: Is there a way to authenticate to Azure and use the service fabric to proxy authenticate?

A: Not clear what the question is.

Q: What is your replication and disaster safety policy?

A: Create three replicas within the data center and also one shadow copy to another data center within the same region. If you choose US - North Central, there also is a US - South Central and we'll make a copy there. You'll never see it - it's only for disaster recovery if a data center is lost. Georeplication is planned for the future.

Q: What are the plans for SharePoint on Azure?

A: None of the slides show SharePoint on it now - not sure what the plans are around offering SharePoint developer services.

Q: Only thing we can't automate now using Service Management API is to create hosted services. Are there plans to change that?

A: Probably.

Q: How can I test in the cloud without exposing to the entire Internet?

A: You can write code to do this. There is nothing in the platform that helps. You could have an IP address whitelist and not allow through anything not on the whitelist.

Q: Does SQL Azure support page compression?

A: No

Q: Does 50 GB limit include the log?

A: No

Q: How do you manage compatibility if you upgrade to SP1?

A: We don't apply the service pack itself; we make modifications to the engine ourselves and will announce those as part of the service updates.

Q: Is SQL Azure good for OLTP or OLAP applications?

A: Primary application is light workload OLTP - types of traffic you see within a departmental application - hundreds of transactions per second. Have some customers hosting the cubes in the cloud and running analysis services on premises, but planning to make this available hosted this year.

Q: When will you get transparent data encryption?

A: SQL Azure is a multi-tenant system - managing the keys is a hard problem. It's on the roadmap but have some work to do.

Q: Why isn't the CLR enabled?

A: Haven't done enough testing - have to be sure that no malicious user can do something evil with a DLL even if it's marked safe. It's on the roadmap but have to be really careful about security around this.

Q: Can I use Bing to crawl my database?

A: There is a SQL Azure Labs site where you can expose your DB as an OData feed. The power of that is you can expose it to non-MS clients like an iPhone app.

Q: When you said multiple tenants share the same data file did you mean that multiple databases are within the same MDF file?

A: That's correct.

Q: So each SQL Azure database isn't really a database it's effectively a set of tables?

A: There was an "under the hood" talk at PDC that you can look at to learn more about how this works.

Q: Isn't it scary to have SQL Server exposed on the Internet?

A: There is a firewall feature that restricts access to SQL Server based on IP addresses, and it also prevents some user name/password changes.

Q: Can you put Microsoft PII data on an Azure database?

A: Not sure - need to get back to you on that.

Q: Can you use SQL Profiler?

A: No - but we are adding some DMVs to get that information.

Q: Can we authenticate to Azure using RSA two-factor ID?

A: No - today it's only a Live ID for the Windows Azure portal.

Q: Can we install Windows Media Services or third-party services?

A: Anything you can do as a non-administrative user you can do. You can add a managed module but to add a native module, you have to be able to do things you can't do as a non-admin. Have two features coming this year: admin mode and the VM role which allows you to deploy a VM instead of the base image.

Q: How long does it take to deploy millions of records to SQL Azure?

A: Depends on Internet connection speed, whether you're doing single inserts or bulk copy API.

Q: Mark was talking about session state - how do you do that without SQL Server?

A: You can do it with the ASP.NET sample available on the MSDN Code Gallery. There's another sample on CodePlex that uses SQL Azure. You do want to test the performance of those, though. Neither one cleans out the session data for you. There's a KB article on how to set up session state. There's no SQL Agent infrastructure to do cleanup - you will need to roll your own.

Q: When will you increase beyond 50 GB?

A: Are there bigger databases in the future? Can probably assume there will be. Keep in mind that you really want to think about a scale out pattern.

Q: What's the motivation for multi-tenanting MDF files?

A: Better use of resources.

Q: Can you say what the releases are for Azure?
A: This year is all I can say.

Q: What's the best pattern for implementing a cold storage / hot storage model?

A: Cold storage, wherever you started, should probably move to Windows Azure storage at some point. Where to put the hot data depends on the access patterns - a lot of writes? A lot of reads? You can put data you're querying heavily in SQL Azure, but then migrate it to a Windows Azure blob after a while.

Q: If someone has 3 TB of data can they send you the data on a DVD to upload it to SQL to avoid connection costs for the upload?

A: No plans at present but it's a great idea.

Q: What kinds of things are people doing with Python?

A: Haven't seen much on Python.

Q: Is there academic pricing for Azure?
A: There are programs where we give it out to universities. Talk to your account manager.

Q: You mentioned throttling - at what point does an app get cut off?

A: The answer differs between Windows Azure and SQL Azure. For Windows Azure table storage, it's about 500 requests per second; beyond that we start to suspect a DoS attack. You'll get error messages, which you should handle and back off. For SQL Azure, there's no connection limit, but it's like being on an airplane: you can spread out a bit if the space isn't being used. If you start using all the resources on the box, you will get a throttling-specific error message, which kicks in at around 5 minutes per query. The way to handle this is to scale out - buy more databases to avoid hammering one, or do better partitioning so the load can be spread out. The limit is per partition: for tables it's the partition key; for blobs, the blob name; for queues, the queue name; for a SQL Azure database, it's the database.

Q: At SxSW there was a Facebook toolkit deployed. Any experiences?

A: It's based on best practices of people who've actually built some Facebook apps on Azure.

Q: Is there a standard way to synchronize between on-premises and cloud DBs?

A: Yes - it's on the Windows Azure portal.

Author: "MikeKelly" Tags: "development, Azure"
Date: Tuesday, 06 Apr 2010 23:14

I am at the Azure Firestarter event in Redmond today and just heard Mark Kottke talk about his experiences as an app development consultant with Microsoft on migrating existing customer apps to Azure.  Here are my notes; slides and sample code are to be posted later and I will update the post with them when they are.

Migrating to Windows Azure - Mark Kottke

  • Helping early adopter customers move to Azure
  • Most were new applications but some were migrations.
  • How to plan a migration
    • What is easy?
    • What is hard?
    • What can you do about it?
  • Why migrate?
    • Flexibility to scale up and down easily
    • Cost
  • Is it better to start over or just migrate existing code?
    • Migration "seems" easier - code already written, tested, no users to train, …
    • But…
  • Six customers
    • Kelley Blue Book
      • Now in two data centers and costs quite a bit to keep up the second data center
      • ASP.Net MVC app with SQL Server
      • Migrated pretty quickly
      • Use Azure as a third data center (in addition to their two existing data centers)
    • Ad Slot
      • Migrated storage to Azure tables.
      • Biggest change because of difference from SQL
      • Similar data architecture to the BidNow application.
    • Ticket Direct
      • New Zealand's ticket master
      • Used SQL Azure
    • RiskMetrics
      • Pushed the scaling the most of the case studies.
      • It had scaled to 2000 nodes by the time of PDC 2009
      • Do Monte Carlo analysis for financial applications. The more trials they can run, the better the results.
      • Their key IP is a 32-bit DLL that does this - they were able to move that without rewriting it.
    • CSC - Legal Solutions Suite
      • ASP.Net application like Kelley Blue Book
    • CCH Wolters Kluwer
      • Financial Services ISV
      • Sales tax engine to integrate with ERP systems. Currently sold as an on-premises solution with regular updates for the sales tax rate changes.
      • Moving to the cloud made it easier for customers - no need to do updates or manage software on-site.
      • Sales Tax engine was written in a 4GL from CA called FLEX that generates .NET code but it worked fine.
  • Typical apps are composed of n tiers: presentation, services, business logic, data. The first thought is to migrate everything - but you could migrate just part of it.
    • e.g. Could migrate just the presentation layer - avoids dealing with all the security issues.
      • Benefit is get development team up to speed on Cloud, processes modified to include Cloud, …
  • Q: Could you upload a BLG file (Binary Log File) that shows what RAM usage, what transactions, etc. and have the Azure ROI tool estimate the cost of running this service on Azure?
  • Q: How about if you put the cost of your demo apps on the apps - just knowing what the cost of a demo app would be helpful.
    • A: Good idea.
  • Data Storage Options
    • Have MemCacheD, MySQL in addition to other options discussed (SQL Azure, Azure Tables, etc.)
    • To migrate a DB to SQL Azure…
      • Using SQL Server Save or Publish Scripts, extract all the information about the local DB.
      • Using Management Studio connected to SQL Azure, create a new DB.
      • Open the script file created from the existing local DB
      • Execute it against the SQL instance.
      • Can use SQL Azure Migration Wizard (http://sqlazuremw.codeplex.com/)
    • Q: Will the database automatically scale up from 1 GB -> 10 GB?
      • A: No, but you can use the portal to modify it.
  • Caching / State Management
    • Velocity - not yet available for Windows Azure
    • Can use ASP.NET cache but not shared across instances.
    • Can use ASP.NET membership / profile providers - on Codeplex there is one that uses SQL Azure.
    • Can use Memcached. Pretty straightforward to use for caching session state.
    • Cookies, hidden fields - any client-side providers basically work fine.
  • For app.config and web.config, decide how much of your configuration needs to be able to be changed once the app is deployed
    • Move to ServiceConfiguration.cscfg to allow for run-time update.
    • Leverage the RoleEnvironment.Changing and RoleEnvironment.Changed events if you want to change the default behavior of recycling the role on any config change (see the sketch after these notes)
    • Use RoleEnvironment.IsAvailable to determine whether you are running in the cloud or on premises.
  • Using WCF Service endpoints that are exposed only within the role - not exposed to the Internet, not load balanced.
    • Need to call Windows Azure APIs to discover the services and wire up once the server is running.
    • Important for Memcached to share caching.
    • WCF Azure samples: http://code.msdn.microsoft.com/wcfazure
  • Legacy DLLs
    • To call a 32-bit DLL use a WCF host process
    • For external processes (EXEs)
      • Host in a worker Role
  • Can deploy to staging and then flip to production - the staging has a GUID.
    • No requirement to use staging but it is a good practice to test.
    • Sometimes you need to change configuration settings between staging (test) and production
  • Start with development fabric / development storage
    • But if you're using SQL Azure, it's better to use SQL Azure in the cloud, because there are enough differences between local SQL and SQL Azure that it's worth testing against the cloud SQL
  • Then test development fabric / cloud storage
  • Then cloud fabric / cloud storage
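A minimal sketch of the RoleEnvironment pieces mentioned in the notes above (the "LogLevel" setting is hypothetical); you would wire this up in OnStart:

        // Changing fires before the change is applied. Setting e.Cancel = true recycles the
        // instance; leave it false for setting changes you can absorb at run time.
        // (Requires System.Linq for Any().)
        RoleEnvironment.Changing += (sender, e) =>
        {
            e.Cancel = e.Changes.Any(c => !(c is RoleEnvironmentConfigurationSettingChange));
        };

        // Changed fires after the change is applied - re-read any settings you cached.
        RoleEnvironment.Changed += (sender, e) =>
        {
            string logLevel = RoleEnvironment.GetConfigurationSettingValue("LogLevel");
            Trace.WriteLine("Configuration changed; LogLevel is now " + logLevel);
        };

        // And to check whether you're running under the Azure fabric at all:
        bool inAzure = RoleEnvironment.IsAvailable;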
Author: "MikeKelly" Tags: "development, Azure"