I said in my last post that the benefit of using Whitehorse has been marginal so far. To my surprise, I never got a single email refuting this. I still stand by that comment, although I am being a little hard on Whitehorse.
At this stage in the design process, I've painted a picture of my application design, and a picture of my logical datacenter design. I use the word picture explicitly, because that is what it is. It is a visual representation, without any additional information (metadata) attached. Kind of a flat representation of a 3D world. Today, I'm going to describe how I started to add that additional information, initially to the application design.
If you refer back to the application design in my previous post, you'll see I started by selecting one area of functionality we would be delivering (point-of-sale customer and sales order), and then built up an application design. To the basic POS system, I added some additional features to support further development, like an event publisher to allow new systems to be attached to the application, and a library of common services (InfrastructureLibrary). Remember also that my application utilises published SAP Web Services, which in my initial design I emulated using an ASP.NET Web Service (SAPWS).
The first thing I did today was to try to access the real SAP Web Services. I consulted the SAP API documentation and found the address of the WSDL files that described the Web Services I planned to utilise. Then, within the Application Diagram, I deleted the dummy ASP.NET Web Service (SAPWS), and added an External Web Service instead. When I did this, I got a dialog, which allowed me to specify the path to the WSDL file for the SAP API I wanted to use. I entered the URL, and this created an external Web Service in the Application Designer.
At this point I bumped into a couple of issues. Firstly, SAP exposes each API call as a separate web service, instead of a single web service with multiple web methods. This meant that I had to create a total of 8 separate external web services in the application designer, one for each SAP API I planned to call. This could make for a pretty cluttered Application Design if you planned to access a large number of SAP APIs.
The second issue came up when I looked at the signature of the SAP web services. These were RPC/encoded web services, rather than document/literal. I wanted to stick to document/literal web services, following the guidelines of the WS-I Basic Profile. I believe that later versions of SAP provide better web service interfaces. However, it became apparent that putting a web service façade across the front of the provided SAP web services was going to be a good idea.
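For anyone who hasn't bumped into the distinction, it shows up in the WSDL binding. Here's a simplified, hypothetical fragment of each style (the operation and namespace names are my own, not SAP's):

```xml
<!-- RPC/encoded: the style the SAP services exposed -->
<soap:binding style="rpc"
              transport="http://schemas.xmlsoap.org/soap/http"/>
<wsdl:operation name="GetCustomer">
  <wsdl:input>
    <soap:body use="encoded"
               encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
               namespace="urn:example:sap"/>
  </wsdl:input>
</wsdl:operation>

<!-- Document/literal: the only style the WS-I Basic Profile permits -->
<soap:binding style="document"
              transport="http://schemas.xmlsoap.org/soap/http"/>
<wsdl:operation name="GetCustomer">
  <wsdl:input>
    <soap:body use="literal"/>
  </wsdl:input>
</wsdl:operation>
```

With document/literal, the message body is just an XML document described by a schema, which makes validation and interoperability a lot easier.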
You can see my Application Design above. Note that I cropped it a bit, to focus on the area of interest. At this stage, I had linked to each of the SAP web services, and I had defined the web services (and web methods) for my SAPFacade (a less granular set of business web services, implemented in terms of native SAP web services). I still need to define the rest of my applications, but at this point I thought I'd have a little fun….
I checked everything into Visual SourceSafe (I'm working stand-alone off my laptop, so no Team Server, sadly), labelled the version, and then checked it all out again. Then I right-clicked on SAPFacade, and selected "Implement Application". VSTS created the ASP.NET Web Service project (in C#), automatically added web references to all of the SAP web services I had linked to in my Application Design, and created stubs for the methods of the two web services I had defined. Selecting one of the web methods, I was then immediately able to write the following code, with completion etc.
At this point, I really began to see some benefits from Whitehorse. Usually when I'm working with corporate customers, I don't get the luxury of completely "green fields" development. What I mean by that is that I invariably have to link my applications to existing applications/systems, and that I need to deploy those applications onto existing (sometimes shared) infrastructure (more about that next week). With a "static" design tool like Visio I typically can't pull in definitions from existing systems, and integrate them with my evolving design (although to be fair, I can pull in web services definitions in some other design tools). My Application Design was now starting to capture significant meta-data behind the visual elements, and more importantly, this meta-data is automatically transferred to the application developer (in the form of class definitions etc). I was beginning to get hooked!
One last really COOL feature! I talked above about the fact that I wasn't that happy with the exposed SAP web services, and I implemented a façade to those services which provided a web service interface that better suited me. Now of course as I continue to develop this application, I will build up a pretty comprehensive SAPFacade (a number of web services, grouped around specific business entities, such as SalesOrders, and Customers). Of course within my customer, other designers will be working on related applications, and these designers will benefit from the work I have done providing business level wrappers around the SAP web services. SO, what I can now do, is right-click on the SAPFacade application within the Application Designer, and select "Add To Toolbox". Now this component will be available on my toolbox, for re-use in other application designs (and I can export it for others to use as well). Of course I can use this to add other enterprise wide common components, like InfrastructureWS. Now that ROCKS!
More next week, as I start to "flesh out" my logical Datacenter Design.
My post yesterday didn't work very well, because I posted the pictures of the designer onto a web site that had download limitations. The result was that most people who tried to view the post couldn't see the pictures. Now fixed, thanks to Nigel Parker, who loaned me some webspace. Thanks, Nigel!
Tomorrow, I'll post the second article in this series.
As a result, I was both excited and a little bit worried when one of my corporate customers asked me to help them architect and design a new series of applications they were going to build using VSTS. Excited, because I was finally going to get a chance to really exercise this cool technology, and a bit worried, because I wondered just how much value it would actually add to the design process, and whether it would really work as promised (it's still a beta, after all).
This blog is the first in a series where I'm going to try to explain how I went about using Whitehorse, and how effective I found it.
The customer had already developed an enterprise architecture which covered off the major applications they needed to build in the coming months, and also provided a whole lot of guidelines around how those applications should be architected (leading towards a SOA). The customer also already had some applications deployed using VS 2003, in an existing datacenter environment.
We started by identifying a candidate application. This was a retail POS application which was going to be browser based, and would provide retail customer management, and order creation/update. At the back-end, this functionality was all provided by SAP. We planned to integrate with SAP via SAP Web Services.
We started by laying out our initial application design. You can see the initial design below. The various applications are laid out in tiers, with a presentation tier containing the POS web site (CustomerSalesOrder), an application tier containing process level services (StoreOpsPS), another application tier containing the more atomic web services (SAPFacade, StoreOpEvents, and InfrastructureServices), and a data tier.
The key applications for the retail POS system are the CustomerSalesOrder web site, and the StoreOpsPS higher level services. As we started looking at those more closely, we realised that there were going to be some common services which would be shared by various applications. We decided to group these as InfrastructureServices, and these included things like auditing, authentication etc. We also wanted to be able to add integration with other systems easily, so we decided to implement an event publisher (StoreOpsEventPublisher), which applications could subscribe to. This was not required for the initial application, but we felt it would come in handy later on.
At this stage, we did not have access to the SAP system, so we mimicked it with an ASP.NET Web Service called SAPWS. We put a couple of Web Services into this proxy, based on the SAP documentation. Later we'll replace this with the actual SAP web services.
At this stage, we talked to the infrastructure team, and got them to help us lay out the existing infrastructure using the Logical Datacenter Designer. That diagram is shown here.
As you can see, they have two logical tiers, with presentation and application in the WebZone, and then several "data" tiers (SAPZone, DatabaseZone, and MainframeZone). From my perspective, although SAP and the mainframe have application logic, I only access them as data servers.
One thing that was not immediately apparent to me was that I needed to actually connect up endpoints even within a zone. So, when I first built this it would not validate against my Application Design, because I hadn't specified the LocalWS, or the EventQueue (OK, so I should RTFM). I found that I could connect up various Web Services, database connections, and generic connections, and then set constraints around them. For example I was able to specify that the protocol for the EventQueue had to be MSMQ. Cool.
At this point, I created a deployment diagram, and then spent some time putting all the applications into the appropriate servers, and linking up these within and across the zones. This took an hour or so, mainly due to my incomplete understanding of how it all worked. I also laid out the operations on each of the Web Services, ready for these to be created.
So, after all this, did Whitehorse help me design this system? At this stage in the process, I'd have to admit the benefit was still marginal. It did help, in that it forced the various interested parties to cooperate, and design not only how the application was going to work, but also how it was going to be deployed, but at this stage, we could have done all this with Visio.
HOWEVER, stay tuned for my next blog, when I'll describe how we continued to work with these design views, and make them much richer than they are here...
As I said in my previous post, I've been trying out a lot of beta software recently. One of the things I've been working with a lot is Visual Studio Team System. I reckon it's pretty cool (actually, I'd have trouble going back to the previous version now), and really wish I'd had something like it in some of the projects I've done in the past.
To encourage you all to try it as well, we (the Microsoft New Zealand Developer and Platform team) have decided to run a competition. The competition comes in two flavours, to appeal to both individual developers and development teams, and the prizes are pretty cool. There's about $100K in prizes up for grabs, for New Zealand developers only, so you have a great chance to win. Plus you get the chance to become world famous in New Zealand, because the winners get drawn at TechEd, which will give you some great exposure to the New Zealand development community.
Best of luck...
Scott Woodgate talks about Rolling Thunder, when he speaks about various cool features in the new BizTalk 2006, so I thought I'd share some of my own Rolling Thunder for the various technologies I've been checking out over the last 6 weeks, starting with BizTalk 2006.
Rolling Thunder #1 - The BizTalk Configuration Utility
One of the biggest complaints I've had from customers is that while installing BizTalk is pretty simple, the configuration wizard was just too complex. If you made any mistakes, and forgot to install a feature, or wanted to make changes, you often ended up unconfiguring (hopefully you remembered to do that), and then re-running the configuration. Well, the first thing that hit me in the eye was the new configuration wizard. Simplicity itself to run, and you could run and re-run it to install features, and then uninstall features as needed. I was pretty pleased about that, because I tried to install BizTalk onto SQL 2005, so I ended up uninstalling the BAM Events, and generally changing things around several times.
Rolling Thunder #2 - The BizTalk Flatfile Schema Wizard
Scott's already blogged about this, but it's such a cool feature, I had to mention it as well. I wish I'd had something like this back in the BizTalk 2000 days...
Rolling Thunder #3 - WSE 3 MTOM support
I was giving a talk about WSE and Indigo to our local .NET User Group. I was also pretty intensely using Visual Studio 2005 Team Suite for my development, so I wanted to show both WSE2 and WSE3. Rebecca Dias kindly gave me access to an early version of the WSE3 CTP (which you can get here), so I put together a bunch of samples showing security in WSE2 with Visual Studio 2003 (on a VPC), and then showed MTOM support in WSE3 directly off my laptop. A lot of people have been struggling with sending binary data across web services, with some using WS-Attachments, and some using MIME. Finally, with an implementation of MTOM available, we can all move to a standard way to do this. And it's so simple too.
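For anyone wanting to try the MTOM support, turning it on in WSE 3.0 is just a configuration change. This is from memory of the CTP, so treat it as a sketch rather than gospel:

```xml
<configuration>
  <microsoft.web.services3>
    <messaging>
      <!-- clientMode="On" forces MTOM encoding on outgoing requests;
           serverMode="optional" lets the service accept MTOM or plain SOAP -->
      <mtom clientMode="On" serverMode="optional"/>
    </messaging>
  </microsoft.web.services3>
</configuration>
```

The binary data itself just goes into a byte[] member on your message class, and the plumbing takes care of the rest.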
I've also looked at the security features in WSE3, but more on that in another post. WSE3 is very cool though, and I recommend you check out the CTP.
And before anyone says, I know I should probably put all this beta software into a VPC (and I do for some builds and demos), but it really does work very well together.
Well, it's been a while since I last posted. What with the changes to our blogging infrastructure, plus my workload suddenly increasing dramatically, I just never seemed to get around to writing another blog.
So what have I been doing? Well, I got asked to do a couple of talks about Visual Studio Team System (VSTS) at local developer conferences, and then we had Connect 05 in Christchurch, Wellington and Auckland. Didn't seem that bad, but I really like to make sure I am prepared for talks, so it all ended up taking a lot longer than I anticipated (but was very interesting).
VSTS is truly huge. I’ve spent the last 7 years building increasingly complex server systems on Microsoft technologies, everything from the Rugby World Cup web site, to message exchanges built on BizTalk Server, so I really relate to the designers that ship with VSTS. For the talk, the demo script involved a fairly simple 3 tier web application, which you hook together, and then deploy to a logical datacenter. Impressive in itself, but I decided to have a bit of fun, and ended up plugging in the application and datacenter designs of several of my past projects. This took a lot of time, but was pretty cool.
I’ll be doing a lot more with this stuff in the next few weeks, so stay tuned.
I promised a few links from my Business Process Management session this morning…
The BizTalk Server Security paper - http://www.microsoft.com/biztalk/techinfo/whitepapers/2004/security.asp
Using Microsoft Tools for Business Process Management - http://www.microsoft.com/biztalk/techinfo/whitepapers/2004/business_process.asp
BizTalk Server 2004: Understanding BPM Servers - http://www.microsoft.com/biztalk/techinfo/whitepapers/2004/bpmservers.asp
BizTalk Technical Guides, including deployment, performance, security - http://msdn.microsoft.com/BPI/TechGuides/default.aspx
Thanks to those of you who attended. I got some great questions, and feedback.
Well, here I am at the Microsoft Regional Architect Forum on the Gold Coast. I just caught Martin Fowler’s keynote on Enterprise Application Architecture Patterns, and patterns generally. Martin is an engaging speaker, and I was pretty keen to catch his ideas on patterns after I worked with his colleague, Gregor Hohpe on the Integration Patterns book. Martin also gave out a URL to a survey he’s done on “good” (Martin was quick to stress that this is his idea of “good”) sources of pattern information. You can get that here
I’m a sucker for a nice analogy! I find they are a great way to help you to understand complex ideas and systems, by referencing stuff you understand well. Of course there is the danger that you can carry one too far, but I still find I understand and remember stuff better when someone provides a nice vision for me. For example, I’ve always liked Pat Helland’s analogies about IT, first the Emissaries and Fiefdoms, and then the Growth of American cities aligned to the development of IT.
Yesterday I presented at the Service Oriented Architecture conference here in Auckland, and I also sat in on a session by Mark Colan from IBM. Mark made a nice analogy between the way systems were originally developed in a monolithic fashion, then object-oriented, and then SOA, likening it to walls (bear with me here).
Mark started the analogy by showing us an ancient wall, composed of irregular bricks, with large mortar chunks in between, and even some pieces that had been added on years later, and cemented into place. This was analogous to our monolithic applications, held together by code, which largely could not be changed, although some retro-fitting did happen, tacked into place by more custom code. The next picture was a more modern wall, composed of bricks that were all the same size, still obviously held together by mortar. This was like OO, because the components fitted together well, but you still needed code to hook them together, and once hooked together, rearranging them was impractical. The last picture Mark showed was of a Lego "sculpture" of Harry Potter (you can see it if you ever fly into or out of Auckland Airport). This was SOA. Bricks still the same, but they could easily be composed in a lot of interesting ways, and even after being built, they could be rearranged into new compositions. Kind of like the cement was now re-usable, as well as the bricks.
The other point Mark made was that the guy who built the Lego sculpture had several attempts at it before he got it right, which parallels the potential rapid development style possible with SOA.
Nice one Mark!
Cool. The MSMQ Adapter (as distinct from the MSMQT Adapter) has now been released for BizTalk. You can download it here.
I’ve been waiting for this, because I do a lot of development on my machine, and want to have MSMQ and BizTalk on the same machine. If I installed (registered) MSMQT, I wouldn’t be able to have MSMQ on that machine. If I had MSMQ, I couldn’t read the messages from the queue (you get the idea).
Now onto that Publish Subscribe pattern example…
If you haven't already seen it, check out http://patternshare.org. It's a table of architecture and design patterns, split into Application, Integration, Operational etc. patterns. It pulls in and references patterns from GoF, our Integration Patterns book, the Enterprise Application Patterns book, and other sources, and links you to the pattern synopsis if it's available on the net. A cool resource for architects.
A friend of mine, Alex James, has just started blogging. Alex owns his own small company, building some pretty slick software which allows applications to share data or datalayers. You can find his blog here, and download his software from his web site here.
Looking at the number of blogs though, it seems Alex has been spending too much time blogging. Get back to work man!
Well, it's certainly been a long time between blogs. Since I last blogged, I applied for and got a job back in New Zealand with Microsoft. The job is with the Developer and Platform Evangelism team based in Auckland. My role is Architect Advisor, working with key Enterprise and Partner architects in the community. It's certainly going to be a change after over 7 years as a consultant with MCS in New Zealand and Australia.
Of course the new job meant we had to pack up all our stuff in Sydney, and move back over to New Zealand, which was a mammoth task. That’s 3 moves in 2 years now, so I don’t plan to do any more anytime soon. Add in getting kids settled into new schools, finding a place to live, and unpacking some ridiculous number of boxes, and you’ll see why I haven’t had time to write.
Still, that's all behind me now, so I plan to catch up on my writing over the next few months. I'm currently also working with the PAG team in Redmond, building a reference architecture around the Integration Patterns book I helped to write last year. You can find the book here. That's been pretty interesting, as we completely build out the bank solution using various Microsoft technologies. It also fits in well with some speaking engagements I have in the coming weeks, at the New Zealand "Service Oriented Architecture" conference, and in Australia at the Microsoft Regional Architects Forum. Hopefully I'll bump into many of my architect friends there. I'm doing a couple of round-table sessions, on Integration Patterns, and on Business Process Management.
A customer of mine bumped into an interesting trap when building a BizTalk Orchestration. Basically, they have a fairly complex orchestration, and at various points in this orchestration, they needed to write some information to an Oracle database. They wrote a simple OracleHelper component, and tested and debugged that. Then they added a whole lot of expression shapes in the orchestration, where each expression shape had a line something like this...
bResult = bErrorOccurred || OracleHelper.UpdateApplicationStatus(statusFlag);
The intent was that if Oracle access failed (for example if the Oracle Server was not available), then the result flag would be updated, and we would know whether or not a DB update was successful.
The customer rang me to say that the orchestration was sometimes not calling the Oracle helper component, and that this behaviour was very inconsistent. He blamed BizTalk.
Now orchestrations are converted to intermediate C# code, and then compiled. Show this expression to an experienced C# coder, though, and they'll quickly spot the problem. It isn't really the compiler misbehaving at all: || is C#'s conditional (short-circuiting) OR operator, and by definition its right-hand operand is evaluated only when the left-hand operand is false. So whenever bErrorOccurred is true, the call to the OracleHelper is skipped entirely, because its result cannot change the outcome of the expression. In that case, the expression effectively reduces to
bResult = bErrorOccurred;
Once I pointed this out, they changed the code to
bResult = OracleHelper.UpdateApplicationStatus(statusFlag) || bErrorOccurred;
and everything worked fine. I guess BizTalk is predictable and consistent after all :-)
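If you want to see the short-circuit behaviour for yourself without firing up BizTalk, it's easy to demonstrate. Here's a little Python sketch (Python's `or` short-circuits exactly like C#'s `||`; the helper function is just a stand-in for the customer's OracleHelper, not real code from their system):

```python
# Demonstrates the short-circuit trap described above, outside BizTalk.
# Python's `or` evaluates its right operand only when the left is falsy,
# exactly like C#'s || operator.

call_count = 0

def update_application_status(flag):
    """Stand-in for OracleHelper.UpdateApplicationStatus (hypothetical);
    it just records that it was actually invoked."""
    global call_count
    call_count += 1
    return True

b_error_occurred = True

# Original ordering: the helper is never called once an error has occurred.
b_result = b_error_occurred or update_application_status(True)
print(call_count)  # 0 -- the database update silently never happened

# Fixed ordering: the helper always runs, then the flags are combined.
call_count = 0
b_result = update_application_status(True) or b_error_occurred
print(call_count)  # 1
```

The first ordering silently skips the database update whenever an error has already occurred; the second always performs it, which is what the customer intended.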
I've been a digital camera enthusiast for about 4 years now. Like many people I imagine, I've used various imaging software to do basic improvements to the photos I take, like red eye removal, improvement of levels in the photo, cropping etc.
I've been kind of keen to do more, so last week I bought a book, entitled "Photoshop Restoration and Retouching", by Katrin Eismann (ISBN 0-7357-1350-2). This book is fantastic! I've looked at quite a few books, but none of them cover the topic in such detail, and make it so understandable for the amateur.
The thing I really love about this book is that as well as showing you how to do the fantastic (like removing a person, or stitching back together a photo that has been torn up), it also shows some fantastic techniques for solving the more mundane problems with your photos, like having people's faces in shadow against a bright backdrop, for example when you are at the beach.
I've already worked away at about 10 photos of my own that were potentially great, but for various reasons had small flaws, and in each case, I've managed to apply the lessons in the book to get some great results.
I totally recommend this book!
Well, I finally finished the talk late last night, and presented it at the OASIS conference this morning. It seemed to be fairly well received, with many in the audience having similar experiences to share about SOA work they had done. The talk was very technology agnostic, to fit the audience, which was mainly architects, with a lot of J2EE guys present. To my surprise, quite a few people came to talk to me afterwards about some very technology-specific issues (mainly about BizTalk 2004), so I had a few very interesting discussions.
Now on to the next challenge, building some comprehensive business rules using BizTalk 2004….
1. Using interception services to provide functionality common to all services (many people call these cross-cutting concerns). A common mechanism for implementing this is to use pipelines to intercept the incoming and outgoing messages, and pass them through a series of handlers that provide the interception services (things like authentication, authorisation etc). A great example of this kind of architecture is Shadowfax (now called Enterprise Development Reference Architecture). BizTalk also uses a similar architecture.
2. Composing services from multiple other services (for an example of this, look at this chapter in the Integration Patterns book). This is a good pattern for combining information from multiple sources, providing a single interface to multiple (possibly horizontally) scaled systems etc.
3. Building completely new business processes from existing services (see this chapter in the Integration Patterns book). This pattern describes how to build on your existing services to create new business processes, and how to make these new processes resilient, with transactions, exception handling etc.
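The interception idea in point 1 can be sketched in a few lines. This is illustrative only (the handler names and message shape are mine, not Shadowfax's actual API), but it shows the core mechanism: each message passes through a chain of handlers before reaching the real service:

```python
# A minimal sketch of an interception pipeline: cross-cutting concerns
# (authentication, authorisation, auditing) are plain handlers that a
# message flows through before the service logic ever sees it.

class PipelineError(Exception):
    """Raised when a handler rejects the message."""

def authenticate(message):
    if "user" not in message:
        raise PipelineError("unauthenticated")
    return message

def authorise(message):
    if message.get("user") != "admin":
        raise PipelineError("forbidden")
    return message

def audit(message):
    # Handlers may also enrich the message rather than reject it.
    message.setdefault("audit_trail", []).append("seen")
    return message

def run_pipeline(message, handlers):
    for handler in handlers:
        message = handler(message)
    return message

pipeline = [authenticate, authorise, audit]
result = run_pipeline({"user": "admin", "body": "create order"}, pipeline)
print(result["audit_trail"])  # ['seen']
```

Because the handlers are just a list, adding a new cross-cutting concern (say, logging) means appending one function, without touching the services themselves.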
Tomorrow, I’ll work on the final section (Run…), where I talk about Business Activity Monitoring, and Business Rules.
I do have one question to throw out to anyone that wants to venture an opinion. When designing schemas for an enterprise, it’s pretty common to build canonical schemas. These represent the authoritative definition for entities within the enterprise. I usually recommend that schemas be built for each business process being exposed as a web service. However, some architects I have encountered propose to create a single (large) canonical schema that represents the entire enterprise. I can see some benefits from this approach (guaranteed common definition across the entire enterprise), but I think there are also many disadvantages (very complex schema, difficult to create a schema that all business units agree on). If you have any thoughts, or experiences to share about this, I’d be interested to hear.
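To make the per-business-process option concrete, here's a hypothetical fragment (namespaces and element names are invented): each process schema stays small, while genuinely shared entity definitions can still be pulled in with xs:import, so a common definition doesn't have to mean one giant schema:

```xml
<!-- Common.xsd: shared canonical entity definitions (hypothetical) -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:common"
           xmlns="urn:example:common">
  <xs:complexType name="Customer">
    <xs:sequence>
      <xs:element name="Id" type="xs:string"/>
      <xs:element name="Name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>

<!-- SalesOrder.xsd: a per-business-process schema, importing only what it needs -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:c="urn:example:common"
           targetNamespace="urn:example:salesorder">
  <xs:import namespace="urn:example:common"
             schemaLocation="Common.xsd"/>
  <xs:element name="CreateSalesOrder">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Customer" type="c:Customer"/>
        <xs:element name="OrderTotal" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```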