This is a strange post in many ways. First, I didn't post for a looong time at p&p. Second, I moved on to a group where it was too early to blog about what we were doing. And third, now I'm not even a Microsoft employee.
My new blog address is http://edjez.instedd.org.
I left patterns & practices in 2007, after having started its .NET efforts and having started and grown the "bluebricks" program that let us ship all the 'guidance' (what a fluffy word!) as source code you could take into production - work that eventually evolved into the Application Blocks, EntLib, GAT/GAX, Software Factories, and so on. Of course it went through many iterations in the very able hands of the amazing people who are at, or passed through, patterns & practices. The people who worked at p&p and the user communities with which we built close ties are the fondest memories of that time.
When I left p&p I went to an internal group near Ray Ozzie, doing prototyping and putting together approaches for 'innovation' at Microsoft. You won't see much about it yet, but I hope they eventually get an external identity, because it was the coolest thing to hit Microsoft in my almost 9 years there. There are just random tidbits leaked over the web, and I couldn't blog at all about what was going on, but even so it was worth it. It was a dynamic and energetic team led by people who know what they are doing but are also crazy enough to break the molds that don't work.
But there was something constantly tugging at my heartstrings, and I left Microsoft in October 2007 to join a technology non-profit called InSTEDD. We do advanced technologies for disaster response and global health - using and doing Open Source - from SMS-based applications for data sharing to visualization and data mining. Although I had a great time at Microsoft, I am having a blast every day working with Google, Facebook, Linux, SalesForce.com, Sun (...and Microsoft), taking technology as the means, not the end - and it is fulfilling to be working directly with communities where technology holds so much potential.
And you know what? It's challenging all my assumptions about architecture, design, and the economics of software development and use. In a way, I feel like I've jumped some years into the future, and now I am living every day what I glimpsed some years ago (see my 2006 blog post - Forerunners of tomorrow's enterprise architectures). There are a lot of patterns to be mined here. I hope I do a good job of communicating them.
InSTEDD is inherently cross-platform, but for some fluke reason we have a lot of .NET stuff going on right now, so you might want to check it out (http://www.instedd.org). Of course, we'll make the right chunks run on Mono.
My new blog : http://edjez.instedd.org. This is my last post here.
My new 'customers' live very different lives, but I think there is something to be learnt from every one. To Microsoft and patterns & practices chaps - thanks. Please forward folks to my new blog if you think there's value.
To all customers I've ever worked with - I hope what I've done has been of good use - and it's been a privilege working with you. And if you have a Corporate Social Responsibility angle or have an aching need to donate to what probably is the only humanitarian NGO doing TDD on the planet, you know where to find me.
Tom and I are making a set of simultaneous posts about a recent addition to the EntLib 3.0 family of Application Blocks - the Policy Injection Application Block (PIAB). His post is an introduction and high-level description, but in this one I wanted to get a little bit behind the scenes and expose the 'why' behind its existence and design.
This post will make little sense as a first introduction, so
1. Go read Tom's post for a quick overview
2. Read this post for a behind-the-scenes look at this new block
Our work is still a Community Technology Preview (CTP) and we are looking for feedback! So please don't be shy.
What's the goal?
At patterns & practices we realize that a huge part of making enterprise applications easier to build is having good separation of concerns (SoC). Separation of concerns is about allowing the right people to work in the right areas at the right time. It also allows different parts of your systems to evolve at different paces.
In other words, a system that has bad separation of concerns makes it hard for people with same or different responsibilities to work in parallel, makes it hard to maintain a body of code without affecting other pieces of code, and forces decisions to be made at times that may not be the best for the team building the solution.
There are many ways to help separate concerns - via tools, platforms, methodologies, programming languages, etc. Our observation is that when the platform supports some level of separation of concerns, it is simpler for the tools and methodologies to follow suit.
Essentially the PIAB allows you to specify code that will run before and after members of your components in your application, as specified in a model, with no real code change requirements in your apps. It implements the common Composition Filters pattern.
The scenarios where this can help range from common ones - such as exception shielding, adding perfmon counters around method calls and other types of instrumentation, security checks - to quite interesting ones - versioning, multiple dispatch, parameter validation, interception for test stub assertions, and so on.
This sounds familiar...
People familiar with Aspect Oriented Programming (AOP), Aspect Oriented Software Development (AOSD) and Composition Filters (CF) will recognize this concept, and many of the ideas in the PIAB should resonate with them. Over the years I have been personally involved in many discussions about SoC within Microsoft, including an AOSD summit we held last year. For many years I have been collecting a list of challenges customers solve with these approaches, and I can tell you it goes well beyond 'instrumentation', which tends to be the first use people imagine.
I also hear many people express that this approach only applies to, and should maybe be constrained to, 'hard' boundaries in the application, such as service boundaries. The issue is that these boundaries are quite subjective - they could be layers in an application that are not physically distributed, or they could be defined by runtime usage - e.g. within a WorkItem or Module in CAB.
For example, WCF has Behaviors for adding functionality at service boundaries, and p&p has been providing blocks and examples on how to use these, if they fulfill your need. With the approach of the PIAB you can apply policies at boundaries that make sense in your design, or apply policies regardless of whether they fall on a conceptual 'boundary' or not.
So I want this to be an in-depth tour of the PIAB exposing some key design decisions, and what drove them.
What's the overall design?
The design of the PIAB can be described quite succinctly.
Imagine we have a target object with a member that we want to add some behaviors/policies to.
We use an interception mechanism to get in the way of calls going to that member, collect the list of policies that apply using a matching-rules mechanism, run the handlers specified by those policies as a chain of responsibility, and at the other end dispatch the call to the target. Once the target is done - successfully or with exceptions - the stack unwinds, returning through each handler and finally back to the caller.
A policy is a set of handlers in a specific order, and a set of matching rules that specify to what targets it should apply.
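That flow can be sketched in a few lines. This is an illustration in Python rather than the block's actual .NET code, with every name invented for the purpose: handlers are composed into a chain of responsibility, and each handler decides whether and when to call the next one.

```python
# Sketch of the PIAB call flow: handlers wrap the target call in a
# chain of responsibility. All names here are invented for illustration.

def make_pipeline(handlers, target):
    """Compose handlers so each one receives a 'next' callable."""
    def terminal(call):
        return target(*call["args"])          # end of chain: invoke the target

    next_step = terminal
    for handler in reversed(handlers):
        next_step = (lambda h, nxt: (lambda call: h(call, nxt)))(handler, next_step)
    return next_step

def logging_handler(call, next_handler):
    print(f"entering {call['name']}")
    result = next_handler(call)               # work on the way in, then continue
    print(f"leaving {call['name']}")          # ...and more work on the way out
    return result

def validation_handler(call, next_handler):
    if any(a is None for a in call["args"]):
        raise ValueError("null argument")     # short-circuit: target never runs
    return next_handler(call)

pipeline = make_pipeline([logging_handler, validation_handler], lambda x: x * 2)
print(pipeline({"name": "Double", "args": (21,)}))  # -> 42
```

Note how the stack unwinds through each handler after the target returns, which is what lets a handler act both before and after the call.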
Let's drill down into the specifics.
What is the interception mechanism and why?
First of all, let me tell you that I am all for having better transparent interception in .NET as a platform. As the CLR and our languages continue to evolve, I expect this to become easier. However, as of now the situation is pretty much a pick-your-poison scenario: each option forces nontrivial tradeoffs. So here is one of our design choices:
All the infrastructure to gather and run policies is independent of the interception choice. We just provide a default.
That is, while we chose a default, you can use other interception/'weaving' mechanisms and re-use the rest of the design. Or we might ship others after the fact, if there is demand.
We chose remoting proxies as our default, and you will see that in the CTP.
Choosing a default was a hard thing. Here are some of the alternatives we evaluated.
Remoting proxies : Using Real/TransparentProxy
- Requires construction with a special factory
- Requires the class to derive from MarshalByRefObject (MBRO), or to have an explicit interface.
- Proxy is treated as the real type by the type system (.NET special cases type identity checks around these objects)
Context Bound Objects : Using ContextBoundObject and attributes
- Allows interception of ‘new’
- Not recommended for customer code - therefore p&p won't ship it as part of guidance!
Assembly rewriting : Taking the IL of your assembly and injecting more IL into your classes
- Eliminates the need for proxy classes
- Completely compatible with the type system and ‘new’
- Can’t be used for external strong-named assemblies without delay-signing (pros and cons)
- Not supportable by Microsoft Product Support (PSS) as of today - ouch
Generating derived classes : Taking your classes and generating wrapping classes that derive from them
- Requires construction with a special factory
- Only works for virtual methods on non-sealed classes
- Could result in type system issues
Generating inline interception code : Taking your classes and adding C#/VB code 'around' method bodies
- Requires code to be written in a special way
- Interception code is explicit, rather than “magic” (which has pros and cons)
- Requires source code for intercepted objects
Other more obscure mechanisms we discarded : Things that work but we wouldn't like to ship and you wouldn't like to maintain
- Using CLR profiler APIs
- Using CLR JIT Debugger callbacks for runtime IL rewriting
Of course we aren't the first ones to go through these decisions. But being p&p, we have to be careful and explicit about our choices, as they are Microsoft's recommendation on how to tackle the problem for enterprise production environments, and we get very broad usage. This doesn't mean other options are wrong or discouraged, just that our tradeoffs may be driven by different forces.
The approach requires the use of a factory instead of just 'new' for objects that have policy, which can be frustrating. If you are using our Software Factories - specifically Composite UI (CAB), Composite Web UI (CWAB) or a future Web Service Software Factory v3 preview - you are probably already using Object Builder under the hood for dependency injection, and we will provide an Object Builder strategy to make the addition of this PIAB functionality transparent to the rest of the app.
We will be running perf tests against our Reference Implementations and other real-world apps to get impact numbers, so we can share performance data with you. Needless to say, using Hello World as a test scenario with this interception choice yields predictably terrible perf results. Then again, Hello World doesn't need separation of concerns; and the PIAB will not wrap classes if no policy applies to them.
Again, you could take any other mechanism of your choice (e.g. assembly rewriting) and use the rest of the infrastructure as is.
Let's move on to the next important design element.
How do you specify which policies apply to what targets?
Which policies apply to which target members is informed by a set of 'matching rules', which can be set up at runtime programmatically, or from XML config (with tool support), or from attributes (not in this CTP).
A matching rule is essentially a predicate that answers the question - does a policy apply to this member? A namespace matching rule allows you to say something like 'apply this policy if: the namespace of the class containing this method starts with MyCompany.MyApp.BusinessOperations'. You can add matching rules which get ANDed.
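To make that concrete, here is a tiny sketch (in Python rather than .NET, with all class and attribute names invented for illustration) of matching rules as predicates that get ANDed together:

```python
# Sketch of matching rules as predicates over a member, ANDed together.
# All names are invented for illustration; the real block defines an
# IMatchingRule interface that plays the role of these classes.

class NamespaceMatchingRule:
    def __init__(self, prefix):
        self.prefix = prefix
    def matches(self, member):
        return member["namespace"].startswith(self.prefix)

class ReturnTypeMatchingRule:
    def __init__(self, type_name):
        self.type_name = type_name
    def matches(self, member):
        return member["return_type"] == self.type_name

def policy_applies(rules, member):
    # All rules must match (AND semantics).
    return all(rule.matches(member) for rule in rules)

rules = [NamespaceMatchingRule("MyCompany.MyApp.BusinessOperations"),
         ReturnTypeMatchingRule("void")]
member = {"namespace": "MyCompany.MyApp.BusinessOperations.Orders",
          "return_type": "void"}
print(policy_applies(rules, member))  # -> True
```

Because each rule is just a predicate, a new kind of criterion is a new rule class, not a change to a shared schema.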
Why these matching rules, and not just an XML schema that says "these members of these types", or something like it?
The matching rules allow you to define what applies where in an expressive way, for simple or complex criteria - avoiding a one-size-fits-all model that is hard to learn for the simplest cases and hard to scale to the complex ones.
One of our challenges was getting a level of expressiveness in "what applies where" for cases that were application-architecture specific - e.g. "all classes that are Services in CAB" - or specific to usage patterns - e.g. "all classes that are presenters in view-presenter pairs". We collected a list of common criteria and two things became clear: one, our XML schema for matching rules was growing beyond what we wanted; and two, looking down our product roadmap, we were confident we hadn't yet seen all the types of criteria we would want.
So, by having this matching rules design, each matching rule can have its own schema that is expressive and makes sense for the context. We are providing matching rules implementations for common criteria out-of-the-box, such as matching namespaces, base classes, method signatures etc. You can create matching rules that use your own XML schemas, custom languages (like those used for pointcuts by some AOP frameworks) and DSLs if you fancy. All you need to do is implement IMatchingRule.
We also have a Tag attribute and matching rule that allow you to define your own semantics for members. For example, "Apply Audit policy to every method marked as 'Critical'" is as simple as defining the Audit policy with a logging handler, and using Tag("Critical") (or your own 'Critical' attribute type) on the appropriate methods:
[Tag("Critical")]
public void DoSomethingImportant()
I am not personally a fan of matching based on string comparisons of class and method names, but there is a matching rule for that too.
What are policies, handlers and what can they do?
A policy is a set of handlers in a specific order, and a set of matching rules that specify to what targets it should apply.
For example, you can define an "Exception Shielding" policy with an Exception Handler and a Validation Handler. You can then define matching rules that say that "Exception Shielding" applies to "all methods of classes in the data layer".
One of the challenges in these Composition Filters (CF) or Chain of Responsibility (CoR) styles of design, which can have multiple handlers, is knowing the effective order of all the handlers around an object. E.g., obviously, returning cached responses before authorization is not the same as authorizing before checking the cache...
We looked at how customers dealt with this, and in many cases this ended up not being a real problem as they were willing to explicitly, manually, set the order of the handlers to make sense for the outcome. This means there is no automagic sequencing/sorting of handlers, and you define their sequence inside the policy definition in the config tool. Also, policies are ordered themselves, so if more than one policy applies to a target then you can predict the order of handlers.
A handler is the actual object that sits in the call chain before the method gets invoked. These are some interesting characteristics about handlers:
- Handlers are executed as a chain of responsibility. This means that handlers will be on the stack by the time the target gets called. This also means handlers can decide if and when to call the next handler. For example, a validation handler may decide to return an exception without continuing the call chain. Handlers also can do work on the way in to the target, or on the way out, or both.
- Handlers get access to the call's member info, parameter infos, actual argument values, and out/return values. This means you can inspect, log and even change incoming/outgoing arguments as needed.
- Handlers can get access to exceptions, to log them, etc as appropriate.
- Handlers have access to the target object, so they can check its state, or even do operations on it as needed.
- Handlers can keep/share information in a loosely coupled way via a dictionary associated with the call. This is important because there is only one instance of a configured handler per policy per appdomain. We set up the chain-of-responsibility behavior using delegates, so instead of keeping a pointer to the 'next' handler, handlers get a delegate reference to the Execute of the next handler, allowing us to reuse the handlers in more scenarios. The shared dictionary allows you to keep track of data that is not part of the call - e.g. the start time of the call, to compute total execution time on the way back.
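The shared dictionary in that last bullet can be illustrated with a small sketch (Python rather than .NET, names invented): a timing handler stashes the start time in the per-call dictionary on the way in and reads it back on the way out, so the single shared handler instance stays stateless.

```python
import time

# Sketch of a stateless handler using a per-call shared dictionary.
# The configured handler instance is shared across calls, so per-call
# data must live in the call itself, not in the handler. Names invented.

def timing_handler(call, next_handler):
    call["shared"]["start"] = time.monotonic()              # on the way in
    result = next_handler(call)
    elapsed = time.monotonic() - call["shared"]["start"]    # on the way back
    call["shared"]["elapsed"] = elapsed
    return result

call = {"shared": {}, "args": (3,)}
result = timing_handler(call, lambda c: sum(range(c["args"][0])))
print(result, "took", call["shared"]["elapsed"], "seconds")
```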
As you see, we had a design principle:
Handlers are powerful and can inspect and manipulate the call deeply. We won't try to protect handler authors from making mistakes by removing usefulness.
What can I do with this out of the box?
We are planning to include some handlers to add utility out of the box: Validation, Logging, Exception Handling, Performance Counters, Authorization, and Caching. We also plan to have some common matching rules.
This is all very scary. I can't know what will happen by looking at the code! YES - that is the whole point of separating the concerns! If you want hints about what will happen in your code, you can use the tagging attributes to remind you that certain things will happen - but how they happen is up to policy.
How will this impact my debugging? Hopefully the existence of policies can assist your support and troubleshooting by providing information to help you isolate problems. Once you are debugging the code, as of now you will see extra goop. We haven't looked into providing VS tools to help abstract this out, but it is possible to build compelling tools that make you aware of the extra handlers without forcing you to step around foreign code.
Do I have to use factory methods to create my objects? Yes, with the interception mechanism we provide. You can avoid this requirement if you do the extra work of implementing another mechanism. If you do, please share back into the community!
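To illustrate why a factory is needed at all (again a Python sketch with invented names, not the block's actual API): 'new' can only hand you the raw object, while a factory can hand back a wrapping proxy when policies apply.

```python
# Sketch of why a factory is needed with proxy-based interception:
# 'new' returns the raw object; a factory can return a wrapped proxy
# instead. Names are invented; the CTP's real factory API may differ.

class OrderService:
    def place_order(self, qty):
        return f"placed {qty}"

def create_with_policies(cls, policies):
    instance = cls()
    if not policies:            # no policy applies: return the raw object
        return instance

    class Proxy:
        def __getattr__(self, name):
            method = getattr(instance, name)
            def intercepted(*args):
                for handler in policies:
                    handler(name, args)     # run handlers before the call
                return method(*args)
            return intercepted
    return Proxy()

audit_log = []
svc = create_with_policies(OrderService, [lambda m, a: audit_log.append(m)])
print(svc.place_order(5))   # -> placed 5
print(audit_log)            # -> ['place_order']
```

The caller still sees a plain object; only the creation site changes, which is why hiding the factory inside a dependency-injection container removes the friction.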
Why don't you call this AOP? If you know about AOP, you can see where it is similar to and where it differs from the AOP solutions out there. The handlers fulfill the role of 'advice', the matching rules replace 'pointcut languages', and the 'weaving' is black-box, based on interception. However, the reality is that most of our customers have not heard of AOP or AOSD. Please put yourself in their shoes for a second. We wanted a name that reflects the use and benefit the block brings, rather than an unfamiliar term that only a few understand. We also wanted to avoid implicit expectations on the design that might be inherited from AO solutions. However, if you think this is a dumb move, please holler on our CodePlex community.
OK, if I use this now, what is the road moving forward - how will other pieces of the .NET platform/languages support this scenario? This is where I remind you that p&p is not the CLR team, and there is nothing specific announced to date targeting this scenario as part of the .NET platform. One could imagine using C# partial methods for some cases where you need to call out to PIAB handlers in-line, and maybe to simplify code generation of wrapper classes. Dynamic languages provide some runtime manipulation of members - e.g. replacing a member implementation with a wrapping implementation, or augmenting the implementation with the code that today resides in the handlers. All of these would just be enabling technologies to simplify the creation of PIAB-like infrastructure pieces.
I am using an AOP solution today. Do I care about this? Great! You might find in the provided handlers some useful sample code that can help you plug EntLib blocks into your own infrastructure. Keep using what you have if you are happy with it! Also, if you have built infrastructure pieces yourself for similar purposes, check our EULA and if all is well feel free to reuse pieces, or feel free to post handlers you may have built adapted for the PIAB, or other code, in Codeplex.
We hope you will find this to be a useful addition to your toolkit. Sorry for not announcing this earlier, but we did start development 3 weeks ago and we weren't sure this would make it in until this week.
Thanks a lot to Christa Schwanninger (Siemens Corporate Technology) for her feedback and refinements to this blog post.
Daniel wrote a nice little post about status of building Software Factories today - using DSLs to express intent around common models and Guidance Automation Extensions (GAX) to automate common use cases. See his perspective here
As the team that has shipped most of the Software Factories from Microsoft, patterns & practices continuously tries to help you build better applications with less effort by applying these technologies. Once the rubber meets the road, we all learn, and we build infrastructure to support better experiences today while at the same time providing feedback to the VS.NET folks building the next versions, so they can add features that will make p&p's and your lives easier down the road. EntLib Config and GAT/GAX were examples of such infrastructure.
But where is it going? We think there are many opportunities in the following areas:
1. Integration of multiple models - obviously you can express parts of your application with models, but the moment you start adding different domains - especially crosscutting ones, think authorization; or models at different levels of abstraction - you require dependencies, relationships and interoperability/integration of models. A good infrastructure for model integration would allow you to juxtapose concepts around workflow and manageability events, for example, without any unnecessary dependencies that would prevent you from using that same manageability model around a totally different concept like web service calls or business data state transitions.
2. Bringing in the team to the factory - Right now the factory experience is a lonely one - you build a thing, move on. Artifacts like models and code survive, but beyond this sharing there is no coordination to the collaboration. Just like GAX can be used to automate 'atomic' use cases that are done by one dude/dudette sitting at Visual Studio, we think that something akin to small team workflows can be used to express outcome-centric activities that coordinate many smaller actions. These are not end-to-end processes, rather small pieces of activities that can't be done all at once - like setting up an environment for mobile app testing, or building/reviewing/checklisting against a threats & countermeasures model.
3. Specializing aspects of the software engineering process - we have ways of expressing with VSTS the process (work item templates, content, etc.) that you use for building stuff. What would happen if the process could be specialized in the right areas to help you build not just stuff, but a special shape of stuff? Are there templates for requirements, bugs, development tasks etc. that would be more valuable to you if they were specially built to help teams build (for example) web services?
Of course, a key concern I hear is - wow, with all this stuff, won't things just become heavier and more context-dependent, to the point where I won't be able to use anything just because I'm not living in your little world? - and I understand it. My hope is that we can work on our packaging (which isn't precisely a beauty right now!) so that you can take the parts that you need, yet at the same time have a great experience with the whole. And all of what I blogged about is just the work needed to give your team a guided experience when building solutions. Every team in p&p building a factory has to strike a balance between distilling the proven-practice guidance itself, and expressing that guidance in runtime, tooling and content. I hope that, as we continue to roll out more of these, you out there in the community will keep us honest and help us make the right tradeoffs.
We just released the Web Client Software Factory to Codeplex. Go get it.
I thoroughly enjoyed this project. The team rocked, we had to deal with new technologies and we did a large set of deliverables targeting web apps specifically, which I felt we were slightly behind on compared to other technical areas.
And we obsoleted UIP (finally) with the new PageFlow application block, which uses Workflow Foundation by default to let you design flow-rich use cases. There's guidance for UIP users to simplify migration, including a re-do of the old UIP quickstart using the new page flow block (Michael Stuart - are you listening?)
And we have a first cut at composite web apps in the CWAB (Composite Web Application Block), putting in a container model (courtesy of Object Builder) that allows you to do dependency injection to achieve patterns like MVP and MVC, as well as simplify dealing with state and so on. It starts by covering the scenarios of composing modules that make up a site - not composing across sites - and lets us start building a foundation for future scenarios you require more guidance on.
And we have a full reference app that shows all this together.
And documentation explaining the essential design concepts and activities that come together building an app like it.
And Visual Studio extensions that simplify the job of creating apps imitating the design of the RI using all of the above.
There you go. Enjoy! Now we will listen to your feedback and observe usage patterns to see what opportunities we should tackle in the next release... which may come out sooner than you think. There's areas where we would like to push the ball forward and areas where we may need to cover gaps. Let us know.
Please note we shipped it because it was ready - not because we ran out of stickies for iteration planning... :)
I like this game - You need to reveal five things about me that most people don’t know, and tag 5 more folks. I'm going to add my own spin and add 5 pictures.
Here we go:
- I am from Argentina, with a Polish family and went to a Scottish school. I played the bagpipes. In a kilt. It is there that..
- ...I was able to see my first programs run about a year after I wrote them, once I had access to a computer - I had written them on paper. But now I use computers (sometimes) and
- ... I am a private pilot, and love flying around the US pacific northwest.
- ...but a lousy snowboarder, and broke my spine in the midst of entlib v1. I somehow used the infinitesimal chance I had of recovering well, so I’m fine. When I think things are bad, I wiggle my toes and the smile returns.
- I believe in spending time making this a better place and that individuals can change the world, which has led me to multiple engagements with environmental, education and humanitarian scenarios. That is a set of tough problems. I spend time on that when I can. For example, recently at Strong Angel III
Here are my 5 pictures -
So who is "it" now?
You don't need to post pictures...that's just me
Have fun! Enjoy the EntLib 3 CTP and the upcoming Web Client Software Factory - with an all-new "Page Flow" application block that replaces UIP for web apps and uses WF as the default workflow engine.
We have just released our first community drop of the Web Client Software Factory. www.msdn.microsoft.com/webclientfactory
As we continue working on the project, we expect to provide:
- Best practices for web app development on .NET 2.0
- Guidance around pageflow and eventually migration assistance from UIP to the new guidance
- Some guidance on AJAX
- Integration of the Security & Perf & Scale best practices
It is also one of the first projects we'll try to keep on CodePlex instead of GotDotNet, so please provide feedback on the site experience as well.
1. www.cabpedia.com - started by John Socha-Leialoha. John worked with us on the Mobile Client Software Factory and is currently doing some prototype work for a requirements gathering tool, and has set up this excellent community wiki so that folks can update and contribute content. The body of knowledge in there is already impressive!
2. An overview of CAB for those using it in the smart client software factory.... http://msdn.microsoft.com/library/en-us/dnpag2/html/SCSFOverviewOfCompositeUIAppBlock.asp
Just returned from Strong Angel III - a week-long exercise and demonstration in the use of ICT (Information and Communication Technology) for humanitarian and first-response missions. I was there to learn many things, and one of the things I brought back is a set of requirements that systems must fulfill in order to work in complex, emotionally vexing environments. I believe the solutions that work in these constrained and dynamic situations contain certain elements that are indicative of the enterprise needs of tomorrow... so I try to pay attention.
It ended up being a mixture of 'jazz and lego blocks', as Nigel Snoad would say: collaboration with the Microsoft team around SSE (Simple Sharing Extensions), building a model-driven dynamic forms generator for mobile data-collection apps around an SSE distribution model, and writing glue code for demonstrations of interoperability with Google Earth and Sahana, an open-source disaster management system.
Here are my key takeaways that are relevant to this blog's audience. The first one is a vent about the utter complexity our users have to deal with when using most software being built today, regardless of the platform it is built on.
My observation is: the tools, languages, etc. we use to build enterprise applications are getting better - however, we are using them to build quite disempowering systems. Essentially, we are getting better at building the same old stuff. I believe a quantum leap in application architecture is needed if we want to build systems that empower the users.
Here are some characteristics that I would expect from the next generation of applications:
- User-driven UI customization, integration and mashups. We took a small step with CAB and other composable architectures, but that accounts for a small percentage of what is needed. Imagine being able to take data flows, drive UIs out of them, and tie them into workflows, on the same client app you use, with no code.
- Interoperability over the use of standard protocols and to the maximum extent possible standard data schemas over the wire. Across platforms.
- Greater usability.
- No need to predict the data flow topology (as long as you can govern it and secure it). i.e. the same app can be configured to work in client-server, 3-tier, peer to peer, and other modes. The decentralization of state management and data flow could even be followed by a decentralization of manageability, and security features.
I expect to see some of these learnings incorporated into p&p deliverables over time. I'll see where I can post my code for the forms and the SSE library so folks can use it. It is quite primitive, but it is a start.
If you want to read more about Strong Angel III you can see news articles all over the web. Here is one good for techies: http://blogs.zdnet.com/BTL/?p=3545
This week most of the Mobile Client Software Factory dev team is at MEDC in Las Vegas, presenting the cool stuff we've built over the last few months. (You guys should be proud!)
It includes a loosely coupled collection of application blocks, some guidance packages, and documentation to help you build an end-to-end mobile application with guidance around some critical areas.
The guidance we've put together so far (since around January?) includes:
- A Mobile version of CAB
- A Connection Management app block to detect connection state, and a block to help you manage configuration for application endpoints for different networks
- A mobile DAAB with a super-simple DB data access mapping helper
- A Disconnected Service Agent Block + Guidance Package to help you build apps that work when disconnected and that roam networks; queuing requests; configuring web service proxies appropriately and all that
- A SQL Server replication block that simplifies how you create and manage data replication to your device; if you choose to use this as a way of getting reference data into the app
- An Orientation Aware control that allows you to design different layouts for different screen sizes and orientations and form factors (square, rectangular, etc) directly in the VS.NET designer
- A Unit Test Runner that allows you to write tests for the full framework using VSTS tools but then runs them in the emulator without changing your test code
- A Reference Implementation app that shows you a simple application using all these areas together illustrating good design patterns for your business logic
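To give a flavor of what the Disconnected Service Agent idea is about - this is not the actual block's API, just a hypothetical Python sketch of the underlying pattern - the core is a queue that parks requests while you're offline and replays them when connectivity returns:

```python
# Hypothetical sketch of the disconnected-service-agent pattern:
# queue requests while offline, replay them when the connection returns.
# All names here are invented for illustration, not the factory's API.
from collections import deque

class ServiceAgent:
    def __init__(self, send):
        self.send = send          # callable performing the real service call
        self.online = False
        self.pending = deque()

    def call(self, request):
        if self.online:
            self.send(request)
        else:
            self.pending.append(request)   # park it until we reconnect

    def connection_restored(self):
        self.online = True
        while self.pending:                # drain the queue in order
            self.send(self.pending.popleft())

delivered = []
agent = ServiceAgent(delivered.append)
agent.call("order-1")          # offline: queued
agent.call("order-2")          # offline: queued
agent.connection_restored()    # both requests replayed, in order
agent.call("order-3")          # online: sent immediately
```

The real block layers on top of this the connection detection, proxy configuration and retry policies mentioned above.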
In the upcoming 2 months we plan to formalize the Reference Implementation design patterns into Guidance Packages and, depending on how well we do on velocity, add a couple more domains (self-updating mobile apps could be one such area, or PIN-based authentication and key storage). You can stay up to date by getting weekly drops from our community (which include code, docs and presentations about the topic).
We have a great team, and an awesome Expert Advisors board including people from large enterprises, ISVs, MVPs, and OpenNetCF. We will disclose the names of the advisors, as we get their permission :)
This has also been a great experiment for Microsoft in which a product team is building a Microsoft SKU in collaboration with p&p. (You can read about the precursor to this here.) The collaboration is fun, as we learn how to integrate a p&p team doing XP on weekly iterations on the west coast of the US with a Dynamics team doing monthly SCRUMs in Denmark. It's involved some travelling to keep things together, but we have been doing great by attending each other's iteration planning, having a shared team blog, and capturing tidbits of info in a shared wiki. The Dynamics team is building a specific Composite UI shell for Mobile apps, and a way to have model-driven orchestration of WorkItems as a way of allowing extensibility and customization of the apps they will build. Pretty darn cool!
Please go and join the community to see the p&p project; and let us know whether it will help you build your next mobile app!
Great news and great approach - Lego opens up its Mindstorms NXT toolkit, and their board of 'Expert Advisors' will be able to share their experiences with the community. It must be an exciting day for the NXT team - if I were to draw a parallel to patterns & practices process, it is the day when you let your advisors tell your future users what they really think - no PR tweaks, fidgety marketing folks or controlled messages in the way.
It is exciting to see the approach of involving users throughout product development being used in a product I care about. Here's the link:
To quote (highlights mine):
“Most often, innovation comes from the core community of users. Our ongoing commitment to enabling our fan base to personalize and enhance their MINDSTORMS experience has reached a new level with our decision to release the firmware for the NXT brick as open source,” said Søren Lund, director of LEGO MINDSTORMS. “When we launched the legacy MINDSTORMS platform in 1998, the community found ways to do these things on their own, and we were faced with the question of whether to allow it, which we decided to embrace and encourage. Now, given the strong user base and versatility and power of the NXT platform, the right to hack is a ‘no brainer.’ We’re excited to see how our open approach will push new boundaries of robotic development and are eager for all enthusiasts to share their creations with the community.”
Today also marks the first time that the original 14 MINDSTORMS Users Panel members and 100 recruited members of the MINDSTORMS Developer Program, an exclusive group of enthusiasts charged with helping guide the product development process, can publicly share their initial experiences and experiments with the NXT platform. During a four-month process, participants have had access to a secure Web forum where they can communicate with one another, learn more about the project, debate issues, create solutions and support the ultimate launch of LEGO MINDSTORMS NXT in August. "
Let’s say you are talking about a program with someone. Imagine stakes are somewhat high. Suddenly one of these two phrases gets thrown out:
1. "It could be faster"
In any but the healthiest of teams, these are great words to put folks on the defensive and stop rational conversations about code. Suddenly the neurons spin (?) and we wonder - is there something we missed? Is my code wrong? Did I screw it up all over the place? Does this guy/gal have some privileged piece of information? When someone tells you "It could be faster" - what's your typical reaction…?
2. "It's for security reasons"
That's another one. And it's funny I even dare to bring this up, being from Microsoft and all that. Who would be brave enough to enter a debate about code being written for such an honorable purpose? Who would be enough of a security expert to dare question the existence of code that is there to protect us from evil, shame and regret? Do you want your grandchildren to remember you for being the dude/dudette that left that gaping security hole open? That was exploited by that worm that took civilization back to the 19th century in one fell swoop? I know I don't want to be associated with an accidental return to hunter/gatherer lifestyle... so we block the issue in our heads and move on.
What's going on here?
These are just two examples of what I call 'invoking ghosts' in discussions about software. We bring up a topic that is vague, crosscutting, obscure, to 1) provide an excuse to do work or not do work, independently of how sensible it is to do it or not 2) win a debate against The Other Idea 3) raise FUD levels and gain control of the situation or reduce others' credibility 4) basically steer the conversation away from the code and into people manipulation.
Now, don't get me wrong. Perf and security are important (Doh! I sound like an idiot... I guess you can quote me on that... no, the first phrase). Things can be faster, and you could be doing things for real security reasons - but how can you tell? What do you do if someone invokes these topics as ghosts in a conversation, and how do you steer it back into a productive place?
Fortunately, I think there are good patterns and tools to communicate around these issues. A huge one is to be transparent about the goals and unknowns. This basically puts on the table 1. What is it that you are trying to achieve, and 2. An acknowledgment that the understanding of the problem and the shape of the solution are still evolving.
What has helped me is to get to a shared (implicit or explicit) commitment to an approach:
1) Establish the needs & the priorities
2) Analyze current situation and set a goal for an improvement
3) Work on improving the system, checkpoint against goals
4) do it all again
This actually sounds quite obvious, but it has helped me bring sanity to discussions about quality attributes such as performance & scalability, security, manageability, maintainability, and other areas where 'ghosts' tend to live. Quality attributes tend to be particularly good 'haunted houses' because of their crosscutting, distributed nature, and because we tend to have less than ideal tools to think about tests of success.
If you look at the 4 steps above the first one is about establishing needs - figuring out your tests, if you will. There has been a lot of work for many years around helping people figure out their tests/needs for quality attributes.
For security you have Threat Modeling, for manageability you have Health Modeling, for performance you have Perf Modeling, and I hope good frames come up for other quality attributes as well. But what I particularly like about these models is that they aren't models about the solution, but models for tests (and steps 2, 3 & 4 are just process guidance to help us make them pass…the “green and refactor” of the “red, green, refactor” cycle).
In my opinion the focus on tests first for these complex areas makes these approaches and tools intrinsically suitable for expressing partial, incomplete or overlapping goals (which tends to happen whenever humans are involved..), and helping people iteratively collaborate around defining, verifying and refining them.
By starting with tests, we support an inductive approach to building the solution, akin to what you do with TDD. Now maybe we still don't have good tools to automatically test our systems based on all these testing models, but I hope that's just a matter of time (base DSL for tests that can be specialized for specific domains, anyone?). Eventually, these tools could become part of the suite of tests that drive the creation of the solution, treated as acceptance tests tied to business needs, and maybe even verifiable against systems at runtime.
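To make "starting with tests" concrete for a quality attribute, here's a hand-rolled sketch (Python, with invented names and an invented 200 ms budget - purely an example) of what step 1's output can look like: the need written down as an executable acceptance test that steps 2 through 4 keep checking against:

```python
# Sketch: a quality-attribute goal expressed as an executable test.
# The budget figure and the operation are made up for illustration.
import time

PERF_BUDGET_SECONDS = 0.2   # the agreed, written-down goal (step 1)

def lookup_customer(customer_id, db):
    # stand-in for the operation whose speed is under discussion
    return db.get(customer_id)

def test_lookup_meets_budget():
    db = {i: f"customer-{i}" for i in range(10_000)}
    start = time.perf_counter()
    result = lookup_customer(42, db)
    elapsed = time.perf_counter() - start
    assert result == "customer-42"
    assert elapsed < PERF_BUDGET_SECONDS, f"took {elapsed:.3f}s"

test_lookup_meets_budget()
```

Once this exists, "it could be faster" has a referee: either the test passes against the agreed goal, or someone proposes changing the goal - and both are rational conversations.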
…In the meantime it takes toil, tinkering, and a dabbling with mental experiments. But most of all I think it takes clear communication between people. Here’s some resources around these topics that I’ve used to help me scare the ghosts away from the conversations, and keep them productive, collaborative, and smart:
What makes a good threat model – by J.D. Meier
MSDN page with lots of threat modeling material (the generic portal)
Rico Mariani’s blog – entry with the foreword of the p&p perf & scale guide, with a quote of Donald Knuth quoting Tony Hoare saying ‘premature optimization is the root of all evil’
Health Modeling - whitepaper on the Windows Server. They’ve done us all the favor of dumping the health model of a printer in XML. Whee! (But can you write a test runner for it?)
(Whither my simplicity model paper? Related to maintainability of code? I digress)
Kudos: It’s clear that J.D. has done a lot of work in this area. He’s the best I know at distilling these areas in significant, actionable, and insightful ways.
(OK, My cereal is going something like "Ed - W-T-F-is-this-fluffy-post? Why don't you tell us about dependency injection, aspects, executable models, Object Builder, using XAML to define types dynamically, gps, CAB running in mobile devices, workflow and UI or concurrency or whatever?" A: "I believe writing software is a social activity. I believe that if we learn to communicate, improve how we reason, become more self-aware, and question assumptions in more effective ways we will plain simply write much better software and enjoy it more along the way. Plus - it's my blog. And my doctor told me I shouldn't talk to cereal.)
Often I get the questions 'what is an application block?' and 'why that name?'. We have lots of articles, webcasts etc. describing them, how we've built them, and how to extend them or build your own. Then people ask 'what is the difference between a block and a library or a framework?' and I remind them that we use the Application Block name to reflect a set of expectations around packaging, design, and how they could be used.
Using the word “blocks” was done to evoke a set of patterns about how p&p -and hopefully eventually more of MS - chooses to distribute reusable chunks of bits, without constraining them too much a priori. The approach has allowed our team, communities and others to participate in their evolution rather than try to invent something different altogether. There is an existing taxonomy of chunks of reusable code, and using the word "Blocks" was not done to try to introduce a new item in this taxonomy (that would be arrogant) or redefine existing terms.
There's plenty that's been written about code libraries (things you can use), frameworks (things that affect the architecture of your code a bit more), app containers and the likes, so I won't go there. I'll just say why I didn't think this would be a good distinction to revolve around when you are creating names for a new product line.
I believe good names tend to be:
- Unspecific enough that you can accept variability of the named thing.
- Metaphorical. The concept of the name evokes visual, physical/physiological and metabolic actions.
- Associative and work by metonymy, just like 'White House' represents a government, a president, and a big dome, and not a building.
I believe a good name does not need to reflect every detail, or every characteristic, or the complete position in the taxonomy, of the thing it describes. For engineers it causes some heartburn (to realize that better names are not necessarily better for classification) but it's easy to embrace once you understand why.
(If you are interested in this you can read Lakoff or cognitive science papers from the last couple of decades. The one thing that strikes me the most is the impact of physical evolution on the brain's workings, which is one of those conclusions that just strikes me as insightful. What dictates the shape of the box we tend to think in? Thanks Ward for recommending this author)
Back to code. You can read plenty about the distinctions most experts in the community draw between libraries and frameworks. I will oversimplify, but in the end libraries are things I can use, and frameworks tend to mess with the architecture and design of my own code. Maybe containers implementing DI clarify the edges a little, by allowing an otherwise hefty-framework author to keep better boundaries between what you use and what affects you.
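If it helps, here's a toy sketch (Python, purely illustrative) of that oversimplified distinction - with a library your code owns the control flow and calls in; with a framework the flow is inverted and it calls your code back:

```python
# Library style: you call it, your code drives.
def slugify(title):
    """A utility you invoke whenever you want."""
    return title.lower().replace(" ", "-")

page = slugify("Hello World")

# Framework style: it calls you - inversion of control.
class MiniFramework:
    def __init__(self):
        self.handlers = {}
    def route(self, name, handler):
        self.handlers[name] = handler   # you register; it decides when
    def run(self, name):
        return self.handlers[name]()    # the framework drives

app = MiniFramework()
app.route("home", lambda: "welcome")    # your code fits its shape
result = app.run("home")
```

The same blurriness applies here too: the moment `MiniFramework` dictates how your handlers are structured, it starts feeling like a framework even if it's only a dozen lines.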
The one difficulty I find when trying to classify blocks in terms of "libraries" or "frameworks" is that folks don't perceive application architecture in objective ways. For example, Enterprise Library is IMO a great example of "library" chunks. However, if you ask around, many folks consider EntLib to 'deeply affect the architecture of their app'. One man's library is another man's framework. On the other hand, CAB has lots of pieces which are canonical "framework" chunks and a gentle sprinkling of "library" pieces. It has both gentle and forceful ways of suggesting a design for some areas of your code. I do think we could have packaged CAB up a bit better by breaking out specific pieces, but we all have constraints.
So to summarize them, the name "Application Block" does not reflect a precise place in the taxonomy of reuse, rather, a loose metaphor that encompasses a family of assets, and how you use them. Using blocks should be a matter of choice, they work together, they can be piled up, you can use many or a few, they will probably have commonality internally and externally as dictated by efficiency and usability, and they should be fun.
Disclaimer: JD Meier suggested I get to blogging and, after some discussion of why I haven't been doing this, I decided to create a section for 'off the deep end' posts. The description is a reference to Gödel: Thoughts, Ramblings, and content not computable through predicate calculus.
My hope is that I can put things in this section with an implicit disclaimer.
Nuff said already! Just go get it!
It includes the following favorites:
- Caching Application Block
- Cryptography Application Block
- Data Access Application Block
- Exception Handling Application Block
- Logging Application Block
- Security Application Block
Migration & Compatibility: As you know, in p&p we can't guarantee 100% compatibility of APIs between releases. But we will strive to change as little as possible, and many blocks have the same API as in the .NET 1.1 releases. Specifically, we try to have the least impact on APIs used by business logic developers first, then the least impact on extensions, and finally we won't worry about changing deep hidden internals. This release of EntLib includes migration documentation for moving from the previous version of a block to the new one, where there were any changes.
Internals: In addition, EntLib includes ObjectBuilder, which is the dependency injection foundation for how the Application Blocks wire up internally and get affected by config and instrumentation, and is the container architecture on which other p&p deliverables (such as the Composite UI Application Block) are based. The dependency injection is an important characteristic of the internals. You don't care about this if all you are doing is using the blocks as-is. You care if you are building your own blocks, or extending our blocks in involved ways. It's good because it assists an internal design that is decoupled, which allows us/you to version configuration, instrumentation, and all blocks independently. And that is good because improvements, updates and fixes have less widespread impact - which shortens the dev, test and release time; and that's good for you. It does mean that the internals aren't easy to just look at and follow, because the 'red thread' that ties it all together is spread across more places. I know many people miss the internal simplicity of the original application blocks from many years ago. I do want to share the observation that the tradeoff has accelerated our team - which means more stuff for you - and allowed more extensibility and customization by partners, companies and the community.
THANKS! I wanted to thank all the partners, customers and community members who have been relentless in 'keeping us honest' with their feedback from applying early bits and the finished blocks in their apps. Please keep doing so - you are the reason why we come to work every day!
(Sorry to my team for the delay in getting this out - I realized I was 'saving' and not 'posting' the entry)
I'm glad to see that David's blog got reactivated with the release of CAB. David Hill was a great supporter throughout the CAB project - he did some key implementations at customer accounts that fed a lot of 'been there, done that' experience, I think he essentially picked the name, he was a general instigator across Microsoft of the importance of the scenario, and a reviewer of our final application block. David is now working on the Windows Forms team on how these concepts can make it into the .NET platform & VS.NET... and he's active in the CAB community, so seek him out and don't hesitate to tell him about your experiences in the space!
And so it releases. This has been a great project and team to work with. I feel privileged to have been able to work with all the folks who made it possible, and to have made such a step forward implementing a close customer connection throughout the project.
So what is CAB? It's an application block that provides common infrastructure components and a programming model for smart client applications that are composed out of views and business logic that might come from different teams or need to evolve independently. It has been architected around a convergence of many patterns observed in large, successful customer applications, and around where the platform and tools are going in this space. It includes:
- WorkItems: a programming abstraction to simplify encapsulating use cases into 'contexts' that have shared state and orchestrating logic and nested recursively
- Plug-in infrastructure: providers for enumerating available modules and loading them into the environment, and orchestration of the application bootstrap
- Shared shell abstractions: a set of interfaces that allow logic to 'share a shell' and to facilitate separation of concerns between UI-intensive shell development and business logic development
- Workspaces - a set of interfaces that specify how to show controls in a given area or style - such as portal, tabbed, MDI windows, etc.
- UI Extension sites - named 'slots' in a shell where you want to add controls such as menus or status bar panes
- Commands - a common way of hooking up multiple UI events to a specific callback inside your application
- Composition infrastructure that helps objects find each other and communicate - such as the ability to share state, auto-wire-up of pub-sub of events
- A service locator/lifetime container/dependency injection++ foundation
- Built on ObjectBuilder - which allows you to extend the architecture specifying what it means to 'contextualize' an object.
- A reflective architecture that you can explore to see the current state of the application, and a visualization architecture that allows architects and troubleshooters to have views that exploit this reflective nature and can show you the internals of the application structure and how it's running, live (see the sample visualization in a separate download).
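To give a feel for the composition model - this is a hypothetical Python sketch, not CAB's actual API - the heart of it is a shared context where modules written by different teams find services and talk via pub-sub without ever referencing each other:

```python
# Illustrative sketch (not CAB code): a WorkItem-like context offering
# a shared services collection plus pub-sub events, so independently
# built modules compose without hard references to one another.
class WorkItem:
    def __init__(self):
        self.services = {}        # shared state / service locator
        self.subscribers = {}     # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(payload)     # auto-wired event delivery

work_item = WorkItem()
work_item.services["status"] = []

# "UI module": displays status updates, knows nothing about publishers.
def status_pane(message):
    work_item.services["status"].append(message)

work_item.subscribe("customer-selected", status_pane)

# "Business module": raises the event, knows nothing about the UI.
work_item.publish("customer-selected", "Contoso Ltd")
```

In the real block the wiring is declarative (attributes plus the ObjectBuilder container) rather than these explicit calls, but the decoupling idea is the same.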
This is a 'large' app block - you will probably want to do the hands-on labs and see the webcasts to understand the key concepts in depth. But you can get going with little effort by looking at the quickstarts and included how-to documentation.
When the time comes, we'll be working on showing how WinFX technologies such as Windows Workflow and WPF (Avalon) can be used in CAB applications. As always, the goal of these p&p assets is to help you build applications in ways that make the best use of the existing platform and set you up for simplified adoption of the next generation.
I hope you enjoy it. To all the customers who participated in our expert advisory board, on-site visits, workshops, webcasts and gotDotNet community - THANK YOU.
//7-12 Updated to add link to Visualization download
...and says there's "a new PAG" (patterns & practices) group. It's quite strange - nothing is new but I can't contradict him. I guess we are good at reinventing ourselves as a team...
Year 1 we did infrastructure – configure your switch and firewall and storage and stuff like that in what is now MSA
Year 2 I started .NET guidance with some books and articles and started working on the app blocks DAAB 1.0, anyone?
Year 3 Saw the next big wave of app blocks DAAB, UIP, CMAB, Updater, etc – and things like Shadowfax.. it was a year full of hard work getting the group's charter solidified in the company
Year 4 was about integration, growth and enabling others more – the patternshare wiki, enterprise library, GAT tooling...
Year 5 is about consolidating tools & frameworks around scenarios and exploring p&p as rapid customer-centric engineering capability for MS, and becoming a hub of patterns and Agile goodness
What will year 6 be? what do you think? Models? WinFX? New scenarios? Working with community differently? Content as a service and VS.NET as the client?
For more of a candid perspective on the history and philosophy of patterns & practices you can see patterns & practices Inside Out on Channel9 .. although very few people care about this article, I'm glad we wrote it. It gives me a link to point to in introspective blog posts like these.
Quickie blog post, just to get jamming:
- EntLib internals webcast. Yes, EntLib and CAB ship with ObjectBuilder, our common factory & dependency injection infrastructure ++. Answering the FAQs: Yes, we have one. Yes, it's this one. Yes, you can build something like Spring with it, if you wanted to. Yes, we want your feedback and we want you to do cool stuff with it. And yes, I am hoping that folks implement a creation strategy that does transparent proxying - see my next post for why. Interested in ObjectBuilder and how we externalized the config interpretation and the wiring to instrumentation out of each app block? See this:
So what's this Configuration Source thing meant to be? Why do you have so many kinds of factories? How can I reuse the instrumentation capabilities in my own app? What's different in the new Configuration Designtime?
Get the answers to these and many more questions at the Enterprise Library for .NET Framework 2.0: Core Architecture webcast. It's on this coming Thursday, December 8th at 10:00am US Pacific Time / 18:00 UTC. Register now to view the webcast live (this is best if you want to ask us questions), or as always it will be available on demand from the next day
By the way, shipping EntLib on ObjectBuilder was not an easy feat, congrats to the team for pulling it off and consolidating the EntLib and CAB factory and dependency injection approach...at one point in time I started wondering..."what have I done! Include a new component as part of the foundation of something that needs to ship ASAP for what some think is a hazy fluffy ivory-tower architectural benefit! I'll be lynched!". But our p&p devs + testers rock. (Guys - if you are reading this: If you don't pull these feats, nobody can. However, if I ever have to ship a full OS, I'll keep the learning in mind).
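For the curious, here's a rough sketch of the ObjectBuilder idea (Python, invented names - not its real API): object creation runs through a chain of strategies, each of which can layer in behavior such as instantiation, dependency wire-up, or instrumentation before the object comes out the other end:

```python
# Illustrative sketch of strategy-chain object creation, the core idea
# behind an ObjectBuilder-style container. Names are hypothetical.
class BuildContext:
    """Carries the work-in-progress object through the strategy chain."""
    def __init__(self, cls):
        self.cls = cls
        self.instance = None
        self.log = []

def creation_strategy(ctx):
    ctx.instance = ctx.cls()               # instantiate the type
    ctx.log.append("created")

def injection_strategy(ctx):
    if hasattr(ctx.instance, "logger"):    # wire up a declared dependency
        ctx.instance.logger = ctx.log.append
    ctx.log.append("injected")

class Builder:
    def __init__(self, strategies):
        self.strategies = strategies
    def build(self, cls):
        ctx = BuildContext(cls)
        for strategy in self.strategies:   # run the chain in order
            strategy(ctx)
        return ctx.instance, ctx.log

class Service:
    logger = None                          # dependency slot to fill

builder = Builder([creation_strategy, injection_strategy])
service, log = builder.build(Service)
```

Because each step is a pluggable strategy, adding instrumentation (or the transparent proxying I keep lobbying for) is just one more entry in the chain, without touching the others.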
Aspect Oriented Programming Workshop We recently held a workshop around AOP here at MS. I was glad I finally got to meet Prof. Mehmet Aksit in person (from Twente, of Composition Filters fame). I hoped I would also see Christa Schwanninger from Siemens but she's having a baby (congrats Christa & Michael!). My conclusion? To better influence Microsoft, the AOSD community should 'consolidate efforts' and express the commonalities that exist in approaches. I mean, there ain't that many: e.g. at runtime you'll find 3 main flavors - assembly rewriting, using dynamic languages, or using transparent proxies. Only so many pointcut expressions cover a large % of advice types, and so on. I think it would be good to have a pattern language around AOSD. The AOP Alliance has most of the material needed, but sometimes I feel the discussions around problem space and solution space get mixed; or that the academic efforts focus on the uniqueness of each solution (which they should) instead of the commonality (which is a lousy strategy for a researcher - to write about what everyone understands!). Mehmet was going to try to get an AO-on-.NET wiki started somewhere to get this dialogue around the commons going; when I find out that it has materialized, I'll post.
In the meantime, if anyone from the AO community wants to define a factory for proxying, please consider doing it via an ObjectBuilder strategy. It will help others use your solution on stuff we've built, e.g. CAB. And oh, if only I got a dollar for every time I meet a customer who could have benefited from richer interception on .NET. If you want it too, ping a comment here; I'll print it and nail it to the doors of building 42.
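To show what I mean by "richer interception" - again a hypothetical Python sketch, not .NET code - a transparent proxy lets you run advice around any method call without the caller or the target knowing:

```python
# Illustrative sketch: a transparent proxy running before/after advice
# around every method call - roughly what a proxying creation strategy
# would hand back from the container instead of the raw object.
class LoggingProxy:
    def __init__(self, target, trace):
        self._target = target
        self._trace = trace

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr                        # pass plain data through
        def intercepted(*args, **kwargs):
            self._trace.append(f"before {name}")   # before-advice
            result = attr(*args, **kwargs)
            self._trace.append(f"after {name}")    # after-advice
            return result
        return intercepted

class Calculator:
    def add(self, a, b):
        return a + b

trace = []
calc = LoggingProxy(Calculator(), trace)   # caller sees the same shape
total = calc.add(2, 3)                     # advice fires transparently
```

The advice here is just logging, but the same hook point is where you'd weave in security checks, transactions, or instrumentation - the crosscutting concerns AOSD is about.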
No, I don't have MS's official stand on AOSD, so please don't ask me - there's an old obscure BillG second-hand quote from an Australian interview where he allegedly said MS would be embracing it. Definitely the platform is embracing more and more requirements of separation of concerns throughout, which is good. Indigo looks great in this regard. I'd like to see more of a unified approach one day. But a lot of questions are unresolved yet (when you think about trust models around assemblies, for example, who is allowed to advise what?). So I don't know what the end solution would look like. Especially when I think about crosscutting concerns and concurrency, hm. We'll see.
The infamous TDD article gets pulled Unless you have been living in a tupperware, you know there was a fallout over some article about TDD deep in the tree of MSDN VSTS content. Yes, it got removed. Here's the link, anyways. It described TDD in terms of some steps that in my opinion were just.. plain.. wrong... and it irked folks in the TDD community. Anyways, here are my key points, late in the game:
Sorry Jimmy! I'm so late to this flame party it's not even funny... Jimmy Nilsson pings me for comment, I start writing a post in this web form and my machine runs out of battery... there goes my IE with all my text. It would have been a good post. To your question... no, what the article described is not what we do when we say we do TDD in patterns & practices. I respected Randy Miller's reaction a lot, even though he didn't get to blog about it. And I truly believe in TDD as a design technique.
To MS folks...how about some internal review cycles...there's agile and TDD lists... one... email... away.
I'm glad the community responded so vigorously. Kudos to all who by nature or effort were able to articulate a response that did not evoke a Tourette syndrome diagnosis.
I would also like to see those passionate about the subject take advantage of the situation and engage with MS in a proactive way. I mean, it's good that agile approaches will be getting more visibility. As the word gets spread, it may get mangled. That's just the 2nd law of thermodynamics. MS and its tools channel is a huge megaphone for the agile message. I know, many people ask, "who needs a megaphone in this era of blogs & social web 2.0 long-tail software?" Well, not everyone is in-the-know of what Michael Feathers or Brian Button are saying, or has their RSS pointing at the community gurus... so influencing a vendor like MS to help spread the word in a way the respected leaders think is right not only will help the overall industry, but also creates a market for those engaging in it. Consider this personal estimate: probably more Microsoft customer-devs relate OOP advice to Deborah Kurata than to the GoF. So what about ganging up on MS and saying "we will publish books, training, etc. and furnish you at a fee with agile-community-approved guidance that the masses can take a shot at without having to go on a 4-month spiritual walkabout with Kent Beck on some rainy hill in Oregon"? I think it could be a win.
What's the news? CAB shipping, EntLib shipping, and we are off to a great start with patterns & practices in 2006, starting to plan the wave of stuff for WinFX+ timeframe technologies... Avalon... Indigo... Workflow... LINQ. This is my 5th year in this group. No one year is ever similar to the previous one. I hope you find p&p as helpful as I find it enjoyable to work here.
Namasté! no, wait, that's Soma, not me. Thank you!