The good news is that I'm going to be blogging again. The not-exactly-bad news is that I'll be blogging over at http://mikefitzmaurice.wordpress.com. Actually, that's good news. I'll be blogging elsewhere because in 90 minutes, I'll no longer be a Microsoft employee.
I'm heading to Nintex, a great SharePoint partner, as their Vice President of Product Technology. Nintex is in the business of creating solutions that augment SharePoint technology. They have a great workflow solution on the market for WSS 3.0 and MOSS 2007. You may remember that, back in the bad ol' days of SPS 2003, they provided SmartLibrary, a product that added undelete, audit trails, and approval routing to document libraries. They're clever guys that way. And they're about to release something that's just as clever, if not more so, for IT Pros. Follow me to http://mikefitzmaurice.wordpress.com and I'll tell you all about it.
But this isn't supposed to be an ad for my new employer. It's a farewell message. So here goes...
I haven't been blogging for a year and a half. There are reasons for that, though. The most important one is that the reason I started blogging (to share info with you on what's going on and to advise you on smart development investments) has been subsumed into the SharePoint Team Blog. When I started, that blog didn't exist. If you wanted inside, informal, unvarnished info, it was my blog or Arpan Shah's. Since then, the whole gang has gotten involved, and guys like Lawrence Liu have become cheerleaders and coordinators of those efforts.
The second reason is that I stopped working in SharePoint Marketing almost a year ago. I've still been part of the team, but I took on a role that essentially had me acting as a liaison between the product group and Microsoft's field sales teams, providing technical and competitive assistance when things got escalated. I'd done technical marketing for five years, and while I loved it, I needed to do something different. Acting as a competitive strategist and technical diplomat was perfect. But that job was focused within the company, so blogging outwardly became even less urgent. I still lobbied the dev teams incessantly, and some of the blog postings you've read were influenced by things I've coaxed or pushed them to write.
This departure isn't motivated by a desire to leave Microsoft, though. I like the job I've had for the last eleven months, to be honest. But this opportunity was dropped into my lap and was too good to pass up. The guys behind Nintex have been good friends, and their company's culture mixes productivity with fun. Plus I get to evangelize and steer product development. And help build things that augment a great platform in a very agile environment. It just feels right.
I started Microsoft in late 1997 in Microsoft Consulting Services, often creating collaboration and messaging solutions using MAPI, Exchange client and server add-ins, Outlook development, and plenty of other things. I was part of the consulting squad that helped the very first early adopter customers beta test "Tahoe", which would become SharePoint Portal Server 2001, and was part of a team that authored best-practice deployment solutions for optimal usage of SPS 2001 and SharePoint Team Services in a corporate intranet. I became a Technical Product Manager in time to bring SPS 2003 and WSS 2.0 to fruition, and haven't strayed from SharePoint-land ever since.
And I'm not straying from SharePoint-land now.
I'll be blogging about workflow as a general subject and on specific uses of Nintex Workflow to solve problems. I'll be blogging about solutions to enhance IT productivity, organizational intelligence, development issues, and a lot more.
But give me about a week. I'm going home to Canada to visit family before I jump on the Nintex bandwagon, just in time to see everyone at TechEd.
Love & Kisses,
I jumped the gun in yesterday's second post when I said the SDKs were ready along with the downloadable trials. They are indeed finished being written, but there are more hoops than you can imagine getting things staged, approved, tested, etc. for distribution on MSDN. The date of availability is the 28th of this month (November 2006).
While we're on the subject of SDKs in particular and developer information in general, I wanted to make a couple of quick comments. You may have seen them elsewhere, but maybe you haven't.
- Our strategy this time has been to make the SDKs as good as possible rather than get something out the door and backfill it for two years with whitepapers. We had a really good team spend a lot of time on this for the 2007 releases, and the situation is markedly different from the 2003-era offerings. I hope you'll notice the difference.
- There *will* be whitepapers, but they'll be focused on best practices, recommendations, case studies with code, big picture advice coupled with tactical advice, that sort of thing. But there will be a multi-month gap before the first one comes out, because:
- <tirade>You can't define best practices without observing several practices and choosing the best; offering "best practices" on Day 1 would mean we already know every way people will use our handiwork (naive at best) or we're just making stuff up.</tirade>
We'll publish opinions on recommended practices in the team blog, my blog, and plenty of other blogs, but until we can see them validated out in the wild, we won't call them best practices.
- The November 28 drop of the SDK won't be the last. We'll do incremental drops as needed or wanted, and sometimes we'll fold whitepaper content directly into the SDK where it makes sense. The goal is always going to be that the SDK is the ultimate authoritative reference.
- We're at work on training and certification for developers (we're also working on training/certification for IT Pros, but that's for Joel Oleson to discuss):
- We're opting to deliver these as e-learning rather than as instructor-led classroom content. We can reach more people and offer it when you need it, where you need it this way.
- You'll still need to go through the SDK once you're done with this content, mind you. Nothing ever absolves you from knowing the SDK :-)
- We'll release WSS 3.0-targeted content in early 2007; MOSS 2007 content will follow a few months later. All of the WSS content applies equally to MOSS; the MOSS courseware will solely cover features unique to MOSS, so the WSS course, or equivalent knowledge, will be a prerequisite. The exams will surface shortly after the elearning courses surface.
Couple this with the number of books under development, the third-party courseware offerings, the wide variety of content you can find on the Web in general and the blogosphere in particular, etc., and you're not going to have trouble finding out what you need to know this time.
While we generally lock down features many months before we get to RTM and spend all of the remaining time on performance and stability, there are a few – very few – times when we find we really have to squeeze in one more feature at the last minute. It happened with the RTM release of Microsoft Office SharePoint Server, and it required the cooperation of both the Search team and the team that works on the Business Data Catalog. The feature? Support for custom query-time trimming of search results.
You know how we can trim search results so users only see links to items they can actually open? That works great today as long as the content source being indexed can serialize the Access Control Lists for each item we index. When that works, we actually record the ACL in the index at index gather (a.k.a. crawl) time. And since we know who you are when you execute a search, we just compare your user ID/group memberships to those in the ACL and either display or suppress the search result accordingly.
But what do you do when the content source in question can't serialize its ACLs (e.g., a normal Web site), has a completely different approach to security (LOB systems), or can't serialize ACLs in a way we'd be able to make sense of them (some third-party content repository with its own user/group management)?
I'll tell you what you do. You create a component that implements the ISecurityTrimmer interface. We'll call its Initialize() function once and then hit it with one or more calls to the CheckAccess() function, passing you URLs and context info. Your job is to call whatever code you have to call for each of these URLs and return a bit array that tells us which URLs can be viewed by this user. We'll keep calling CheckAccess() until we build up enough results to fill a page (or until you set a flag that tells us you want us to stop).
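To give you the flavor of it, a bare-bones trimmer might look something like the sketch below. Check the SDK for the exact interface shape and namespaces; the CanUserOpen helper here is a hypothetical stand-in for whatever call your back-end security system needs.

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;
using Microsoft.Office.Server.Search.Administration;
using Microsoft.Office.Server.Search.Query;

public class MyTrimmer : ISecurityTrimmer
{
    // Called once; staticProperties come from the trimmer registration.
    public void Initialize(NameValueCollection staticProperties,
                           SearchContext searchContext)
    {
        // e.g., read a connection string for your back-end security API
    }

    // Called repeatedly until enough trimmed results exist to fill a page.
    public BitArray CheckAccess(IList<string> documentCrawlUrls,
                                IDictionary<string, object> sessionProperties)
    {
        BitArray canView = new BitArray(documentCrawlUrls.Count);
        for (int i = 0; i < documentCrawlUrls.Count; i++)
        {
            // Hypothetical call into your LOB system's authorization logic
            canView[i] = CanUserOpen(documentCrawlUrls[i]);
        }
        return canView;
    }

    private bool CanUserOpen(string url)
    {
        return true; // replace with a real per-user authorization check
    }
}
```

The important part is the contract: URLs in, one bit per URL out.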
This is all documented in the SDK that became available for download today at http://msdn.microsoft.com/office/sharepoint. Search for "Custom Security Trimming" and it will all fall into place.
Now, this would be great by itself, but the BDC gang did their part as well. There's a BDC Custom Security Trimmer in the box. As long as the application definition XML file has an Entity with a Method with a MethodInstance of type "AccessChecker", we'll use it for custom trimming purposes.
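Illustratively, the relevant fragment of an application definition looks something like the skeleton below. Element and attribute details are abbreviated (this isn't a schema-valid file); the names Customer, CheckCustomerAccess, and CanAccess are made-up examples. The Type="AccessChecker" attribute on the MethodInstance is the hook the built-in trimmer looks for.

```xml
<Entity Name="Customer">
  <Methods>
    <Method Name="CheckCustomerAccess">
      <!-- Parameters mapping user identity and item keys go here -->
      <MethodInstances>
        <MethodInstance Name="CustomerAccessChecker"
                        Type="AccessChecker"
                        ReturnParameterName="CanAccess" />
      </MethodInstances>
    </Method>
  </Methods>
</Entity>
```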
Before someone says it in comments, of course this can have an impact on query-time performance. How much will depend on your LOB system's API performance and the ratio of results to authorized results. But the alternatives are (a) don't search repositories that don't provide ACLs at index gathering time, (b) don't worry about security on search results of such data, or (c) create a custom protocol handler for searching such sources and have it incorporate a security mapping layer of some kind that translates the back-end system's security model into something that looks like NTFS ACLs. Me, I'd take the tiny hit and reap the massive benefits :-)
I'll work on getting an example of both a custom trimming component and a BDC application definition that references an AccessChecker method posted to GotDotNet, and when I do, I'll post links here.
More postings on integration topics to follow. Yes, I'm back and blogging up a storm for a while…
We get asked about what we're doing about AJAX quite a lot these days, especially about ASP.NET for AJAX (the technology formerly known as "Atlas"). We *love* AJAX. We actually use AJAX (albeit not ASP.NET for AJAX) all over the place in SharePoint sites. Dropdown menus when clicking on Edit Control Button (ECB) menus? AJAX. The Settings menu on a site's home page? AJAX.
As for ASP.NET for AJAX, we love it (we're ASP.NET developers ourselves – how could we not? :-)), we're excited to see Beta 2 emerge, but given that we started working on Windows SharePoint Services 3.0 and Microsoft Office SharePoint Server 2007 in late 2003, it wasn't possible to engineer those products to ensure 100% compatibility at RTM.
We've been investigating this issue very carefully. We've been working directly with the ASP.NET team to come up with the nicest possible way of giving you the ability to make all of this work. But we're not ready yet. We know some things work fine, other things will work fine once one or both of our teams have done some extra work, and a few (hopefully very few) things won't be supportable in SharePoint sites.
Until we're ready, though, we'd truly appreciate it if you let us work it out rather than trying to hack a solution of your own. It will take some time, but whatever we're able to work out will be something we can support.
Here's what we can say with confidence today:
- We will, when we release Service Pack 1 for Windows SharePoint Services 3.0, officially support some – but probably not all – uses of ASP.NET for AJAX in SharePoint site application pages, site pages, and Web Parts. Note that we just shipped WSS 3.0, and no, we don't yet have a target date for SP1. But when it comes out, it will include any work we believe we have to do for this (along with the usual bug fixes, etc.). We'll specifically tell you which ASP.NET for AJAX techniques are supported and which aren't. We'll have tested those scenarios and we'll know what happens with the techniques we wind up supporting.
- Until we release Service Pack 1 for Windows SharePoint Services 3.0, ASP.NET for AJAX is not supported. You're welcome to experiment with it, but we cannot endorse using ASP.NET for AJAX on a production deployment of WSS 3.0, MOSS 2007, etc. If you do so anyway, you're in the support business for that kind of thing and/or you'll have to depend on the community for assistance. Neither I nor the team wants to be mean about this, but when we say we support something, it's because we've tested it under many, many use cases, know what to expect, etc. We can't say that today.
Besides, you're going to have enough to get done with SharePoint Products and Technologies given that you can download WSS 3.0 on the 16th of this month, aren't you?
I'm here in Auckland with Arpan Shah presenting at Tech Ed: New Zealand sessions and meeting every Kiwi customer and partner I can. They've been working me pretty hard; I got off the plane at 5:10am this past Friday morning and started meetings with customers, partners, and local Microsoft people by 8:00am. Then again, I did offer to do so...
One word on surviving a 13-hour flight: Ambien. I got 8 hours of sleep on the plane thanks to the stuff, and I took it at the right time, so I woke up just in time for that 5am landing.
I've been getting plenty of good questions, observations, comments, etc., and it's been good. Because of the smaller number of sessions both here and in Australia, we can't do the normal 3-or-4 sessions on WSS development and a specific set of MOSS developer topics -- but that's actually been useful.
It served as a forcing function to create a single presentation called "Everything We Can Possibly Cover about SharePoint Technology Development in One Hour". It's a mile wide and a foot deep, and its goal is to (a) enumerate everywhere you can add code to WSS and MOSS, and (b) steer people toward the massive amount of materials we've been creating/distributing to get you ramped up on v.next. It's all about "what" as opposed to "how". I refined it to the point where one really can cover it in an hour if one is careful, and it seemed to go over Pretty Darned Well here.
I'll turn it into a webcast shortly after returning to Redmond, but after TechEd Australia, my wife (who's meeting me in Sydney tomorrow) and I are heading out on walkabout for two weeks. I'll try and post a deck somewhere as well, but the MSDN blog environment has some pretty strict limits on storage.
If you attended this past March's Office Developers Conference, I gave the original version of this talk there, but it's been refined and refactored a lot since then. And thanks to Ryan Duguid of Microsoft New Zealand, we've worked out exactly how much in the way of demo content helps tell the story in an hour.
Any time I visit Another Land, I try to do my homework in advance. It paid off in a big way here because I got to visit Napier, Art Deco Capital of the Southern Hemisphere, and got back in time to see the Bledisloe Cup.
WSRP, as I covered a few days ago, is an interoperability standard. It’s platform- and language-neutral. It’s all about requesting and transmitting chunks of HTML using SOAP. Today I’d like to weigh in on something usually uttered in the same breath as WSRP — namely, JSR168.
The fact that they’re uttered in the same breath is (a) a problem, and (b) no doubt fueled by some (not all) overzealous vendors who’d like you to believe you can’t have one without the other.
JSR168 is a standard for how to code portlets in Java. It’s a specific set of class definitions for authoring portlet code so it can be installed into multiple vendors’ Java-based portal products. That’s it. The WSRP connection comes from the fact that many Java-based portal vendors that can use JSR168 portlets also have code that can turn around and serve them up as WSRP producer Web services. My previous posting on WSRP already pointed out the potential issues of this approach for both network traffic management and for licensing, but that’s the connection.
If you want to compare apples to apples, JSR168 competes with Web Parts. Both have the potential to support WSRP. So let’s do that now…
- Language: JSR168 portlets must be coded in Java. Web Parts can be coded in any .NET language.
- Cross-Product: JSR168 portlets can, in theory, be written originally for one vendor’s Java-based portal product and deployed in another vendor’s Java-based portal product later. In reality, the usual cautionary tale about “write once, debug everywhere” is very likely to warrant attention. Web Parts, admittedly, run today in only one portal product of which I’m personally aware: SharePoint Portal Server.
- Cross-Application: JSR168 portlets are just that — portlets. They’re only designed to work in portal products. Web Parts, on the other hand, can run in many, many different kinds of applications. They originated in SharePoint Portal Server, but Windows SharePoint Services allows you to use them in collaborative workspaces, many WSS-based solutions do the same thing, and with the release of ASP.NET 2.0 and its newer Web Part control set, any ASP.NET application can make use of Web Parts — not just portals. We at Microsoft think that a user-customizable modular interface should be a basic right.
- Consuming WSRP: Depending on the vendor, some portal products directly consume WSRP Web services. Others opt to provide a JSR168 portlet designed to consume a WSRP Web service. In the successor to SharePoint Portal Server due out later this year, we’ll take the latter approach, including a fully-supported, robust Web Part that can consume WSRP Web services. A proof-of-concept that was tested against several vendors’ WSRP implementations was posted to GotDotNet last year, with source code, if you can’t wait for the successor to SPS 2003.
- Producing WSRP: Most vendors with portals supporting JSR168 include code to emit them as WSRP producer Web services. I’ll admit that we don’t do this with Web Parts today. Then again, .NET is great for authoring Web services, and NetUnity Software has tools for building WSRP producers on .NET.
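To make the Web Part side of that comparison concrete, here’s roughly the smallest possible ASP.NET 2.0 Web Part. It’s a sketch, but note that it compiles against System.Web alone and needs nothing SharePoint-specific, which is exactly the cross-application point above.

```csharp
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// A minimal ASP.NET 2.0 Web Part -- droppable into any WebPartZone,
// in or out of a portal product.
public class HelloWebPart : WebPart
{
    public HelloWebPart()
    {
        this.Title = "Hello";
    }

    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("Hello from an ASP.NET 2.0 Web Part");
    }
}
```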
If you’re only interested in Java-based products, and you’re planning to buy ready-to-use portlets from third parties, asking for JSR168 compliance increases the number of portlets from which you can choose. If you’re mostly building your own portlets, or you want to support a .NET or mixed environment, there are other ways that I think are smarter.
Smarter Way #1 is to use WSRP, but use it the final way I described it in my previous post. Create a portal UI staging tier consisting of WSRP producer Web services. Write them in .NET, in Java, or whatever you’d like. Render those WSRP producers using consumer code provided with any portal product. It’s service oriented architecture-friendly. It’s easier to deploy. It’s easier to modify.
Smarter Way #2 is even more fun. Reduce the need to code Web Parts altogether by relying on a great new tool that we’ll be introducing in Office “12” called the Business Data Catalog. We’ll auto-provision Web Parts based on known metadata about Web services and data sources. Conveniently, this will be the subject of my next post.
But I’ll close with one final thought, since I introduced the subject in the context of customers evaluating portal products. I really, really hope that people are coming to realize (and I know full well that many people get this already) that portals are much, much more than containers for portlets. It’s about the services behind those portlets at least as much as, and probably much more than, the portlets themselves.
I thought I’d start this next wave of postings with the subject I’ve avoided the longest — WSRP. This one will be a little long…
What Is WSRP?
The abbreviation stands for Web Services for Remote Portlets. There are plenty of details, but in essence, WSRP defines how to use SOAP to request and receive blocks of HTML (as opposed to making a method request and getting back XML data, which is what we usually request/receive using SOAP). It’s defined by an OASIS committee, whose site URL is http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsrp. Microsoft is a member.
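To make that concrete, a WSRP markup request boils down to a SOAP response whose payload is a chunk of HTML rather than structured data. Roughly like this, with element names simplified from the WSRP 1.0 types and the HTML content invented for illustration; don’t treat it as wire-accurate:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getMarkupResponse xmlns="urn:oasis:names:tc:wsrp:v1:types">
      <markupContext>
        <mimeType>text/html</mimeType>
        <!-- The "data" is escaped HTML, ready to drop into a page -->
        <markupString>
          &lt;div class="portlet"&gt;Open orders: 17&lt;/div&gt;
        </markupString>
      </markupContext>
    </getMarkupResponse>
  </soap:Body>
</soap:Envelope>
```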
What Isn’t WSRP?
WSRP is not magic. It’s also not a shortcut to good application design. And only if circumstances align in exactly the right pattern does it save time and money. We’ve been struggling with how to talk about WSRP for some time. We’re perfectly happy to support it — Microsoft loves Web services and interoperability standards, and WSRP is both of these. We’re not, however, especially eager to promote it — at least not the same way some (but by no means all) of “the other guys” have done. Like any tool, it’s good for some things and not so good for others…
Why Ask For WSRP?
Generally, three hypothetical reasons come to mind:
1. A customer has put together a request for proposals, and want submissions to support as many standards as possible for risk-reduction reasons. They haven’t considered whether they’d actually make use of it, but standards are generally good things to support. Let’s call this the “checkbox” motivation.
2. A customer has one portal product onsite and wants to buy another one. They don’t want to rewrite all of their portlets for the new platform. Being able to reuse a portlet in two places is attractive. In fact, let’s take it one step further, as some WSRP proponents have done, and say that the customer has a preexisting Web application (not portlets at all) and they want to surface pieces of it inside their newly-purchased portal product. Let’s call this the “quick reuse” motivation.
3. A customer provisions a great many sites on a shared Web farm, and doesn’t want to support installing custom portlet code in that shared environment. Being able to run the portlet on some extra middle tier, and just redisplay the results on their production portal machines, is very attractive. Let’s call this the “remoting” motivation.
How Good An Idea Is This?
1. Checkbox Motivation: Not a good idea at all. This is knee-jerk blindness. If you truly need a standard, of course you should demand support for it. But if it’s not going to help you solve a legitimate need, no one is served by demanding this.
2. Quick Reuse Motivation: Reuse is good, but the devil is always in the details. Most of the proponents of WSRP describe scenarios like what I’ve drawn here:
(Hey, it’s a blog, not a formal publication. Clippings of quick Tablet PC sketches just seemed like the right thing to do...)
It took time to develop the portlet running in Portal 1 here, and since Portal 2 isn’t from the same vendor, it’s attractive to have Portal 1 expose that portlet as a WSRP producer Web service that Portal 2 can consume and display. Sometimes, the producer isn’t even a portlet running in a portal product. It’s some code that’s been added to encapsulate an entire page-based application so it can be displayed elsewhere as a portlet — essentially doing IFRAME-ing via SOAP.
Now is the time to make sure everyone has started assessing this in the light of a Service Oriented Architecture (SOA), something every major player in the industry has been advocating for over a decade, ever since we started talking about three-tier application design. At a minimum, you should split the user interface, the business logic, and the data retrieval into three parts.
If you did the right thing in the first place and coded most of your functionality into a “traditional” Web service and hosted it on a middle tier server, the incremental cost of creating a quick portlet UI for interacting with it is trivial. The fact that you have vendor-specific portlets is no big deal. It looks like this:
Heck, FrontPage can, in a minute or so, generate a dataview Web Part that calls/displays Web service activity. I’m sure some of “the other guys” have quick generators for Web service-coupled portlets as well. And if you can wait until later this year, the Business Data Catalog that’s coming in Office “12” servers will eliminate the need to write any Web Part code in many scenarios (I’ll blog on this subject after I’m done with this one).
What’s more, making sure you offload most of your work to a middle tier Just Makes Sense. What if you want to reuse that logic in a non-portal application? An integration/orchestration server like BizTalk Server, WebMethods, etc.? What if you want to use it in a standalone Web application? Portlets are user interface components; they should not become substitutes for Web services that execute business rules and retrieve information.
“Heavy” portlets weighed down with a lot of code are not often a good idea. The motivation to reuse them increases as the portlets get “heavier”. Reusing heavy portlets compounds the problem — and two wrongs do not make a right.
Reuse works when we code for reusability. It takes planning and time to architect and code for reuse. It’s not magic.
What’s more, the approach used in my first drawing has the potential for two big unpleasant side effects: network traffic and licensing. Most application server environments are optimized around passing traffic from the browser to the Web server to the app server to the data and back. What happens when you have multiple farms of Web servers suddenly making frequent side-to-side callouts to each other in order to render pages? It introduces bottlenecks that have to be addressed by entirely new load-balancing logic and resources. Moreover, given that your portlet is running in a portal product, and its use has suddenly widened in scope to more users, you might find yourself paying more in Portal 1 license fees for all of the new traffic/usage that Portal 2 brings in.
But I’m not bashing WSRP itself; Microsoft certainly isn’t (it wouldn’t sit on the committee if it hated the standard). I’m bashing bad design, bad design often espoused by some of WSRP’s more vocal proponents. Actually, there is a way to implement WSRP that is anything but crazy, and focuses on reuse in a way that works as part of an SOA. So, while still thinking about reuse, let’s bring in the…
3. Remoting Motivation: Remoting is a very nice idea. I can understand this one completely in several cases. So let’s, depending on your point of view, either add a new tier to our SOA or split the UI tier into two parts. In other words, this:
The portals are responsible for final UI rendering, but there’s now a UI staging tier that’s handled by a WSRP producer server. Now we’re cooking with gas. Requests/responses travel up/down the optimized path, and the thing serving up the portlet is a Web service application server. Any portal product that has a general-purpose WSRP consumer (which we will have in the box with Office “12”) can make use of that Web service. You should still put most of the logic in the “traditional” Web service so it can be reused by non-portal applications.
This approach doesn’t hide the fact that a WSRP producer is a Web service. In fact, it embraces it. And while my opinion is admittedly pro-Microsoft, and I dare you to find a platform more conducive to building and using Web services than .NET, if you want to do this using Java, go for it.
We put an example of this on GotDotNet (more to show it can be done than to serve as a best-practice example), but if you’re looking for a commercial effort that helps you quickly generate WSRP producer Web services, it’s worth mentioning our friends at NetUnity Software (www.netunitysoftware.com). They have libraries, Visual Studio add-ins, and deployment tools for quickly authoring and deploying .NET-based WSRP producer Web services that can be consumed by any bona fide WSRP consumer. It’s almost always cheaper to deploy these kinds of producers than taking an approach that presses existing portal products into service as WSRP producer servers.
So, Should We Just Stop Writing Web Parts and Switch to Creating WSRP Portlets Using This Last Approach?
Of course not. Well, sometimes we might want to do so, but the line of reasoning to apply here is the same line of reasoning to apply when deciding whether to write a .NET assembly or a Web service. Locally-running code is usually faster than a remotely-called Web service. Locally-running code in the form of a Web Part has access to more information about the server in which it’s running, the page and site to which it’s been added, and the other Web Parts on the page (using Web Part connections — although version 2.0 of the WSRP specification is supposed to provide something for this). Deploying locally-running code to all servers in a farm can be annoying, but if you’re load-balancing that middle tier, it’s the same amount of work, and WSS “v3” will provide better tools to deploy custom code to entire server farms. Local Web Parts also require less infrastructure.
Whatever you do, don’t skip the step where you encapsulate business logic in a “normal” Web service. You’ll get real reuse by doing so, and it will make the job of coding “normal” Web Parts or WSRP producers all the easier.
It all comes down to developer productivity, ease of deployment, ease of maintenance, ease of reuse, etc. The devil’s in the details. It’s up to you to decide, but please, please, please make that decision an informed one.
My blog isn’t so much a running news log as it is an opinion column and platform for communicating practical advice. That having been said, I’ve been slacking off for too long. Well, take heart, as I’ve built up a backlog of things to cover, so you can expect quite a few postings for the next several days. And I promise to post more frequently or die trying. In the meantime…
Kudos to Our Friends/Customers/Partners in the UK!!!
Nick Swan at Dot Net Solutions asked me to help get the word out about the first meeting of the SharePoint UK User Group. Point thy browser at http://suguk.org/forums/thread/72.aspx and ready yourself for camaraderie and enlightenment.
The "SharePoint Team Blog" -- It's About @#$% Time!
If you haven’t seen the SharePoint Team Blog yet, take a look. That blog comes from the Grand Poobah of All Things Involving SharePoint Products and Technologies, Kurt DelBene (whose actual title is Corporate Vice President of the Office Server Group).
In fact, Kurt’s the only guy who can get away with referring to a single “SharePoint Team”, since all technologies based on SharePoint sites report up to him. Make no mistake, there are multiple teams working on multiple offerings (most obviously, Windows SharePoint Services and all of the Office “12” servers using SharePoint technology). But Kurt’s view is truly big-picture. If you want to know why the teams have made the decisions they’ve made, he’s the guy — ask him.
And don’t forget the guest postings from other team members. Jeff Teper, who’s responsible for portals, Web content management, and more, weighed in a few days ago as well.
The blog isn’t exclusively focused on development, either, so tell your infrastructure-focused friends. Heck, tell your business value-focused friends, too — if you think they can handle it :-)
This is one item I’ve been holding back for a while. FrontPage “12” will be a great SharePoint site designer on many, many fronts. Like the current version, using it to edit a page in a SharePoint site will cause the page to become unghosted. Unlike the current version, ghosting will cease to be much of a problem. Here’s why…
There are two reasons why we don't like unghosting: (1) it makes caching a lot harder and increases the number of database fetches, having a negative impact on performance, sometimes by as much as 20%, and (2) it makes managing those unghosted pages very, very difficult. Enforcing an update to a template just isn't something that's workable.
Good news: (1) a combination of caching improvements done by the ASP.NET team and the WSS team will result in unghosting having little if any performance impact, and (2) thanks to WSS’ ability to use ASP.NET 2.0 master pages and given that FrontPage “12” supports their design and use as well, you can use master pages to keep your unghosted pages organized.
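As a tiny illustration of why master pages help: the shared chrome lives in one .master file, and each page (unghosted or not) just plugs content into the placeholders, so updating the template means editing one file. The markup below is simplified standalone ASP.NET 2.0 (a real WSS page wires up its master a bit differently), and the "Contoso" branding is invented for the example.

```aspx
<%-- Site.master: the shared template --%>
<%@ Master Language="C#" %>
<html>
  <body>
    <h1>Contoso Intranet</h1>
    <asp:ContentPlaceHolder ID="Main" runat="server" />
  </body>
</html>

<%-- A page that stays organized even after it's been edited/unghosted --%>
<%@ Page MasterPageFile="~/Site.master" %>
<asp:Content ContentPlaceHolderID="Main" runat="server">
  <p>Page-specific content goes here.</p>
</asp:Content>
```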
Want to know who’s responsible for this feature? Maurice Prather. Want to bet he’ll expand on this over at his blog sometime soon?
So, after the second quarter of next year, those of you who’ve been afraid of FrontPage needn’t be. FrontPage loves you — get ready to reciprocate.
So it’s 10am as I start this posting, which I’m typing while I bounce back and forth between Ryan Stocker’s talk on the Web content management services you’ll see in Office “12” and Rob Lefferts’ talk on our unified approach to managing documents across Office “12” servers and clients. It’s the only time we’re truly viciously competing with ourselves in Office server-land, so I suppose we should count our blessings.
We had three sessions go into overflow rooms and one of them get scheduled for a repeat, so mega-kudos to those of you who are showing up to see what we’ve got in store. For those of you who couldn’t be here, allow me to provide a quick recap of what happened.
- Mike Ammerlaan did a session on ASP.NET 2.0 and WSS “v3”. Big highlights: Rather than wrapping and controlling ASP.NET, we now work within it in a much better behaved way. We’re supporting master pages, ASP.NET 2.0 Web Parts (more details on this later today or tomorrow), security providers (so you don’t have to authenticate against a Windows Domain), and a lot more.
- Mike Ammerlaan also delivered a talk on what’s up with templates, definitions, and more. The big news here is that site definitions get simpler. They will reference a new thing called Feature Definitions, a way to define a coherent part of a SharePoint site that collectively does something specific. A feature can contain lists, Web Part definitions, event handlers, content types, all kinds of things. You can activate them and deactivate them at will. They’ll be intelligently scoped so you can manage them a lot more cleanly, too.
- Dustin Friesenhahn presented what’s up with changes to the WSS platform for document and data storage. Big news: per-item permissions, a recycle bin, custom column indexes, etc. Event handlers cover more events, and they can be synchronous. We have a way to impersonate identities if the code is trusted. But the biggest news is our use of content types. I mentioned this yesterday, and Dustin spelled it out today.
- Mike Morton finished the day with a wrap-up of what’s in the collaboration application we deliver with WSS that sits on top of our platform — there is indeed a developer story there. He covered email integration (you can use SharePoint sites for email archiving or discussion list membership in “v3”) and directory integration (we can control directories when you create/modify sites and create/edit groups to match, and you can write providers for it). He also covered our new synchronization APIs you can use for server-to-server and server-to-smart-client applications, and he dissected how the next version of Outlook will use them. Oh, and the new alerts functionality, too.
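To make the Feature Definitions idea from Mike’s templates talk concrete, here’s a purely hypothetical sketch of what a feature definition file might look like. Every element name, attribute, file name, and GUID below is my own guess based on the description above, not official schema:

```xml
<!-- Feature.xml (hypothetical): declares a scoped, activatable unit of functionality -->
<Feature Id="10000000-2000-3000-4000-500000000001"
         Title="Team Issue Tracking"
         Scope="Web">
  <!-- Points at manifests describing the lists, Web Part definitions,
       event handlers, and content types this feature contains -->
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
  </ElementManifests>
</Feature>
```

The idea is that activating the feature on a site provisions everything the manifests describe, and deactivating it withdraws the whole coherent unit, which is what makes the intelligent scoping possible.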
Today we’ll be covering document management, Web content management (which, together, comprise Enterprise Content Management, or ECM, which I’ll cover within the next few days), how WSS and Office servers will use and expand upon the Windows Workflow Foundation, search, our Business Data Catalog, and some of our business intelligence and reporting facilities. I’ll try and do a recap of that tomorrow.
Moreover, over the next several days and weeks, I’ll provide digest recaps of the sessions taking place here. Stay tuned — I wasn’t kidding about the gag being removed.
Two new blogs have been created in the past few days that I thought you should bookmark:
If you’re building applications for the U.S. Federal Government, the Microsoft Developer and Platform Evangelism team just started up a blog for issues and knowledge sharing that’s unique to your universe, and they know how to speak your language. They’re good guys (I used to be one of them in a past life), and they want to help in a big, big way. This blog is for all development issues, including SharePoint technology but a lot of other stuff, too.
The even bigger news is that P.J. Hough, the Group Program Manager for Windows SharePoint Services, has started up a blog for himself and his team. They’re just getting started, and they’ve never blogged before, so go easy on them, especially since they’re very hard at work bringing Beta 1 to life later this fall.
Remember Monday’s post, “FitzBlog: Web Part Interoperability -- Good News and Bad News”? The news is out. It was U2U’s news to break, and Mike Ammerlaan just helped them break it here at his PDC session on WSS “v3”/ASP.NET 2.0.
The news? Son Of SmartPart. Jan Tielens and the U2U gang have done it again, and they’ve created a WSS “v2” Web Part that can contain an ASP.NET 2.0 Web Part. Go hit their info site for details. Rock on, guys.
I just got out of Steven Sinofsky’s keynote session on Office “12” here at the PDC, and that means one thing. Actually, multiple things, but the big one is that I can stop being (as much of) a jerk about not saying anything about what we’re doing next. Let the disclosure begin!!!
Yesterday morning, Bill Gates and Chris Caposella revealed the new Office “12” client UI. It’s a client thing, so I’m not going to talk about that for now. This morning, Steven showed off a bunch ‘o’ stuff, so I’ll start off in this post by just listing ‘em.
- RSS. Everything about sites, lists, libraries, etc., can be syndicated via RSS automatically.
- Blogs and Wikis. Templates and features in the box.
- Content Types. These aren’t just a rerun of SPS 2001’s document profiles. They define sets of metadata, but they also contain view information. And associated workflows. And events bound to them (synchronous or asynchronous). And you can have more than one in the same list/library.
- Workflow. Windows Workflow Foundation is embedded in WSS. It’s used everywhere.
- Recycle Bin. We did it. It’s scoped to a site and captures deleted documents, items, etc. It has a user restore and an administrative restore.
- Per-item security. Even on list items.
- FrontPage has evolved into a feature set that makes it truly a SharePoint site designer. (Ghosting’s still around, but it won’t be a problem anymore. I’ll explain why in an upcoming post.)
- Forms services in Office “12” servers. That’s right — design a form in the InfoPath rich client, publish it to a SharePoint site, and it can be viewed/filled out either in the full smart client or in a browser as HTML.
- Search. Better APIs. Better results. Alternate search suggestions (misspelled words, etc.). A highly customizable default Web-based UI.
- Office “12” servers will also contain a Business Data Catalog, a facility that registers LOB application data and Web services. Once that’s happened, BDC-aware Web Parts can pull data from them, we can index them, and a lot more. There’s a session on this tomorrow.
- Office “12” servers will also be able to take a spreadsheet published to a SharePoint site and render it as an HTML application.
- Mobile views of SharePoint lists. That’s right, a way to render a list on a mobile device.
- Lists now have a Business Data type that will use the aforementioned Business Data Catalog.
- Access will be able to treat SharePoint site data as full-blown data sources.
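The RSS item at the top of that list is the one you can play with using completely stock tooling. A minimal sketch in Python of consuming such a feed follows; the feed content below is a made-up stand-in for what a syndicated announcements list might emit, and only the standard RSS 2.0 shape is assumed:

```python
import xml.etree.ElementTree as ET

def item_titles(rss_xml: str) -> list[str]:
    """Pull the item titles out of a standard RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

# A tiny stand-in for what a syndicated SharePoint announcements list might emit.
sample = """<rss version="2.0"><channel>
  <title>Team Announcements</title>
  <item><title>Server maintenance Friday</title></item>
  <item><title>New hire: Pat</title></item>
</channel></rss>"""

print(item_titles(sample))
# ['Server maintenance Friday', 'New hire: Pat']
```

The point is that nothing SharePoint-specific is needed on the consuming side; any feed reader, aggregator, or ten-line script can subscribe to a list.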
Mike Ammerlaan’s talk on how WSS v3 interoperates with ASP.NET 2.0 is starting. More later.
So we’re getting ready to unveil a little bit tomorrow and a lot on Wednesday regarding WSS “v3” and our Office “12” servers, but a couple of things have popped up that are very, very relevant to people that want to develop Web Parts using ASP.NET 2.0 and use them in today’s versions of WSS and SPS. As the old cliche goes, I’ve got good news and bad news….
The good news first: within a couple of days, some people we all (well, at least I) know and love will be announcing something very nice and very helpful for those of you who want to develop ASP.NET 2.0 Web Parts and use them in WSS (once Service Pack 2 is released, of course). It’s their news to announce, so I’ll wait for them to do it and post an update with a link.
The bad news: it will only work with WSS (again, with SP2). The SPS team tried, but ran into too many issues to be able to support running SPS 2003 Service Pack 2 on the ASP.NET 2.0 runtime. So if you have SPS in the picture, you can’t use Visual Studio 2005 or ASP.NET 2.0. You can call Web services and redisplay pages written with ASP.NET 2.0, but that’s it. SPS 2003 SP2 will actually do a lot of good things (more news on that coming up soon), but unfortunately, ASP.NET 2.0 support won’t be one of them.
WSS-only deployments will be able to run on version 2.0 of the .NET framework and will allow — up to a point — the use of some new ASP.NET 2.0 technology in the process. The announcement at which I’ve already hinted will take advantage of that aggressively.
Starting tomorrow — PDC news from the floor, the hallways, and the session rooms….
I’ve been at a lot of conferences in the past few months, and the question I was asked most often was about what we support and don’t support when it comes to site and portal customizations, modifications to site definitions, and so on. The thing that spawned these questions is this article on TechNet. In a nutshell, it suggests in pretty strong words that if you modify our out-of-the-box site definitions, we won’t support them anymore, and that if you create new site definitions, we won’t support changes to them once you’ve actually created even one site based on them.
Don’t panic. You’re not in trouble. You can still customize site definitions. Allow me to explain…
1. What We Mean By “Support”
When we say something is supported, we mean a number of things, including (a) it’s our code, so (b) we’ve tested it and know how it’s supposed to behave, (c) when you’re having trouble making it behave as designed, we’ll help you fix it, and (d) it conforms to our expectations as to what has been placed where so we can automate changes to it (very important when applying service packs).
If you modify a site definition, then it becomes your code, at least in part. We won’t know what you did, and we won’t know how your changes will affect performance, stability, or security. But that doesn’t mean that you’re all alone if you choose to customize a site, an area, or a site definition.
2. You Can Still Put Yourself Back Into a Supportable State At Will
If you encounter a problem that will require support, and you’ve customized WSS or SPS into an unsupportable state, and you can roll back your changes, and the problem is still there — then we’ll work with you to fix it. What that means is that you should take note of the changes and customizations you make so you can remove and reapply them at will. It’s more work, but it’s worth it (especially if we’d otherwise be overwriting your changes when we install a service pack).
And, of course, if the problem goes away when you roll back your changes, then the problem is with your changes. But even then, you’re not out in the cold — you still have options…
3. Actually, Microsoft Has a Support Vehicle for Custom Code
Product Support Services and/or standard Premier Support are designed to support our code. The aforementioned statement on supportability pertains to what they’ll do for you because they know what our products do and how to make them behave the way they were designed to behave. But they’re not the only support vehicle Microsoft offers.
We also have Premier Support for Developers (PSFD). They support your code when you develop with our APIs, tools, or platforms. They charge by the hour, not by the incident, but they’ll support anything you write, customize, etc. PSFD will work with you on custom site definitions, Web Parts, and just about everything else.
So, for maximum supportability when you need to customize your sites, I’d urge you to (a) keep track of your changes and plan to be able to remove and reapply them, (b) work out a PSFD agreement (get in touch with your Microsoft office for details).
And above all, don’t panic.
The short answer: Heck, yeah!
The long answer: For the love of God, don’t try it in production yet. SQL Server 2005 hasn’t shipped. But as we’ve been testing our beta code for WSS Service Pack 2, due out later this year, we’ve been testing it against SQL Server 2005 betas, and it appears to work just fine. Our plan is to support SQL Server 2005 as a data store for the current shipping version of WSS if Service Pack 2 is applied. Circumstances could force us to change those plans, but those are the plans.
I don’t yet know for certain if we’ll be able to say the same for Microsoft Office SharePoint Portal Server 2003. I’ll tell you as soon as I get a ruling on it.
The next releases of SPS and WSS will both work with either SQL Server 2005 or SQL Server 2000. We’re not going to make you upgrade everything in your server room. That’d be mean. We’re not mean. Well, maybe a little: there may be a few features in SPS, I believe, that will only activate if you have access to a copy of SQL Server 2005. But that’d be about it. SQL Server 2005 is great, and you should upgrade to it, but we won’t force you to do it against your will.
Heck, for that matter, if you haven’t heard it yet, the next version of Windows SharePoint Services will still run on Windows Server 2003. Ditto SharePoint Portal Server.
Head over to http://commnet.microsoftpdc.com/content/sessions.aspx to see a nearly-complete list of what we can tell you to date. Specifically look at the “Office and SharePoint” track. Joe Andreshak and I are producing this track (he’s doing Office client technology, and I’m doing Office server technology). We’re both happy and relieved to finally see a preliminary list surface on a public site. We’re all ears if you have concerns or requests. We’ll read them and seriously think about them, but I can’t promise we’ll be able to fulfill them.
Would you like to know what the list doesn’t say?
- Many session titles and descriptions will change by the time you pick up the final conference guide in L.A. By then, the titles/descriptions will cite specific names and specific feature functionalities. So if some of them appear vague, it’s because they refer to technology we won’t announce until the PDC — the description you see is the best we can do for now.
For example, let’s use the one thing we can talk about before September, namely what’s going on with ASP.NET 2.0 Web Parts. If it, too, had to be kept under wraps until September, a session that would otherwise be called “Developing ASP.NET 2.0 Web Parts for Use In SharePoint Sites” would have to be entitled “Advancements in Web Part Technology for SharePoint Sites” until the day of the event. Get it?
Sorry it has to be this way. But in the meantime, feel free to creatively interpret some of the otherwise vague session titles. I won’t confirm or deny any speculation, and neither will Joe, but the sessions, when finally delivered, will be anything but general and vague. I promise.
- This actually isn’t a complete list. There are a couple of sessions we can’t tell you about yet. There’s no way to provide even a title that wouldn’t give away a new feature or product that we won’t announce until the event. So we have to keep the whole thing under wraps. That’s just for a couple of sessions.
- We decided at the last minute to name the track “Office and SharePoint”. Yes, I wanted it to read “Office System and SharePoint Services”, but that was determined to be too long. We opted to keep the WSS content close to the Office content for simplicity’s sake. Note, however, that many of the WSS sessions are (or will be) cross-listed in the Data and Systems track. And should be.
- That track name should suggest to you that SharePoint Services is alive and well and very much a part of Windows Server 2003. Even in the next release.
- If you’re wondering why something isn’t listed, note that our sessions are focused on things you can build, not things you’ll use to build them. It’s very possible that a session will highlight a server API, a smart client, a design tool, and a development tool.
- It’s inevitable that we’ll use the PDC to show products’ business value to some extent, but that will absolutely take a back seat to showing their developer value. The story of the overall value of the next iteration of the Office System will be told elsewhere. Deployment and infrastructure issues will be covered elsewhere. If you’re wondering why we don’t talk about something, ask yourself if it’s truly a development topic. If the answer is “not really”, that’s why it’s not on the list.
- We also aren’t talking about anything being released prior to late next year, nor are we talking about things released after the big “Office 12”/WSS “v3” release.
That’s it for now. Oh, one other thing. The ask-the-experts people, session speakers, and hands-on lab proctors will be actual program managers, developers, or testers of the products being discussed. We’ll have booths as well for those of you with questions on currently-shipping technology, and we’re working to staff those with the most experienced, battle-scarred solution developers we can find.
I know that, while I’m travelling, I should just file blog postings from the road since it’s more dynamic and compelling ‘n’ stuff, but I didn’t. I’m sorry. On the plus side, I’ve been taking notes, and I have a lot to blog about for several days.
I’ve been to TechEd North America, SharePoint Advisor Live, the Collaboration Technologies Conference, and TechEd Europe. Great events all. You’ve been talking to me, and I’ve been paying attention. Expect individual blog posts on supportability, interoperability with other products, the PDC, and several other topics.
But before I move on…
Cool Partner Product Shout-Out: Longitude
First off, I blogged about Ontolica last month, and wanted to point out another good partner enhancement for SPS search, namely Longitude from BA-Insight. They do at least two very clever things (they do more than two, but I particularly care about two). One is that they inspect SPS’ taxonomy information and provide an easy interface to further filter search results.
The really clever thing they do, though, from my developer evangelism bias, is hook into the index-gathering process and add an event to our index pipeline (the set of things the index gatherer does when it decides a document has been modified, such as index it, check for notification alerts, check for topic assistant pattern matches, etc.). Longitude will create viewable images of multiple pages of each document being indexed. It stores them in its own database and provides SPS UI for adding file viewing to the browser-based search results experience.
The extra clever bit is that we haven’t documented the index pipeline APIs – they had to have figured this out by trial and error. But now that someone has proven that there’s a commercially viable reason to do such a thing, it gives me ammo to go back and persuade our SPS Search team to, indeed, document the darned things.
The SharePad workspace on GotDotNet has been updated. It now supports getting/setting custom properties with every open/save. If you want an example of how to use FrontPage RPCs, the URL protocol, etc., this one’s ready to go.
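For the curious, FrontPage RPCs are plain form-encoded HTTP POSTs to the server’s author.dll endpoint (conventionally /_vti_bin/_vti_aut/author.dll). Here’s a rough Python sketch of building one such request body; the method version string and document name are illustrative placeholders, and I haven’t run this against a real server:

```python
from urllib.parse import urlencode

def build_get_document_body(doc_name: str, version: str = "6.0.2.5420") -> str:
    """Form-encode the parameters for a 'get document' FrontPage RPC call."""
    params = [
        ("method", f"get document:{version}"),  # RPC verb plus a protocol version
        ("service_name", ""),                   # empty means the root of the web
        ("document_name", doc_name),            # site-relative path to the file
    ]
    return urlencode(params)

body = build_get_document_body("Shared Documents/spec.doc")
print(body)
# method=get+document%3A6.0.2.5420&service_name=&document_name=Shared+Documents%2Fspec.doc
```

You’d POST that body with a Content-Type of application/x-www-form-urlencoded and then parse the response; the SharePad source is the place to see the full round trip done properly.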
I’ve already mentioned that the Interlink Group deserves special thanks for building this to our specs and running down all of the implementation details, but I wanted to extend specific thanks to Bob Brumfield for doing the vast majority of the coding and research, as well as David Appel, who provided support, code slinging, testing, and validation.