

Date: Monday, 03 Mar 2014 19:37

Someone asked me a question recently regarding Windows auth and NTFS permissions. Here’s the question:

When I run an IIS site using Windows Authentication, is there a way to let
the Application Pool account access files on disk instead of the logged-in
user? Ideally, the developer (or operations) can list the users in web.config only, without the need to add these users to the file permissions or add them to some AD group that has these permissions.

Unless you work with these permissions often, it may be difficult to understand the situation well, so let me explain this another way.

To have a properly secured server, you should use the principle of least privilege, which essentially says that you should grant only what is absolutely required to enable a service to work, and nothing more. If you do this properly then you should have a tight list of permissions on disk for your website.

The difficulty comes when you use Windows authentication—rather than anonymous authentication—to grant access to a website, or a part of a website. What if you want to use IIS’s URL Authorization to manage access rather than NTFS?

Keep reading and I’ll explain further. First let’s gain more of an understanding on how IIS security works.

Basic permissions required for anonymous access

When you use anonymous access, a clean setup will implement the following settings:

  • Anonymous authentication for the website should be set to use the Application pool identity
  • Permissions on disk should be granted to:
    • SYSTEM: Full control
    • Administrators: Full control
    • Application pool identity: Read or Modify, depending on your requirements. (It’s useful to use the AppPoolIdentity account if you are only accessing local resources)
    • Users required for FTP, web publishing or any other access to content on disk.

If you can achieve this (and you should!), you will have permissions scoped to just the one site, which minimizes the attack surface if another site on the same server is untrusted, or if it is exploited.

Basic permissions required for Windows authentication

However, what if you want to use Windows auth to grant or deny users access to your site based on their Windows accounts?

First, you would turn off anonymous authentication so that users are required to authenticate with a Windows account.

There are now two options for the authorization part—which is to determine which Windows accounts are allowed and which are not:

  1. NTFS: Depend on the NTFS permissions (ACLs) on disk to determine which users have access (e.g. User1 is granted access but User2 isn’t). If you grant a user access on disk then they can access the site. If they do not have access then … well, they don’t have access.
  2. URL Authorization: Use IIS and/or ASP.NET’s URL Authorization. By default all users are granted access, but you can change this. Following is an example which has the default Allow All Users removed, and User1, User2, and the Administrators group granted access.
    These settings are saved to the site’s web.config file, so you can set them manually too, and of course set them at the server level or other places in the IIS configuration hierarchy.
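    For reference, here’s a rough sketch of how those URL Authorization settings end up looking in web.config (the account names here are placeholders for illustration, not from the original question):

    <system.webServer>
      <security>
        <authorization>
          <!-- Remove the default "allow all users" rule -->
          <remove users="*" roles="" verbs="" />
          <!-- Grant access to specific Windows accounts and a group -->
          <add accessType="Allow" users="DOMAIN\User1, DOMAIN\User2" />
          <add accessType="Allow" roles="Administrators" />
        </authorization>
      </security>
    </system.webServer>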

When using the URL Authorization method, you would need to grant access on disk to the Windows account (e.g. User1), basically meaning that option #1 and #2 are both used simultaneously.

Back to the original question

Let’s get back to the original question. What if you don’t want to have to grant the Windows accounts access on disk (#1 above) but you want to use URL Authorization (#2 above) to authorize Windows accounts access to your site?

Or, to word it another way, what if you want to use #2 without having to worry about #1 too?

Which users require access to disk?

This is possible, but let me step aside again and briefly explain how access to disk is determined.

The website accesses the disk by using the w3wp.exe worker process, which is essentially the application pool. The identity set for that app pool (e.g. IIS AppPool\Site001) is used in some situations on disk. In the anonymous access situation mentioned above, the site is set to always use the application pool identity for anonymous users. So in that case you only need to grant the application pool identity access on disk (SYSTEM and Administrators are beneficial for other usage, but not for actually running the site).

When using Windows authentication, the application pool identity (e.g. IIS AppPool\Site001) is used for some access but the Windows account (e.g. User1) is used for other access. It depends on the impersonation settings of the application or framework that you’re using. Therefore, you would generally need to grant access to the application pool identity, plus every Windows account (e.g. User1, User2, User99) which needs access to your site.

Back yet again to the original question

Let me try again to answer the original question, now that we’ve covered the aforementioned concepts.

What if you don’t want to have to maintain the disk permissions for each of the Windows users who you will add over time?

You have at least four options for this:

  1. You can be sloppy and just grant the Users group or the Everyone account access to disk. But please, please don’t do this on a production server. In fact, even on your dev machine it’s not a good habit to get into.
  2. You can create a Windows group and whenever you have a new user who needs access to a site, you would add them to that group. This is a great solution and generally the best option. However, the original question above stated that they didn’t want to do this in their case, so that leads to #3.
  3. You can use the Connect As setting in IIS to have all disk access use a single account, regardless of the configuration. This means that you will not have to make any changes to Users or Groups; instead you can make an IIS change to the list of allowed users and they will be granted access. If you are using IIS 7.0 or IIS 6.0 then this may be the option to consider, since the next option requires IIS 7.5.
  4. In IIS 7.5, a new setting was introduced called authenticatedUserOverride. This enables you to say that you will use the application pool identity (Worker Process User) instead of the authenticated user’s identity. This is exactly what we’re looking for in this case.
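As a side note on option #3, the Connect As setting maps to the userName and password attributes of the site’s root virtual directory, so it can be scripted as well. A hedged sketch with AppCmd (the site name, account, and password are placeholders):

appcmd.exe set vdir "Site001/" /userName:DOMAIN\ContentUser /password:YourPasswordHere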

The #4 option

Just like you can set the anonymous user to always run as the application pool identity, you can do the same for authenticated users. This was introduced in IIS 7.5. You can find the official documentation here.

You can change this using Configuration Editor, AppCmd or any of the APIs to update the settings, and you can set this at the server, site, or application level.

Here are the steps that are required to set the permissions for this requirement:

  1. Turn on Windows auth and turn off Anonymous auth:
  2. Using URL Authorization, grant only the users who should have access to the site:
  3. For the NTFS permissions, grant the following:
    • SYSTEM: Full control
    • Administrators: Full control
    • Your application pool identity account (e.g. IIS AppPool\Site001): Read or Modify, depending on your requirements.
    • Users required for FTP, web publishing or any other access to content on disk. 
      (Notice that these are the same settings as shown previously; so far everything is the same.)
  4. Go to the site (or server or subfolder) in IIS Manager and open Configuration Editor.
  5. For the Section selection, choose “system.webServer/serverRuntime”
  6. For the authenticatedUserOverride setting, select UseWorkerProcessUser
  7. Click Apply 

Alternately, for Steps 4-7 you can do the same with AppCmd using the following command. Make sure to replace the site name with your site name:

appcmd.exe set config "Site001" -section:system.webServer/serverRuntime /authenticatedUserOverride:"UseWorkerProcessUser" /commit:apphost

Voilà! You have a minimalistic permission set on disk that supports Windows authentication without having to update permissions for each user. Simply update URL Authorization whenever you need to grant or deny access to Windows users.

Note that this does not address non-disk access like integrated authentication for SQL Server or access to registry keys or other network resources. They will still use either the application pool identity or the Windows user account’s identity, depending on the impersonation settings that your application uses.

As you can see, there are multiple configurations that you can use for anonymous or Windows based authentication. You have a lot of flexibility to set this up the way that makes the most sense to you.

Just make sure that you don’t settle on a non-secure method. IIS provides a highly secure and manageable solution as long as you use it correctly. Hopefully this article provides an understanding of the basic building blocks necessary to do so.

Author: "OWScott" Tags: "IIS, IIS7, Windows Server, IIS8"
Date: Wednesday, 29 Jan 2014 16:17

IIS URL Rewrite has five different types of actions. They are: Rewrite, Redirect, Custom Response, Abort Request, and None. And if you have ARR (Application Request Routing) installed, then at the server level you’ll also see Route to Server Farm. The two most common actions are the Rewrite and the Redirect.

A common question that comes up for people who just start working with URL Rewrite is: what is the difference between a rewrite and a redirect? I remember wondering the same thing.

Fortunately there are some very clear cut-and-dry differences between them.

Simply put, a redirect is a client-side request to have the web browser go to another URL. This means that the URL that you see in the browser will update to the new URL.

A rewrite is a server-side rewrite of the URL before it’s fully processed by IIS. This will not change what you see in the browser because the changes are hidden from the user.

Let’s take a look at some other differences between them:

Redirect:

  • Client-side
  • Changes the URL in the browser address bar
  • Supports the following redirect status codes: 301 (Permanent), 302 (Found), 303 (See Other), 307 (Temporary)
  • Useful for search engine optimization by causing the search engine to update the URL
  • Example: http://yourdomain.com to http://www.yourdomain.com in the browser
  • Can redirect to the same site or an unrelated site
  • The page request flow is: the browser requests a page; the server responds with a redirect status code; the browser makes a 2nd request to the new URL; the server responds to the new URL
  • Fiddler is a great tool to see the back and forth between the browser and server

Rewrite:

  • Server-side
  • Doesn’t change the URL in the browser address bar
  • Redirect status codes are non-applicable
  • Useful for search engines by using a friendly URL to hide a messy URL
  • Example: http://localtest.me/articles/how-to-win-at-chess is a friendly URL for http://localtest.me/articles.aspx?name=how-to-win-at-chess
  • Generally rewrites to the same site using a relative path, although if you have the ARR module installed you can rewrite to a different site; when you rewrite to a different site, URL Rewrite functions as a reverse proxy
  • The page request flow is: the browser requests a page; URL Rewrite rewrites the URL and makes a request (still within IIS) for the updated page
  • Tools like Process Monitor and native IIS tools are best for getting under the covers
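Expressed as rules, here’s a minimal sketch of one rule of each type as it might appear in a site’s web.config (the domain, patterns, and rule names are illustrative, not from this article):

<rewrite>
  <rules>
    <!-- Redirect: the browser is sent to the new URL and the address bar changes -->
    <rule name="Redirect to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^yourdomain\.com$" />
      </conditions>
      <action type="Redirect" url="http://www.yourdomain.com/{R:1}" redirectType="Permanent" />
    </rule>
    <!-- Rewrite: IIS serves the target path; the address bar is untouched -->
    <rule name="Friendly article URLs" stopProcessing="true">
      <match url="^articles/([^/]+)$" />
      <action type="Rewrite" url="articles.aspx?name={R:1}" />
    </rule>
  </rules>
</rewrite>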

Let’s take a look at some further examples:

A redirect changes the URL in the browser, like in the following examples:

Add a www to the domain name.

Enforce trailing slashes or force to lowercase.

Map an old URL to a new URL after a site redesign, and let search engines know about it.

A rewrite doesn’t change the URL in the browser, but it does change the URL before the request is fully processed by IIS.

In the following example the URL is a friendly URL in the browser but the final URL seen by ASP.NET is not as friendly.

Or you can use any part of the URL in a useful way by rewriting the URL. Again, the URL in the browser remains the same while the path or query string behind the scenes is changed.

These are just some examples. Hopefully they clarify the difference between a rewrite and a redirect in URL Rewrite for IIS and help you with your URL Rewriting.

Author: "OWScott" Tags: "IIS, IIS7, URL Rewrite, IIS8"
Date: Monday, 28 Oct 2013 14:39

Software developer and IT Pro job interviews can be a lot of fun when you’re prepared for them, but they can be scary and overwhelming when you’re not. Since your job is where you’ll spend about 50% of your waking hours during the week, the more at ease you are with the interviews, the better you’ll be at landing your ideal job.

Whether you’re a seasoned professional or newer to the field, you will benefit from brushing up on your interview skills.

Ben Weiss from Infusive Solutions showed me a creative resource to help prepare for job interviews. Yes, interviews with an ‘s’. There can be up to four interviews for a single job position (with Human Resources, a Senior Developer, the Software Manager, and the Chief Technology Officer), each of which requires unique preparation and execution.

I don’t have an interview coming up but reading this almost made me want to prepare and see how I would do in an interview today.

What makes this document unique is that it’s a lot of fun: it compares the process to preparing to take on one of the end-of-level bosses in your favorite game. Think Mario, Zelda or Duke Nukem. Each game boss requires different weapons or skills, making the analogy not only fun, but applicable too.

At the risk of getting off topic, have you seen the size comparison of Sci-Fi’s greatest machines and monsters? If not, check it out.

I was impressed with the document because it’s a nice, easy 17-page read: long enough to be a complete resource, but not too long. It also has links to other resources.

Ben and the other co-authors are trying to bring awareness to their companies. Ben helps software developers and IT pros with job placement in the Tri-state area. However, he agreed to place a link to the document that doesn’t ask for any of your information. The document is available as a free download without asking for anything in return.

If you have a job interview coming up I encourage you to download the PDF doc and check it out. Save it for later if needed. I believe that it will help you in your preparation in a fun and fully worthwhile way.

You can download it here.

Author: "OWScott" Tags: "General"
Date: Thursday, 24 Oct 2013 16:18

There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy).

Essentially, you can front the internal web server with a friendly URL, even hiding custom ports.

For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/. The URL can be made public, or it can be used by your internal staff and be password protected and/or locked down by IP address.


This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer.

A lot can be said about reverse proxies and many different situations and ways to route the traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place.

URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that.

Getting Started

First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route using conditions for certain traffic.

After you’ve created your site then open up URL Rewrite at the site level.


Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule.


If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside your network to the world, so just be aware of what you’re doing and why.


The next and final step of the template asks a few questions.


The first textbox asks for the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here; the template assumes that it’s not entered.

You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe then uncheck this.

Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server.

If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site.


That’s it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server and it will serve up the site from your internal web server.

You should see two rules show up; one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed.

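For reference, the two generated rules look roughly like the following in web.config (using the 10.10.0.50:8111 and tools.mysite.com example; the wizard’s exact rule names and patterns may differ):

<rewrite>
  <rules>
    <!-- Inbound: forward every request to the internal server -->
    <rule name="ReverseProxyInboundRule1" stopProcessing="true">
      <match url="(.*)" />
      <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
    </rule>
  </rules>
  <outboundRules>
    <!-- Outbound: rewrite internal links in HTML responses to the public domain -->
    <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
      <match filterByTags="A, Form, Img" pattern="^http(s)?://10.10.0.50:8111/(.*)" />
      <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
    </rule>
    <preConditions>
      <preCondition name="ResponseIsHtml1">
        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>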

One common issue that can occur with outbound rules has to do with compression. If you run into errors with the newly proxied site, try turning off compression to confirm whether that’s the issue. Here’s a link with details on how to deal with compression and outbound rules.

I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.

Author: "OWScott" Tags: "IIS, IIS7, URL Rewrite, ARR, IIS8"
Date: Saturday, 21 Sep 2013 14:21

A fairly common request for URL Rewrite is to prepend a www to all 2nd level domains, regardless of the domain name. Consider the following domain names:

  • http://domain1.com
  • http://domain2.net
  • https://domain3.org

Following is an IIS URL Rewrite rule which will add the www to domain names without requiring you to create multiple rules. It will also maintain the http or https while doing so.

<rule name="Prepend www to 2nd level domain names" enabled="true" stopProcessing="true">
    <match url=".*" />
    <conditions trackAllCaptures="true">
        <add input="{HTTP_HOST}" pattern="^([^.]+\.[^.]+)$" />
        <add input="{CACHE_URL}" pattern="^(.+)://" />
    </conditions>
    <action type="Redirect" url="{C:2}://www.{C:1}/{R:0}" />
</rule>

This will result in the following URLs:

  • http://www.domain1.com
  • http://www.domain2.net
  • https://www.domain3.org

If you want to exclude a particular 2nd level domain name then simply add a negated third condition for the domain name which you want to exclude:

    <add input="{HTTP_HOST}" pattern="^domain4\.com$" negate="true" />
    Author: "OWScott" Tags: "IIS, IIS7, IIS8, URL Rewrite"
    Date: Saturday, 06 Apr 2013 15:06

    Microsoft IIS Server has what appears to be an odd default for the application pool recycle time. It defaults to 1740 minutes, which is exactly 29 hours. I’ve always been a bit curious where that default came from. If you’re like me, you may have wondered too.

    Wonder no longer! While at the MVP Summit this year in Bellevue WA I had the privilege again of talking with the IIS team. Wade Hilmo was there too. Somehow in the conversation a discussion about IIS default settings came up, which included the odd 1740 minutes for the app pool recycle interval. Wade told the story of how the setting came into being, and he granted me permission to share.

    As you can imagine, many decisions for the large set of products produced by Microsoft come about after a lot of deliberation and research. Others have a geeky and fun origin. This is one of the latter.

    The 1740 story


    Back when IIS 6 was being developed—which is the version that introduced application pools—a default needed to be set for the Regular Time Interval when application pools are automatically recycled.

    Wade suggested 29 hours for the simple reason that it’s the smallest prime number over 24. He wanted a staggered and non-repeating pattern that doesn’t occur more frequently than once per day. In Wade’s words: “you don’t get a resonate pattern”. The default has been 1740 minutes (29 hours) ever since!

    That’s a fun little tidbit on the origin of the 1740. How about in your environment though? What is a good default?

    Practical guidelines

    First off, I think 29 hours is a good default. For a situation where you don’t know the environment, which is the case for a default setting, having a non-resonant pattern greater than one day is a good idea.

    However, since you likely know your environment, it’s best to change this. I recommend setting it to a fixed time like 4:00am if you’re on the East coast of the US, 1:00am on the West coast, or whatever makes sense for your audience when you have the least amount of traffic. Setting it to a fixed time each day during low-traffic hours will minimize the impact and also allow you to troubleshoot more easily if you run into any issues. If you have multiple application pools it may be wise to stagger them so that you don’t overload the server with a lot of simultaneous recycles.
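    If you script your server setup, here’s a sketch of how to make that change with AppCmd (the app pool name and time are examples; adjust to your environment). The first command clears the 29-hour interval and the second adds a fixed daily recycle at 4:00am:

    appcmd.exe set apppool "Site001" /recycling.periodicRestart.time:00:00:00
    appcmd.exe set apppool "Site001" /+recycling.periodicRestart.schedule.[value='04:00:00']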

    Note that IIS overlaps the app pool when recycling so there usually isn’t any downtime during a recycle. However, in-memory information (session state, etc) is lost. See this video if you want to learn more about IIS overlapping app pools.

    You may ask whether a fixed recycle is even needed. A daily recycle is just a band-aid to freshen IIS in case there is a slight memory leak or anything else that slowly creeps into the worker process. In theory you don’t need a daily recycle unless you have a known problem. I used to recommend that you turn it off completely if you don’t need it. However, I’m leaning more today towards setting it to recycle once per day at an off-peak time as a proactive measure.

    My reason is that, first, your site should be able to survive a recycle without too much impact, so recycling daily shouldn’t be a concern. Secondly, I’ve found that even well behaving app pools can eventually have something sneak in over time that impacts the app pool. I’ve seen issues from traffic patterns that cause excessive caching or something odd in the application, and I’ve seen the very rare IIS bug (rare indeed!) that isn’t a problem if recycled daily. Is it a band-aid? Possibly, but if a daily recycle keeps a non-critical issue from bubbling to the top then I believe that it’s a good proactive measure to save a lot of troubleshooting effort on something that probably isn’t important to troubleshoot. However, if you think you have a real issue that is being suppressed by recycling then, by all means, turn off the auto-recycling so that you can track down and resolve your issue. There’s no black and white answer. Only you can make the best decision for your environment.

    Idle Time-out

    While on the topic of app pool defaults, there is one more that you should change with every new server deployment. The Idle Time-out should be set to 0 unless you are doing bulk hosting where you want to keep the memory footprint per site as low as possible.


    If you have just a few sites on your server and you want them to always load fast then set this to zero. Otherwise, after 20 minutes without any traffic the app pool will terminate so that it can start up again on the next visit. The problem is that the first visit to an app pool needs to create a new w3wp.exe worker process, which is slow because the app pool needs to be created, ASP.NET or another framework needs to be loaded, and then your application needs to be loaded. That can take a few seconds. Therefore I set this to 0 every chance I have, unless it’s for a server that hosts a lot of sites that don’t always need to be running.
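    If you want to script that as well, here’s a sketch with AppCmd (again, the app pool name is an example):

    appcmd.exe set apppool "Site001" /processModel.idleTimeout:00:00:00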

    There are other settings that can be reviewed for each environment but the two aforementioned settings are the two that should be changed almost every time.

    Hopefully you enjoyed knowing about the 29 hour default as much as I did, even if just for fun. Happy IISing.

    Author: "OWScott" Tags: "IIS, Performance Tuning, IIS7, Windows S..."
    Date: Tuesday, 29 Jan 2013 17:33

    Application Request Routing (ARR) is a great solution for load balancing and other proxying needs. I’m a big fan and have written often about it.

    I had someone ask me this week about WebDAV support for ARR. With his ARR setup, he noted that WebDAV creation, uploads, and downloads worked well, but renaming and moving files did not. He aptly noticed in the IIS logs that the source IP address showed as being from the ARR server rather than from the original user.

    Here’s what the IIS log recorded:

    2013-01-28 10:36:20 10.11.12.13 MOVE /Documents/test https://domain.com/Documents/test 80 User 10.9.8.7 Microsoft-WebDAV-MiniRedir/6.1.7601 400 1 0 31

    The 10.9.8.7 IP Address is an IP address on the ARR server. And yes, the real IP address has been replaced with a made-up IP to protect the innocent.
     
    To be honest, I haven’t tested WebDAV with ARR yet, but the issue sounded like the proxy-in-the-middle issue. I suggested installing ARRHelper on the web servers, and sure enough, that took care of it. While I didn’t set up a full repro myself, he confirmed that it worked for him after the ARRHelper install, so I feel quite confident in saying that WebDAV will work with ARR as long as ARRHelper is installed.

    So for anyone working with WebDAV and ARR: if it doesn’t work for you, it’s likely that ARRHelper needs to be installed on the web servers.

    You can find ARRHelper here, and if interested, I have a short video explaining what it is and why it’s important.

    Author: "OWScott" Tags: "IIS, IIS7, ARR, IIS8"
    Date: Wednesday, 16 Jan 2013 14:37

    A friend of mine asked me recently how to handle a situation with a dot (.) in the path for an MVC project.  The path looked something like this:

    http://domain.com/aspnet/CLR4.0

    The MVC routes didn’t work for this path and the standard IIS 404 handler served up the page instead. However, the following URL did work:

    http://domain.com/aspnet/CLR4.0/

    The only difference is the trailing slash. For anyone that runs into the same situation, here’s the reason and the solution.

    What causes this inconsistency

    The issue with the first path is that IIS can’t tell if the path is a file or a folder. In fact, it looks so much like a file with an extension of .0 that that’s what IIS assumes it is. However, the second path is fine because the trailing slash makes it obvious that it’s a folder.

    We can’t just assign a wildcard handler to MVC in IIS for all file types because that will break files like .css, .js, and .png, which are processed as static content.

    Note that this would not be an issue if the dot is in a different part of the path. It’s only an issue if the dot is in the last section. For example, the following would be fine:

    http://domain.com/aspnet/CLR4.0/info

    So how do we resolve it?

    As I mentioned, we can’t simply set a wildcard handler on it because it will break other necessary static files. So we have about three options:

    1. You could always change the paths in your application. If you’re early enough in the development cycle that may be an option for you.
    2. Add file handlers for all extension types that you may have. For example, add a handler for .0, another for .1, etc.
    3. Use a URL rewriter like IIS URL Rewrite to watch for a particular pattern and append the trailing slash.

    Let’s look at the second two options. I’ll mention up front that the URL Rewrite solution is probably the best bet, although I’ll cover both in case you prefer the handler method.

    Http Handler Solution

    You can handle specific extensions by adding a handler for each possible extension that you’ll support. This maps to System.Web.UI.PageHandlerFactory so that ASP.NET MVC has a handle on it and you can create an MVC route for it. This would apply to ASP.NET, PHP, or other frameworks too.

    C:\Windows\System32\inetsrv\appcmd.exe set config "Sitename" -section:system.webServer/handlers /+"[name='.0-PageHandlerFactory-Integrated-4.0',path='*.0',verb='GET,HEAD,POST,DEBUG',type='System.Web.UI.PageHandlerFactory',preCondition='integratedMode']" /commit:apphost
    C:\Windows\System32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/handlers /+"[name='.1-PageHandlerFactory-Integrated-4.0',path='*.1',verb='GET,HEAD,POST,DEBUG',type='System.Web.UI.PageHandlerFactory',preCondition='integratedMode']" /commit:apphost

    Run this appcmd command from the command prompt to create the handlers.

    Make sure to update “Sitename” with your own site name, or leave it off to make it a server-wide change. And you can update ‘*.0’, ‘*.1’ to your extensions.

    If you do create the site-level rule, make sure to save your web.config back to your source control so that you don’t overwrite it on your next site update.

    IIS URL Rewrite Solution

    Probably the best solution, and the one that my friend used in this case, is to use URL Rewrite to add a trailing slash when needed. The advantage of this is that you can use a more general pattern to redirect the URL rather than a bunch of handlers for each specific extension.

    This assumes that you have IIS 7.0 or greater and that you have URL Rewrite installed. If you’re not familiar with URL Rewrite, check out the URL Rewrite articles on my blog (start with this one).

    Note: If you want, you can skip this section and jump right to the next section for an easier way to do this using URL Rewrite’s rule wizard.

    The following rule watches for a pattern of exactly “something.{digits}” (without a trailing slash). If it finds it then it performs a redirect and appends the trailing slash. It also confirms that it’s not a file or directory that exists on disk.

    <rule name="Add trailing slash for some URLs" stopProcessing="true">
      <match url="^(.*\.(\d)+)$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Redirect" url="{R:1}/" />
    </rule>

    To apply this rule using IIS Manager, you can create a dummy rule as a placeholder and then edit web.config and replace your placeholder rule with this rule. If this still doesn’t make sense then be sure to review the articles I mentioned above. The configuration location is <configuration><system.webServer><rewrite><rules><rule>.

    An added benefit of this rule is that you’ll make the search engines happy by having just one path for each page, rather than possibly two with and without the slash. This will help with your SEO rankings.

    I didn’t think of it at the time, but just now I realized that you could use a general match url pattern of "(.*[^/])" and it will work for you too. The reason is that the check for IsFile and IsDirectory will ensure that your static files will continue to be served directly from disk, so you won’t break them. So feel free to use <match url="(.*[^/])" /> instead if you want to add the trailing slash for all paths that don’t have it.

    The Real Easy Solution (URL Rewrite wizard)

    In fact, to make this easier yet, you can use URL Rewrite’s existing rule wizard to add the trailing slash. You must apply this at the site or folder level since the trailing slash wizard doesn’t exist at the server level.

    1. From IIS Manager, open URL Rewrite at the site or folder level.
    2. Click “Add Rule(s)…” from the Actions pane.
    3. Select “Append or remove the trailing slash symbol” rule.
    4. Use the default option of “Appended if it does not exist”
    5. Press OK.

    That’s it! You’ll have a rule added which will append the trailing slash for all non-physical file or folder paths that don’t already have the trailing slash. Not only will it handle dots in the path but the search engines will be happy too.
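    For reference, the rule that the wizard writes to web.config looks roughly like this (the generated rule name may vary):

    <rule name="AddTrailingSlashRule1" stopProcessing="true">
      <match url="(.*[^/])$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Redirect" url="{R:1}/" />
    </rule>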

    As with the HTTP handler solution, if you create this as a site or folder-level rule, it will be applied to your web.config file. Make sure to update web.config in your source control so that you don’t lose your changes on your next site deployment.

    Author: "OWScott" Tags: "IIS, ASP.NET, IIS7, URL Rewrite, MVC, II..."
    Date: Tuesday, 13 Nov 2012 22:14

    IIS 8 on Windows Server 2012 doesn’t have any fixed concurrent request limit, apart from whatever limit would be reached when resources are maxed.

    However, the client version of IIS 8, which is on Windows 8, does have a concurrent connection request limitation to limit high traffic production uses on a client edition of Windows.

    Starting with IIS 7 (Windows Vista), the behavior changed from previous versions.  In previous client versions of IIS, excess requests would throw a 403.9 error message (Access Forbidden: Too many users are connected.).  Instead, Windows Vista, 7 and 8 queue excess requests so that they are handled gracefully, although there is a maximum number of requests that will be processed simultaneously.

    Thomas Deml provided a concurrent request chart for Windows Vista many years ago, but I have been unable to find an equivalent chart for Windows 8 so I asked Wade Hilmo from the IIS team what the limits are.  Since this is controlled not by the IIS team itself but rather from the Windows licensing team, he asked around and found the authoritative answer, which I’ll provide below.

    Windows 8 – IIS 8 Concurrent Requests Limit

    • Windows 8 (Basic edition): 3
    • Windows 8 Professional, Enterprise: 10
    • Windows RT: N/A, since IIS does not run on Windows RT

    Windows 7 – IIS 7.5 Concurrent Requests Limit

    • Windows 7 Home Starter: 1
    • Windows 7 Basic: 1
    • Windows 7 Premium: 3
    • Windows 7 Ultimate, Professional, Enterprise: 10

    Windows Vista – IIS 7 Concurrent Requests Limit

    • Windows Vista Home Basic (IIS process activation and HTTP processing only): 3
    • Windows Vista Home Premium: 3
    • Windows Vista Ultimate, Professional, Enterprise: 10

    Windows Server 2003, Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 allow an unlimited number of simultaneous requests.

    Author: "OWScott" Tags: "IIS, Windows Vista, IIS7, Windows 7, Win..."
    Date: Wednesday, 07 Nov 2012 14:04

    IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP header. However, there is one commonly used server variable that isn’t readily available.  That’s the protocol—HTTP or HTTPS.

    You can easily check if a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule.  There isn’t a variable available to dynamically set the protocol in the action part of the rule.  I wish there were a variable like {HTTP_PROTOCOL} with a value of ‘HTTP’ or ‘HTTPS’.  There is a server variable called {HTTPS}, but its values of ‘on’ and ‘off’ aren’t practical in the action.  You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren’t useful in the action.

    Let me illustrate.  The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.

    <rule name="Redirect to www">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="http://www.localtest.me/{R:1}" />
    </rule>

    The problem is that it forces the request to HTTP even if the original request was for HTTPS.

    Interestingly enough, I planned to blog about this topic this week when I noticed in my Twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic.  He beat me to the punch by just a couple of days.  However, I figured I would still write my blog post on this topic.  While his solution is an excellent one, I personally handle this another way most of the time.  Plus, it’s a commonly asked question that isn’t documented well enough on the web yet, so having another article on the web won’t hurt.

    I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four.  Don’t let the choices overwhelm you though.  Let’s keep it simple, Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Option 3 and Option 4 need only be considered if you have a more unique situation.  All four options will work for most situations.

    Option 1 – CACHE_URL, single rule

    There is a server variable that has the protocol in it: {CACHE_URL}.  This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5).  All we need to do is extract the HTTP or HTTPS and we’ll be set. This tends to be my preferred way to handle this situation.

    Indeed, Jeff did briefly mention this in his blog post:

    … you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical “or” match for conditions.

    Thus the problem.  If you have multiple conditions set to “Match Any” rather than “Match All” then this option won’t work.  However, I find that 95% of all rules that I write use “Match All” and therefore, being the lazy administrator that I am I like this simple solution that only requires adding a single condition to a rule.  The caveat is that if you use “Match Any” then you must consider one of the next two options.

    Enough with the preamble.  Here’s how it works.  Add a condition that checks for {CACHE_URL} with a pattern of “^(.+)://”.


    Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS.  In URL Rewrite 2.0 or greater you can check “Track capture groups across conditions”, make that condition the first condition, and you have yourself a back-reference of {C:1}.

    The “Redirect to www” example with support for maintaining the protocol, will become:

    <rule name="Redirect to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions trackAllCaptures="true">
        <add input="{CACHE_URL}" pattern="^(.+)://" />
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
    </rule>

    It’s not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it’s pretty close.

    I also like this option since I often create rule examples for other people and this type of rule is portable since it’s self-contained within a single rule.

    Option 2 – Using a Rewrite Map

    For a safer rule that works for both “Match Any” and “Match All” situations, you can use the Rewrite Map solution that Jeff proposed.  It’s a perfectly good solution with the only drawback being the ever so slight extra effort to set it up since you need to create a rewrite map before you create the rule.  In other words, if you choose to use this as your sole method of handling the protocol, you’ll be safe.

    After you create a Rewrite Map called MapProtocol, you can use “{MapProtocol:{HTTPS}}” for the protocol within any rule action.  Following is an example using a Rewrite Map.

    <rewrite>
      <rules>
        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" 
            url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
        </rule>
      </rules>
      <rewriteMaps>
        <rewriteMap name="MapProtocol">
          <add key="on" value="https" />
          <add key="off" value="http" />
        </rewriteMap>
      </rewriteMaps>
    </rewrite>

    Option 3 – CACHE_URL, Multi-rule

    If you have many rules that will use the protocol, you can create your own server variable which can be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier to remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}.

    The potential issue with this rule is that if you don’t have access to the server level (e.g. in a shared environment) then you cannot set server variables without permission.

    First, create a rule and place it at the top of the set of rules.  You can create this at the server, site or subfolder level.  However, if you create it at the site or subfolder level then the HTTP_PROTOCOL server variable needs to be approved at the server level.  This can be achieved in IIS Manager by navigating to URL Rewrite at the server level, clicking on “View Server Variables” from the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level then this step is not necessary.
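    If you prefer to script that approval, here’s a sketch of the equivalent AppCmd command (run from an elevated prompt):

    appcmd.exe set config -section:system.webServer/rewrite/allowedServerVariables /+"[name='HTTP_PROTOCOL']" /commit:apphost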

    Following is an example of the first rule to create the HTTP_PROTOCOL and then a rule that uses it.  The Create HTTP_PROTOCOL rule only needs to be created once on the server.

    <rule name="Create HTTP_PROTOCOL">
      <match url=".*" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{CACHE_URL}" pattern="^(.+)://" />
      </conditions>
      <serverVariables>
        <set name="HTTP_PROTOCOL" value="{C:1}" />
      </serverVariables>
      <action type="None" />
    </rule>
     
    <rule name="Redirect to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" />
    </rule>

    Option 4 – Multi-rule

    Just to be complete I’ll include an example of how to achieve the same thing with multiple rules. I don’t see any reason to use it over the previous examples, but I’ll include an example anyway.  Note that it will only work with the “Match All” setting for the conditions.

    <rule name="Redirect to www - http" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
        <add input="{HTTPS}" pattern="off" />
      </conditions>
      <action type="Redirect" url="http://www.localtest.me/{R:1}" />
    </rule>
    <rule name="Redirect to www - https" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
        <add input="{HTTPS}" pattern="on" />
      </conditions>
      <action type="Redirect" url="https://www.localtest.me/{R:1}" />
    </rule>

    Conclusion

    Above are four working examples of methods to call the protocol (HTTP or HTTPS) from the action of a URL Rewrite rule.  You can use whichever method you most prefer.  I’ve listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice.  In any of the cases, hopefully you can use this as a reference for when you need to use the protocol in the rule’s action when writing your URL Rewrite rules.

    Further information:

    Author: "OWScott" Tags: "ARR, IIS, IIS7, IIS8, URL Rewrite"
    Date: Wednesday, 18 Jul 2012 13:40

    While the dreaded Blue Screen of Death (BSOD) occurs less frequently with newer versions of Windows than it did in years past, there are still times when the BSOD reveals itself. 

    I just ran into four BSODs on two Windows Server 2012 machines and had the ‘opportunity’ to analyze a memory.dmp file today, so I thought I would post quick instructions on how to get a handy summary of the memory dump.

    I’ve had this “I Found a Fix” debugging page bookmarked for years and I’ve used it many times, so I need to give full credit to ifoundafix for their helpful steps.  The only change I have below is to include updated paths.

    It’s possible to debug remotely, and you may have requirements to do that.  My quick instructions here are for local debugging.  The debugging tools are very stable and if you install just what you need then they are small and a quick install, so running this on a production machine is generally safe, but you must make that decision for your particular environment.

    This can be accomplished with 7 easy steps:

    Step 1. Obtain and install the debugging tools.  The links do change over time, but the following link is currently an exhaustive page which includes Windows Server 2012 and Windows 8 Consumer debugger tools, Windows 7, Vista, XP and Windows Server 2003.

    Debugging Tools for Windows

    All you need to install is the “Install Debugging Tools for Windows as a Standalone Component (from Windows SDK)” and during the install only select "Debugging Tools for Windows".  Everything else is used for more advanced troubleshooting or development, and isn’t needed here.  Today I followed the link to “Install Debugging Tools for Windows as a Standalone Component (from Windows SDK)” although for a different OS you may need to follow a different link.

    Step 2. From an elevated command prompt navigate to the debugging folder. For me with the latest tools on Windows Server 2012 it was at C:\Program Files (x86)\Windows Kits\8.0\Debuggers\x64\.  You can specify the path during the install.

    Step 3. Type the following:

    kd -z C:\Windows\memory.dmp (or the path to your .dmp file)

    Step 4. Type the following:

    .logopen c:\debuglog.txt

    Step 5. Type the following:

    .sympath srv*c:\symbols*http://msdl.microsoft.com/download/symbols

    Step 6. Type the following:

    .reload;!analyze -v;r;kv;lmnt;.logclose;q

    Step 7. Review the results by opening c:\debuglog.txt in your favorite text editor.  Searching for PROCESS_NAME: will show which process had the fault.  You can use the process name and other information from the dump to find clues and answers in a web search.  Usually the fault is with a hardware driver of some sort, but there are many things that can cause crashes, so the actual analyzing of the dump may take some research.
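    As a side note, if you’d rather not type the commands interactively, Steps 3 through 6 can roughly be combined into a single invocation (assuming the same paths as above):

    kd -z C:\Windows\memory.dmp -logo c:\debuglog.txt -y "srv*c:\symbols*http://msdl.microsoft.com/download/symbols" -c "!analyze -v;r;kv;lmnt;q"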

    Oftentimes a driver update will fix the issue.  If the summary doesn’t offer enough information then you’ll need to dig further into the debugging tools or open a CSS case with Microsoft.  The steps above will provide you with a mostly-human-readable summary report from the dump.  There is much more information available in the memory dump, although it gets exponentially more difficult to track down the details the further you get into Windows debugging.

    Hopefully these quick steps are helpful for you as you troubleshoot the unwelcome BSOD.

    Author: "OWScott" Tags: "Troubleshooting, Windows Server, Windows..."
    Date: Saturday, 19 May 2012 03:29

    It’s hard to believe that it’s been 10 years since my first day at OrcsWeb. Today is my last official day, but I’ll still be close by. I have a number of ties here, including being a customer through Vaasnet.

    So much has changed in this time. Ten years ago I began working for OrcsWeb from Canada. Nine years ago I moved my family down here to North Carolina and assumed the role of Director of Technology.  I was able to be a part of the company as it grew in staff, servers, customers, and reputation. I feel honored to have been a part of OrcsWeb during these exciting years.

    During my time at OrcsWeb I have been given opportunities to attend conferences, meet and become friends with top technical experts in the field, write articles, co-author two books, and speak at conferences and code camps. It was through OrcsWeb that I was given opportunities to be active in the community, to become a Microsoft MVP and an ASPInsider.

    I’m grateful to Brad and Karla Kingsley who have always treated me like more than an employee. They have always encouraged me to grow and to pursue my dreams.

    I’m thankful to Jeff Graves, who has been accommodating to my evolving schedule and less-than-full-time availability.  And in terms of technical smarts, Jeff tops the list!  I’m also thankful for the rest of the team at OrcsWeb, who are experts in the field and with whom it’s always been a privilege to work.

    Moving forward, I have two main focuses. I’ll be able to spend more time on Vaasnet (a company I co-founded with Jeff Widmer) to see the company position itself further in the market and to strengthen both the product and the brand.  Additionally, I’m working on a part-time basis with Dynamicweb, an established CMS and eCommerce company in Europe that is just moving into the US market. Dynamicweb has a strong product already and I’m excited to work with the leadership team in the US. Expect to see more of Dynamicweb in the coming months and years.

    I just want to reiterate a big thanks to OrcsWeb for helping write such an important chapter in my life. And it’s with excitement that I look forward to the next chapter of my life.

    Author: "OWScott"
    Date: Monday, 14 May 2012 15:59

    Save this URL, memorize it, write it on a sticky note, tweet it, tell your colleagues about it! 

    localtest.me (http://localtest.me)

    and

    *.localtest.me (http://something.localtest.me)

    If you do any testing on your local system you’ve probably created hosts file entries (c:\windows\system32\drivers\etc\hosts) for different testing domains and had them point back to 127.0.0.1.  This works great but it requires just a bit of extra effort.
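    For example, a hosts file for local testing tends to accumulate entries like these, one per test domain (the names are just illustrations):

    127.0.0.1    site1.dev.local
    127.0.0.1    site2.dev.local
    127.0.0.1    api.myproject.local

    With localtest.me, none of those entries are needed.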

    This localtest.me trick is so obvious, so simple, and yet so powerful.  I wouldn’t be surprised if there are other domain names like this out there, but I haven’t run across them yet so I just ordered the domain name localtest.me which I’ll keep available for the internet community to use.

    Here’s how it works. The entire domain name localtest.me—and all wildcard entries—point to 127.0.0.1.  So without any changes to your hosts file you can immediately start testing with a local URL.

    Examples:

    http://localtest.me
    http://newyork.localtest.me
    http://mysite.localtest.me
    http://redirecttest.localtest.me
    http://sub1.sub2.sub3.localtest.me

    You name it, just use any *.localtest.me URL that you dream up and it will work for testing on your local system.

    This was inspired by a trick that Imar Spaanjaars introduced me to. He created a loopback wildcard URL with his company domain name.  I took this one step further and ordered a domain name just for this purpose.

    I would have liked to order localhost.com or localhost.me but those domain names were taken. So to help you remember, just remember that it’s ‘localtest’ and not ‘localhost’, and it’s ‘.me’ rather than ‘.com’.

    I can’t track usage since the domain name resolves to 127.0.0.1 and never passes through my servers, so this is just a public tool which I’ll give to the community. I hope it gets used. And, since I can’t really use the domain name to explain itself, please spread the word and tell others about it.

    Some examples on how to use it would include:

    • Creating websites on your dev machine.  site1.localtest.me, site2.localtest.me, site3.localtest.me.
    • Great for URL Rewrite (IIS) or mod_rewrite (Apache) testing: redirect.localtest.me, failuretest.localtest.me, subdomain.localtest.me, city1.localtest.me.
    • Any testing on your local system where a friendly URL would be useful.

    I hope you enjoy!

    Author: "OWScott" Tags: "IIS, IIS7, Windows Server, Windows 7, Tr..."
    Date: Friday, 20 Apr 2012 01:24

    You can find this week’s video here.

    This week answers two Q&A questions from viewers: DNS Load Balancing, followed by some discussion and a walkthrough of using Application Request Routing (ARR) for a Content Delivery Network (CDN).

    There’s a growing movement towards Content Delivery Networks (CDNs): fronting web farms and geographically dispersing websites. This week I continue with Q&As from viewers, taking questions on DNS Load Balancing and CDNs.

    Question 1:

    I would love to see some clever DNS load balancing (not sure what capability windows is offering). Flesik

    Question 2a:

    I would love to see an end-2-end CDN redundant network setup (DNS balancing, ARR nodes, parent nodes, etc). Flesik

    Question 2b:

    I would be interested in seeing a series on building an ecdn or ecn using ARR and what the best practices would be to scale it out geographically.

    It seems ARR is sold that way... really not sold but talked about. I have tried to put my theory out and try it but I just don't know the best way to route my clients to their designated locations. Can you help out with an awesome weblog maybe? Adam

     

    The following URL is the one I mentioned in the video: http://learn.iis.net/page.aspx/649/deploying-application-request-routing-in-cdn

    In this week’s video we look at DNS load balancing and geo-location issues that Google faces by using DNS to determine a user’s location. We also take a look at using Microsoft Application Request Routing (ARR) to create a CDN.

    This is week 50 of a 52 week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/

    You can find this week’s video here.

    Author: "OWScott" Tags: "IIS, IIS7, ARR, Web Pro Series, DNS"
    Date: Tuesday, 27 Mar 2012 13:37

    You can find this week’s video here.

    This week I'm taking Q&A from viewers, starting with what's new in IIS8, a question on enable32BitAppOnWin64, performance settings for ASP.NET, the ARR Helper, and Indexing Services.

    This week we look at five topics.

    Pre-topic:
We take a look at the new features in IIS 8. Last week Internet Information Services (IIS) 8 Beta was released to the public. This week's video touches on the upcoming features in the next version of IIS. Here’s a link to the blog post which was mentioned in the video.
    Question 1:

In a number of places (http://learn.iis.net/page.aspx/201/32-bit-mode-worker-processes/, http://channel9.msdn.com/Events/MIX/MIX08/T06), I've seen that enable32BitAppOnWin64 is recommended for performance reasons. I'm guessing it has to do with memory usage... but I never could find a detailed explanation of why this is recommended (even Microsoft books are vague on this topic; they just say to do it, but provide no reason why it should be done). Do you have any insight into this? (Predrag Tomasevic)
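For anyone who wants to experiment with this themselves, the setting is per application pool. A minimal sketch using the WebAdministration module (the pool name is an example); the usual reasoning is that 32-bit worker processes use smaller pointers and therefore have a smaller memory footprint:

Import-Module WebAdministration

# Run this pool's worker process as 32-bit (WOW64) on a 64-bit OS.
Set-ItemProperty "IIS:\AppPools\DefaultAppPool" -Name enable32BitAppOnWin64 -Value $true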

    Question 2:

    Do you have any recommendations on modifying aspnet.config and machine.config to deliver better performance when it comes to "high number of concurrent connections"? I've implemented recommendations for modifying machine.config from this article (http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx - ASP.NET Process Configuration Optimization section)... but I would gladly listen to more recommendations if you have them. (Predrag Tomasevic)

    Question 3:

Could you share more of your experience with the ARR Helper? I'm specifically interested in configuring the ARR Helper (for example, how to accept X-Forwarded-For only from certain IPs (proxies you trust)). (Predrag Tomasevic)

    Question 4:

    What is the replacement for indexing service to use in coding web search pages on a Windows 2008R2 server? (Susan Williams)

    Here’s the link that was mentioned: http://technet.microsoft.com/en-us/library/ee692804.aspx

    This is now week 49 of a 52 week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/

    You can find this week’s video here.

    Author: "OWScott" Tags: "IIS, Performance Tuning, Web Pro Series,..."
    Date: Thursday, 01 Mar 2012 20:07

With the beta release of Windows Server 8 today, Internet Information Services (IIS) 8 is available to the public for testing, and even for production workload testing.  Many system administrators have been anxious to kick the tires and find out which features are coming.

    I’ll include a high level overview of what we will see in the upcoming version of IIS.  The focus with this release of IIS 8 is on the large scale hoster.  There are substantial performance improvements to handle thousands of sites on a single server farm—with ease.  Everything that I mention below is available for download and usage today.

Forgive me if there are typos.  I’m writing this at the MVP Summit in Seattle while trying to listen to another session at the same time.  Thanks to the IIS team, who gave detailed demos on this yesterday and gave me permission to talk about it.

    Real CPU Throttling

Previous versions of IIS have CPU throttling, but it doesn’t do what most of us want.  When a site reaches the CPU threshold, the site is turned off for a period of time before it is allowed to run again.  This protects the other sites on the server, but it isn’t a welcome action for the site in question, since the site breaks rather than just slowing down.

Finally, in IIS 8 there are kernel-level changes to support real CPU throttling.  There are now two new actions for sites that reach the CPU threshold: Throttle and Throttle under load.  If you used WSRM to achieve this in the past, you no longer need to do so, and the functionality is improved over what was available with WSRM.

    The throttle feature will keep the CPU for a particular worker process at the level specified.  Throttling isn’t applied to just the primary worker process, but it also includes all child processes, if they happen to exist.

    The Throttle under load feature will allow a site to use all possible CPU if it’s available while throttling the worker process if the server is under load.

The throttling is based on the user and not specifically on the application pool. This means that if you use a dedicated user on more than one app pool, then it throttles all of the app pools sharing the same user identity. Note that the application pool identity user is unique, so if you use the app pool identity user (which is common) then each app pool will be throttled individually.
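As a sketch of what the new settings might look like from PowerShell (assuming the IIS 8 schema; the pool name and numbers are examples):

Import-Module WebAdministration

# cpu.limit is in 1/1000ths of a percent, so 20000 = 20% of total CPU.
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name cpu.limit -Value 20000

# Throttle only when the server is under load; use "Throttle" for a hard cap.
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name cpu.action -Value "ThrottleUnderLoad"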

    This is a welcome new feature and is nicely implemented.

    SSL Scalability

    Unless you deal with large scale site hosting with many SSL certificates you may not have realized that there is room for improvement in this area. 

    Previous versions of IIS have limited secure site density.  Each SSL site requires its own IP address and after adding a few SSL sites, startup performance becomes slow and the memory demand is high.  Every certificate is loaded into memory on the first visit to an SSL site which creates a large memory footprint and a long delay on the first load. 

    In IIS 8 the SSL certificate count is easily scalable to thousands of secure sites per machine with almost instantaneous first-loads.  Only the certificate that is needed is loaded and it will unload after a configurable idle period.  Additionally, enumerating or loading huge numbers of certificates is substantially improved.

    SNI / SSL Host Header Support

Using host headers and a shared IP address with SSL certificates has always been problematic.  IIS 8 now offers Server Name Indication (SNI) support, which allows many SSL sites to share the same IP.  SNI is a fairly new extension (introduced within the last few years) that allows host headers to work with SSL. It does this by carrying the target host name in the TLS handshake rather than in the encrypted part of the packet.

    IIS 8 makes SNI support a first class citizen in the site bindings.
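For example, creating an SNI binding from PowerShell might look like this sketch (assuming the updated New-WebBinding in IIS 8; the site and host name are examples):

Import-Module WebAdministration

# SslFlags 1 = SNI, so several HTTPS sites can share the same IP and port 443.
New-WebBinding -Name "MySite" -Protocol https -Port 443 -HostHeader "www.example.com" -SslFlags 1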

Note that SNI doesn't work in all browsers. For example, Internet Explorer on Windows XP does not support SNI.  Read more about that in Eric Law's blog post. Over 85% of browsers in use today support SNI, but since it's not 100%, it will not work universally. However, like the adoption issue with host headers in the '90s, it will be fully supported before we know it. More details, with a list of browsers, can be found here: http://en.wikipedia.org/wiki/Server_Name_Indication

This sets the stage for sharing IP addresses, which is increasingly important as IPv4 addresses become more valuable and consolidation of IPs becomes the trend.

    SSL Manageability - Central Certificate Store (CCS)

In IIS 7, managing SSL is labor intensive, particularly for server farms.  All certificates must be imported on every machine in the farm.  When scaling out, you must account for the time needed to import certificates on new servers, even on small server farms.  In previous versions, keeping certificates in sync between servers is difficult to manage and often requires manual steps.

In IIS 8 there is a new Central Certificate Store (CCS), which allows storing certificates on a central file share instead of on each machine.  You can point the servers to a single network share, or use replication like DFS-R to sync the folders between machines.

Renewal and syncing is as simple as xcopying .pfx files to the location that you specify when enabling CCS on the web server.  Enabling CCS is straightforward too; it works very similarly to enabling Shared Configuration.

CCS complements the SNI functionality to support sites with multiple certs and a single IP.

The mapping of bindings to certificates uses a bit of magic: convention rather than configuration. This is important for extremely large lists of certificates, since you no longer need to select them from a huge list. The value of the host header needs to match the name of the cert, so your CCS folder will have many .pfx files with names that match the domain names.  Basically, the name of the .pfx file in the certificate store is the primary key.

    If you use a wildcard cert then it needs to be named _.domain.com.pfx.

    As you would assume, there is support for Multiple Domain Certificates (Unified Communications Certificate [UCC]). If you use multiple domain certificates using the subjectAltName feature of the certificate then you just create multiple copies of the pfx, one for each subjectAltName.
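Sketched out with hypothetical paths, populating a CCS share is nothing more than naming the .pfx files after the host names they serve:

$ccs = "\\fileserver\CentralCertShare"   # hypothetical central share

Copy-Item .\www.example.com.pfx (Join-Path $ccs "www.example.com.pfx")
Copy-Item .\wildcard.pfx        (Join-Path $ccs "_.example.com.pfx")    # wildcard cert
# A UCC/SAN cert gets one copy per subjectAltName:
Copy-Item .\ucc.pfx             (Join-Path $ccs "mail.example.com.pfx")
Copy-Item .\ucc.pfx             (Join-Path $ccs "autodiscover.example.com.pfx")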

Note that you can still use the old method, which binds by certificate identifier, and it works the same as it did in the past.

Furthermore, there is a neat feature for the central repository that groups certificates by expiration date ("Today / This Week / Next Week / Next Month / Later"), which is handy for seeing which certificates are about to expire.

These certificate changes make for a powerful solution for large scale web farm hosting with multiple tenants.

    Dynamic IP Restrictions

Information about this is already available on the web, and it's moving along and getting closer to the final release.

    FTP Logon Restriction

Yay, a new FTP IP Restrictions module is coming! This is similar in concept to Dynamic IP Restrictions for HTTP. One of the key differences is that it does gray listing rather than black listing: when someone is blocked, they are only blocked for the sample period (e.g. 30 seconds). This is nice because it's enough to thwart or slow brute force and common username/password attacks, but legitimate users who enter invalid credentials can continue to attempt to log in without waiting for long periods of time.

    What's extra nice about having this feature is that you can set it slightly more sensitive than your domain username lockout policy so that brute force attacks don't cause your username to be locked out from too many invalid attempts. The FTP IP Restrictions can throttle the hack attempts without locking out your domain users.
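As a sketch, and with the caveat that I'm going from the beta schema here (the denyByFailure element name and the numbers below are my assumption, not final documentation), the server-level setting can be scripted with appcmd:

# Assumed IIS 8 schema: 4 failures within the 30-second window earns a gray-listing.
& "$env:windir\system32\inetsrv\appcmd.exe" set config `
    -section:system.ftpServer/security/authentication `
    /denyByFailure.enabled:true `
    /denyByFailure.maxFailure:4 `
    /denyByFailure.entryExpiration:"00:00:30" `
    /commit:apphost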

    Application Initialization Module

Previously known as the Application Warm-Up module, which was pulled for a time, it's now ready in full force as the Application Initialization Module.

This allows spinning up sites and pages before traffic arrives, and handling requests in a friendly way while the application first loads. It's not uncommon for a site to take a minute or longer on the first load (yes, SharePoint admins, we feel your pain).  This lets you protect the end user from being the person who triggers that first slow load.

    It's possible to set a warm-up page at the server level as a single setting, or you can use powerful URL Rewrite rules for more flexibility.

    You can also ensure that your load balancer’s health test page doesn’t serve up a valid response until the site is fully initialized according to your preferences.  Then the load balancer will bring a node into rotation only after the entire warm-up has completed.
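The two halves of the feature, sketched from PowerShell (assuming the IIS 8 schema; the pool and site names are examples): the app pool must start with the service, and the application inside it must preload.

Import-Module WebAdministration

# Start the worker process with WAS instead of waiting for the first request...
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name startMode -Value AlwaysRunning

# ...and have the application initialize as soon as the process starts.
Set-ItemProperty "IIS:\Sites\MySite" -Name applicationDefaults.preloadEnabled -Value $true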

    Configuration Scale

IIS now handles very large configuration files (e.g. applicationHost.config) with ease.  There are substantial performance improvements in the upcoming version. Only administrators with large numbers of sites on the same server or server farm (think thousands) would have noticed the previous limits, but the new changes pave the way for huge scale.

    Web Sockets

It’s important to include Web Sockets in this list too.  Apart from some brief information, I really haven’t looked into Web Sockets in detail yet, so I’ll just include a great link from Paul Batum on it.  Web Sockets does require Windows 8 or later on the server side.

    All in all these are welcome changes.  While previous versions of IIS already did a great job of handling massive amounts of traffic, IIS 8 now can handle thousands (or tens of thousands) of sites and their extensive configurations on a single server farm.  With HTTP and FTP logon restrictions, CPU throttling, the Application Initialization Module, and large scale SSL and configuration improvements, IIS 8 brings a number of welcome improvements.

    Author: "OWScott" Tags: "IIS, IIS8, Windows Server, Windows Serve..."
    Date: Tuesday, 21 Feb 2012 15:58

    You can find this week’s video here.

    This lesson covers ways to troubleshoot IIS FTP. When it works, it works well, but if you run into issues getting an FTP account working it can sometimes be difficult to resolve. This video will help you understand some helpful tricks and it will walk you through ways to isolate and resolve the issue.

    Over the last five weeks we’ve been looking at IIS FTP. See the list below to jump to a specific FTP topic.  This week we explore some troubleshooting techniques and review the following FTP connectivity stack.

    • DNS Resolution/Network Connectivity
• Firewall Access (Active/Passive, secure or not)
    • IIS Bindings
    • Authentication
    • Authorization
    • Isolation Mode / File paths
    • NTFS Permissions
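A few quick commands cover the first layers of that stack; this is just a sketch, and the host name is hypothetical:

nslookup ftp.example.com                 # DNS resolution
telnet ftp.example.com 21                # network path and firewall to the control channel
                                         # (the telnet client is an optional Windows feature)

# On the server, confirm the FTP bindings exist:
& "$env:windir\system32\inetsrv\appcmd.exe" list sites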

There were two external resources which I referenced in the video.

This is now week 48 of a 52 week series for the web pro, and it is the final week of a 5-week mini-series on IIS FTP.

    You can find this week’s video here.

    Author: "OWScott" Tags: "IIS, FTP, IIS7, Web Pro Series"
    Date: Monday, 13 Feb 2012 14:28

    You can find this week’s video here.

    Have you ever wondered what FTP Active mode or Passive mode means? Do you have a good understanding of the FTP data channel or control channel? It can be difficult to fully understand FTP, which firewall ports to enable, and how to navigate the two communication channels. This lesson will hopefully clear up these questions and more.

    This week’s video lesson takes a deep dive into FTP Active vs. Passive modes. As part of this you’ll get a chance to see the various modes in action, see what the traffic looks like in Wireshark, see exact firewall rules, learn about stateful FTP, find out about Explicit FTPS and Implicit FTPS, and learn about the FTP data channel and control channels.

    This week's video lesson is the 4th of a 5-week mini-series on IIS FTP. The five weeks include:

    • Week 1: IIS FTP Basics
    • Week 2: IIS FTP and IIS Manager Users
    • Week 3: IIS FTP and User Isolation
    • Week 4: IIS FTP Firewall settings, Active vs. Passive
    • Week 5: IIS FTP Troubleshooting plus FTP Host Headers

    This is now week 47 of a 52 week series for the web pro, and the 4th of a 5 week mini-series on IIS FTP. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/

    You can find this week’s video here.

    Author: "OWScott" Tags: "IIS, FTP, IIS7, Web Pro Series"
    Date: Saturday, 04 Feb 2012 04:11

Today I wanted to find a way to flush the IIS FTP logs on demand.  The IIS FTP logs flush to disk every 6 minutes, and the HTTP logs every 1 minute (or every 64 KB).  This can make troubleshooting difficult when you don’t have immediate access to the latest log data.

After looking everywhere I could think of, from search engines to perusing the IIS schema files, I figured I had better go to the source and ask Robert McMurray.

    Sure enough, Robert had the answer and even wrote a blog post in response to my question with code examples for four scripting/programming languages (C#, VB.NET, JavaScript, VbScript).

    There is not a netsh or appcmd solution though, so the scripting or programming options are the way to do it.  Actually, you can also flush the logs by restarting the Microsoft FTP Service (ftpsvc) but, as you would assume, it will impact currently active FTP sessions.

    This blog post serves three purposes. 

    1. It’s a reference pointing to Robert’s examples
    2. I’ll include how to do the same for the HTTP logs
    3. I’ll provide a PowerShell example which I based on Robert’s examples

    1. The reference is mentioned above already, but to give me something useful to write in this paragraph, I’ll include it again. Programmatically Flushing FTP Logs.

    2. For HTTP there is a method to flush the logs using netsh.

    netsh http flush logbuffer

    This will immediately flush the HTTP logs for all sites.

3. The FTP logs can be flushed from PowerShell too.  Here’s a script which is the PowerShell equivalent of Robert’s examples.  Just update $siteName, or pass it as a parameter to the script.

Param($siteName = "Default Web Site")

#Load Microsoft.Web.Administration (MWA) and create a ServerManager
[System.Reflection.Assembly]::LoadFrom("C:\windows\system32\inetsrv\Microsoft.Web.Administration.dll") | Out-Null
$serverManager = New-Object Microsoft.Web.Administration.ServerManager

$config = $serverManager.GetApplicationHostConfiguration()

#Get the sites collection
$sitesSection = $config.GetSection("system.applicationHost/sites")
$sitesCollection = $sitesSection.GetCollection()

#Find the site by name
$site = $null
foreach ($item in $sitesCollection){
    if ($item.Attributes.Item("Name").Value -eq $siteName){
        $site = $item
    }
}

#Validation
if ($site -eq $null) {
    Write-Host "Site '$siteName' not found"
    return
}

#Flush the logs for the site's FTP server, if FTP is configured
$ftpServer = $site.ChildElements.Item("ftpServer")

if (!($ftpServer.ChildElements.Count)){
    Write-Host "Site '$siteName' does not have FTP bindings set"
    return
}

$ftpServer.Methods.Item("FlushLog").CreateInstance().Execute()

I hope one of these programming/scripting options comes in handy for times when you want immediate access to the latest FTP log data.

    Author: "OWScott" Tags: "IIS, IIS7, Windows Server"
    Date: Monday, 23 Jan 2012 16:34

    You can find this week’s video here.

    I’ve been looking forward to releasing this week’s video.  IIS FTP User isolation is an interesting topic because it offers a lot of power and flexibility but it’s not very intuitive because of how it’s managed.

    This week we walk through the five isolation modes to gain a full understanding of the IIS FTP method of configuration for user isolation.

IIS FTP is a powerful application, but some of the flexibility is hidden behind a unique convention-based method of management. It’s easy to miss the fact that IIS FTP allows multiple users to be directed to different folders and be fully isolated from each other. For example, you can have a designer1 who has access to the whole site while designer2 has access to just project1, and, if you set it up correctly, you can feel confident that designer2 can’t gain more access than they are allowed.

    IIS FTP requires understanding a few core principles to manage it effectively and to ensure that you don’t overlook key security settings that would allow users to gain more access than they should. IIS FTP 7.5 offers five different isolation modes, each of which targets a different situation.
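As a hedged sketch of one of those modes (assuming the ftpServer.userIsolation.mode attribute; the site name and users are examples), full isolation pairs a site setting with a convention-based folder layout:

Import-Module WebAdministration

# Lock each user into his or her own folder, with no way to traverse upward.
Set-ItemProperty "IIS:\Sites\MyFtpSite" -Name ftpServer.userIsolation.mode -Value IsolateAllDirectories

# The matching folder convention: <ftp root>\LocalUser\<username>
New-Item -ItemType Directory -Path "C:\inetpub\ftproot\LocalUser\designer1"
New-Item -ItemType Directory -Path "C:\inetpub\ftproot\LocalUser\designer2"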

    This is now week 46 of a 52 week series for the web pro, and the 3rd of a 5 week mini-series on IIS FTP. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/

    Also, if you’re reading this early enough, I’m taking questions for the last couple weeks of the series.  Read more about it here.

    You can find this week’s video here.

    Author: "OWScott" Tags: "IIS, FTP, IIS7, Web Pro Series"