Date: Monday, 03 Mar 2014 19:37

I was asked a question recently regarding Windows authentication and NTFS permissions. Here’s the question:

When I run an IIS site using Windows Authentication, is there a way to let the Application Pool account access files on disk instead of the logged-in user? Ideally, the developer (or operations) could list the users in web.config only, without the need to add these users to the file permissions or add them to some AD group that has these permissions.

Unless you work with these permissions often, it may be difficult to understand the situation well, so let me explain this another way.

To have a properly secured server, you should use the principle of least privilege, which essentially says that you should grant only what is absolutely required to enable a service to work, and nothing more. If you do this properly then you should have a tight list of permissions on disk for your website.

The difficulty comes when you use Windows authentication—rather than anonymous authentication—to grant access to a website, or a part of a website. What if you want to use IIS’s URL Authorization to manage access rather than NTFS?

Keep reading and I’ll explain further. First let’s gain more of an understanding on how IIS security works.

Basic permissions required for anonymous access

When you use anonymous access, a clean setup will implement the following settings:

  • Anonymous authentication for the website should be set to use the Application pool identity
    image
  • Permissions on disk should be granted to:
    • SYSTEM: Full control
    • Administrators: Full control
    • Application pool identity: Read or Modify, depending on your requirements; see the command-line example after this list. (It’s useful to use the built-in AppPoolIdentity account if you are only accessing local resources.)
    • Users required for FTP, web publishing or any other access to content on disk.
      image
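
For example, granting the application pool identity Modify rights from the command line might look like the following. This is only a sketch; the content path (C:\inetpub\Site001) and app pool name (Site001) are placeholders for your own values.

icacls "C:\inetpub\Site001" /grant "IIS AppPool\Site001:(OI)(CI)(M)"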

If you can achieve this (and you should!), the content on disk will be accessible only to that one site, which minimizes the attack surface if another site on the same server is untrusted, or if it is exploited.

Basic permissions required for Windows authentication

However, what if you want to use Windows auth to grant or deny users access to your site based on their Windows accounts?

First, you would turn off anonymous authentication so that users are required to authenticate with a Windows account.
image

There are now two options for the authorization part—which is to determine which Windows accounts are allowed and which are not:

  1. NTFS: Depend on the NTFS permissions (ACLs) on disk to determine which users have access (e.g. User1 is granted access but User2 isn’t). If you grant a user access on disk then they can access the site. If they do not have access then … well, they don’t have access.
  2. URL Authorization: Use IIS and/or ASP.NET’s URL Authorization. By default all users are granted access, but you can change this. Following is an example which has the default Allow All Users removed, and User1, User2, and the Administrators group granted access.
    image
    These settings are saved to the site’s web.config file, so you can set them manually too (a sketch follows this list), and of course you can set them at the server level or other places in the IIS configuration hierarchy.
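
Here is a minimal sketch of what that URL Authorization section might look like in web.config. The accounts (User1, User2, and the Administrators group) simply mirror the example above; substitute your own users and roles.

<system.webServer>
  <security>
    <authorization>
      <remove users="*" roles="" verbs="" />
      <add accessType="Allow" users="User1, User2" />
      <add accessType="Allow" roles="Administrators" />
    </authorization>
  </security>
</system.webServer>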

When using the URL Authorization method, you would still need to grant access on disk to the Windows account (e.g. User1), basically meaning that options #1 and #2 are both used simultaneously.

Back to the original question

Let’s get back to the original question. What if you don’t want to have to grant the Windows accounts access on disk (#1 above) but you want to use URL Authorization (#2 above) to authorize Windows accounts access to your site?

Or, to word it another way, what if you want to use #2 without having to worry about #1 too?

Which users require access to disk?

This is possible, but let me step aside again and briefly explain how access to disk is determined.

The website accesses the disk through the w3wp.exe worker process, which is essentially the application pool. The identity set for that app pool (e.g. IIS AppPool\Site001) is used in some situations on disk. In the anonymous access situation mentioned above, the site is configured to always use the application pool identity for anonymous users, so in that case you only need to grant the application pool identity access to disk (SYSTEM and Administrators are beneficial for other usage, but not for actually running the site).

When using Windows authentication, the application pool identity (e.g. IIS AppPool\Site001) is used for some access, while the Windows account (e.g. User1) is used for other access. It depends on the impersonation settings of the application or framework that you’re using. Therefore, you would generally need to grant access to the application pool identity, plus every Windows account (e.g. User1, User2, User99) which needs access to your site.

Back yet again to the original question

Let me try again to answer the original question, now that we’ve covered the aforementioned concepts.

What if you don’t want to have to maintain the disk permissions for each of the Windows users who you will add over time?

You have at least four options for this:

  1. You can be sloppy and just grant the Users group or the Everyone account access to disk. But please, please don’t do this on a production server. In fact, even on your dev machine it’s not a good habit to get into.
  2. You can create a Windows group and, whenever you have a new user who needs access to a site, add them to that group. This is a great solution and generally the best option. However, the original question above stated that they didn’t want to do this in their case, so that leads to the remaining two options.
  3. You can use the Connect As setting in IIS to have all disk access use a single account, regardless of the configuration. This means that you will not have to make any changes to Users or Groups, but instead you can make an IIS change to the list of allowed users and they will be granted access. If you are using IIS 7.0 or IIS 6.0 then you may need to consider this.
  4. In IIS 7.5, a new setting was introduced called authenticatedUserOverride. This enables you to say that you will use the application pool identity (Worker Process User) instead of the authenticated user’s identity. This is exactly what we’re looking for in this case.

The #4 option

Just like you can set the anonymous user to always run as the application pool identity, you can do the same for authenticated users. This was introduced in IIS 7.5. You can find the official documentation here.

You can change this using Configuration Editor, AppCmd or any of the APIs to update the settings, and you can set this at the server, site, or application level.

Here are the steps that are required to set the permissions for this requirement:

  1. Turn on Windows auth and turn off Anonymous auth:
    image
  2. Using URL Authorization, grant only the users who should have access to the site:
    image 
  3. For the NTFS permissions, grant the following:
    • SYSTEM: Full control
    • Administrators: Full control
    • Your application pool identity account (e.g. IIS AppPool\Site001): Read or Modify, depending on your requirements.
    • Users required for FTP, web publishing or any other access to content on disk. 
       image
      (Notice that I used the same images as I did previously … so far everything is the same.)
  4. Go to the site (or server or subfolder) in IIS Manager and open Configuration Editor.
    image
  5. For the Section selection, choose “system.webServer/serverRuntime”
  6. For authenticatedUserOverride, select UseWorkerProcessUser
    image 
  7. Click Apply 

Alternatively, for Steps 4-7 you can do the same with AppCmd using the following command. Make sure to replace "Site001" with your site name:

appcmd.exe set config "Site001" -section:system.webServer/serverRuntime /authenticatedUserOverride:"UseWorkerProcessUser" /commit:apphost
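
For reference, whichever method you use, the resulting configuration should look roughly like this in applicationHost.config (assuming a site-level setting for a site named Site001):

<location path="Site001">
  <system.webServer>
    <serverRuntime authenticatedUserOverride="UseWorkerProcessUser" />
  </system.webServer>
</location>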

Voila! You have a minimal permission set on disk that supports Windows authentication without having to update NTFS permissions for each new user. Simply update URL Authorization whenever you need to grant or deny access to Windows users.

Note that this does not address non-disk access like integrated authentication for SQL Server or access to registry keys or other network resources. They will still use either the application pool identity or the Windows user account’s identity, depending on the impersonation settings that your application uses.

As you can see, there are multiple configurations that you can use for anonymous or Windows based authentication. You have a lot of flexibility to set this up the way that makes the most sense to you.

Just make sure that you don’t settle on a non-secure method. IIS provides a highly secure and manageable solution as long as you use it correctly. Hopefully this article provides an understanding of the basic building blocks necessary to do so.

Author: "OWScott" Tags: "IIS, IIS7, Windows Server, IIS8"
Date: Wednesday, 29 Jan 2014 16:17

IIS URL Rewrite has five different types of actions. They are: Rewrite, Redirect, Custom Response, Abort Request, and None. And if you have ARR (Application Request Routing) installed, then at the server level you’ll also see Route to Server Farm. The two most common actions are the Rewrite and the Redirect.

A common question that comes up for people who just start working with URL Rewrite is: what is the difference between a rewrite and a redirect? I remember wondering the same thing.

Fortunately there are some very clear, cut-and-dried differences between them.

Simply put, a redirect is a client-side request to have the web browser go to another URL. This means that the URL that you see in the browser will update to the new URL.

A rewrite is a server-side rewrite of the URL before it’s fully processed by IIS. This will not change what you see in the browser because the changes are hidden from the user.

Let’s take a look at some other differences between them:

  • A redirect is client-side; a rewrite is server-side.
  • A redirect changes the URL in the browser address bar; a rewrite doesn’t.
  • A redirect uses one of the following status codes: 301 (Permanent), 302 (Found), 303 (See Other), or 307 (Temporary). For a rewrite, a redirect status is not applicable.
  • A redirect is useful for search engine optimization by causing the search engine to update the URL. A rewrite is also useful for search engines, by using a friendly URL to hide a messy URL.
  • Redirect example: http://yourdomain.com to http://www.yourdomain.com in the browser. Rewrite example: http://localtest.me/articles/how-to-win-at-chess is a friendly URL for http://localtest.me/articles.aspx?name=how-to-win-at-chess.
  • A redirect can go to the same site or an unrelated site. A rewrite generally rewrites to the same site using a relative path, although if you have the ARR module installed you can rewrite to a different site. When you rewrite to a different site, URL Rewrite functions as a reverse proxy.
  • The page request flow for a redirect is: the browser requests a page, the server responds with a redirect status code, the browser makes a second request to the new URL, and the server responds to the new URL. For a rewrite it is: the browser requests a page, and URL Rewrite rewrites the URL and makes the request (still within IIS) for the updated page.
  • Fiddler is a great tool to see the redirect back-and-forth between the browser and server. For rewrites, tools like Process Monitor and native IIS tools are best for getting under the covers.

Let’s take a look at some further examples:

A redirect changes the URL in the browser, like in the following examples:

Add a www to the domain name:
image

Enforce trailing slashes or force to lowercase:
image

Mapping of old to new URL after a site redesign, and let search engines know about it:
image

A rewrite doesn’t change the URL in the browser, but it does change the URL before the request is fully processed by IIS.

In the following example the URL is a friendly URL in the browser but the final URL seen by ASP.NET is not as friendly: 
image

Or you can use any part of the URL in a useful way by rewriting it. Again, the URL in the browser remains the same while the path or query string behind the scenes is changed:
image
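
For those who prefer the markup, here is a rough sketch of what a redirect rule and a rewrite rule might look like in web.config. The rule names, patterns, and paths are made up for illustration and would need to be adapted to your site.

<rewrite>
  <rules>
    <!-- Redirect: send the browser to the www version of the host -->
    <rule name="Redirect to www" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^yourdomain\.com$" />
      </conditions>
      <action type="Redirect" url="http://www.yourdomain.com/{R:0}" redirectType="Permanent" />
    </rule>
    <!-- Rewrite: serve a friendly URL from a query-string page, without changing the browser URL -->
    <rule name="Friendly article URLs" stopProcessing="true">
      <match url="^articles/([^/]+)/?$" />
      <action type="Rewrite" url="articles.aspx?name={R:1}" />
    </rule>
  </rules>
</rewrite>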

These are just some examples. Hopefully they clarify the difference between a rewrite and a redirect in URL Rewrite for IIS and help you with your URL Rewriting.

Author: "OWScott" Tags: "IIS, IIS7, URL Rewrite, IIS8"
Date: Monday, 28 Oct 2013 14:39

Software developer and IT Pro job interviews can be a lot of fun when you’re prepared for them, but they can be scary and overwhelming when you’re not. Since this concerns your job, where you’ll spend roughly half of your waking hours during the work week, the more at ease you are in the interviews, the better your chances of landing your ideal job.

Whether you’re a seasoned professional or newer to the field, you will benefit from brushing up on your interview skills.

Ben Weiss from Infusive Solutions showed me a creative resource to help prepare for job interviews. Yes, interviews with an ‘s’. There can be up to four interviews for a single job position (with Human Resources, a Senior Developer, the Software Manager, and the Chief Technology Officer), each of which requires unique preparation and execution.

I don’t have an interview coming up but reading this almost made me want to prepare and see how I would do in an interview today.

What makes this document unique is that it’s a lot of fun as it compares the process to preparing to take on one of the end of level bosses in your favorite game. Think Mario, Zelda or Duke Nukem. Each game boss requires different weapons or skills, making the analogy not only fun, but applicable too.

At the risk of getting off topic, have you seen the size comparison of Sci-Fi’s greatest machines and monsters? If not, check it out.

I was impressed with the document because it’s a nice, easy 17-page read, which I felt was the right length to be a complete resource without being too long. It also has links to other resources.

Ben and the other co-authors are trying to bring awareness to their companies. Ben helps software developers and IT pros with job placement in the Tri-state area. However, he agreed to place a link to the document that doesn’t ask for any of your information. The document is available as a free download without asking for anything in return.

If you have a job interview coming up I encourage you to download the PDF doc and check it out. Save it for later if needed. I believe that it will help you in your preparation in a fun and fully worthwhile way.

You can download it here.

Author: "OWScott" Tags: "General"
Date: Thursday, 24 Oct 2013 16:18

There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy).

Essentially, you can front the internal web server with a friendly URL, even hiding custom ports.

For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/ as seen in the following image. The URL can be made public, or it can be used by your internal staff and be password protected and/or locked down by IP address.

image

This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer.

A lot can be said about reverse proxies and many different situations and ways to route the traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place.

URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that.

Getting Started

First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route using conditions for certain traffic.

After you’ve created your site then open up URL Rewrite at the site level.

image

Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule.

image

If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside your network to the world, so just be aware of what you’re doing and why.

image

The next and final step of the template asks a few questions.

image

The first textbox asks for the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http:// or https:// here; the template assumes that it’s not entered.

You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe then uncheck this.

Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server.

If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site.

image

That’s it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server and it will serve up the site from your internal web server.

You should see two rules show up; one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed.
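
For reference, here is a rough sketch of what those generated rules often look like in the site’s web.config, using the example addresses from above (10.10.0.50:8111 internally and tools.mysite.com publicly). Your generated rule names, patterns, and preconditions may differ slightly.

<rewrite>
  <rules>
    <!-- Inbound: proxy all requests to the internal server -->
    <rule name="ReverseProxyInboundRule1" stopProcessing="true">
      <match url="(.*)" />
      <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
    </rule>
  </rules>
  <outboundRules>
    <!-- Outbound: rewrite internal links in HTML responses back to the public host name -->
    <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
      <match filterByTags="A, Form, Img" pattern="^http(s)?://10.10.0.50:8111/(.*)" />
      <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
    </rule>
    <preConditions>
      <preCondition name="ResponseIsHtml1">
        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>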

image

One common issue that can occur with outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm whether that’s the issue. Here’s a link with details on how to deal with compression and outbound rules.

I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.

Author: "OWScott" Tags: "IIS, IIS7, URL Rewrite, ARR, IIS8"
Date: Saturday, 21 Sep 2013 14:21

A fairly common request for URL Rewrite is to prepend a www to all 2nd level domains, regardless of the domain name. Consider the following domain names:

  • http://domain1.com
  • http://domain2.net
  • https://domain3.org

Following is an IIS URL Rewrite rule which will add the www to domain names without requiring you to create multiple rules. It will also maintain the http or https while doing so.

<rule name="Prepend www to 2nd level domain names" enabled="true" stopProcessing="true">
    <match url=".*" />
    <conditions trackAllCaptures="true">
        <add input="{HTTP_HOST}" pattern="^([^.]+\.[^.]+)$" />
        <add input="{CACHE_URL}" pattern="^(.+)://" />
    </conditions>
    <action type="Redirect" url="{C:2}://www.{C:1}/{R:0}" />
</rule>

This will result in the following URLs:

  • http://www.domain1.com
  • http://www.domain2.net
  • https://www.domain3.org

If you want to exclude a particular 2nd level domain name then simply add a negated third condition for the domain name which you want to exclude:

<add input="{HTTP_HOST}" pattern="^domain4\.com$" negate="true" />

Author: "OWScott" Tags: "IIS, IIS7, IIS8, URL Rewrite"
Date: Saturday, 06 Apr 2013 15:06

Microsoft IIS Server has what appears to be an odd default for the application pool recycle time. It defaults to 1740 minutes, which is exactly 29 hours. I’ve always been a bit curious where that default came from. If you’re like me, you may have wondered too.

Wonder no longer! While at the MVP Summit this year in Bellevue WA I had the privilege again of talking with the IIS team. Wade Hilmo was there too. Somehow in the conversation a discussion about IIS default settings came up, which included the odd 1740 minutes for the app pool recycle interval. Wade told the story of how the setting came into being, and he granted me permission to share.

As you can imagine, many decisions for the large set of products produced by Microsoft come about after a lot of deliberation and research. Others have a geeky and fun origin. This is one of the latter.

The 1740 story

image

Back when IIS 6 was being developed—which is the version that introduced application pools—a default needed to be set for the Regular Time Interval when application pools are automatically recycled.

Wade suggested 29 hours for the simple reason that it’s the smallest prime number over 24. He wanted a staggered and non-repeating pattern that doesn’t occur more frequently than once per day. In Wade’s words: “you don’t get a resonate pattern”. The default has been 1740 minutes (29 hours) ever since!

That’s a fun little tidbit on the origin of the 1740. How about in your environment though? What is a good default?

Practical guidelines

First off, I think 29 hours is a good default. For a situation where you don’t know the environment, which is the case for a default setting, having a non-resonant pattern greater than one day is a good idea.

However, since you likely know your environment, it’s best to change this. I recommend setting it to a fixed time like 4:00am if you’re on the East coast of the US, 1:00am on the West coast, or whatever makes sense for your audience when you have the least amount of traffic. Setting it to a fixed time each day during low-traffic hours will minimize the impact and also make it easier to troubleshoot if you run into any issues. If you have multiple application pools it may be wise to stagger them so that you don’t overload the server with a lot of simultaneous recycles.

Note that IIS overlaps the app pool when recycling so there usually isn’t any downtime during a recycle. However, in-memory information (session state, etc.) is lost. See this video if you want to learn more about IIS overlapping app pools.

You may ask whether a fixed recycle is even needed. A daily recycle is just a band-aid to freshen IIS in case there is a slight memory leak or anything else that slowly creeps into the worker process. In theory you don’t need a daily recycle unless you have a known problem. I used to recommend that you turn it off completely if you don’t need it. However, I’m leaning more today towards setting it to recycle once per day at an off-peak time as a proactive measure.

My reason is that, first, your site should be able to survive a recycle without too much impact, so recycling daily shouldn’t be a concern. Secondly, I’ve found that even well-behaved app pools can eventually have something sneak in over time that impacts the app pool. I’ve seen issues from traffic patterns that cause excessive caching or something odd in the application, and I’ve seen the very rare IIS bug (rare indeed!) that isn’t a problem if recycled daily. Is it a band-aid? Possibly, but if a daily recycle keeps a non-critical issue from bubbling to the top then I believe that it’s a good proactive measure to save a lot of troubleshooting effort on something that probably isn’t important to troubleshoot. However, if you think you have a real issue that is being suppressed by recycling then, by all means, turn off the auto-recycling so that you can track down and resolve your issue. There’s no black and white answer. Only you can make the best decision for your environment.

Idle Time-out

While on the topic of app pool defaults, there is one more that you should change with every new server deployment. The Idle Time-out should be set to 0 unless you are doing bulk hosting where you want to keep the memory footprint per site as low as possible.

image

If you have just a few sites on your server and you want them to always load fast then set this to zero. Otherwise, after 20 minutes without any traffic the app pool will terminate so that it can start up again on the next visit. The problem is that the first visit to an app pool needs to create a new w3wp.exe worker process, which is slow because the app pool needs to be created, ASP.NET or another framework needs to be loaded, and then your application needs to be loaded. That can take a few seconds. Therefore I set that to 0 every chance I have, unless it’s for a server that hosts a lot of sites that don’t always need to be running.
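
If you prefer to script these two changes, AppCmd commands along the following lines should do it. This is just a sketch; the app pool name (Site001) and the 4:00am schedule are placeholders for your own values.

appcmd.exe set apppool "Site001" /recycling.periodicRestart.time:00:00:00
appcmd.exe set apppool "Site001" /+recycling.periodicRestart.schedule.[value='04:00:00']
appcmd.exe set apppool "Site001" /processModel.idleTimeout:00:00:00

The first command turns off the 1740-minute interval, the second adds a fixed recycle at 4:00am, and the third sets the Idle Time-out to 0.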

There are other settings that can be reviewed for each environment but the two aforementioned settings are the two that should be changed almost every time.

Hopefully you enjoyed knowing about the 29 hour default as much as I did, even if just for fun. Happy IISing.

Author: "OWScott" Tags: "IIS, Performance Tuning, IIS7, Windows S..."
Date: Tuesday, 29 Jan 2013 17:33

Application Request Routing (ARR) is a great solution for load balancing and other proxying needs. I’m a big fan and have written often about it.

I had someone ask me this week about WebDAV support for ARR. With his ARR setup, he noted that WebDAV creation, uploads, and downloads worked well, but renaming and moving files did not. He aptly noticed in the IIS logs that the source IP address showed as being from the ARR server rather than from the original user.

Here’s what the IIS log recorded:

2013-01-28 10:36:20 10.11.12.13 MOVE /Documents/test https://domain.com/Documents/test 80 User 10.9.8.7 Microsoft-WebDAV-MiniRedir/6.1.7601 400 1 0 31

The 10.9.8.7 IP address is an IP address on the ARR server. And yes, the real IP address has been replaced with a made-up IP to protect the innocent.

To be honest, I haven’t tested WebDAV with ARR yet, but the issue sounded like the proxy-in-the-middle issue. I suggested installing ARRHelper on the web servers, and sure enough, that took care of it. While I didn’t set up a full repro myself, he confirmed that it worked for him after the ARRHelper install, so I feel quite confident in saying that WebDAV will work with ARR as long as ARRHelper is installed.

So if you’re working with WebDAV and ARR and it doesn’t work for you, it’s likely that ARRHelper needs to be installed on the web servers.

You can find ARRHelper here, and if interested, I have a short video explaining what it is and why it’s important.

Author: "OWScott" Tags: "IIS, IIS7, ARR, IIS8"
    Date: Wednesday, 16 Jan 2013 14:37

    A friend of mine asked me recently how to handle a situation with a dot (.) in the path for an MVC project.  The path looked something like this:

    http://domain.com/aspnet/CLR4.0

    The MVC routes didn’t work for this path and the standard IIS 404 handler served up the page instead. However, the following URL did work:

    http://domain.com/aspnet/CLR4.0/

    The only difference is the trailing slash. For anyone that runs into the same situation, here’s the reason and the solution.

    What causes this inconsistency

    The issue with the first path is that IIS can’t tell if the path is a file or folder. In fact, it looks so much like a file with an extension of .0 then that’s what IIS assumes that it is. However, the second path is fine because the trailing slash makes it obvious that it’s a folder.

    We can’t just assign a wildcard handler to MVC in IIS for all file types, because that would break files like .css, .js, and .png, which need to be processed as static content.

    Note that this would not be an issue if the dot is in a different part of the path. It’s only an issue if the dot is in the last section. For example, the following would be fine:

    http://domain.com/aspnet/CLR4.0/info

    So how do we resolve it?

    As I mentioned, we can’t simply set a wildcard handler because it would break other necessary static files. So we have three options:

    1. You could always change the paths in your application. If you’re early enough in the development cycle that may be an option for you.
    2. Add file handlers for all extension types that you may have. For example, add a handler for .0, another for .1, etc.
    3. Use a URL rewriter like IIS URL Rewrite to watch for a particular pattern and append the trailing slash.

    Let’s look at the second two options. I’ll mention up front that the URL Rewrite solution is probably the best bet, although I’ll cover both in case you prefer the handler method.

    Http Handler Solution

    You can handle specific extensions by adding a handler for each extension that you’ll support. Each handler maps the extension to System.Web.UI.PageHandlerFactory so that ASP.NET picks up the request and MVC can route it. The same approach would apply to PHP or other frameworks too, using the appropriate handler for that framework.

    C:\Windows\System32\inetsrv\appcmd.exe set config "Sitename" -section:system.webServer/handlers /+"[name='.0-PageHandlerFactory-Integrated-4.0',path='*.0',verb='GET,HEAD,POST,DEBUG',type='System.Web.UI.PageHandlerFactory',preCondition='integratedMode']" /commit:apphost
    C:\Windows\System32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/handlers /+"[name='.1-PageHandlerFactory-Integrated-4.0',path='*.1',verb='GET,HEAD,POST,DEBUG',type='System.Web.UI.PageHandlerFactory',preCondition='integratedMode']" /commit:apphost

    Run these appcmd commands from the command prompt to create the handlers.

    Make sure to update “Sitename” with your own site name, or leave it off to make it a server-wide change. You can also change ‘*.0’ and ‘*.1’ to match your extensions.

    If you do create the site-level handlers, make sure to save your web.config back to your source control so that you don’t overwrite it on your next site update.
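
    If you prefer to work with the configuration directly instead of running appcmd, the handler entries that those two commands create look roughly like this sketch (the names and the *.0/*.1 extensions are just the examples from the commands above):

    <system.webServer>
      <handlers>
        <!-- Map the example .0 and .1 extensions to ASP.NET so that MVC routing can handle them -->
        <add name=".0-PageHandlerFactory-Integrated-4.0" path="*.0" verb="GET,HEAD,POST,DEBUG"
             type="System.Web.UI.PageHandlerFactory" preCondition="integratedMode" />
        <add name=".1-PageHandlerFactory-Integrated-4.0" path="*.1" verb="GET,HEAD,POST,DEBUG"
             type="System.Web.UI.PageHandlerFactory" preCondition="integratedMode" />
      </handlers>
    </system.webServer>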

    IIS URL Rewrite Solution

    Probably the best solution, and the one that my friend used in this case, is to use URL Rewrite to add a trailing slash when needed. The advantage of this is that you can use a more general pattern to redirect the URL rather than a bunch of handlers for each specific extension.

    This assumes that you have IIS 7.0 or greater and that you have URL Rewrite installed. If you’re not familiar with URL Rewrite, check out the URL Rewrite articles on my blog (start with this one).

    Note: If you want, you can skip this section and jump right to the next section for an easier way to do this using URL Rewrite’s rule wizard.

    The following rule watches for a pattern of exactly “something.{digits}” (without a trailing slash). If it finds it then it performs a redirect and appends the trailing slash. It also confirms that it’s not a file or directory that exists on disk.

    <rule name="Add trailing slash for some URLs" stopProcessing="true">
      <match url="^(.*\.(\d)+)$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Redirect" url="{R:1}/" />
    </rule>

    To apply this rule, you can create a dummy rule as a placeholder using IIS Manager and then edit web.config, replacing the placeholder with this rule. If this still doesn’t make sense, be sure to review the articles I mentioned above. The configuration location is <configuration><system.webServer><rewrite><rules><rule>.
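
    For reference, here is a sketch of the same rule in its full web.config context, so you can see exactly where it sits in the configuration hierarchy:

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="Add trailing slash for some URLs" stopProcessing="true">
              <match url="^(.*\.(\d)+)$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
              </conditions>
              <action type="Redirect" url="{R:1}/" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>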

    An added benefit of this rule is that you’ll make the search engines happy by having just one path for each page, rather than possibly two with and without the slash. This will help with your SEO rankings.

    I didn’t think of it at the time, but I’ve since realized that you could use a more general match URL pattern of “(.*[^/])” and it will work too. The reason is that the checks for IsFile and IsDirectory ensure that your static files will continue to be served directly from disk, so you won’t break them. So feel free to use <match url="(.*[^/])" /> instead if you want to add the trailing slash for all paths that don’t already have one.

    The Real Easy Solution (URL Rewrite wizard)

    In fact, to make this even easier, you can use URL Rewrite’s built-in rule wizard to add the trailing slash. You must apply this at the site or folder level, since the trailing slash wizard doesn’t exist at the server level.

    1. From IIS Manager, open URL Rewrite at the site or folder level.
      image
    2. Click “Add Rule(s)…” from the Actions pane.
      image
    3. Select “Append or remove the trailing slash symbol” rule.
      image
    4. Use the default option of “Appended if it does not exist”
    5. Press OK.

    That’s it! You’ll have a rule added which will append the trailing slash for all paths that aren’t physical files or folders and don’t already have a trailing slash. Not only will it handle dots in the path, but the search engines will be happy too.

    As with the HTTP handler solution, if you create this as a site- or folder-level rule, it will be written to your web.config file. Make sure to update web.config in your source control so that you don’t lose your changes on your next site deployment.
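
    For reference, the rule that the wizard writes to web.config looks roughly like this sketch (the exact rule name it generates may differ); it’s essentially the general-pattern version of the handwritten rule earlier:

    <rule name="AddTrailingSlashRule1" stopProcessing="true">
      <match url="(.*[^/])$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Redirect" url="{R:1}/" />
    </rule>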

    Author: "OWScott" Tags: "IIS, ASP.NET, IIS7, URL Rewrite, MVC, II..."
    Date: Tuesday, 13 Nov 2012 22:14

    IIS 8 on Windows Server 2012 doesn’t have any fixed concurrent request limit, apart from whatever limit would be reached when resources are maxed.

    However, the client version of IIS 8, which ships with Windows 8, does have a concurrent request limit to discourage high-traffic production use on a client edition of Windows.

    Starting with IIS 7 (Windows Vista), the behavior changed from previous versions.  In earlier client versions of IIS, excess requests would throw a 403.9 error (Access Forbidden: Too many users are connected).  Instead, Windows Vista, 7 and 8 queue excess requests so that they are handled gracefully, although there is a maximum number of requests that will be processed simultaneously.

    Thomas Deml provided a concurrent request chart for Windows Vista many years ago, but I have been unable to find an equivalent chart for Windows 8 so I asked Wade Hilmo from the IIS team what the limits are.  Since this is controlled not by the IIS team itself but rather from the Windows licensing team, he asked around and found the authoritative answer, which I’ll provide below.

    Windows 8 – IIS 8 Concurrent Requests Limit

    Windows 8 (Basic edition): 3
    Windows 8 Professional, Enterprise: 10
    Windows RT: N/A (IIS does not run on Windows RT)

    Windows 7 – IIS 7.5 Concurrent Requests Limit

    Windows 7 Starter: 1
    Windows 7 Home Basic: 1
    Windows 7 Home Premium: 3
    Windows 7 Ultimate, Professional, Enterprise: 10

    Windows Vista – IIS 7 Concurrent Requests Limit

    Windows Vista Home Basic (IIS process activation and HTTP processing only): 3
    Windows Vista Home Premium: 3
    Windows Vista Ultimate, Business, Enterprise: 10

    Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 allow an unlimited number of simultaneous requests.

    Author: "OWScott" Tags: "IIS, Windows Vista, IIS7, Windows 7, Win..."
    Date: Wednesday, 07 Nov 2012 14:04

    IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP headers. However, there is one commonly used value that isn’t readily available: the protocol (HTTP or HTTPS).

    You can easily check whether a request uses HTTP or HTTPS, but that only works in the conditions part of a rule.  There isn’t a variable available to dynamically set the protocol in the action part of the rule.  What I wish for is a variable like {HTTP_PROTOCOL} that would have a value of ‘HTTP’ or ‘HTTPS’.  There is a server variable called {HTTPS}, but its values of ‘on’ and ‘off’ aren’t practical in the action.  You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren’t useful in the action.

    Let me illustrate.  The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.

    <rule name="Redirect to www">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="http://www.localtest.me/{R:1}" />
    </rule>

    The problem is that it forces the request to HTTP even if the original request was for HTTPS.

    Interestingly enough, I had planned to blog about this topic this week when I noticed in my Twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic.  He beat me to the punch by just a couple of days.  However, I figured I would still write my own post on it.  While his solution is an excellent one, I personally handle this another way most of the time.  Plus, it’s a commonly asked question that isn’t documented well enough on the web yet, so having another article out there won’t hurt.

    I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four.  Don’t let the choices overwhelm you though.  Let’s keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Options 3 and 4 only need to be considered if you have a more unusual situation.  All four options will work for most situations.

    Option 1 – CACHE_URL, single rule

    There is a server variable that has the protocol in it: {CACHE_URL}.  This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5).  All we need to do is extract the HTTP or HTTPS part and we’re set. This tends to be my preferred way to handle this situation.

    Indeed, Jeff did briefly mention this in his blog post:

    … you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical “or” match for conditions.

    Thus the problem.  If you have multiple conditions set to “Match Any” rather than “Match All”, then this option won’t work.  However, I find that 95% of the rules I write use “Match All”, so, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule.  The caveat is that if you use “Match Any”, you must consider one of the next two options.

    Enough with the preamble.  Here’s how it works.  Add a condition that checks for {CACHE_URL} with a pattern of “^(.+)://” like so:

    image

    Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS.  In URL Rewrite 2.0 or greater you can check “Track capture groups across conditions”, make that condition the first condition, and you have yourself a back-reference of {C:1}.

    The “Redirect to www” example, with support for maintaining the protocol, becomes:

    <rule name="Redirect to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions trackAllCaptures="true">
        <add input="{CACHE_URL}" pattern="^(.+)://" />
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
    </rule>

    It’s not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it’s pretty close.

    I also like this option since I often create rule examples for other people and this type of rule is portable since it’s self-contained within a single rule.

    Option 2 – Using a Rewrite Map

    For a safer rule that works for both “Match Any” and “Match All” situations, you can use the Rewrite Map solution that Jeff proposed.  It’s a perfectly good solution with the only drawback being the ever so slight extra effort to set it up since you need to create a rewrite map before you create the rule.  In other words, if you choose to use this as your sole method of handling the protocol, you’ll be safe.

    After you create a Rewrite Map called MapProtocol, you can use “{MapProtocol:{HTTPS}}” for the protocol within any rule action.  Following is an example using a Rewrite Map.

    <rewrite>
      <rules>
        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" 
            url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
        </rule>
      </rules>
      <rewriteMaps>
        <rewriteMap name="MapProtocol">
          <add key="on" value="https" />
          <add key="off" value="http" />
        </rewriteMap>
      </rewriteMaps>
    </rewrite>

    Option 3 – CACHE_URL, Multi-rule

    If you have many rules that will use the protocol, you can create your own server variable which can then be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier-to-remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}.

    The potential issue with this approach is that setting a custom server variable requires it to be approved at the server level, so it won’t work if you don’t have server-level access (e.g. in a shared hosting environment).

    First, create a rule and place it at the top of the set of rules.  You can create this at the server, site or subfolder level.  However, if you create it at the site or subfolder level, then the HTTP_PROTOCOL server variable needs to be approved at the server level.  This can be done in IIS Manager by navigating to URL Rewrite at the server level, clicking “View Server Variables” from the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level, this step is not necessary.
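
    If you’d rather script that approval step than click through IIS Manager, an appcmd command along these lines should do it (a sketch; run it from an elevated command prompt on the server):

    C:\Windows\System32\inetsrv\appcmd.exe set config -section:system.webServer/rewrite/allowedServerVariables /+"[name='HTTP_PROTOCOL']" /commit:apphost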

    Following is an example of the first rule to create the HTTP_PROTOCOL and then a rule that uses it.  The Create HTTP_PROTOCOL rule only needs to be created once on the server.

    <rule name="Create HTTP_PROTOCOL">
      <match url=".*" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{CACHE_URL}" pattern="^(.+)://" />
      </conditions>
      <serverVariables>
        <set name="HTTP_PROTOCOL" value="{C:1}" />
      </serverVariables>
      <action type="None" />
    </rule>
     
    <rule name="Redirect to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" />
    </rule>

    Option 4 – Multi-rule

    Just to be complete, I’ll include an example of how to achieve the same thing with multiple rules. I don’t see any reason to use it over the previous options, but here it is anyway.  Note that it will only work with the “Match All” setting for the conditions.

    <rule name="Redirect to www - http" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
        <add input="{HTTPS}" pattern="off" />
      </conditions>
      <action type="Redirect" url="http://www.localtest.me/{R:1}" />
    </rule>
    <rule name="Redirect to www - https" stopProcessing="true">
      <match url="(.*)" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
        <add input="{HTTPS}" pattern="on" />
      </conditions>
      <action type="Redirect" url="https://www.localtest.me/{R:1}" />
    </rule>

    Conclusion

    Above are four working examples of ways to use the protocol (HTTP or HTTPS) in the action of a URL Rewrite rule.  You can use whichever method you prefer.  I’ve listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice.  In any case, hopefully you can use this as a reference when you need the protocol in a rule’s action.


    Author: "OWScott" Tags: "ARR, IIS, IIS7, IIS8, URL Rewrite"