Date: Saturday, 02 Aug 2014 00:09

Although you won’t find it mentioned in the MDT documentation, in an OSD task sequence the MDT Gather step will attempt to set a variable called SMSDP to the distribution point server name from which the boot image was obtained.  This can be handy if you want to do something like copy the logs to a “local” DP.

  MDT does this in the GetDP function in the script ZTIGather.wsf.  It uses the following logic:

  • Get the boot image ID by looking at the value of the _SMSTSBootImageID variable, e.g. PRI00001.
  • Use that value to form the name of the variable to retrieve, _SMSTS%_SMSTSBootImageID%, e.g. _SMSTSPRI00001, then retrieve the value of that variable.
  • Split that variable on all “,” values, then pick the first non-SMSPXEIMAGES$ path.
  • Parse the string to get just the server name.
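
The parsing in that last step is simple string handling; a minimal sketch of it (not the actual ZTIGather.wsf code, and the function name and sample path are mine) might look like this:

  ' Sketch only: pull a DP server name out of a content-location variable value.
  ' sPaths is assumed to hold something like
  ' "\\DP01.contoso.com\SMSPXEIMAGES$\PRI00001,\\DP01.contoso.com\SMSPKGD$\PRI00001"
  Function GetServerFromContentPaths(sPaths)
      Dim sPath, aParts
      GetServerFromContentPaths = ""
      For Each sPath In Split(sPaths, ",")
          If InStr(1, sPath, "SMSPXEIMAGES$", vbTextCompare) = 0 Then
              aParts = Split(sPath, "\")          ' "\\server\share\..." - element 2 is the server name
              If UBound(aParts) >= 2 Then GetServerFromContentPaths = aParts(2)
              Exit For
          End If
      Next
  End Function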

Unfortunately, this method that MDT uses to determine the “local” DP name has some issues.  First, if you do not have a boot image associated with the task sequence then SMSDP will never have a value.  Second, during a refresh task sequence the _SMSTS%_SMSTSBootImageID% variable will not have any value until the content is requested and downloaded.  So from the beginning of the task sequence until the reboot into WinPE, SMSDP will not have a value.

To get around these limitations I created a function that uses the following logic:

  • Load task sequence XML from the _SMSTSTaskSequence variable.
  • Find the package IDs for the referenced packages.
  • For each package, query the _SMST<package ID>, _SMSTSMB<package ID>, and _SMSTHTTP<package ID> variables in turn.
  • If a value is found, split it on all “,” values, then pick the first non-SMSPXEIMAGES$ path.
  • Parse the string to get just the server name.
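
    The GetSMSDP function itself (described below) lives in the attached library, but conceptually the loop amounts to the sketch below.  This is not the shipping code: the XPath is illustrative, error handling is omitted, and the variable name prefixes are taken verbatim from the list above.

    ' Sketch only - see GetSMSDP in MDTLibHelperClasses.vbs for the real implementation.
    Dim oTSXML, oRef, sPkgID, sValue
    Set oTSXML = CreateObject("MSXML2.DOMDocument")
    oTSXML.LoadXML oEnvironment.Item("_SMSTSTaskSequence")
    For Each oRef In oTSXML.SelectNodes("//referenceList/reference")      ' illustrative XPath
        sPkgID = oRef.Attributes.getNamedItem("package").Text
        ' Probe the content-location variables for this package, as listed above
        sValue = oEnvironment.Item("_SMST" & sPkgID)
        If sValue = "" Then sValue = oEnvironment.Item("_SMSTSMB" & sPkgID)
        If sValue = "" Then sValue = oEnvironment.Item("_SMSTHTTP" & sPkgID)
        If sValue <> "" Then
            oEnvironment.Item("SMSDP") = GetServerFromContentPaths(sValue)  ' same parsing as the earlier sketch
            Exit For
        End If
    Next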

    So with this logic, as long as any package has been requested and downloaded, SMSDP should get a value.  Since the Use Toolkit Package step runs very early, this code only has to run after that step to be successful.  I created a function called GetSMSDP and placed it in the HelperFunctions class of the library script that I have been building up over the years called MDTLibHelperClasses.vbs.

    I have provided two methods of using this function.  The first is to call it directly from CustomSettings.ini using MDTLibHelperClasses.vbs as a User Exit script during the Gather step.  Place MDTLibHelperClasses.vbs in the MDT Toolkit package Scripts folder.  You will also need to place MDTExitInclude.vbs from a previous post in the MDT Toolkit package Scripts folder.  Make the following additions to CustomSettings.ini in the MDT Settings package:

    [Settings]
    Priority=IncludeExitScripts, SetSMSDP
    Properties=ExitScripts(*), SMSDP

    [IncludeExitScripts]
    UserExit=MDTExitInclude.vbs
    ExitScripts001=#Include("MDTLibHelperClasses.vbs")#

    [SetSMSDP]
    SMSDP=#oHelperFunctions.GetSMSDP()#

    The second method is to run this as a script in a Run Command Line step.  Place MDTSetSMSDP.wsf and MDTLibHelperClasses.vbs in the MDT Toolkit package Scripts folder.  Then create a Run Command Line step shortly after the first Gather step with the following command line:

    cscript "%DeployRoot%\Scripts\MDTSetSMSDP.wsf"

    Both MDTSetSMSDP.wsf and version 2.1.4 of MDTLibHelperClasses.vbs (the latest as of this writing) can be found in the attached Zip file.

     

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    This post was contributed by Michael Murgolo, a Senior Consultant with Microsoft Services - U.S. East Region.

    Attached Media: application/zip (30 KB)
    Author: "Michael Murgolo"
    Date: Thursday, 01 May 2014 18:22

    While writing my last entry titled Pre-Flight Checks – Wireless Connectivity, I figured I would go ahead and post this script that does a pre-flight check of the S.M.A.R.T. status of the hard drive.  S.M.A.R.T. stands for Self-Monitoring, Analysis and Reporting Technology, and it allows the machine to predict impending hard drive failures.

    To check the status of the hard drive, I am looking at the Win32_DiskDrive class.  This class has a property called Status that keeps track of, oddly enough, the status of the hard drive.  The status we are looking for is ‘OK’.  For more information on the Win32_DiskDrive class, or Status, click here.

    As always, I am using the zero touch script format in a custom .wsf file.  For more information on custom ZTI scripts, please visit here.

    Option Explicit
     
    Dim iRetVal
    Dim oWMI, oConn, oRs
    Dim strComputer, sSmartIsClear, sSmartStatus, sSMART, DQ
    Dim colDisks, disk
    
    Const LOCAL_HARD_DISK = 3
    
    DQ = CHR(34)
    
     
    '//----------------------------------------------------------------------------
    '// End declarations
    '//----------------------------------------------------------------------------
     
    '//----------------------------------------------------------------------------
    '// Main routine
    '//----------------------------------------------------------------------------
     
    On Error Resume Next
    iRetVal = ZTIProcess
    ProcessResults iRetVal
    On Error Goto 0
     
    '//---------------------------------------------------------------------------
    '//
    '// Function: ZTIProcess()
    '//
    '// Input: None
    '//
    '// Return: Success - 0
    '// Failure - non-zero
    '//
    '// Purpose: Perform main ZTI processing
    '//
    '//---------------------------------------------------------------------------
    Function ZTIProcess()
     
         iRetVal = Success
     
         ZTIProcess = iRetval
     
         Const scriptVersion = "1.0"
    
    
    strComputer = "."
    
    
    
    ' Create objects
    Set oRs = CreateObject("ADODB.Recordset") 
    Set oConn = CreateObject("ADODB.Connection")
    Set oWMI = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
    
    oLogging.CreateEntry "Querying the SMART WMI connection.", LogTypeInfo
    Set colDisks = oWMI.ExecQuery _
      ("Select * from Win32_DiskDrive where MediaType = 'Fixed hard disk media'")
    
    
    oLogging.CreateEntry "Parsing the SMART WMI connection.", LogTypeInfo
    
    For Each disk in colDisks
        sSmartStatus = disk.Status
          oLogging.CreateEntry "sSmartStatus:   " & sSmartStatus, LogTypeInfo 
    
      If sSmartStatus = "OK" Then
          sSmartIsClear = True
          oLogging.CreateEntry "sSmartIsClear:   " & sSmartIsClear, LogTypeInfo
      Else
          sSmartIsClear = False
          oLogging.CreateEntry "sSmartIsClear:   " & sSmartIsClear, LogTypeError
          Wscript.Quit(1)
      End If
    
    Next
    
    Set colDisks = Nothing
    
    End Function

    Adding to the Task Sequence

    To add this check to the task sequence, I have added it into a command-line task utilizing the shown syntax.
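
    The command line follows the usual pattern for a custom ZTI script; the file name below is only an example of what you might have called this script:

    cscript.exe "%DeployRoot%\Scripts\z-SMARTCheck.wsf"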

    [Screenshot: Run Command Line task sequence step for the S.M.A.R.T. check]

     

    This post was contributed by Brad Tucker, a Senior Consultant with Microsoft Services, East Region, United States

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use

    Author: "Brad Tucker"
    Date: Thursday, 01 May 2014 18:14

    While writing my last entry titled Pre-Flight Checks – Wireless Connectivity, I figured I would go ahead and post this script that does a pre-flight check to ensure the machine is plugged in to AC power.  With mobile devices becoming more and more prevalent in today’s enterprises, a check to ensure the device is plugged in prior to beginning imaging is crucial.

    As I stated in my prior post (see above), MDT offers some checks, and even offers a wireless and AC power check within the UDI wizard.  Those last two checks, however, are built into the wizard and would therefore require a touch.  Hence this simple script to check the mobile device for AC power.

    To check if the device is plugged in, I am looking at the Win32_Battery class.  This class has a property called BatteryStatus that keeps track of, oddly enough, the status of the battery.  The status we are looking for is ‘2’.  This is recognized as ‘The system has access to AC so no battery is being discharged. However, the battery is not necessarily charging’.  For more information on the Win32_Battery class, or BatteryStatus, click here.

    Function ZTIProcess()
     
         iRetVal = Success
     
         ZTIProcess = iRetval
     
         Const scriptVersion = "1.0"
    
    
    strComputer = "."
    
    Set oWMI = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
    
    ' Query Win32_Battery from WMI
    Set colBatteries = oWMI.ExecQuery("Select * From Win32_Battery")
    
    oLogging.CreateEntry "Checking to determine if computer is plugged in...", LogTypeInfo
    
    For each Item in colBatteries
        If Item.batterystatus = 2 Then
          oLogging.CreateEntry "The computer is plugged in.  Battery status is " & _
           Item.batterystatus, LogTypeInfo 
          Wscript.Quit(0)
        Else
          oLogging.CreateEntry "The computer is not plugged in.  Battery status is " & _
           Item.batterystatus, LogTypeError
          Wscript.Quit(1)
        End If    
    Next
    
    End Function

     

    Adding to the Task Sequence

    When adding this check to the task sequence, I add it to a standard command-line step and set it to run only if ISDESKTOP = ‘FALSE’ and ISSERVER = ‘FALSE’.
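
    The command line takes the same form as the other pre-flight checks (again, the script file name is only an example), and the Options tab of the step gets two Task Sequence Variable conditions, ISDESKTOP equals FALSE and ISSERVER equals FALSE, joined with “If all conditions are true”:

    cscript.exe "%DeployRoot%\Scripts\z-ACPowerCheck.wsf"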

    [Screenshot: Run Command Line task sequence step for the AC power check]

     

    [Screenshot: step Options tab with the ISDESKTOP and ISSERVER conditions]

     

    This post was contributed by Brad Tucker, a Senior Consultant with Microsoft Services, East Region, United States

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use

    Author: "Brad Tucker"
    Date: Thursday, 01 May 2014 18:11

    As many of you know, MDT offers a series of ‘pre-flight’ checks you can run at the beginning of a task sequence to verify any number of things – BitLocker state, memory, Windows Scripting Host, etc…  They exist within the Tools\x64\Preflight and Tools\x86\Preflight folders located in the deployment share.  Within the UDI Wizard, there are even checks to ensure the machine is plugged into AC power and to ensure the machine is using a wired LAN connection and not wireless.  These checks are part of the compiled code and not available to us via a script.  This is fine for Lite Touch Installation, but since it requires a touch, it would not work for a Zero Touch deployment.

    I have been recently asked to create a wireless check to insert into the task sequence.  I built it with the following requirements…

    1. It must check to see if the currently used network connection was a wireless connection
    2. If it determines the current connection to be wireless, it must return an error, or non-zero code
    3. It must log with the rest of the OSD logs.

    Determine the Current Adapter

    First, we have to determine the currently used adapter.  To do this, I looked in the Win32_NetworkAdapterConfiguration class.  I queried for anything with IPEnabled = TRUE and then verified it had a valid IP address.  While this seemed to pull all active adapters, the script continued to tell me the machine was connected via LAN, even though it was on wireless.  The problem was related to the Hyper-V adapters I had installed.  The Hyper-V virtual adapters were seen by the computer as a physical adapter and an additional ‘Local Area Connection’.  Thus, the script assumed a physical adapter was in use.

    To get around this, I modified the WMI query to look like this:

    SELECT * FROM win32_NetworkAdapterconfiguration WHERE IPEnabled = TRUE AND NOT Caption LIKE '%Hyper-V%'

    '// Check for LAN Connection
    Set colNetCfg = oWMI.ExecQuery("SELECT * FROM Win32_NetworkAdapterConfiguration " & _
     "WHERE IPEnabled = TRUE AND NOT Caption LIKE '%Hyper-V%'")
    For Each oNetCfg in colNetCfg
        sAdapterName = Mid(oNetCfg.Caption,12)
        oLogging.CreateEntry "Adapter name:   " & sAdapterName, LogTypeInfo
        '// default value
        IsValidIPAddress = False 
        For Each sIPAddress In oNetCfg.IPAddress
        If InStr(sIPAddress,":") = 0 And Mid(sIPAddress,1,7) _
         <> "169.254" And Mid(sIPAddress,1,3) <> "0.0" Then
                IsValidIPAddress = True
                oLogging.CreateEntry "IsValidIPAddress:   " & IsValidIPAddress, LogTypeInfo
            End If
        Next

    Find the Wireless Adapter

    Now that we have the currently used adapter, we need to know if it is wireless.  The approach I chose to take is to query the registry.  The key we are looking for is HKLM\SYSTEM\CurrentControlSet\Control\Network\{4D36E972-E325-11CE-BFC1-08002BE10318}.  Now we can look through each of the machine’s connections in the sub keys.  As we loop through them, we are looking for a connection with a MediaSubType = 2.  This type is returned for all adapters classified as NdisPhysicalMediumWirelessLan in OID_GEN_PHYSICAL_MEDIUM.  More information on this OID can be found here.  Once we find this adapter, we retrieve its name.

    '// Get all Subkeys
    oRegProv.Enumkey HKLM, RegKeyPath, arrSubKeys
            
    '// Read each Subkey values
    For Each sSubKey In arrSubKeys
            
       '// Get MediaSubType value if any
       oRegProv.GetDWORDValue HKLM, RegKeyPath & "\" & sSubKey & "\" & "Connection", _
        "MediaSubType", dwValue
       If dwValue = 2 Then
            
          '// Get Name 
          oRegProv.GetStringValue HKLM, RegKeyPath & "\" & sSubKey & "\" & "Connection" _
           ,"Name", sRegNetworkName
          oLogging.CreateEntry "sRegNetworkName:   " & sRegNetworkName, LogTypeInfo

    NOTE:  In some rare cases, a wireless adapter may show a MediaSubType other than 2.  This is usually because the adapter does not support the wireless configuration service or because the vendor uses a proprietary configuration tool.  Please contact the vendor for assistance.

    Getting Results

    Now that we have the wireless adapter name from the registry (sRegNetworkName) and the name of the currently used adapter (sAdapterName), we compare the two.  If they equal each other, then wireless status is set to TRUE and we return a non-zero code (1).

    This takes care of the first two requirements, but to cover the third I am using a .WSF script template.  For more information on the ZTI scripting template, click  here.

    '// Compare both                            
    If sRegNetworkName = sNetConnectionID Then    
       GetWirelessName = True

    To VPN or Not To VPN

    We now have determined what type of connection the machine is using, but we need to do one more thing before it can be implemented.  We need to make sure that all connections that show as LAN connected are not, in fact, VPN connections. To do this, I am searching the adapter name for anything identified in my array sVPNAdapters.

    '// VPN adapter strings
    sVPNAdapters = Array("VPN","JUNIPER")

    I am only looking for “VPN” or “Juniper”, but you can easily add to this for different VPN types. 
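
    For example, to also catch a hypothetical third-party client whose adapter caption reads something like “Acme Secure Tunnel Adapter”, you could extend the array as shown below.  Because the comparison is done against an upper-cased adapter name, keep the entries upper case.

    '// VPN adapter strings
    sVPNAdapters = Array("VPN","JUNIPER","ACME")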

    Now that we have what to look for, we need to compare it to the adapter name previously recorded.  As you can see, if it finds “VPN” or “JUNIPER”, it sets IsVPNAdapter = TRUE and returns a non-zero code.

     '// if adapter is not wireless
     If IsWLANAdapter = False Then
        '// check if adapter is VPN
        For Each sVPN In sVPNAdapters
           If (Instr(UCase(sAdapterName),sVPN) > 0) Then
             IsVPNAdapter = True
           End If
        Next
        If IsVPNAdapter = True And oNetCfg.IPConnectionMetric > 0 Then
          iIPConnectStatus = oNetCfg.IPConnectionMetric
          sWLANStatus = "VPN Connected"
          oLogging.CreateEntry "Connection status:   " & sWLANStatus, _
           LogTypeError
          Wscript.Quit(1)

    Adding to the Task Sequence

    Now that I have a functional script, I am going to add it to the task sequence.  I have named the script z-WirelessCheck.wsf and have placed it in the Scripts folder within my Microsoft Deployment Toolkit package. So to add it to the task sequence, I have simply created a command-line task and used the following syntax – cscript.exe "%deployroot%\scripts\z-WirelessCheck.wsf" /debug:true

    [Screenshot: Run Command Line task sequence step for the wireless check]

    NOTE:  Make sure that “Continue on error” is not checked in the Options tab.

     

    I want to give credit to Veeraswamy "Swamy" Achanta for contributing to this script.

    This post was contributed by Brad Tucker, a Senior Consultant with Microsoft Services, East Region, United States

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use

    Author: "Brad Tucker" Tags: "wsf, pre-flight, ZTI, Wireless, check"
    Date: Thursday, 01 May 2014 16:36

    Microsoft Customer Services & Support (CSS), with assistance from the PowerShell team and the Garage, has released some very cool scripting tools.  Since those of us involved with deployments are always creating/modifying/sharing scripts, these tools look to be right up our alley.  These tools are:

    • Script Browser – IT Pros can search, download and manage 9000+ TechNet automation script samples covering almost all Microsoft IT products from within their scripting environment.  Script Browser even supports offline search so users can download all interesting scripts and search them when they do not have internet access.
    • Script Analyzer – It automatically scans your automation scripts and provides suggestions to improve the script quality and readability.

    The resources can be downloaded from the following link:

    http://blogs.msdn.com/b/powershell/archive/2014/04/16/a-world-of-scripts-at-your-fingertips-introducing-script-browser.aspx

    The teams developing these tools are committed to continuously adding new features and benefiting IT Pros’ work.   They have an ambitious roadmap.  If you love what you see in Script Browser & Script Analyzer, please recommend it to your friends and colleagues. If you encounter any problems or have any suggestions, please contact onescript@microsoft.com. Your opinions and comments are more than welcome.

     

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    This post was contributed by Michael Murgolo, a Senior Consultant with Microsoft Services - U.S. East Region

    Author: "Michael Murgolo"
    Date: Monday, 14 Apr 2014 16:12
    Update 2014-04-16:  I forgot to include the fact that a locationModify rule is required when using the %HklmWowSoftware% variable.  The post has been updated to reflect this.

     

    One challenge with capturing the settings for 32-bit applications with USMT is that some file and Registry paths will be different on 32-bit operating systems and 64-bit operating systems.  On a 32-bit operating system, 32-bit programs typically get installed to C:\Program Files and local machine Registry entries are written to a subkey of HKLM\Software.  However, on a 64-bit operating system, 32-bit programs get installed to C:\Program Files (x86) and HKLM\Software Registry entries are redirected to HKLM\SOFTWARE\Wow6432Node.  (I have oversimplified this here for brevity's sake.  See this section of the Programming Guide for 64-bit Windows for more details: http://msdn.microsoft.com/en-us/library/aa384249(v=vs.85).aspx.)

    Because of this, you might think you would need different components to migrate 32-bit application settings depending on the source and destination operating system architecture.  For example, I had a customer that was using three different XML files with components for 32-bit applications.  I’ll illustrate this using a fictitious 32-bit application called MyApp.  This application is installed by default to C:\Program Files\MyApp and creates machine-based settings in HKLM\Software\MyApp on a 32-bit OS.  For the sake of simplicity for this example, let’s say that the desired way to migrate this app is to capture all the files in the C:\Program Files\MyApp\Config folder and to capture all of HKLM\Software\MyApp (or the equivalent locations on a 64-bit OS).

    The customer had three different migration scenarios:

    • Windows XP 32-bit to Windows XP 32-bit (break/fix rebuilds, etc.)
    • Windows XP 32-bit to Windows 7 64-bit (OS migration)
    • Windows 7 64-bit to Windows 7 64-bit (break/fix rebuilds, etc.)

    For Windows XP 32-bit to Windows XP 32-bit migrations they had a component like this for MyApp.

    <component type="Application" context="System">
        <displayName>Migrate MyApp - XP to XP</displayName>
        <role role="Settings">
            <rules>
                <include>
                    <objectSet>
                        <pattern type="File">%CSIDL_PROGRAM_FILES%\MyApp\Config\* [*]</pattern>
                        <pattern type="Registry">HKLM\Software\MyApp\* [*]</pattern>
                    </objectSet>
                </include>
            </rules>
        </role>
    </component>

    For Windows XP 32-bit to Windows 7 64-bit migrations they had a component like this for MyApp.  This has locationModify rules to move the migrated items to the redirected locations for 32-bit apps on 64-bit Windows.

    <component type="Application" context="System">
        <displayName>Migrate MyApp - XP to Win7</displayName>
        <role role="Settings">
            <rules>
                <include>
                    <objectSet>
                        <pattern type="File">%CSIDL_PROGRAM_FILES%\MyApp\Config\* [*]</pattern>
                        <pattern type="Registry">HKLM\Software\MyApp\* [*]</pattern>
                    </objectSet>
                </include>
                <locationModify script="MigXmlHelper.RelativeMove('%CSIDL_PROGRAM_FILES%\MyApp\Config','%CSIDL_PROGRAM_FILESX86%\MyApp\Config')">
                    <objectSet>
                        <pattern type="File">%CSIDL_PROGRAM_FILES%\MyApp\Config\* [*]</pattern>
                    </objectSet>
                </locationModify>
                <locationModify script="MigXmlHelper.RelativeMove('HKLM\Software\MyApp','HKLM\SOFTWARE\Wow6432Node\MyApp')">
                    <objectSet>
                        <pattern type="Registry">HKLM\Software\MyApp\* [*]</pattern>
                    </objectSet>
                </locationModify>
            </rules>
        </role>
    </component>

    For Windows 7 64-bit to Windows 7 64-bit migrations they had a component like this for MyApp which directly captured/restored the redirected locations for the 32-bit app on 64-bit Windows.

    <component type="Application" context="System">
        <displayName>Migrate MyApp - Win7 to Win7</displayName>
        <role role="Settings">
            <rules>
                <include>
                    <objectSet>
                        <pattern type="File">%CSIDL_PROGRAM_FILESX86%\MyApp\Config\* [*]</pattern>
                        <pattern type="Registry">HKLM\SOFTWARE\Wow6432Node\MyApp\* [*]</pattern>
                    </objectSet>
                </include>
            </rules>
        </role>
    </component>

    Of course, this makes it complicated to call USMT with the correct XML file depending on what source and destination operating systems were involved.  Fortunately, there is a way to avoid this.  It involves using a technique I lifted directly from MigApp.xml.  MigApp.xml contains a namedElements node that defines a bunch of global items.  Some of these items define single variables representing the file and local machine Registry locations for 32-bit applications independent of the operating system architecture.  You can copy this into your own custom XML files and use those variables in the same way.  Copy the XML content below into your custom XML files (just after the migration node at the top):

    <library prefix="MigSysHelper">MigSys.dll</library>

    <_locDefinition>
        <_locDefault _loc="locNone"/>
        <_locTag _loc="locData">displayName</_locTag>
    </_locDefinition>

    <namedElements>
        <!-- Global -->
        <environment name="GlobalEnvX64">
            <conditions>
                <condition>MigXmlHelper.IsNative64Bit()</condition>
            </conditions>
            <variable name="HklmWowSoftware">
                <text>HKLM\SOFTWARE\Wow6432Node</text>
            </variable>
            <variable name="ProgramFiles32bit">
                <text>%ProgramFiles(x86)%</text>
            </variable>
            <variable name="CommonProgramFiles32bit">
                <text>%CommonProgramFiles(x86)%</text>
            </variable>
        </environment>
        <environment name="GlobalEnv">
            <conditions>
                <condition negation="Yes">MigXmlHelper.IsNative64Bit()</condition>
            </conditions>
            <variable name="HklmWowSoftware">
                <text>HKLM\Software</text>
            </variable>
            <variable name="ProgramFiles32bit">
                <text>%ProgramFiles%</text>
            </variable>
            <variable name="CommonProgramFiles32bit">
                <text>%CommonProgramFiles%</text>
            </variable>
        </environment>

        <!-- Global USER -->
        <environment context="USER" name="GlobalEnvX64User">
            <conditions>
                <condition>MigXmlHelper.IsNative64Bit()</condition>
            </conditions>
            <variable name="VirtualStore_ProgramFiles32bit">
                <text>%CSIDL_VIRTUALSTORE_PROGRAMFILES(X86)%</text>
            </variable>
            <variable name="VirtualStore_CommonProgramFiles32bit">
                <text>%CSIDL_VIRTUALSTORE_COMMONPROGRAMFILES(X86)%</text>
            </variable>
        </environment>
        <environment context="USER" name="GlobalEnvUser">
            <conditions>
                <condition negation="Yes">MigXmlHelper.IsNative64Bit()</condition>
            </conditions>
            <variable name="VirtualStore_ProgramFiles32bit">
                <text>%CSIDL_VIRTUALSTORE_PROGRAMFILES%</text>
            </variable>
            <variable name="VirtualStore_CommonProgramFiles32bit">
                <text>%CSIDL_VIRTUALSTORE_COMMONPROGRAMFILES%</text>
            </variable>
        </environment>
    </namedElements>

    Once you add this, you can now define one component to migrate MyApp that will work in all three scenarios:

    <component type="Application" context="System">
        <displayName>Migrate MyApp</displayName>
        <environment name="GlobalEnv"/>
        <environment name="GlobalEnvX64"/>
        <environment name="GlobalEnvUser"/>
        <environment name="GlobalEnvX64User"/>
        <role role="Settings">
            <rules>
                <include>
                    <objectSet>
                        <pattern type="File">%ProgramFiles32bit%\MyApp\Config\* [*]</pattern>
                        <pattern type="Registry">%HklmWowSoftware%\MyApp\* [*]</pattern>
                    </objectSet>
                </include>
                <locationModify script="MigXmlHelper.RelativeMove('%HklmWowSoftware%','%HklmWowSoftware%')">
                    <objectSet>
                        <pattern type="Registry">%HklmWowSoftware%\MyApp\* [*]</pattern>
                    </objectSet>
                </locationModify>
            </rules>
        </role>
    </component>

    The variable %ProgramFiles32bit% will resolve correctly to C:\Program Files on a 32-bit OS and C:\Program Files (x86) on a 64-bit OS.  The variable %HklmWowSoftware% will resolve correctly to HKLM\Software on a 32-bit OS and HKLM\SOFTWARE\Wow6432Node on a 64-bit OS.

    Note that you need to add the four environment nodes shown in the above example into your components that will use these variables.  This is essentially adding a “reference” to the namedElements items that define the variables.

    You also need a locationModify rule like the one shown above when using the %HklmWowSoftware% variable.  It may seem odd to need a locationModify rule that has the same source and destination location.  However, this is needed because USMT does not automatically redirect Registry locations.  It will try to write them back to the original location.  Fortunately, the locationModify RelativeMove function will expand the environment variables of the first parameter in the context of the source machine and the second parameter in the context of the destination machine.  This will cause the Registry entries to be redirected properly.  For example, when migrating from a 32-bit source to a 64-bit destination, the rule above effectively moves HKLM\Software\MyApp to HKLM\SOFTWARE\Wow6432Node\MyApp.

    You may be wondering if you need to worry about redirection of HKCU Registry entries for 32-bit applications.  You don’t.  For most of HKCU, 32-bit apps write to HKCU without redirection (See http://msdn.microsoft.com/en-us/library/aa384232(v=vs.85).aspx for details).

     

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    This post was contributed by Michael Murgolo, a Senior Consultant with Microsoft Services - U.S. East Region

    Author: "Michael Murgolo" Tags: "usmt"
    Date: Monday, 21 Oct 2013 22:21

    Last year I published a PowerShell script that is designed to remove the built-in Windows 8 applications when creating a Windows 8 image. Well now that Windows 8.1 has been released we must update the PowerShell script to work with Windows 8.1.

    The script below takes a simple list of Apps and then removes the provisioned package and the package that is installed for the Administrator. To adjust the script for your requirements simply update the $AppsList comma-separated list to include the Apps you want to remove. The script is designed to work as part of an MDT or Configuration Manager task sequence. If it detects that you are running the script within a task sequence it will log to the task sequence log folder; otherwise it will log to the Windows\temp folder.

    <#
        ************************************************************************************************************
        Purpose:    Remove built in apps specified in list
        Pre-Reqs:   Windows 8.1
        ************************************************************************************************************
    #>

    #---------------------------------------------------------------------------------------------------------------
    # Main Routine
    #---------------------------------------------------------------------------------------------------------------

    # Get log path. Will log to the Task Sequence log folder if the script is running in a Task Sequence;
    # otherwise log to \Windows\Temp.
    try
    {
        $tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment
        $logPath = $tsenv.Value("LogPath")
    }
    catch
    {
        Write-Host "This script is not running in a task sequence"
        $logPath = $env:windir + "\temp"
    }

    $logFile = "$logPath\$($myInvocation.MyCommand).log"

    # Start logging
    Start-Transcript $logFile
    Write-Host "Logging to $logFile"

    # List of applications that will be removed
    $AppsList = "microsoft.windowscommunicationsapps","Microsoft.BingFinance","Microsoft.BingMaps",`
        "Microsoft.BingWeather","Microsoft.ZuneVideo","Microsoft.ZuneMusic","Microsoft.Media.PlayReadyClient.2",`
        "Microsoft.XboxLIVEGames","Microsoft.HelpAndTips","Microsoft.BingSports",`
        "Microsoft.BingNews","Microsoft.BingFoodAndDrink","Microsoft.BingTravel","Microsoft.WindowsReadingList",`
        "Microsoft.BingHealthAndFitness","Microsoft.WindowsAlarms","Microsoft.Reader","Microsoft.WindowsCalculator",`
        "Microsoft.WindowsScan","Microsoft.WindowsSoundRecorder","Microsoft.SkypeApp"

    ForEach ($App in $AppsList)
    {
        # Remove the installed Appx package (for the account building the image)
        $Packages = Get-AppxPackage | Where-Object {$_.Name -eq $App}
        if ($Packages -ne $null)
        {
            Write-Host "Removing Appx Package: $App"
            foreach ($Package in $Packages)
            {
                Remove-AppxPackage -Package $Package.PackageFullName
            }
        }
        else
        {
            Write-Host "Unable to find package: $App"
        }

        # Remove the provisioned package so it is not installed for new user profiles
        $ProvisionedPackage = Get-AppxProvisionedPackage -Online | Where-Object {$_.DisplayName -eq $App}
        if ($ProvisionedPackage -ne $null)
        {
            Write-Host "Removing Appx Provisioned Package: $App"
            Remove-AppxProvisionedPackage -Online -PackageName $ProvisionedPackage.PackageName
        }
        else
        {
            Write-Host "Unable to find provisioned package: $App"
        }
    }

    # Stop logging
    Stop-Transcript
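
    If you run the script from a Run Command Line step in an MDT-integrated task sequence (rather than a Run PowerShell Script step), the command line would look something like the following; the file name is simply whatever you saved the script as, and %DeployRoot% assumes the MDT toolkit has already run:

    powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%DeployRoot%\Scripts\RemoveWin81Apps.ps1"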

    For more information on adding and removing apps please refer to this TechNet article.

    This post was contributed by Ben Hunter, a Senior Product Marketing Manager with Microsoft

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    Author: "Ben Hunter"
    Date: Wednesday, 02 Oct 2013 23:53

    The OSVersion variable is populated with a short string representing the version of the operating system (e.g. XP, Vista, Win7Client, 2008, etc.).  With MDT 2012, you may have noticed that when you deploy Windows 8 the value of the OSVersion variable gets set to “Other” instead of something like “Win8”.  This is because the MDT team has deprecated the OSVersion property.  The logic that sets this property has not been updated for Windows 8 and will no longer be updated.  The team decided that using these string values in script logic leads to hidden bugs and unexpected behavior as new OS versions are released.  For example, code testing for a client OS that is Windows Vista or higher would require something like this using OSVersion:

    If oEnvironment.Item("OSVersion") = "Vista" Or oEnvironment.Item("OSVersion") = "Win7Client" Then…

    This would work until Windows 8 was released.  Then this would have to be updated with another OR with the Windows 8 value.  They now recommend using a variable called OSCurrentVersion which has a value of the OS major and minor version (e.g. 6.1 for Windows 7).  So the equivalent code using OSCurrentVersion would look like this and would continue to work as new operating systems are released:

    If CSng(oEnvironment.Item("OSCurrentVersion")) > 6.0 And UCase(oEnvironment.Item("IsServerOS")) = "FALSE" Then...

    While I wholeheartedly agree with this reasoning, there is one instance where I like an easily recognized string for the OS version.  This is a composite custom property called ModelOSArchAlias that I defined in my Model Alias post, which combines the ModelAlias, OSVersion, and Architecture properties.  These values are used to create Make and Model entries in the MDT database.  Using OSCurrentVersion instead of OSVersion would lead to ModelOSArchAlias values like ThinkPadT420_6.1_X64 instead of the more easily readable ThinkPadT420_Win7Client_X64.  Also, since the Windows team seems to have developed an aversion to increasing the major version of Windows, I see it becoming easy to confuse which version you are referencing with an OSCurrentVersion of 6.0, 6.1, 6.2, or 6.3 (Windows Vista, 7, 8, and 8.1 respectively).

    So to make it possible to use a short string representing the version of the operating system that is current, I’ve created a function called GetOSVersionTag that is essentially a copy of the ZTIGather.wsf code that sets OSVersion with some improvements and updated to set proper values for Windows 8, Windows Server 2012, Windows 8.1, and Windows Server 2012 R2.  I have placed this function in a class library script that I have been building up over the years called MDTLibHelperClasses.vbs.  This script is conceptually similar to ZTIUtility.vbs.  It can be referenced similarly in other MDT scripts and using the technique from my last post it can also be used as a User Exit script.  As I create new general purpose functions going forward, I will place them in future versions of this library as appropriate.
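
    To give you an idea of what the function does, it essentially maps OSCurrentVersion and IsServerOS to a short tag.  The following is a simplified sketch, not the code from MDTLibHelperClasses.vbs, and the exact tag strings are illustrative:

    Function GetOSVersionTagSketch()
        Dim sVer, bServer
        sVer = Left(oEnvironment.Item("OSCurrentVersion"), 3)                ' e.g. "6.3" from "6.3.9600"
        bServer = (UCase(oEnvironment.Item("IsServerOS")) = "TRUE")
        Select Case sVer
            Case "6.1" : If bServer Then GetOSVersionTagSketch = "2008R2" Else GetOSVersionTagSketch = "Win7Client"
            Case "6.2" : If bServer Then GetOSVersionTagSketch = "2012" Else GetOSVersionTagSketch = "Win8Client"
            Case "6.3" : If bServer Then GetOSVersionTagSketch = "2012R2" Else GetOSVersionTagSketch = "Win81Client"
            Case Else  : GetOSVersionTagSketch = "Other"
        End Select
    End Function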

    To use this function to create a custom variable called OSVersionTag and use that in ModelOSArchAlias, add MDTLibHelperClasses.vbs (and MDTExitInclude.vbs and ModelAliasExit.vbs from the two previous posts referenced earlier) to the Scripts folder of your Deployment Share or Configuration Manager MDT Files package and add the following to CustomSettings.ini (used during Gather in the newly deployed OS):

    [Settings]
    Priority=IncludeExitScripts, ModelAliasVars, Default
    Properties=ExitScripts(*), OSVersionTag, ModelAlias, ModelOSArchAlias

    [IncludeExitScripts]
    UserExit=MDTExitInclude.vbs
    ExitScripts001=#Include("MDTLibHelperClasses.vbs")#
    ExitScripts002=#Include("ModelAliasExit.vbs")#

    [ModelAliasVars]
    OSVersionTag=#oHelperFunctions.GetOSVersionTag#
    ModelAlias=#SetModelAlias()#
    ModelOSArchAlias=%ModelAlias%_%OSVersionTag%_%Architecture%

     

     

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    This post was contributed by Michael Murgolo, a Senior Consultant with Microsoft Services - U.S. East Region

    Attached Media: application/zip (16 KB)
    Author: "Michael Murgolo" Tags: "Scripts, MDT, MDT 2012, User Exit Script"
    Date: Friday, 13 Sep 2013 22:53

    Most readers of this blog should be familiar with MDT User Exit scripts, as many posts here have provided them for a variety of scenarios.  In case you are not, the MDT help file defines them this way:

    “A user exit script is effectively a function library that can be called during the processing of the CustomSettings.ini file using the UserExit directive. A user exit script contains one or more functions that can be called during the process of the CustomSettings.ini file.”

    User exit scripts are a great way to extend the Gather process. However, how ZTIGather.wsf actually processes user exit scripts has implications for which scripts you can use as user exit scripts.

    When ZTIGather.wsf finds a UserExit entry in a CustomSettings.ini section, it actually processes the user exit script twice, once at the beginning of the section processing and once at the end.  When it does this it first loads the contents of the user exit script file into a string variable, executes the loaded code in the global namespace of a script using the VBScript ExecuteGlobal statement, and then calls the UserExit function that is included in the script.
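
    In pseudo-form (this is not the actual ZTIGather.wsf source, and the script name here is just a placeholder), that processing amounts to something like this:

    Dim oFSO, sScriptBody, sSection, bSkip, iRC
    sSection = "IncludeScript"                                   ' the CustomSettings.ini section being processed
    Set oFSO = CreateObject("Scripting.FileSystemObject")
    sScriptBody = oFSO.OpenTextFile("MyUserExit.vbs").ReadAll    ' placeholder name; ZTIGather resolves the real path
    ExecuteGlobal sScriptBody                                    ' defines the script's functions in the global namespace
    iRC = UserExit("SECTION", "BEFORE", sSection, bSkip)         ' called again with "AFTER" at the end of the section
    If bSkip = True Then oLogging.CreateEntry "Skipping section " & sSection, LogTypeInfo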

    The UserExit function I usually include is similar to this and requires the function signature (set of parameters) shown below:

    Function UserExit(sType, sWhen, sDetail, bSkip)
        oLogging.CreateEntry "USEREXIT:TestUserExit.vbs started: " & sType & " " & sWhen & " " & sDetail, LogTypeInfo
        UserExit = Success
    End Function

    The sType input parameter is always called with a value of "SECTION".  The sWhen input parameter is called with "BEFORE" at the beginning of section processing and "AFTER" at the end of section processing.  The sDetail input parameter is the CustomSettings.ini section name.  The bSkip parameter is a return parameter.  If bSkip is set to True in the UserExit function when it is called at the beginning of section processing, the rest of that section in CustomSettings.ini is skipped.

    This behavior allows you to run different code at the beginning or end of section processing or skip section processing based on code run at the beginning of section processing like the trivial samples shown below.

    Function UserExit(sType, sWhen, sDetail, bSkip)
        oLogging.CreateEntry "USEREXIT:TestUserExit.vbs started: " & sType & " " & sWhen & " " & sDetail, LogTypeInfo
        If sWhen = "BEFORE" Then oEnvironment.Item("WhenProperty") = "Before"
        If sWhen = "AFTER" Then oEnvironment.Item("WhenProperty") = "After"
        UserExit = Success
    End Function

    Function UserExit(sType, sWhen, sDetail, bSkip)
        oLogging.CreateEntry "USEREXIT:TestUserExit.vbs started: " & sType & " " & sWhen & " " & sDetail, LogTypeInfo
        If TestCondition = False Then bSkip = True
        UserExit = Success
    End Function

    While this functionality is interesting, I have never had any occasion where I needed to use it.  I only know one person who has put logic in a UserExit function, fellow Deployment Guy Dave Hornbaker.  So why did I bother explaining it to you?  Well, as I said earlier it has implications for which scripts you can use as user exit scripts.

    The first is that any script you want to use as a user exit script must have a UserExit function.  Fellow Deployment Guy Dave Hornbaker once needed to use ZTIDiskUtility.vbs functions in one of his user exit scripts.  Now he could have either added a UserExit function to ZTIDiskUtility.vbs so it could be used as a user exit script directly as well or added a script tag to ZTIGather.wsf that referenced ZTIDiskUtility.vbs.  However, we generally recommend against modifying the scripts that ship with MDT unless it is absolutely necessary.  If you do, then you have to remember to redo those changes when applying an update to MDT.

    The second is that when a user exit script contains VBScript Classes, the second ExecuteGlobal of the contents at the end of section processing fails with a “name redefined” error.  This causes ZTIGather.wsf execution to fail.

    To overcome both of these limitations, I created a user exit script (MDTExitInclude.vbs) with a function called Include that only loads the contents of a VBScript file into a string variable, executes the loaded code using the VBScript ExecuteGlobal statement, and only does this once.  So while this does not allow the little-used section processing control behavior I described above, this function essentially allows you to use any VBScript as a user exit script.
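
    The real function ships in the attached MDTExitInclude.vbs, but conceptually it is not much more than the sketch below; the return value, path handling, and dictionary bookkeeping shown here are simplified assumptions, and the real script also defines the required UserExit function.

    Dim oIncludedScripts : Set oIncludedScripts = CreateObject("Scripting.Dictionary")

    Function Include(sScriptName)
        Dim oFSO, sBody
        Include = "Included " & sScriptName
        If oIncludedScripts.Exists(UCase(sScriptName)) Then Exit Function          ' load each script only once
        Set oFSO = CreateObject("Scripting.FileSystemObject")
        sBody = oFSO.OpenTextFile(oUtility.ScriptDir & "\" & sScriptName).ReadAll  ' path resolution simplified
        ExecuteGlobal sBody                                                        ' classes/functions defined exactly once
        oIncludedScripts.Add UCase(sScriptName), True
    End Function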

    After adding MDTExitInclude.vbs to the Scripts folder in the LTI Deployment Share or ConfigMgr MDT Toolkit package, modify CustomSettings.ini similar to the example below to load MDTExitInclude.vbs as a user exit script and then load another script using the Include function.  This sample loads a script named TestUserExit.vbs and makes the functions inside it available for use.

    [Settings]
    Priority=IncludeScript, Default
    Properties=IncludeResult

    [IncludeScript]
    UserExit=MDTExitInclude.vbs
    IncludeResult=#Include("TestUserExit.vbs")#

    One other advantage of loading user exit scripts in this fashion is that it simplifies loading multiple scripts.  For example, I recently needed to load five user exit scripts for one deployment.  Normally this would require creating five sections in CustomSettings.ini since you can only have one UserExit directive per section.  Using MDTExitInclude.vbs you can load as many scripts as you want in a single section like this:

    [Settings]
    Priority=IncludeExitScripts, Default
    Properties=ExitScripts(*)

    [IncludeExitScripts]
    UserExit=MDTExitInclude.vbs
    ExitScripts001=#Include("MDTLibHelperClasses.vbs")#
    ExitScripts002=#Include("ModelAliasExit.vbs")#
    ExitScripts003=#Include("MDTConfigMgrFunctions.vbs")#
    ExitScripts004=#Include("MDTExitGetCollectionAdvertsDeploys.vbs")#
    ExitScripts005=#Include("MDTExitGetResourceAdvertsDeploys.vbs")#

    This sample uses one “throw away” list item, ExitScripts, to execute the Include function for each script.  This avoids “Settings section bloat” by not requiring multiple entries in the Priority or Properties line for just loading multiple scripts.

     

     

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    This post was contributed by Michael Murgolo, a Senior Consultant with Microsoft Services - U.S. East Region.

    Attached Media: application/zip (1 KB)
    Author: "Michael Murgolo" Tags: "Scripts, MDT, User Exit Script"
    Date: Tuesday, 13 Aug 2013 01:37

    There are occasions where the variables I needed to use to query or retrieve data from the database were not the ones that match the field names in the database.  Luckily, the MDT Gather process supports variable remapping in the database sections of CustomSettings.ini to handle exactly this situation.

    I demonstrated one type of variable remapping in my post on Model Aliases.  I wanted to query Model settings, apps, etc. using the ModelAlias instead of the Model.  Since Model is the field in the database that the queries use to find the records, we have to tell Gather that it should use the value found in the ModelAlias variable to query the Model field in the database instead.

    The standard Make/Model Settings database section looks like the one below.  The table (view) in the database is MakeModelSettings and the fields used to find the records are Make and Model.

    [MMSettings]
    SQLServer=%MDTSQLServer%
    Database=%MDTDatabase%
    Netlib=%MDTNetlib%
    SQLShare=%MDTSQLShare%
    Table=MakeModelSettings
    Parameters=Make, Model

    For my ModelAlias Settings sections I was only interested in finding records with a matching Model field, but the value I wanted to match was actually stored in the ModelAlias variable.  The changes in the Parameters and ModelAlias lines of the section below tell Gather to do exactly that:

    [MASettings]
    SQLServer=%MDTSQLServer%
    Database=%MDTDatabase%
    Netlib=%MDTNetlib%
    SQLShare=%MDTSQLShare%
    Table=MakeModelSettings
    Parameters=ModelAlias
    ModelAlias=Model

    This tells Gather to use ModelAlias for the value of the input parameter and that this parameter is to match the Model field.

    I recently needed to do the second type of variable remapping: placing the results from a field in the returned records into a variable that is different from the name of the field.  In my scenario, I needed to place the results of the different application queries into different list variables.  The default database sections for getting applications (CApps, LApps, MMApps, RApps) will fill the results into the Applications list variable, which matches the name of the field in the returned results.  I needed each of the queries to return the results into different variables so that I could later append them back together in a certain order, with results from some other steps in the task sequence in the mix.  For example, say I wanted to store the results of MMApps in a variable called ModelApps instead of Applications.

    The standard MMApps section looks like this:

    [MMApps]
    SQLServer=%MDTSQLServer%
    Database=%MDTDatabase%
    Netlib=%MDTNetlib%
    SQLShare=%MDTSQLShare%
    Table=MakeModelApplications
    Parameters=Make, Model
    Order=Sequence

    To return the results into ModelApps, you first need to define the custom variable in the Properties line and then add the remapping line (the last line below) to the MMApps section.  (In case you haven’t seen this before, adding (*) at the end of a variable name in the Properties line tells Gather that the custom variable is a list variable.)

    [Settings]
    Priority=MMApps, Default
    Properties=ModelApps(*)

    [MMApps]
    SQLServer=%MDTSQLServer%
    Database=%MDTDatabase%
    Netlib=%MDTNetlib%
    SQLShare=%MDTSQLShare%
    Table=MakeModelApplications
    Parameters=Make, Model
    Order=Sequence
    ModelApps=Applications

    One unfortunate quirk of this type of remapping is that the original variable (Applications) will also be filled with the results.  Be aware of this if you had hoped that Applications would remain empty so it could be used elsewhere, as I had.  I was going to use Applications as the final appended list; I had to use another custom list variable instead.

    If necessary, you can also use both types of remapping in database sections at the same time.
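
    For example, a hypothetical section that queries the MakeModelApplications view by ModelAlias and returns the results into ModelApps might look like the following.  This simply combines the two samples above and assumes ModelAlias has already been populated, for example by the ModelAlias user exit script:

    [Settings]
    Priority=MAApps, Default
    Properties=ModelApps(*)

    [MAApps]
    SQLServer=%MDTSQLServer%
    Database=%MDTDatabase%
    Netlib=%MDTNetlib%
    SQLShare=%MDTSQLShare%
    Table=MakeModelApplications
    Parameters=Make, ModelAlias
    Order=Sequence
    ModelAlias=Model
    ModelApps=Applications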

    Update 2013-08-20:  I forgot to mention that this same remapping of input parameters and output variables can also be done in CustomSettings.ini Web Service query sections.

    Update 2013-09-10:  I recently discovered that when remapping input parameters, if the parameter is a list item then the remapped value will be added to the original list item.  For example, recently I wanted to query a list of applications from a specific role name and map it to a specific output list item like this:

    [Settings]
    Priority=Common, RAppsCore
    Properties=RoleCore, CoreApps(*)

    [Common]
    RoleCore=Core Applications
    MDTSQLServer=SQLServer001
    MDTDatabase=MDT
    MDTNetlib=DBNMPNTW
    MDTSQLShare=SQLShare$

    [RAppsCore]
    SQLServer=%MDTSQLServer%
    Database=%MDTDatabase%
    Netlib=%MDTNetlib%
    SQLShare=%MDTSQLShare%
    Table=RoleApplications
    Parameters=RoleCore
    Order=Sequence
    RoleCore=Role
    CoreApps=Applications

    In this example, RoleCore is being input for the parameter Role.  Since Role is a list item, the value stored in RoleCore, “Core Applications” in this example, will be added as Role001.  This is another thing to keep in mind when remapping variables.

    I’ll describe ways to clear out the list items that are getting filled unintentionally by remapping in another post.

     

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    This post was contributed by Michael Murgolo, a Senior Consultant with Microsoft Services - U.S. East Region.

    Author: "Michael Murgolo"
    Date: Sunday, 16 Jun 2013 09:01

    The ActiveX Installer Service (AXIS) is a Windows technology that enables the installation of ActiveX controls to a standard user in the enterprise. It consists of a Windows service, a Group Policy administrative template, and a few changes in Internet Explorer behavior.

    Many organizations must install ActiveX controls on their desktops in order to ensure that a variety of programs that they must use on a daily basis will work properly. However, most ActiveX controls must be installed by a member of the Administrators group, and many organizations have configured or want to configure their users to run as standard users, which are non-administrative users that are members of the Users group. As a result, organizations often have to repackage and deploy the ActiveX controls to the users. In addition, many of these ActiveX controls must be regularly updated. Many organizations find this to be difficult and costly to manage for standard users.

    With Windows 7/8 the ActiveX Installer Service is a native OS service and you can easily deploy and update ActiveX controls to your standard user environments. The ActiveX Installer Service enables you to leverage Group Policy to define and manage approved host URLs that standard users can use to install ActiveX controls in a locked-down environment. For more information about AXIS, see: http://technet.microsoft.com/en-us/library/cc721964.aspx.

    Here is how the ActiveX Installer Service works:

    image

    1. Define a list of explicitly approved host URLs
    2. AXIS checks the Group Policy Object (GPO) to see if the URL is approved
    3. Internet Explorer asks AXIS to install the ActiveX control
    4. No admin credentials required for install if approved
    5. If not approved, administrator credentials required for install
    6. Only installs ActiveX controls with a .cab, .dll, or .ocx file extension

    AxInstallerService in Windows allows the corporate administrator to manage ActiveX controls while maintaining a strong security posture, by having users run as standard user with default file system settings. AXIS provides Group Policy options to configure trusted sources of ActiveX controls and a broker process to install controls from those trusted sources on behalf of standard users. The key benefit is that you can maintain a non-administrative security posture on user workstations along with centralized administrative control. AXIS relies on the IT administrator to identify trusted sources (typically Internet or intranet URLs) of ActiveX controls.

    When an object tag directs Internet Explorer to invoke a control, AXIS takes the following steps:

    1. Checks that the control is installed. If not, it must be installed prior to use
    2. Checks the AXIS policy setting to verify if the control is from a trusted source
    3. The specific check matches the host name of the URL specified in the CODEBASE attribute of the object tag against the list of trusted locations specified in policy
    4. Downloads and installs the control on the user’s behalf

    Some security zones settings configure the ability for computers to execute and/or download ActiveX controls. However, even if Internet Explorer allows an ActiveX control to be downloaded from the web site, the ActiveX control can only be installed from an elevated process or administrative account. One of the goals for enterprises is to only provide end users standard, non-administrative access to their operating system. This means that ActiveX controls downloaded from web sites – regardless of the web site’s security zone – cannot be installed by the end users.

    With Windows 7/8 and beyond, AXIS is a native Windows service that will install ActiveX controls on behalf of end-users. Enterprises can maintain a list of approved web sites, implemented via Group Policy, that will cause AXIS to install any required ActiveX controls for the end-user. Further, AXIS can be configured to install ActiveX controls from all Trusted Sites.

    The advantage of using AXIS over a software distribution tool is that no packaging of ActiveX controls is required, which significantly reduces the amount of time needed to get an ActiveX control installed in production. Group Policy-based administration enables rapid changes to the deployed computers. Leveraging AXIS involves some additional management, specifically maintaining the Group Policy object that lists the sites allowed to use AXIS. The control of ActiveX installation and functional state can be managed in enterprises via Active Directory Group Policy.

    Policy Settings (each available in both User and Machine scope):

    • Turn off ActiveX Opt-In Prompt (Policy Path: Windows Components\Internet Explorer)
    • Only use the ActiveX Installer Service for installation of ActiveX controls (Policy Path: Windows Components\Internet Explorer)
    • Only allow approved domains to use ActiveX without prompt (Policy Path: Windows Components\Internet Explorer\Internet Control Panel\Security\PER ZONE)
    • Disable Per-User Installation of ActiveX Controls (Policy Path: Windows Components\Internet Explorer)

    Turn off ActiveX Opt-In prompt: This policy setting allows you to turn off the ActiveX Opt-in prompt. The ActiveX Opt-in prevents websites from loading any COM object without prior approval. If a page attempts to load a COM object that Internet Explorer has not used before, an Information bar will appear asking the user for approval. If you enable this policy setting, the ActiveX Opt-in prompt will not appear. Internet Explorer does not ask the user for permission to load a control, and will load the ActiveX if it passes all other internal security checks. If you disable or do not configure this policy setting, the ActiveX Opt-In prompt will appear.

    Only use the ActiveX Installer Service for installation of ActiveX controls:
    This policy setting allows you to specify how ActiveX controls are installed. If you enable this policy setting, ActiveX controls will only install if the ActiveX Installer Service is present and has been configured to allow ActiveX controls to be installed. If you disable or do not configure this policy setting, ActiveX controls, including per-user controls, will be installed using the standard installation process.

    Disable Per-User Installation of ActiveX Controls: This policy setting allows you to disable the per-user installation of ActiveX controls. This policy only affects ActiveX controls that can be installed on a per-user basis. If you enable this policy setting, ActiveX controls cannot be installed on a per-user basis. If you disable or do not configure this policy setting, ActiveX controls can be installed on a per-user basis.

     

    Configuring the ActiveX Installer Service

    The ActiveX Installer Service is enabled by default in Windows 7/8; you only need the GPMC to configure it. You must configure the ActiveX Installer Service settings by using an administrative template in Group Policy. The administrative template consists of a list of approved installation sites, which the ActiveX Installer Service uses to determine whether an ActiveX control can be installed. We recommend Domain policies over Local policies.

    To configure the ActiveX Installer Service using the local GPMC (the steps for a Domain Policy are similar):

    1. Press Windows Key + R to open the Run command.
    2. Type mmc, and then click OK.
    3. In the File menu, click Add/Remove Snap-in.
    4. In the Add/Remove Snap-ins dialog box, select Group Policy Management Console, and then click Add.
    5. In the Select Group Policy Object dialog box, accept the default setting of the local computer or click Browse to configure a remote computer, and then click Finish.
    6. In the Add/Remove Snap-ins dialog box, click OK.
    7. In the console tree, expand Local Computer Policy, expand Computer Configuration, expand Administrative Templates, expand Windows Components, and then click ActiveX Installer Service.

      image
    8. In the details pane, click Approved Installation Sites for ActiveX Controls to edit
      image
    9. In the Approved Installation Sites for ActiveX Controls Properties dialog box, select Enabled, and then click Show next to Host URLs.
    10. In the Show Contents dialog box, type the host URL from which you want to allow ActiveX controls to be installed
    11. Type the values for the four ActiveX Installer Service host URL settings.
      image
    12. Click OK
    13. In the details pane, click Establish ActiveX installation policy for sites in Trusted zones to Edit.
    14. Make your selection for the Trusted zones
       imageimage
    15. Click OK to close

    When you add a URL, you can specify comma-delimited values that detail the settings for the ActiveX Installer Service.
    You can configure four values:

    • Installing ActiveX controls that have trusted signatures
    • Installing signed ActiveX controls
    • Installing unsigned ActiveX controls
    • HTTPS error exceptions
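
    For illustration, an approved entry might look like the following (activex.contoso.com is a placeholder host; the exact meaning of each position is described in the policy's help text and the TechNet article linked above):

    Host URL:  https://activex.contoso.com
    Value:     2,1,0,0

    Read in the order of the list above, a value of 2,1,0,0 would silently install controls with trusted signatures, prompt for other signed controls, block unsigned controls, and require all HTTPS certificate checks to pass (the setting of 0 recommended in the practices below).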

     

    ActiveX Recommended Practices

    Only install ActiveX controls from reputable organizations -
    We recommend that you only install ActiveX controls from publishers that you know and trust. The ActiveX Installer Service does not determine whether the host presenting the ActiveX control is connected to a secure network. Ensuring that you only install ActiveX controls from reputable publishers will help mitigate this threat.

    Deploy commonly used ActiveX controls -
    We recommend that you deploy ActiveX controls that are commonly used in your environment by using your organization's application deployment method. Many users today use laptops to connect to multiple networks, including wireless hot spots. A malicious proxy at an insecure network could attempt to trick the ActiveX Installer Service by redirecting it to a host with malicious software that represents itself as a commonly used ActiveX control. Ensuring that you deploy commonly used ActiveX controls for your users will help mitigate this threat.

    Only use HTTPS host URLs -
    We recommend that you only modify the value for HTTPS error exceptions to require the connection to pass all verification checks (0). If a remote user connects to an insecure wireless network and the proxy attempts to redirect the connection, this setting will ensure that the ActiveX control installation fails since the certificate will be invalid.

    Consolidate ActiveX controls to a central server -
    We recommend that you consolidate the ActiveX controls you use in your organization to a central server. The location where a Web site hosts an ActiveX control is called a CODEBASE. Normally, the CODEBASE is specified in the Web page, and the installation process retrieves the ActiveX control from that location.
    In managed enterprises, you can use Group Policy to override the CODEBASE that is specified within the Web page to redirect to an internal server. Using this setting allows you to easily manage which ActiveX controls users can install by consolidating the ActiveX controls onto a central server; if the server is an HTTPS server, you also satisfy the previous recommended practice, only use HTTPS host URLs.
    You can configure a common Group Policy setting to redirect all ActiveX control installations to a central server in your organization. You can do this by using the CodeBaseSearchPath registry key. For more information on the CodeBaseSearchPath, see Implementing Internet Component Download at http://go.microsoft.com/fwlink/?LinkId=90677.
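
    As a sketch of what this looks like (assuming the standard key location documented in the article linked above; the server URL is a placeholder), the redirection could be set directly with PowerShell, or pushed with a Group Policy registry preference that writes the same value:

    # Redirect ActiveX CODEBASE downloads to a central internal server (placeholder URL).
    # Omitting the CODEBASE token prevents falling back to the URL specified in the web page.
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings' `
                     -Name 'CodeBaseSearchPath' `
                     -Value 'https://activex.contoso.com/controls'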

     

    AXIS Implementation Checklist

    1. Gather ActiveX controls - You can assess which controls, if any, are appropriate to use within your organization. You may need to gather an inventory of existing ActiveX controls already in production use. The Microsoft Assessment and Planning Toolkit or the Application Compatibility Manager that is part of the Windows 8 ADK will help with the inventory.
    2. Create and implement Group Policies

     

    Most Common Controls

     

    More Information about ActiveX can be found:


    This post is based on the work of Steve Campbell (Architect with Microsoft Consulting Services US) and was contributed by Lutz Seidemann, a Solution Architect with Microsoft Consulting Services – World Wide Client Center of Excellence.

    The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    Author: "Lutz Seidemann" Tags: "Tools, AXIS, GPO, Windows 7, Windows 8"
    Date: Friday, 14 Jun 2013 08:22

    So – your development cycles have been completed and now you are ready to deploy the much anticipated Windows 8 based application that you have developed to your clients. You will quickly realize that the deployment of your newly created Windows 8 application cannot happen until the appx assembly has been signed. All methods of deployment (Windows Store, PowerShell or System Center 2012 Configuration Manager) require the application to be signed using a certificate issued by a trusted source before you can deploy it.

    If your application was developed with the intention of staying within the corporate landscape, then you may use a certificate issued by an internally hosted trusted CA. A lot of documentation is available about the requirements of the certificate issued, but a how-to guide was not available until now. This blog post will walk you through the steps required to install an internally developed application to production systems.

    The screen captures in this blog post were made using a Windows Server 2012 Domain Controller, a Windows Server 2012 Certificate Authority, Visual Studio 2012, and Windows 8 Enterprise. The procedures for Windows Server 2008 R2 vary slightly, but the same certificate requirements can still be met.

    The diagram below identifies the workflow that this blog post will walk you through.

    clip_image002

     

    Get the Certificate

    Visual Studio will validate the certificate used to sign the app in the following ways:

    • Verifies the presence of the Basic Constraints extension and its value, which must be either Subject Type=End Entity or unspecified.
    • Verifies the value of the Enhanced Key Usage property, which must contain Code Signing and may also contain Lifetime Signing. Any other EKUs are prohibited.
    • Verifies the value of the KeyUsage (KU) property, which must be either Unset or DigitalSignature.
    • Verifies that a private key exists.
    • Verifies whether the certificate is active, hasn’t expired, and hasn't been revoked.

    Create the Template

    The built-in Windows 2008 R2 or Windows 2012 templates will not allow the creation of a certificate which meets all of these requirements. A new template must be created which allows the issuance of a properly configured certificate.

    Load an MMC and add the Certificate Authority and Certificate Templates snap-ins.

    clip_image002[6]

    Select Certificate Templates > Right Click on Code Signing > Duplicate Template

    clip_image004

    On the Compatibility tab

    · Change Certificate Authority to Windows Server 2008 R2 or Higher

    · Change the Certificate Recipient to Windows 7/Server 2008 R2 or Higher

    Note: These two changes allow the Basic Constraints Extension to be enabled.

    clip_image006

    On the Request Handling tab

    · Check the box to allow private key to be exported

    clip_image008

    On the General tab

    · Provide a useful name for this new template

    clip_image010

    On the Extensions tab

    · Click on the Application Policies Extension and verify Code Signing

    Note: For additional security, you can also add the Lifetime Signing extension to this template to ensure the signing certificate is no longer valid after expiration.

    clip_image012

    On the Extensions tab

    · Click on Basic Constraints and click Edit and check the box to Enable this extension.

    Note: If this checkbox is grayed out, make sure the certificate template is set properly on the Compatibility tab

    clip_image014

    On the Subject Name tab

    · Select the Supply in the request radio button and Click OK on the warning

    clip_image016

    On the Security tab

    · Add a user or group to allow them to enroll the certificate. They must have the Read and Enroll permissions.

    clip_image018

    In the MMC, expand Certificate Authority > {CAName} > Right Click Certificate Templates > New > Certificate Template to Issue

    Select the Template Name just created > Click OK

    clip_image020

    Notice the APPX Code Signing Template is now listed on the CA under Certificate Templates

    clip_image022

    Request the Certificate

    The certificate template has been created and now must be requested to generate a .cer file that will be placed in the local store on the computer the request is made from. It doesn’t matter which system makes the request because the .cer is immediately used to generate the .pfx file needed to sign the application.

    Open an MMC, add the Certificates snap-in, and select the My user account radio button.

    In the MMC > Expand Certificates – Current user > Personal > Right Click on Certificates > All Tasks > Request New Certificate

    Note: The computer store can be used as well, but the computer account would need permission to enroll the certificate. In this example, we only added permissions for the application developers group.

    clip_image002[8]

    Click Next on the Before You Begin screen

    clip_image004[6]

    On the Select Certificate Enrollment Policy screen

    · Ensure Active Directory Enrollment Policy is selected

    · Click Next

    clip_image006[7]

    On the Request Certificates screen

    · Click on the link below the APPX Code Signing template to configure additional settings

    Note: The Enroll button cannot be selected until the missing settings are configured

    clip_image008[5]

    On the Certificate Properties screen

    · Under Subject Name the type should be Common Name

    · Value must be the same as the Publisher value in the Visual Studio 2012 package.appxmanifest

    · Click Add

    Note: The CN= is automatically prepended and is not required when typing the Publisher Name. In this example just ContosoAppDev was entered in the value textbox.
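
    For reference, this value corresponds to the Publisher attribute of the Identity element in package.appxmanifest, which looks something like this (the name and version here are placeholder values):

    <Identity Name="ContosoApp"
              Publisher="CN=ContosoAppDev"
              Version="1.0.0.0" />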

    clip_image010[5]

    clip_image012[5]

    On the Request Certificates screen

    · APPX Code Signing is selected

    · Click Enroll

    clip_image014[5]

    On the Certificate Installation Results screen

    · Check the status

    · Click finish

    clip_image016[5]

    On the Certificates – Current User MMC

    · The new certificate will be listed

    clip_image018[5]
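
    If you would rather script the enrollment than use the MMC wizard, the PKI module in Windows 8 / Windows Server 2012 can request a certificate from the same template. A sketch, assuming the template's name is APPXCodeSigning (use the name shown on the template's General tab) and the example publisher value from above:

    # Request a code-signing certificate from the duplicated template and place it in the user store.
    Get-Certificate -Template 'APPXCodeSigning' `
                    -SubjectName 'CN=ContosoAppDev' `
                    -CertStoreLocation Cert:\CurrentUser\My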

     

    Export to PFX

    Visual Studio requires the .pfx format to sign the application. In the previous step, we generated a certificate that now sits in the user store. We need to export it as a .pfx in preparation for signing.

    On the Certificates – Current User MMC

    · Right Click the New Certificate > Click All Tasks > Click Export

    clip_image002[10]

    On the Welcome screen

    · Click Next

    clip_image004[9]

    On the Export Private key screen

    · Click ‘Yes, export the private key’

    · Click Next

    clip_image006[10]

    On the Export File Format screen

    · Ensure Personal Information Exchange is selected

    · Ensure Include all certificates in the certification path if possible is checked

    · Check Export all extended properties

    · Click Next

    clip_image008[8]

    On the Security screen

    · Select the Password checkbox

    · Enter a password (this will be needed during import into Visual Studio 2012)

    · Click Next

    clip_image010[8]

    On the File to Export screen

    · Provide a path and filename

    · Click Next

    clip_image012[8]

    On the Completing the Certificate Export Wizard screen

    · Click Next

    clip_image014[8]

    On the Certificate Export Wizard message box

    · Click OK

    clip_image016[8]
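
    The export can also be scripted, and it is worth confirming that the exported file still meets the validation checks listed earlier (Basic Constraints, Enhanced Key Usage, Key Usage, private key). A sketch using the PKI module, with placeholder paths and the example subject name from above:

    # Export the enrolled certificate (located here by subject name; adjust the filter as needed).
    $cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq 'CN=ContosoAppDev' }
    $pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
    Export-PfxCertificate -Cert $cert -FilePath C:\Certs\AppxSigning.pfx -Password $pfxPassword

    # Spot-check the private key and the Code Signing EKU before handing the file to Visual Studio.
    $check = Get-PfxCertificate C:\Certs\AppxSigning.pfx
    $check.HasPrivateKey
    ($check.Extensions | Where-Object { $_ -is [System.Security.Cryptography.X509Certificates.X509EnhancedKeyUsageExtension] }).EnhancedKeyUsages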

    Sign the Application

    Open Windows Explorer to the location where the pfx file was saved.

    Note: The pfx file should be moved to a computer with VS 2012 installed.

    clip_image001

    Open Visual Studio 2012 project to be signed

    · double click the package.appxmanifest

    · Click Choose Certificate…

    clip_image003

    On the Choose Certificate screen

    · Click Configure Certificate > Select from File…

    clip_image005

    On the Select File screen

    · Navigate to and select the exported PFX file

    · Click Open

    clip_image007

    On the Enter Password screen

    · Enter Password

    · Click OK

    clip_image009

    On the Choose Certificate screen

    · Click OK

    clip_image011

    Package the signed APPX

    We have created the .pfx file needed to sign the application in the previous steps, so now we can sign our application.

    Open Visual Studio 2012 project to be packaged

     

    Inside the project

    · Right click the Project

    · Click Rebuild

    clip_image002[12]

    Inside Solution Explorer

    · Right click the solution to be packaged

    · Click Store

    · Click Create App Package

    clip_image004[11]

    On Create Your Package screen

    · Select No

    · Click Next

    clip_image006[12]

    On the Select and Configure Packages screen

    · Specify the path for the package to be placed

    · Click Create

    clip_image008[10]

    On the Package Creation Completed screen

    · Click OK

    Note: You may click on the link provided to navigate to the location the package was placed.

    clip_image010[10]

    Configure Group Policy

    In order to deploy a Windows 8 application using sideloading, the computer receiving the package must either have a developer license (used for testing purposes only) or the appropriate local/Group Policy settings so that trusted applications can be installed.

    Open Group Policy Management

    · Right click where you want to link the new Group Policy

    · Click Create a GPO in this domain and Link it here…

    Note: The Windows 8 systems must be within the scope of the location where the new GPO is linked.

    clip_image002[14]

    On the new GPO screen

    · Name the GPO appropriately

    · Click OK

    clip_image004[13]

    On the GPMC

    · Right click the new policy

    · Click Edit…

    clip_image005

    On the Group Policy Management Editor screen

    · Expand Computer Configuration > Policies > Administrative Templates > Windows Components > App Package Deployment

    · Right Click Allow all trusted apps to install > Click Edit

    clip_image007[7]

    On Allow trusted apps to install screen

    · Select Enabled

    · Click OK

    clip_image009[5]
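
    With the policy in place (and the signing certificate's chain trusted on the target computer), the signed package can be sideloaded with PowerShell, one of the deployment methods mentioned at the start. A minimal sketch; the package file name below is a placeholder:

    # Install the signed, sideloaded package for the current user.
    Add-AppxPackage -Path 'C:\Packages\ContosoApp_1.0.0.0_AnyCPU.appx'

    # Confirm the installation.
    Get-AppxPackage -Name '*ContosoApp*'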

     

    This post was contributed by John Taylor, a Senior Consultant with Microsoft National IT Operational Consulting – US.

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    Attached Media: application/octet-stream (2 218 ko)
    Author: "DeploymentGuys" Tags: "Deployment, ConfigMgr 2012, Windows 8"
    Date: Friday, 14 Jun 2013 08:22

    So – your development cycles have been completed and now you are ready to deploy the much anticipated Windows 8 based application that you have developed to your clients. You will quickly realize that the deployment of your newly created Windows 8 application cannot happen until the appx assembly has been signed. All methods of deployment (Windows Store, PowerShell or System Center 2012 Configuration Manager) require the application to be signed using a certificate issued by a trusted source before you can deploy it.

    If your application was developed with the intention of staying within the corporate landscape, then you may use a certificate issued by an internally hosted trusted CA. A lot of documentation is available about the requirements of the certificate issued, but a how-to guide was not available until now. This blog post will walk you through the steps required to install an internally developed application to production systems.

    The screen captures in this blog post are performed using Windows Server 2012 Domain Controller, Windows Server 2012 Certificate Authority, Visual Studio 2012 and Windows 8 Enterprise. The procedures for Windows Server 2008 R2 vary slightly, but the same certificate requirements can been completed.

    The diagram below identifies the workflow that this blog post will walk you through.

    clip_image002

     

    Get the Certificate

    Visual Studio will validate the certificate used to sign the app in the following ways:

    • Verifies the presence of the Basic Constraints extension and its value, which must be either Subject Type=End Entity or unspecified.
    • Verifies the value of the Enhanced Key Usage property, which must contain Code Signing and may also contain Lifetime Signing. Any other EKUs are prohibited.
    • Verifies the value of the KeyUsage (KU) property, which must be either Unset or DigitalSignature.
    • Verifies the existence of a private key exists.
    • Verifies whether the certificate is active, hasn’t expired, and hasn't been revoked.

    Create the Template

    The built-in Windows 2008 R2 or Windows 2012 templates will not allow the creation of a certificate which meets all of these requirements. A new template must be created which allows the issuance of a properly configured certificate.

    Load an MMC and add the Certificate Authority and Certificate Templates

    clip_image002[6]

    Select Certificate Templates > Right Click on Code Signing > Duplicate Template

    clip_image004

    On the Compatibility tab

    · Change Certificate Authority to Windows Server 2008 R2 or Higher

    · Change the Certificate Recipient to Windows 7/Server 2008 R2 or Higher

    Note: These two changes allow the Basic Constraints Extension to be enabled.

    clip_image006

    On the Request Handling tab

    · Check the box to allow private key to be exported

    clip_image008

    On the General tab

    · Provide a useful name for this new template

    clip_image010

    On the Extensions tab

    · Click on the Application Policies Extension and verify Code Signing

    Note: For additional security, you can also add the Lifetime Signing extension to this template to ensure the signing certificate is no longer valid after expiration.

    clip_image012

    On the Extensions tab

    · Click on Basic Constraints and click Edit and check the box to Enable this extension.

    Note: If this checkbox is grayed out, make sure the certificate template is set properly on the Compatibility tab

    clip_image014

    On the Subject Name tab

    · Select the Supply in the request radio button and Click OK on the warning

    clip_image016

    On the Security tab

    · Add a user or group to allow them to enroll the certificate. The must have the Read and Enroll permissions.

    clip_image018

    In the MMC, expand Certificate Authority > {CAName} > Right Click Certificate Templates > New > Certificate Template to Issue

    Select the Template Name just created > Click OK

    clip_image020

    Notice the APPX Code Signing Template is now listed on the CA under Certificate Templates

    clip_image022

    Request the Certificate

    The certificate template has been created and now must be requested to generate a .cer file that will be placed in the local store on the computer the request is made from. It doesn’t matter which system makes the request because the .cer is immediately used to generate the .pfx file needed to sign the application.

    Open an MMC and add the certificates snap-in and select My User account radio button.

    In the MMC > Expand Certificates – Current user > Personal > Right Click on Certificates > All Tasks > Request New Certificate

    Note: The computer store can be used as well, but the computer account would need permission to enroll the certificate. In this example, we only added permissions for the application developers group.

    clip_image002[8]

    Click Next on the Before You Begin screen

    clip_image004[6]

    On the Select Certificate Enrollment Policy screen

    · Ensure Active Directory Enrollment Policy is selected

    · Click Next

    clip_image006[7]

    On the Request Certificates screen

    · Click on the link below the APPX Code Signing template to configure additional settings

    Note: The Enroll button cannot be selected until the missing settings are configured

    clip_image008[5]

    On the Certificate Properties screen

    · Under Subject name, the Type should be Common name

    · The Value must be the same as the Publisher value in the Visual Studio 2012 package.appxmanifest

    · Click Add

    Note: CN= is added automatically and does not need to be typed with the publisher name. In this example just ContosoAppDev was entered in the Value textbox.

    clip_image010[5]

    clip_image012[5]

    On the Request Certificates screen

    · APPX Code Signing is selected

    · Click Enroll

    clip_image014[5]

    On the Certificate Installation Results screen

    · Check the status

    · Click Finish

    clip_image016[5]

    On the Certificates – Current User MMC

    · The new certificate will be listed

    clip_image018[5]
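
    If you prefer to script the enrollment instead of using the MMC wizard, the Get-Certificate cmdlet included with Windows 8 and Windows Server 2012 can submit the same request. This is a sketch only; it assumes the template's internal name is APPXCodeSigning (adjust to the name used in your environment) and that the requesting user has the Read and Enroll permissions granted earlier.

    # Sketch only: enroll for the code signing certificate via the Active Directory Enrollment Policy.
    Get-Certificate -Template "APPXCodeSigning" -SubjectName "CN=ContosoAppDev" -CertStoreLocation Cert:\CurrentUser\My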

     

    Export to PFX

    Visual Studio requires the .pfx format to sign the application. In the previous step we enrolled for a certificate, which is now located in the user store along with its private key. We need to export it to a .pfx file in preparation for signing.

    On the Certificates – Current User MMC

    · Right Click the New Certificate > Click All Tasks > Click Export

    clip_image002[10]

    On the Welcome screen

    · Click Next

    clip_image004[9]

    On the Export Private key screen

    · Click ‘Yes, export the private key’

    · Click Next

    clip_image006[10]

    On the Export File Format screen

    · Ensure Personal Information Exchange is selected

    · Ensure Include all certificates in the certification path if possible is checked

    · Check Export all extended properties

    · Click Next

    clip_image008[8]

    On the Security screen

    · Select the Password checkbox

    · Enter a password (this will be needed during import into Visual Studio 2012)

    · Click Next

    clip_image010[8]

    On the File to Export screen

    · Provide a path and filename

    · Click Next

    clip_image012[8]

    On the Completing the Certificate Export Wizard screen

    · Click Next

    clip_image014[8]

    On the Certificate Export Wizard message box

    · Click OK

    clip_image016[8]
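
    The export can also be scripted. The sketch below uses the Export-PfxCertificate cmdlet available on Windows 8 and Windows Server 2012 to produce a password-protected .pfx file like the wizard does; the subject name and output path are the example values assumed throughout this post.

    # Sketch only: export the enrolled certificate and its private key to a .pfx file.
    # The output folder (C:\Certs) is a placeholder and must already exist.
    $cert = Get-ChildItem Cert:\CurrentUser\My |
            Where-Object { $_.Subject -eq "CN=ContosoAppDev" }
    $password = Read-Host -Prompt "PFX password" -AsSecureString
    Export-PfxCertificate -Cert $cert -FilePath "C:\Certs\ContosoAppDev.pfx" -Password $password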

    Sign the Application

    Open Windows Explorer to the location where the pfx file was saved.

    Note: The pfx file should be moved to a computer with VS 2012 installed.

    clip_image001

    Open the Visual Studio 2012 project to be signed

    · Double-click the package.appxmanifest

    · Click Choose Certificate…

    clip_image003

    On the Choose Certificate screen

    · Click Configure Certificate > Select from File…

    clip_image005

    On the Select File screen

    · Navigate to and select the exported PFX file

    · Click Open

    clip_image007

    On the Enter Password screen

    · Enter Password

    · Click OK

    clip_image009

    On the Choose Certificate screen

    · Click OK

    clip_image011

    Package the signed APPX

    The .pfx file needed to sign the application was configured in the previous steps, so now we can build and package the signed application.

    Open the Visual Studio 2012 project to be packaged

     

    Inside the project

    · Right click the Project

    · Click Rebuild

    clip_image002[12]

    Inside Solution Explorer

    · Right click the solution to be packaged

    · Click Store

    · Click Create App Package

    clip_image004[11]

    On Create Your Package screen

    · Select No

    · Click Next

    clip_image006[12]

    On the Select and Configure Packages screen

    · Specify the path for the package to be placed

    · Click Create

    clip_image008[10]

    On the Package Creation Completed screen

    · Click OK

    Note: You may click on the link provided to navigate to the location where the package was placed.

    clip_image010[10]
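
    If you ever need to sign or re-sign an existing .appx package outside of Visual Studio, signtool.exe from the Windows SDK can use the same .pfx file. The sketch below assumes signtool.exe is on the path and uses placeholder package and certificate paths:

    # Sketch only: sign an existing package with the exported .pfx, then verify the signature.
    # /fd must match the hash algorithm used when the package was created (SHA256 is the default).
    $pfx  = "C:\Certs\ContosoAppDev.pfx"
    $appx = "C:\Packages\ContosoApp_1.0.0.0_AnyCPU.appx"
    $pfxPassword = Read-Host -Prompt "PFX password"
    & signtool.exe sign /fd SHA256 /f $pfx /p $pfxPassword $appx
    & signtool.exe verify /pa $appx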

    Configure Group Policy

    In order to deploy a Windows 8 application by sideloading, the computer receiving the package must either have a developer license (used for testing purposes only) or the appropriate local/Group Policy setting that allows trusted applications to be installed.

    Open Group Policy Management

    · Right click where you want to link the new Group Policy

    · Click Create a GPO in this domain and Link it here…

    Note: The Windows 8 systems must be located within the site, domain, or OU to which the new GPO is linked

    clip_image002[14]

    On the new GPO screen

    · Name the GPO appropriately

    · Click OK

    clip_image004[13]

    On the GPMC

    · Right click the new policy

    · Click Edit…

    clip_image005

    On the Group Policy Management Editor screen

    · Expand Computer Configuration > Policies > Administrative Templates > Windows Components > App Package Deployment

    · Right Click Allow all trusted apps to install > Click Edit

    clip_image007[7]

    On the Allow all trusted apps to install screen

    · Select Enabled

    · Click OK

    clip_image009[5]
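
    Once the GPO has applied, the signed package can be installed on a Windows 8 client. The sketch below assumes the package path shown and that the client already trusts the enterprise CA that issued the signing certificate; the AllowAllTrustedApps registry value checked below is the setting this policy is expected to write.

    # Sketch only: confirm the sideloading policy has applied, then install the package.
    Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Appx" -Name AllowAllTrustedApps

    # Install the signed package for the current user
    Add-AppxPackage -Path "C:\Packages\ContosoApp_1.0.0.0_AnyCPU.appx"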

     

    This post was contributed by John Taylor, a Senior Consultant with Microsoft National IT Operational Consulting – US.

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    Attached Media: application/octet-stream (2 218 ko)
    Author: "DeploymentGuys" Tags: "Deployment, ConfigMgr 2012, Windows 8"
    Date: Friday, 07 Jun 2013 18:48

    In October last year I published a script that is designed to remove the built-in Windows 8 applications when creating a Windows 8 image. After reading some of the comments on that blog post I decided to create a new version of the script that is simpler to use. The new script removes the need to know the full name of the app and the different names for each architecture. I am sure you will agree that a name like Microsoft.Bing is much easier to manage than Microsoft.Bing_1.2.0.137_x86__8wekyb3d8bbwe.

    The script below takes a simple list of apps and then removes both the provisioned package and the package that is installed for the Administrator. To adjust the script for your requirements, simply update the $AppsList comma-separated list to include the apps you want to remove.

    $AppsList = "Microsoft.Bing" , "Microsoft.BingFinance" , "Microsoft.BingMaps" , "Microsoft.BingNews",` 
                "Microsoft.BingSports" , "Microsoft.BingTravel" , "Microsoft.BingWeather" , "Microsoft.Camera",` 
                "microsoft.microsoftskydrive" , "Microsoft.Reader" , "microsoft.windowscommunicationsapps",` 
                "microsoft.windowsphotos" , "Microsoft.XboxLIVEGames" , "Microsoft.ZuneMusic",` 
                "Microsoft.ZuneVideo" , "Microsoft.Media.PlayReadyClient"

    ForEach ($App in $AppsList)
    {
        $PackageFullName = (Get-AppxPackage $App).PackageFullName
        if ((Get-AppxPackage $App).PackageFullName)
        {
            Write-Host "Removing Package: $App"
            remove-AppxProvisionedPackage -online -packagename $PackageFullName
            remove-AppxPackage -package $PackageFullName
        }
        else
        {
            Write-Host "Unable to find package: $App"
        }
    }
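
    As a usage sketch, the script could be saved as RemoveApps.ps1 (a file name chosen here for illustration) in the Scripts folder of your deployment share or toolkit package and called from a Run Command Line step during image creation:

    powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%DeployRoot%\Scripts\RemoveApps.ps1"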

     

     

    For more information on adding and removing apps please refer to this TechNet article.

    This post was contributed by Ben Hunter, a Solution Architect with Microsoft Consulting Services.

    Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

    Author: "Ben Hunter"