Maintenance of Indexes and Fulltext-Catalog of FIM SQL databases

This is a follow-up to two posts I wrote in the past. To avoid index fragmentation and the issues I had with sets, I implemented two SQL jobs to keep the databases clean.

Speed up FIM 2010 R2 SQL performance by rebuild/reorganize indexes

FIM 2010 R2: SQL timeout on using large sets in other sets

For around a month now, the following two SQL jobs have been working perfectly in my customer’s environment, even in production, so I think it is safe to implement them; but you should test them in your own environment first.

By default, both SQL jobs run weekly, every Sunday.

The first SQL job starts a PowerShell script to maintain the indexes on all tables of the FIMService database. The script only touches indexes with fragmentation higher than 20% and rebuilds at most 100 indexes per run (you can adjust both values in the script).

The PowerShell script is not my work; see the author information at the script’s URL.

Make sure you replace the DOMAIN\USER and SERVERNAME placeholders in the script with values that fit your environment.
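If you want to see beforehand which indexes the job will pick up, you can check the current fragmentation with a query like the following. This is a read-only sketch using the standard sys.dm_db_index_physical_stats DMV, with the same 20% threshold the job uses; adjust the database name if yours differs:

```sql
-- Read-only fragmentation check for all indexes in FIMService (sketch).
USE [FIMService];
SELECT OBJECT_NAME(ips.object_id)        AS TableName,
       i.name                            AS IndexName,
       ips.partition_number              AS PartitionNumber,
       ips.avg_fragmentation_in_percent  AS AvgFragmentationPercent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 20   -- same threshold as the job below
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

The 'SAMPLED' scan mode matches what the job’s script uses via SMO, so the numbers should be comparable.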

USE [msdb]
GO

/****** Object:  Job [Index Rebuild on all FIMService tables]    Script Date: 11/17/2013 14:08:12 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 11/17/2013 14:08:12 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'Index Rebuild on all FIMService tables',
		@enabled=1,
		@notify_level_eventlog=0,
		@notify_level_email=0,
		@notify_level_netsend=0,
		@notify_level_page=0,
		@delete_level=0,
		@description=N'No description available.',
		@category_name=N'[Uncategorized (Local)]',
		@owner_login_name=N'DOMAIN\USER', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Index rebuild step]    Script Date: 11/17/2013 14:08:12 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Index rebuild step',
		@step_id=1,
		@cmdexec_success_code=0,
		@on_success_action=1,
		@on_success_step_id=0,
		@on_fail_action=2,
		@on_fail_step_id=0,
		@retry_attempts=0,
		@retry_interval=0,
		@os_run_priority=0, @subsystem=N'PowerShell',
		@command=N'PUSHD SQLSERVER:\SQL\SERVERNAME\DEFAULT

# PowerShell script: iterates over all tables in the database, gathers the set of indexes,
# then for every index gathers all partitions and rebuilds the fragmented partitions
# To execute this script:
#      Launch SQL PowerShell ( Start -> Run -> sqlps.exe)
#      in Powershell window CD  SQL\machine_name\instance_name; Example: CD SQL\demo-machine\DEFAULT
#      Copy the following script and paste it in SQL powershell window  to run this script
# http://sethusrinivasan.com/2012/02/14/index-rebuild-on-large-database-sql-agent-powershell-job/

# following variables can be updated
# database Name
$dbName = "FIMService"
# number of indexes to rebuild, script terminates after Rebuilding specified number of indexes
$indexesToProcess = 100
# fragmentation threshold - indexes with fragmentation less than this value will be skipped
$fragmentationThreshold = 20

$processedIndex = 0
$tables = dir Databases\$dbName\Tables
"Listing all tables from Database:" + $dbName

foreach($table in $tables)
{
   "    Listing Indexes for Table:" + $table
   foreach($index in $table.Indexes)
   {
        "    Listing Physical Partitions for Indexes:" + $index
        foreach($partition in $index.PhysicalPartitions)
        {
            $fragInfo = $index.EnumFragmentation([Microsoft.SqlServer.Management.Smo.FragmentationOption]::Sampled,
                                    $partition.PartitionNumber)
            $fragmentation = $fragInfo.Rows.Item(0)["AverageFragmentation"]

            "        Checking if fragmentation on " +  $index.Name + " is greater than " + $fragmentationThreshold
            "        Current Fragmentation:" + $fragmentation
            "        Partition:" + $partition.PartitionNumber
            if($fragmentation -gt $fragmentationThreshold)
            {
                "        Rebuilding Index: " + $index.Name + " partition:" + $partition.PartitionNumber
                $processedIndex = $processedIndex + 1
                if($index.IsPartitioned -eq $True)
                {
                    $index.Rebuild($partition.PartitionNumber)
                }
                else
                {
                    $index.Rebuild()
                }
            }

            if ( $processedIndex -ge $indexesToProcess)
            {
                break
            }
        }

        if ( $processedIndex -ge $indexesToProcess)
        {
            break
        }
    }

    if ( $processedIndex -ge $indexesToProcess)
    {
        break
    }
}

POPD',
		@database_name=N'master',
		@flags=48
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Index rebuild weekly',
		@enabled=1,
		@freq_type=8,
		@freq_interval=1,
		@freq_subday_type=1,
		@freq_subday_interval=0,
		@freq_relative_interval=0,
		@freq_recurrence_factor=1,
		@active_start_date=20120214,
		@active_end_date=99991231,
		@active_start_time=130000,
		@active_end_time=235959,
		@schedule_uid=N'69a997b3-6475-4c18-bd87-9f4cf27e687a'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

The second SQL job uses the built-in SQL function to optimize (reorganize) the FIMService full-text catalog. You can use this SQL script or the built-in wizard to create the job.

If you use this script, make sure to replace the DOMAIN\USER placeholder with a value that fits your environment.

USE [msdb]
GO

/****** Object:  Job [Start Optimize Catalog Population on FIMService.ftCatalog]    Script Date: 11/17/2013 14:13:07 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [Full-Text]    Script Date: 11/17/2013 14:13:07 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'Full-Text' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'Full-Text'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'Start Optimize Catalog Population on FIMService.ftCatalog',
		@enabled=1,
		@notify_level_eventlog=2,
		@notify_level_email=0,
		@notify_level_netsend=0,
		@notify_level_page=0,
		@delete_level=0,
		@description=N'Scheduled full-text optimize catalog population for full-text catalog ftCatalog in database FIMService. This job was created by the Full-Text Catalog Scheduling dialog or Full-Text Indexing Wizard.',
		@category_name=N'Full-Text',
		@owner_login_name=N'DOMAIN\USER', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Full-Text Indexing]    Script Date: 11/17/2013 14:13:07 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Full-Text Indexing',
		@step_id=1,
		@cmdexec_success_code=0,
		@on_success_action=1,
		@on_success_step_id=-1,
		@on_fail_action=2,
		@on_fail_step_id=-1,
		@retry_attempts=0,
		@retry_interval=0,
		@os_run_priority=0, @subsystem=N'TSQL',
		@command=N'USE [FIMService]
ALTER FULLTEXT CATALOG [ftCatalog] REORGANIZE
',
		@database_name=N'master',
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Weekly FIM.ftCatalog rebuild',
		@enabled=1,
		@freq_type=8,
		@freq_interval=1,
		@freq_subday_type=1,
		@freq_subday_interval=0,
		@freq_relative_interval=0,
		@freq_recurrence_factor=1,
		@active_start_date=20131024,
		@active_end_date=99991231,
		@active_start_time=120000,
		@active_end_time=235959,
		@schedule_uid=N'6ba433a3-79eb-4552-ba0b-5f1cc9d5dc1b'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

FIM 2010 R2: SQL timeout on using large sets in other sets

Last week a strange error appeared in my environment. I was adding the All expected rule resources set to another set to give read permissions to operational admins, so that they can see the EREs on the provisioning tab of user resources. I built a set for these operational admins containing all resources that they should have read access to; in that set I include several object types like teams and groups, and also the above set, which has a large number of members.

Here is the set I’m trying to build:

[Screenshot: EditSetError2]

I did this directly in production, and all went fine. However, to avoid breaking my deployment through the 3 stages (dev, test, prod), I also tried to make this change in my other 2 environments, but there I got a Postprocessing Error.

I have to say that all 3 environments run on identical physical machines with nearly the same configuration (Windows Server 2008 R2, SQL Server 2008 R2 and FIM 2010 R2 SP1).

Here is the error I got in the portal:

[Screenshot: EditSetError1]

Error processing your request: The server was unwilling to perform the requested operation.
Reason: Unspecified.
Attributes:
Correlation Id: 1292580b-150f-4921-9beb-c8761476787e
Request Id:
Details: Request could not be dispatched.

I also found two errors in the FIM event log:

.Net SqlClient Data Provider: System.Data.SqlClient.SqlException: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
   at System.Data.SqlClient.SqlDataReader.get_MetaData()
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
   at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
   at System.Data.SqlClient.SqlCommand.ExecuteReader()
   at Microsoft.ResourceManagement.Data.DataAccess.DoRequestCreation(RequestType request, Guid cause, Guid requestMarker, Boolean doEvaluation, Int16 serviceId, Int16 servicePartitionId)

and this:

Requestor: urn:uuid:7fb2b853-24f0-4498-9534-4e10589723c4
Correlation Identifier: 1292580b-150f-4921-9beb-c8761476787e
Microsoft.ResourceManagement.WebServices.Exceptions.UnwillingToPerformException: Other ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at Microsoft.ResourceManagement.WebServices.RequestDispatcher.CreateRequest(UniqueIdentifier requestor, UniqueIdentifier targetIdentifier, OperationType operation, String businessJustification, List`1 requestParameters, CultureInfo locale, Boolean isChildRequest, Guid cause, Boolean doEvaluation, Nullable`1 serviceId, Nullable`1 servicePartitionId, UniqueId messageIdentifier, UniqueIdentifier requestContextIdentifier, Boolean maintenanceMode)
at Microsoft.ResourceManagement.WebServices.ResourceManagementService.Put(Message request)
--- End of inner exception stack trace ---

I found out that the request was taking more time than the SQL timeout configured in the FIM Service, which is 58 seconds by default. But why only in dev and test, and not in production?

Sadly I have no answer to this, since I did no further debugging with SQL Profiler, as the following change to the FIM Service configuration file resolved the issue.

To extend the timeout, modify the FIM Service configuration file Microsoft.ResourceManagement.Service.exe.config by adding both timeout parameters:

<resourceManagementService externalHostName="myfimservice" dataReadTimeoutInSeconds="1200" dataWriteTimeoutInSeconds="1200"/>

These parameters are also documented within the config file.

I tried the values 120 and 300 first, but the request took even longer than that, so I decided on the value 1200. After the request completed I could see that it took more than 6 minutes. In production it completed within the 58-second timeout. Very strange behavior.

However, even after extending the timeout and restarting the FIM Service, the portal will still present you a timeout, but the request will complete in the background; you can check this in the request history. It is also possible to extend the timeout on the portal side by changing the web.config, but for me that was not necessary.
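For completeness, the portal-side timeout lives in the web.config of the FIM Portal (SharePoint) site. The fragment below is only a sketch from memory; verify the element and attribute names in your own installation before changing anything (the host name reuses the placeholder from the service config example above):

```xml
<!-- FIM Portal web.config (sketch; verify attribute names on your system) -->
<resourceManagementClient resourceManagementServiceBaseAddress="http://myfimservice:5725"
                          timeoutInMilliseconds="120000" />
```

As noted, I did not need this change myself, since the background completion of the request was acceptable.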

Besides getting rid of that error, there are still 2 questions I could not answer myself:

1. Why did this affect dev and test but not production (keeping in mind the environments are identical)?

2. Why does adding a set to another set take so long, even if it has a large number of members?

If you can answer one or both of them, please leave me a comment or mail.

 

FIM 2010 R2 Hotfix Build (4.1.3479.0) released

Today, just a week after the last hotfix, Microsoft released another hotfix for Forefront Identity Manager 2010 R2, which has build number 4.1.3479.0.

You can download this hotfix directly from this link.

Here are the issues fixed, taken from the KB article:

Issues that are fixed in this update

This update fixes the following issues that were not previously documented in the Microsoft Knowledge Base.


Correct group objects with: Dynamic group has static member

Sometimes I get warnings in the group UI of my dynamic groups, telling me that “Dynamic group has static member”. While you cannot add static members to dynamic groups in the FIM Portal yourself, they can flow in through the Synchronization Engine, especially if you have equal precedence on the member attribute for groups. I need this setting because groups can be managed both in the FIM Portal and in Active Directory.

So if someone adds a user in AD to a group that is a dynamic group in the FIM Portal, this change flows into the Portal and you get this warning. I am still working on a solution so that this will not happen in the future.

I tried to create a set to catch such groups in the Portal and maybe send a notification message or clean them up with a workflow, but I had no luck creating such a set. So I ended up in PowerShell once again.

Here is my script to remove all static members from dynamic groups:

add-pssnapin FIMAutomation

$grouplist = Export-FIMConfig -only -custom "/Group[MembershipLocked = 'true' and ExplicitMember = /Person]"

If ($grouplist -eq $null) { Write-Host "There are no dynamic groups with static members" ; exit }

foreach ($group in $grouplist)
{
    $memberlist=($group.ResourceManagementObject.ResourceManagementAttributes | where {$_.AttributeName -eq "ExplicitMember"}).Values

    $importObject = New-Object Microsoft.ResourceManagement.Automation.ObjectModel.ImportObject
    $importObject.ObjectType = "Group"
    $importObject.TargetObjectIdentifier = $group.ResourceManagementObject.ObjectIdentifier
    $importObject.SourceObjectIdentifier = $group.ResourceManagementObject.ObjectIdentifier
    $importObject.State = 1

    foreach ($member in $memberlist)
    {
        $importChange = New-Object Microsoft.ResourceManagement.Automation.ObjectModel.ImportChange
        $importChange.Operation = [Microsoft.ResourceManagement.Automation.ObjectModel.ImportOperation]::Delete
        $importChange.AttributeName = "ExplicitMember"
        $importChange.AttributeValue = $member.Replace("urn:uuid:","")
        $importChange.FullyResolved = 1
        $importChange.Locale = "Invariant"
        $importObject.Changes += $importChange
    }

    $importObject | Import-FIMConfig
}

Notes:

The script currently only removes person objects from the static members; you can modify it on your own if you also have group objects in the ExplicitMember attribute of dynamic groups.
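If your dynamic groups can also contain other groups as static members, the export filter at the top of the script could be extended along these lines (an untested sketch, using the same XPath dialect as the original filter):

```powershell
# Also match dynamic groups whose static members include groups, not just persons
$grouplist = Export-FIMConfig -only -custom "/Group[MembershipLocked = 'true' and (ExplicitMember = /Person or ExplicitMember = /Group)]"
```

The rest of the script should work unchanged, since it deletes whatever GUIDs it finds in ExplicitMember.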

 

FIM 2010 R2 Hotfix Build (4.1.3469.0) released

On Oct. 6, 2013, Microsoft released a new hotfix for FIM 2010 R2, which is build 4.1.3469.0.
You can find the documentation and download link in KB2877254.

Issue list taken from the original KB article:

Issues that are fixed or features that are added in this update

This update fixes the following issues or adds the following features that were not previously documented in the Microsoft Knowledge Base.

FIM 2010 R2: Searching for Request Details in msidmCompositeType

With Forefront Identity Manager 2010 R2, Microsoft added some performance modifications to the Service and Portal. By default, exports from the FIM Management Agent are batched (aggregated) up to 1000 changes. So in the request details you may see only an Update to msidmCompositeType ‘’ Request, which includes the changes of possibly multiple objects in its RequestParameter attribute.

Craig has also posted a short description of how to deal with this object type.

While this is very good for export performance, it is very bad for searching for changes in the request log in the portal. In my environment I even see requests that seem to be relevant to only one object but hold changes to multiple objects. In this particular case the request’s display name is Update to msidmCompositeType ‘myuser’ Request, but in the RequestParameter attribute I also found changes to other objects.

Since you cannot actually search the request log for changes to a specific object, especially with the new batch object type, I decided to create a little script for this, in PowerShell of course.

The script is based on the FIMAutomation snap-in and also on the FIM PowerShell Module from Craig. Because I missed some attributes in the output of the Get-FIMRequestParameter cmdlet, I modified it a little bit.

Given an object type, an attribute and a value to search for, the script retrieves all changes to the matching object by searching all direct/single requests and also all batched updates.

To use this script you must, in addition to installing the FIMPowerShellModule, modify the FIMPowerShellModule.psm1 file as shown below. I only added the Mode and Target properties to the output of the Get-FIMRequestParameter function.

$RequestParameter | ForEach-Object{
    New-Object PSObject -Property @{
        Mode = ([xml]$_).RequestParameter.Mode
        Target = ([xml]$_).RequestParameter.Target
        PropertyName = ([xml]$_).RequestParameter.PropertyName
        Value = ([xml]$_).RequestParameter.Value.'#text'
        Operation = ([xml]$_).RequestParameter.Operation
     } |
     Write-Output
}

Here is my PowerShell script; see also the comments inside the code:

param($objectType, $attribute, $searchValue)
#objectType = The objectType of the target object you are trying to get requests for.
#attribute = The attribute of the target object you want to use for searching
#searchValue = The value of the attribute of the target object you are searching requests for
#
#ex. Get-FIMRequestDetails.ps1 -objectType "Person" -attribute "AccountName" -searchValue "pstapf"
#
#This gets all requests matching the given target object.

# Load FIMAutomation SnapIn and FIMPowershellModule (http://fimpowershellmodule.codeplex.com)
if(@(get-pssnapin | where-object {$_.Name -eq "FIMAutomation"} ).count -eq 0) {add-pssnapin FIMAutomation}
Import-Module C:\Windows\System32\WindowsPowerShell\V1.0\Modules\FIM\FIMPowerShellModule.psm1

# Check if the object you are searching requests for exists in portal and get its GUID
$filter = "/" + $objectType + "[" + $attribute + "='" + $searchValue + "']"
$searchObject = Export-FIMConfig -OnlyBaseResources -CustomConfig $filter

If ($searchObject -ne $null)
{
    $searchObjectGuid = $searchObject.ResourceManagementObject.ObjectIdentifier.Replace("urn:uuid:","")
    Write-Host "Object found:" $searchValue " with GUID:" $searchObjectGuid
}
else
{
    Write-Host "The object you are searching for does not exist in the FIM Portal"
    Exit
}

# Get the aggregated requests of the object you search for
$export=@()
$filter = "/Request[Target=/msidmCompositeType[/msidmElement=/" + $objectType + "[" + $attribute + "='" + $searchValue + "']]]"
$export = Export-FIMConfig -OnlyBaseResources -CustomConfig $filter

# Get the single requests of the object you search for
$filter = "/Request[Target=/" + $objectType + "[" + $attribute + "='" + $searchValue + "']]"
$export += Export-FIMConfig -OnlyBaseResources -CustomConfig $filter
$requestlist = $export | Convert-FimExportToPSObject | Sort-Object msidmCompletedTime

# Get the RequestParameter of the object you searched for from all requests and add some request details
If ($requestlist.count -gt 0)
{
    $resultItems = @()
    foreach ($requestItem in $requestList)
    {
        $resultItems += $requestItem | Get-FimRequestParameter | where { $_.Target -eq $searchObjectGuid } | ForEach-Object {
        New-Object PSObject -Property @{
            Target = $_.Target
            Operation = $_.Operation
            Mode = $_.Mode
            Attribute = $_.PropertyName
            Value = $_.Value
            RequestName = $requestItem.DisplayName
            RequestGuid = $requestItem.ObjectID.Replace("urn:uuid:","")
            CompleteTime = $requestItem.msidmCompletedTime
            Status = $requestItem.RequestStatus
            }
        }
    }
    $resultItems

}
else
{
    Write-Host "No request found for the object you searched for."
}

Usage notes:

The script outputs PSObjects, so you can pipe the output to many other cmdlets, like Out-GridView or ConvertTo-Html for example. In addition, you can filter the output by date on your own.

Out-GridView is a very neat cmdlet for this, as you can show/hide columns and also do some basic filtering; try a Status:Completed filter for example.

[Screenshot: OutGridView]

Here is also some output with ConvertTo-Html with some basic CSS:

[Screenshot: OutHtml]
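To give an idea of how this composes, here are the two pipelines I would use. This is a sketch: Get-FIMRequestDetails.ps1 is assumed to be the script above saved under that name, and the CSS header is just a placeholder example:

```powershell
# Interactive view with show/hide columns and basic filtering
.\Get-FIMRequestDetails.ps1 -objectType "Person" -attribute "AccountName" -searchValue "pstapf" |
    Out-GridView -Title "Requests for pstapf"

# Static HTML report with some minimal CSS (style block is just an example)
$css = "<style>table { border-collapse: collapse } td,th { border: 1px solid #ccc; padding: 4px }</style>"
.\Get-FIMRequestDetails.ps1 -objectType "Person" -attribute "AccountName" -searchValue "pstapf" |
    ConvertTo-Html -Head $css -Property CompleteTime,RequestName,Attribute,Operation,Value,Status |
    Set-Content .\RequestDetails.html
```

The -Property list on ConvertTo-Html also controls the column order of the report, which the plain PSObject output does not guarantee.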

FIM 2010: Configuration Deployment (Part 2: Configuration Objects)

As you hopefully read in Part 1, we often have to deal with deploying configuration from one stage to another. Deploying the schema is the easier part, as it mostly goes 1:1 from dev to test and finally production, but I have to deal with environment-specific differences in configuration objects like Sets, Workflows or MPRs.

In synchronization rules, for example, I often use fixed strings, like domains, or construct DNs for provisioning objects to LDAP directories, among many other attributes. I also have some sets which must be different between the environments, as in the testing stage more people have permission to do things than in production, so I want to manage some of these sets manually and don’t want them to be deployed.

So, as in the schema deployment script, I put all the example scripts together and start the exports in parallel using PowerShell jobs. In addition, after joining and calculating the delta configuration, I do a search and replace of the fixed string values and even delete some objects which I don’t want to deploy. These replacements and deletions are defined in a configuration XML file.

So here is the script I use to deploy my configuration changes from one environment to the other:
(This script is currently designed to run on the target system.)

param([switch]$AllLocales)
# use -AllLocales parameter to deploy schema data incl. localizations

#### Configuration Section ####
$sourceFilename="D:\Deploy\Policy_Source.xml"
$sourceServer="fim.source.com"
$destFilename="D:\Deploy\Policy_Target.xml"
$destServer="fim.target.com"

$tempDeltaFile = "D:\Deploy\Policy_DeltaTemp.xml"
$finalDeltaFile = "D:\Deploy\Policy_DeployData.xml"
$undoneFile = "D:\Deploy\undone.xml"

$config=[XML](Get-Content .\DeployReplaceConfig.xml)

#### Cleanup old data files ####
remove-item $sourceFilename -ErrorAction:SilentlyContinue
remove-item $destFilename -ErrorAction:SilentlyContinue
remove-item $tempDeltaFile -ErrorAction:SilentlyContinue
remove-item $finalDeltaFile -ErrorAction:SilentlyContinue
remove-item $undoneFile -ErrorAction:SilentlyContinue

#### Load FIMAutomation cmdlets ####
if(@(get-pssnapin | where-object {$_.Name -eq "FIMAutomation"} ).count -eq 0) {add-pssnapin FIMAutomation}

#### Function Definitions ####
function CommitPortalChanges($deployFile)
{
	$imports = ConvertTo-FIMResource -file $deployFile
	if($imports -eq $null)
	  {
		throw (new-object NullReferenceException -ArgumentList "Changes is null.  Check that the changes file has data.")
	  }
	Write-Host "Importing changes into T&A environment"
	$undoneImports = $imports | Import-FIMConfig
	if($undoneImports -eq $null)
	  {
		Write-Host "Import complete."
	  }
	else
	  {
		Write-Host
		Write-Host "There were " $undoneImports.Count " uncompleted imports."
		$undoneImports | ConvertFrom-FIMResource -file $undoneFile
		Write-Host
		Write-Host "Please see the documentation on how to resolve the issues."
	  }
}

function SyncPortalConfig($sourceFile, $destFile, $tempFile)
{
	$joinrules = @{
		# === Customer-dependent join rules ===
		# Person and Group objects are not configuration and will not be migrated.
		# However, some configuration objects like Sets may refer to these objects.
		# For this reason, we need to know how to join Person objects between
		# systems so that configuration objects have the same semantic meaning.
		Person = "AccountName";
		Group = "DisplayName";

		# === Policy configuration ===
		# Sets, MPRs, Workflow Definitions, and so on are best identified by DisplayName.
		# DisplayName is set as the default join criteria and applied to all object
		# types not listed here.
		#"ma-data" = "Description";

		# === Schema configuration ===
		# This is based on the system names of attributes and objects
		# Notice that BindingDescription is joined using its reference attributes.
		ObjectTypeDescription = "Name";
		AttributeTypeDescription = "Name";
		BindingDescription = "BoundObjectType BoundAttributeType";

		# === Portal configuration ===
		ConstantSpecifier = "BoundObjectType BoundAttributeType ConstantValueKey";
		SearchScopeConfiguration = "DisplayName SearchScopeResultObjectType Order";
		ObjectVisualizationConfiguration = "DisplayName AppliesToCreate AppliesToEdit AppliesToView"
	}

	$destination = ConvertTo-FIMResource -file $destFile
	if($destination -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "destination Schema is null.  Check that the destination file has data.") }

	Write-Host "Loaded destination file: " $destFile " with " $destination.Count " objects."

	$source = ConvertTo-FIMResource -file $sourceFile
	if($source -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "source Schema is null.  Check that the source file has data.") }

	Write-Host "Loaded source file: " $sourceFile " with " $source.Count " objects."
	Write-Host
	Write-Host "Executing join between source and destination."
	Write-Host
	$matches = Join-FIMConfig -source $source -target $destination -join $joinrules -defaultJoin DisplayName
	if($matches -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "Matches is null.  Check that the join succeeded and join criteria is correct for your environment.") }
	Write-Host "Executing compare between matched objects in source and destination."
	$changes = $matches | Compare-FIMConfig
	if($changes -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "Changes is null.  Check that no errors occurred while generating changes.") }
	Write-Host "Identified " $changes.Count " changes to apply to destination."
	$changes | ConvertFrom-FIMResource -file $tempFile
	Write-Host "Sync complete."
}

$functions = {
	function GetPortalConfig($filename, $serverFQDN, $creds, $AllLocales)
	{
		if(@(get-pssnapin | where-object {$_.Name -eq "FIMAutomation"} ).count -eq 0) {add-pssnapin FIMAutomation}
		$uri="http://" + $serverFQDN + ":5725/ResourceManagementService"
		if ($AllLocales -eq $true)
			{ $policy = Export-FIMConfig -uri $uri -credential $creds -policyConfig -portalConfig -MessageSize 9999999 –AllLocales }
		else
			{ $policy = Export-FIMConfig -uri $uri -credential $creds -policyConfig -portalConfig -MessageSize 9999999 }
		$policy | ConvertFrom-FIMResource -file $filename
	}
}

#### Main Script ####
$creds=Get-Credential -message "Enter credentials for source FIM"
$myargs=@($sourceFileName, $sourceServer, $creds, $AllLocales.IsPresent)
start-job -name "SourceFIM" -init $functions -script { GetPortalConfig $args[0] $args[1] $args[2] $args[3] } -ArgumentList $myargs

$creds=Get-Credential -message "Enter credentials for destination FIM"
$myargs=@($destFileName, $destServer, $creds, $AllLocales.IsPresent)
start-job -name "DestFIM" -init $functions -script { GetPortalConfig $args[0] $args[1] $args[2] $args[3] } -ArgumentList $myargs

write-host "Waiting for Policy Export to complete..."
get-job | wait-job

Write-Host "Exports complete: Starting Policy compare..."
Write-Host
SyncPortalConfig $sourceFilename $destFilename $tempDeltaFile

Write-Host "`nReplace configuration data with destination values"
$portalData=(get-content $tempDeltaFile)

foreach ($ReplaceConfig in $config.DeployConfig.ReplaceData)
{
	$newPortalData=@()
	foreach ($CurrentLine in $portalData)
	{
		$newPortalData+=$CurrentLine.replace($ReplaceConfig.SearchString, $ReplaceConfig.ReplaceString)
	}
	$portalData=$newPortalData
}

$portalData | set-content $finalDeltaFile

Write-Host "`nRemoving excluded Objects from deploy data"
$deployXML=[XML](get-content $finalDeltaFile)

foreach ($ReplaceConfig in $config.DeployConfig.IgnoreData)
{
	$deleteObjects=$deployXML.Results.ImportObject | where { $_.TargetObjectIdentifier -eq "urn:uuid:"+$ReplaceConfig.SearchGUID }
	foreach ($delObj in $deleteObjects)
	{
		write-host "Deleting " $delObj.ObjectType ": " $delObj.AnchorPairs.JoinPair.AttributeValue
		try
		{
			$deployXML.Results.RemoveChild($delObj) | out-null
		}
		catch [System.Management.Automation.MethodInvocationException]
		{
			write-host "Ignoring " $delObj.ObjectType ": " $delObj.AnchorPairs.JoinPair.AttributeValue " not in DeploymentData"
		}
	}
}

$deployXML.Save($finalDeltaFile)
Write-Host "`n`nFinal file for deployment created: " $finalDeltaFile

$input=Read-Host "Do you want commit changes to destination FIM ? (y/n)"
if ($input -eq "y")
{
	CommitPortalChanges $finalDeltaFile
}

This is the configuration XML file. In IgnoreData, the Name attribute is only for showing which object this is; all searching is done on the GUID, which is the GUID of the target system.

<?xml version="1.0" encoding="utf-8"?>
<DeployConfig>
	<ReplaceData Name="NetbiosDomain">
		<SearchString>SOURCE-DOM</SearchString>
		<ReplaceString>TARGET-DOM</ReplaceString>
	</ReplaceData>
	<ReplaceData Name="DomainSID">
		<SearchString>S-1-5-21-xxxxxxxxx-xxxxxxxxxx-xxxxxxxxx</SearchString>
		<ReplaceString>S-1-5-21-xxxxxxxxxx-xxxxxxxxx-xxxxxxxxx</ReplaceString>
	</ReplaceData>
	<ReplaceData Name="LDAPDomain1">
		<SearchString>DC=source,DC=com</SearchString>
		<ReplaceString>DC=target,DC=com</ReplaceString>
	</ReplaceData>
	<IgnoreData Name="Set: All GroupAdmins">
		<SearchGUID>95833928-23e0-4e3d-bcc0-b824f3a8e123</SearchGUID>
	</IgnoreData>
	<IgnoreData Name="Set: All UserAdmin">
		<SearchGUID>28279e59-14b7-4b89-9d8a-b4c666a65301</SearchGUID>
	</IgnoreData>
</DeployConfig>

Because the exports run as PowerShell background jobs, if you get any errors or exceptions you can retrieve the output of both jobs with one of the following commands:

Receive-Job <JobNumber>

or

Receive-Job -Name <JobName>
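For example, to inspect the two export jobs started by the script above (the job names match those used in the script; -Keep leaves the output in the job so you can read it again later):

```powershell
# Retrieve (and keep) the output of both export jobs for troubleshooting
Receive-Job -Name "SourceFIM" -Keep
Receive-Job -Name "DestFIM" -Keep

# Or inspect the output of all background jobs at once
Get-Job | Receive-Job -Keep
```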

The script also creates the undone.xml file like the original join script, so you can use the original ResumeUndoneImport.ps1 script to retry importing the unprocessed changes.
In addition, I have built a little helper script that generates the IgnoreData XML for objects I don't want deployed and copies it directly to the clipboard, so you can add a lot of objects in a short time.
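A minimal sketch of what such a resume could look like, using the same FIMAutomation cmdlets the deployment script already uses (the undone.xml path is an assumption; adjust it to your configuration section):

```powershell
# Re-import the changes that could not be processed in the first run.
# undone.xml is the file written by CommitPortalChanges when imports remain.
$undoneFile = "D:\Deploy\undone.xml"   # assumed path, adjust as needed
$undone = ConvertTo-FIMResource -file $undoneFile
if ($undone -eq $null) { throw "No undone changes found in $undoneFile." }

# Any imports that fail again are written back to undone.xml for another retry
$stillUndone = $undone | Import-FIMConfig
if ($stillUndone -eq $null)
	{ Write-Host "All remaining changes imported." }
else
	{ $stillUndone | ConvertFrom-FIMResource -file $undoneFile }
```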

PARAM([string]$DisplayName)
set-variable -name URI -value "http://localhost:5725/resourcemanagementservice" -option constant

Write-Host "- Reading Set Information from Portal"
if(@(get-pssnapin | where-object {$_.Name -eq "FIMAutomation"} ).count -eq 0)
{add-pssnapin FIMAutomation}

$exportObject = export-fimconfig -uri $URI `
                               -onlyBaseResources `
                               -customconfig ("/Set[DisplayName='$DisplayName']")
if($exportObject -eq $null) {throw "Cannot find a set by that name"}
$ResourceID = $exportObject.ResourceManagementObject.ResourceManagementAttributes | `
                Where-Object {$_.AttributeName -eq "ObjectID"}

Write-Host "DisplayName: $DisplayName has ObjectID: " $ResourceID.Value
Write-Host
Write-Host "The following output is added to your clipboard:"
Write-Host
$objectGUID=$ResourceID.Value.Replace("urn:uuid:","")
$output=@()
$output+="`t<IgnoreData Name=`"Set: $DisplayName`">"
$output+="`t`t<SearchGUID>$objectGUID</SearchGUID>"
$output+="`t</IgnoreData>"
$output | clip
$output
Write-Host

Notes:

This script currently has one small issue: objects touched by the string replacements are deployed every time you run the script, because the delta calculation sees differences between the environments and the replacement is done after the join and compare. That's OK; it still works fine.

I will update the script in the near future to check whether a replacement has already been applied, to avoid this, but I haven't found a way to do so yet.

FIM 2010: Configuration Deployment (Part 1: Schema)

My main customer has an environment with 3 stages (Development, Test, Production), which is a perfect setup to work with. But deployment from one stage to another is not very neat with the default tools shipped with Forefront Identity Manager 2010, as (you all know it) there are only a few PowerShell scripts to work with.

I've searched a lot of blogs, as well as the TechNet forums and wiki, but it seems nobody has ever written an article about this part of working on a FIM solution (or possibly I'm just unable to find one 😉).

So for schema deployment you have to export the schema with PowerShell on both stages (source and target), copy the files together, and use another PowerShell script to join them and calculate the delta. Finally, you import the delta with yet another script.

As I have to deal with a multi-language environment, the -AllLocales parameter slows things down; in addition, I wanted to get rid of jumping between RDP sessions to do all this.

So I modified the default scripts, put them all together in one, and used PowerShell jobs to run the exports on both environments in parallel (using jobs can be a little tricky). After that, the delta is calculated and you are asked whether you want to import the changes into FIM.

So here is the script I use to deploy my schema changes from one environment to the other.
(This script is currently designed to run on the target system.)

param([switch]$AllLocales)
# use -AllLocales parameter to deploy schema data incl. localizations

#### Configuration Section ####
$sourceFilename="D:\Deploy\Schema_Dev.xml"
$sourceServer="fim.dev.domain.com"
$destFilename="D:\Deploy\Schema_Prod.xml"
$destServer="fim.prod.domain.com"

$deployFilename="D:\Deploy\Schema_DeployData.xml"
$undoneFile = "D:\Deploy\undone.xml"

#### Cleanup old data files ####
remove-item $sourceFilename -ErrorAction:SilentlyContinue
remove-item $destFilename -ErrorAction:SilentlyContinue
remove-item $deployFilename -ErrorAction:SilentlyContinue
remove-item $undoneFile -ErrorAction:SilentlyContinue

#### Define Functions ####
function CommitPortalChanges($deployFile)
{
	$imports = ConvertTo-FIMResource -file $deployFile
	if($imports -eq $null)
	  {
		throw (new-object NullReferenceException -ArgumentList "Changes is null.  Check that the changes file has data.")
	  }
	Write-Host "Importing changes into T&A environment"
	$undoneImports = $imports | Import-FIMConfig
	if($undoneImports -eq $null)
	  {
		Write-Host "Import complete."
	  }
	else
	  {
		Write-Host
		Write-Host "There were " $undoneImports.Count " uncompleted imports."
		$undoneImports | ConvertFrom-FIMResource -file $undoneFile
		Write-Host
		Write-Host "Please see the documentation on how to resolve the issues."
	  }
}

function SyncSchema($sourceFile, $destFile, $deployFile)
{
	$joinrules = @{
		# === Schema configuration ===
		# This is based on the system names of attributes and objects
		# Notice that BindingDescription is joined using its reference attributes.
		ObjectTypeDescription = "Name";
		AttributeTypeDescription = "Name";
		BindingDescription = "BoundObjectType BoundAttributeType";
	}

	if(@(get-pssnapin | where-object {$_.Name -eq "FIMAutomation"} ).count -eq 0) {add-pssnapin FIMAutomation}

	$destination = ConvertTo-FIMResource -file $destFile
	if($destination -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "Destination Schema is null.  Check that the destination file has data.") }

	Write-Host "Loaded destination file: " $destFile " with "  $destination.Count " objects."

	$source = ConvertTo-FIMResource -file $sourceFile
	if($source -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "Source Schema is null.  Check that the source file has data.") }

	Write-Host "Loaded source file: " $sourceFile " with " $source.Count " objects."
	Write-Host
	Write-Host "Executing join between source and destination."
	$matches = Join-FIMConfig -source $source -target $destination -join $joinrules -defaultJoin DisplayName
	if($matches -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "Matches is null.  Check that the join succeeded and join criteria is correct for your environment.") }
	Write-Host "Executing compare between matched objects in source and destination."
	$changes = $matches | Compare-FIMConfig
	if($changes -eq $null)
		{ throw (new-object NullReferenceException -ArgumentList "Changes is null.  Check that no errors occurred while generating changes.") }
	Write-Host
	Write-Host "Identified " $changes.Count " changes to apply to destination."
	Write-Host "Saving changes to " $deployFile "."
	$changes | ConvertFrom-FIMResource -file $deployFile
	Write-Host
	Write-Host "Sync complete. The next step is to commit the changes using CommitChanges.ps1."
}

$functions = {
	function GetSchema($filename, $serverFQDN, $creds, $AllLocales)
	{
		if(@(get-pssnapin | where-object {$_.Name -eq "FIMAutomation"} ).count -eq 0) {add-pssnapin FIMAutomation}
		$uri="http://" + $serverFQDN + ":5725/ResourceManagementService"
		if ($AllLocales -eq $true)
			{ $schema = Export-FIMConfig -uri $uri -credential $creds -allLocales -schemaConfig -customConfig "/SynchronizationFilter" }
		else
			{ $schema = Export-FIMConfig -uri $uri -credential $creds -schemaConfig -customConfig "/SynchronizationFilter" }
		$schema | ConvertFrom-FIMResource -file $filename
	}
}

#### Main Script ####
$creds=Get-Credential -message "Enter credentials for source FIM"
$myargs=@($sourceFileName, $sourceServer, $creds, $AllLocales.IsPresent)
start-job -name "SourceFIM" -init $functions -script { GetSchema $args[0] $args[1] $args[2] $args[3] } -ArgumentList $myargs

$creds=Get-Credential -message "Enter credentials for destination FIM"
$myargs=@($destFileName, $destServer, $creds, $AllLocales.IsPresent)
start-job -name "DestFIM" -init $functions -script { GetSchema $args[0] $args[1] $args[2] $args[3] } -ArgumentList $myargs

Write-Host "Waiting for Schema Export to complete..."
Write-Host
get-job | wait-job

Write-Host "Exports complete: Starting Schema compare..."
SyncSchema $sourceFilename $destFilename $deployFilename

$input=Read-Host "Do you want to commit changes to destination FIM? (y/n)"
if ($input -eq "y")
{
	CommitPortalChanges $deployFilename
}

Because the exports run as PowerShell background jobs, if you get any errors or exceptions you can retrieve the output of both jobs with one of the following commands:

Receive-Job <JobNumber>

or

Receive-Job -Name <JobName>

The script also creates the undone.xml file like the original join script, so you can use the original ResumeUndoneImport.ps1 script to retry importing the unprocessed changes.

So this was the easiest part. Next time I will take a look at deploying the policies (Sets, MPRs, Workflows, and so on), where I have to deal with values that may differ between stages (e.g. NetBIOS domain names or parts of the DN) and also with some objects whose changes I don't want deployed, as they have to be different in each stage.

So come back in a few days, or subscribe to the RSS feed, so you don't miss Part 2: Policy Deployment.

Speed up FIM 2010 R2 SQL performance by rebuild/reorganize indexes

Recently a user in the TechNet forum had performance issues with SQL MAs during syncs; see this thread. This can happen over time, mostly in large environments or environments with many changes.

The problem was index fragmentation, and the solution was to rebuild/reorganize those indexes as documented in the FIM Deployment Guide.

I follow the best practices in most of my projects, including for the SQL Server deployment, but I didn't have it in mind to check the indexes regularly or when performance issues occur. So keep in mind (as I do now) that reorganizing or rebuilding the SQL indexes may solve your problem.
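To get a quick picture of index fragmentation before deciding between reorganize and rebuild, you can query the standard sys.dm_db_index_physical_stats DMV. The following is a sketch run from PowerShell via Invoke-Sqlcmd (SERVERNAME is a placeholder; the 5%/30% thresholds follow the common SQL Server maintenance guidance, adjust them to your needs):

```powershell
# Report fragmentation of all indexes in the FIMSynchronizationService database.
# Requires the SqlServer (or legacy SQLPS) module for Invoke-Sqlcmd.
$query = @"
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'OK' END AS Recommendation
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE i.name IS NOT NULL
ORDER BY ips.avg_fragmentation_in_percent DESC
"@
Invoke-Sqlcmd -ServerInstance "SERVERNAME" -Database "FIMSynchronizationService" -Query $query
```

The same query works against the FIMService database; just change the -Database parameter.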

How-To: Manage Group Membership from the User UI in FIM 2010 R2

So, let's start this blog with a TechNet Wiki article I wrote a couple of days ago. It is the solution for one of my customers, who needed helpdesk users to be able to manage group memberships the way it is done in ADUC.

One of the missing features of the FIM Portal is that you cannot change the group membership of users from the user UI out of the box, the way it is done in AD with the memberOf attribute. But with the PowerShell activity, some small scripts, and RCDC editing, you can build a really cool solution for that.

Technet Wiki: FIM 2010 R2 HowTo Manage Group Membership from User UI