Sizing your ConfigMgr Primary Site Server

Getting the right server configuration for your ConfigMgr environment is far from an exact science, but with a few simple tests you can at least get an idea of whether you're dead wrong or still on track… As you probably know, ConfigMgr Current Branch requires a lot more CPU and memory than ConfigMgr 2007 ever did, but the key bottleneck is most often disk I/O.

Update 04/27/2023: Replaced the old SQLIO.exe tool with DiskSpd.exe

Reaching out to the community

The information later in this post is from my own experience from various customer engagements over the last few years, but I would love to hear from you. I'm hoping to gather additional real-world numbers here, providing a few more samples that could help others size their ConfigMgr servers better. If you are willing, please contact me on LinkedIn and we'll take it from there.

Here is what I'm asking for:

  • Notify the help desk that ConfigMgr will be unavailable for about 30 minutes.
  • Notify the storage folks that you plan to put a really high load on their SAN for about 15 minutes (ask nicely, and/or wait for off-peak hours).
  • Stop all ConfigMgr/SQL services on the site server, and if possible disconnect the server from the network (see the sketch after this list).
  • Run DiskSpd.exe (use the PowerShell script below, which exports the results to a file).
  • Optional: Send me the results file, and some info about your site server(s): VM configuration (CPU/memory/disk), SAN hardware, etc. Together we can provide some real-world configurations that other admins can learn from…
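
As a rough sketch, stopping the core services could look something like the below. The service names assume SQL Server running locally as the default instance, so adjust as needed for your environment:

# Stop the ConfigMgr services first, then SQL Server
# (MSSQLSERVER assumes a default SQL instance; use MSSQL$InstanceName for named instances)
$Services = "SMS_EXECUTIVE", "SMS_SITE_COMPONENT_MANAGER", "SMS_SITE_VSS_WRITER", "MSSQLSERVER"
foreach ($Service in $Services) {
    if (Get-Service -Name $Service -ErrorAction SilentlyContinue) {
        Write-Host "Stopping $Service"
        Stop-Service -Name $Service -Force
    }
}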

Thanks / Johan

Sample #1 – Single Primary Site Server, supporting 12,000 clients

For this scenario, I would start off with a single VM, with SQL Server running locally on the VM, and the VM configured with 8 vCPUs and 64 GB of RAM.
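
If the site server happens to be a Hyper-V VM, creating it could look something like this quick sketch (the VM name and paths are just examples, and VMware or physical hardware works just as well):

# Create a generation 2 VM with 8 vCPUs and 64 GB of static memory (Hyper-V example)
New-VM -Name "CM01" -Generation 2 -MemoryStartupBytes 64GB -NewVHDPath "D:\VMs\CM01\CM01_OS.vhdx" -NewVHDSizeBytes 120GB
Set-VM -VMName "CM01" -StaticMemory
Set-VMProcessor -VMName "CM01" -Count 8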

But before installing SQL Server, I request a few disk volumes from the storage team to determine what the final disk layout will be. I use DiskSpd.exe from Microsoft to get a rough idea of the performance I get from each volume, and after gathering and reviewing the DiskSpd.exe results I request the final disk layout from the storage team.

The critical thing when using DiskSpd.exe is to have enough data to test with; a 20-50 GB file is enough for most tests. Also make sure to run the tests for at least a few minutes.

Note: If using other benchmarking tools that require pre-created files, please do not use FSUtil, because it will just create an empty file, which the SAN cache may suck into RAM immediately, and your test results will be off the charts. Create a "real" file with content: generate a giant ISO file, a large WinRAR archive, anything you can think of, as long as the file is full of data. You can also script it, as shown below.
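
Here is a minimal sketch that fills a file with random, incompressible data (the path and size are just examples):

# Create a 20 GB file filled with random (incompressible) data, written 64 MB at a time,
# so SAN caching, deduplication, or compression cannot skew the results
$FilePath = "E:\Testfile.dat" # Adjust to the volume under test
$SizeGB = 20
$ChunkSize = 64MB
$Rng = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$Buffer = New-Object byte[] $ChunkSize
$Stream = [System.IO.File]::OpenWrite($FilePath)
for ($i = 0; $i -lt ($SizeGB * 1GB / $ChunkSize); $i++) {
    $Rng.GetBytes($Buffer)
    $Stream.Write($Buffer, 0, $Buffer.Length)
}
$Stream.Close()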

Next step, benchmarking

Then run some DiskSpd.exe tests with various block sizes. The script below is a good starting point with a 64 KB block size; you can also try an 8 KB block size (see the example after the script).

#
# Get Diskspd.exe at https://aka.ms/diskspd
#

# Set Variables
$DiskSpdPath = "E:\Demo\Diskspd"
$ExportPath = "E:\Demo\Diskspd"
$ExportFile = "$ExportPath\$($Env:ComputerName)_diskspd_results.txt"

# Get all local disk volumes (DriveType 3 = local fixed disk)
$Volumes = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType = 3"
[int]$TestFileSize = 20 # Test file size in GB
$FirstRunDuration = 60 # Seconds
$SecondRunDuration = 300 # Seconds

# Copy Diskspd.exe to C:\Windows\Temp
Copy-Item $DiskSpdPath\diskspd.exe "C:\Windows\Temp"
Set-Location "C:\Windows\Temp"

# Remove previous results
If (Test-Path $ExportFile){ Remove-Item -Path $ExportFile -Force }

foreach ($Volume in $Volumes){
    $Testfile = "$($Volume.DeviceID)\Testfile.dat"

    # Check for free space. Minimum is the size specified in TestFileSize plus 10 GB
    $NeededFreeSpace = $TestFileSize + 10 # GigaBytes
    $FreeSpace = [math]::Round($Volume.FreeSpace / 1GB)
    Write-Host "Checking free space on $($Volume.DeviceID). Minimum is $NeededFreeSpace GB"

    if($FreeSpace -lt $NeededFreeSpace){
    
        Write-Warning "Oops, you need at least $NeededFreeSpace GB of free disk space"
        Write-Warning "Available free space on $($Volume.DeviceID) is $FreeSpace GB"
        Write-Warning "Skipping this volume..."
    }
    Else{
        # All good, start the tests. DiskSpd expects the file size in the format 20G
        $TestFileSizeInDiskspdFormat = "$($TestFileSize)G"

        # Read test (-r = random I/O, -w0 = 0% writes i.e. 100% read, -t8 = 8 threads,
        # -o8 = 8 outstanding I/Os per thread, -b64K = 64 KB block size, -c = create test
        # file of the given size, -d = duration in seconds, -h = disable caching, -L = measure latency)
        Write-Host "Starting first 100% read test on $($Volume.DeviceID)"
        $Result = .\DiskSpd.exe -r -w0 -t8 -o8 -b64K -c"$TestFileSizeInDiskspdFormat" -d"$FirstRunDuration" -h -L $Testfile

        # Format and output result. The DiskSpd "total:" summary line has the columns
        # bytes | I/Os | MiB/s | IOPS | AvgLat, and the "avg." line comes from the CPU usage table
        foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
        foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
        $mbps = $total.Split("|")[2].Trim()
        $iops = $total.Split("|")[3].Trim()
        $latency = $total.Split("|")[4].Trim()
        $cpu = $avg.Split("|")[1].Trim()

        $TestName = "Read Test 01"
        $Hash = New-Object System.Collections.Specialized.OrderedDictionary
        $Hash.Add("ComputerName",$Env:ComputerName)
        $Hash.Add("Test",$TestName)
        $Hash.Add("Disk",$Volume.DeviceID)
        $Hash.Add("IOPS",$iops)
        $Hash.Add("Mbps",$mbps)
        $Hash.Add("Latency",$latency)
        $Hash.Add("CPU",$cpu)

        $CSVObject = New-Object -TypeName psobject -Property $Hash
        $CSVObject | Export-Csv -Path $ExportFile -NoTypeInformation -Append

        Start-Sleep -Seconds 5
        Write-Host "Starting second 100% read test on $($Volume.DeviceID)"
        $Result = .\DiskSpd.exe -r -w0 -t8 -o8 -b64K -c"$TestFileSizeInDiskspdFormat" -d"$SecondRunDuration" -h -L $Testfile

        # Format and output result
        foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
        foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
        $mbps = $total.Split("|")[2].Trim()
        $iops = $total.Split("|")[3].Trim()
        $latency = $total.Split("|")[4].Trim()
        $cpu = $avg.Split("|")[1].Trim()

        $TestName = "Read Test 02"
        $Hash = New-Object System.Collections.Specialized.OrderedDictionary
        $Hash.Add("ComputerName",$Env:ComputerName)
        $Hash.Add("Test",$TestName)
        $Hash.Add("Disk",$Volume.DeviceID)
        $Hash.Add("IOPS",$iops)
        $Hash.Add("Mbps",$mbps)
        $Hash.Add("Latency",$latency)
        $Hash.Add("CPU",$cpu)

        $CSVObject = New-Object -TypeName psobject -Property $Hash
        $CSVObject | Export-Csv -Path $ExportFile -NoTypeInformation -Append

        # Write test (-w100 = 100% writes, still random I/O because of -r)
        Write-Host "Starting first 100% write test on $($Volume.DeviceID)"
        $Result = .\DiskSpd.exe -r -w100 -t8 -o8 -b64K -c"$TestFileSizeInDiskspdFormat" -d"$FirstRunDuration" -h -L $Testfile

        # Format and output result
        foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
        foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
        $mbps = $total.Split("|")[2].Trim()
        $iops = $total.Split("|")[3].Trim()
        $latency = $total.Split("|")[4].Trim()
        $cpu = $avg.Split("|")[1].Trim()

        $TestName = "Write Test 01"
        $Hash = New-Object System.Collections.Specialized.OrderedDictionary
        $Hash.Add("ComputerName",$Env:ComputerName)
        $Hash.Add("Test",$TestName)
        $Hash.Add("Disk",$Volume.DeviceID)
        $Hash.Add("IOPS",$iops)
        $Hash.Add("Mbps",$mbps)
        $Hash.Add("Latency",$latency)
        $Hash.Add("CPU",$cpu)

        $CSVObject = New-Object -TypeName psobject -Property $Hash
        $CSVObject | Export-Csv -Path $ExportFile -NoTypeInformation -Append

        Start-Sleep -Seconds 5
        Write-Host "Starting second 100% write test on $($Volume.DeviceID)"
        $Result = .\DiskSpd.exe -r -w100 -t8 -o8 -b64K -c"$TestFileSizeInDiskspdFormat" -d"$SecondRunDuration" -h -L $Testfile

        # Format and output result
        foreach ($line in $result) {if ($line -like "total:*") { $total=$line; break } }
        foreach ($line in $result) {if ($line -like "avg.*") { $avg=$line; break } }
        $mbps = $total.Split("|")[2].Trim()
        $iops = $total.Split("|")[3].Trim()
        $latency = $total.Split("|")[4].Trim()
        $cpu = $avg.Split("|")[1].Trim()

        $TestName = "Write Test 02"
        $Hash = New-Object System.Collections.Specialized.OrderedDictionary
        $Hash.Add("ComputerName",$Env:ComputerName)
        $Hash.Add("Test",$TestName)
        $Hash.Add("Disk",$Volume.DeviceID)
        $Hash.Add("IOPS",$iops)
        $Hash.Add("Mbps",$mbps)
        $Hash.Add("Latency",$latency)
        $Hash.Add("CPU",$cpu)

        $CSVObject = New-Object -TypeName psobject -Property $Hash
        $CSVObject | Export-Csv -Path $ExportFile -NoTypeInformation -Append

        # Clean up the test file
        Remove-Item $Testfile -Force
    
    }

}
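
To try the 8 KB block size mentioned earlier, simply change the -b parameter. For example, running a single test manually against one volume (same test file assumptions as in the script above):

# 8 KB random read variant of the first test, run manually against a single volume
.\DiskSpd.exe -r -w0 -t8 -o8 -b8K -c20G -d60 -h -L E:\Testfile.dat

# Review the results collected by the script afterwards
Import-Csv -Path $ExportFile | Format-Table -AutoSize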

For example, if you get results like the ones below, I would expect a fairly normal SAN (or just a badly configured/undersized high-end SAN), and would recommend the classic ConfigMgr disk layout with six volumes.

  • Sequential read – with 8 KB blocks: 5000 – 25000 IOPS
  • Random write – with 8 KB blocks: 2000 – 7000 IOPS

In this configuration I'm splitting the DB files, the DB logs, and the TempDB across three different volumes.

Classic ConfigMgr Disk Layout
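
The original post shows this layout as an image; as a rough sketch, the classic six-volume layout typically looks something like this (drive letters are just examples):

  • C: – Operating system
  • D: – ConfigMgr installation and applications
  • E: – Content library and package sources
  • F: – SQL Server database files
  • G: – SQL Server transaction logs
  • H: – SQL Server TempDB (data and logs)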

However, if I'm starting to see values way over 50000 IOPS for the same read test, and 20000 IOPS for the same write test, I would expect a really high-end SAN, possibly a local SSD array, or a local accelerator card (FusionIO etc.), and would, at least for smaller environments, recommend a different disk layout with only four volumes. Because of the large amount of available I/O, dividing the database components is not as critical as in the previous scenario.

SSD or SSD Accelerator card configuration for smaller ConfigMgr sites.
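
Again, the original post shows this as an image; as a rough sketch, the four-volume variant could look something like this (drive letters are just examples):

  • C: – Operating system
  • D: – ConfigMgr installation and applications
  • E: – Content library and package sources
  • F: – SQL Server database files, transaction logs, and TempDB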

Sample #2 – 3000 clients

Some time ago I got a configuration sent in from a smaller environment: They had 8 x 600 GB SSDs in one big RAID 10 array (roughly 2.4 TB usable), split up into logical volumes for OS/APP/Content/Database. Even though there are other options available with that many SSDs, this will work great for a small site. Here are the DiskSpd.exe results from one of the volumes:

  • Sequential read – with 8 KB blocks: 28000 IOPS, 200 MB/s
  • Sequential read – with 64 KB blocks: 15000 IOPS, 965 MB/s
  • Random write – with 8 KB blocks: 72000 IOPS, 565 MB/s
  • Random write – with 64 KB blocks: 22500 IOPS, 1400 MB/s

Sample #3 – 6500 clients

This was another production environment: They had 4 x 800 GB SSDs in a RAID 10 array, split up into logical volumes for OS/APP/Content/Database. Here are the DiskSpd.exe results from one of the volumes:

  • Sequential read – with 8 KB blocks: 36600 IOPS, 286 MB/s
  • Sequential read – with 64 KB blocks: 8000 IOPS, 514 MB/s
  • Random write – with 8 KB blocks: 19600 IOPS, 153 MB/s
  • Random write – with 64 KB blocks: 9800 IOPS, 613 MB/s

Note: These are the live results from running the tests once, with buffering configured to use neither the file cache nor the disk cache. The first result seemed quite low, so I think it was a one-time glitch in that test. The other results are a better match for the disk configuration.

About the author

Johan Arwidmark

Comments
bartrumb, 9 years ago:

Awesome! I'm excited to see these results.

For anybody in the process of designing and configuring SQL for SCCM, be sure to check out this link for a script to help pre-create the databases: blog.coretech.dk/kea/slides-and-scripts-from-the-system-center-2012-configuration-manager-r2-advanced-infrastructure-session-wcl307/

