Peer Cache has been available since ConfigMgr Current Branch v1610. This feature is designed to help reduce the network impact of delivering content to clients in distributed environments, and it works with all the package types that ConfigMgr supports (software updates, legacy packages, applications, images, etc.).
Here is a step-by-step guide that shows you how to set up Peer Cache in ConfigMgr Current Branch, as well as gives you some background info.

The Guide
The guide in this post consists of four simple steps to get it going:
- Step 1 – Create a collection structure
- Step 2 – Configure Peer Cache Client settings
- Step 3 – Create boundaries and boundary groups
- Step 4 – Verifying that it works
But first, a little bit of background.
ConfigMgr Peer Cache 101
The way Peer Cache works is that you enable, via client settings, which machines at each site should be allowed to share content with their friends. These machines are called Peer Cache Sources. Once these machines have the content, other machines in the same boundary group can download the content from their "friends" rather than from a remote DP. You can basically see these clients as extra distribution points 🙂
Peer Cache Sources
The content you want to have available for peer caching must be fully deployed to the peer cache sources, so it's located in their cache, before it becomes available for other clients. But once they have it, there is no need to wait before deploying to the rest; ConfigMgr learns about its new "distribution points" quite quickly.
Cached Content
Starting with ConfigMgr v1806, the management point only returns live/active clients on your network, as reported by their fast channel status.
Boundary Group
Peer Cache works per boundary group, so if a client roams to a new site (a new boundary group), it will be served by different Peer Cache Sources.
Security
Starting with ConfigMgr v1710, all transfers between the peer cache client and the peer cache source it's currently using are done over HTTPS.
Peer Cache and BranchCache
In many scenarios you can use BranchCache on its own, without the need to involve Peer Cache. But Peer Cache does work together with BranchCache, sort of: BranchCache acts as a backup for packages, and it can also peer content that Peer Cache cannot, like ConfigMgr policies. See this post for more info about setting up BranchCache: https://deploymentresearch.com/setup-branchcache-for-configmgr-current-branch/
Here is a quick summary of BranchCache and Peer Caching features:
BranchCache
- Peers on the local subnet only (unless adding in a third party solution, then it can span multiple subnets)
- Does not support OSD (unless adding in a free third party extension for BranchCache)
- Can start peering content as soon as the first client receives a few blocks of the file
- Can peer all ConfigMgr package types, as well as ConfigMgr policies
- Uses a separate cache than the ConfigMgr client
- Can utilize ConfigMgr content that has been data deduplicated to further reduce network impact.
Peer Cache
- Peers on boundary group level or on local subnet (configurable on the boundary group)
- Cannot start peering content until the entire package has been downloaded
- Can peer all ConfigMgr package types, but not ConfigMgr policies
- Uses the ConfigMgr client cache
Replacing WinPE Peer Cache
Peer Cache in ConfigMgr Current Branch v1610 and later is a direct replacement for the WinPE Peer Cache feature that was introduced in ConfigMgr Current Branch v1511. But hopefully you have upgraded your ConfigMgr platform to something newer 🙂
Scenario
In my lab, I have two sites, New York (192.168.1.0/24) which has a local DP, and Chicago (192.168.4.0/24) which does not have a local DP.
- New York: With the CM01 DP, has five clients: W10PEER-0001 – W10PEER-0005.
- Chicago: With no DP, has five clients: W10PEER-0006 – W10PEER-0010.
Note: To set up a lab with multiple routed networks, I recommend using a virtual router instead of the typical NAT switch in Hyper-V or VMware. It can be based on either Linux or Windows, and you'll find a step-by-step guide here: https://deploymentresearch.com/285/Using-a-virtual-router-for-your-lab-and-test-environment

Step 1 – Create a collection structure
Since you need to deploy content to a few machines first in each site (at least one), I created a collection structure that looked like this:
- Peer Cache Sources – All Sites: In this collection, I added two machines from Chicago, and two machines from New York.
- Peer Cache Clients – New York: Here I added three other machines in the New York site, just for testing
- Peer Cache Clients – Chicago: Here I added three other machines in the Chicago site, again just for testing
Dynamic Collection for Peer Cache Sources
If you have many locations, you might find it easier to create a dynamic collection that automatically finds good candidates for being Peer Cache Sources:
-- For all potential peer cache source machines but VMs
select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name,
  SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
  inner join SMS_G_System_SYSTEM_ENCLOSURE on SMS_G_System_SYSTEM_ENCLOSURE.ResourceID = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER on SMS_G_System_NETWORK_ADAPTER.ResourceId = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER_CONFIGURATION on SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_SYSTEM_ENCLOSURE.ChassisTypes in ("3","6","4","5","7","15","16","17","23")
  and (SMS_G_System_NETWORK_ADAPTER.AdapterType = "Ethernet 802.3"
  and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.IPEnabled = 1)
  and SMS_R_System.IsVirtualMachine = "False"
-- For all potential peer cache source machines including VMs (for lab)
select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name,
  SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
  inner join SMS_G_System_SYSTEM_ENCLOSURE on SMS_G_System_SYSTEM_ENCLOSURE.ResourceID = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER on SMS_G_System_NETWORK_ADAPTER.ResourceId = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER_CONFIGURATION on SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_SYSTEM_ENCLOSURE.ChassisTypes in ("3","6","4","5","7","15","16","17","23")
  and (SMS_G_System_NETWORK_ADAPTER.AdapterType = "Ethernet 802.3"
  and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.IPEnabled = 1)
-- For all potential peer cache source machines including VMs (for lab), limited to hard drive size
select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name,
  SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
  inner join SMS_G_System_SYSTEM_ENCLOSURE on SMS_G_System_SYSTEM_ENCLOSURE.ResourceID = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER on SMS_G_System_NETWORK_ADAPTER.ResourceID = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER_CONFIGURATION on SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.ResourceID = SMS_R_System.ResourceId
  inner join SMS_G_System_DISK on SMS_G_System_DISK.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_SYSTEM_ENCLOSURE.ChassisTypes in ("3","6","4","5","7","15","16","17","23")
  and SMS_G_System_NETWORK_ADAPTER.AdapterType = "Ethernet 802.3"
  and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.IPEnabled = 1
  and SMS_G_System_DISK.Size > 102398
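If you prefer scripting, the dynamic collection can also be created with the ConfigurationManager PowerShell module. A minimal sketch, assuming you run it from a ConfigMgr console PowerShell prompt with the site drive as the current location; the collection and rule names are just examples:

```powershell
# Run from a ConfigMgr console PowerShell prompt (ConfigurationManager module loaded,
# current location set to the site drive). Names below are examples - adjust as needed.
$CollectionName = "Peer Cache Sources - All Sites"

# The "including VMs (for lab)" WQL query from above
$Query = @"
select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name,
  SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
  inner join SMS_G_System_SYSTEM_ENCLOSURE on SMS_G_System_SYSTEM_ENCLOSURE.ResourceID = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER on SMS_G_System_NETWORK_ADAPTER.ResourceId = SMS_R_System.ResourceId
  inner join SMS_G_System_NETWORK_ADAPTER_CONFIGURATION on SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_SYSTEM_ENCLOSURE.ChassisTypes in ("3","6","4","5","7","15","16","17","23")
  and (SMS_G_System_NETWORK_ADAPTER.AdapterType = "Ethernet 802.3"
  and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.IPEnabled = 1)
"@

# Create the collection and add the query rule
New-CMDeviceCollection -Name $CollectionName -LimitingCollectionName "All Systems"
Add-CMDeviceCollectionQueryMembershipRule -CollectionName $CollectionName `
  -RuleName "Potential Peer Cache Sources" -QueryExpression $Query
```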

ConfigMgr OSD
In order for OS Deployment to use a peer cache source for content, you must add the SMSTSPeerDownload collection variable, set to True, to the collection(s) you are deploying the task sequence to. Optionally, you can also add the SMSTSPreserveContent variable to force the machine to keep the packages used during OSD in the ConfigMgr client cache. If you skip adding the SMSTSPeerDownload variable, the client will always go to a distribution point for the packages during OSD.
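The collection variables can also be set with PowerShell. A minimal sketch, using the ConfigurationManager module; "OSD Deploy" is a made-up example collection name, so replace it with the collection you deploy the task sequence to:

```powershell
# Run from a ConfigMgr console PowerShell prompt (site drive as current location).
# "OSD Deploy" is an example collection name - use your own.
New-CMDeviceCollectionVariable -CollectionName "OSD Deploy" `
  -VariableName "SMSTSPeerDownload" -Value "True"

# Optional: keep the packages used during OSD in the ConfigMgr client cache
New-CMDeviceCollectionVariable -CollectionName "OSD Deploy" `
  -VariableName "SMSTSPreserveContent" -Value "True"
```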


Step 2 – Configure Peer Cache Client settings
To make ConfigMgr Clients share content with others, they must be configured to do so via Client Settings. You also need to extend the ConfigMgr client cache (see below).
Warning: Do not enable peer caching on all your clients; just pick a few at each site, or at least use a dynamic collection query to find suitable candidates. See the preceding examples in the Step 1 section.
Coolness: Behind the scenes, the client setting is named CCM_SuperPeerClientConfig, and you will also see SuperPeer mentioned in the log files.
1. In the Administration workspace, in the Client Settings node, create a new custom client device setting named Peer Cache Sources.
2. In the Peer Cache Sources dialog box, select the Client Cache Settings check box, and then in the left pane, select Client Cache Settings.
3. In the Custom Device Settings pane, set the Maximum cache size to something useful, like 65 GB, and then enable peer caching by setting the Enable Configuration Manager client in full OS to share content policy to Yes.
Note: A perhaps better way to set the maximum cache size is by using a configuration item that, via a script, sets it dynamically depending on how much free disk space the machine has. You'll find a good example from Heath Lawson (@HeathL17) here: http://blogs.msdn.microsoft.com/helaw/2014/01/07/configuration-manager-cache-management.
4. Deploy the Peer Cache Sources client setting to the Peer Cache Sources – All Sites collection.
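If you prefer to script the client setting, here is a sketch using the ConfigurationManager module. The cmdlet and parameter names are as I recall them, so verify them against your module version before relying on this:

```powershell
# Create a custom device client setting (the name is the one used in this guide)
New-CMClientSetting -Name "Peer Cache Sources" -Type Device

# 65 GB cache (value is in MB), and enable the "share content" (SuperPeer) policy
Set-CMClientSettingClientCache -Name "Peer Cache Sources" `
  -ConfigureCacheSize $true -MaxCacheSize 66560 -EnableSuperPeer $true

# Deploy the setting to the collection with the peer cache sources
Start-CMClientSettingDeployment -ClientSettingName "Peer Cache Sources" `
  -CollectionName "Peer Cache Sources - All Sites"
```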

Step 3 – Create boundaries and boundary groups
Since peer caching clients find friends within a boundary group only, you need a somewhat decent structure of boundary groups. For simplicity in my testing, I simply created the following boundary groups:
- New York: To which I added the 192.168.1.1 – 192.168.1.254 IP range boundary
- Chicago: To which I added the 192.168.4.1 – 192.168.4.254 IP range boundary
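The same boundaries and boundary groups can be created with PowerShell. A sketch, assuming the ConfigurationManager module and the lab IP ranges from the scenario above:

```powershell
# New York boundary and boundary group
New-CMBoundary -DisplayName "New York" -BoundaryType IPRange -Value "192.168.1.1-192.168.1.254"
New-CMBoundaryGroup -Name "New York"
Add-CMBoundaryToGroup -BoundaryName "New York" -BoundaryGroupName "New York"

# Chicago boundary and boundary group
New-CMBoundary -DisplayName "Chicago" -BoundaryType IPRange -Value "192.168.4.1-192.168.4.254"
New-CMBoundaryGroup -Name "Chicago"
Add-CMBoundaryToGroup -BoundaryName "Chicago" -BoundaryGroupName "Chicago"
```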

Step 4 – Verifying that it works
Now it's time to verify that it works. In my example, I deployed a 1 GB package to the Peer Cache Sources collection (containing two clients in each site).
Once these clients had the content, I deployed it to the remaining clients in each site, and watched what happened by following the CAS.log on each client.
Behavior in New York
In the initial versions of Peer Cache, a client would always use a local DP if there was one on the same subnet, but in recent versions you can configure that behavior on the boundary group.

If you change your boundary group settings to the above, clients in New York get content from the CM01 DP, even though their peer caching friends have the content. This is the interesting line in the log:
Matching DP location found 0 – http://cm01.corp.viamonstra.com/sms_dp_smspkg$/ps10007f (Locality: SUBNET)

Behavior in Chicago
In Chicago, there is no local DP, so the clients will get the content from their peer caching friends. Below is a CAS.log example from a client in Chicago; as you can see, it ranked the peer cache source before the remote CM01 DP (which it also found).
Shorthand: The clients in Chicago are getting content from their peer caching friends. This is the interesting line in the log:
Matching DP location found 0 – http://w10peer-0006.corp.viamonstra.com:8003/sccm_branchcache$/ps100083 (Locality: PEER)
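To quickly check which content source a client used without reading the whole log, you can search CAS.log for the locality values. A sketch, assuming the default client log path:

```powershell
# Look for content location results in CAS.log on a client.
# "Locality: PEER" means a peer cache source was used,
# "Locality: SUBNET" means a DP on the same subnet.
Select-String -Path "C:\Windows\CCM\Logs\CAS.log" -Pattern "Locality: (PEER|SUBNET)"
```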

As a final touch, after waiting 24 hours, you can see the clients reporting their download history, both in ContentTransferManager.log and in the Monitoring workspace, by selecting the Distribution Status / Client Data Sources node.
Note: The folks at 2Pint Software have released a utility that allows you to set the interval. The name is Trigger Happy, and you can download it here: http://2pintsoftware.com/download/trigger-happy


Written by Johan Arwidmark
Best article I've ever read. The way you explain things is very easy to understand.
Thanks 🙂
Very helpful post Johan.
We're a medium-sized hospital with hundreds of remote locations on various speed internet circuits. After reading your post I was able to set up a peer cache host at each remote location, cutting our Windows 10 refresh time to each office from 12 hours to 1.5, and cutting the bandwidth usage down 90%.
Thanks very useful notes
I did this same and it works well
i have question ?
i want to deploy a office 365 for client but i need only to keep the content on peer source machine not to be install
same as for windows upgrade by task sequence
kindly help me
I wrote an article on pre-caching content that should be helpful: https://deploymentresearch.com/pre-caching-content-for-p2p-in-a-configmgr-environment/