
PowerShell and SharePoint Modules

There are a lot of posts about creating PowerShell modules for SharePoint. I decided to write this post so everything I've learned is in one place. I hope you learn something from it (as I did).

I have tested the code in this article against SharePoint on-premises 2010, 2013 and 2016.

Modules have the advantage that, once created and installed, the functions they contain are available in every PowerShell session.

Loading the Microsoft.SharePoint.PowerShell Snapin

First, make sure the Microsoft.SharePoint.PowerShell snapin is always loaded. This way you can be certain the snapin is available, and it's not necessary to add it to every script file you create.

Note: this only has to be done once; the snapin will then be loaded in all future PowerShell sessions.

Open the PowerShell ISE and run the following at the command line. This will create a new profile script (profile.ps1), or open it if it already exists.

PS C:\> if (!(Test-Path $profile.AllUsersAllHosts)) {New-Item -Type File -Path $profile.AllUsersAllHosts -Force}
PS C:\> powershell_ise $profile.AllUsersAllHosts

Now that the profile.ps1 file is open in the PowerShell ISE, add the following code. If the file already contains code, make sure the following isn't already present before adding it.

# Reuse a single thread per runspace (the same option the SharePoint Management Shell sets).
$ver = $host | select version
if ($ver.Version.Major -gt 1) {$host.Runspace.ThreadOptions = "ReuseThread"}

# Load the SharePoint snapin only if it is not already loaded.
if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null)
{
    Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}

(Screenshot: the completed profile.ps1 open in the PowerShell ISE.)

Make sure to close this PowerShell instance and open a new one. To test that the Microsoft.SharePoint.PowerShell snapin has been loaded into your new PowerShell instance, type Get-SPFarm at the command line; it should now execute without any issues.
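
You can also confirm that the snapin itself is registered in the current session:

# Lists the SharePoint snapin only if it has been added to this session.
Get-PSSnapin -Name "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue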

Now that we have the Microsoft.SharePoint.PowerShell snapin loading with every instance of PowerShell, let's move on to creating our first module.

Creating a Script/Module Template

A PowerShell module is fundamentally the same as any other script file, with the addition of a few elements.  Let's start by showing you the fundamental header, body and footer that are in every one of my script files.  In fact, I keep this as a template file and use it each time I wish to create a new module.

<#
	.SYNOPSIS
	.DESCRIPTION
	.NOTES
#>
function Verb-YourFunctionName
{
    <#
    .SYNOPSIS

    .DESCRIPTION

    .PARAMETER Web

    .PARAMETER LeadingSpaceCount
        Used to control verbose output. If -1 (default) no verbose output will be rendered. Any value of zero or greater, will generate verbose output to the host. The value of this parameter is used to indent the verbose output. For example, a value of 2 will put two spaces before the output.

    .EXAMPLE
    .EXAMPLE
    #>
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true)][Microsoft.SharePoint.PowerShell.SPWebPipeBind]$Web,
        [Parameter(Mandatory=$false)]
            [System.Int32]$LeadingSpaceCount=-1
    )

    # Setup the leading space for verbose output.
    # If -1 (default) no verbose output will be rendered.  Any value of zero or greater, will generate verbose output to the host.
    # The value of this parameter is used to indent the verbose output.  For example, a value of 2 will put two spaces before the output.
    $ls = ""
    if($LeadingSpaceCount -ge 0){$ls = "".PadRight($LeadingSpaceCount," "); $LeadingSpaceCount++}
    if($LeadingSpaceCount -ge 0)
    {
        Write-Host "$ls- WHAT THIS FUNCTION DOES" -ForegroundColor Yellow
        $ls = "".PadRight($LeadingSpaceCount," ")
        $LeadingSpaceCount++
    }
    try
    {
        [Microsoft.SharePoint.SPWeb]$oWeb = Get-SPWeb $Web
    }
    catch
    {
        Write-Host ([System.String]::Format("$ls- EXCEPTION: [{0}] [{1}]", $MyInvocation.MyCommand.Name, $_.Exception.Message)) -ForegroundColor Red
        Write-Host
        throw $_.Exception
    }
    finally
    {
        if($oWeb -ne $null){$oWeb.Dispose()}
    }
    return($SomeReturnVar)
}

#region Module Members to be Exported
if([System.IO.Path]::GetExtension($MyInvocation.ScriptName) -like ".psm1")
{
    Export-ModuleMember -Function Verb-YourFunctionName -ErrorAction SilentlyContinue
}
#endregion
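
While a function is still under development, you can dot-source the .ps1 file and call it directly; the footer's .psm1 extension check keeps Export-ModuleMember from running outside of a module. A quick, hypothetical example (the file name and site URL are placeholders):

# Hypothetical example: the file name and site URL are placeholders.
. .\Verb-YourFunctionName.ps1
Verb-YourFunctionName -Web "http://intranet" -LeadingSpaceCount 0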

The header section of the module is simply a set of comments describing the functionality contained within the script/module file.  I tend to break up my functions into separate files so maintenance is simplified.

The body section of the module is the function or code we are writing.  In my template, the function is named Verb-YourFunctionName; when the template is used, the function name is changed to match the need.

The footer section is where we tell the Add-PSModule code (in the next section) what functions are to be included in our module.

#region Module Members to be Exported
if([System.IO.Path]::GetExtension($MyInvocation.ScriptName) -like ".psm1")
{
    Export-ModuleMember -Function Verb-YourFunctionName -ErrorAction SilentlyContinue
}
#endregion

You will change the Verb-YourFunctionName to the actual name of your function in this module.

Using Add-PSModule to Add Your Module

The last step in this process is to use the Add-PSModule function to create and install the module so it is available in your PowerShell sessions.

Personally, I do this in a PowerShell function; one for each module I wish to create.  For example, I have BMCommon, BMActiveDirectory, BMSharePoint2013, etc. modules.  In most of my client environments, I install these modules so all of my extended functions and variables are available to all instances of PowerShell.

function Add-BMSharePoint2013PSModule
{
    Param(
        [Parameter(Mandatory=$false)]
            [string]$ModuleInstallPath = "C:\Windows\System32\WindowsPowerShell\v1.0\Modules",
        [Parameter(Mandatory=$false)]
            [int]$LeadingSpaceCount=-1
    )

    $ls = ""
    if($LeadingSpaceCount -ge 0){$ls = "".PadRight($LeadingSpaceCount," "); $LeadingSpaceCount++}

    $sScriptSourcePath = Split-Path -parent `
        $PSCommandPath
    $sModuleName = "BMSharePoint2013"
    $sManifestDescription = "Contains SharePoint 2013 management extended functions."

    $aScriptFiles = "Add-ADUserToSPGroup.ps1",
        "Add-SPAppTile.ps1",
        "Add-SPSitePermissionLevel.ps1"

    $bSuccess = Add-PSModule `
        -ScriptSourcePath $sScriptSourcePath `
        -ModuleName $sModuleName `
        -ScriptFiles $aScriptFiles `
        -ManifestAuthor "Bob Mixon" `
        -ManifestCompanyName "Bob Mixon" `
        -ManifestDescription $sManifestDescription `
        -ModuleInstallPath $ModuleInstallPath `
        -LeadingSpaceCount $LeadingSpaceCount

    return($bSuccess)
}

Add-BMSharePoint2013PSModule -LeadingSpaceCount 0
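
Note that the Add-PSModule helper itself is not shown in this post. Purely as a starting point, here is a minimal, hypothetical sketch of what such a helper could look like. It assumes PowerShell 3.0 or later, that each script file ends with the Export-ModuleMember footer shown earlier, that the parameter names simply mirror the call above, and that writing to the default install path requires an elevated session:

# Hypothetical sketch only; not the original Add-PSModule implementation.
function Add-PSModule
{
    Param(
        [Parameter(Mandatory=$true)][string]$ScriptSourcePath,
        [Parameter(Mandatory=$true)][string]$ModuleName,
        [Parameter(Mandatory=$true)][string[]]$ScriptFiles,
        [Parameter(Mandatory=$true)][string]$ManifestAuthor,
        [Parameter(Mandatory=$true)][string]$ManifestCompanyName,
        [Parameter(Mandatory=$true)][string]$ManifestDescription,
        [Parameter(Mandatory=$false)][string]$ModuleInstallPath = "C:\Windows\System32\WindowsPowerShell\v1.0\Modules",
        [Parameter(Mandatory=$false)][int]$LeadingSpaceCount = -1   # accepted for compatibility with the call above; not used in this sketch
    )

    try
    {
        # Create the target module folder, e.g. ...\Modules\BMSharePoint2013.
        $sModulePath = Join-Path $ModuleInstallPath $ModuleName
        if(!(Test-Path $sModulePath)){New-Item -ItemType Directory -Path $sModulePath | Out-Null}

        # Copy each script file into the module folder and build a .psm1 that
        # dot-sources it, so the Export-ModuleMember footer in each script sees
        # a .psm1 as its caller and exports the function.
        $aLines = foreach($sFile in $ScriptFiles)
        {
            Copy-Item -Path (Join-Path $ScriptSourcePath $sFile) -Destination $sModulePath -Force
            '. "$PSScriptRoot\{0}"' -f $sFile
        }
        Set-Content -Path (Join-Path $sModulePath "$ModuleName.psm1") -Value $aLines

        # Generate the module manifest (.psd1) next to the .psm1.
        New-ModuleManifest -Path (Join-Path $sModulePath "$ModuleName.psd1") `
            -RootModule "$ModuleName.psm1" `
            -Author $ManifestAuthor `
            -CompanyName $ManifestCompanyName `
            -Description $ManifestDescription

        return($true)
    }
    catch
    {
        Write-Host ("- EXCEPTION: [{0}]" -f $_.Exception.Message) -ForegroundColor Red
        return($false)
    }
}

With the module installed under the default module path, a new PowerShell instance will auto-load it on first use (PowerShell 3.0 and later), or you can load it explicitly with Import-Module BMSharePoint2013.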

 

This is all you really need to do!  The high-level steps for accomplishing this are:

  1. Make sure the PowerShell snapin "Microsoft.SharePoint.PowerShell" is loaded in the PowerShell profile.ps1 file.
  2. Create your new function script file/module; include the Export-ModuleMember footer at the bottom of the script file/module.
  3. Create and run your Add-PSModule wrapper function (for example, Add-BMSharePoint2013PSModule) to create and install the module.

Once these steps are complete, you can close your PowerShell instance, open another and your functions will be available.

 

Document Libraries and Folders

I am asked on a regular basis how I feel about the use of Document Library folders. To be honest, I always tell people “avoid them” unless you thoroughly understand the consequences of using folders or have a controlled means of promoting consistent use. This response always stirs up controversy so I thought I’d write this article to help you better understand my thinking.

Before I dive too deep into this article, I need to first say: there is no right or wrong way of implementing your document management solution. There are simply better ways of surfacing information and improving the user experience. If you are concerned about findability and user adoption, then read on! Another prerequisite to reading this article is to understand that this is not a black-and-white topic; meaning, it's not an all-or-nothing issue. I believe document library folders have a place and can be used as long as the consequences are thoroughly understood.

Do I use them in my implementations? Absolutely… However, I do so in a very limited manner and very specific ways. With all that said, let’s move on…

First, let's take a look at why you would use document library folders. The only reason that immediately comes to mind is: because that is what we are used to doing! I personally don't think this is a great reason, but it does hold some truth. We have pushed users toward file shares and local drives, all of which have folders and nested folders.

Now let's take a look at why we shouldn't use document library folders, or at ways of improving the user experience.

Navigating document library folders is not the same as what users experienced when storing/managing files on file shares and other drives. The tree view, with +/- to expand/collapse branches, doesn’t exist in the out-of-box Document Library Web Part. Navigating down a folder branch requires the simple click of a link. However, navigating back up the branch requires the user to click the browser back button. This is a cumbersome user experience at best.

Just so you are aware, there are options to help solve some of the navigation issues described above. You could write a custom Web Part, find publicly available code or purchase a third-party product. There are code samples and products available. However, I would first recommend you continue reading, because these products won't solve all of the problems you will have.

In addition to the navigation issues described above, folders hide the content contained within them, especially when you use sub-folders. For example, when a user arrives at a site and sees a library named Shared Documents (a name I also don't recommend using), first off, the name of that library doesn't tell them anything about what it contains. Also, if the user clicks on the library, all they see is the first layer of folders. This may begin to indicate what files are contained within those folders, but it doesn't suggest anything about what might be contained within sub-folders. Again, we are hiding content and detail from users, forcing them to traverse folders until they find what they are looking for. Not very efficient at all.

Remember, if it takes a long time for a user to find an important document, quite often they will make a local copy for future reference; so they don’t have to go through the pains of finding it again on your Intranet. Findability is extremely important!

I also have to make a comment about manually managing folder-level security: avoid it at all costs! It can and will become confusing for your user base. I have seen structures become so complex that contributors lose confidence in knowing where to store and manage their content. And what happens when contributors lose comfort and confidence in the solutions we implement? They stop using them and go back to what they feel secure and comfortable with… Something else to remember: security is one of the most complex features to teach a non-technical user. If you expect your non-technical user base to manage item and folder-level security, it will eventually be done incorrectly, and that brings up a whole different set of issues (a good topic for another article). And if you aren't expecting general non-technical users to manage their own security (which is good; I don't recommend it), then you are placing the burden of that management on Site Collection Administrators, IT or whomever you have assigned it to.

Lastly, let's talk about content duplication and the single source of truth. Content duplication occurs in an organization in many different ways; so many that we don't want to add to the problem. Folders hide content. As such, if it takes a long time for a user to find an important document, they are more likely to make a local copy of that document so they don't have to find it again in the future. Another way duplication occurs with folders is when a document simply needs to be present in multiple folders. For example, let's say you have a document that relates to 2 of the 3 areas of Human Resources Benefits; it applies to Medical and Dental Benefits but not Vision Benefits. If you have benefit documents broken down into folders by benefit type (which is common), you may have to duplicate this common document in two places. Again, we have to avoid duplicating content in our document management system at all costs. It's called managing the single source of truth!

So… what can be done to solve these problems, improving the user experience and overall findability?

First and foremost, consider using metadata! Whether you choose to continue using a Document Library named Shared Documents or not, I would recommend you start applying Content Types and metadata to your documents. As a simplified example, move all of your documents in a library to the root folder and apply metadata that can be used to filter the view. Still not the greatest user experience; however, a user will at least be able to see all of the files contained within a library simply by clicking the library link. They can then filter the view down using the metadata column filtering abilities built in to SharePoint.
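
As a rough illustration (the site URL, library name and content type name below are placeholders, and the content type is assumed to already exist in the site's content type gallery), enabling content types on an existing library and attaching one can be scripted with the server object model:

# Hypothetical example: the URL, library name and content type name are placeholders.
$web = Get-SPWeb "http://intranet/sites/hr"
try
{
    $list = $web.Lists["Shared Documents"]
    $ct   = $web.AvailableContentTypes["Medical Benefit Document"]

    # Allow content types on the library and attach the custom content type.
    $list.ContentTypesEnabled = $true
    $list.ContentTypes.Add($ct) | Out-Null
    $list.Update()
}
finally
{
    if($web -ne $null){$web.Dispose()}
}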

Using more Document Libraries is an even better approach to solving the overuse-of-folders issue. Let me demonstrate: a user arrives at a Human Resources site and is presented with a Document Library named Shared Documents, with folders and sub-folders. Well, you already know how I feel about this user experience. Now improve the experience by creating multiple Document Libraries; one named Medical Benefit Documents, another named Dental Benefit Documents and another named Vision Benefit Documents. Using this approach alone will significantly improve the user experience from a findability perspective. The Medical Benefit Documents library has a Content Type associated with it named Medical Benefit Document, which forces the contributor to store and manage only documents of type Medical Benefit. This also helps with your overall IA; but that's a whole different article series! Just be certain not to make your Document Libraries so generic that you have to manage too many Content Types in them; this can also be confusing for contributors.
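
Creating those libraries is just as easy to script; a minimal sketch, assuming the Benefits site already exists (the URL is a placeholder):

# Hypothetical example: the site URL is a placeholder; the library names match the ones described above.
$web = Get-SPWeb "http://intranet/sites/hr/benefits"
try
{
    foreach($sName in "Medical Benefit Documents","Dental Benefit Documents","Vision Benefit Documents")
    {
        # Only create the library if it does not already exist.
        if($web.Lists.TryGetList($sName) -eq $null)
        {
            $web.Lists.Add($sName, "", [Microsoft.SharePoint.SPListTemplateType]::DocumentLibrary) | Out-Null
        }
    }
}
finally
{
    if($web -ne $null){$web.Dispose()}
}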

Another approach, and one I use all the time, is to build your hierarchy with sub-sites. Take Human Resources Benefits for example. You may be better suited having a Benefits sub-site and then sub-sites below that, one for each specific type of benefit (Medical, Dental and Vision). There are many, many advantages to using sub-sites in this manner. Sub-sites used this way are really nothing more than what is called "a point of aggregation and association". Let me provide you with another example. On a Medical Benefits sub-site you can store and manage related announcements, policies, procedures, forms, FAQs, glossary terms, links to external references and so on. You are basically bringing the user to a new level of experience when you aggregate all of the information associated with Medical Benefits on a single sub-site.
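
That kind of hierarchy is also easy to script; a quick, hypothetical sketch using New-SPWeb (the URLs are placeholders; STS#0 is the standard Team Site template):

# Hypothetical example: the URLs are placeholders.
New-SPWeb -Url "http://intranet/sites/hr/benefits" -Name "Benefits" -Template "STS#0"
New-SPWeb -Url "http://intranet/sites/hr/benefits/medical" -Name "Medical Benefits" -Template "STS#0"
New-SPWeb -Url "http://intranet/sites/hr/benefits/dental" -Name "Dental Benefits" -Template "STS#0"
New-SPWeb -Url "http://intranet/sites/hr/benefits/vision" -Name "Vision Benefits" -Template "STS#0"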

Another advantage of using sub-sites as a point of aggregation and association is realized when you begin to aggregate content to a higher level on your Intranet. For example, if you choose to roll up all Policy documents (documents of type Policy) into an aggregated view using the Content Query Web Part, those policies can be grouped by site/sub-site. This is a very useful view and cannot be produced automatically using folders.

So how do you handle the folder-level security confusion issue described above, Bob? Great question, and I'm glad you asked! Remember, my implementations utilize sub-sites and multiple document libraries. I simply place a Content Editor Web Part above the Document Library Web Part and use it to describe the library's intended use and security. I target this Web Part to the contributor audience so it is only displayed for those who are contributors on that specific site or page.

The document management/Intranet solutions I have implemented contain a wide range of the techniques described above. The actual implementation techniques used all depend on the size of the organization, the amount of content, contributors, consumers, security and a myriad of other requirements. It is my hope that some of these techniques help with your next implementation or give you ideas for how to improve your current one.

Please leave questions and comments below as it tends to help me understand what other articles I need to write!

Until next time…