10 steps to optimise SharePoint performance
- 25 May, 2010 03:52
SharePoint, the fastest growing product in Microsoft's history, is used to store reams of documents, which makes application performance a key component of successful SharePoint deployment and adoption. Here are 10 steps to improve the performance of your SharePoint servers.
Step 1: Separate user and database traffic
A common misconception is that servers connected to a high-speed network segment will have plenty of bandwidth to perform all required operations. But SharePoint places a tremendous amount of demand on SQL -- each request for a page can result in numerous calls to the database, not to mention service jobs, search indexing and other operations.
To prevent user and database traffic from competing, connectivity between the front-end servers and SQL should be isolated, either via separate physical networks or virtual LANs. Typically this requires at least two network interface cards in each front-end Web server, with static routes configured to ensure traffic is routed to the correct interface. The same configuration may also be applied to the application and index servers.
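As a sketch, assuming the database network is 10.0.2.0/24 and is reachable from the front-end server's second NIC via a gateway at 10.0.1.1 (both addresses are placeholders for your environment), a persistent static route can be added on each Web server from an elevated command prompt:

```shell
:: Send all SQL-bound traffic out via the dedicated database network.
:: The subnet and gateway below are hypothetical examples.
route -p add 10.0.2.0 mask 255.255.255.0 10.0.1.1

:: Confirm the route is in place
route print 10.0.2.0
```

The `-p` switch makes the route persistent across reboots, so it survives server maintenance.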
Step 2: Isolate search indexing
A typical medium server farm consists of one or more Web front-end servers, a dedicated index or application server and a separate SQL database server. Search traffic initiated by the index server must be processed by the same servers responsible for delivering user content. In order to prevent search and user traffic from conflicting, an additional server may be added to the farm, which is dedicated solely to servicing search queries (in smaller environments, the index server may also serve this function). The farm administrator would then configure the search service to perform crawls only against this dedicated server. This configuration may reduce traffic to the Web front-end servers by as much as 70% during index operations.
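One common way to steer crawl traffic at the dedicated server, rather than through the load balancer, is a hosts-file override on the index server; the IP address and host name below are hypothetical:

```shell
:: On the index server, resolve the Web application's URL to the
:: dedicated crawl target instead of the load-balanced address.
:: (IP address and host name are placeholders.)
echo 10.0.1.20  portal.contoso.com >> %SystemRoot%\System32\drivers\etc\hosts
```

With this entry in place, crawls initiated on the index server hit only the dedicated target, while end users continue to be load-balanced across the remaining front-end servers.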
Step 3: Adjust SQL parameters
One quick way to avoid future headaches is to provision the major SharePoint databases on separate physical disks (or LUNs if a storage-area network is involved). This means one set of disks for search databases, one for temporary databases and still another for content databases. Additional consideration should be given to isolating the log files (*.ldf). Although these do not incur the same level of I/O as other files, they do play a primary role in backup and recovery, and they can grow to several times the size of the primary data files (*.mdf).
Another technique is to proactively manage the size and growth of individual databases. By default, SQL grows database files in small increments, either 1MB at a time or as a fixed percentage of database size (usually 10%). These settings can cause SQL to waste cycles constantly expanding databases, and they block further writes while each expansion is in progress. An alternative approach is to pre-size the databases up to the maximum recommended size (100GB) if space is available and set auto-growth to a fixed size (e.g. 10MB or 20MB).
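A minimal sketch of pre-sizing a content database and fixing its growth increment with sqlcmd; the server, database and logical file names are placeholders for your own:

```shell
:: Pre-size the content database data file, then switch auto-growth
:: from a percentage to a fixed increment.
:: (Server, database and logical file names are placeholders.)
sqlcmd -S SQLSERVER01 -E -Q "ALTER DATABASE [WSS_Content] MODIFY FILE (NAME = N'WSS_Content', SIZE = 100GB);"
sqlcmd -S SQLSERVER01 -E -Q "ALTER DATABASE [WSS_Content] MODIFY FILE (NAME = N'WSS_Content', FILEGROWTH = 20MB);"
```

The logical file name for the MODIFY FILE clause can be found with `sp_helpfile` if it differs from the database name.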
Step 4: Defragment database indexes
SQL Server maintains its own set of indexes for data stored in various databases in order to improve query efficiency and read operations. Just as with files stored on disk, these indexes can become fragmented. It is important to plan for regular maintenance operations, which include index defragmentation. Special care should be taken when scheduling these operations, as they are resource-intensive and, in many cases, can prevent data from being written to or read from the indexes while they run.
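As a sketch, a lightly fragmented table can be reorganised online with sqlcmd; the server and table names are placeholders, and heavily fragmented indexes are usually rebuilt instead:

```shell
:: Reorganise all indexes on one table (online, lower impact).
:: Use REBUILD instead for heavy fragmentation, scheduled off-hours.
:: (Server, database and table names are placeholders.)
sqlcmd -S SQLSERVER01 -E -d WSS_Content -Q "ALTER INDEX ALL ON dbo.AllDocs REORGANIZE;"
```

REORGANIZE is always an online operation, which makes it the safer choice when maintenance windows are short; REBUILD is more thorough but can lock the table on older SQL editions.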
Step 5: Distribute user data across multiple content databases
Most SharePoint data is stored in lists: tasks, announcements, document libraries, issues, picture libraries, and so forth. A great deal of this data is actually stored in a single table in the content database associated with the site collection. Regardless of how many sites and subsites are created within the SharePoint hierarchy, each site collection has only one associated content database. This means that a site collection with thousands of subsites is storing the bulk of the user data from every list in every site in a single table in SQL.
This can lead to delays as SQL must recursively execute queries over one potentially very large dataset. One way to reduce the workload is to manage the mapping of site collections to content databases. Administrators can use the central administration interface to pre-stage content databases to ensure that site collections are associated with a single database or grouped logically based on size or priority. By adjusting the 'maximum number of sites' setting or changing database status to "offline", administrators can also control which content database is used when new site collections are created.
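For example, a new content database can be pre-staged and capped at a single site collection with stsadm, so that the next site collection created lands in it exclusively; the URL and database name below are placeholders:

```shell
:: Attach a new content database capped at one site collection.
:: (Web application URL and database name are placeholders.)
stsadm -o addcontentdb -url http://portal -databasename WSS_Content_Projects -maxsites 1
```

Once the site collection is created, the cap prevents further site collections from being added to that database, keeping its growth predictable.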
Step 6: Minimize page size
For SharePoint users connected to the portal via a LAN it is easy to manage content and find resources, but for users on the far end of a slower WAN link the heavyweight nature of a typical SharePoint page can be a real performance-killer.
If you have many remote users, start with a minimal master page, which, as the name implies, removes unnecessary elements and allows designers to start with a clean slate that only contains the base functionality required for the page to render correctly.
Step 7: Configure IIS compression
SharePoint content comes from two primary sources -- static files resident in the SharePoint root directories (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12 for 2007 and \14 for 2010) and dynamic data stored in the content database. At runtime, SharePoint merges the page contents from both sources, then transmits them inside an HTTP response to the requesting user. Internet Information Services (IIS) versions 6 and 7 both contain mechanisms for reducing the payload of HTTP responses prior to transmitting them across the network. Adjusting these settings can reduce the size of the data transmitted to the client, resulting in shorter load times and faster page rendering.
IIS compression settings can be modified from a base value of 0 (no compression) to a maximum value of 10 (full compression). Adjusting this setting determines how aggressive IIS should be in executing the compression algorithms.
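On IIS 7, for example, the gzip levels can be adjusted with appcmd. The sketch below uses level 9 for static and level 4 for dynamic content, which are illustrative starting points rather than official recommendations (dynamic compression trades CPU for bandwidth on every request, so test before raising it):

```shell
:: Enable dynamic compression, then tune the gzip levels.
:: (Levels 9/4 are illustrative starting points, not mandates.)
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/urlCompression /doDynamicCompression:"True" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression -[name='gzip'].staticCompressionLevel:9 -[name='gzip'].dynamicCompressionLevel:4 /commit:apphost
iisreset
```

Static compression is cheap because IIS caches the compressed result; dynamic compression is recomputed per response, which is why a lower level is typically chosen for it.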
Step 8: Take advantage of caching
Much of the content requested by users can be cached in memory, including list items, documents, query results and Web parts. Site administrators can configure their own cache profiles to meet different user needs. Anonymous users, for example, can be assigned one set of cache policies while authenticated users are assigned another, allowing content editors to get a more recent view of content changes than general readers. Cache profiles can also be configured by page type, so publishing pages and layout pages behave differently, and administrators have the option to specify caching on the server, the client, or both.
In addition, the SharePoint object cache can significantly improve the execution time of resource-intensive components, such as the Content Query Web Part. Separately, large objects that are requested frequently, such as images and files, can be cached on disk for each Web application (the BLOB cache) to improve page delivery times.
Step 9: Manage page customizations
SharePoint Designer is a useful tool for administrators and power users but page customization can be harmful to overall performance. When customization occurs, the entire page content, including the markup and inline code, is stored in the database and must be retrieved each time the page is requested. This introduces relatively little additional overhead on a page-by-page basis, but in larger environments with hundreds or even thousands of pages, all that back-and-forth to the database can add up to significant performance degradation.
To prevent this problem, administrators should implement a policy that restricts page customizations to situations where they are absolutely necessary. Site collection and farm administrators also have the option to disable the use of Designer or, when necessary, use the 'reset to site definition' option to undo changes and revert to the original content.
Step 10: Limit navigation depth
One of the most significant design elements on any portal site is the global, drop-down, fly-out menu at the top of each page. It seems like a handy way to navigate through all the various sites and pages -- until it becomes so deep and cluttered that all ability to navigate beyond the first few levels is lost completely. Even worse, fetching all the data to populate the navigation menus can be resource-intensive on sites with deep hierarchies.
SharePoint designers have the ability to customize the depth and level of each navigation menu by modifying the parameters for the various navigation controls within the master page. Administrators should limit that depth to a manageable level that does not impact performance.