

Saturday, September 27, 2008

SQL Server 2005 Performance Audit : SQL Server Hardware Configuration Checklist

SQL Server Hardware Configuration Checklist:

Number of CPUs (and/or Cores)
32-Bit vs. 64-Bit
CPU MHz
CPU L2 Cache Size
Physical RAM Amount
SAN vs. Local Storage
Total Amount of Available Drive Space on Server
Total Number of Physical Drives in Each Array
RAID Level of Array Used for SQL Server Databases
Hardware vs. Software RAID
Disk Fragmentation Level
Location of Operating System
Location of SQL Server Executables
Location of Swap File
Location of tempdb Database
Location of System Databases
Location of User Databases
Location of Log Files
Location of Backup Files
Number of Disk Controllers in Server
Type of Disk Controllers in Server
Size of Cache in Disk Controllers in Server
Is Write Back Cache in Disk Controller On or Off?
Speed of Disk Drives
How Many Network Cards Are in Server?
What is the Speed of the Network Cards in Server?
Are the Network Cards Hard-Coded for Speed/Duplex?
Are the Network Cards Attached to a Switch?
Are All the Hardware Drivers Up-to-Date?
Is this Physical Server Dedicated to SQL Server?

Auditing SQL Server Hardware Is an Important Early Step

We have to start somewhere, and server hardware is as good a place to start as any other audit area. When many DBAs, or database users, think about poor database performance, the first thing that pops into their minds is often whether the hardware and/or hardware configuration is up to meeting the demands of their SQL Server databases. What you may not know is that, generally speaking, server hardware is one of the least common causes of SQL Server performance problems. This is not to say that hardware is not important; it is. But oftentimes it is some other issue, such as poor database design, poorly written queries, or poor indexing, that is causing the performance problems, not the hardware.

But again, that is why we are conducting this performance audit. Our goal is to identify what potential performance issues your server may have, and hardware just might be the problem. If you have tuned your database to perfection, and are still having performance issues, then hardware may be your bottleneck to better performance.

This portion of the audit will be divided into these major sections:

• CPU
• Memory
• Disk Storage
• Network Connectivity
• Misc.

As part of this audit, you will want to complete the above checklist. As you do, you may find out things about your server you were not aware of. It is always a good idea to “inventory” your server’s hardware so you know exactly what you are working with.

CPU

Number of CPUs

This first point is obvious: the more CPUs your SQL Server has, the faster it can potentially perform. I say potentially, because if your SQL Server does not need extra CPU resources, adding more CPU power won’t affect performance much, if at all.

CPUs currently come in one-, two-, and four-core models. The most cost-effective option right now is to go with four-core CPUs, in regard to both hardware and licensing costs, assuming you need the power of four cores.

The Standard Edition of SQL Server 2005 can support up to four CPUs. The Enterprise Edition can support as many CPUs as the OS it is using supports.

Keep in mind that adding additional CPUs does not provide linear performance benefits. In other words, a two-CPU server is not twice as powerful as a single-CPU server, and an eight-CPU server is not twice as powerful as a four-CPU server. Each additional CPU offers diminishing returns.
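A rough way to picture these diminishing returns is Amdahl's law. The sketch below is my own illustration, not part of the original audit, and the 90% parallelizable fraction is an assumed figure; real SQL Server scaling depends entirely on the workload.

```python
# Diminishing returns from added CPUs, sketched with Amdahl's law.
# The 90% parallelizable fraction is an assumed, illustrative figure.

def speedup(cpus, parallel_fraction=0.90):
    """Theoretical speedup with `cpus` processors (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cpus)

for n in (1, 2, 4, 8):
    print(f"{n} CPU(s): {speedup(n):.2f}x")
# Under this assumption, 8 CPUs yield roughly a 4.7x speedup, not 8x.
```

Under these assumed numbers, doubling from four to eight CPUs buys only about a 50% improvement, which is why sizing by simple CPU count so often disappoints.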

It is very difficult to estimate the number of CPUs any specific SQL Server-based application will need. This is because each application works differently and is used differently. Experienced DBAs often have a feel for what kind of CPU power an application might need, although until you really test your server's configuration under realistic conditions, it is hard to really know what is needed.

Because of the difficulty of selecting the appropriate numbers of CPUs to purchase for a SQL Server, you might want to consider the following rules of thumb:

• Purchase a server with as many CPUs as you can afford.

• If you can't do the above, then at least purchase a server that has room to expand its total number of CPUs. Almost all SQL Servers need more power as time passes and workloads increase.

Here are some potential scenarios:

• SQL Server will be used to run a specialized accounting application that will only be used by no more than ten users at a time, and you don't expect this to change in the next couple of years. If this is the case, a two core CPU will most likely be adequate. If you expect that the number of users may increase fairly soon, then you would want to consider purchasing a four core CPU now. In addition, purchasing a server with an empty socket offers you even more flexibility for future growth.

• SQL Server will be used to run a specialty application written in-house. The application will not only involve OLTP, but will also need to support fairly heavy reporting. It is expected that concurrent usage will not exceed 100 users. In this case, you might want to consider a server with four CPUs, but with the ability to expand to four more if necessary. It is hard to predict what "fairly heavy reporting needs" really means. I have seen some fairly simple, but poorly written, reports peg out all of a server's CPUs.

• SQL Server will run an ERP package that will support between 250 and 300 concurrent users. For "heavy-duty" applications like this, ask the vendor for their hardware recommendations, as they should already have a good idea of their product's CPU needs. You will probably be looking at at least eight CPUs as a starting point.

I could provide many other examples, but the gist of what I am trying to get across is that it is very hard to predict exactly how many CPUs a particular SQL Server-based application will need, and that you should generally purchase a system bigger than you think you will need, because in many cases, usage demands on an application are often underestimated. It is less expensive in the long run to purchase a larger server now (with more CPUs), than to have to replace your entire server in 6-12 months because of poor estimates.


32-Bit vs. 64-Bit

When you are deciding between purchasing 32-bit vs. 64-bit CPUs, what you are actually deciding is how much memory you want SQL Server to easily access, not CPU power. In fact, 64-bit CPUs can underperform 32-bit CPUs when it comes to raw processing power. You generally only need to consider 64-bit CPUs if you expect your server will need 16 GB or more of RAM.
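The memory point can be made concrete with a little arithmetic: a 32-bit process can directly address only 2^32 bytes. (32-bit editions of SQL Server 2005 can reach beyond this with AWE, but 64-bit makes large memory access straightforward.) A minimal sketch:

```python
# Why 64-bit matters for memory, not raw speed: address space arithmetic.
bits = 32
addressable_bytes = 2 ** bits
addressable_gib = addressable_bytes / 2 ** 30
print(addressable_gib)  # 4.0 -- a 32-bit process tops out at 4 GiB
```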

CPU Speed

Like the number of CPUs, the needed speed of the CPUs you purchase is hard to estimate. Generally speaking, as with the number of CPUs your SQL Server has, purchase the fastest CPUs you can afford. It is better to purchase too large a system than too small a system.

CPU L2 Cache

One of the most common questions I get is "should you purchase a less expensive CPU with a smaller L2 cache, or a more expensive XEON CPU with a larger L2 cache?" What complicates this decision is the fact that chips with smaller L2 caches are often available at higher clock speeds than chips with large L2 caches. Here's my rule of thumb:

• If you will only be running one or two CPU cores, go with the fastest CPU you can get, with L2 cache as a secondary consideration. If you have a choice of L2 cache size, always get the largest you can.

• But, if you will be running four or more CPU cores, then you want to go with the CPUs with the largest L2 cache, even though their speed may not be as high. The reason for this is in order for SQL Server to run optimally on servers with four or more CPUs, the L2 cache has to be much larger, otherwise you will be wasting much of the power of the additional CPUs.

Most two- and four-core CPUs come with 4 MB to 16 MB of L2 cache.


Memory

While server memory is discussed here after the CPU, don't think that it is any less important. In fact, memory is probably the most important hardware ingredient for any SQL Server, affecting SQL Server's performance more than any other hardware.

When we are talking about memory, we are referring to physical RAM. Often, the word memory (in the Windows Server world) refers to physical RAM and virtual memory (the swap file) combined. This definition is not useful for SQL Server because SQL Server is not designed to rely on virtual memory.

Instead of using the operating system's combination of physical RAM and virtual memory, SQL Server prefers to keep data in physical RAM as much as it can. The reason for this is speed. Data in RAM is much faster to retrieve than data on disk.

When SQL Server can't keep all of the data it manages in RAM (the SQL Server Buffer Cache), it accesses disk, similar to the way that the operating system manages virtual memory. But SQL Server's "caching" mechanism is more sophisticated and faster than what the operating system virtual memory can provide.

The fastest way to find out if your SQL Server has an adequate amount of RAM is to check the SQL Server: Buffer Manager: Buffer Cache Hit Ratio counter using System Monitor. If this counter is 99% or higher, then most likely you have enough physical RAM in your SQL Server. If this counter is lower than 99%, and you are happy with your SQL Server's performance, then you probably have enough physical RAM in your SQL Server. But if you are not satisfied with your server's performance, and the Buffer Cache Hit Ratio is lower than 99%, then adding more RAM should result in better performance.

On the other hand, if the Buffer Cache Hit Ratio is above 99%, and you aren’t happy with your server’s performance, adding more RAM won’t help. Why? Because if the Buffer Cache Hit Ratio is that high, SQL Server already has all the RAM it needs to get its job done. If you add more RAM than it needs, SQL Server won’t use it.

In fact, I have seen many SQL Servers with lots of installed RAM, but little of it used, because their particular database and application didn’t need the RAM to run without a memory bottleneck. For example, one SQL Server I saw had 14 GB installed, but SQL Server only used a maximum of 140 MB of it, leaving the rest of the RAM unused. And at this level of memory use, its Buffer Cache Hit Ratio always exceeded 99%.

So what does all of this boil down to? If your Buffer Cache Hit Ratio is less than 99%, and performance is suffering, then seriously consider adding more RAM.
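The arithmetic behind the counter is simple: the percentage of page requests satisfied from the buffer cache without touching disk. A minimal sketch (the counter values below are made up for illustration):

```python
# Buffer cache hit ratio: pages found in cache as a share of all page
# requests. The sample numbers are invented for illustration only.

def buffer_cache_hit_ratio(pages_from_cache, total_page_requests):
    return 100.0 * pages_from_cache / total_page_requests

ratio = buffer_cache_hit_ratio(pages_from_cache=994_000,
                               total_page_requests=1_000_000)
print(round(ratio, 1))           # 99.4
consider_more_ram = ratio < 99.0  # the rule of thumb discussed above
print(consider_more_ram)         # False
```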

Disk Storage

After memory, disk storage is often the most important factor affecting SQL Server's performance. It is also a complicated topic. In this section, I will focus on the "easiest" areas where disk storage performance can be bolstered.

SAN vs. Local Storage

If you check out the SQL Server newsgroups, you often see questions about whether a SAN or local storage is the best option. As with most such questions, it depends. A properly configured SAN can be faster than locally attached storage, and offers other benefits, such as better redundancy, more flexible storage options, virtualization, and more. On the other hand, a poorly configured SAN can result in very poor I/O performance compared to properly configured locally attached storage.

Then there is the factor of cost. If cost is a priority, then a SAN is probably out of the question. But if performance, storage redundancy, and flexibility are more important than cost, then consider a SAN. The key to great SAN performance is correct configuration, which often requires a SAN expert to implement.


Total Amount of Available Drive Space on Server

While the performance effect isn't huge, it is important that all of your disk arrays have at least 20% free space. This is because NTFS (the disk format I assume you are using) needs extra space to work efficiently. If the space is not available, NTFS is not able to function at full capacity and I/O performance can degrade. A full disk also leads to more disk fragmentation, which causes the server to work harder to read and write data.

Take a look at each of the physical disks in your SQL Server, checking to see if there is at least 20% free space. If there isn't, then consider trying:

• Removing any unnecessary data from the disks (empty the recycle bin, remove temp files, remove setup files, etc.)
• Moving some of the data to disks with more space
• Adding more disk space
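The 20% rule is easy to script. This sketch uses made-up drive figures; on a live server you could feed it real numbers from Python's `shutil.disk_usage` or your monitoring tool.

```python
# Flag any drive below the 20% free space rule of thumb.
# Drive sizes here are invented example figures.

def free_space_ok(total_gb, free_gb, threshold=0.20):
    return free_gb / total_gb >= threshold

drives = {"C:": (136, 41), "E:": (800, 95)}  # name: (total GB, free GB)
for name, (total, free) in drives.items():
    status = "OK" if free_space_ok(total, free) else "needs attention"
    print(f"{name} {free / total:.0%} free - {status}")
```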

Total Number of Physical Drives in Each Array

A disk array generally refers to two or more physical disk drives working together as a single unit. For example, a RAID 5 array might have four physical drives in it. So why is it important to know how many physical drives are in the one or more arrays in your SQL Server?

With the exception of mirrored arrays (which are two physical drives working together), the more physical drives that are in an array, the faster reads and writes are for that array.

For example, let's say that I want to purchase a new SQL Server with a RAID 5 array and that I need at least 800 GB of available space. Let's also assume that the vendor has proposed two different array configurations:

5 – 200 GB drives (800 GB available)
9 – 100 GB drives (800 GB available)

Both of these options meet our criteria of providing at least 800 GB of RAID 5 disk space. But which array will provide better read and write performance? The answer is the second choice, the nine 100 GB drives. Why?

Generally speaking, the more disks that are in an array, the more disk heads there are available to read and write data. SCSI and fiber channel drives, for example, have the ability to read and write data simultaneously. So the more physical drives there are in an array, the faster data is read from or written to the array. Each drive in the array shares part of the workload, and the more, the better. There are some limits to this, depending on the disk controller, but generally, more is better.
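The spindle effect can be sketched numerically. The per-drive IOPS figure below is an assumption of mine for illustration; real numbers vary by drive, controller, and workload.

```python
# More spindles, more aggregate read throughput. 180 IOPS per drive is
# an assumed figure for a single fast SCSI drive, for illustration only.

PER_DRIVE_IOPS = 180

def array_read_iops(drive_count):
    # Any member drive can service a read, so read throughput scales
    # roughly with the number of spindles (up to controller limits).
    return drive_count * PER_DRIVE_IOPS

print(array_read_iops(5))  # 900  -- five 200 GB drives
print(array_read_iops(9))  # 1620 -- nine 100 GB drives: same space, faster
```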

So what does this mean to you? After you take a look at the number of arrays in your SQL Server, and the number of drives in each array, ask whether it is feasible to reconfigure your current arrays to take better advantage of the principle that more is better.

For example, let's say that your current server has two disk arrays used to store user databases. Each is a RAID 5 array with three 100 GB drives. In this case, it might be beneficial to reconfigure these two arrays into a single array of six 100 GB drives. Not only would this provide faster I/O, but it would also recover 100 GB of hard disk space, since a single RAID 5 array gives up only one drive's worth of capacity to parity instead of two.

Take a careful look at your current configuration. You may or may not be able to do much. But if you can, you will see the benefits of your changes as soon as you make them.

RAID Level of Array Used for SQL Server Databases

As you probably already know, there are various types of disk array configurations, called RAID levels. Each has its pros and cons. Here is a brief summary of the most commonly used RAID levels, and how each can best be used with your SQL Server:

RAID 1

The operating system and SQL Server executables, including the operating system's swap file, are often located on a RAID 1 array. Some people locate the swap file on its own RAID 1 array, but I doubt that this really offers much of a performance boost because paging, on a well-configured server, is not much of an issue. This option provides a good compromise between performance, fault tolerance, and cost.

If your SQL Server database(s) are very small, and all the databases can fit on a single disk drive, consider RAID 1 for the storing of all your SQL Server data files, especially if cost is an issue.

Another way to use RAID 1 arrays is to put each transaction log on its own RAID 1 array. This is because transaction logs are written to and read from sequentially, and by isolating them on their own array, sequential disk I/O won't be mixed with slower random disk I/O, and performance is boosted. If you can’t put each transaction log on its own RAID 1 array, you also have the option of putting all of your transaction logs on a single RAID 1 array. This is not ideal, but it is better than mixing your database and log files on the same array.


RAID 5

Although this is the most popular type of RAID storage, it is not the best option for optimum SQL Server I/O performance. If a database experiences more than 10% writes, and most OLTP databases do, write performance will suffer, hurting the overall I/O performance of SQL Server. RAID 5 is best used for read-only or mostly read-only databases. Testing at Microsoft has found that RAID 5 can be as much as 50% slower than RAID 10. On the other hand, RAID 10 is expensive, and your budget may only support a RAID 5 array. RAID 5 is also a poor choice for storing transaction logs, as transaction logs are very write intensive.


RAID 10

RAID 10 offers the best performance for SQL Server databases (read and write), although it is the most expensive RAID option. The more write intensive the database, the more important it is to use RAID 10. RAID 10 arrays are also a good option for transaction logs.
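One way to compare the cost side of these RAID levels is usable capacity. A small sketch, assuming identical drives (these are the standard textbook capacity formulas, not anything vendor-specific):

```python
# Usable capacity for the RAID levels discussed above, assuming
# identical drives. Standard textbook formulas, not vendor-specific.

def usable_gb(level, drives, drive_gb):
    if level == 1:                   # mirror: one drive's worth of space
        return drive_gb
    if level == 5:                   # one drive's capacity lost to parity
        return (drives - 1) * drive_gb
    if level == 10:                  # striped mirrors: half the raw space
        return drives // 2 * drive_gb
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_gb(5, 9, 100))   # 800 -- the nine-drive RAID 5 example earlier
print(usable_gb(10, 8, 100))  # 400 -- RAID 10 trades capacity for speed
```

The same eight 100 GB drives yield 700 GB in RAID 5 but only 400 GB in RAID 10, which is why RAID 10's superior write performance comes at a higher price per usable gigabyte.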


RAID Summary

Most likely, your current SQL Server configuration does not match the recommendations above. In some cases, you may be able to modify your current array configuration to come closer to what is recommended above, but in most cases, you will probably have to live with what you have until you get a new budget for a new server and array(s).

If you can only do one of the above recommendations, I would recommend that you move to RAID 10 over the other options. This option, above all others listed above, will give you the greatest overall boost in SQL Server I/O performance.

Hardware vs. Software RAID

RAID can be implemented through hardware or software (via the operating system). There is no debate on this topic: don't ever use software RAID, it is very slow. Always use hardware RAID.

Disk Fragmentation Level

If you create a new database on a brand new disk array, the database file and transaction log file created will each be one contiguous file. But if your database or transaction log grows in size over time (and what database and transaction log doesn't?), the files can become physically fragmented. File fragmentation, which scatters pieces of your files all over a disk array, causes the array to work harder to read and write data, hurting disk I/O performance.

As part of your performance audit, you need to find out how fragmented your SQL Server database and transaction log files are. If you are running Windows 2000 or 2003, you can use the built-in defragmentation utility to run a fragmentation analysis and see how badly the files are fragmented. Or you can use a third-party defragmentation tool.

If the analysis recommends that you defragment, you should. Unfortunately, defragmenting a SQL Server's database and transaction log files is not always an easy task. Open files, such as the database and transaction log files on a running SQL Server, cannot always be defragmented. For example, the built-in defragmentation utility cannot defrag SQL Server MDF and LDF files, while Diskeeper can in many, but not all, cases. This means that, in some cases, you may have to take SQL Server offline in order to defrag MDF and LDF files. And depending on how fragmented the files are, and their size, this could take many hours.

But do you really have much choice about defragmenting your SQL Server files? If your I/O performance is currently adequate, then you shouldn't bother defragmenting. But if your I/O performance is a bottleneck, then defragmenting is one inexpensive way of boosting performance, albeit a time consuming one in many cases.

One note about SANs. Some SANs include their own built-in defragmentation software. If this is the case for your SAN, then you won’t have to worry about defragmenting the data on it yourself.

Ideally, you should periodically physically defragment your SQL Server database and transaction log files. This way, you can ensure that you don't experience any I/O performance issues because of this very common problem.

Location of the Operating System

For best performance, operating system files should be on a disk array that does not include the SQL Server data files (MDFs or LDFs). In addition, they should be located on a disk array that supports RAID 1, 5, or 10.

Generally, I install, as most people do, the operating system on drive C: of the server. I usually configure drive C: as a RAID 1 mirrored drive for both fault tolerance and best overall performance.

In most cases, as long as you don't locate the operating system on the same array as SQL Server data files, you have great flexibility in placing operating system files on your server.


Location of Swap File

Assuming that your SQL Server is a dedicated SQL Server, and that SQL Server memory usage has been set to dynamic (the default), the swap file won't see a lot of activity, because SQL Server doesn't normally use it much. Because of this, it is not critical that the swap file be located in any particular location, except that you don't want to locate it on the same array as the SQL Server data files.

Generally, I place the swap file on the same array as the operating system and SQL Server executables, which, as I indicated earlier, is a disk array that supports RAID 1, RAID 5, or RAID 10. This is usually drive C:, which makes administration much easier.

If your SQL Server is a shared server, running applications other than SQL Server, and paging is an issue (due to the other applications), you might want to consider moving the swap file to its own dedicated array for better performance. But better yet, make SQL Server a dedicated server.


Location of SQL Server Executables

The location of the SQL Server executables (binaries), like the location of the operating system files, is not critical, as long as they are not located on the same array as the SQL Server data files. As with operating system files, I generally place SQL Server executables on drive C:, which is generally configured as a RAID 1 mirrored drive.




Location of the tempdb Database

If your tempdb database is heavily used, consider moving it to an array of its own, either RAID 1 or RAID 10, to boost disk I/O performance. Avoid RAID 5 arrays, as they can be slow at writing data, and heavy writes are a common side effect of using tempdb. This will help reduce overall I/O contention and boost performance.

If your application uses the tempdb database a lot, and causes it to grow larger than its default size, you may want to permanently increase the default size of the tempdb file to a size closer to what is actually used by your application on a day-to-day basis. This is because every time the SQL Server service (mssqlserver) is restarted, the tempdb file is recreated to the default size. While the tempdb file can grow as needed, it does take some resources to perform this task. By having the tempdb file at the correct size when SQL Server is restarted, you don't have to worry about the overhead of it growing during production.

In addition, heavy activity in the tempdb database can drag down your application's performance. This is especially true if you create one or more large temp tables and then query or join them. To help speed these queries, be sure the AUTOSTATS database option is turned on for tempdb, and then create one or more indexes on these temp tables that your queries can use. In many cases, you will find that this substantially speeds up your application. But like many performance tips, be sure to test this one to see if it actually helps in your particular situation.

Location of System Databases

The system databases (master, msdb, model) don't experience a lot of read and write activity, so locating them on the same array as your SQL Server data files is generally not a performance issue. The only exception might be for very large databases with hundreds or thousands of users. In this case, putting them on their own array can help boost overall I/O performance somewhat.

Location of User Databases

For best performance, user database files (MDFs) should be located on their own array (RAID 1, 5, or 10), separate from all other data files, including log files. If you have multiple large databases on the same SQL Server, consider locating each database's file(s) on its own array for less I/O contention.

Location of Log Files

Ideally, each log file should reside on its own separate array (RAID 1 or 10; RAID 5 will slow down transaction log writes more than you would like). The reason for this is that most of the time, transaction logs experience sequential writes, and if the array can write the data sequentially (not having to interrupt itself to perform other reads and writes), then sequential writes are very fast. But if the array has to randomly perform other reads and writes, sequential writes can't be maintained, and performance suffers.

Of course, having a separate array for each log file is expensive, and often can't be cost justified. At the very least though, locate all log files on an array (RAID 1 or RAID 10) other than the array used for database files. While sequential write performance won't be as good as if each log file had its own array, it is still much better than trying to contend for disk I/O with data files.


Location of Backup Files

All production databases need to be backed up. Your options include backing up directly to tape, to a local disk array, or to a remote storage device. Of these three options, the best performance is generally gained by backing up to a local disk array (or SAN). In addition, for best performance, the local array you use to store backups should be separate from the one where your user databases are stored, and should be RAID 1 or RAID 5, because database backups are a highly write-intensive procedure. Once your disk backups are made, you then have the option of moving them to a different location for greater fault tolerance.


Number of Disk Controllers in Server

A single disk controller, whether it is SCSI or fiber channel, has a maximum limit on its throughput. Because of this, you will want to match the number of disk controllers to the amount of data throughput you expect. As each controller is so different, I can't recommend specific solutions, other than to say that, at a very minimum, you will want two disk controllers. One controller should be used for non-hard-disk devices, such as the CD-ROM, backup devices, and so on, and the other controller for the hard disks. The goal is not to attach both slow and fast devices to the same controller.

Quite often, you see the following scenario, which is a good one. One controller is for non-hard disk devices, one controller is used for a RAID 1 local hard disk, and a third (and sometimes more) is used for arrays that hold SQL Server database files and logs. Be sure you don't attach more drives to a controller than it can handle.

Type of Disk Controllers in Server

Always purchase the fastest disk controller you can afford, assuming you want the best SQL Server performance. As you may know, different disk controllers have different performance characteristics. For example, there are different types of SCSI, such as Wide SCSI, Narrow SCSI, Ultra SCSI, and so on. The same is true, although to a lesser degree, of fiber channel connections.

Because of the wide variety of controllers, I can't recommend any specific ones. Generally, a hardware vendor will offer several models to choose from. Ask about the performance benefits of each one, and get the one that offers the best throughput.

Size of Cache in Disk Controllers in Server

Also, when you purchase a disk controller, consider how much disk cache it has. Some disk controllers allow you to add extra disk cache. Generally, you will want to purchase as much disk cache as your controller can hold. SQL Server is very I/O intensive, and anything we can do to boost I/O performance, like employing a large disk cache, will help out a lot. SANs often have huge disk caches.

Is Write Back Cache in Disk Controller On or Off?

The disk cache in your disk controller offers two ways to speed access: one for reads and one for writes. Of these, the most important use is for reads, as this is where most disk I/O time is spent in most SQL Server databases. A write back cache, on the other hand, is used to speed up writes, which usually occur less often, relatively speaking. Unfortunately, SQL Server in most cases assumes that write back cache is not on, and because of this, write back caching should be turned off on most controllers. If you don't turn it off, it is possible, under certain circumstances, to end up with corrupted data: once SQL Server writes data, it assumes the write succeeded, but for some reason (such as a loss of power) the write back cache may never actually write the data to disk.

While there are some controllers that offer battery backup to help prevent such issues, they don't always work as expected. Personally, I prefer non-corrupt data (written more slowly) to corrupt data (written much faster). In other words, I recommend turning write back caching off on your disk controller, even though you might suffer a very small write performance hit by doing so.

Speed of Disk Drives

The disk drives that come in your arrays can often be purchased with different speeds. As you might expect, for best performance, always purchase the fastest disks you can. Generally, this is 15,000 RPM or faster. In addition, don't mix and match drives of different speeds in the same array. If you do, performance will suffer.

Networking

How Many Network Cards Are in Your Server?

Fortunately, network traffic to and from a SQL Server is generally not a bottleneck, and a single network card is often more than adequate. But if you find that network traffic is a problem (you have hundreds or thousands of users) then moving to multiple network cards is justified, and can boost performance. In addition, two or more network cards can add to redundancy, helping to reduce downtime.

What is the Speed of the Network Cards in Server?

At the very minimum, your server should have 100 Mbps network cards. If one or more 100 Mbps cards don't offer enough throughput, then consider gigabit cards. In fact, you might want to skip 100 Mbps cards altogether and use only gigabit cards instead. Using a faster network card doesn't speed up network traffic; it only allows more traffic to get through, which in turn allows your server to work at its optimum performance.
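To see what card speed means in practice, consider moving a large backup across the wire. The 50 GB figure is an arbitrary example of mine, and the sketch uses raw line rate only; real throughput is lower due to protocol overhead.

```python
# Transfer time for a large file at the two card speeds discussed above.
# Line-rate arithmetic only; protocol overhead makes real transfers slower.

def transfer_seconds(gigabytes, megabits_per_second):
    bits = gigabytes * 8 * 1000**3          # decimal GB to bits
    return bits / (megabits_per_second * 1000**2)

print(transfer_seconds(50, 100) / 60)    # ~66.7 minutes at 100 Mbps
print(transfer_seconds(50, 1000) / 60)   # ~6.7 minutes at 1 Gbps
```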


Are the Network Cards Hard-Coded for Speed/Duplex?

If you have a 10/100 or 10/100/1000 card in a SQL Server that is supposed to auto-sense the network's speed and set itself accordingly, don't assume that it has worked correctly. It is fairly common for a network card to auto-sense incorrectly, setting a less-than-optimum speed or duplex setting, which can significantly hurt network performance. What you need to do is manually set the card's speed and duplex settings; this way you know for sure that they have been set correctly.


Are the Network Cards Attached to a Switch?

This may be obvious in a large data center, but in smaller organizations, a hub may still be in use to connect servers. If so, seriously consider replacing the hub with an appropriate switch, and configure the switch to communicate at its highest possible performance, such as 100 Mbps and full duplex. Moving from a hub to a switch can make a dramatic difference in network performance.

Miscellaneous

Are All the Hardware Drivers Up-to-Date?

Admittedly, this is a boring topic, but it is more important than you might think. One of the biggest performance hogs (not to mention causes of strange and unusual problems) is buggy drivers, whether they are found in disk controllers, network cards, or elsewhere. By using the latest drivers, the odds are that you will be getting a better, faster-performing driver, allowing SQL Server to perform at its best.

You should regularly check to see if newer drivers are available for your hardware, and install them when you have downtime. I have personally seen radical performance differences from replacing an old, buggy driver with a new one that has been thoroughly debugged and tuned.

Is this Physical Server Dedicated to SQL Server?

I have alluded to this before, but I can't say it too often. SQL Server should run on a dedicated physical server, not shared with other application software. When you share SQL Server with other software, you force SQL Server to fight over physical resources, and you make it much more difficult to tune your server for optimum SQL Server performance. Time and time again, when I get questions about poor SQL Server performance, I find out that the culprit responsible is another application running on the same server. You just have to learn to say NO.


Now What?

This has been a long journey so far, but we still have a long way to go. When I first evaluate a SQL Server for performance, and perform a performance audit, I take detailed notes about all of the topics discussed above. I then compare how the server is configured to the ideal configuration, and then look for easy ways to move closer to the ideal configuration. Sometimes this is easy (obvious, easy to correct mistakes have been made), and other times, there is not too much you can do. But you won't know this if you don't perform the audit.

Your goal should be to perform the part of the performance audit, as described on this page, for each of your SQL Servers, and then use this information to make corrections, if you can. If you can't, then you can use this information as ammunition for getting new and better hardware.

Once you have completed this part of the performance audit, you are now ready to audit the operating system for potential performance improvements.
