Blocked Process Threshold
This option is new to SQL Server 2005.
The blocked process threshold option is used to specify the threshold, in seconds, when blocked process reports are generated. By default, no blocked process reports are produced.
You can define an alert to be executed when this event is generated. For example, you can choose to notify a DBA to take appropriate action to handle a blocking situation.
The blocked process threshold option uses the deadlock monitor background thread to traverse the list of tasks that have been waiting longer than the configured threshold (or a multiple of it). The event is generated once per reporting interval for each blocked task.
The blocked process report is done on a best effort basis. There is no real-time reporting. If your server is experiencing a lot of blocking, consider turning on this option to help you identify and troubleshoot blocking issues.
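For example, here is a minimal sketch of turning the option on with sp_configure, assuming a 20-second threshold suits your environment (it is an advanced option, so "show advanced options" must be enabled first; the later examples on this page assume it is already on):

-- Enable advanced options so the setting is visible to sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Generate a blocked process report for any task blocked 20 seconds or more
EXEC sp_configure 'blocked process threshold', 20;
RECONFIGURE;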
Cost Threshold for Parallelism
Using parallelism to execute a SQL Server query has its costs, because it takes some additional overhead to run a single query in parallel on multiple CPUs rather than serially on a single CPU. But if the benefits of running a query in parallel outweigh the additional overhead (parallel execution can often result in fewer overall resources being used for long-running queries), then using parallelism is a good thing.
As a rule of thumb, if a query can run serially very fast, there is no point in even considering parallelism for the query, as the extra time required to evaluate it for possible parallelism might be longer than the time it takes to run the query serially.
By default, if the Query Optimizer determines that the estimated cost of a query is less than 5, parallelism is not considered by SQL Server. This figure is nominally in seconds (it represents the estimated elapsed time to run the serial plan on a reference hardware configuration) and can be changed using the "cost threshold for parallelism" SQL Server option to any value from 0 to 32767. So if you set this value to 10, the Query Optimizer won't consider parallelism for any query whose estimated cost is less than 10.
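As a sketch, raising the threshold to 10 would look like this (10 is just the example figure from above, not a recommendation):

-- Only consider parallel plans for queries with an estimated cost above 10
EXEC sp_configure 'cost threshold for parallelism', 10;
RECONFIGURE;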
In most cases, you should not change this setting. But if you find that your SQL Server runs many queries with parallelism, and if CPU usage is very high, raising this setting from 5 to a higher figure (you will have to experiment to find the ideal figure for your situation) will reduce the number of queries using parallelism, potentially reducing the overall usage of your server's CPUs and helping the overall performance of your server. It could also hurt the performance of your server.
Another option to consider is to reduce the value from 5 seconds to a smaller number, although this could hurt, rather than help performance in many cases. One area where a smaller value might be useful is in cases where SQL Server is acting as a data warehouse and many very complex queries are being run. A lower value will allow the Query Optimizer to use parallelism more often, which can help in some situations.
You will want to test any change to the default value thoroughly before implementing it on your production servers.
If SQL Server only has access to a single CPU (either because there is only one CPU in the server, or because of an "affinity mask" setting), parallelism is not considered for a query.
If you find in your audit that the cost threshold for parallelism has been changed from the default value, find out why. If you can't get an answer, move it back to the default value.
Cursor Threshold
If your SQL Server does not use cursors as part of the T-SQL code it is running, or uses them very little, then this setting should never be changed from its default value of "-1".
A "cursor threshold" of "-1" tells SQL Server to execute all cursors synchronously, which is the ideal setting if the result sets of cursors executed on your server are not large. But if many, or all of the cursors running on your SQL Server produce very large result sets, then executing cursors synchronously is not the most efficient way to execute a cursor.
The "cursor threshold" setting has two other settings (besides the default) for running large cursors. A setting of "0" tells SQL Server to run all cursors asynchronously, which is more efficient if most or all of the cursor's result sets are large.
What if some of the cursor result sets are small and some are large? In this case, you can decide what counts as large and small, and then use that number as the cutoff point for SQL Server. For example, let's say that we consider any cursor result set of under 1,000 rows as small, and any cursor result set of over 1,000 rows as large. If this is the case, we can set the "cursor threshold" to 1000.
When the "cursor threshold" is set to 1000, what happens is that if the Query Optimizer predicts that the result set will be less than 1000, then the cursor will be run synchronously. And if the Query Optimizer predicts that the result set will be more than 1000, then the cursor will be run asynchronously.
In many ways, this option provides the best of both worlds. The only problem is predicting the ideal "cursor threshold" value. To determine this, you will need to test. But as you might expect, the default value is often the best, and you should only change this option if you know for sure that your application uses very large cursors, and you have tested the change and verified that it helps, not hurts, performance.
As a part of your audit, you may also want to investigate how often cursors are used, and how large the result sets are. Only by knowing this will you know what the best setting is for your server. Of course, you could always try to eliminate the use of cursors on the server. This way, the setting can remain at the default value, and you don't have to worry about the overhead of cursors.
Default Trace Enabled
By default, SQL Server 2005 runs a Profiler trace on an ongoing basis. This information can be very handy when troubleshooting some types of problems. The default trace log is stored in the \MSSQL\LOG directory using a rollover trace file, named log.trc.
For most servers, you will want to leave this option on because of the benefits it provides when troubleshooting. On the other hand, this option creates additional overhead, which might be noticed on a very busy server. Only consider turning this option off if your server is maxed out on CPU and I/O resources.
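To see what the default trace is capturing, you can query it directly. A sketch (the file path below is hypothetical; sys.traces will show you the actual path on your server):

-- List the active traces; the default trace is normally trace id 1
SELECT id, path, max_size, max_files FROM sys.traces;
-- Read events from the default trace file (substitute your actual path)
SELECT TOP 100 *
FROM sys.fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc', DEFAULT);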
Fill Factor (%)
This option allows you to change the default fill factor for indexes when they are built. By default, the fill factor setting is "0". A setting of "0" is somewhat confusing, because it means that leaf index pages are filled 100% (not 0%), while intermediate (non-leaf) index pages have some space left in them (they are not filled up 100%). Legal settings for the fill factor range from 0 through 100.
The default fill factor only comes into play when you build or rebuild indexes without specifying a specific fill factor. If you do specify a fill factor when you create a new index, that value is used, not the default fill factor.
In most cases, it is best to leave the default fill factor alone, and if you want a value other than the default fill factor, then specify it when you create an index. In fact, this is a best practice.
As a part of your audit, note if the fill factor is some figure other than the default value of "0". If it is, try to find out why. And if you can't find out why the default value was changed, or there is not a good reason, switch it back to the default value. Also, if the value has been changed, keep in mind that any indexes created after the default value was changed may be using this default fill factor value. If so, you may need to reevaluate these indexes to see if the fill factor used to create them is appropriate.
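Per the best practice above, specify the fill factor on the index itself rather than changing the server-wide default. A sketch (the table and index names are hypothetical):

-- Leave 10% free space in the leaf pages to reduce page splits on this index
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (FILLFACTOR = 90);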
Index Create Memory (KB)
The Index Create Memory option controls the maximum amount of memory initially allocated for creating an index. If more memory is needed later for the index creation, and the memory is available, SQL Server will use it, exceeding the setting of this option. If additional memory is not available, index creation will use the memory initially allocated.
The default value of "0" tells SQL Server to automatically determine the ideal value for this setting. In most cases, SQL Server will configure the amount of memory optimally.
But in some unusual cases, especially with very large tables, it is possible for SQL Server to make a mistake, causing large indexes to be created very slowly, or not at all. If you run into this situation, you may want to consider setting the Index Create Memory value yourself, although you will have to experiment to find the optimum setting for your situation. Legal settings for this option run from 704 to 2147483647 (KB).
Keep in mind that if you do change the setting, this memory is allocated for index creation and will not be available for other uses. If your server has more than enough RAM, this will be no problem. But if your server is short on RAM, changing this setting could negatively affect the performance of other aspects of SQL Server.
As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. If you can't find out why, or if there is not a good reason, change it back to the default value.
Lightweight Pooling
SQL Server 2005, by default, runs in what is called "thread mode." What this means is that SQL Server uses what are called UMS (User Mode Scheduler) threads to run user processes. SQL Server creates one UMS scheduler per processor, with each one taking turns running the many user processes found on a busy SQL Server. For optimum efficiency, the UMS attempts to balance the number of user processes run by each scheduler, which in effect evenly balances all of the user processes over all the CPUs in the server.
SQL Server also has an optional mode it can run in, called fiber mode. In this case, SQL Server uses one thread per processor (like thread mode), but the difference is that multiple fibers are run within each thread. Fibers are used to assume the identity of the thread they are executing and are non-preemptive to other SQL Server threads running on the server. Think of a fiber as a "lightweight thread," which, under certain circumstances, takes less overhead than standard UMS threads to manage. Fiber mode is turned on and off using the "lightweight pooling" SQL Server configuration option. The default value is "0", which means that fiber mode is turned off.
So what does all this mean? Like everything, there are pros and cons to running in one mode over another. Generally speaking, fiber mode is only beneficial when all of the following conditions exist:
• Two or more CPUs are found on the server (the more the CPUs, the larger the benefit).
• All of the CPUS are running near maximum (90-100%) most of the time.
• There is a lot of context switching occurring on the server (as reported by the Performance Monitor System object's Context Switches/sec counter). Generally speaking, more than 5,000 context switches per second is considered high.
• The server is making little or no use of distributed queries or extended stored procedures.
If all the above are true, then turning on the "lightweight pooling" option in SQL Server may yield a 5% or greater boost in performance.
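If all four conditions hold and testing bears it out, turning the option on looks like this (a sketch; note that the change does not take effect until SQL Server is restarted):

-- Switch SQL Server to fiber mode (takes effect after a restart)
EXEC sp_configure 'lightweight pooling', 1;
RECONFIGURE;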
But if these four circumstances are not all true, then turning on "lightweight pooling" could actually degrade performance. For example, if your server makes use of many distributed queries or extended stored procedures, then turning on "lightweight pooling" will definitely cause a problem, because these features cannot make use of fibers, which means that SQL Server will have to switch back and forth between fiber mode and thread mode as needed, hurting SQL Server's performance.
As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. In addition, check to see if the four conditions above exist. If they do, then turning "lightweight pooling" on may be beneficial. If these four conditions do not exist, then use the default value of "0".
Locks
Each time SQL Server locks a record, the lock must be stored in memory. By default, the value for the "locks" option is "0", which means that lock memory is dynamically managed by SQL Server. Internally, SQL Server can reserve up to 60% of available memory for locks. In addition, if SQL Server determines that allocating memory for locking could cause paging at the operating system level, it will not allocate the memory to locks, instead giving it up to the operating system in order to prevent paging.
In almost all cases, you should allow SQL Server to dynamically manage locks, leaving the default value as is. If you enter your own value for lock memory (legal values are from 5000 to 2147483647 KB), then SQL Server cannot dynamically manage this portion of memory. In addition, no more memory than what you have specified can be used for locking, which may cause lock memory to run out under some circumstances.
If you get an error message that says you have exceeded the maximum number of locks available, you have these options:
• Closely examine your queries to see if they are causing excessive locking. If they are, it is possible that performance is also being hurt because of a lack of concurrency in your application. It is better to fix bad queries than it is to allocate additional memory to tracking locks.
• Reduce the number of applications running on the server.
• Add more RAM to your server.
• Boost the number of locks to a higher value (based on trial and error). This is the least desirable option as giving memory to locks prevents it from being used by SQL Server for other purposes, as needed.
Do your best to resist using this option. If you find in your audit that this setting is some value other than the default, find out why. If you can't find out why, or if the reason is poor, change it back to the default value.
Max Degree of Parallelism
This option allows you to specify if parallelism is turned on, turned off, or only turned on for some CPUs, but not for all CPUs in your server. Parallelism refers to the ability of the Query Optimizer to use more than a single CPU to execute a single query. By default, parallelism is turned on and can use as many CPUs as there are in the server (unless this has been reduced due to the affinity mask option). If your server has only one CPU, the "max degree of parallelism" value is ignored.
The default for this option is "0", which means that parallelism is turned on for all available CPUs. If you change this setting to "1", then parallelism is turned off for all CPUs. This option allows you to specify how many CPUs can be used for parallelism. For example, if your server has 8 CPUs and you only want parallelism to use up to 4 of them, you can specify a value of 4 for this option.
If parallelism is turned on, as it is by default if you have multiple CPUs, then the query optimizer will evaluate each query for the possibility of using parallelism, which takes a little overhead. On many OLTP servers, the nature of the queries being run often doesn't lend itself to parallel execution; examples include standard SELECT, INSERT, UPDATE, and DELETE statements. Because of this, the query optimizer wastes its time evaluating each query to see if it can take advantage of parallelism. If you know that your queries will probably never benefit from parallelism, you can save a little overhead by turning this feature off, so queries aren't evaluated for it. This is subject to the Cost Threshold for Parallelism option previously discussed.
Of course, if the nature of the queries that are run on your SQL Server can take advantage of parallelism, you will not want to turn parallelism off. For example, if your OLTP server runs many correlated subqueries, or other complex queries, then you will probably want to leave parallelism on. You will want to test this setting to see if making this particular change will help, or hurt, your SQL Server's performance in your unique operating environment.
In most cases, because most servers run both OLTP and OLAP queries, parallelism should be kept on. As part of your performance audit, if you find parallelism turned off, or if it is restricted, find out why. You will also want to determine if the server is virtually all OLTP-oriented. If so, then turning off parallelism might be justified, although you will want to thoroughly test this to see if it helps or hurts overall SQL Server performance. But if the server runs mixed OLTP and OLAP, or mostly OLAP queries, then parallelism should generally be on for best overall performance.
There is one more complication to this setting. In some cases, certain queries, even long-running ones, just don't run efficiently when using parallelism. In fact, they can take much more time to run in parallel than they do serially. If you discover any such queries, you can use the MAXDOP query hint to turn off parallelism for those problem queries, while still keeping parallelism turned on for all other queries.
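The hint goes at the end of the problem query, as in this sketch (the table and column names are hypothetical AdventureWorks-style examples):

-- Force this one query to run serially, leaving the server-wide setting alone
SELECT CustomerID, SUM(TotalDue) AS TotalSales
FROM Sales.SalesOrderHeader
GROUP BY CustomerID
OPTION (MAXDOP 1);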
Max Server Memory (MB) & Min Server Memory (MB)
For best SQL Server performance, you want to dedicate your SQL Servers to only running SQL Server, not other applications. And in most cases, the settings for the "maximum server memory" and the "minimum server memory" should be left to their default values. This is because the default values allow SQL Server to dynamically allocate memory in the server for the best overall optimum performance. If you "hard code" a minimum or maximum memory setting, you risk hurting SQL Server's performance.
On the other hand, if SQL Server cannot be dedicated to its own physical server (other applications run on the same physical server along with SQL Server) you might want to consider changing either the minimum or maximum memory values, although this is generally not required.
Let's take a closer look at each of these two settings.
The "maximum server memory" setting, when set to the default value of 2147483647 (in MB), tells SQL Server to manage the use of memory dynamically, and if it needs it, to use as much RAM as is available (while leaving some memory for the operating system).
If you don't want SQL Server to use all of the available RAM in the server, you can manually set the maximum amount of memory SQL Server can use by specifying a number between 4 (the lowest value you can enter) and the maximum amount of RAM in your server (but don't allocate all the RAM to SQL Server, as the operating system needs some RAM too).
Only in cases when SQL Server has to share memory with other applications on the same server, or when you want to artificially keep SQL Server from using all of the RAM available to it, would you want to change the default value. For example, if your "other" application(s) are more important than SQL Server's performance, then you can restrain SQL Server's performance if you want by restricting how much RAM it can use.
There are also two potential performance issues you can create if you set the "maximum server memory" setting manually. First, if you allocate too much memory to SQL Server, and not enough for other applications or the operating system, the operating system may have no choice but to begin excessive paging, which will slow the performance of your server. Second, if you are using the Full-Text Search service, you must also leave plenty of memory for its use. Its memory is not dynamically allocated like the rest of SQL Server's memory, and there must be enough available memory for it to run properly.
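On a shared server where you do decide to cap SQL Server, the change itself is simple, as in this sketch (6144 MB is a hypothetical figure; size it for your own mix of applications):

-- Cap SQL Server at 6 GB, leaving the rest for the OS and other applications
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;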
The "min server memory" setting, when set to the default value of 0 (in MB), tells SQL Server to manage the use of memory dynamically. This means that SQL Server will start allocating memory as is needed, and the minimum amount of RAM used can vary as SQL Server's needs vary.
If you change the "min server memory" setting to a value other than the default value of 0, what this means is not that SQL Server will automatically begin using this amount of minimum memory automatically, as many people assume, but that once the minimum amount is reached (because it is needed) that the minimum amount specified will never go down below the specified minimum.
For example, if you specify a minimum value of 100 MB and then restart SQL Server, SQL Server will not immediately claim 100 MB of RAM. Instead, it will only take as much as it needs. If it never needs 100 MB, the minimum never comes into play. But once SQL Server's memory usage does exceed 100 MB, that 100 MB becomes the floor below which SQL Server's memory allocation will not drop, even if the memory is no longer needed. Because of this behavior, there is little reason to change the "min server memory" setting to any value other than its default.
If your SQL Server is dedicated, there is no reason to use the "min server memory" setting at all. If you are running other applications on the same server as SQL Server, there might be a very small benefit of changing this setting to a minimum figure, but it would be hard to determine what this value should be, and the overall performance benefit would be negligible.
If you find in your audit that these settings are some other value other than the default, find out why. If you can't find out why, or if the reason is poor, change them back to their default values.
Max Text Repl Size
The "max text repl size" setting is used to specify the maximum size of text or image data that can be inserted into a replicated column in a single physical INSERT, UPDATE, WRITETEXT, or UPDATETEXT transaction. If you don't use replication, or if you don't replicate text or image data, then this setting should not be changed.
The default value is 65536, the minimum value is 0, and the maximum value is 2147483647 (in bytes). If you do heavy replication of text or image data, you might want to consider increasing this value only if the size of this data exceeds 64K. But as with most of these settings, you will have to experiment with various values to see what works best for your particular circumstances.
As part of your audit, if you don't use replication, then the only correct value here is the default value. If the default value has been changed, you need to investigate if text or image data is being replicated. If not, or if the replicated data is less than 64K, then change it back to the default value.
Max Worker Threads
The "max worker threads" SQL Server configuration setting is used to determine how many worker threads are made available to the sqlservr.exe process from the operating system. Generally speaking, one worker thread is assigned to each connection to SQL Server. This includes both system and user threads. If the number of actual connections exceeds the amount of worker threads assigned by SQL Server, then thread pooling begins, which means that connections may share threads. While thread sharing does save on memory, it can hurt the overall performance of SQL Server.
In SQL Server 2005, the default value is 0 (in SQL Server 2000, it was 255). A setting of 0 means that SQL Server will determine the maximum number of worker threads based on the following schedule:
Number of CPUs       Maximum worker threads (32-bit)
<= 4 processors      256
8 processors         288
16 processors        352
32 processors        480
If the number of connections exceeds the number of threads determined by SQL Server (see the table above), and if the server is not memory bound, consider changing the default value of 0 to a number slightly larger than the maximum number of simultaneous connections you expect SQL Server to service. This way, thread sharing is not performed, and the overall performance of the server is boosted. Of course, if your server is currently under memory pressure, you don't want to change the default setting for this option unless you are able to add more RAM to the server.
But if you don't have any extra RAM available, then adding more worker threads can hurt SQL Server's performance. In this case, allowing SQL Server to use thread pooling offers better performance (in the form of a compromise), because thread pooling uses fewer resources than not using it. On the downside, thread pooling can introduce resource contention between connections. For example, two connections sharing a thread can conflict when both connections want to perform some task at the exact same time (which can't be done, because a single thread can only service a single connection at a time).
As you might expect, before using this setting in production, you will want to test your server's performance before and after the change to see if SQL Server benefited, or was hurt, from the change.
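Two quick queries can tell you whether connections are actually outrunning the worker thread ceiling before you change anything (a sketch using SQL Server 2005 dynamic management views):

-- How many connections is the server currently servicing?
SELECT COUNT(*) AS current_connections FROM sys.dm_exec_connections;
-- What worker thread ceiling is SQL Server actually using?
SELECT max_workers_count FROM sys.dm_os_sys_info;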
Min Memory Per Query
When a query runs, SQL Server does its best to allocate the optimum amount of memory for it to run efficiently and quickly. By default, the "minimum memory per query" setting allocates 1024 KB, as a minimum, for each query to run. The "minimum memory per query" setting can be set from 0 to 2147483647 KB.
If a query needs more memory to run efficiently, and if it is available, then SQL Server automatically assigns more memory to the query. Because of this, changing the value of the "minimum memory per query" default setting is generally not advised.
In some cases, if your SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted if you increase the "minimum memory per query" setting to a higher value, such as 2048 KB, or perhaps a little higher. As long as there is "excess" memory available in the server (essentially, RAM that is not being used by SQL Server), then boosting this setting can help overall SQL Server performance. But if there is no excess memory available, increasing the amount of memory for this setting is more likely to hurt overall performance, not help it.
Nested Triggers
This configuration option does affect performance, but not in the conventional way. The "nested triggers" option is set by default to "1", which means that nested triggers (triggers that cascade, up to a maximum depth of 32) can be run. If you change this setting to "0", then nested triggers are not permitted. Obviously, by not allowing nested triggers, overall performance can be improved, but at the cost of application flexibility.
This setting should be left to its default value, unless you want to prevent developers from using nested triggers. Also, some third-party applications could fail if you turn off nested triggers, assuming they depend on them.
Network Packet Size (B)
"Network packet size" determines the size of the packet size SQL Server uses when it talks to clients over a network. The default value is 4096 bytes, with a legal range from a minimum of 512 bytes, to a maximum value which is based on the maximum packet size that the network protocol you are using supports.
In theory, by changing this value, performance can be boosted if the size of the packet more or less matches the size of the data in the packet. For example, if the data moved over the wire is small, less than 512 bytes on average, changing the default value of 4096 bytes to 512 bytes can boost performance. Or, if you are doing a lot of data movement, such as with bulk loads, or if you deal with a lot of TEXT or IMAGE data, then increasing the default packet size to a number larger than 4096 bytes means it will take fewer packets to send the data, resulting in less overhead and better performance.
In theory, this sounds great. In reality, you will see little, if any, performance boost. This is because there is no such thing as an average data size. In some cases data is small, and in other cases data is very large. Because of this, changing the default value of the "network packet size" is generally not very useful.
As a part of your audit, carefully question any value for this setting other than the default. If you can't get a good answer, change it back.
Open Objects
This option is no longer available in SQL Server 2005, although it has been retained for backward compatibility with older scripts. In SQL Server 2005, the number of open database objects is managed dynamically and is limited only by available memory.
Priority Boost
By default, SQL Server processes run at the same priority as any other applications on a server. In other words, no single application process has a higher priority than another when it comes to getting and receiving CPU cycles.
The "priority boost" configuration option allows you to change this. The default value for this option is "0", means that the priority of SQL Server processes is the same as all other application processes. If you change it to "1", then SQL Server now has a higher priority than other application processes. In essence, this means that SQL Server has first priority to CPU cycles over other application processes running on the same server. But does this really boost performance of SQL Server?
Let's look at a couple of scenarios. First, let's assume a server runs not only SQL Server, but other apps (not recommended for best performance, but a real-world possibility), and that there is plenty of CPU power available. If this is the case, and you give SQL Server a priority boost, what happens? Not much. If there is plenty of CPU power available, a priority boost doesn't mean much. Sure, SQL Server might gain a few milliseconds here and there as compared to the other applications, but I doubt you would be able to notice the difference.
Now let's look at a similar scenario as above, but let's assume that CPU power is virtually all exhausted. If this is the case, and SQL Server is given a priority boost, sure, SQL Server will now get its work done faster, but only at the cost of slowing down the other applications. If this is what you want, OK. But a better solution would be to boost CPU power on the server, or reduce the server's load.
But what if SQL Server is running on a dedicated server with no other applications and if there is plenty of excess CPU power available? In this case, boosting the priority will not gain a thing, as there is nothing competing (other than part of the operating system) for CPU cycles, and besides, there are plenty of extra cycles to go around.
And last of all, if SQL Server is on a dedicated server, and the CPU is maxed out, giving it a priority boost is a zero sum game as parts of the operating system could potentially be negatively affected if you do. And the gain, if any, will be very little for SQL Server.
As you can see, this option is not worth the effort. In fact, Microsoft has documented several problems related to using this option, which makes this option even less desirable to try.
If you find this option turned on in your audit, question its purpose. If you currently are not having any problems with it on, you can probably leave it on without issues. But I would recommend setting it back to its default.
Query Governor Cost Limit
The "query governor cost limit" option allows you to limit the maximum length of time a query can run. For example, let's say that some of the users of your server like to run very long-running queries that really hurt the performance of your server. By setting this option, you could prevent them from running any queries that exceeded, say 300 seconds (or whatever number you pick). The default value for this setting is "0", which means that there are no limits to how long a query can run.
The value you set for this option is approximate, and is based on how long the Query Optimizer estimates the query will run. If the estimate is more than the time you have specified, the query won't run at all, producing an error instead. This can save a lot of valuable server resources.
On the other hand, users can get really unhappy with you if they can't run the queries they need to run in order to do their jobs. What you might consider doing is helping those users write more efficient queries. That way, everyone will be happy.
Unlike most of my other suggestions, if your audit turns up a value here other than "0", great. As long as users aren't complaining, this is a good deal. In fact, if this setting is set to "0", consider adding a value here and see what happens. Just don't make it too small. You might consider starting with a value of about 600 seconds; if that is OK, then try 500 seconds, and so on, until users start complaining, at which point you can back off.
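The setting can be applied server-wide with sp_configure, or overridden per session with a SET statement, as in this sketch (600 and 300 are the experimental starting figures discussed above):

-- Refuse to run any query the optimizer estimates will take over 600 seconds
EXEC sp_configure 'query governor cost limit', 600;
RECONFIGURE;
-- Or set a different limit for the current session only
SET QUERY_GOVERNOR_COST_LIMIT 300;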
Query Wait (s)
If SQL Server is very busy and is hurting for more memory resources, it will queue what it considers memory-intensive queries (those that use sorting or hashing) until there is enough memory available to run them. In some cases, there just isn't enough memory to run them and they eventually time out, producing an error message. By default, a query will time out after a period of time equal to 25 times the estimated amount of time the Query Optimizer thinks it will take for the query to run.
The best solution for such a problem is to add more memory to the server, or to reduce its load. But if that can't be done, one option, although fraught with problems of its own, is to use the "query wait" configuration option. The default setting for this option is "-1", which waits the time period described above, and then causes the query to time out. If you want the time out period to be greater so that queries won't time out, you can set the "query wait" time to a large enough number. As you might guess, you will have to determine this time out number yourself through trial and error.
The problem with using this option is that a transaction with an intensive query may be holding locks, which can cause deadlocks or other locking contention problems, and in the end these may be a bigger problem than the query timing out. Because of this, changing this option is not recommended.
If you find a non-default value in your audit, find out why. If there is no good reason to keep it, change it back to the default value. But, if someone has thought this out thoroughly, and if you cannot detect any locking issues, then consider leaving this option as is.
Recovery Interval (min)
If you have a very active OLTP server application with many INSERTS, UPDATES, and DELETES, it is possible that the default "recovery interval" of 0 (which means that SQL Server automatically determines the appropriate recovery interval) may not be appropriate. If you are watching the performance of your server with the Performance Monitor and notice that you have regular periods of 100% disk write activity (occurring during the checkpoint process), you may want to set the "recovery interval" to a higher number, such as 5 or 10. This figure refers to the maximum number of minutes it will take SQL Server to perform a recovery after it is restarted. The default figure of 0, in effect, works out to be about a maximum recovery period of 1 minute.
Another potential reason to use this "recovery interval" option is if the server is devoted to OLAP or a data warehouse. In these instances, these mostly read-only databases don't generally benefit from a short recovery interval.
If your server does not match any of the above suggestions, then leaving the default value is generally the best choice.
By extending the checkpoint interval, you reduce the number of times SQL Server performs a checkpoint and, in effect, reduce some of SQL Server's overhead. You may need to experiment with this figure in order to find the ideal compromise between performance and the time it takes for SQL Server to perform a recovery.
Ideally, you want to keep this number as small as possible in order to reduce the amount of time it takes the mssqlserver service to restart. This is because each time the mssqlserver service starts, it goes through an automatic recovery process, and the larger the "recovery interval" is set, the longer the recovery process can take. You must decide what compromise between performance and recovery time best fits your needs.
As a part of your audit, you will want to evaluate the current setting for "recovery interval" in regards to its potential use. For busy OLTP servers, you will want to do a lot of research before you decide to increase the "recovery interval" to see if it will help or not. Testing is important. But if your server is a dedicated OLAP or data warehouse server, increasing the "recovery interval" is an easier decision to make.
Scan for Startup Procs
SQL Server has the ability, if properly configured, to look for stored procedures to run automatically when the mssqlserver service starts. This can be handy if you want a particular action to occur on startup, such as the loading of a specific stored procedure into cache so that it is already there when users begin accessing the server.
By default, the "scan for startup procs" is set to "0", which means that a scan for stored procedures is not done at startup. If you don't have any startup stored procedures, then this is the obvious setting. There is no point spending resources looking for stored procedures that don't exist.
But if you do have one or more stored procedures you want to execute on server startup, then this option has to be set to "1", which turns on the startup scan.
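Rather than setting the option directly, you would normally mark the procedure itself with sp_procoption, which should set "scan for startup procs" to "1" for you (a sketch; the procedure name is hypothetical, and a startup procedure must live in the master database and take no parameters):

-- Run this (hypothetical) procedure every time the SQL Server service starts
EXEC sp_procoption @ProcName = N'dbo.usp_WarmCache',
                   @OptionName = 'startup',
                   @OptionValue = 'true';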
If you find in your audit that this is set to "1", check to see if there are any start-up stored procedures. If not, then return this option back to the default setting.
Set Working Set Size
This option has been removed from SQL Server 2005, and has only been retained for compatibility with existing scripts.
User Connections
By default, SQL Server only allocates as many user connections as it needs. This allows those who need to connect to connect, while at the same time minimizing the amount of memory used. When the "user connections" setting is set to its default value of "0", user connections are dynamically set. Under virtually all circumstances, this is the ideal setting.
If you change the default value for "user connections," what you are telling SQL Server to do is to allocate only the number of user connections you have specified, no more or no less. Also, it will allocate memory for every user connection specified, whether or not it is being used. Because of these problems, and because SQL Server can perform this task automatically and efficiently, there is no reason to change this setting from the default.
If your audit shows a value other than "0", change it back to zero. Don't even bother asking why.
Now What?
Your goal should be to perform this part of the performance audit, described on this page, for each of your SQL Servers, and then use this information to make changes as appropriate, assuming you can.
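A quick way to gather the raw material for this audit is to dump every configuration value in one query and compare it against the defaults discussed above (a sketch):

-- List all server configuration options with their configured and running values
SELECT name, value, value_in_use, description
FROM sys.configurations
ORDER BY name;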
Once you have completed this part of the performance audit, you are now ready to audit your SQL Server database configurations.