MSDB log file max size reset after SQL Patches

I have encountered this several times already after different Cumulative Updates/Service Packs…

We set a fixed maximum size for SQL Server database files and update the max size when required. But after installing Service Packs and Cumulative Updates, the maximum size is reset to 2,097,152 MB for the MSDB log file:
max size - msdb log

I am going to submit this issue to Microsoft Connect (hopefully it is still up and active) or check if somebody else has had similar issues. As this happens only with one system database and only with one file, I suspect that this is a bug.

This is just something to watch out for and remember to review after SQL Server patching if you use fixed maximum file sizes.
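A quick way to review (and, if needed, restore) the msdb log file's maximum size after patching is a short T-SQL check. The 2 GB value and the logical file name MSDBLog below are assumptions based on a default msdb configuration; adjust them to your own standard.

```sql
-- Check the current max size of the msdb log file
-- (max_size is stored in 8 KB pages; -1 means "unlimited")
SELECT name, max_size, max_size * 8 / 1024 AS max_size_mb
FROM msdb.sys.database_files
WHERE type_desc = 'LOG';

-- Reset the maximum size back to your standard (example: 2 GB).
-- MSDBLog is the default logical name; verify it with the query above.
ALTER DATABASE msdb
MODIFY FILE (NAME = MSDBLog, MAXSIZE = 2048MB);
```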


MSSQL Tips: Tip #17 – Using Policy Based Management to check that SQL Server indexes are created in the correct filegroup

Another tip about using Policy Based Management for the SQL Server checks.

Please read the latest MSSQL Tip: "Using Policy Based Management to check that SQL Server indexes are created in the correct filegroup".

Check out all of my tips here:


MSSQL Tips: Tip #14 – Using WMI alerts to import SQL Server Default Trace events

And another tip about WMI alerts use.

Please read the latest MSSQL Tip: "Using WMI alerts to import SQL Server Default Trace events".

Check out all of my tips here:


Q&A: Script for the Job that will shrink the log files for all databases

There was a user question about how to shrink the log files with dynamic SQL (without hard-coding database names). The script is provided below. But first, make sure that you size the log files properly (large enough to grow), and make sure that you back up the log files regularly (or use the simple recovery model). And do not truncate the logs; shrink them just to the size you need. So, if my log files do not usually grow larger than 4 GB, I will use the following script:

CREATE TABLE #tran_log_space_usage(
    database_name sysname
    , log_size_mb float
    , log_space_used float
    , status int
)

-- DBCC SQLPERF(LOGSPACE) returns the log size and percentage used for every database
INSERT INTO #tran_log_space_usage
EXEC ('DBCC SQLPERF(LOGSPACE)')

-- optional: exclude specific databases from shrinking
DELETE FROM #tran_log_space_usage WHERE database_name = 'Exclude_DB'

EXEC dbo.sp_msforeachdb '
DECLARE @log_file VARCHAR(150)
IF (SELECT DB_ID(''?'')) > 4 AND (SELECT DATABASEPROPERTY(''?'', ''IsOffline'')) <> 1
BEGIN
    IF (SELECT log_size_mb
        FROM #tran_log_space_usage
        WHERE database_name = ''?'') > 4096
    BEGIN
        SELECT @log_file = name
        FROM [?].sys.database_files
        WHERE file_id = 2
        USE [?]
        DBCC SHRINKFILE (@log_file, 4096)
        PRINT ''/*********************************/''
    END
END'

DROP TABLE #tran_log_space_usage

You can add exceptions for databases as well (see 'Exclude_DB' in the script).


MSDB database cleanup

Disk space these days is not as expensive as it used to be, but as a DBA I like to keep things tidy and optimize as many things as possible. Cleaning up databases and old backups and archiving databases are some of my regular tasks.

This post contains a couple of examples of MSDB database cleanup, with screenshots and reports.

The MSDB database contains a lot of historical records related to email, backups, and maintenance plans. It does not usually grow too fast, but it may hold a lot of historical information in large environments. If you do not maintain the history properly, msdb can sometimes contain years of history.

Keep in mind that some of the historical information is required for database restores, for audit, etc. So, before you start cleaning up your database, make sure that this does not go against your policies. Some DBAs base their restore scripts on the msdb database's tables; this also has to be taken into consideration.

Some settings for the job history can be found under SQL Server Agent. Ideally you would set up a "History Cleanup Task" using Maintenance Plans in SQL Server Management Studio (SSMS). This is a quick and easy way:

But there are circumstances when you may need to do this manually, for example: SQL Server Agent is disabled on the SQL Server, or you inherited an old SQL Server where history has not been cleaned up since server setup. Also, keep in mind that the "History Cleanup Task" does not include Database Mail logs.


Manual Cleanup

The audit requirements in this example are to keep 6 months of historical data and 6 months of backup history (based on Service Level Agreements). This is not a real life scenario; we will only be using it to demonstrate database size changes after the cleanup.

I would like to keep the email sending log for the last 6 months, as well as evidence for auditors that I receive e-mails about critical system errors.

Here is the msdb database size on a SQL Server with 50 databases and 1 year of history:

Let's run the SSMS report showing Disk Usage by Top Tables:
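If you prefer a query over the SSMS report, something like the following approximates "Disk Usage by Top Tables" for msdb. This is a sketch using the sys.dm_db_partition_stats DMV; the TOP 10 cutoff is arbitrary.

```sql
-- Approximate "Disk Usage by Top Tables" for msdb via a DMV
USE msdb;
SELECT TOP 10
    OBJECT_NAME(ps.object_id) AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
    SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats ps
WHERE OBJECT_SCHEMA_NAME(ps.object_id) = 'dbo'   -- history tables live in dbo
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;
```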


Step 1: Backup-restore history cleanup

This is done by executing the sp_delete_backuphistory stored procedure. Here are the affected tables (as per BOL):

“… sp_delete_backuphistory must be run from the msdb database and affects the following tables:

  • backupfile
  • backupfilegroup
  • backupmediafamily
  • backupmediaset
  • backupset
  • restorefile
  • restorefilegroup
  • restorehistory

The physical backup files are preserved, even if all the history is deleted.”
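Before deleting anything, it can be useful to check how far back the backup history actually goes. A minimal check against the backupset table listed above:

```sql
-- How old is the oldest backup history record, and how many backup sets are kept?
SELECT MIN(backup_start_date) AS oldest_backup,
       COUNT(*) AS backup_sets
FROM msdb.dbo.backupset;
```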


The stored procedure’s parameters:

sp_delete_backuphistory [ @oldest_date = ] 'oldest_date'


The script to cleanup backup-restore history for our example scenario:

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = 'January 1, 2012'


Step 2: Mail history cleanup

This is done by executing sysmail_delete_log_sp and sysmail_delete_mailitems_sp stored procedures.


The sysmail_delete_log_sp stored procedure deletes entries from the Database Mail log (msdb.dbo.sysmail_log table).


The stored procedure’s parameters:

sysmail_delete_log_sp [ [ @logged_before = ] 'logged_before' ]

[, [ @event_type = ] 'event_type' ]


The script to cleanup old mail log records for our example:

EXEC msdb.dbo.sysmail_delete_log_sp @logged_before = 'January 1, 2012'

The sysmail_delete_mailitems_sp stored procedure deletes e-mail messages from the Database Mail internal tables (msdb.dbo.sysmail_allitems table).


The stored procedure’s parameters:

sysmail_delete_mailitems_sp [ [ @sent_before = ] 'sent_before' ]

[ , [ @sent_status = ] 'sent_status' ]


The script to cleanup old mail items for our example scenario:

EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = 'January 1, 2012', @sent_status = NULL -- all statuses
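As with the backup history, you can sanity-check how many Database Mail items will be affected before running the delete. A small sketch; it assumes the send_request_date column of sysmail_allitems is the relevant date:

```sql
-- Count Database Mail items requested before the cutoff date
SELECT COUNT(*) AS old_mail_items
FROM msdb.dbo.sysmail_allitems
WHERE send_request_date < 'January 1, 2012';
```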


Step 3: Maintenance plans log cleanup

This is done by executing sp_maintplan_delete_log stored procedure.

The sp_maintplan_delete_log stored procedure permanently deletes entries and details from the Maintenance Plans log (msdb.dbo.sysmaintplan_logdetail and msdb.dbo.sysmaintplan_log tables).

The stored procedure’s parameters:

sp_maintplan_delete_log [ [@plan_id = ] 'plan_id' ]

[, [@subplan_id = ] 'subplan_id' ]

[, [@oldest_time = ] 'oldest_time' ]

The script to cleanup old maintenance plans log records for our example scenario:

EXEC msdb.dbo.sp_maintplan_delete_log @oldest_time = 'January 1, 2012'


Step 4: Job history cleanup

This is done by executing sp_purge_jobhistory stored procedure.

The sp_purge_jobhistory stored procedure permanently deletes entries and details from the job history (msdb.dbo.sysjobhistory table).

The stored procedure’s parameters:

sp_purge_jobhistory [ [ @job_name = ] 'job_name' ]

[, [ @job_id = ] job_id ]

[, [ @oldest_date = ] oldest_date ]

The script to cleanup old job history records for our example scenario:

EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = 'January 1, 2012'
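Before purging, you can check how many job history rows fall outside the retention window. A quick sketch; note that agent_datetime() is an undocumented helper function that ships in msdb, so treat this as a convenience check only:

```sql
-- How many job history rows are older than the cutoff?
-- sysjobhistory stores run_date/run_time as integers;
-- msdb's agent_datetime() combines them into a datetime.
SELECT COUNT(*) AS old_rows
FROM msdb.dbo.sysjobhistory
WHERE msdb.dbo.agent_datetime(run_date, run_time) < 'January 1, 2012';
```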

All steps in one script


DECLARE @date datetime

SET @date = 'January 1, 2012'

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @date

EXEC msdb.dbo.sysmail_delete_log_sp @logged_before = @date

EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @date , @sent_status = NULL

EXEC msdb.dbo.sp_maintplan_delete_log @oldest_time = @date

EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @date


Now, if we run the same database reports from SSMS as before, we will see the difference:

The database's size was reduced by almost 64%!



Backup Default SQL Server Trace files

The Default Trace was first introduced in SQL Server 2005 and can be really useful to Database Administrators for troubleshooting.

There is a really great article about what the Default Trace is and what you can do with it if you want to learn more:—the-power-of-performance-and-security-auditing/

Today I would like to share my solution for setting up automated trace file backups.

How many times have you thought: "If I had set up this audit last month, I would know who changed the server configuration"? Unfortunately, sometimes we are too busy with our day-to-day work, or do not think we will ever need to check who changed a specific option on SQL Server and when it was changed. One of the reasons could be that nobody has access to the SQL Server except you. Or you know what you do and have documented every one of your steps. But what about those new applications that require sysadmin privileges on SQL Server to install applications and databases? You ask the vendor what the requirements are, and their response is: "SQL Server 2005 or higher". And then you discover that the installation process actually changed your SQL Server configuration to enable CLR.

This is not a real life example, but it is very close. It took me a couple of hours to go through the tape backups to find out when I had changed the server configuration, because I could not remember changing it at all! So, it WAS an application installation.

The first problem when you go through the tape backups is that it is very time consuming. The tape could be offsite, and you need to involve the backup team. The second problem is that the trace file we are looking for may not even be on the tape, especially if you back up trace files only once a day. The number of trace files is limited to five, and a new file is generated when the current file's size reaches 20 MB. At the same time, the oldest file is deleted.
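You can inspect the rollover behavior described above directly on the server. On a default configuration the Default Trace is flagged is_default in the sys.traces catalog view:

```sql
-- Inspect the Default Trace: current file path, rollover size and file count
SELECT id, path, max_size, max_files, is_rollover, is_default
FROM sys.traces
WHERE is_default = 1;
```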

There are lots of solutions out there for importing specific records from the trace file into a database. One of the latest articles, by Aaron Bertrand, can be found here:

But this does not quite work for me. I do not know what I may need next time, so capturing just specific records does not really work for me. Considering that I use DDL triggers for audit, and that I need to check old trace files probably once a year, I do not want to import all records into a database, because that is another database to support, including backups, maintenance, etc.

To save myself time the next time I need to check something, I came up with a solution to back up the trace files. This solution allows me to make sure that ALL trace files are in the tape backup, as well as to keep an archive of the trace files on the server for quick access. The solution has been designed and tested on SQL Server 2008 R2.

We want to have all archived Default Trace files on the same server, but in a different folder. The archived files will also be compressed (zipped).

A little challenge here is that we cannot back up the latest trace file, because it will still be in use. We can only back up "closed" traces.


Prerequisites:

  1. An archive folder on the server (make sure that the SQL Server account, or the account that will be used to run the archiving job, has rights to create and delete files in this folder). To make the scripting more generic, I have created the folder under the "Log" folder (the location of the Default Trace on your SQL Server instance).
  2. WMI SQL Server Alert.
  3. SQL Server Job to respond to alert (modify job to backup files).
  4. SQL Server Agent “Replace tokens for all job responses to alerts” option is enabled.
  5. Database Mail is set up.
  6. An operator exists to receive test e-mails.
  7. Gzip utility to “zip” archived files.

Note: items 5 and 6 are only required for testing. You may skip the testing part, as it is just to demonstrate the file rollover process and the response.

Implementation Steps (including testing):

  • This solution requires the SQL Server Agent option "Replace tokens for all job responses to alerts", which may not be desirable if you are running a very secure environment.

EXEC msdb.dbo.sp_set_sqlagent_properties @alert_replace_runtime_tokens = 1

Or set it up using SQL Server Management Studio through the SQL Server Agent Properties:

Note that if the SYSTEM account has been removed from logins or from the SYSADMIN server role, you may have issues with WMI alerts.

  • First we need to find the location of the Default Trace on your SQL Server instance:
SELECT LEFT(path, LEN(path)-CHARINDEX('\', REVERSE(path)))  FROM sys.traces WHERE id = 1

  • Now create a subfolder for the archived trace files (it can be any other folder that you back up to tape):

  • Create a job to test/confirm that the WMI event (trace file rollover) triggers an e-mail and that the "Replace tokens" option has been configured properly and works. You will need to replace the @notify_email_operator_name parameter's value and the @recipients e-mail in the script below. Also replace @profile_name with your DB Mail profile. Remember that we are just testing WMI now. Do not test this in your production environment!
USE [msdb]
GO
BEGIN TRANSACTION
DECLARE @ReturnCode INT
DECLARE @jobId BINARY(16)
SELECT @ReturnCode = 0
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'DBA' AND category_class=1)
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'DBA'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'WMI Response - Default Trace File Rollover',
            @description=N'Sends notifications to DBA when Default Trace File Rollover event occurs',
            @category_name=N'DBA',
            @notify_email_operator_name=N'DBA_Operator', @job_id = @jobId OUTPUT
-- replace N'DBA_Operator' with the operator you created
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Sends e-mail in response to alert',
            @os_run_priority=0, @subsystem=N'TSQL',
            @command=N'EXEC msdb.dbo.sp_send_dbmail
    @profile_name = ''DBServerAlerts'',
    @recipients = '''',
    @body =   ''File Name: $(ESCAPE_SQUOTE(WMI(FileName)));
                        Start Time: $(ESCAPE_SQUOTE(WMI(StartTime)));
                        Server Name: $(ESCAPE_SQUOTE(WMI(ServerName)));
                        Instance Name: $(ESCAPE_SQUOTE(WMI(SQLInstance)))'',
    @subject = ''Default Trace File Rollover event - $(ESCAPE_SQUOTE(WMI(SQLInstance)))'' ;'
-- replace @recipients with the e-mail you will be using for the testing;
-- replace @profile_name with your DB Mail profile
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO
  • Create alert:
EXEC msdb.dbo.sp_add_alert @name=N'WMI - Default Trace File Rollover',
   @wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER', -- default instance
   @wmi_query=N'select * from TRACE_FILE_CLOSE',
   @job_name=N'WMI Response - Default Trace File Rollover' -- name of the job created in the previous step

Note that @wmi_namespace will be different for a named instance (example: \\.\root\Microsoft\SqlServer\ServerEvents\INSTANCENAME).

  • Initial testing.

It may take some time to get a response from this alert when you test it. This will depend on how busy your system is (how often trace files reach the 20 MB size). It may actually take several hours on your test SQL Server. To speed up trace file generation, I run this script when I do my testing (it runs for about 2 minutes on my test server):

CREATE TABLE tempdb.dbo.ThisIsLongNameToFillOutTraceFile (
       ThisIsLongNameToFillOutTraceFile_c1 uniqueidentifier NOT NULL  CONSTRAINT column_a_un1 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c2 uniqueidentifier NOT NULL  CONSTRAINT column_a_un2 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c3 uniqueidentifier NOT NULL  CONSTRAINT column_a_un3 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c4 uniqueidentifier NOT NULL  CONSTRAINT column_a_un4 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c11 uniqueidentifier NOT NULL  CONSTRAINT column_a_un5 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c12 uniqueidentifier NOT NULL  CONSTRAINT column_a_un6 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c13 uniqueidentifier NOT NULL  CONSTRAINT column_a_un7 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c14 uniqueidentifier NOT NULL  CONSTRAINT column_a_un8 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c111 uniqueidentifier NOT NULL  CONSTRAINT column_a_un9 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c112 uniqueidentifier NOT NULL  CONSTRAINT column_a_un10 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c113 uniqueidentifier NOT NULL  CONSTRAINT column_a_un11 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c114 uniqueidentifier NOT NULL  CONSTRAINT column_a_un12 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c211 uniqueidentifier NOT NULL  CONSTRAINT column_a_un13 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c212 uniqueidentifier NOT NULL  CONSTRAINT column_a_un14 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c213 uniqueidentifier NOT NULL  CONSTRAINT column_a_un15 UNIQUE,
       ThisIsLongNameToFillOutTraceFile_c214 uniqueidentifier NOT NULL  CONSTRAINT column_a_un16 UNIQUE
       )  ON [PRIMARY]
DROP TABLE tempdb.dbo.ThisIsLongNameToFillOutTraceFile
GO 3000

Here is an example of the e-mail that you will get when a trace file rollover occurs:

You will see the new file (log_2088.trc) created on the server at the same time:

  • Now that we have validated that WMI alerts work and the server is configured properly, we can replace the job with the final version:

There will be two steps: one to copy the closed trace file to the archive folder, and one to zip the copied file:

Step 1:

Command (in one line):

for /f "tokens=1,2,3,4,5 delims=\ " %a in ("$(ESCAPE_SQUOTE(WMI(FileName)))") do 
xcopy $(ESCAPE_SQUOTE(WMI(FileName))) %a\%b\%c\%d\defaultTraceArchive

Step 2:

Command (in one line):

for /f "tokens=1,2,3,4,5 delims=\ " %a in ("$(ESCAPE_SQUOTE(WMI(FileName)))") do 
cmd /c "%a\%b\%c\%d\defaultTraceArchive\gzip.exe -9 %a\%b\%c\%d\defaultTraceArchive\*.trc"


Now we have everything ready for the automated backup of the Default Trace files.


Final script: