Tuesday, January 31, 2012

Shrinking the log file of secondary database in Log Shipping setup

SQL Server 2005/2008: Shrinking the log file of secondary database in Log Shipping setup


Platform: SQL Server 2005 and 2008

I had set up log shipping between two servers, and on most days the size of the primary database's log file stays within acceptable limits. But today there was a bulk load that produced a huge log file. Consequently, the log file of the secondary database (the database to which the logs were being shipped and restored) also grew, so much that the disk drive holding the log file was almost full.

So I had to figure out a way to shrink the log file of the secondary database, but it was in READ ONLY mode, so there was no way I could shrink that database directly. Until today I had never been in a situation where I had to shrink the log file of a secondary database. Just to try, I ran the DBCC SHRINKFILE command against the primary database, and that did the trick: it shrank the log file on the secondary server too. Before that took effect, though, I had to run the LSBackup, LSCopy and LSRestore jobs to make sure the transaction was copied over to the secondary database.
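For reference, here is a minimal sketch of the shrink command I ran against the primary database. The database name, the logical log file name and the target size are hypothetical; find your logical file name with SELECT name, type_desc FROM sys.database_files.

USE MyDatabase;
GO
-- Logical log file name is hypothetical; target size is in MB
DBCC SHRINKFILE (MyDatabase_Log, 1024);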

Conclusion: In a log shipping setup in SQL Server 2005/2008, DBCC SHRINKFILE on the primary database does get propagated to the secondary database.

Find missing indexes

SQL Server 2005/2008 : Find missing indexes

Missing index report
Starting with SQL Server 2005, a set of DMVs was introduced that records which indexes would improve the performance of your queries. The following query gives you a list of the indexes that SQL Server recommends. Use this report wisely and do not blindly create every index it lists.


SELECT
  migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,
  [Table] = mid.[statement],
  'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
  + '_' + LEFT (PARSENAME(mid.[statement], 1), 32) + ']'
  + ' ON ' + mid.[statement]
  + ' (' + ISNULL (mid.equality_columns,'')
  + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
  + ISNULL (mid.inequality_columns, '')
  + ')'
  + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
  migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
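Before creating any of the recommended indexes, it also helps to see how the table's existing indexes are used. Here is a sketch of one way to check (this query is my addition, not part of the missing index report):

-- How existing indexes in the current database are used:
-- reads (seeks/scans/lookups) vs. writes (updates)
SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i
    ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;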

Monday, January 30, 2012

Wrap text in Query Window

SQL Server 2005/2008: How to wrap text in query window?

If you generate or edit a very long query in the query window, it is much easier to work with when the text wraps onto new lines. Otherwise you end up with one long line and have to insert line breaks manually, which is painful for a large query and creates opportunities for mistakes.

To enable word wrap, go to the Tools menu and then Options:

Tools > Options > Text Editor

Check the 'Word wrap' option under 'All Languages'.

You can also enable the option to display line numbers in the same window.

SQL Server to Excel Import

How to Import data from SQL Server database to an Excel file?

If you have Excel 2007 or higher, go to 'Data' in the menu; there should be an option to import data from SQL Server.

If 'Microsoft SQL Server' does not show up in the list of data sources, go to Control Panel > Administrative Tools > Data Sources (ODBC) and add the SQL Server ODBC driver.

For Microsoft Office 2003

1) In the menu click on 'Data' > Import External Data > Import Data.

2) In the 'Select Data Source' window click on 'New Source'.


3) In the 'Data Connection Wizard' pick 'Microsoft SQL Server'

4) Enter the hostname or IP address of the server where the SQL Server instance is running. Then choose whether to connect using Windows authentication or SQL Server authentication, and enter the appropriate login information.

5) Select the database and the table you want to pull the data from.

6) Give the connection an appropriate name and description.

7) You should now see the 'Select Data Source' window. If not, go to 'Import External Data' in the menu and pick the connection you just created.

8) In the 'Import Data' window select the worksheet and the location where you want the data to be placed.

9) If you want to import the entire table, click 'Edit Query' and select 'Table' in the 'Command Type' drop-down.


If the table is too large and you want to restrict the number of records returned, pick 'SQL' in the 'Command Type' drop-down and enter your SELECT query in the 'Command Text' box.
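For example, a query like this could go in the 'Command Text' box; the table and column names here are made up for illustration:

-- Pull only recent rows instead of the whole (hypothetical) Orders table
SELECT OrderID, CustomerName, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '2012-01-01';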

Thursday, January 26, 2012

Error: 18456, Severity: 14, State: 38

Error: 18456, Severity: 14, State: 38
Login failed for user 'produser'. Reason: Failed to open the explicitly specified database. [CLIENT: 192.168.10.10]

or

Error: 18456, Severity: 14, State: 16.
Login failed for user 'XXXXXX'. [CLIENT: xxx.xx.x.xxx]

This is one of the most frustrating login failure errors because the error itself does not mention which database the login was trying to connect to, and the Windows Event Viewer logs do not give any further information.

It seems that prior to SQL Server 2008 this state meant the same as state 16, but in that case you would actually see the name of the database the login was trying to connect to.
To troubleshoot state 38 you need to run a Profiler trace and capture the following two events.

Errors and Warnings: User Error Message
Security Audit: Audit Login Failed

Make sure you select all the columns and run the trace while the login attempt is made.

For the 'User Error Message' event you should see the name of the database the connection was attempting to reach. Verify that the database exists, that the user has been created in it, and that the user has permission to connect. You should also check whether this is an orphaned user and fix it if so.
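As a sketch of the orphaned user check, using the stored procedures available in SQL Server 2005/2008 (the user/login name is hypothetical):

-- List users in the current database whose SIDs do not match any server login
EXEC sp_change_users_login 'Report';

-- Re-map a hypothetical orphaned user to the server login of the same name
EXEC sp_change_users_login 'Update_One', 'produser', 'produser';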

Monday, January 23, 2012

The In-row data RSVD page count for object "Table123" is incorrect. Run DBCC UPDATEUSAGE

Over the weekend I received an alert that the full backup job had failed. On further analysis I found that the DBCC CHECKDB step that runs before the full backup step had failed with the following message.

Executing the query "DBCC CHECKDB WITH NO_INFOMSGS " failed with the following error:
The In-row data RSVD page count for object "Table456", index ID 0, partition ID 549656920064, alloc unit ID 549656920064 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.
The In-row data RSVD page count for object "Table123", index ID 0, partition ID 23025438949376, alloc unit ID 23025438949376 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.

As suggested in the message, I decided to run DBCC UPDATEUSAGE against the database in question. But before I did that, I exported all the tables to CSV files, generated scripts to recreate all the users and their permissions, and saved the previous successful backup so that I could recreate all the objects if required.
Then I ran the DBCC command and it completed successfully. I checked the DBCC CHECKDB output one more time and verified everything was fine.
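For reference, a minimal sketch of the two commands, with a hypothetical database name:

-- Correct inaccurate page and row counts for all objects in the database
DBCC UPDATEUSAGE ('MyDatabase');

-- Re-run the integrity check to confirm the errors are gone
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;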

Monday, January 16, 2012

SQL Server 2005/2008 : Error: 9002, Severity: 17, State: 2


Error: 9002, Severity: 17, State: 2
The transaction log for database 'mydatabase' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

The transaction log file gets full in one of the following two situations.

1) The log file has been configured with a preset maximum size and the file has reached that limit.
2) The log file has been configured with unlimited growth and the disk holding it is full.

If it is the second situation, first free up some disk space by moving or deleting files.

Now let's look at why the file got full. The first thing to check is the log_reuse_wait_desc column in sys.databases.

select name, recovery_model_desc, log_reuse_wait_desc from sys.databases

Several reasons can show up in this column; some of them are noted here.
NOTHING
CHECKPOINT
LOG_BACKUP
ACTIVE_BACKUP_OR_RESTORE
ACTIVE_TRANSACTION
DATABASE_MIRRORING
REPLICATION
DATABASE_SNAPSHOT_CREATION
LOG_SCAN
OTHER_TRANSIENT


If the database in question is TEMPDB then the process to resolve it is different, and the reasons TEMPDB gets full are different too. But let me discuss the most common reasons why a user database's log file gets full.

LOG_BACKUP
In most cases the reason noted in 'log_reuse_wait_desc' will be 'LOG_BACKUP'. This means that the database is in the FULL recovery model and is waiting for a log backup to be taken.

If you have scheduled a regular log backup job, check its status and wait for it to finish before you shrink the log file. If you check the free space in the log file you will indeed see a lot of unused space, but you cannot shrink it yet. Once the log backup completes you can shrink the file.

But if the data file is not as big as the log file then, instead of taking a log backup, I do the following (a T-SQL sketch follows the list).

1) Change the recovery model to SIMPLE
2) Shrink the log file.
3) Change the recovery model to FULL
4) Take a full backup and subsequently schedule log backups.
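A minimal sketch of those four steps; the database name, logical log file name, target size and backup path are all hypothetical:

-- 1) Switch to the SIMPLE recovery model so the inactive log can be cleared
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
GO
-- 2) Shrink the log file (logical name from sys.database_files; target size in MB)
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_Log, 512);
GO
-- 3) Switch back to the FULL recovery model
ALTER DATABASE MyDatabase SET RECOVERY FULL;
GO
-- 4) Take a full backup to restart the log chain, then schedule regular log backups
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_full.bak';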

Sometimes the above steps take a lot less time to complete than taking a log backup and then shrinking the file. But please keep in mind that when you do this you have essentially broken the log chain and will have to resync the database if it is configured for log shipping. 

Whether to truncate the log or not depends on the database size: if it is not too big, truncate it and take a full backup; otherwise it is best to keep taking log backups.

ACTIVE_TRANSACTION
Another prominent reason that I have seen is 'ACTIVE_TRANSACTION'. In this case it is best to first add a new log file to the database, or extend the existing one, on a disk that has free space. Then run DBCC OPENTRAN against the database and check the open transactions. This gives you more information about the transaction that is consuming most of the log space and has not yet completed.
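A quick sketch of both steps, with hypothetical names and paths:

-- Add a second log file on a disk that still has free space
ALTER DATABASE MyDatabase
ADD LOG FILE (NAME = MyDatabase_Log2, FILENAME = 'E:\Logs\MyDatabase_Log2.ldf', SIZE = 1GB);
GO
-- Show the oldest active transaction in the database, with its SPID and start time
DBCC OPENTRAN ('MyDatabase');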

If the reason given is ACTIVE_BACKUP_OR_RESTORE then refer to my earlier post to estimate how long the current backup or restore will take to finish.

http://saveadba.blogspot.com/2011/10/backup-and-restore-progress.html
If the reason is related to either replication or mirroring, first check the status of replication or mirroring to ensure they are up to speed and do not have any latency. This should help reduce the log reuse wait.

Tuesday, January 10, 2012

SQL Server 2005/2008: Capturing Deadlock information or simulating deadlock

You can always run a Profiler trace to capture the deadlock graph, which gives details of the SPIDs and queries involved. But I would rather turn on the deadlock trace flags and capture the information in the error logs.

To do this run the following commands

DBCC TRACEON (1204, -1)
DBCC TRACEON (1222, -1)

The output from each of them is different, so check both before you decide which one you prefer. I like the output from the second one (trace flag 1222), which was introduced in SQL Server 2005 and gives the information for each process involved.
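Once a trace flag is enabled, the deadlock details are written to the SQL Server error log. You can search the log from T-SQL; note that sp_readerrorlog is undocumented, so treat this parameter list as an assumption:

-- Read the current error log (0), type SQL Server (1), filtering for 'deadlock'
EXEC sp_readerrorlog 0, 1, 'deadlock';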

If you want to test the output, take the following steps to simulate a deadlock and check what gets logged.

1) Create and populate two test tables:

    CREATE TABLE a (i int);
    CREATE TABLE b (i int);

    INSERT a SELECT 1;
    INSERT b SELECT 9;

2) In a new window (locks table a first, then table b):

    BEGIN TRAN
    UPDATE a SET i = 11
    WHERE i = 1
    WAITFOR DELAY '00:00:10'
    UPDATE b SET i = 99
    WHERE i = 9
    COMMIT

3) In another window (locks table b first, then table a):

    BEGIN TRAN
    UPDATE b SET i = 99
    WHERE i = 9
    WAITFOR DELAY '00:00:10'
    UPDATE a SET i = 11
    WHERE i = 1
    COMMIT


If you want to retest the deadlock then drop the tables and rerun the steps.