
SQL SERVER – FCB::Open failed: Could not open file for file number 2. OS error: 5(Access is denied.)


Most days, when my machine boots up, the first thing I do is open Management Studio to work on some script or the other. But sometimes strange things happen and I get all sorts of errors. There might be tons of reasons why SQL Server is not able to start. This blog is the result of a quick consulting engagement with one of my clients where I faced an OS error.

They contacted me to solve a production-down situation where SQL Server was not starting after moving the database files from the C drive to the D drive. They shared the error message below:

[Screenshot: FCB-01]

—————————
Services
—————————
Windows could not start the SQL Server (MSSQLSERVER) on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 3417.
—————————
OK
—————————

This is a very generic error. The first piece of data I always ask for is the SQL Server ERRORLOG from the time SQL Server was not able to start.

SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location
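
On a running instance, the ERRORLOG path can also be read with a quick query (a minimal sketch; the ErrorLogFileName server property is available on recent versions, including the client's SQL Server 2016):

-- Shows the full path to the current ERRORLOG file
SELECT SERVERPROPERTY('ErrorLogFileName') AS ErrorLogPath;

When the instance itself will not start, fall back to the -e startup parameter shown in SQL Server Configuration Manager to find the path.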

Here is what they shared with me:

2016-06-14 06:28:06.15 spid4s Error: 17204, Severity: 16, State: 1.
2016-06-14 06:28:06.15 spid4s FCB::Open failed: Could not open file D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\mastlog.ldf for file number 2. OS error: 5(Access is denied.).
2016-06-14 06:28:06.15 spid4s Error: 5120, Severity: 16, State: 101.
2016-06-14 06:28:06.15 spid4s Unable to open the physical file “D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\mastlog.ldf”. Operating system error 5: “5(Access is denied.)”.
2016-06-14 06:28:06.15 spid4s SQL Server shutdown has been initiated

It means that SQL Server was shutting down because the master database could not be opened. You can also get a similar message, with the same error number, in the Event Log.

Source: MSSQLSERVER
Date: 6/13/2015 2:24:39 PM
Event ID: 17204
Task Category: Server
Level: Error
Keywords: Classic
User: N/A
Computer: MySQLServer.MyCorp.local
Description:
FCB::Open failed: Could not open file D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\mastlog.ldf for file number 2. OS error: 5(Access is denied.).

Our problem is OS error: 5(Access is denied.).

Fix for Error FCB Open

This is a Windows-related issue where SQL Server does not have the appropriate permission to the folder that contains the master database files, and hence the error. Now, what should be done? Since access is denied, we need to grant access. Here are the steps:

Locate the file (shown in the error message), right-click it and select Properties. Then, from within the Security tab, verify that the account for the SQL Server service has full control of this file. In my client’s case it was NT Service\MSSQLSERVER, so we gave full control to that account.
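
If you are not sure which account the service runs under, SQL Server Configuration Manager shows it even while the instance is down. Once the instance is back up, a quick query confirms it as well (a minimal sketch, assuming a version where sys.dm_server_services is available, i.e. SQL Server 2008 R2 SP1 or later):

-- Shows the account each SQL Server service runs under
SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;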

[Screenshot: FCB-02]

Have you encountered a similar situation? What were your troubleshooting steps? I would love to learn from you too.

Reference: Pinal Dave (http://blog.sqlauthority.com)



Recover Lost Data Using the Transaction Log Files


Every now and then, experienced SQL Server DBAs as well as SQL Server rookies find themselves in the unpleasant situation where some important data has been changed or lost, with the monumental task of solving this in the most efficient way. Regardless of the source of the change (an internal or external user), the intent behind it (an unintended mistake or a malicious change), or the exact nature of the change (update, delete, drop or something else), database administrators are faced with the task of recovering the lost data and enabling users to continue using the database as if the recovery was never required. Let us learn how to recover lost data using the transaction log files.

Even though data recovery is not always possible (especially in cases of heavy corruption, disk failures or other major calamities, or when no backups or files are available for the recovery), recovering data lost due to rogue INSERT, UPDATE and DELETE operations, as well as due to ALTER, DROP or even CREATE operations, is possible whenever the appropriate recovery sources are available: the live database or a full database backup taken prior to the changes, plus the consequential transaction log backups.
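
Under the hood, this is possible because every fully logged operation leaves a record in the transaction log. Purely for illustration, you can peek at these records yourself with the undocumented fn_dblog function (unsupported, and its output format can change between versions):

-- Count DELETE log records per allocation unit in the current database
SELECT AllocUnitName, COUNT(*) AS delete_count
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = N'LOP_DELETE_ROWS'
GROUP BY AllocUnitName;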

Recover Lost Data

Here are more details on the requirements:

  • The database is still live on the SQL Server instance (no other issues with the database except that the data has been changed/lost), or the full database backup taken prior to the unwanted changes is available and can be restored on the SQL Server – we will need to connect to the database to perform the recovery of the lost data.
  • The database is in the full recovery model – this ensures that information on all database changes has been written to the online transaction log file (a quick way to check the recovery model is shown after this list). If the database is in the simple recovery model, SQL Server will overwrite the online transaction log content on a regular basis, and the operation history with recovery information will not be available. In some cases, recovery can still be performed on a database in the simple recovery model (when traffic on the database is really low and the recovery is performed immediately after the unwanted changes), but it is highly unlikely to recover 100% of the lost data, since information in the online transaction log file will probably have been partially overwritten.
  • The online transaction log file, or all consequential transaction log backups which follow the latest full database backup, are available in case transaction log backups are being created on a regular basis – they can be stored on any accessible network drive or on the local disk and accessed remotely.
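
As a quick check for the recovery model requirement mentioned above, the current setting of each database can be read from sys.databases (a minimal sketch; the database name is only an example):

-- FULL is required for a complete, recoverable log chain
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyDatabase'; -- hypothetical database name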

With these requirements met, we can use ApexSQL Log to completely recover the lost data. ApexSQL Log is a transaction log reader which enables users to examine the online transaction log file or transaction log backups of Microsoft SQL Server databases, analyze the content of log files in a comprehensive grid, or perform data or structure recovery of the lost data by creating a rollback script. The rollback script creates the opposite operation from the ill-fated one in order to negate it and return the original field value or table structure. The rollback or ‘Undo’ script created by ApexSQL Log can also be edited by the user to fine-tune the recovery process.

In order to showcase the recovery of lost data with ApexSQL Log, let’s assume the following:

  • Our SQL Server database is in the full recovery model
  • Latest full database backup has been created this morning at 08:00am and regular transaction log backups are created every hour
  • Somewhere around 11:35am, several rogue delete operations occurred on a couple of tables

To start the recovery process:

  1. Start ApexSQL Log and initiate a new session by clicking on the ‘New’ button in the main ribbon – this will start a new ‘Session’
  2. The first step of the session wizard is the “Database connection” step. Simply choose a SQL Server instance, choose the authentication method, provide valid credentials and choose the database which needs to be recovered from the drop-down menu
  3. Clicking on the ‘Next’ button shows the ‘Data sources’ step of the wizard. In this step, we need to add all relevant consequential transaction log files from the last full database backup onwards (or audit only the online transaction log file if backups are not being created). These can be added by clicking on the ‘Add’ button, or by using the ‘Add pattern’ option to add multiple files in bulk. Conveniently, ApexSQL Log automatically adds full and transaction log backups for the audited database by default, so the only remaining task is to choose (check) the files we need to audit

Note: in our example the online transaction log has been unchecked, since we are creating transaction log backups and analyzing the online transaction log file will not bring value to our analysis

  4. Clicking on the ‘Next’ button leads to the ‘Output’ step of the wizard. If the user wants to perform a thorough analysis and investigation of the rogue changes (who made them, when, how…), ‘Open results in grid’ is the best option – auditing results are shown in a comprehensive grid which is suitable for investigation. The other options (“Create before-after report” and “Export results”) allow users to create specific or overall auditing reports. Since the goal in our example is to simply and efficiently recover the data, the “Undo/Redo” option should be chosen
    [Screenshot: recoverlog3]
  5. After selecting the most appropriate option, the wizard leads to the ‘Filter setup’ step, which allows users to use various filters to fine-tune the auditing results. For our example, let’s set up a date/time filter to audit changes only between 11am and noon
    [Screenshot: recoverlog4]

Uncheck all operations except deletes:

[Screenshot: recoverlog5]

And choose the two tables that were affected by the deletes we want to roll back. Other filters can be used in addition to the ones mentioned above to further focus the auditing.

[Screenshot: recoverlog6]

  6. Clicking on the ‘Next’ button forwards the user to the final step of the wizard. The ‘Undo’ script is already selected by default, so clicking on the ‘Finish’ button will complete the wizard.
  7. Once the processing is completed, ApexSQL Log displays the auditing results (statistics) and creates an undo script to roll back the unwanted changes.

[Screenshot: recoverlog7]

The created “undo” script can be opened, edited and executed directly in the ApexSQL Log internal editor, or via any other editor tool such as SQL Server Management Studio.

  8. As shown in the auditing statistics, 23 rogue deletes have been found, which means that the rollback script contains undo operations for all 23 deletes. Clicking on the ‘undo.sql’ link will open the script in the internal editor, where it can be inspected or edited if needed
  9. The only task that remains at this point is to execute the script against our database. To do this, click on the ‘Connect’ button, choose our database and provide connection credentials. Once this is done, click on the ‘Execute’ button and the script will be executed and the data recovered – ApexSQL Log will show the execution results in the ‘Message’ pane

[Screenshot: recoverlog8]

This concludes the recovery process and rolls back all of the rogue deletes that occurred on the two observed tables – the data is inserted back into the tables in its original state, as it was before the unwanted deletes occurred.

As mentioned above, the same approach can be used to recover from other unintended DML and DDL operations. The user simply needs to adjust the filters in the session wizard to include/exclude only those operations, tables or users which are relevant to the recovery process.

An alternate approach to the recovery is to choose ‘Open results in grid’ for the auditing ‘Output’ and complete the remaining steps in the same manner as above. The results will first be shown in the comprehensive grid, where users get another chance to look at the results and to further filter the audited operations, in addition to the various details shown for each operation (including time of occurrence, the SQL Server login that ran the operation, LSN…). Once the user is satisfied with the grid filtering, the recovery script can be created directly from the application’s main ribbon.

[Screenshot: recoverlog9]

Reference: Pinal Dave (http://blog.SQLAuthority.com)


SQL SERVER – Error Fix: Msg 13601 Working with JSON Structure


As new versions of SQL Server come out, the number of capabilities just increases exponentially. It is tough to keep pace with the innovations and the learning one needs to go through. I have written a few articles around working with JSON on this blog earlier. These games of playing with the new capabilities will throw up tons of errors, as we are not completely aware of what is possible. These experiments lead you from one learning to another.

In a recent play with SQL for one of those earlier blogs, I got an error which I had forgotten to write about here. The error I was getting is shown below:

Msg 13601, Level 16, State 1, Line 1
Property ‘a.b’ cannot be generated in JSON output due to a conflict with another column name or alias. Use different names and aliases for each column in SELECT list.

[Screenshot: JSON-Error-Msg-13601-01]

At first glance, the correction is self-explanatory. I am just going to rehash the conditions you need to be aware of when working with this error:

  • Two columns or path aliases in a query with the FOR JSON PATH clause have the same name, or one is a prefix of the other (see the repro sketched after this list).
  • The JSON formatter cannot decide whether the prefix should reference a scalar or an object.
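
For illustration, a query along these lines should reproduce the error, because the alias ‘a’ is used both as a scalar column and as a prefix of another column (a hedged sketch; the values are arbitrary):

-- 'a' is a scalar here, but 'a.b' needs 'a' to be an object: conflict
SELECT 1 AS 'a', 2 AS 'a.b'
FOR JSON PATH;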

Fix JSON Error

A simple resolution is to change the conflicting alias, as shown below:

SELECT 1 as 'a', 2 as 'b'
FOR JSON PATH

With that small change made, the conflict is resolved and the output appears as shown below:

[Screenshot: JSON-Error-Msg-13601-02]

Do keep sharing the errors you hit when working with JSON; I would surely love to post some of them on the blog. And tell me: are you working with this feature?

Reference: Pinal Dave (http://blog.sqlauthority.com)


How to Use Zip With SSIS? – Notes from the Field #130


[Note from Pinal]: This is the 130th episode of the Notes from the Field series. In this episode we are going to learn something very simple but effective about SSIS and Zip. This subject is not discussed much, and there is hardly any information available about it. In this episode of the Notes from the Field series, database expert Kevin Hazzard explains how to use ZIP with SSIS. Read about Kevin's experience in his own words.


[Photo: Kevin Hazzard]

My son David is a database and ETL developer like me. He came to me with an interesting problem a while back. After dumping a bunch of tables into text-based data files using SSIS, he was required to zip them up before transmitting them via FTP. Everything he needed to do the job was built into SSIS except for the zipping function.

In the UNIX world, I’d have told him to pack the directory into a tarball from the command line. But how do you zip up an entire folder from the command line in Windows? Better still, how could you do this programmatically within SSIS? Archiving and unarchiving files and folders should be something that Windows does with ease. Yet, Windows was pretty late to the game with respect to compressed folder support. While some compression capabilities are built into the Windows shell, doing this from the command line without installing software isn’t straightforward.

ZIP with SSIS

[Screenshot: ssisandzip]

If you search for “Zip Windows Command Line” across the web, you’ll find lots of third-party tools that can do the job. And if you restrict the search to “Zip in SSIS” you’ll find pretty much the same set of tools, which you must weave into a solution using libraries that often need to be installed in the Global Assembly Cache (GAC). Let’s face it: nobody likes to use third-party tools in SSIS, especially to solve problems that seem so simple. Aside from the potential legal and security concerns, deploying and maintaining installed applications on various ETL servers can present a range of non-trivial, lingering operational issues that someone must manage.

I assumed that because the major search engines pointed only to these kinds of solutions on the first page or two of results that they represented the best-known way to go about zipping up files within SSIS. After an hour on his own, David came back to me and said he had found a better way. He discovered that the .NET Framework versions 4.5 and newer include a handy class in the System.IO.Compression namespace called ZipFile. Inside that class is a method named CreateFromDirectory that will do the trick. David simply dropped a script task onto his SSIS control flow, set the script project’s .NET Framework version to 4.5 and added a few lines of C# code that looked like this:

// Read the target Zip file name from an SSIS variable
string filename = Dts.Variables["vZipFileName"].Value.ToString();

// CreateFromDirectory fails if the target file already exists, so remove it first
if (System.IO.File.Exists(filename))
    System.IO.File.Delete(filename);

// Compress the entire extract directory into a single Zip file
System.IO.Compression.ZipFile.CreateFromDirectory(
    Dts.Variables["vExtractDirectory"].Value.ToString(), filename);

All that’s required are the two parameters vZipFileName and vExtractDirectory passed via SSIS variables. The script first checks to see if the Zip file already exists and deletes it as necessary. David had discovered through repeated testing that the deletion step was necessary since the CreateFromDirectory call will fail if the target file already exists.

It’s really that easy: aside from the required .NET Framework upgrade, there are no other tools or libraries to install. As you can imagine, I was pleasantly surprised by David’s quick research, his out-of-the-box thinking and the elegant solution that came from it. Other interesting methods in the ZipFile class like ExtractToDirectory and Open can be used to read and process compressed files if you need to ingest data from them, of course. I hope you find this simple way of handling Zip files as useful on your SSIS journeys as David’s proud Dad does.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Script level upgrade for database master failed because upgrade step sqlagent100_msdb_upgrade.sql encountered error


Sometimes I feel that one error can be caused by multiple reasons, and there are different ways to solve it. I already have two blogs about this error message. As they say, there is no silver bullet when it comes to solving some of these issues. SQL Server might throw a generic message, but it all leads to the same problem related to the master database:

SQL SERVER – Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’

SQL SERVER – Error 15559 – Error 912 – Script Level Upgrade for Database ‘master’ Failed

In my recent client’s case, the SQL Server ERRORLOG showed this error in the running upgrade script:

2016-06-21 15:55:45.40 Server      Microsoft SQL Server 2008 (SP4) – 10.0.6000.29 (X64)
Sep  3 2014 04:11:34
Copyright (c) 1988-2008 Microsoft Corporation
Enterprise Edition (64-bit) on Windows NT 6.0 (Build 6002: Service Pack 2)
..
2016-06-21 15:55:46.11 spid9s      Server name is ‘SQLPROD-N1’. This is an informational message only. No user action is required.
2016-06-21 15:55:46.11 spid9s      The NETBIOS name of the local node that is running the server is ‘CLUSTER’. This is an informational message only. No user action is required.
..
2016-06-21 15:56:07.91 spid9s      Error: 5184, Severity: 16, State: 2.
2016-06-21 15:56:07.91 spid9s      Cannot use file ‘F:\SQLData\temp_MS_AgentSigningCertificate_database.mdf’ for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the SQL Server does not have a dependency on it.
2016-06-21 15:56:07.91 spid9s      Error: 1802, Severity: 16, State: 1.
2016-06-21 15:56:07.91 spid9s      CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
2016-06-21 15:56:07.91 spid9s      Error: 912, Severity: 21, State: 2.
2016-06-21 15:56:07.91 spid9s      Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’ encountered error 598, state 1, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2016-06-21 15:56:07.92 spid9s      Error: 3417, Severity: 21, State: 3.
2016-06-21 15:56:07.92 spid9s      Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

My client contacted me because of the below error (which is a side effect of the earlier errors, not a real corruption of the master database):

Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it

When we opened Failover Cluster Manager and checked the properties of the SQL Server resource, we found that the F drive dependency for SQL was missing. So, we added all 3 drives as dependencies in the SQL Server resource properties.

[Screenshot: upd-script-01]

Once we added the dependencies, the upgrade script ran fine and we were able to bring the SQL resource online. I know this was more of a clustering/configuration problem, but these things do happen, and they can leave us clueless about the possible root cause. Hope this blog helps you in case you get this error.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Unable to cast object of type ‘System.DBNull’ to type ‘System.String’. (Microsoft.SqlServer.Smo)


When you are driving on a highway, you look to the sign boards for guidance while you are busy driving. These sign boards tell you when your exit is approaching and how you should deal with it. I consider error messages to be the sign boards inside SQL Server. I rely on them heavily to give me insider information pointing toward a possible solution. These messages are not always super easy to understand, though. Take, for instance, the following message. While trying to create a database on my lab server, I got the below error related to System.DBNull:

[Screenshot: cast-error-01]

TITLE: Microsoft SQL Server Management Studio
—————————–
Unable to cast object of type ‘System.DBNull’ to type ‘System.String’. (Microsoft.SqlServer.Smo)
——————————
BUTTONS:
OK
——————————

This was a blind, dark tunnel for sure. But since I always want to know the cause of an error, I captured a profiler trace. Here is the query which was last fired from SSMS:

exec sp_executesql N'SELECT
dtb.collation_name AS [Collation],
dtb.name AS [DatabaseName2]
FROM
master.sys.databases AS dtb
WHERE
(dtb.name=@_msparam_0)',N'@_msparam_0 nvarchar(4000)',@_msparam_0=N'SQLAuthority'

When I executed the query manually in SSMS, I found the below:

[Screenshot: cast-error-02]

Since the error message talks about NULL (‘System.DBNull’), I think it refers to the collation column above, which is returned as NULL. When I checked SSMS, I found that I already had a database with the same name, but in an offline state.
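
The same conclusion can be reached directly from the catalog views: while a database is offline, its collation is reported as NULL (a quick sketch, reusing the database name from the example):

-- collation_name comes back NULL for an offline database
SELECT name, state_desc, collation_name
FROM sys.databases
WHERE name = N'SQLAuthority';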

[Screenshot: cast-error-03]

An easy workaround is to use T-SQL and run the CREATE DATABASE command, which returns a valid error:

Msg 1801, Level 16, State 3, Line 1
Database ‘SQLAuthority’ already exists. Choose a different database name.

If I bring the database online, then I also see the valid error in SSMS. It looks like an issue with SQL Server Management Studio. What do you think? Have you ever encountered such an error in your environment? I personally hope a better error message pops up some day. But now you know what to do for this error.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL Server Agent – Unable to start the service – The request failed or the service did not respond in a timely fashion


Rarely, it also happens that someone becomes an accidental DBA and gets stumped by simple things. This blog is the result of troubleshooting I did for a company where I found one such DBA. I don’t say you cannot become a DBA in this manner; it is the hard work that goes on behind this that needs to be understood. Let us learn about an error related to SQL Server Agent.

When they started working with me, they had an issue with starting the SQL Server service. I worked with them and found that it was due to a permission issue. I already blogged about this: FCB::Open failed: Could not open file for file number 2.

Once SQL Server was started, they also reported that SQL Agent was not starting. Below was the error:

[Screenshot: timely-01]

So I asked them to open the SQL Server Agent logs. The current log is named SQLAGENT.OUT. Here is a snippet from that log:

2015-06-14 05:09:39 – ? [100] Microsoft SQLServerAgent version 10.50.4033.0 (x86 unicode retail build): Process ID 7820
2015-06-14 05:09:39 – ? [101] SQL Server SQLPROD version 10.50.4033 (0 connection limit)
2015-06-14 05:09:39 – ? [102] SQL Server ODBC driver version 10.50.4033
2015-06-14 05:09:39 – ? [103] NetLib being used by driver is DBNETLIB.DLL; Local host server is SQLPROD
2015-06-14 05:09:39 – ? [310] 4 processor(s) and 4096 MB RAM detected
2015-06-14 05:09:39 – ? [339] Local computer is SQLPROD running Windows NT 6.0 (6002) Service Pack 2
2015-06-14 05:09:39 – ! [000] This installation of SQL Server Agent is disabled. The edition of SQL Server that installed this service does not support SQL Server Agent.
2015-06-14 05:09:39 – ? [098] SQLServerAgent terminated (normally)

The above message is interesting, and it gave me an idea about the possible issue. I asked them to check the initial 5 lines of the ERRORLOG and we saw the below:

2015-06-14 05:03:35.79 Server Microsoft SQL Server 2008 R2 (SP2) – 10.50.4033.0 (Intel X86)
Jul 9 2014 16:08:15
Copyright (c) Microsoft Corporation
Express Edition on Windows NT 6.0 (Build 6002: Service Pack 2) (WOW64)

Now this all makes sense. Here is the learning: SQL Server Express does not include SQL Server Agent functionality. The service will still appear, but we can’t start it.
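
If you do not have the ERRORLOG handy, the edition can also be confirmed with a one-line query (a quick sketch using built-in server properties):

-- Express Edition does not ship a usable SQL Server Agent
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;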

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Fix Error Msg 13602 working with JSON documents


My saga of working with JSON documents and discovering the various errors continues. With JSON being one of the new data structures introduced in SQL Server 2016, I have been exploring how it is fundamentally different from using the FOR XML constructs. The more I work and test with this capability, the more similarities I see between them. Coming from a generation of working with SQL Server for more than a decade, I try to explore and learn new concepts by keeping some reference point. This helps in faster learning and can accelerate the way you work with the new concepts.

In this blog, I am calling out one of the restrictions of using the FOR JSON clause with SQL Server 2016. We cannot use this clause in a DML statement like UPDATE. You will be presented with an error as shown below:

Msg 13602, Level 16, State 1, Line 7
The FOR JSON clause is not allowed in a UPDATE statement.

The code that caused this error is shown below for your reference:

UPDATE MyJSONTable
SET col = 1
FOR JSON AUTO

You can see the output from SQL Server Management Studio below:

[Screenshot: JSON-SQL-Server-Error-Msg-13602-01]

Hence, to summarize, this error is returned if the FOR JSON clause is used in a statement where it is not supported (e.g. UPDATE). An equivalent error is returned for the FOR XML clause. For completeness of understanding, we get the error shown below when it is used with the FOR XML clause:

Msg 6819, Level 16, State 1, Line 7
The FOR XML clause is not allowed in a UPDATE statement.

As you can see, many of these behaviors of FOR JSON are similar to how FOR XML worked in previous versions of SQL Server. I am sure hardly anyone has had a requirement of getting the output of a DML statement as JSON, but I could be proved wrong. Do let me know via comments about scenarios where you would have loved this to be enabled.
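
If you do need JSON around a DML statement, one workaround is to build the JSON with a separate SELECT first and then use the result in the UPDATE (a hedged sketch; MyJSONTable and col are just the example names from above, and col would need to be a string column wide enough for the document):

-- Build the JSON document separately, then use it in the DML statement
DECLARE @json NVARCHAR(MAX) = (
    SELECT col FROM MyJSONTable FOR JSON AUTO
);

UPDATE MyJSONTable
SET col = @json;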

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Detected a DTC/KTM in-doubt Transaction with UOW


Recently, one of my clients contacted me to assist in fixing an issue which was causing their production database to be in SUSPECT state. The learning I get from various consulting engagements is awesome, and the after-effect of this is the number of blogs I have published in the recent past. They were using Microsoft SQL Server 2008 R2 (SP2) in a clustered environment. They had a disaster and a hard failure of the SQL cluster. Initially there were issues with DTC, and they were able to fix those by installing the DTC component in the cluster. But once they reinstalled DTC, the production database was in SUSPECT state. Let us learn how to deal with a detected DTC/KTM in-doubt transaction with UOW.

I connected to their server via a desktop sharing tool, checked SSMS, and found that the database ‘IN_SELLER_MAIN’ was in SUSPECT state. I looked into the ERRORLOG and found the below:

2016-06-27 07:12:19.55 spid21s SQL Server detected a DTC/KTM in-doubt transaction with UOW {C36097CA-09D8-406D-B8D4-3A0200DD6A9A}.Please resolve it following the guideline for Troubleshooting DTC Transactions.
2016-06-27 07:12:19.55 spid21s Error: 3437, Severity: 21, State: 3.
2016-06-27 07:12:19.55 spid21s An error occurred while recovering database ‘SELLER’. Unable to connect to Microsoft Distributed Transaction Coordinator (MS DTC) to check the completion status of transaction (32:-1730264831). Fix MS DTC, and run recovery again.
2016-06-27 07:12:19.55 spid21s Error: 3414, Severity: 21, State: 2.
2016-06-27 07:12:19.55 spid21s An error occurred during recovery, preventing the database ‘SELLER’ (database ID 9) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.

When I checked DTC, there were no transactions listed:

[Screenshot: DTC-Suspect]

SOLUTION

The error messages are pretty straightforward. When SQL Server was stopped, there were some transactions which needed to be rolled forward or rolled back. Since the client had reinstalled DTC, there was no way for SQL Server to resolve the in-doubt transactions by itself. So, we decided to change a setting in SQL Server so that it aborts those transactions and recovery continues.

-- 'in-doubt xact resolution' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- 2 = presume abort: in-doubt DTC transactions are rolled back
EXEC sp_configure 'in-doubt xact resolution', 2
RECONFIGURE
GO

The value 1 or 2 determines what is done with such in-doubt transactions (1 = presume commit, 2 = presume abort). Since the transactions would have been rolled back on the other server, we changed the value to 2. Then we restarted the SQL resource and it came online. With that, the database also came online.
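
To verify that the change took effect, the current value can be read back from sys.configurations (a quick sketch):

SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'in-doubt xact resolution';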

When I searched for “Troubleshooting DTC Transactions”, as mentioned in the ERRORLOG message, I found an article (which is for SQL Server 2000).

If you are a DBA, you must have seen a SUSPECT database at least once; please share the reason and the fix via comments, to share the knowledge.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER 2016 – WARNING: Setup Limited to Reporting Services for SharePoint


I have a habit of reading the “What’s New” section whenever a new version is around the block. This is a great way to check and learn, in a nutshell, everything that is getting released. A single page will not make us masters of the features, but it gives us enough information to start our journey of exploration. Let us learn about the “Setup Limited to Reporting Services for SharePoint” warning.

It is also important to keep learning and testing various scenarios, because I get pushed into customer environments that challenge me. During my lab testing, I was trying to install SQL Server 2016. As soon as I clicked on the setup, I was welcomed with the below warning. This was fundamentally for one of my client assignments, wherein I was helping them migrate to the latest version of SQL Server.

“Setup limited to Reporting Services for SharePoint”. As the message says further – “This server is running Windows Server 2008 R2”.

[Screenshot: setup16-01]

I went back and looked into the documentation and found the issue listed there.

Issue:

There is a table of operating system support which does not list Windows Server 2008 R2; the minimum operating system is Windows Server 2012. This was not new, because these compatibility lists are published by Microsoft with every single release. Understanding the supportability matrix, and making sure it is adhered to, is critical in production environments. If we override the warning, be aware that support might not apply in case we get into issues later.

I am curious to know if you have installed such environments at your work. What were the scenarios when you did this as a DBA? Did you apprise your decision makers of such warnings? It would be great to understand your experience – do let us know via comments, please.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Tracking Database Dependencies


In this blog post we will be learning about tracking database dependencies.

Dependency Analysis

If database changes need to be made at the column level, it is essential to perform an impact assessment in order to determine which objects will be affected by the change. This means that SQL table column dependencies within a database need to be analyzed.

ApexSQL Clean is a tool that is used to analyze SQL Server database dependencies and also to remove unwanted objects.

To analyze table column dependencies within a SQL database, the first thing you need to do is create a new project. That can be done by clicking the New button in the Home tab and choosing the desired server and database.

[Screenshot: apextracking1]

After that, you need to select the Column dependencies button/option. Activating it will include all visible columns in the results grid, make the grid expandable, and allow previewing every column for each table.

[Screenshot: apextracking2]

The column type is automatically shown alongside the column name, whether it is a ROWGUID column, an identity column or a regular column.

[Screenshot: apextracking3]

For any selected column or object, ApexSQL Clean provides additional information in both the Children and Parents panes. The Parents pane contains a list of database objects that the selected object depends on.

[Screenshot: apextracking4]

Every time a table schema is modified, it is very important to detect all the SQL database objects that might be affected by the change. The main advantage of ApexSQL Clean is that it leverages dependency analysis algorithms to display column-level dependencies, unlike SSMS, where only object-level dependencies can be viewed. Without performing a full analysis, there is a risk of causing problems after implementing the update.
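
For comparison, the object-level dependencies that SQL Server tracks natively can be queried from sys.sql_expression_dependencies; the tool’s value is in going a level deeper, down to columns. A minimal sketch (the table name is only an example):

-- Objects that reference a given table (object level only)
SELECT referencing_object = OBJECT_SCHEMA_NAME(d.referencing_id) + N'.' + OBJECT_NAME(d.referencing_id)
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id = OBJECT_ID(N'dbo.MyTable'); -- hypothetical table name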

Graphical representation of dependencies

The Dependency viewer allows the visualization of all SQL object dependencies, including relationships that affect encrypted objects, system objects, or objects stored in databases encrypted with Transparent Data Encryption (TDE).

[Screenshot: apextracking5]

The Dependency viewer shows database object dependencies relative to each other in an easy to understand, well organized visual format, and offers various options for object manipulation, appearance, filtering, and more.

[Screenshot: apextracking6]

Visual dependencies can be displayed in four available types:

  • Circular
  • Hierarchical
  • Force directed
  • Orthogonal

Visual dependencies can be filtered out by using four dependency filters:

  • Children only – Using this filter shows the specified object and the objects that directly depend on it
  • Descendants – Shows the specified object, all objects that depend on it and all the objects that depend on those objects as well
  • Parents and children only – This filter shows the specified object, all of the objects it depends on and the objects that directly depend on it
  • Parents and descendants only – Shows the whole dependency chain (the selected object, all of the objects it depends on, all objects that depend on it and all of the objects that depend on those objects as well)

Besides the previously mentioned ability to display database dependencies graphically, the dependency viewer also offers the choice of customizing and designing the overall appearance of the dependency diagrams. This is used to improve the visual look and readability of the diagrams. The customization capabilities are:

  • Layout editing – resizing nodes and clusters, moving nodes and edges
  • Format functionality – adjusting colors, shapes and fonts for all graphical objects
  • Creation of custom graphs – manual definition of object position and size
  • General navigation – zooming and scrolling

Analyzing dependencies in client code

Among all its other features, ApexSQL Clean is packed with a very useful one that, unfortunately, most people are not even aware of. This feature analyzes SQL code references in Delphi, C#, XML, XAML, VB.NET, ASP.NET, CSS, HTML code etc., and detects which SQL objects are actually used and which ones are not. This helpful little feature keeps SQL databases clean and well organized.

Remove unwanted objects

In order to find any unreferenced objects that should be removed from a SQL database, create a new project, select a database and click OK. Once that is done, the referential status of the objects will be displayed in the results grid. All unreferenced SQL objects will be marked with a green checkmark in the Unreferenced column. Only these objects can be selected for removal from the database.

[Screenshot: apextracking7]

To clean up the database, select those objects and click the Create drop script button in the Actions tab.

[Screenshot: apextracking8]

After that, the drop script generation menu will appear, where you can select the SQL objects to be cleaned up. Previously selected objects are automatically checked when the drop script generation menu opens.

[Screenshot: apextracking9]

The next step offers the choice to create a drop script which can be used later, or to execute the created script and remove all unwanted objects. If you choose the Create drop script option, you can save it in a desired location or open the script in either the ApexSQL built-in editor or some other external editor, like SSMS, Notepad++ etc.

[Screenshot: apextracking10]

After executing the drop script, an additional message dialog will appear with the script execution results.

Creating a database cleanup report

Creating a report is easy to do, and you will be provided with detailed information afterwards, whether you want to include parent-child objects or need to generate a document which contains objects that have no child objects or any other references. To create a cleanup report of unreferenced objects, click the Export button in the Actions tab.

[Screenshot: apextracking11]

As shown above, there are two export options available: an HTML and an XML report. Selecting either option will open a new menu with additional customization options, shown below:

[Screenshot: apextracking12]

Select the needed report options and click OK. The generated cleanup report will contain the previously selected information, and it will allow printing a copy if necessary. A sample of a created HTML report can be seen below:

[Screenshot: apextracking13]

On the other hand, if you decide to create an XML report, the exported file will be created in .XML format and will look similar to this:

[Screenshot: apextracking14]

With the create report feature, ApexSQL Clean creates a well-organized document which contains all the information about objects and their dependencies, whether it is needed for reporting referenced objects or to keep a record of the unreferenced ones.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Delayed Durability, or The Story of How to Speed Up Autotests From 11 to 2.5 Minutes


This is one of the most interesting stories, written by my friend Syrovatchenko Sergey. He is an expert on SQL Server and works at Devart. Just like me, he shares a passion for wait stats and the new features of SQL Server. In this blog post he talks about one of the most interesting features: Delayed Durability. I strongly encourage you to find some time during your day to read this post and discover more about the topic.


[Image: tmetric1]

I’ve recently started helping with a new project, TMetric, which is being developed as a free web service for tracking working hours. The technology stack was originally selected to be Microsoft, with SQL Server 2014 as the data repository. One of the first tasks assigned to me was to study the opportunity to accelerate the auto-tests.

Before I got into gear, the project had existed for a long time and had gathered a fair number of tests (at that time I reckoned about 1300 auto-tests). On a build machine with an SSD, the tests ran for 4-5 minutes, and on an HDD for no more than 11-12 minutes. The whole team could be equipped with SSDs, but the essence of the problem would not be solved, especially since they were soon planning to expand the functionality and the number of tests would become even greater.

All tests were grouped, and before running each of the groups, the old data was purged from the database. Previously, purging was performed by recreating the database, but this approach proved to be very slow in practice. It would be much faster just to clean all the tables of data and reset the IDENTITY values to zero, so that future inserts would form correct test data. So, my starting point was a script with this approach:

EXEC sys.sp_msforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL' 
DELETE FROM [dbo].[Project] 
DBCC CHECKIDENT('[dbo].[Project]', RESEED, 0) 
DBCC CHECKIDENT('[dbo].[Project]', RESEED) 
DELETE FROM [dbo].[RecentWorkTask] 
... 
EXEC sys.sp_msforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL' 

An idea came straight to my mind: use dynamic SQL to generate the query. For example, if a table is referenced by foreign keys, then use the DELETE operation as before; otherwise, the data can be deleted with minimal logging using the TRUNCATE command.

As a result, the query for data deletion will look as follows:

DECLARE @SQL NVARCHAR(MAX)
      , @FK_TurnOff NVARCHAR(MAX)
      , @FK_TurnOn NVARCHAR(MAX)

SELECT @SQL = (
    SELECT CHAR(13) + CHAR(10) +
        IIF(p.[rows] > 0,
            IIF(t2.referenced_object_id IS NULL, N'TRUNCATE TABLE ', N'DELETE FROM ') + obj_name,
            ''
        ) + CHAR(13) + CHAR(10) +
        IIF(IdentityProperty(t.[object_id], 'LastValue') > 0,
            N'DBCC CHECKIDENT('''+ obj_name + N''', RESEED, 0) WITH NO_INFOMSGS',
            ''
        )
    FROM (
        SELECT obj_name = QUOTENAME(s.name) + '.' + QUOTENAME(o.name), o.[object_id]
        FROM sys.objects o
        JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        WHERE o.is_ms_shipped = 0
            AND o.[type] = 'U'
            AND o.name != N'__MigrationHistory'
    ) t
    JOIN sys.partitions p ON t.[object_id] = p.[object_id] AND p.index_id IN (0, 1)
    LEFT JOIN (
        SELECT DISTINCT f.referenced_object_id
        FROM sys.foreign_keys f
    ) t2 ON t2.referenced_object_id = t.[object_id]
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')

SELECT @FK_TurnOff = CAST(x.query('off/text()') AS NVARCHAR(MAX))
     , @FK_TurnOn = CAST(x.query('on/text()') AS NVARCHAR(MAX))
FROM (
    SELECT [off] = CHAR(10) + 'ALTER TABLE ' + obj + ' NOCHECK CONSTRAINT ' + fk
         , [on] = CHAR(10) + 'ALTER TABLE ' + obj + ' CHECK CONSTRAINT ' + fk
    FROM (
        SELECT fk = QUOTENAME(f.name)
             , obj = QUOTENAME(SCHEMA_NAME(f.[schema_id])) + '.' + QUOTENAME(OBJECT_NAME(f.parent_object_id))
        FROM sys.foreign_keys f
        WHERE f.delete_referential_action = 0
            AND EXISTS(
                    SELECT *
                    FROM sys.partitions p
                    WHERE p.[object_id] = f.parent_object_id
                        AND p.[rows] > 0
                        AND p.index_id IN (0, 1)
                )
    ) t
    FOR XML PATH(''), TYPE
) t(x)

IF @SQL LIKE '%[a-z]%' BEGIN

    SET @SQL = ISNULL(@FK_TurnOff, '') + @SQL + ISNULL(@FK_TurnOn, '')

    PRINT @SQL
    --EXEC sys.sp_executesql @SQL

END

Initially, the auto-tests ran for 11 minutes on my machine:

[Screenshot: tmetric2]

But after I rewrote the query, all tests began to run 40 seconds faster:

[Screenshot: tmetric3]

Of course, I could be happy about it and set resolved status for the task, but the basic problem remained:

[Screenshot: tmetric4]

The disk was heavily loaded while the tests were executing. I decided to see what waits there were on the server. To do this, I first cleared sys.dm_os_wait_stats:

DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)

I ran the auto-tests once again and then executed this query:

SELECT TOP(20)
      wait_type
    , wait_time = wait_time_ms / 1000.
    , wait_resource = (wait_time_ms - signal_wait_time_ms) / 1000.
    , wait_signal = signal_wait_time_ms / 1000.
    , waiting_tasks_count
    , percentage = 100.0 * wait_time_ms / SUM(wait_time_ms) OVER ()
    , avg_wait = wait_time_ms / 1000. / waiting_tasks_count
    , avg_wait_resource = (wait_time_ms - signal_wait_time_ms) / 1000. / [waiting_tasks_count]
    , avg_wait_signal = signal_wait_time_ms / 1000.0 / waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE [waiting_tasks_count] > 0
    AND max_wait_time_ms > 0
    AND [wait_type] NOT IN (
        N'BROKER_EVENTHANDLER', N'BROKER_RECEIVE_WAITFOR',
        N'BROKER_TASK_STOP', N'BROKER_TO_FLUSH',
        N'BROKER_TRANSMITTER', N'CHECKPOINT_QUEUE',
        N'CHKPT', N'CLR_AUTO_EVENT',
        N'CLR_MANUAL_EVENT', N'CLR_SEMAPHORE',
        N'DBMIRROR_DBM_EVENT', N'DBMIRROR_EVENTS_QUEUE',
        N'DBMIRROR_WORKER_QUEUE', N'DBMIRRORING_CMD',
        N'DIRTY_PAGE_POLL', N'DISPATCHER_QUEUE_SEMAPHORE',
        N'EXECSYNC', N'FSAGENT',
        N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'FT_IFTSHC_MUTEX',
        N'HADR_CLUSAPI_CALL', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        N'HADR_LOGCAPTURE_WAIT', N'HADR_NOTIFICATION_DEQUEUE',
        N'HADR_TIMER_TASK', N'HADR_WORK_QUEUE',
        N'KSOURCE_WAKEUP', N'LAZYWRITER_SLEEP',
        N'LOGMGR_QUEUE', N'ONDEMAND_TASK_QUEUE',
        N'PWAIT_ALL_COMPONENTS_INITIALIZED',
        N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP',
        N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP',
        N'REQUEST_FOR_DEADLOCK_SEARCH', N'RESOURCE_QUEUE',
        N'SERVER_IDLE_CHECK', N'SLEEP_BPOOL_FLUSH',
        N'SLEEP_DBSTARTUP', N'SLEEP_DCOMSTARTUP',
        N'SLEEP_MASTERDBREADY', N'SLEEP_MASTERMDREADY',
        N'SLEEP_MASTERUPGRADED', N'SLEEP_MSDBSTARTUP',
        N'SLEEP_SYSTEMTASK', N'SLEEP_TASK',
        N'SLEEP_TEMPDBSTARTUP', N'SNI_HTTP_ACCEPT',
        N'SP_SERVER_DIAGNOSTICS_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'SQLTRACE_WAIT_ENTRIES', N'WAIT_FOR_RESULTS',
        N'WAITFOR', N'WAITFOR_TASKSHUTDOWN',
        N'WAIT_XTP_HOST_WAIT', N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG',
        N'WAIT_XTP_CKPT_CLOSE', N'XE_DISPATCHER_JOIN',
        N'XE_DISPATCHER_WAIT', N'XE_TIMER_EVENT'
    )
ORDER BY [wait_time_ms] DESC

The biggest delay occurred on WRITELOG:

wait_type              wait_time  waiting_tasks_count  percentage
---------------------  ---------  -------------------  ----------
WRITELOG                 546.798                60261       96.07
PAGEIOLATCH_EX            13.151                   96        2.31
PAGELATCH_EX               5.768                46097        1.01
PAGEIOLATCH_UP             1.243                   86        0.21
IO_COMPLETION              1.158                   89        0.20
MEMORY_ALLOCATION_EXT      0.480               683353        0.08
LCK_M_SCH_S                0.200                   34        0.03
ASYNC_NETWORK_IO           0.115                  688        0.02
LCK_M_S                    0.082                   10        0.01
PAGEIOLATCH_SH             0.052                    1        0.00
PAGELATCH_UP               0.037                    6        0.00
SOS_SCHEDULER_YIELD        0.030                 3598        0.00

“This wait type is usually seen in the heavy transactional database. When data is modified, it is written both on the log cache and buffer cache. This wait type occurs when data in the log cache is flushing to the disk. During this time, the session has to wait due to WRITELOG.” (Reference: SQLAuthority – WRITELOG)

And what does this tell me? Each running auto-test records something in the database. One solution to the WRITELOG waits could be inserting data in large chunks, rather than row by row. But SQL Server 2014 has a new Delayed Durability option at the database level, i.e. the ability to delay flushing the log to disk when committing transactions.

How is data modified in SQL Server? Suppose we are inserting a new row. SQL Server calls the Storage Engine component which, in turn, accesses the Buffer Manager (which works with buffers in memory and on disk) and tells it that it wants to change the data.

After that, the Buffer Manager accesses the Buffer Pool (the in-memory cache for all of our data, which stores information by page, 8 KB per page), and then modifies the necessary pages in memory. If the pages are not in memory, it loads them from disk. At the moment a page is changed in memory, SQL Server cannot yet tell the client that the query was executed. Otherwise one of the ACID principles, namely Durability, would be violated: the completion of a modification must guarantee that the data will be written to disk.

After the page is modified in memory, the Storage Engine calls the Log Manager, which writes the data to the log. But this does not happen immediately: the writes go through the Log Buffer, which has a size of 60 KB and is used to optimize performance when working with the log. Data is flushed from the buffer to the log file when:

  1. The buffer fills up, and its contents are written to the log.
  2. A user executes sys.sp_flush_log.
  3. A transaction commits, and the entire Log Buffer is written to the log.

When the data is written to the log, the modification is confirmed, and SQL Server informs the client about it.

According to this logic, the data does not yet reach the data file. SQL Server uses asynchronous mechanisms for writing data to the data files. There are two such mechanisms:

  1. Lazy Writer, which runs on a time basis and checks whether there is sufficient memory for SQL Server. If there is not, pages are forced out of memory and written to the data file; modified pages are flushed to disk and thrown out of memory.
  2. Checkpoint, which scans dirty pages roughly once a minute, flushes them to disk and leaves them in memory.

Suppose a lot of small transactions are running in the system, for example transactions that modify data row by row. After each modification, the data is transferred from the Log Buffer to the transaction log. Remember that all modifications get into the log synchronously, and other transactions have to wait for their turn.

Let me illustrate:

USE [master]
GO
SET NOCOUNT ON

IF DB_ID('TT') IS NOT NULL BEGIN
    ALTER DATABASE TT SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE TT
END
GO

CREATE DATABASE TT
GO
ALTER DATABASE TT
    MODIFY FILE (NAME = N'TT', SIZE = 25MB, FILEGROWTH = 5MB)
GO
ALTER DATABASE TT
    MODIFY FILE (NAME = N'TT_log', SIZE = 25MB, FILEGROWTH = 5MB)
GO

USE TT
GO

CREATE TABLE dbo.tbl (
      a INT IDENTITY PRIMARY KEY
    , b INT
    , c CHAR(2000)
)
GO

IF OBJECT_ID('tempdb.dbo.#temp') IS NOT NULL
    DROP TABLE #temp
GO

SELECT t.[file_id], t.num_of_writes, t.num_of_bytes_written
INTO #temp
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) t

DECLARE @WaitTime BIGINT
      , @WaitTasks BIGINT
      , @StartTime DATETIME = GETDATE()
      , @LogRecord BIGINT = (
              SELECT COUNT_BIG(*)
              FROM sys.fn_dblog(NULL, NULL)
          )

SELECT @WaitTime = wait_time_ms
     , @WaitTasks = waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE [wait_type] = N'WRITELOG'

DECLARE @i INT = 1

WHILE @i < 5000 BEGIN

    INSERT INTO dbo.tbl (b, c)
    VALUES (@i, 'text')

    SELECT @i += 1

END

SELECT elapsed_seconds = DATEDIFF(MILLISECOND, @StartTime, GETDATE()) * 1. / 1000
     , wait_time = (wait_time_ms - @WaitTime) / 1000.
     , waiting_tasks_count = waiting_tasks_count - @WaitTasks
     , log_record = (
          SELECT COUNT_BIG(*) - @LogRecord
          FROM sys.fn_dblog(NULL, NULL)
       )
FROM sys.dm_os_wait_stats
WHERE [wait_type] = N'WRITELOG'

SELECT [file] = FILE_NAME(o.[file_id])
     , num_of_writes = t.num_of_writes - o.num_of_writes
     , num_of_mb_written = (t.num_of_bytes_written - o.num_of_bytes_written) * 1. / 1024 / 1024
FROM #temp o
CROSS APPLY sys.dm_io_virtual_file_stats(DB_ID(), NULL) t
WHERE o.[file_id] = t.[file_id]

Inserting 5 thousand rows took about 42.5 seconds, and the wait on writing to the log was 42 seconds:

elapsed_seconds  wait_time  waiting_tasks_count  log_record
---------------  ---------  -------------------  ----------
          42.54      42.13                 5003       18748

SQL Server physically accessed the log about 5000 times and wrote a total of about 20 MB:

file    num_of_writes  num_of_mb_written
------  -------------  -----------------
TT                 79               8.72
TT_log           5008              19.65

Delayed Durability is the right choice for such situations. When it is activated, a log write happens only when the Log Buffer is full. You can enable Delayed Durability for the entire database:

ALTER DATABASE TT SET DELAYED_DURABILITY = FORCED 
GO 

or for individual transactions:

ALTER DATABASE TT SET DELAYED_DURABILITY = ALLOWED 
GO 
BEGIN TRANSACTION t 
... 
COMMIT TRANSACTION t WITH (DELAYED_DURABILITY = ON) 
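
Whichever way you set it, the current state per database can be checked in sys.databases (a quick sketch):

-- DISABLED, ALLOWED or FORCED
SELECT name, delayed_durability_desc
FROM sys.databases;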

Let’s enable it for the database and execute the script once again.

The waits disappeared and the query ran for 170ms on my machine:

elapsed_seconds  wait_time  waiting_tasks_count  log_record
---------------  ---------  -------------------  ----------
0.17             0.00       0                    31958

This is because writes to the log were made far less frequently:

file    num_of_writes  num_of_mb_written
------  -------------  -----------------
TT      46             9.15
TT_log  275            12.92

Of course, there is a fly in the ointment. The client is told that the changes are committed before the data physically reaches the log file. In case of a failure, we can lose up to a buffer’s worth of committed data, which would simply be missing from the database.

In my case, safety of the test data is not required, so DELAYED_DURABILITY was set to FORCED for the test database on which the TMetric autotests run, and the next run of all tests took 2.5 minutes.

SQL SERVER - Delayed Durability, or The Story of How to Speed Up Autotests From 11 to 2.5 Minutes tmetric5

All the delays associated with logging now have a minimal impact on overall performance:

wait_type            wait_time  waiting_tasks_count  percentage
-------------------  ---------  -------------------  ----------
PAGEIOLATCH_EX       16.031     61                   43.27
WRITELOG             15.454     787                  41.72
PAGEIOLATCH_UP       2.210      36                   5.96
PAGEIOLATCH_SH       1.472      2                    3.97
LCK_M_SCH_M          0.756      9                    2.04
ASYNC_NETWORK_IO     0.464      735                  1.25
PAGELATCH_UP         0.314      8                    0.84
SOS_SCHEDULER_YIELD  0.154      2759                 0.41
PAGELATCH_EX         0.154      44785                0.41
LCK_M_SCH_S          0.021      7                    0.05
PAGELATCH_SH         0.011      378                  0.02

Let’s summarize the results on Delayed Durability:

  1. Available in all editions, starting with SQL Server 2014.
  2. It can be used if you have a bottleneck on writes to the transaction log (one lazy commit of a large block can be more efficient than many small ones).
  3. Concurrent transactions are less likely to compete for I/O when writing to the log.
  4. When it is enabled, the COMMIT operation does not wait for the write to the transaction log, which can give a significant performance boost in OLTP systems.
  5. You can go ahead and enable Delayed Durability if you are ready to play Russian roulette and, upon an unlucky combination of circumstances, lose approximately 60 KB of data in a failure.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Delayed Durability, or The Story of How to Speed Up Autotests From 11 to 2.5 Minutes

SQL SERVER – Install Error – Could not Find the Database Engine Startup Handle

Note: There are multiple reasons for the above error, and this blog shows just one of them. I am sure this blog will give you a guideline about what to do when you see this install error.

Some errors are generic, and there is no single path / root cause for them. It is important to know at least the basic path one needs to take while troubleshooting. One of the reasons I love my consulting job is that I learn something new every day. I found myself in a learning state again when I was trying to help one of my clients with the installation of SQL Server in a new environment. While installing SQL Server, the setup was stuck at the below error message.

SQL SERVER - Install Error - Could not Find the Database Engine Startup Handle handle-01-800x602

Whenever I see any error in the setup, I start looking at the setup logs. One thing which I have learned in the past is that whenever we see “Could not find the Database Engine startup handle”, it is mostly due to a SQL service startup failure during installation. We had no other option than to hit OK. At the end, I looked into the Summary.txt file as pointed to on the last screen. Here is what I found:

Feature: Database Engine Services
Status: Failed: see logs for details
Reason for failure: An error occurred during the setup process of the feature.
Next Step: Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
Component name: SQL Server Database Engine Services Instance Features
Component error code: 0x851A0019
Error description: Could not find the Database Engine startup handle.

In the setup logs folder, we should have a file named “SQLServer_ERRORLOG_<DATETIME>”, which is a saved copy of the ERRORLOG from when SQL was unable to start. Here is what we found in that file:

Cannot use file ‘M:\MSSQL11.MSSQLSERVER\MSSQL\DATA\master.mdf’ because it was originally formatted with sector size 4096 and is now on a volume with sector size 2097152. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
Cannot use file ‘L:\MSSQL11.MSSQLSERVER\MSSQL\DATA\mastlog.ldf’ because it was originally formatted with sector size 4096 and is now on a volume with sector size 2097152. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.

Since this is about sector size, I referred to this KB article:

https://support.microsoft.com/en-us/kb/926930 (Hard disk drive sector-size support boundaries in SQL Server)

It doesn’t list a sector size anywhere near as large as 2097152. Since the issue was caused by the hardware, I asked them to contact the vendor, who provided a fix.
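
If you want to check what sector sizes a volume actually reports before pointing SQL Server at it, fsutil can help. A sketch, run from an elevated command prompt (the “Bytes Per Physical Sector” field is only reported on Windows 8 / Windows Server 2012 and later):

fsutil fsinfo ntfsinfo D: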

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Install Error – Could not Find the Database Engine Startup Handle

SQL SERVER Installation – FeatureUpgradeMatrixCheck (SQL Server 2016 Feature Upgrade) setup rule failed

One of the best ways to learn about the problems faced by individuals and organizations is to look at the forums for these issues. I have often visited MSDN forums to see if there are some hot or interesting issues. Sometimes it is interesting to look at setup failures and investigate the root cause. They might look trivial, but the best part is that going down the troubleshooting path can be refreshing and gives you immense satisfaction about how things work inside SQL Server or Windows. This blog is inspired by one such error that I encountered, and I found it interesting enough to share here for the benefit of a larger audience. Let us learn about this SQL Server installation error.

Recently there was a post reporting the failure of the FeatureUpgradeMatrixCheck rule.

Rule “SQL Server 2016 Feature Upgrade” failed.

The specified edition upgrade from source Enterprise edition to target Evaluation edition is not supported. For information about supported upgrade paths, see the SQL Server 2016 version and edition upgrade in Books Online.

SQL SERVER Installation - FeatureUpgradeMatrixCheck (SQL Server 2016 Feature Upgrade) setup rule failed RuleFailed-800x597

I looked into Detail.txt and I have highlighted the line which explains the crux of the problem.

(17) 2016-07-11 03:05:42 Slp: Initializing rule : SQL Server 2016 Feature Upgrade
(17) 2016-07-11 03:05:42 Slp: Rule is will be executed : True
(17) 2016-07-11 03:05:42 Slp: Init rule target object: Microsoft.SqlServer.Configuration.SetupExtension.SkuUpgradeRule
(17) 2016-07-11 03:05:42 Slp: — SkuUpgradeRule : Rule ‘FeatureUpgradeMatrixCheck’ looking for previous version upgrade data for feature package ‘sql_as_Cpu64’.
(17) 2016-07-11 03:05:42 Slp: — SkuUpgradeRule : Rule ‘FeatureUpgradeMatrixCheck’ feature package ‘sql_as_Cpu64’ found no upgrade features.
(17) 2016-07-11 03:05:42 Slp: — SkuUpgradeRule : Rule ‘FeatureUpgradeMatrixCheck’ looking for previous version upgrade data for feature package ‘sql_engine_core_inst_Cpu64’.
(17) 2016-07-11 03:05:42 Slp: — SkuUpgradeRule : Found feature package ‘sql_engine_core_inst_SQL14_Cpu64’ with SkuValue=ENTERPRISE (Enterprise) ProductVersion=12.0.2000.8
(17) 2016-07-11 03:05:42 Slp: — SkuUpgradeRule : Rule ‘FeatureUpgradeMatrixCheck’ found sourceVersion 12.0.0 and edition ENTERPRISE for feature package ‘sql_engine_core_inst_Cpu64’.
(17) 2016-07-11 03:05:42 Slp: — SkuPublicConfigObject : ValidateSkuMatrix checking sku matrix for sourceVersion=12.0.0 sourceEdition=ENTERPRISE (Enterprise) sourceArchitecture=X64 targetEdition=EVAL (Evaluation) targetArchitecture=X64
(17) 2016-07-11 03:05:42 Slp: — SkuPublicConfigObject : ValidateSkuMatrix source and target architecture match.
(17) 2016-07-11 03:05:42 Slp: — SkuPublicConfigObject : ValidateSkuMatrix did not find a match in sku matrix .
(17) 2016-07-11 03:05:42 Slp: — SkuUpgradeRule : Rule ‘FeatureUpgradeMatrixCheck’ feature package ‘sql_engine_core_inst_Cpu64’ is blocking upgrade.
(17) 2016-07-11 03:05:42 Slp: — SkuUpgradeRule : Rule ‘FeatureUpgradeMatrixCheck’ detection result: IsValidFeatureUpgrade=False
(17) 2016-07-11 03:05:42 Slp: Evaluating rule : FeatureUpgradeMatrixCheck
(17) 2016-07-11 03:05:42 Slp: Rule running on machine: LAB-SQL-SERVER
(17) 2016-07-11 03:05:42 Slp: Rule evaluation done : Failed
(17) 2016-07-11 03:05:42 Slp: Rule evaluation message: The specified edition upgrade from source Enterprise edition to target Evaluation edition is not supported. For information about supported upgrade paths, see the SQL Server 2016 version and edition upgrade in Books Online.

Whenever there are issues with an upgrade, I always look at the topic “Supported Version and Edition Upgrades”.

If we look at the source edition, it’s ENTERPRISE, and the target is EVAL. Here is the document for SQL Server 2016.

As per Books Online, “SQL Server 2014 Enterprise” can only be upgraded to “SQL Server 2016 Enterprise”, and hence the error.
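
Before attempting an edition upgrade, it is worth confirming what the source instance actually reports. A quick check:

SELECT SERVERPROPERTY('Edition') AS Edition
     , SERVERPROPERTY('ProductVersion') AS ProductVersion
     , SERVERPROPERTY('ProductLevel') AS ProductLevel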

Have you ever faced a similar rule failure? How did you fix it?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER Installation – FeatureUpgradeMatrixCheck (SQL Server 2016 Feature Upgrade) setup rule failed

SQL SERVER – Unable to Create Listener for AlwaysOn Availability Group in Azure via Template Deployment

Recently I was trying to help a customer who was deploying an AlwaysOn Availability Group in Microsoft Azure via template deployment.

SQL SERVER - Unable to Create Listener for AlwaysOn Availability Group in Azure via Template Deployment microsoft-azure-800x230

The template was failing with the below error:

StatusMessage
{
“status”: “Failed”,
“error”: {
“code”: “ResourceDeploymentFailure”,
“message”: “The resource operation completed with terminal provisioning state ‘Failed’.”,
“details”: [
{
“code”: “DeploymentFailed”,
“message”: “At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.”,
“details”: [
{
“code”: “Conflict”,
“message”: “{\r\n \”status\”: \”Failed\”,\r\n \”error\”: {\r\n \”code\”: \”ResourceDeploymentFailure\”,\r\n \”message\”: \”The resource operation completed with terminal provisioning state ‘Failed’.\”,\r\n \”details\”: [\r\n {\r\n \”code\”: \”VMExtensionProvisioningError\”,\r\n \”message\”: \”VM has reported a failure when processing extension ‘configuringAlwaysOn’. Error message: \\\”DSC Configuration ‘CreateFailoverCluster’ completed with error(s). Following are the first few: PowerShell DSC resource MicrosoftAzure_xSqlAvailabilityGroupListener failed to execute Set-TargetResource functionality with error message: The running command stopped because the preference variable \\\”ErrorActionPreference\\\” or common parameter is set to Stop: An error occurred while attempting to bring the resource ‘SQLAUTHORITY-AG’ online.\\n The cluster resource could not be brought online by the resource monitor The SendConfigurationApply function did not succeed.\\\”.\”\r\n }\r\n ]\r\n }\r\n}”
}
] }
] }

The message looks very ugly, but it is a direct copy-paste from Azure. Here is the relevant part:

An error occurred while attempting to bring the resource ‘SQLAUTHORITY-AG’ online. The cluster resource could not be brought online by the resource monitor

I looked further into the Event log on the node and found the below interesting error.

Log Name: System
Source: Microsoft-Windows-FailoverClustering
Date: 7/26/2016 6:11:45 PM
Event ID: 1193
Task Category: Network Name Resource
Level: Error
Keywords:
User: SYSTEM
Computer: demo.sqlauthority.local
Description:
Cluster network name resource ‘alwayson-ag-sqlauth-listener01’ failed to create its associated computer object in domain ‘sqlauthority.local’ for the following reason: Resource online.
The associated error code is: -1073741790
Please work with your domain administrator to ensure that:
– The cluster identity ‘win-cluster$’ can create computer objects. By default all computer objects are created in the ‘Computers’ container; consult the domain administrator if this location has been changed.
– The quota for computer objects has not been reached.
– If there is an existing computer object, verify the Cluster Identity ‘win-cluster$’ has ‘Full Control’ permission to that computer object using the Active Directory Users and Computers tool.

The interesting thing was that the deployment worked the first time but failed the second time and on all subsequent attempts.

Initially, I thought it was the well-known permission issue with the CNO and VCO, so I went ahead and pre-staged them. (Prestage Cluster Computer Objects in Active Directory Domain Services)

Later, we found that the listener name had been created in AD as “alwayson-ag-sq” due to the 15-character limit on computer names. When we deployed again, it kept creating the same truncated name, even though I had specified ‘alwayson-ag-sqlauth-listener01’ in the template.

Solution: Make sure computer names stay within the 15-character limit. (Naming conventions in Active Directory for computers, domains, sites, and OUs)
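
If you want to audit listener names on an existing deployment, here is a sketch using the documented AlwaysOn catalog views; any name longer than 15 characters is a candidate for trouble:

SELECT ag.[name] AS availability_group
     , l.dns_name
     , LEN(l.dns_name) AS name_length
FROM sys.availability_group_listeners l
JOIN sys.availability_groups ag ON ag.group_id = l.group_id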

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Unable to Create Listener for AlwaysOn Availability Group in Azure via Template Deployment


SQL SERVER – Setting Firewall Settings With Azure SQL Server VMs

Recently I delivered a session on using SQL Server on Azure, and during this session one of the questions was about accessing SQL Server in a hybrid setup. This is a typical scenario wherein a developer wants to connect to SQL Server running on an Azure VM from an application sitting inside his own environment. Though there were specific steps that needed to be taken, he was getting an error. Let us learn how to set up the firewall for Azure SQL Server VMs.

After a moment of debugging, I realized it was a firewall problem. As an initial test I disabled the firewall, and connectivity started working. I enabled it again immediately and then started configuring a firewall exception for SQL Server traffic. This blog is a tutorial on how I configured it.

Steps to enable the SQL Server traffic

To enable connections to SQL Server from on-premises applications, you must open port 1433 on the SQL Server VM. This example uses the default port; whether you can change SQL Server to listen on some other port is a topic reserved for another time. The following steps will lead you through this:

  1. Log in to the SQL Server VM and open the firewall settings
  2. Create an inbound rule for TCP port 1433 to allow connections
    SQL SERVER - Setting Firewall Settings With Azure SQL Server VMs SQL-Azure-VM-Firewall-Settings-01
    SQL SERVER - Setting Firewall Settings With Azure SQL Server VMs SQL-Azure-VM-Firewall-Settings-02
  3. Follow the default values in the wizard for the next steps, name the rule ‘SQL TCP’ and click OK.
    SQL SERVER - Setting Firewall Settings With Azure SQL Server VMs SQL-Azure-VM-Firewall-Settings-03

Alternatively, you can execute this netsh command from an elevated prompt to configure the inbound firewall rule (note this variant allows the sqlservr.exe program rather than a specific port):

netsh advfirewall firewall add rule name="SQL Server (TCP-In)" program="C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\sqlservr.exe" dir=in action=allow protocol=TCP
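
To confirm which address and port the instance is actually serving your session on, here is a quick probe using the documented sys.dm_exec_connections DMV (a sketch; run it from a connected session, for example locally on the VM):

SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID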

This is an important configuration step when working with VMs on Azure to access SQL Server. Though I have started to explore some of the core capabilities that come with Azure, I still see nuances that are quite different from how an on-premises SQL Server behaves. I am sure DBAs and administrators struggle to understand these use-cases that need to be configured, because they have been used to connecting to SQL Server directly when working in their data centers.

Having said that, I felt this was something common that people are already aware of. But since I have handled it in my consulting a couple of times in the past month, I decided to write it down too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Setting Firewall Settings With Azure SQL Server VMs

SQL Complete – Smart Code Completion and SQL Formatting

A week ago, I saw a notification about the release of dbForge SQL Complete 5.5. As a long-time user of this product, I updated to the new version right away. A week later, I decided to share my impressions of it.

The first thing that immediately caught my attention was the integration with SSMS 2016, which has recently become my main working tool. The new version of SSMS includes a huge number of innovations, including Live Query Statistics, which won my heart…

In addition to the SSMS 2016 integration, support for the new SQL Server 2016 syntax was added to SQL Complete. Previously, when writing articles on the new syntax, I sometimes had to look things up on MSDN; now there is no need:

SQL Complete - Smart Code Completion and SQL Formatting sqlcomplete01

Previously, when typing GROUP or ORDER, BY was added automatically, and it seemed like a small revolution. In the new version, I liked the further development of this idea – prompting larger syntactic constructions:

SQL Complete - Smart Code Completion and SQL Formatting sqlcomplete02

It is worth noting that phrase suggestion was implemented for DDL constructions:

SQL Complete - Smart Code Completion and SQL Formatting sqlcomplete03

Note also the improved suggestions inside CROSS/OUTER APPLY constructions. I cannot speak for others, but these constructions are very useful to me. With their help, you can do UNPIVOT, as shown in this picture:

SQL Complete - Smart Code Completion and SQL Formatting sqlcomplete04

In individual cases, they can also influence the execution plan, forcing the optimizer to choose an Index Seek when accessing data.
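
Since the picture does not show the code itself, here is a minimal self-contained sketch of the UNPIVOT-via-CROSS APPLY trick (the inline table and column names are made up for illustration):

SELECT t.id, x.metric, x.[value]
FROM (VALUES (1, 10, 20, 30)) t (id, q1, q2, q3)
CROSS APPLY (VALUES
      ('q1', t.q1)
    , ('q2', t.q2)
    , ('q3', t.q3)
) x (metric, [value])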

The context menu now includes the new command “Execute to cursor”, which came in handy a couple of times in practice:

SQL Complete - Smart Code Completion and SQL Formatting sqlcomplete05

What else can I say? Two new formatting profiles were added:

SQL Complete - Smart Code Completion and SQL Formatting sqlcomplete06

Now you can save significant time without having to set up the formatting style for each construction from scratch. Here is an example of a badly formatted query:

select c.customername, o.orderdate,
ol.quantity
from sales.customers c join sales.orders o
on c.customerid = o.customerid join sales.orderlines
ol on o.orderid = ol.orderid
where c.isoncredithold = 0 and ol.unitprice > 0 order by o.orderdate desc,
ol.quantity

If you choose the first profile, after formatting you will get the following beauty:

SELECT c.CustomerName
     , o.OrderDate
     , ol.Quantity
FROM Sales.Customers c
    JOIN Sales.Orders o ON c.CustomerID = o.CustomerID
    JOIN Sales.OrderLines ol ON o.OrderID = ol.OrderID
WHERE c.IsOnCreditHold = 0
    AND ol.UnitPrice > 0
ORDER BY o.OrderDate DESC
       , ol.Quantity

If formatted with the second one, the result will be the following:

SELECT c.CustomerName,
       o.OrderDate,
       ol.Quantity
FROM Sales.Customers c
JOIN Sales.Orders o ON c.CustomerID = o.CustomerID
JOIN Sales.OrderLines ol ON o.OrderID = ol.OrderID
WHERE c.IsOnCreditHold = 0
    AND ol.UnitPrice > 0
ORDER BY o.OrderDate DESC,
         ol.Quantity

A small thing, but it makes a difference.

The rest of the improvements are “under the bonnet” – the speed of working with large scripts (more than 1 MB) has increased. This is especially relevant for those who often need to edit schema or data synchronization scripts.

Anyone who is interested in trying out the new version of SQL Complete can download it here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL Complete – Smart Code Completion and SQL Formatting

SQL SERVER – Event 17058 – initerrlog: Could not Open Error Log File

If you have ever interacted with me or used my services, you would notice that I ask for the ERRORLOG in almost every engagement with my clients. Once a friend of mine contacted me, and when I asked for the ERRORLOG, he informed me that there was no ERRORLOG file and SQL was not starting. I suggested that we analyze the event log for further details, and interestingly, we found the following details in event 17058 about initerrlog: Could not Open Error Log File.

Log Name: Application
Source: MSSQLSERVER
Event ID: 17058
Task Category: Server
Level: Error
Description:
initerrlog: Could not open error log file ”. Operating system error = 3(The system cannot find the path specified.).

We finally found that the error occurred due to insufficient privileges for the SQL Server service account on the Log directory: E:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log

Here is the fix of the issue.

  1. Start SQL Server Configuration Manager.
  2. Select ‘SQL Server Services’ from the left menu options.
  3. On the right panel, right click on ‘SQL Server (MSSQLSERVER)’ and click ‘Properties’. Make sure you choose the correct service.
  4. Click the ‘Startup Parameters’ tab.
    SQL SERVER - Event 17058 - initerrlog: Could not Open Error Log File initerrlog-01
  5. Note the location after the -e parameter.
  6. Browse to the Log location “E:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log”.
  7. Right click on the “Log” folder, click Properties and visit the “Security” tab. Check the SQL Server service account’s permission on this folder and give it proper access (a command-line equivalent is shown after these steps).
  8. Now restart the SQL Server service. If you face the same error again, try changing to a highly privileged service account like “Local System”.
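
If you prefer the command line, an equivalent grant can be made with icacls from an elevated prompt. This is a sketch: the path is the one from this case, and NT Service\MSSQLSERVER is an assumption – use the account your instance actually runs under:

icacls "E:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log" /grant "NT Service\MSSQLSERVER:(OI)(CI)F"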

After giving permission, we were able to start the SQL Service. Have you faced a similar error?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Event 17058 – initerrlog: Could not Open Error Log File

SQL SERVER – Error: 14258 – Cannot perform this operation while SQLServerAgent is starting. Try again later

One of my clients asked for assistance in fixing an interesting issue. They informed me that they were not able to run any job in SQL Agent. When they tried to run a job manually, they saw the below message related to SQLServerAgent.

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Cannot perform this operation while SQLServerAgent is starting. Try again later. (Microsoft SQL Server, Error: 14258)

My initial search turned up the KB article “BUG: DTC Transactions May Fail When SQL Server Is Running in Lightweight Pooling Mode”.

But the above was not applicable, as we were running SQL Server 2012 and fiber mode was not enabled. I went ahead and asked for the SQLAgent.out file to see if there was something interesting.

2016-07-21 16:22:49 – ? [297] SQLServer Message: 15457, Configuration option ‘Agent XPs’ changed from 0 to 1. Run the RECONFIGURE statement to install. [SQLSTATE 01000] (DisableAgentXPs)
2016-07-21 16:22:50 – ? [100] Microsoft SQLServerAgent version 10.50.4000.0 (x86 unicode retail build) : Process ID 3748
2016-07-21 16:22:50 – ? [101] SQL Server IND-SAP-SQL version 10.50.4000 (0 connection limit)
2016-07-21 16:22:50 – ? [102] SQL Server ODBC driver version 10.50.4000
2016-07-21 16:22:50 – ? [103] NetLib being used by driver is DBNETLIB.DLL; Local host server is IND-SAP-SQL
2016-07-21 16:22:50 – ? [310] 2 processor(s) and 2048 MB RAM detected
2016-07-21 16:22:50 – ? [339] Local computer is IND-SAP-SQL running Windows NT 5.2 (3790) Service Pack 2
2016-07-21 16:22:52 – ! [364] The Messenger service has not been started – NetSend notifications will not be sent
2016-07-21 16:22:52 – ? [129] SQLSERVERAGENT starting under Windows NT service control
2016-07-21 16:22:52 – ? [392] Using MAPI32.DLL from C:\WINNT\SYSTEM32 (version 1.0.2536.0)
2016-07-21 16:22:52 – ? [196] Attempting to start mail session using profile ‘SQLSrvrSvc’…

When I checked on my machine, my last line was not the same. So it looked like MAPI32.dll was being used. I checked further and found that they were using SQL Mail, which uses a MAPI client to send email. We changed the setting to use Database Mail, as shown below.

SQL SERVER - Error: 14258 - Cannot perform this operation while SQLServerAgent is starting. Try again later sql-agt-01-800x460

Once we changed the mail profile to Database Mail, we were able to run the jobs manually.
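
As a side note, if Database Mail has never been used on an instance, the ‘Database Mail XPs’ configuration option must be enabled first. A standard sketch:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'Database Mail XPs', 1
RECONFIGURE
GO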

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Error: 14258 – Cannot perform this operation while SQLServerAgent is starting. Try again later

SQL SERVER – How to Recover Truncated or Deleted Data When a Database Backup is or is not Available

Recovering lost data that was deleted or truncated can be fairly quick and easy, depending on the environment in which the database resides, the recovery measures implemented before and after the data loss occurred, and the tools used for the job. In this blog post we will learn how to recover truncated or deleted data when a database backup is (or is not) available.

In general, there are three specific scenarios for recovery when the data is lost:

Scenario #1

A full backup has been created before the unintended delete/truncate operations occurred, and no additional database changes have been performed since. This is by far the easiest data loss scenario to solve: simply restoring the full database backup brings the database back to its pre-disaster state, and since no additional changes have occurred, the recovery is complete with that single restore.
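
For reference, a minimal restore sketch (the database name and backup path are hypothetical):

RESTORE DATABASE MyDB
FROM DISK = N'D:\Backup\MyDB_Full.bak'
WITH REPLACE, RECOVERY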

Scenario #2

Same as in the first scenario, the full database backup was created prior to the deletes/truncates, but these were followed by additional changes which need to be preserved. If we restored the full database backup as in the first case, we would lose all of these additional changes, so another solution is needed. The most obvious one is to restore our full database backup as a different database, then extract the tables affected by the truncate/delete operations and import them into the original database.
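
A sketch of restoring the backup under a different name. The logical file names and paths here are hypothetical; RESTORE FILELISTONLY shows the real logical names inside the backup:

RESTORE FILELISTONLY
FROM DISK = N'D:\Backup\MyDB_Full.bak'

RESTORE DATABASE MyDB_Recovered
FROM DISK = N'D:\Backup\MyDB_Full.bak'
WITH MOVE N'MyDB' TO N'D:\Data\MyDB_Recovered.mdf'
   , MOVE N'MyDB_log' TO N'D:\Data\MyDB_Recovered_log.ldf'
   , RECOVERY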

Even though this solution is easy for smaller databases, in some environments it is not possible to fully restore a database backup, due to the resources consumed (backup and database size, restoration time, restoration performance impact…). The logical solution here would be to extract only the affected tables from the full database backup, but this cannot be done via native SQL Server mechanisms, nor via SQL Server Management Studio.

In order to extract specific tables from a database backup, we can use ApexSQL Recover, a powerful recovery tool for Microsoft SQL Server which can recover data and structure lost due to delete, drop and truncate operations, recover deleted BLOBs, or extract BLOBs or specific tables directly from database backups without the need to restore them first.

To extract the tables:

  1. Install ApexSQL Recover on your workstation (the tool can perform recovery either locally or remotely, so it can be installed directly on the production server to complete the recovery/extraction locally, or on a workstation of the user’s preference to connect to the SQL Server remotely and perform the recovery from there)
  2. Start ApexSQL Recover and choose the option to extract data from a database backup in the main ribbon
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover1
  3. Add database backup file by clicking on the ‘Add file’ button and selecting your full database backup. If you have consecutive transaction log backups which need to be used for table extraction, add them in this step, and check them together with the full database backup
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover2
  4. In the next wizard step, ApexSQL Recover will show the list of tables in the database/backup, and it is recommended to exclude all tables which are of no interest for recovery and to check only tables which were affected by the truncate/delete operations
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover3
  5. In the next step of the wizard, offered the choice between saving the recovery script to a .sql file and recovering to a new database, let’s opt for the first option, which enables us to inspect the recovery script and then execute it against the production database to complete the recovery. We can also specify the script location.
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover4
  6. In the final step, we should select the output type. Since we want to recover only data lost due to truncate/delete operations, choose the ‘Extract data only’ option and click next
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover5
  7. After ApexSQL Recover completes processing, the application will inform the user of the recovery outcome; users can then inspect the script directly in the application’s internal editor, or open it manually in SQL Server Management Studio or a similar tool, from where it should be executed against the production database to complete the recovery process
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover6

Scenario #3

A different scenario, and by far the worst one, is when a disastrous truncate/delete occurs, followed by additional important transactions which we also do not want to lose, but there is no full database backup to use as the recovery source. Since we do not have a database backup, restoring it, or extracting table data with ApexSQL Recover is not an option.

Luckily, we can use ApexSQL Recover to perform the recovery of truncated/deleted data, since it does not require a full database backup for the recovery process, but instead utilizes the information in the LDF and MDF files to roll back the unintended changes. Because SQL Server frequently overwrites information in the MDF file, it is important, before engaging in the actual recovery, to ensure that the information within is kept safe and not overwritten by incoming traffic.

In general, here are the immediate steps to take once the disaster is detected:

  1. Change the database to ‘Read only’ mode to prevent overwriting of the information in the MDF file
  2. Take the database offline and make copies of the database LDF and MDF files, then bring the database back online – these copies can then be used to create additional copies of the database from which recovery information can be read, enabling the production database to continue receiving changes while the recovery process is prepared and performed (T-SQL sketches of both steps follow this list).
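
In T-SQL, these precautionary steps look roughly like this (the database name is hypothetical, and both steps disconnect users, so plan for the downtime):

-- prevent further writes from overwriting recovery information in the MDF
ALTER DATABASE MyDB SET READ_ONLY WITH ROLLBACK IMMEDIATE

-- take the database offline so the MDF/LDF files can be copied at the OS level
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE
-- ...copy the MDF and LDF files here...
ALTER DATABASE MyDB SET ONLINE

-- once the recovery is prepared, return the database to read-write
ALTER DATABASE MyDB SET READ_WRITE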

Even though both of these measures mean that your database will have some downtime, it is important to take these precautionary steps whenever possible to secure the highest chance of a successful recovery.

As said above, we’ll use ApexSQL Recover to connect to the database affected by unintended truncate/delete operations and generate a recovery script. Here is how:

  1. Start ApexSQL Recover and choose to recover ‘Deleted data’ or ‘Truncated data’, depending on the operation which you need to recover from – the remaining steps of the wizard will be the same regardless of this choice
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover7
  2. In the next step, provide the connection details to your SQL Server instance, provide connection credentials and choose the database for recovery
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover8
  3. In the next step, we can go with any of the three options, depending on our case:
    1. ‘Help me decide’ option will guide the user through a quick series of questions in order to provide the most appropriate resources
    2. ‘Add transaction logs’ option should be used if we are creating transaction log backups so we can use them in addition to the MDF and LDF files as the recovery source
    3. ‘No additional transaction logs are available’ option should be chosen if our transaction log is not being backed up
      SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover9
  4. Regardless of the choice in the previous step, once we proceed, we will arrive at the step used to determine the recovery window – the time frame that will be considered for the recovery. If we know the time frame in which the disaster occurred, we should specify it as narrowly as possible, which will result in faster processing and a more accurate recovery (additional delete/truncate operations may have occurred close to the disastrous ones, and we will want to exclude those from the recovery process by using proper date/time filters)
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover10
  5. The next step allows us to further filter the recovery and narrow it down to specific tables, so it is recommended to include only the tables affected by the disastrous truncates/deletes
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover11
  6. Same as when extracting table information in scenario #2, we will choose to create a recovery script; once the processing is finished, the recovery script can be inspected and finally executed against the production database to complete the recovery process
    SQL SERVER - How to Recover Truncated or Deleted Data When a Database Backup is or is not Available apexrecover12

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – How to Recover Truncated or Deleted Data When a Database Backup is or is not Available
