Channel: SQL Archives - SQL Authority with Pinal Dave

SQL SERVER – Installation Fails With Error – A Constraint Violation Occurred


In the recent past, I have helped around 10 customers who faced similar problems while installing a SQL Server cluster on Windows. I thought it would be a good idea to pen it down as a blog post so that it can help others in the future. In this blog post, we will discuss how installation fails with the error – A Constraint Violation Occurred.

The issue we saw was that SQL Server cluster setup would try to create a network name resource in Failover Cluster Manager and fail. Here is the message we would see in the setup:

The cluster resource ‘SQL Server (SQLSQL)’ could not be brought online due to an error bringing the dependency resource ‘SQL Network Name (SAPSQL)’ online. Refer to the Cluster Events in the Failover Cluster Manager for more information.
Click ‘Retry’ to retry the failed action, or click ‘Cancel’ to cancel this action and continue setup.

When we looked at the event log, we saw the message below (event ID 1194):

Log Name: System
Source: Microsoft-Windows-FailoverClustering
Date: 20/06/2016 19:55:45
Event ID: 1194
Task Category: Network Name Resource
Level: Error
Keywords:
User: SYSTEM
Computer: NODENAME1.internal.sqlauthority.com
Description:
Cluster network name resource ‘SQL Network Name (SAPSQL)’ failed to create its associated computer object in domain ‘internal.sqlauthority.com’ during: Resource online.
The text for the associated error code is: A constraint violation occurred.
Please work with your domain administrator to ensure that:
– The cluster identity ‘WINCLUSTER$’ has Create Computer Objects permissions. By default all computer objects are created in the same container as the cluster identity ‘WINCLUSTER$’.
– The quota for computer objects has not been reached.
– If there is an existing computer object, verify the Cluster Identity ‘WINCLUSTER$’ has ‘Full Control’ permission to that computer object using the Active Directory Users and Computers tool.

Another client got an “Access is denied” message instead of “A constraint violation occurred” in the same event ID. My clients told me they had logged in as a domain admin, so access denied seemed impossible. The error from another client is below.

[Image: setup-err-01]

I explained to all of them that when a network name is created in a cluster, the cluster contacts the Active Directory (AD) domain controller (DC) via the Windows cluster name computer account, also called the CNO (Cluster Name Object). So whatever error we are seeing is possible because it is the CNO, not the domain admin account (the logged-in Windows user account), that is used to create the computer object for SQL Server in AD.

To solve this problem, we logged into the domain controller machine and created the computer account SAPSQL (called the VCO – Virtual Computer Object), then gave the cluster name object WINCLUSTER$ Full Control on that computer account. If we read the error message carefully, the solution is already listed there. We then clicked the Retry option in the setup, and the setup continued and completed successfully.

Solution/Workaround:

Here are the detailed steps (generally done on a domain controller by domain admin):

  1. Start > Run > dsa.msc. This will bring up the Active Directory Users and Computers UI.
  2. Under the View menu, choose Advanced Features.
  3. If the SQL Virtual Server name is already created, search for it; otherwise, go to the appropriate OU and create the new computer object [VCO] under it.
  4. Right click on the new object created and click Properties.
  5. On the Security tab, click Add. Click Object Types and make sure that Computers is selected, then click Ok.
  6. Type the name of the CNO and click Ok. Select the CNO and under Permissions click Allow for Full Control permissions.
  7. Right-click the newly created VCO and disable it.

This is also known as pre-staging of the VCO.
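Assuming the ActiveDirectory PowerShell module is available on the domain controller, the same pre-staging can be sketched from the command line. The OU path below is hypothetical; substitute your own, and keep the VCO/CNO names from your error message:

```powershell
# Sketch only: pre-stage the VCO (SAPSQL) and grant the CNO (WINCLUSTER$) Full Control.
# The OU path "OU=SQLServers,..." is a placeholder for your environment.
Import-Module ActiveDirectory

# Create the VCO in the target OU, disabled (as in step 7 above)
New-ADComputer -Name "SAPSQL" `
    -Path "OU=SQLServers,DC=internal,DC=sqlauthority,DC=com" `
    -Enabled $false

# Grant the cluster identity Full Control (GA = Generic All) on the VCO
dsacls "CN=SAPSQL,OU=SQLServers,DC=internal,DC=sqlauthority,DC=com" /G "INTERNAL\WINCLUSTER$:GA"
```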

I hope this helps someone save time and resolve the issue without waiting for someone else’s assistance. Do let me know if you have ever encountered the same.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Installation Fails With Error – A Constraint Violation Occurred


SQL SERVER – A Timeout (30000 milliseconds) was Reached While Waiting for a Transaction Response from the MSSQLSERVER


[Image: anothererror]

Recently I was contacted by a client who reported a very strange error on their SQL Server machine. These consulting engagements sometimes get the best out of you when it comes to troubleshooting. They reported that they were seeing a timeout error. My first question was whether it was a connection timeout or a query timeout, a difference I have explained in an earlier blog post.

They said that they were seeing the error below in the System event log, and during that time they were not able to connect to SQL Server.

Event ID: 7011
Message: A timeout (30000 milliseconds) was reached while waiting for a transaction response from the MSSQLSERVER service.

Once it happened, they were not able to stop the SQL Server service. I asked how they reproduce the error or hang situation, and strangely they said that it happens when they expand an Oracle linked server in SQL Server Management Studio!

I told them to reproduce the error. As soon as they expanded “Catalogs” under the linked server to Oracle, SSMS was stuck in “expanding”. Luckily, I was with them, and as soon as the hang was reproduced, I connected via a DAC connection. I could see a PREEMPTIVE_OS_GETPROCADDRESS wait for the SSMS query. As per my internet search, this wait occurs when loading a DLL. In this case, the wait time was increasing continuously, so I asked them to kill the SQL Server process from Task Manager.
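For reference, a DAC session can be opened with sqlcmd using the admin: prefix, and the waits can be inspected with the standard DMVs; a minimal sketch:

```sql
-- Run from a DAC session (e.g. sqlcmd -S admin:ServerName -E)
-- Shows requests stuck on the preemptive wait seen during the hang
SELECT r.session_id, r.wait_type, r.wait_time, t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type = N'PREEMPTIVE_OS_GETPROCADDRESS';
```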

As a next step, I wanted to know which DLL was causing the issue, so I captured Process Monitor while reproducing the problem. Finally, we were able to nail down that sqlservr.exe was trying to find OraClient11.Dll but was not able to locate it.

It didn’t take much time to conclude that the hang was caused by an incorrect value in the PATH variable for the Oracle DLLs used by the linked server. This is also explained here.

Solution

We found that the PATH variable did not contain C:\oracle11\product\11.2.0\client_1\BIN, which is the folder that contains OraClient11.Dll. As soon as we added that location to the PATH variable, the issue was resolved.
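If you prefer to script the change, the machine-level PATH can be appended via PowerShell (run elevated); note that the SQL Server service must be restarted to pick up the new value. This is a sketch assuming the same Oracle client folder as above:

```powershell
# Append the Oracle client BIN folder to the machine PATH (run as administrator)
$oracleBin = "C:\oracle11\product\11.2.0\client_1\BIN"
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
if ($machinePath -notlike "*$oracleBin*") {
    [Environment]::SetEnvironmentVariable("Path", "$machinePath;$oracleBin", "Machine")
}
# Then restart the SQL Server service so it sees the updated PATH
```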

Reference: Pinal Dave (http://blog.sqlauthority.com)


Database Sharding – How to Identify a Shard Key?


I have written a number of posts in the past on sharded databases; for a starter’s point of view, the concepts can be read at Sharding or No Sharding of Database – Working on my Weekend Project. In a recent discussion at a user group, someone asked me what the rationale is behind building a sharding key and what should be used for sharding their database. The concept, though common, is not black and white as to what should be used as a shard key, but there are guiding principles that can be applied. Let us learn about database sharding.

As the discussion matured, I thought to pen down some of the thoughts that were discussed. These can be used as guiding principles in general.

[Image: shard-key-decision-01]

Identifying an Appropriate Shard Key

One of the most difficult tasks in designing a data model is to identify an appropriate shard key. Many of the modeling decisions depend on this choice, and once committed, you cannot easily change the key. Hence, getting this right at the initial design phase is critical.

A best practice is to choose the most granular level of detail for the shard key.

Consider a SaaS solution provider that offers a service to multiple companies, each of which has a division with numerous assets. Each asset may generate a large amount of data or be used as the pivot point for many database transactions. One data modeler may choose to shard based on company, another on division, and yet another on asset. According to the best practice, asset is a good starting point.

Consider whether any DML queries will traverse the shards

In an ideal data model, no DML actions traverse across shards. As this ideal is very unlikely, the goal is to keep such requirements to a minimum. Such requirements can add complexity to the Data Access layer, reduce the usefulness and availability of RDBMS semantics, and expose your solution to greater risk should a shard become unavailable.

Depending on the database queries, you can decide to have multiple logical groupings of shards, each one capable of being sharded

A logical grouping of shards is referred to as a shard set. A shard set is a collection of entities and database objects that are identical across shards within the shard set. For instance, a logical data model may have distinct functional areas, such as Inventory, Sales, and Customers, each of which could be considered a shard set. Each shard set has a shard key, such as ProductID for inventory and CustomerID for both Sales and Customers. A less common alternative for the Sales shard set is a shard key based on SalesOrderID. The choice depends on whether cross-shardlet queries can be handled.

It is common to encounter a case where logical relationships exist among shard sets, which is a big consideration when defining appropriate boundaries for the functional areas. When a relationship exists, the application tier must compensate for cross-area transactions. In this example, the Sales shard set has a logical relationship with the Products shard set and a reference to ProductID; the Products shard set owns the metadata of the product. Of course, a reasonable option is to treat the Products table in the Sales shard set as a reference table, but this may not always be possible, because there can be references to Products in the Orders shard and the Shipment/Delivery shards as well. Think it through before you make a decision.

Finally, consider the data type of your shard key

The choice of shard key data type impacts database maintenance, troubleshooting, and resource consumption. The most efficient data type has an acceptable domain, is small, has a fixed storage size, and is well suited to the processor. These factors tend to constrain the candidates to integers (smallint, int, and bigint), char(4) to char(8), or binary(4) to binary(8). Of these, bigint (Int64) is the best trade-off, but you can use smaller integer types if your business rules allow.

The shard key must uniquely separate shardlets from one another. For example, if CustomerID is the shard key, then its value is unique among customers. In an entity that has child records of Customer, the shard key serves as part of the record’s primary key.
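To make the last two points concrete, here is a hypothetical T-SQL sketch (the table and column names are illustrative only) where a bigint CustomerID is the shard key and the leading column of every child table’s primary key:

```sql
-- Sketch: CustomerID (bigint) as the shard key within a shard set
CREATE TABLE dbo.Customer (
    CustomerID bigint        NOT NULL,
    Name       nvarchar(200) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY (CustomerID)
);

CREATE TABLE dbo.CustomerOrder (
    CustomerID bigint    NOT NULL,  -- shard key, part of the primary key
    OrderID    bigint    NOT NULL,
    OrderDate  datetime2 NOT NULL,
    CONSTRAINT PK_CustomerOrder PRIMARY KEY (CustomerID, OrderID),
    CONSTRAINT FK_CustomerOrder_Customer
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customer (CustomerID)
);
```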

I am sure some of these discussion points have brought insights; writing them down was a great exercise for me too. Do let me know if there are areas that I missed in my considerations. I would love to learn from you about them too.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – False Error – Cannot bulk load because the file could not be opened. Operating system error code 5 (Access is denied.)


Yes, it is possible for SQL Server to raise a false and misleading error. I was trying to do something pretty straightforward: import data from a text file into a SQL table using the BULK INSERT command of T-SQL. Let us learn about false errors from operating system error code 5 (Access is denied.).

Here is the script to create some sample data.

Create Database ErrorDB
GO
USE ErrorDB
GO
CREATE TABLE [dbo].[SQLAuthotity](
	[Id] [int] NOT NULL,
	[Name] [char](200) NULL,
	[ModDate] [datetime] NULL
)
GO
INSERT INTO SQLAuthotity VALUES (1, 'MR. DAVE', GETDATE())
GO
INSERT INTO SQLAuthotity VALUES (2, 'MISS. DAVE', GETDATE())
GO
INSERT INTO SQLAuthotity VALUES (3, 'MRS. DAVE', GETDATE())
GO

Now, we can export this data to a text file using the bcp command, and then we will import the data back into the table. To export the data, we will use the bcp.exe command below:

bcp.exe ErrorDB..SQLAuthotity out "c:\Temp.txt" -c -T -S.\SQL2014

Once the export completed, I wanted to insert the data back into the table, so I ran this command:

USE ErrorDB
GO
BULK INSERT SQLAuthotity
FROM 'C:\Temp'
WITH
(
  KEEPNULLS,
  FIRSTROW=2,
  FIELDTERMINATOR ='\t',
  ROWTERMINATOR ='\n'
)

[Image: bcp-err-01]

To my surprise, it failed with the error below:

Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file “C:\Temp” could not be opened. Operating system error code 5(Access is denied.).

I tried all combinations of permissions (service account, local admin, Everyone, etc.), but the error would not go away.

I was unable to fix it, so I gave up for the day and decided to start fresh. When I ran the command again later, it worked. Do you know why?

Solution / Workaround

In this special case, if you look at the command which was not working, I had given “C:\Temp” instead of “C:\Temp.txt”. Since I had a folder named Temp on the C drive, the command failed with an error that was not the true error. Yes, the access denied error was a fake error in this case; it should have told me that the name is not a file but a directory.
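One way to catch this up front is the undocumented (but long-standing) xp_fileexist procedure, which reports whether a path is a file or a directory; a quick sketch:

```sql
-- Returns three columns: File Exists, File is a Directory, Parent Directory Exists
-- For a path like 'C:\Temp' that is a folder, "File is a Directory" = 1 and "File Exists" = 0
EXEC master.dbo.xp_fileexist 'C:\Temp';
```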

Have you seen a similar incorrect or unhelpful error in SQL Server?

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Backup to URL Fails in Azure Resource Manager (ARM) Deployment Model with Error – (400) Bad Request


During a previous consulting engagement, I learned something new about the SQL Server feature called Backup to URL. Since it is not so clearly documented, I am sharing it with you. In this blog post, let us learn about Azure Resource Manager. If you have not heard about this feature, you can think of it as a variation of the SQL backup feature: the destination is Azure storage rather than disk or tape. You can read more about this feature in my previous blog SQL SERVER – Backup to Azure Blob using SQL Server 2014 Management Studio.

My client was trying to take a backup to Azure, on a storage account deployed under the ARM deployment model. You can read more about the classic and ARM deployment models at the link below.

Azure Resource Manager vs. classic deployment: Understand deployment models and the state of your resources

Here is how they are seen on https://ms.portal.azure.com  as of today.

[Image: b2url-01]

Here is the error they were getting

Msg 3271, Level 16, State 1, Line 9
A nonrecoverable I/O error occurred on file “” Backup to URL received an exception from the remote endpoint. Exception Message: The remote server returned an error: (400) Bad Request..
Msg 3013, Level 16, State 1, Line 9
BACKUP DATABASE is terminating abnormally.

I was able to reproduce the same error for ARM storage accounts (blob storage).

Solution / Workaround

If you are trying to take a backup to Azure blob storage in the ARM deployment model, make sure the “kind” of the storage account is selected as “General purpose” while creating it. If we choose “Blob storage”, the backup fails.
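For completeness, here is what a SQL Server 2014-style Backup to URL looks like once the storage account is of the right kind. The account, container, credential, and database names below are placeholders, and the secret stays elided:

```sql
-- Credential holds the storage account name and its access key
CREATE CREDENTIAL MyAzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET   = '<storage account access key>';

BACKUP DATABASE SampleDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SampleDB.bak'
WITH CREDENTIAL = 'MyAzureBackupCredential',
     COMPRESSION, STATS = 10;
```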

The below image shows the option to choose while creating it.

[Image: b2url-02]

[Image: b2url-03]

If you already have an account created, then you can check it via looking at the icon.

[Image: b2url-04]

Have you faced this problem?

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – SSMS Error During Restore: No Backupset Selected to be Restored


There are various ways of learning in my current role. Along with client engagements and reading others’ blogs, I also learn from comments and interactions on my own blog. One of my blog readers sent the email below about a backupset error.

Hi Pinal,
I am a Java developer and have very little knowledge of SQL Server. To reproduce the scenario: I am trying to restore my customer’s SQL Server 2008 database (foo.bak) onto our SQL Server 2012. I right-clicked on the database in Management Studio 2012, chose restore database using the source – device option, and selected the foo.bak file for the restore. But it is not recognizing the backup file; I keep getting ‘No backupset selected to be restored‘. It doesn’t show up under the restore plan either. I am not sure what I’m missing.

Please help me!

I asked him for more details and a screenshot, and the below was given to me.

Here is the SSMS image after backup is selected.

[Image: backupset-01]

If we click on the message, we see a pop-up with the text below.

No backupset selected to be restored.

I did some research and shared the possible causes with him.

POSSIBLE REASONS

1) Backup is corrupted or unreadable: To confirm this, we can run the command below:

RESTORE HEADERONLY FROM DISK = '<complete path to backup file>'

If the backup is corrupted, we will not be able to see the complete details in the output.

2) Restore to a lower version: If a backup is taken on a higher version and a restore is attempted on a lower version, we will also get the same error.
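Both possibilities can be checked from T-SQL. Assuming a hypothetical path to the .bak file, RESTORE VERIFYONLY flags an unreadable backup, and the version columns of RESTORE HEADERONLY reveal which SQL Server version wrote it:

```sql
-- Check readability of the backup (fails for a corrupt file)
RESTORE VERIFYONLY FROM DISK = 'C:\Backups\foo.bak';

-- Inspect the version that produced the backup
RESTORE HEADERONLY FROM DISK = 'C:\Backups\foo.bak';
-- Look at SoftwareVersionMajor / SoftwareVersionMinor in the output
```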

He replied and told me that the backup was not readable and was corrupted. They took a fresh backup from the source database and then were able to restore it.

Have you ever encountered a similar error? Please leave a comment.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Understanding Basic Memory Terms of Physical Memory and Virtual Memory


Recently I visited an institute to talk about some database concepts to computer science students. These academic engagements get me closer to young minds, and they are an awesome way for me to present industry concepts in a digestible, simple format. Trust me, this takes double the preparation I normally do for professional sessions, because I need to make the concepts so simple that even someone with no background knowledge can still grasp them. A lot of times this is also an exercise in showing how self-learning can be much more rewarding than textbook teaching. In this blog, let me take up the concepts of physical memory and virtual memory, which took me considerable whiteboard time to explain. I am sure you learned these concepts a while back, but in my opinion this is a useful refresher for all.

[Image: memory]

Physical Memory decoded

Physical memory refers to the volatile hardware main memory modules installed in the system (Dynamic Random Access Memory, or DRAM). Physical memory is the intermediate storage location that is used to store, preserve, and recall data. Access to physical memory is much faster than access to non-volatile disk storage, although hardware innovations are surely making hard disks powerful and super-fast too. We will deal with that in a future post.

Many of the current designs in the memory management module of operating systems and features provided by the processors are influenced by the inherent properties of physical memory. There is always a compromise between the size and speed of memory and its cost.

The higher the physical memory requirements of a system, the costlier it is. In the initial days of computers, memory was quite expensive. This was one of the factors that forced system software programmers and processor manufacturers to come up with different techniques and features in operating systems and processors, which made it possible to load and run more software on a system with a limited amount of memory.

Even today, these designs exist in operating systems and processors in evolved forms and are continuing to evolve, despite modern systems coming with gigabytes of memory at a comparatively lower price.

The physical address space of a system includes the RAM and the IO space of devices. Every single byte in physical RAM is addressed by a physical address. With these concepts loaded, let us move to the next stage of this evolution: virtual memory.

Virtual Memory Decoded

As mentioned earlier, RAM costs pushed developers and processor manufacturers to come up with new techniques and designs in software and hardware. One such innovation is the concept of virtual memory. A system that implements virtual memory provides an illusion to the applications running on it of having more memory available than the actual size of physical memory. This makes it possible to run more applications simultaneously irrespective of the amount of physical memory.

The whole mechanism is transparent to the software; it is handled entirely by the Memory Manager component of the OS with support from the processor. This implies that the total memory requirement of all running applications can exceed the total amount of physical memory on the system.

Windows provides a paged virtual memory model on top of the flat memory addressing model provided by the hardware, and provides a consistent address range for every single process.

Virtual memory makes it possible for application software to be independent of the underlying physical memory. A range of virtual addresses belonging to a process can reside in any part of the RAM at any time without affecting the application itself.

For an application to execute, only the parts of its image that are needed at the time of execution need to be resident in physical memory; other portions of the image need not be resident.
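On a SQL Server box, the distinction is easy to observe: sys.dm_os_process_memory reports both the physical memory in use and the virtual address space committed and reserved by the SQL Server process. A quick sketch:

```sql
SELECT physical_memory_in_use_kb,           -- resident in RAM
       virtual_address_space_committed_kb,  -- backed by RAM or the page file
       virtual_address_space_reserved_kb    -- reserved, not necessarily backed
FROM sys.dm_os_process_memory;
```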

These are some of the basics one needs to know when reading about Physical Memory and Virtual Memory.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Summer Sale and Monowheel Raffle


[Image: monowheelimage]

My good old friends from Devart keep exciting me with great news. As has become a tradition, the company treats its customers with “tasty” discounts on its entire product range. But there’s more: each customer gets a chance to win a monowheel at the end of the summer sale. See the details below.

Devart is a widely recognized provider of state-of-the-art database tools and administration software, ALM solutions, data providers for various database servers, and data integration and backup solutions. The company also implements web and mobile development projects.

The company actively cooperates with the key market shapers. Specifically, Devart is a Microsoft Silver Application Development Partner and a Silver Partner in the Oracle Partner Network (OPN) Specialized program.

With about twenty years of experience in developing database-oriented software, Devart constantly keeps exploring new fields of interest. This results in releases of brand new solutions and ongoing enhancement of existing ones.

The core product lines of the company include:

SQL Tools. The line features the must-have SQL Server tools and tools for MySQL, Oracle, and PostgreSQL database development, management and administration.

Data Connectivity line embraces the most trusted solutions for the data connectivity needs: ADO.NET, ODBC, SSIS, Excel, Delphi Components, dbExpress.

Productivity Tools help developers to write code, conduct code reviews, compare sources, track the working hours with the web application and much more.

Data Services are all-in-one cloud solutions that allow you to integrate, back up, access, and manage your cloud data.

20% Off All Tools

This August is the right time to expand your SQL software collection: till the end of the month you can get any new license from a wide range of the Devart product lines at a 20% discount, including database tools for SQL Server and add-ins for Microsoft SQL Server Management Studio.

Get a Chance to Win a Monowheel

Each purchase made till August 31 gives you a chance to become the happy owner of a super awesome monowheel. The prize will be raffled on September 15. The conditions are pretty straightforward: each purchase gives you one point in the random choice of the winner. The more licenses you buy, the bigger your chance to get the prize.

To sum up, I can say that I have personally used Devart products for SQL Server for a long time and have had a perfect experience with them, as you can see from numerous posts on this blog. Also, the Devart guys have their own blog featuring lots of interesting stuff about SQL Server in general and their products in particular.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Script level upgrade for database ‘master’ failed because upgrade step ‘upgrade_ucp_cmdw.sql’


Recently one of my clients contacted me via Skype to get my thoughts about a cluster failover failure issue. They had a two-node SQL Server clustered instance, and it was running fine on Node1. As soon as they failed over to Node2, the instance was not able to start and was failing in an upgrade script. Let us learn about the error: script level upgrade for database ‘master’ failed because upgrade step ‘upgrade_ucp_cmdw.sql’.

[Image: failover-mdw-01]

2016-08-16 10:07:00.52 Logon Error: 18401, Severity: 14, State: 1.
2016-08-16 10:07:00.52 Logon Login failed for user ‘NT AUTHORITY\SYSTEM’. Reason: Server is in script upgrade mode. Only administrator can connect at this time. [CLIENT: 172.19.10.94] 2016-08-16 10:07:00.55 spid9s Restoring database to multi user mode before aborting the script
2016-08-16 10:07:00.55 spid9s Setting database option MULTI_USER to ON for database sysutility_mdw.
2016-08-16 10:07:00.55 spid9s Error: 14714, Severity: 21, State: 1.
2016-08-16 10:07:00.55 spid9s Attempting to upgrade a Management Data Warehouse of newer version ‘10.50.4042.0’ with an older version ‘10.50.4000.0’. Upgrade aborted.
2016-08-16 10:07:00.55 spid9s Error: 2745, Severity: 16, State: 2.
2016-08-16 10:07:00.55 spid9s Process ID 9 has raised user error 14714, severity 21. SQL Server is terminating this process.
2016-08-16 10:07:00.55 spid9s Error: 912, Severity: 21, State: 2.
2016-08-16 10:07:00.55 spid9s Script level upgrade for database ‘master’ failed because upgrade step ‘upgrade_ucp_cmdw.sql’ encountered error, state 2, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2016-08-16 10:07:00.55 spid9s Error: 3417, Severity: 21, State: 3.
2016-08-16 10:07:00.55 spid9s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
2016-08-16 10:07:00.55 spid9s SQL Trace was stopped due to server shutdown. Trace ID = ‘1’. This is an informational message only; no user action is required.

I had asked them to share the complete ERRORLOG, and the above is the relevant portion of what I saw.

Here is one old blog on the same topic.

SQL SERVER – Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’

Here is an interesting line of the error:

Attempting to upgrade a Management Data Warehouse of newer version ‘10.50.4042.0’ with an older version ‘10.50.4000.0’. Upgrade aborted.

The above line means the upgrade was aborted because SQL Server was trying to upgrade a newer-version Management Data Warehouse using older-version scripts. As per the online documentation, build 10.50.4042 came from MS15-058: Description of the security update for SQL Server 2008 R2 Service Pack 2 GDR: July 14, 2015, and 10.50.4000 is SQL Server 2008 R2 Service Pack 2 (SP2).

This means there was a version mismatch of the SQL Server binaries on the two nodes.

Solution/Workaround

We looked into the version of sqlservr.exe on both nodes and found that the problem node had version 10.50.4000, whereas the working node had version 10.50.4042.

As soon as we applied the same patch on both the nodes, failover was working like a charm!
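If you need to check the binary version yourself (the passive node cannot be queried via T-SQL while SQL Server is offline there), the file version can be read directly; the path below assumes a default SQL Server 2008 R2 instance and will differ per install:

```powershell
# Run on each node; the versions must match for failover to work
$exe = "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe"
(Get-Item $exe).VersionInfo.ProductVersion
```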

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Fix Error – Cannot use the backup file because it was originally formatted with sector size 4096 and is now on a device with sector size 512


Here is a recent email which I received from Madhu. He is a beginner with SQL Server, and when he tried to take a backup from SSMS (SQL Server Management Studio), he got an error related to the backup file. As soon as he sent me the email, I knew the exact problem he was facing, and I was able to help him quickly. Let us first recreate the same error and later see how to fix it.

When he tried to take a backup of his database, he received the following error:

System.Data.SqlClient.SqlError: Cannot use the backup file because it was originally formatted with sector size 4096 and is now on a device with sector size 512. (Microsoft.SqlServer.Smo)

You can see in the following image the error window in SSMS.

[Image: sectorerror1]

Well, the solution for the same is very easy.

Solution / Workaround:

The reason for the error is that the existing backup file was created with a different sector size, and it is not possible to append to it now. In SSMS we can see there is already one file in the destination.

First remove the existing file in SSMS.

[Image: sectorerror2]

Next, add a new file to the backup location.

[Image: sectorerror3]

That’s it! We are done!
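Alternatively, if you want to keep writing to the same file, the WITH FORMAT option rewrites the media header so the old sector size no longer applies. Note that it discards any backup sets already in the file, so use it deliberately; a sketch with placeholder names:

```sql
-- WITH FORMAT reinitializes the media header, so the old sector size no longer matters.
-- WARNING: this overwrites every backup set already in the file.
BACKUP DATABASE SampleDB
TO DISK = 'D:\Backups\SampleDB.bak'
WITH FORMAT, INIT;
```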

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – GetRegKeyAccessMask : Could Not Get Registry Access Mask For Registry Key – SQL Server Cluster

I have been getting many requests from my HIRE-ME page, and a few of them end up as blog posts. This one is the outcome of helping a client who was having a strange issue. They had a 2 node SQL Server cluster (NODE1 and NODE2), and the SQL Server resource was not able to come online on NODE1, but it worked fine when they failed over to NODE2.

SQL SERVER - GetRegKeyAccessMask : Could Not Get Registry Access Mask For Registry Key - SQL Server Cluster fci-no-online-01-800x824

I asked them to look at the ERRORLOG when they failed over to NODE1 and it didn't come online. SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location

I was surprised when they told me that the file was not being generated at all when they attempted to bring SQL Server online on NODE1. I thought it could be a permission issue, but there was no such permission error in the Application or System event log. Finally, I asked them to generate the cluster log: SQL SERVER – Steps to Generate Windows Cluster Log?

Here is the relevant information I found in the cluster log:

Error 2 = The system cannot find the file specified.

So now it was clear that we were dealing with a missing registry key on this node. When I compared the keys, I found that a startup parameter had been missing earlier, which they had added manually; now this was the next key that was missing. In general, the Windows cluster registry checkpoint takes care of syncing the values of SQL Server's registry keys across nodes. Here is an article by Balmukund about registry checkpoints: Information: Checkpoint in SQL Server Cluster Resources

I was able to re-add the checkpoints using the below PowerShell commands.

Add-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)" -RegistryCheckpoint "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL_INST1\Replication"
Add-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)" -RegistryCheckpoint "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL_INST1\MSSQLServer"
Add-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)" -RegistryCheckpoint "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL_INST1\Cluster"
Add-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)" -RegistryCheckpoint "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL_INST1\SQLServerAgent"
Add-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)" -RegistryCheckpoint "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL_INST1\Providers"
Add-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)" -RegistryCheckpoint "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL_INST1\CPE"
Add-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)" -RegistryCheckpoint "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL_INST1\SQLServerSCP"

As soon as the checkpoints were re-added, the registry came in sync on both the nodes and the issue was resolved. Have you ever come across such an issue?
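
To confirm the checkpoints are in place after re-adding them, you can list them with Get-ClusterCheckpoint (a sketch; the resource name matches the commands above and may differ in your cluster):

```powershell
# List the registry checkpoints currently configured for the SQL Server resource
Get-ClusterCheckpoint -ResourceName "SQL Server (SQL_INST1)"
```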

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – GetRegKeyAccessMask : Could Not Get Registry Access Mask For Registry Key – SQL Server Cluster

SQL SERVER – Whitepaper – Optimizing SQL Server Network Performance

I have blogged before regarding how to Identify Application vs Network Performance Issues using SQL Server Dynamic Management Views (DMV). You do have some further diagnostics as well as optimizations you can perform to drill down into the network issue and address it.

SQL SERVER - Whitepaper - Optimizing SQL Server Network Performance NitroAccelerator-Performance-800x445

Once you have used sys.dm_os_wait_stats to identify that the top wait_type is ASYNC_NETWORK_IO you know you have a problem either with the client side of the application or the network. To drill down further, you should check SQL Server sys.configurations for the network packet size. The network packet size is not actually affecting the network layer, but changes the size of the Tabular Data Stream (TDS) packets which are then sent to TCP/IP for transmission. TDS packet size can be set from 512 bytes up to 32KB and your assumption may be that the larger the packet size, the faster your data will move. The primary consideration in allocating the TDS packet size is memory utilization so you’ll want to consider this when setting it. Some people like to match the TDS packet size to the MTU, while others simply test different TDS packet settings until they reach a level that balances memory utilization with performance requirements.
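
The two checks described above can be sketched in T-SQL as follows (thresholds and interpretation are up to you; this simply surfaces the raw numbers):

```sql
-- Top wait types by accumulated wait time; look for ASYNC_NETWORK_IO near the top
SELECT TOP (5) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Currently configured TDS packet size (default is 4096 bytes)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'network packet size (B)';
```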

While it is good to optimize your TDS packet size, you may still have network pipe issues. To get more detail on how to query and optimize TDS packet sizing, as well as addressing SQL Server network performance, check out DBA Tactics for Optimizing SQL Server Network Performance, a relevant white paper from Nitrosphere Corporation. It can help you not only diagnose the problem, but also offers several solutions to address it quickly and cost effectively.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Whitepaper – Optimizing SQL Server Network Performance

SQL SERVER – Database Disaster Recovery Process

Many SQL Server DBAs are from time to time confronted with a disaster caused by unintentional or malicious changes to their database. Whatever the nature or intention of these changes, they can cause great issues, whether the result is data loss or structure loss. With this in mind, it is important to start the recovery process as soon as possible. Having a recovery scenario and solution prepared for these situations is also a great benefit, and is generally advised where possible. Whatever enhances our chance to recover in case of a disaster, and increases the chances of a successful and full recovery, should be taken without any second thoughts. In this blog post we will learn about the Database Disaster Recovery Process.

Before focusing on the actual recovery process, let alone the solution, it is important to take post-disaster steps to ensure that the post-disaster database state is kept as it is, avoiding any overwriting of information in the .mdf and .ldf files so that they can be used in a ‘pristine’ state when performing a recovery. So, when possible, immediately after the disaster is detected, take the database offline (again, where possible) and make copies of the current .ldf and .mdf files, then bring the database back online to minimize downtime. In situations where taking a database offline is not an option, you can opt to switch the database to read-only mode to prevent any overwriting. To do this, simply open the database context menu in SQL Server Management Studio, choose Properties, then Options, and change the value of Database Read-Only to True – this will prevent anyone from writing into the database and ensure the current information/data is preserved. Do not forget to take the database out of the read-only state for the recovery and afterwards.
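
The same read-only switch can be scripted instead of using the Properties dialog (the database name here is illustrative):

```sql
-- Prevent any writes while you copy the .mdf/.ldf files and plan the recovery
ALTER DATABASE [SQLAuthority] SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- When you are ready to perform the recovery, return to read-write
ALTER DATABASE [SQLAuthority] SET READ_WRITE WITH ROLLBACK IMMEDIATE;
```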

SQL SERVER - Database Disaster Recovery Process apexdr1

Now that we have a solution for the post-disaster steps, which are among the most critical steps to take once disaster strikes, we need a quick and easy solution to perform the actual recovery. ApexSQL Recover is a SQL Server recovery tool from ApexSQL which is used to recover database data and structure lost due to delete, drop or truncate operations, as well as to recover deleted BLOBs. In addition to the general recovery tasks, ApexSQL Recover also enables users to access database backup files directly and to extract BLOBs or table structure and data for specific tables without the need to restore those backups on the SQL Server, which can save many hours in some cases. All SQL Server versions from SQL Server 2005 onwards are fully supported, as well as all the different editions (Developer, Enterprise, Express…)

ApexSQL Recover can be installed directly on the machine which hosts the SQL Server where the database that needs recovery resides. The other option is to install the tool on a workstation and access SQL Server remotely. In case of remote access, server-side components need to be installed on the remote machine which hosts the SQL Server instance the user will access. These server-side components are just a Windows service which allows the tool to access the online transaction log file remotely – nothing is installed in the SQL Server instance itself.

Once ApexSQL Recover is installed and the application started, the user is presented with a simple interface which offers several recovery scenarios. Here, the user should choose the most appropriate option, depending on the nature of the recovery request: recovering from a delete, drop or truncate operation, recovering deleted BLOBs, or performing extraction from a database backup.

SQL SERVER - Database Disaster Recovery Process apexdr2

When the recovery type is chosen (e.g. recover dropped table), the first task for the user is to specify the SQL Server instance, authentication method and credentials, and finally to choose the database on which the recovery will be performed.

SQL SERVER - Database Disaster Recovery Process apexdr3

The next step of the wizard offers the user a choice to add sources which ApexSQL Recover will use for the recovery, specifically additional transaction log files and backups. The ‘No additional transaction logs are available’ option should be chosen if all the information is still included in the online transaction log file. The ‘Add transaction logs’ option should be chosen if a transaction log backup was created after the drop table operation occurred. The ‘Help me decide’ option will lead the user through a quick series of yes/no questions to help them choose the best option and provide the fullest recovery sources available.

SQL SERVER - Database Disaster Recovery Process apexdr4

Regardless of the choice in the previous step, the next step of the wizard is a filter step. Here, the user can specify when the data was lost in order to focus the recovery only on a specific time frame. The smaller the time frame, the faster the recovery will be. Also, limiting the recovery process to the exact time frame will exclude false-positive results (e.g. other dropped tables that the user doesn’t want to recover, or similar).

SQL SERVER - Database Disaster Recovery Process apexdr5

Once the filters are set, the next step of the wizard offers a choice for the recovery output. Recovered data can be exported directly into a new database, so the user can access that database via SQL Server Management Studio and check it on site, or the recovery results can be written into a SQL Server script, which can then be inspected and finally executed against the database to recover the dropped structure/data.

SQL SERVER - Database Disaster Recovery Process apexdr6

Finally, the user gets to specify what they actually want to recover. The choice is between structure, data, or both together.

SQL SERVER - Database Disaster Recovery Process apexdr7

Once this choice is made, ApexSQL Recover will start the recovery process and create the specified output. Once the process completes, the application will show a quick summary of the recovery process, and allow users to access the recovery script directly from there.

SQL SERVER - Database Disaster Recovery Process apexdr8

As can be seen in the example above, ApexSQL Recover has recovered 2 dropped objects with 32 rows inside. A closer inspection of the script will show that one system and one user table were dropped, and that the script contains full information to completely recover dropped structure as well as the dropped data.

SQL SERVER - Database Disaster Recovery Process apexdr9

SQL SERVER - Database Disaster Recovery Process apexdr10

Extraction of data from database backups is an even simpler task. Once the extraction wizard is started, the user is only required to add the backup file from which to extract and to check it.

SQL SERVER - Database Disaster Recovery Process apexdr11

Then, select the table(s) from which to extract and, similar to the previous recovery process, choose whether to create a script with the export results or to write them into a new database. Finally, the same choice to export data, structure or both needs to be made, and once this is done, the application will perform the export.

SQL SERVER - Database Disaster Recovery Process apexdr12

In summary:

  • ApexSQL Recover is a recovery tool capable of using the information in the .mdf and .ldf files to recover lost data.
  • To maximize recovery chances, take every step possible to prevent overwriting of the database .mdf and .ldf files (offline mode, file copies).
  • The tool can be used to create recovery scripts or to perform a recovery directly into a database when recovering from delete, truncate or drop operations, as well as to recover dropped BLOBs.
  • The application can extract BLOBs or table data and structure directly from database backups, which removes the need for long and performance-intensive database restore jobs.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Database Disaster Recovery Process

SQL SERVER – Delete All Waiting Workflows in MSCRM to Speed Up Microsoft Dynamics

Last week, during a performance tuning consultancy, I faced a very interesting situation where the customer’s Microsoft Dynamics CRM was extremely slow. After quick research, we found that they had a very high number of waiting task workflows in the system. They honestly did not care about those anymore, as they had been stuck there waiting for over a year and had no meaning at this point in time.

SQL SERVER - Delete All Waiting Workflows in MSCRM to Speed Up Microsoft Dynamics msdynamics

Due to the high number of waiting workflows, insert, update and delete operations in their CRM system were getting extremely slow. After quickly researching online, we figured out a way to delete all the waiting task workflows from the MSCRM database. Please note that this is not official advice from Microsoft, nor something they officially support. This is absolutely from what I have seen in the real world and how I fixed it.

I strongly recommend that you take a full database backup of your MSCRM system before you do the following task. Once we deleted all the waiting workflows from MSCRM, the queries which were taking over 10 minutes started to take less than 10 seconds.

Here is the script which you should run against your MSCRM database.

DELETE FROM workflowwaitsubscriptionbase
WHERE asyncoperationid in
(SELECT asyncoperationid
FROM asyncoperationbase
WHERE StateCode = 1)
GO
DELETE FROM workflowlogbase
WHERE asyncoperationid in
(SELECT asyncoperationid
FROM asyncoperationbase
WHERE StateCode = 1)
GO
DELETE FROM asyncoperationbase
WHERE StateCode = 1
GO
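
Before running the deletes, it can be reassuring to see how many waiting operations you are about to remove. A simple count against the same table (same StateCode filter as the script above) does the job:

```sql
-- Count the waiting (StateCode = 1) async operations that the script will delete
SELECT COUNT(*) AS WaitingOperations
FROM asyncoperationbase
WHERE StateCode = 1;
```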

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Delete All Waiting Workflows in MSCRM to Speed Up Microsoft Dynamics

SQL SERVER – Find Week of the Year Using DatePart Function

As I blog every single day, lots of people ask me if I had to build a calendar for it. The answer is yes – I do not have a separate personal calendar; I keep a note of all my ideas in my blogging calendar. The next day when I get ready to blog, I use that same calendar as a reference for what to blog about. Most of the time I plan out a week at a time. Let us learn how to find the week of the year using the DATEPART function.

SQL SERVER - Find Week of the Year Using DatePart Function weekoftheyear

Here is one of the queries which I use all the time personally. It gives me three important details which I am looking for:

  1. Week of the Year
  2. First Date of the Week
  3. Last Date of the Week

Let us execute the following query and you will find necessary details.

SELECT DATEPART( WK, SYSDATETIME()) 'Week of the Year',
CONVERT(VARCHAR(10), DATEADD(dd, -(DATEPART(dw, SYSDATETIME())-1), SYSDATETIME()) ,111) 'First Date of the Week',
CONVERT(VARCHAR(10), DATEADD(dd, 7-(DATEPART(dw, SYSDATETIME())), SYSDATETIME()) ,111) 'Last Date of the Week'

SQL SERVER - Find Week of the Year Using DatePart Function sysdatetime

I know some of you will say that I should have used the DATE datatype, but I still use the CONVERT function as it works in pretty much all versions of SQL Server. As a consultant, I like to write scripts which work in most versions of SQL Server.
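
One caveat: DATEPART(WK, …) and DATEPART(DW, …) both depend on the session's DATEFIRST and language settings, so the same date can return different numbers on different servers. If you are on SQL Server 2008 or later and need a setting-independent week number, ISO_WEEK is an option:

```sql
-- Week number per ISO 8601 (weeks start on Monday), unaffected by DATEFIRST
SELECT DATEPART(ISO_WEEK, SYSDATETIME()) AS 'ISO Week of the Year';
```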

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Find Week of the Year Using DatePart Function


SQL SERVER – Understanding How to Play With Row Constructor

Every version of SQL Server brings in something new that challenges our understanding of how the software product works. For me, working with SQL Server is quite refreshing and fulfilling because every single day there is something about the product that I discover and learn. There are tons of professionals who are working on this product and they push the limits of use and bring interesting use cases which I get to learn. Let us learn about Row Constructor.

This journey of SQL Server on this blog is just my way to express all these learnings and get them documented. In a recent code review for one of my consulting assignments, I saw a piece of code and was completely taken aback by how it worked.

I had to go and do my homework to see how this was actually written by the developer. Assume you wanted some code that paired each value from one list with every value from another list, like a CROSS JOIN. The final output looked like this:

SQL SERVER - Understanding How to Play With Row Constructor row-constructor-sql-01-800x559

As you can see, the prefixes “Kilo”, “Mega” and “Giga” are repeated for the Byte and Meter values. This can be done in a number of ways, but the code I wrote to get this uses the row constructor feature of SQL Server.

SELECT [MetricsPre].[Pre] + [Measuring].[Unit] AS [Measurement], [MetricsPre].[Multiplier], UPPER([Measuring].[Unit])
FROM (VALUES ('kilo', '10^3'),
             ('mega', '10^6'),
             ('giga', '10^9'),
             ('tera', '10^12')) AS [MetricsPre] ( [Pre], [Multiplier] ),
     (VALUES ('byte'),
             ('meter')) AS [Measuring] ( [Unit] )
GO

I know it is pretty unconventional, but it still does the job for us.
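
For comparison, here is one alternative sketch that returns the same result using an explicit CROSS JOIN between the two row constructors (the column aliases are the same assumed names as above):

```sql
SELECT [MetricsPre].[Pre] + [Measuring].[Unit] AS [Measurement],
       [MetricsPre].[Multiplier],
       UPPER([Measuring].[Unit])
FROM (VALUES ('kilo', '10^3'),
             ('mega', '10^6'),
             ('giga', '10^9'),
             ('tera', '10^12')) AS [MetricsPre] ([Pre], [Multiplier])
CROSS JOIN
     (VALUES ('byte'),
             ('meter')) AS [Measuring] ([Unit]);
```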

Test your TSQL Skills:

  1. Is there any other way one can achieve the same result using SQL Server in a single query? Do let me know via comments below. I know CTE is one of the other ways too. So try something else now.
  2. Is there any other implementation of row-constructor that you have used in your environment that you want to share with the blog readers? Do let me know via comments.

I look forward to some interesting implementations from you; it is going to be learning that I will cherish.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Understanding How to Play With Row Constructor

SQL SERVER – Database Backup and Restore Job Management

ApexSQL Backup is a tool for Microsoft SQL Server, intended for database backup and restore job management. The application supports all native SQL Server backups (full, differential and transaction log backups), and allows users to easily create, save and manage all backup related jobs. ApexSQL Backup can be used to run, schedule and monitor all backup operations across the domain from a single location.

Database Backup and Restore Job Management

Creating and maintaining full backup chains is a priority for any disaster recovery plan. But running these jobs manually on a daily basis could be quite challenging and time consuming. ApexSQL Backup enables the automation of these jobs by applying schedules, or by running custom scripts in the command line interface. Furthermore, ApexSQL Backup supports the automation of other daily maintenance tasks such as deleting old backup files, rebuilding and reorganizing fragmented indexes, verifying created backups and much more.

ApexSQL Backup components

ApexSQL Backup consists of four main components:

  • Windows application – main part of the program, managed via GUI. Used for creating, managing and executing database backup and restore maintenance jobs in SQL Server.
  • Command line interface aka CLI – a console application that enables users to automate backup, restore, and log shipping maintenance jobs by running custom scripts. All jobs in a script are easily configured by using simple CLI switches.
  • Central repository database – SQL Server database that stores parameters for all created jobs as well as configuration settings data for the application
  • Agent – a simple Windows service that is used for communication between the application and SQL Server instances.

How does it work

In order to work properly, ApexSQL Backup needs all four components to be installed and connected. The application and central repository database are usually installed on the machine that will be used for managing and monitoring SQL Server instances inside the domain or a workgroup. ApexSQL Backup agent must be installed on each server machine that hosts one or more SQL Server instances that need to be managed. After all agents are running, each SQL Server instance can be added to the list of manageable servers.

To add the local SQL Server instance, click on the Add button in the Home tab, and select the local instance from the dropdown menu. Select the authentication type for the selected server, and provide the necessary credentials if SQL Server authentication was selected.

SQL SERVER - Database Backup and Restore Job Management apexrecovery01

To add a remote instance, type the name or IP address of the instance in the SQL Server field. Alternatively, click on the Network server button to obtain the list of available network servers, and select the one that should be added (in order to have all remote servers listed, make sure that the SQL Server Browser service is enabled and running on the remote servers).

SQL SERVER - Database Backup and Restore Job Management apexrecovery02

As soon as all instances are added to the server list, they can be managed through the application interface. ApexSQL Backup contains many useful features that make database maintenance tasks fun and easy.

Backup job management

Making a foolproof disaster recovery plan should be one of the main priorities for every database administrator. Creating uninterrupted backup chains that consist of regular full, differential and transaction log backups is the base concept behind any serious disaster recovery plan. ApexSQL Backup enables the user to configure and automate the entire process of creating any type of SQL Server database backups.
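
For context, the native T-SQL that such a backup chain automates looks roughly like this (the database name and paths are illustrative):

```sql
-- A typical chain: one full backup, then periodic differentials and log backups
-- (transaction log backups require the FULL or BULK_LOGGED recovery model)
BACKUP DATABASE [SQLAuthority] TO DISK = N'D:\Backup\SQLAuthority_full.bak';
BACKUP DATABASE [SQLAuthority] TO DISK = N'D:\Backup\SQLAuthority_diff.bak' WITH DIFFERENTIAL;
BACKUP LOG [SQLAuthority] TO DISK = N'D:\Backup\SQLAuthority_log.trn';
```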

To make a database backup, click on the Backup button in the ribbon of the Home tab.

SQL SERVER - Database Backup and Restore Job Management apexrecovery03

In the main tab of Backup wizard, provide the main parameters for the backup job:

  • Select SQL Server instance that needs to be managed
    SQL SERVER - Database Backup and Restore Job Management apexrecovery04
  • Click on the Browse (…) button in Databases box, and check the boxes in front of the databases that you wish to back up
    SQL SERVER - Database Backup and Restore Job Management apexrecovery05
  • Select the backup type (full database, differential database, transaction log, or filegroups backup) from the drop down menu in the Type box
  • Set the name and description for the job in the Name and Description boxes
  • To change the default SQL Server backup path and assign a custom backup destination path, click the Browse button (…) under the Path box. In the opened form, set the backup folder, and specify the naming rules for the backup file.
    SQL SERVER - Database Backup and Restore Job Management apexrecovery06
  • If you want to execute the backup job only once, select the Execute immediately radio button at the bottom of the main tab in the Backup wizard. If the defined job needs to be executed on a regular basis, set the schedule for the job by selecting the Schedule radio button. Set the frequency and duration for the schedule, and click Ok to save the schedule settings.
    SQL SERVER - Database Backup and Restore Job Management apexrecovery07

It is possible to set additional job parameters in Advanced tab of the backup wizard, such as: media, verification, compression, encryption and cleanup settings.

SQL SERVER - Database Backup and Restore Job Management apexrecovery08

Optionally, configure the email notifications for the job status.

SQL SERVER - Database Backup and Restore Job Management apexrecovery09

Click Ok to schedule or execute defined backup job.

SQL SERVER - Database Backup and Restore Job Management apexrecovery10

Backup templates

Another interesting and practical way of creating backup jobs is by using backup templates. Users can save all preferred settings for a backup job as a template, and apply it to one or more databases whenever necessary. Backup templates support all features that are supported for the regular backup jobs.

To create a new backup template, navigate to Home tab, and click on the Manage button in Backup templates group.

SQL SERVER - Database Backup and Restore Job Management apexrecovery11

  • Click on the New button to configure the template.
    SQL SERVER - Database Backup and Restore Job Management apexrecovery12
  • Provide a custom name and description for the template. Select the backup type for the template: full, differential, or transaction log backup. Naming rules may be defined for the operation name and operation description (these will be displayed in the Activities view), as well as for the backup filename. A schedule also needs to be set for every backup template.
    SQL SERVER - Database Backup and Restore Job Management apexrecovery13
  • Use the Advanced tab to set media, verification, compression and cleanup options. If necessary, set the email notifications in Notifications tab, and click OK when finished to save the template settings.
    SQL SERVER - Database Backup and Restore Job Management apexrecovery14

To deploy created template, click on Deploy button from Backup templates group in the Home tab.

SQL SERVER - Database Backup and Restore Job Management apexrecovery15

In Deploy templates form, you need to specify:

  • One or more templates that you wish to deploy – select previously created templates by clicking on Browse button in Templates text box.
  • Databases that will be targeted by the template – select the target databases by clicking on Browse button in Databases text box
  • Destination path for the backup files – type the path manually, or browse for the destination folder

SQL SERVER - Database Backup and Restore Job Management apexrecovery16

After all targets have been selected, click on Deploy button to apply the template(s).

SQL SERVER - Database Backup and Restore Job Management apexrecovery17

All backup jobs configured this way are created as schedules, and they can be managed in Schedules view as such.

Restore job management

Another important feature of ApexSQL Backup is the ability to configure and manage restore operations.

To start the restore job configuration, click on the Restore button in Home tab.

SQL SERVER - Database Backup and Restore Job Management apexrecovery18

In the main tab of Restore wizard, provide all required parameters for the job configuration:

  • Select the targeted SQL Server instance and the database that needs to be restored from the drop down menus in the SQL Server and Database boxes
  • Choose to restore from a folder scan or from specific backup files by selecting the respective radio button.
  • Select the desired restore type: Full; Full and differential; or Full, differential and transaction log restore. Note that you need to have suitable files selected in the previous step to be able to execute the selected restore type.
  • Configure the restore job to run immediately, or set the schedule if the job needs to be executed frequently.

SQL SERVER - Database Backup and Restore Job Management apexrecovery19

In the Advanced tab of the restore wizard, you can set a custom location for data and log files, include various DBCC operations in the restore job, or provide a custom T-SQL script to run after the restore job is done.

SQL SERVER - Database Backup and Restore Job Management apexrecovery20

Optionally, set email notifications in Notifications tab, and click Ok to schedule or execute the configured job.

SQL SERVER - Database Backup and Restore Job Management apexrecovery21

Log shipping jobs

Setting up regular log shipping jobs is one of the simple, cost-effective ways of running a successful disaster recovery plan. It may not be as powerful as failover clustering or Always On Availability Groups, but it is much easier to configure and has a much lower cost in human and server resources.

To configure the log shipping job, click on the Log shipping button in the main ribbon of the Home tab.

SQL SERVER - Database Backup and Restore Job Management apexrecovery22

In the main tab of Log shipping wizard, several parameters must be provided:

  • The primary (production) server is selected from the dropdown menu in the SQL Server textbox
  • The source database on the primary server is selected in the Database box
  • The paths for the backup and shared folders must be provided
  • Backup and restore schedules have to be configured at the bottom of the form.

SQL SERVER - Database Backup and Restore Job Management apexrecovery23

  • To configure the settings for the secondary (destination) server, click on the Add destination button. In the Log shipping destination form, specify the destination server, destination database, recovery model and local folder (note that the destination server has to be added to the list of manageable servers in the main form). If needed, multiple destination servers and/or databases can be added to the list. Click Ok to save the settings.

SQL SERVER - Database Backup and Restore Job Management apexrecovery24

  • To complete the job configuration, click OK. The wizard schedules two operations: a backup of the transaction log on the primary server, and a restore of the created file on the secondary server.

SQL SERVER - Database Backup and Restore Job Management apexrecovery25

  • To start log shipping immediately, go to the Schedules view, select both created schedules, and click Run now from the context menu

SQL SERVER - Database Backup and Restore Job Management apexrecovery26

Reorganize and Rebuild indexes

ApexSQL Backup can handle some basic index maintenance operations. Rebuilding and reorganizing indexes regularly can significantly improve the performance of high traffic servers. On the other hand, heavily fragmented indexes may degrade the overall query performance on the server.
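
The underlying T-SQL these wizards generate is along these lines (the table and index names are illustrative); a common rule of thumb is to reorganize between roughly 5% and 30% fragmentation and rebuild above 30%:

```sql
-- Light fragmentation: reorganize (always an online operation)
ALTER INDEX [IX_Orders_CustomerID] ON [dbo].[Orders] REORGANIZE;

-- Heavy fragmentation: rebuild (ONLINE = ON requires Enterprise edition)
ALTER INDEX [IX_Orders_CustomerID] ON [dbo].[Orders] REBUILD WITH (ONLINE = ON);
```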

To start the rebuild or reorganize index operation, click on the respective buttons in the Tasks group of the Home tab.

SQL SERVER - Database Backup and Restore Job Management apexrecovery27

The Reorganize index wizard allows users to specify the SQL Server instance and database, as well as to manually select tables and views for the reorganize job. A schedule may also be set for the job, so it can be automated to run on a regular basis. Email notifications are configured in the Notifications tab.

SQL SERVER - Database Backup and Restore Job Management apexrecovery28

The Rebuild index wizard works in a similar way. The SQL Server instance and database are selected from the drop down menu, and tables and views are added from the list. The operation can be run in both online and offline mode. The rebuild index operation also supports schedules and notifications, and can therefore be automated.

SQL SERVER - Database Backup and Restore Job Management apexrecovery29

Monitoring backup jobs

All described jobs and operations can be easily monitored and managed in Schedules, Activities, and History views.

The Schedules view shows all scheduled maintenance jobs and their details in a grid. The jobs include all types of backup and restore schedules, rebuild and reorganize index schedules, log shipping jobs, etc. All schedules in the grid can easily be managed using the commands from the context menu: they can be executed immediately, enabled, disabled, or deleted.

[Screenshot: apexrecovery30]

The Activities view displays detailed information about all operations executed by ApexSQL Backup, either directly or by scheduled jobs. The form keeps track of both successful and failed operations, and can therefore be very useful in diagnosing various issues. Furthermore, all error messages returned by SQL Server are saved in the Message column. All activities can be grouped, sorted, or filtered by the values in any column. If the need arises, activities can be exported as a CSV document.

[Screenshot: apexrecovery31]

The History view shows the complete backup history (since the moment ApexSQL Backup was installed) for any selected database. The backup history can be viewed in the grid and on the timeline. This is very useful for monitoring backup chains and identifying the best restore point in case of a disaster. A database can easily be restored from the context menu by selecting any of the available backup points.

[Screenshot: apexrecovery32]


Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Database Backup and Restore Job Management

SQL SERVER – FIX Error 18456, Severity: 14, State: 5. Login failed for user


Some errors have been around forever and usually share a common root cause. Sometimes we really don't know why they happen, and I have seen clients go nuts trying to identify the real reason for the problem. In a recent email interaction, a customer who was migrating from Oracle to SQL Server kept telling me that they were getting login errors from their Java application, and that even SQL Server Management Studio was erroring out with code 18456. They sent me a snapshot like this for the login failure:

[Screenshot: login-failed-18456-01]

This was not self-explanatory, so as usual I searched this blog for a few posts around this error. I sent them to him and said: if these don't solve your problem, please send me more details.

[Screenshot: login-failed-18456-04]

In about an hour, I received a mail stating that the above did not solve his problem. He sent me a bigger screenshot, as shown below:

[Screenshot: login-failed-18456-02]

Though this was a good starting point, it still was not enough information, given what SSMS was showing as output.

I reviewed the above blogs, only to realize I had left out an important detail: many of these login failures are also logged in the SQL Server ERRORLOG. I asked the developer to check their error log, and with that they figured out the actual root cause. Since he didn't get back to me for 3 hours, it was my turn to ask what had gone wrong, because I was curious about the actual reason. I got a screenshot, shown below, and it explained quite a bit.

[Screenshot: login-failed-18456-03]

If you are not sure where to get the ErrorLog, check the post: SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location.
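Besides browsing the file system, the error log can be searched directly from SSMS. A quick way, using the documented sp_readerrorlog procedure (the search string here is illustrative):

```sql
-- Read the current error log (file 0, type 1 = SQL Server log),
-- filtering for login failure entries
EXEC sys.sp_readerrorlog 0, 1, N'Login failed';

-- On SQL Server 2012 and later, the log file path itself can be confirmed with:
SELECT SERVERPROPERTY('ErrorLogFileName') AS error_log_path;
```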

On further investigation, it was learned that their application was changing its users' passwords in the application code, but since the application was load balanced, the password changes were getting into an inconsistent state across nodes. I was glad to see how explicit and detailed the information in the error log was, and that it helped this user.

Have you seen and used such information in your environments for such failures? What were your troubleshooting tips? Let me know via comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX Error 18456, Severity: 14, State: 5. Login failed for user

SQL SERVER – Setup Screen Not Launching While Updating a Patch


This is one of the client contacts where I realized the importance of looking at the right log file. The client said that when he double-clicked the .exe, a window flashed by and disappeared. It looked like something was happening, but setup failed to move forward. Let us learn about a very rare but critical error: the setup screen not launching while updating a patch.

I looked into the setup documentation and found this link: https://msdn.microsoft.com/en-us/library/ms143702.aspx (View and Read SQL Server Setup Log Files).

I looked into the Bootstrap folder, but no logs were being generated there, so I asked the client to check the %temp% folder (Start > Run > type %temp%) and found one file with a current timestamp called SqlSetup.log. Here are the last few lines, where it raises an error at the end:

09/09/2016 13:02:20.411 .Net version 3.5 is already installed.
09/09/2016 13:02:20.411 Windows Installer version 4.5 is already installed.
09/09/2016 13:02:20.411 Patch related actions cannot be run from the local setup.exe, so continuing to run setup.exe from the media.
09/09/2016 13:02:20.411 Attempt to initialize SQL setup code group
09/09/2016 13:02:20.411 Attempting to determine security.config file path
09/09/2016 13:02:20.411 Checking to see if policy file exists
09/09/2016 13:02:20.411 .Net security policy file does exist
09/09/2016 13:02:20.411 Attempting to load .Net security policy file
09/09/2016 13:02:20.411 Error: Cannot load .Net security policy file
09/09/2016 13:02:20.411 Error: InitializeSqlSetupCodeGroupCore (64bit) failed
09/09/2016 13:02:20.411 Error: InitializeSqlSetupCodeGroup failed: 0x80004005
09/09/2016 13:02:20.411 Setup closed with exit code: 0x80004005

All I could see was that there was something wrong with .NET security. I asked a .NET expert friend whether there is any tool to reset the permissions. He told me CASPOL.EXE can be used to reset the policies.

SOLUTION

We went into the following directory, “C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727”, and ran this command: caspol.exe -machine -reset

C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727>caspol.exe -machine -reset

[Screenshot: caspol-01]

On a 32-bit machine, we would have run the same command from the “C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727” folder.

After doing the above, SQL setup was able to start and finish successfully.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Setup Screen Not Launching While Updating a Patch

MySQL – Fix Error – WordPress Database Error Duplicate Entry for key PRIMARY for Query INSERT INTO wp_options


As many of you know, this blog runs on WordPress, and under the hood of WordPress there is a MySQL database. MySQL is quite good and able to handle the massive traffic this blog receives every day. However, just like any database, MySQL needs tuning as well as proper management. In this blog post we will discuss how I received a very weird WordPress Database Error and how I resolved it.

[Screenshot: mysqlerror]

Last week, I suddenly got a call from a friend saying that our blog was loading very slowly. When a new blog post is published or a newsletter is sent out, it is very common to see a spike in traffic and momentary slowness in website performance. In this case, however, the website was consistently running slow. After a while we found a couple of new problems on the site: due to the slowness, the WordPress scheduler was not publishing new blog posts and was not taking routine backups of the system.

After careful diagnostics I figured out that the issue was with the MySQL database. When I checked the error log, I found the following entry:

[Fri Sep 09 04:58:03 2016] [error] [client] WordPress database error Duplicate entry ‘3354142’ for key ‘PRIMARY’ for query INSERT INTO wp_options (option_name, option_value,autoload) VALUES (…)

It was very clear that there was a primary key violation in the options table. However, the problem was not easy to solve, as I had personally not done any transactions with this table, and there had been no updates or plugin changes recently. My first attempt was to restore this particular table from an older database backup (I take frequent backups of my site and its database). Even this failed, and I was not able to get rid of the error.
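For this class of duplicate-entry error, one quick sanity check is to compare the table's AUTO_INCREMENT counter with the highest existing key; if the counter has fallen behind (which corruption can also cause), new inserts will collide with existing rows. The column and table names below match a standard WordPress schema:

```sql
-- Highest primary key currently in the table
SELECT MAX(option_id) FROM wp_options;

-- Next value MySQL will hand out for the auto-increment column;
-- it should be greater than the MAX() above
SELECT AUTO_INCREMENT
FROM information_schema.tables
WHERE table_schema = 'YourDatabaseName'
  AND table_name   = 'wp_options';
```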

Finally, I searched the internet, but alas, there was no real help. At that point, I resorted to trial and error. Trust me, I spent over 4 hours trying various tricks to get rid of this error. Since it was clearly a logical integrity error in the database, I had to spend time with lots of tables and logic. Well, after 4 hours, I finally found a solution, and it was a very simple one. I wish I had known it earlier; it would have saved me those 4 hours of trial and error.

Solution / Fix:

I just ran the following command and my issue was resolved:

REPAIR TABLE wp_options

That’s it! It was done.

The reality was that my table was corrupted, and for that reason I was getting a duplicate key error on my database table. Once I fixed the corruption, everything worked just fine. Remember, in my case it was the wp_options table that was corrupted; you must replace it with your own table name and the script will work fine.
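To confirm corruption before (or after) running a repair, MySQL's CHECK TABLE reports the table's health. Note that REPAIR TABLE only applies to storage engines that support it, notably MyISAM, which older WordPress installs commonly use; it does not work on InnoDB tables:

```sql
-- Verify the table's integrity; Msg_text reads 'OK' for a healthy table
CHECK TABLE wp_options;

-- Repair it (MyISAM/ARCHIVE/CSV engines only)
REPAIR TABLE wp_options;
```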

Additionally, if you want to repair all the tables in your database, you can execute the following script; it will generate a REPAIR TABLE statement for every single table in your MySQL database. Running the generated statements will repair every table.

SELECT CONCAT('REPAIR TABLE ', table_name, ';')
FROM information_schema.tables
WHERE table_schema = 'YourDatabaseName';

I hope you find this blog post useful. If you ever have a problem with your WordPress MySQL database, do reach out to me; I will be happy to help you resolve any related error.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on MySQL – Fix Error – WordPress Database Error Duplicate Entry for key PRIMARY for Query INSERT INTO wp_options
