
SQL SERVER – Fix Error 3271: A nonrecoverable I/O error occurred on file. The remote server returned an error: (404) Not Found


In SQL Server, one error message can be caused by many different underlying reasons. One such error is 3271, which is raised when a BACKUP TO URL command fails. Here are two earlier blogs about the same error number:

SQL SERVER – Fix Error 3271: A nonrecoverable I/O error occurred on file. The remote server returned an error: (403) Forbidden

SQL SERVER – Backup to URL Fails in Azure Resource Manager (ARM) Deployment Model with Error – (400) Bad Request

In this blog, we would discuss error 404 – Not Found. Here is the complete error message:

Msg 3271, Level 16, State 1, Line 1
A nonrecoverable I/O error occurred on file “https://sqlprodbkups.blob.core.windows.net/sqldbbackups1/SQLAuthority.bak:” Backup to URL received an exception from the remote endpoint. Exception Message: The remote server returned an error: (404) Not Found..
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

And here is the command which was used.

BACKUP DATABASE SQLAuthority 
TO URL='https://sqlprodbkups.blob.core.windows.net/sqldbbackups1/SQLAuthority.bak'
WITH CREDENTIAL ='AzBackupCred'
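
For reference, the credential named in WITH CREDENTIAL would have been created with a statement along these lines (a hedged sketch; the secret shown is a placeholder for the storage account access key, not a real value):

CREATE CREDENTIAL AzBackupCred
WITH IDENTITY = 'sqlprodbkups', -- the storage account name
SECRET = '<storage-account-access-key>'; -- placeholder, use the key from the Azure portal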

WORKAROUND/SOLUTION

In this case, we again need to look at the exception message, which is 404 – Not Found.

It didn’t take much time to figure out that the container name given in the command was incorrect. The container name should be picked from the Azure portal: go to Home > Storage accounts > sqlprodbkups > Blob service (our storage account name is sqlprodbkups).

[Screenshot: bkp2url-err404-01 – Azure portal Blob service container list]

If you look again at the command which gave the error, you can clearly see that I misspelled the container name (sqldbbackups1 instead of sqldbbackups), which caused the error. Once you pick the right name, this 404 exception disappears.

Reference: Pinal Dave (https://blog.sqlauthority.com)



SQL SERVER – Database Attach Failure – Msg 2571 – User ‘guest’ Does Not Have Permission to Run DBCC Checkprimaryfile.


One of my clients was trying to recover from a disaster and wanted to attach the MDF and LDF files. While trying to do that, they encountered an error. In this blog, we would learn how to fix the error – User ‘guest’ does not have permission to run DBCC checkprimaryfile – while attaching the database.

Here is the exact error message which was encountered while attaching the database.

Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
User ‘guest’ does not have permission to run DBCC checkprimaryfile. (Microsoft SQL Server, Error: 2571)

Here is the UI with the error message.

[Screenshot: checkprimaryfile-err-01]

To look further, we captured a Profiler trace to find the exact command that was failing. We found the following command in Profiler just before the failure.

DECLARE @Path NVARCHAR(255)
DECLARE @Name NVARCHAR(255)
DECLARE @fileName NVARCHAR(255)
SELECT @fileName = N'F:\Data\SQLAuthority.mdf'
DECLARE @command NVARCHAR(300)
SELECT @command = 'dbcc checkprimaryfile (N'''+ @fileName + ''' , 2)'
CREATE TABLE #smoPrimaryFileProp(property SQL_VARIANT NULL, value SQL_VARIANT NULL) 
INSERT #smoPrimaryFileProp 
EXEC (@command)
SELECT
p.value AS [Value]
FROM
#smoPrimaryFileProp p
DROP TABLE #smoPrimaryFileProp

The above also fails with the same error in SSMS when we use the same login which is trying to attach the database.

[Screenshot: checkprimaryfile-err-02]

WORKAROUND/SOLUTION

After looking at the properties of the login, I found that the current user was only part of the public role and it was a SQL login. The only solution I could find to keep using the UI and bypass the error was to give this account SYSADMIN rights. I don’t believe it’s a good idea to give SYSADMIN to someone who just wants to attach database files. A better solution is to use T-SQL to attach the database and grant the dbcreator role to the login that is trying to attach the files.

CREATE DATABASE SQLAuthority   
ON (FILENAME = 'F:\Data\SQLAuthority.mdf'),   
   (FILENAME = 'F:\Log\SQLAuthority_log.ldf')   
FOR ATTACH;

The above command requires CREATE DATABASE, CREATE ANY DATABASE, or ALTER ANY DATABASE permission.
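
Granting the role is a one-liner (a hedged sketch; AttachUser is a hypothetical login name standing in for the actual login that attaches the files):

-- AttachUser is a placeholder for the actual login name
ALTER SERVER ROLE dbcreator ADD MEMBER AttachUser;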

I wrote a blog a decade ago about finding all possible DBCC commands. You can read it using the link below: SQL SERVER – DBCC commands List – documented and undocumented

Reference: Pinal Dave (https://blog.sqlauthority.com)


What is the ROI of a SQL Server Monitoring Tool?


The increasing size of SQL Server databases, alongside the growing complexity of SQL Server estates, is making more organizations realize the need for a tool that enables proactive monitoring. Let us try to answer a question – What is the ROI of a SQL Server Monitoring Tool?

Hand-rolled scripts can provide basic information, like wait stats and memory utilization, but that is often not enough. With the database a key element of business operations, companies also need to know about abnormal resource patterns, like sustained CPU spikes, and problems such as failed backups.

The ability to see trends in data collected by a monitoring tool can also keep businesses informed about I/O, memory or disk space issues, long before they become a crisis.

For example, if you can map out when you are likely to run out of disk space, you can add capacity at a time when the impact on users will be minimal.

This kind of valuable information does come at a cost. By using a SQL Server monitoring tool, you are swapping the cost of manual monitoring for the cost of a tool that automates the process for you. So which is more affordable for your business?

With Redgate’s SQL Server monitoring return on investment (ROI) calculator you can work out how much you currently spend on manual monitoring, and what you could save by switching to using a tool such as SQL Monitor.

Simply enter the number of servers you monitor, your DBA’s hourly rate and how long they spend monitoring each SQL Server per day, and we’ll give you a personalized report on the cost of manual monitoring versus using a monitoring tool. You’ll also receive a free guide on further facts behind the figures, the benefits behind the facts and what to look for when purchasing a tool.

Calculate and download your free personalized ROI guide from Redgate now.

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – FIX: Msg 3231 – The Media Loaded on “Backup” is Formatted to Support 1 Media Families, but 2 Media Families are Expected According to the Backup Device Specification


While preparing for a demo for my upcoming session, I encountered an interesting error. I thought I had written a blog about it, but I was wrong; my earlier blog was about a different error which looks very similar. Here is my earlier blog for error 3132: SQL SERVER – FIX: Msg 3132, Level 16 – The Media Set has 2 Media Families, but Only 1 is Provided. All Members Must be Provided. In this blog post, we will learn how to fix the error The Media Loaded on “Backup” is Formatted to Support 1 Media Families, but 2 Media Families are Expected According to the Backup Device Specification.

In this blog, we would learn about error 3231. Here is the complete message of the error.

Msg 3231, Level 16, State 1, Line 5
The media loaded on “C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Backup\SQLAuthority.bak” is formatted to support 1 media families, but 2 media families are expected according to the backup device specification.
Msg 3013, Level 16, State 1, Line 5
BACKUP DATABASE is terminating abnormally.

Here are the steps to reproduce the error.

CREATE DATABASE SQLAuthority
GO
BACKUP DATABASE SQLAuthority TO DISK = 'SQLAuthority.bak'
GO
BACKUP DATABASE SQLAuthority 
TO DISK = 'SQLAuthority.bak',
   DISK = 'SQLAuthority_1.bak' 
WITH DIFFERENTIAL

WORKAROUND/SOLUTION

As per the error message, there is an existing backup in SQLAuthority.bak, but in the next backup we have specified two files to take a split (striped) backup. One of the files is the earlier backup file, whose media set has only one media family.
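
If you want to confirm how many media families an existing backup file belongs to, RESTORE LABELONLY can show that without restoring anything (a hedged sketch; the FamilyCount column in its output reports the number of media families in the media set):

RESTORE LABELONLY FROM DISK = 'SQLAuthority.bak';
-- Inspect the FamilyCount column in the result set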

The solution is to use FORMAT and INIT to erase the existing backup data and create a new media set. Be careful: we are formatting existing media which already holds a backup. Here is the script with the modification.

CREATE DATABASE SQLAuthority
GO
BACKUP DATABASE SQLAuthority TO DISK = 'SQLAuthority.bak'
GO
BACKUP DATABASE SQLAuthority 
TO DISK = 'SQLAuthority.bak',
   DISK = 'SQLAuthority_1.bak' 
WITH DIFFERENTIAL, FORMAT, INIT

If you want to use the UI, the screenshot below shows the relevant options.

[Screenshot: backup-err-3231-01 – backup options in SSMS]

CLEANUP SCRIPT

Here is the script to clean up the database backup history and drop the database.

EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'SQLAuthority'
GO
USE [master]
GO
ALTER DATABASE [SQLAuthority] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
DROP DATABASE [SQLAuthority]
GO

Have you encountered such errors while taking a backup? Please share via comments.

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – FIX: Error 14001: The Application Has Failed to Start Because its Side-by-side Configuration is Not Correct


There are so many ways to break things in the operating system. In this blog, we would learn how to fix the error “The application has failed to start because its side-by-side configuration is not correct”.

While working with one of my clients, their Jr. DBA asked me for quick help and I couldn’t say no. He informed me that on one of his development servers, the SQL Server service was not starting. As usual, I asked for the error messages they were seeing.

Here is the error they see when trying to start the SQL Server service.

Windows could not start SQL Server (MSSQLSERVER) service on local computer.
Error 14001: The application has failed to start because its side-by-side configuration is not correct.

Then, I asked them to check the system and application event logs for any additional errors. We found the SideBySide error below in the application log.

Log Name: Application
Source: SideBySide
Event ID: 33
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Description: Activation context generation failed for “C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn\sqlservr.exe”. Dependent Assembly Microsoft.VC80.ATL,processorArchitecture=”amd64″,publicKeyToken=”1fc8b3b9a1e18e3b”,type=”win32″,version=”8.0.50727.4053″ could not be found. Please use sxstrace.exe for detailed diagnosis.

I searched on the internet and found many tools that claim to fix this error. Never download such tools, as they are often viruses and malware.

WORKAROUND/SOLUTION

From the SideBySide message, it looks like a Visual C++ (Microsoft.VC80.ATL) component file is missing. So, we performed the following actions from Add/Remove Programs in the Control Panel.

  1. Repair “Microsoft Visual C++ 2008 SP1 Redistributable Package (x86)”
  2. Repair “Microsoft Visual C++ 2005 SP1 Redistributable Package (x86)”
  3. Download and install – Microsoft Visual C++ 2005 Service Pack 1 Redistributable Package ATL Security Update 32 bit
  4. Download and install – Microsoft Visual C++ 2005 Service Pack 1 Redistributable Package ATL Security Update 64 bit

If the above doesn’t solve the issue, then you need to find the right version of the Visual C++ runtime to install, as it depends on the SQL Server version.
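
The event log message also points to sxstrace.exe for detailed diagnosis. A hedged sketch of how it is typically run from an elevated command prompt (the log file names are arbitrary):

rem Start tracing, reproduce the service start failure, then stop the trace
sxstrace trace -logfile:SxsTrace.etl
rem Convert the binary trace into a readable text file
sxstrace parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt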

If you find any other solution, please share via comment to help others.

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – The Report Server Cannot Open a Connection to the Report Server Database. (rsReportServerDatabaseLogonFailed)


When my clients contact me for any issue, I always try to help them or redirect them to someone who can help them better than me. Once, a client contacted me for an issue with SQL Server Reporting Services. As usual, I didn’t give up; I investigated the error message and applied a logical approach to fixing it. Let us see how to fix this error related to the report server database.

THE PROBLEM

They had a project in Visual Studio from which they were calling the report and viewing it using a report viewer control. Below is the error message they were receiving while accessing the report.

Could not connect to the report server http://localhost/ReportServer. Verify that the TargetServerURL is valid and that you have the correct permissions to connect to the report server.

ADDITIONAL INFORMATION:
System.Web.Services.Protocols.SoapException: The report server cannot open a connection to the report server database. A connection to the database is required for all requests and processing. —> Microsoft.ReportingServices.Library.ReportServerDatabaseUnavailableException: The report server cannot open a connection to the report server database. A connection to the database is required for all requests and processing.

As we can see in the error message, they are using http://localhost/reportserver to access Reporting Services. As a next step, I asked them to open the URL in a web browser, and they received this error:

The report server cannot open a connection to the report server database. The log on failed. (rsReportServerDatabaseLogonFailed)
The user name or password is incorrect. (Exception from HRESULT: 0x8007052E)

WORKAROUND/SOLUTION

Based on the error, it is clearly visible that SSRS was not able to connect to SQL Server. Here is the page in SSRS Configuration Manager to set the database connection.

[Screenshot: ssrs-err-01 – database connection page in the Reporting Services Configuration Manager]

I checked their database settings and tried connecting to SQL Server using SSMS, and it worked fine. So, I re-applied the database settings from the Reporting Services Configuration Manager and tested the report server URL. Strangely, that fixed the issue. It looks like there was a username/password mismatch stored somewhere.

What kind of SSRS-related errors have you fixed as a DBA?

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – Unable to Create Listener: Msg 41009 – The Windows Server Failover Clustering (WSFC) Resource Control API Returned Error Code 5057


One of my clients contacted me for quick On-Demand consulting. They were working under tight deadlines, and delivering the solution on time was critical for their success. They explained that they were having trouble creating a listener for their two-node AlwaysOn Availability Group. The error message they received from T-SQL is below. Let us learn how we can fix this error related to Windows Server Failover Clustering.

Msg 41009, Level 16, State 7, Line 126
The Windows Server Failover Clustering (WSFC) resource control API returned error code 5057.  The WSFC service may not be running or may not be accessible in its current state, or the specified arguments are invalid.  For information about this error code, see “System Error Codes” in the Windows Development documentation.

Msg 19476, Level 16, State 3, Line 126
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator.

If we look closely, there are multiple error messages, but the first one has the most detail. In one of my earlier blogs, I explained that whenever there are issues related to a cluster resource, we should always look at the cluster log. If you are not sure how to generate cluster logs, read my earlier blog on the same topic.

SQL SERVER – Steps to Generate Windows Cluster Log?

We looked at the cluster log to see what messages appeared at the same time. I have trimmed the log by removing the date/time field.

The cluster log has these errors:

  1. IP Address <PRODAG_162.28.55.59>: IpaValidatePrivateResProperties: IP address 162.28.55.59 was detected on the network.
  2. Error 5057 from ResourceControl for resource PRODAG_162.28.55.59.

If we run NET HELPMSG 5057 from a command prompt, we get the output below.

[Screenshot: dup-ip-err-01 – NET HELPMSG 5057 output]

SOLUTION/WORKAROUND

From the above information, we can conclude that the error appears because we are trying to use an IP address for the listener which is already in use on the network. Using an unused IP address fixes the issue.

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – AlwaysOn Automatic Seeding – Database Stuck in Restoring State


As a part of my consulting, I have observed a lot of interest in using AlwaysOn Availability Groups. I have deployed many AlwaysOn solutions from start to finish. Recently I encountered an issue, and in this blog we would discuss why an availability database can stay in the Restoring state even after using automatic seeding.

If you have not heard of it before, one of the new AlwaysOn features introduced in SQL Server 2016 is “automatic seeding”. In the latest version of SSMS, we can see the seeding option as below.

[Screenshot: ao-seed-err-01]

My client used this option since their database was huge and seeding is supposed to be faster than the backup and restore method. We started seeding, and I asked them to use the query below to monitor its progress.

SELECT local_database_name
 ,role_desc
 ,internal_state_desc
 ,transfer_rate_bytes_per_second
 ,transferred_size_bytes
 ,database_size_bytes
 ,start_time_utc
 ,end_time_utc
 ,estimate_time_complete_utc
 ,total_disk_io_wait_time_ms
 ,total_network_wait_time_ms
 ,is_compression_enabled
FROM sys.dm_hadr_physical_seeding_stats

They contacted me again and told me that seeding had completed, but the database was still shown in the “Restoring” state.

[Screenshot: ao-seed-err-02]

When I attempted to join the database to the availability group, I received an error:

[Screenshot: ao-seed-err-03]

Failed to join the database ‘HiDB’ to the availability group ‘HiAG’ on the availability replica ‘(LOCAL)’. (Microsoft.SqlServer.Smo)
——————————
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
——————————
The remote copy of database “HiDB” has not been rolled forward to a point in time that is encompassed in the local copy of the database log. (Microsoft SQL Server, Error: 1412)

This error tells us the exact cause of the issue. It looks like a log backup happened in the middle of the seeding process. I asked them to use this blog to check: SQL SERVER – Get Database Backup History for a Single Database

WORKAROUND/SOLUTION

In this case, after running the script from my blog, we found a backup job which was taking log backups on another replica every 15 minutes. We disabled the job and started seeding again. This time it worked without any issue.

Have you encountered other errors with AlwaysOn Availability Groups? Please share via comments and help others.

Reference: Pinal Dave (https://blog.sqlauthority.com)



MySQL – Connection Error – [MySQL][ODBC 5.3(w) Driver]Host ‘IP’ is Not Allowed to Connect to this MySQL Server


One of my clients contacted me for a linked server issue between SQL Server and MySQL Server. Since I mostly work with SQL Server, it was fun installing and connecting to MySQL Server while trying to simulate the issue and create a linked server. In this blog, we would learn how to fix the MySQL connection error [MySQL][ODBC 5.3(w) Driver]Host ‘IP’ is not allowed to connect to this MySQL server.

I installed MySQL Server on a machine and set the root user’s password. Then I tried to connect to MySQL Server using ODBC.

[Screenshot: mysql-login-err-01]

As soon as I clicked Test, I saw the error below.

[Screenshot: mysql-login-err-02]

The IP mentioned in the error message is the IP of the client which is trying to connect. The text of the message is as follows:

Connection Failed
[MySQL][ODBC 5.3(w) Driver]Host ‘IP’ is not allowed to connect to this MySQL server:

I had never seen this error while working with SQL Server, so I had no idea about it. While researching, I learned the following:

By default, MySQL does not allow remote clients to connect to the MySQL database.

The fastest way to verify that is shown below: if we check the mysql.user table, there is an entry for user ‘root’ only with host ‘localhost’.
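
Here is a quick query to see which hosts each account may connect from (a minimal sketch, run from a client that can already connect, such as the MySQL command line on the server itself):

-- List accounts and the hosts they are allowed to connect from
SELECT host, user FROM mysql.user;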

[Screenshot: mysql-login-err-03]

So, we need to grant the client permission to connect to MySQL Server.

WORKAROUND/SOLUTION

First, make sure it is not a firewall issue.

As we discussed earlier, it’s a permissions issue. We can grant the permission using the command below.

USE mysql;
GRANT ALL ON *.* TO 'root'@'x.x.x.x' IDENTIFIED BY 'your-root-password';

You can also use MySQL Workbench to do this. The screenshot below shows the steps to follow. Just as with the command, we need to provide the username, password, and IP in the graphical screen.

[Screenshot: mysql-login-err-04]

Hope this blog helps. After the above changes, when you try to connect to the MySQL database from the remote client whose IP/hostname we granted, you should no longer get the “Host is not allowed to connect to this MySQL server” error message. We could also use % to allow all hosts, but I don’t prefer that option.

I am not a MySQL expert so feel free to comment and let me know if there are better ways.

Reference: Pinal Dave (https://blog.sqlauthority.com)


Avoiding Database Downtime via Replication – SharePlex


The scariest word for any DBA is Database Downtime. Every DBA who is going off duty for the day or going away on vacation for a month worries about it. I totally sympathize with everyone who is constantly worried about their database’s downtime. In the real world, we all know that when our database goes down, our business stops. There is no other way to put it.

I am fortunate that during my job as a Data Consultant, I get to visit different organizations across the world and meet some of the finest Data Professionals and Business Leaders. In our interactions, one thing always comes up: continuity of the business. Every single organization understands that a disruption in business continuity for the end customer can be far more expensive than it looks on the surface.

This brings the most anticipated question

How to avoid database downtime to keep our business running continuously and efficiently?

If you look at the question again, there are two very important asks – Continuously and Efficiently.

It is not enough that our website is available and our database is online; it is equally important that we run our business efficiently.

I am sure you might be tempted to propose some kind of solution at this point, but before you say anything, we must also understand that there are a few more challenges when we are talking about real-world database downtime issues.

Before we talk about solutions, let us explore the three big challenges every single organization faces.

Challenge 1: Multiple Database Platforms

In my professional life, I have seen hardly any organizations using a single database platform. Most organizations have multiple database platforms running their critical business applications. I have quite commonly seen AWS-hosted databases, MySQL, SAP, SQL Server and others working together. Having multiple database platforms makes the entire downtime issue even more complicated.

Challenge 2: Hybrid and Heterogeneous

Every single day, I see more and more businesses opting for the cloud for their database needs. Though the cloud looks very promising, many organizations still have to run applications on-premises. I have often seen organizations struggling to balance their business workload across on-premises and the cloud. Think about it: when you have two entirely different database platforms in a hybrid scenario, maintaining database uptime is complex indeed.

Challenge 3: Analytics and Monitoring

It is very important for any organization to have the right set of tools when there is massive data movement across various database platforms. We need the right monitoring tools for performance, conflict resolution, data comparison, as well as real-time analytics. When the database goes down, all of that functionality also disappears. Running an enterprise-grade application requires much more than just having the database online.

Now that we have these three important challenges in front of us, we need to find a solution which overcomes all three and provides a simple path to data availability.

[Image: shareplex]

SharePlex – Avoiding Database Downtime via Replication

Here is where I suggest SharePlex to my customers. It is a unique solution which replicates data from Oracle or SQL Server to more than a dozen other platforms, including Amazon Web Services (AWS), AuroraDB, Azure SQL Database, EDB Postgres Advanced Server, Java Message Service (JMS), Kafka, MySQL, PostgreSQL, SAP ASE, SAP HANA, Oracle, Microsoft SQL Server, Teradata, and flat, SQL and XML files. (Overcoming Challenge 1)

Additionally, with SharePlex we can migrate to cloud-based or open-source databases with near-zero downtime while allowing daily operational processes to continue uninterrupted (Overcoming Challenge 2). And finally, SharePlex has built-in monitoring, conflict resolution, data comparison and synchronization capabilities which help users run their enterprise-grade applications without downtime (Overcoming Challenge 3).

Here is the next action for you: download the free trial of SharePlex and validate it for your application. As I said earlier, the best part of this product is that it replicates data in any environment, which helps avoid database downtime.

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – Mirroring Error 1456 – The ALTER DATABASE Command Could not be Sent to the Remote Server Instance


As a part of my consulting, I still get a few clients who prefer to use database mirroring over Always On Availability Groups. In this blog, we would cover one possible cause of database mirroring error 1456 – The ALTER DATABASE command could not be sent to the remote server instance.

The client contacted me during a disaster; they were facing this issue while recovering from it. They were trying to re-configure the database mirroring witness server and were having a hard time doing that. Their old witness server had crashed, and they had built a new server and were trying to add it to the database mirroring configuration.

Here is the error when we try to add the witness server to the currently mirrored database.

The ALTER DATABASE command could not be sent to the remote server instance ‘TCP://srv_w.sqlauthority.com:5022’. The database mirroring configuration was not changed. Verify that the server is connected, and try again. (Microsoft SQL Server, Error: 1456)

As usual, I asked them to check the ERRORLOG, and we found the below on the principal server:

2018-03-15 07:16:12.040 spid49s      Database mirroring is inactive for database ‘test’. This is an informational message only. No user action is required.
2018-03-15 07:16:12.110 Logon        Database Mirroring login attempt by user ‘SQLAUTHORITY\srv_w$’ failed with error: ‘Connection handshake failed. The login ‘SQLAUTHORITY\srv_w$’ does not have CONNECT permission on the endpoint. State 84.’.  [CLIENT: 10.17.144.60]
2018-03-15 07:16:12.110 spid109s     Error: 1474, Severity: 16, State: 1.
2018-03-15 07:16:12.110 spid109s     Database mirroring connection error 5 ‘Connection handshake failed. The login ‘SQLAUTHORITY\srv_w$’ does not have CONNECT permission on the endpoint. State 84.’ for ‘TCP://srv_w.sqlauthority.com:5022’.
2018-03-15 07:16:14.600 Logon        Database Mirroring login attempt by user ‘SQLAUTHORITY\srv_w$’ failed with error: ‘Connection handshake failed. The login ‘SQLAUTHORITY\srv_w$’ does not have CONNECT permission on the endpoint. State 84.’.  [CLIENT: 10.17.144.60]

I have already blogged about similar errors earlier. You may want to refer to them if the solution in this blog does not help.

SOLUTION/WORKAROUND

This issue looked like CONNECT permission had not been granted on the endpoint, but it was not that easy. I used the query below to check if CONNECT permission had been granted on the endpoint for this account.

SELECT EP.name, SP.STATE, 
   CONVERT(nvarchar(38), suser_name(SP.grantor_principal_id)) 
      AS GRANTOR, 
   SP.TYPE AS PERMISSION,
   CONVERT(nvarchar(46),suser_name(SP.grantee_principal_id)) 
      AS GRANTEE 
   FROM sys.server_permissions SP , sys.endpoints EP
   WHERE SP.major_id = EP.endpoint_id
   ORDER BY Permission,grantor, grantee; 
GO

If CONNECT permission has not been granted, we need to grant it. In our case, everything looked good, but we still encountered failures. Then it struck me that this is a new server and the account we are talking about is a machine account, i.e., SQLAUTHORITY\srv_w$. As this is a new server, the SID of this account will be different from that of the account which was present on the old server.

As per documentation, “For two server instances to connect to each other’s database mirroring endpoint, the login account of each instance requires access to the other instance. Also, each login account requires CONNECT permission to the database mirroring endpoint of the other instance.”

So, the login present on the other servers still carries the old SID, as they had not re-added the account. I ran the command below to make a note of the SID.

SELECT * FROM sys.syslogins WHERE name = 'sqlauthority\srv_w$'

We deleted the old login and re-added it, which created the login with the new SID. After that, we were able to add the witness successfully, as sketched below.
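
In T-SQL, the fix was along these lines (a hedged sketch; the endpoint name Mirroring is an assumption, so check sys.endpoints for the actual name):

-- Drop the stale login (old SID) and re-create it, which picks up the new machine account SID
DROP LOGIN [SQLAUTHORITY\srv_w$];
CREATE LOGIN [SQLAUTHORITY\srv_w$] FROM WINDOWS;
-- Re-grant CONNECT on the database mirroring endpoint (name assumed)
GRANT CONNECT ON ENDPOINT::Mirroring TO [SQLAUTHORITY\srv_w$];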

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – FIX: Backup Detected Log Corruption in database MyDB. Context is Bad Middle Sector


During my consulting, I guide my clients to have a backup plan in place. One of my clients reported that whenever they tried to take a full backup of a database, it failed consistently at around 60% completion. Here is the error message seen while taking the database backup using SQL Server Management Studio. Let us fix this error related to the bad middle sector in this blog.

System.Data.SqlClient.SqlError: BACKUP detected corruption in the database log. Check the errorlog for more information.  (Microsoft.SqlServer.Smo)

The above error gave us a hint that there is corruption in the transaction log file. As usual, I asked for the ERRORLOG to check what was logged during the failure.

2018-03-07 13:25:40.570 spid62       Backup detected log corruption in database MyDB. Context is Bad Middle Sector. LogFile: 2 ‘D:\Data\MyDB_log.ldf’ VLF SeqNo: x280d VLFBase: x10df10000 LogBlockOffset: x10efa1000 SectorStatus: 2 LogBlock.StartLsn.SeqNo: x280d LogBlock.StartLsn.
2018-03-07 13:25:40.650 Backup       Error: 3041, Severity: 16, State: 1.
2018-03-07 13:25:40.650 Backup       BACKUP failed to complete the command BACKUP DATABASE MyDB. Check the backup application log for detailed messages

When I checked on the internet, most of the reported issues had a single suggested solution: “rebuild the transaction log”. But as you can imagine, that is a risky operation with respect to data loss. Then it struck me to check the VLF-related information, and it led to an interesting finding.

[Screenshot: the Bad Middle Sector error message]

In the above error, we see VLF SeqNo x280d, which is a hexadecimal number. Converted to decimal, it is 10253. I ran DBCC LOGINFO on this database, found this VLF SeqNo, and its status was 2, which means the VLF was still active.
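
Here is a small sketch of those checks in T-SQL (the hex-to-decimal conversion can be done right in SSMS):

-- 0x280D from the error message converts to 10253
SELECT CONVERT(INT, 0x280D) AS vlf_seq_no;
-- List the VLFs; Status = 2 means a VLF is still active
DBCC LOGINFO('MyDB');

Then I checked whether there was an active transaction using DBCC OPENTRAN, and the output was as follows: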

Transaction information for database ‘MyDB’.
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (9877:25:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I then checked what the transaction log was waiting on, and we found the information below.

SELECT log_reuse_wait
,log_reuse_wait_desc 
FROM sys.databases
WHERE database_id = DB_ID('MyDB')-- replace DB Name

Below is the output

log_reuse_wait Log_reuse_wait_desc
————– ———————-
6              REPLICATION

WORKAROUND/SOLUTION

Maybe this is a special situation where my client had no clue that replication had ever been used on this database. This could be another classic scenario where replication-related metadata was not completely removed from the database. I used the command below to remove replication from this database.

sp_removedbreplication 'MyDB'

After this, we did not see any old replicated transactions (DBCC OPENTRAN was clean), and the VLF was in status 0, which means inactive. We were then able to take a backup of this database successfully, and it was a happy ending!

Have you seen such an issue before?

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – PowerShell Script – Delete Old Backup Files in SQL Express


If you have worked with various SQL Server editions, you would know that the SQL Server Express edition does not have the SQL Server Agent service. Due to this, we lose the ability to automate a lot of things. Let us learn in this blog post how to delete old backup files in SQL Express.

[Image: deletebackup]

This approach is not specific to SQL Express and can be implemented in any edition of SQL Server, though since we can use maintenance plans in other editions, it may not be as useful there. One of the main things DBAs want to automate is taking backups on a regular basis. There are a million ways on the internet to automate SQL backups, and the most basic, easy and popular one is the Task Scheduler method. Microsoft has a Knowledge Base article on how to achieve this.

How to schedule and automate backups of SQL Server databases in SQL Server Express

While consulting with a client, I found that the server’s E drive was almost full. Later I learned that they had adopted the above approach. They asked me if there was an automated way to delete backup files older than ‘X’ number of days, because their disk space was filling up quickly.

SOLUTION/WORKAROUND

Below is the script which can be used to delete the files. We need to provide two parameters, Path and DaysToKeep. The script below keeps the latest 5 days of backups and deletes all older files from the folder given in the Path variable.

# Folder that holds the backup files
$Path = "D:\Backups\"
# Keep the last 5 days; files older than this are deleted
$DaysToKeep = -5
$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysToKeep)
# Delete everything under $Path older than the cutoff date
Get-ChildItem $Path -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item -Recurse

Note: this would delete any file in that location, so if you want, you can add a filter to delete only .bak or .trn files, as shown below.
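
A hedged variant of the same pipeline that limits deletion to backup files only:

# Only consider .bak and .trn files for deletion
Get-ChildItem $Path -Recurse -Include *.bak, *.trn |
    Where-Object { $_.LastWriteTime -lt $DatetoDelete } |
    Remove-Item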

USAGE STEPS

  1. Save the above script to a folder, for example: c:\temp\cleanupjob.ps1
  2. Open the Windows Task Scheduler and select the Create Task option.
  3. Enter a name for the task, and give it a description (the description is optional).
  4. Click on the Triggers tab and set the schedule or event that will trigger the running of your PowerShell script.
  5. Next, go to the Actions tab and click New to set the action for this task to run. Set the Action to Start a program.
  6. In the Program/script box enter powershell.exe
  7. In the Add arguments box enter the value below:

-ExecutionPolicy Bypass c:\temp\cleanupjob.ps1

  8. Save your scheduled task and run it (the full command line is sketched below).
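
Putting steps 6 and 7 together, the full command line the task executes would look like the following (a sketch; the explicit -File switch is a slightly safer way of passing the script path than a bare argument):

powershell.exe -ExecutionPolicy Bypass -File c:\temp\cleanupjob.ps1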

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – AlwaysOn Automatic Seeding Failure – Failure_code 15 and Failure_message: VDI Client Failed


Recently I wrote a blog explaining a situation where taking a log backup can break automatic seeding so that synchronization does not work. Here is the blog reference: SQL SERVER – AlwaysOn Automatic Seeding – Database Stuck in Restoring State

While fixing the above, I came across an interesting situation where I wanted to add multiple databases to the availability group. Two of them were able to seed properly, but one database was not getting restored on the secondary replica. In this blog, we would learn how to fix the “VDI Client failed” error raised during automatic seeding.

After looking into various articles on the internet, I learned that the dynamic management view (DMV) sys.dm_hadr_physical_seeding_stats can be used to track the progress of seeding. I queried the DMV to see the cause of the failure. Here is the query I used to find the status.

SELECT local_database_name
,role_desc
,internal_state_desc
,failure_code
,failure_message
FROM sys.dm_hadr_physical_seeding_stats

Here is the output

[Screenshot: seed-fail-vdi-01]

I had no clue about failure_code 15 and the failure_message “VDI Client failed” shown in the above output. To move further, I checked the ERRORLOG on the secondary replica and found the interesting messages below.

2018-03-17 02:01:48.56 spid69s Error: 911, Severity: 16, State: 1.
2018-03-17 02:01:48.56 spid69s Database ‘SQLAuthorityDB’ does not exist. Make sure that the name is entered correctly.
2018-03-17 02:01:48.70 spid69s Error: 3633, Severity: 16, State: 1.
2018-03-17 02:01:48.70 spid69s The operating system returned the error ‘5(Access is denied.)’ while attempting ‘RestoreContainer::ValidateTargetForCreation’ on ‘C:\Database\SQLAuthorityDB.mdf’ at ‘container.cpp'(2759).
2018-03-17 02:01:48.70 spid69s Error: 3634, Severity: 16, State: 1.
2018-03-17 02:01:48.70 spid69s The operating system returned the error ‘5(Access is denied.)’ while attempting ‘RestoreContainer::ValidateTargetForCreation’ on ‘C:\Database\SQLAuthorityDB.mdf’.
2018-03-17 02:01:48.70 spid69s Error: 3156, Severity: 16, State: 5.
2018-03-17 02:01:48.70 spid69s File ‘SQLAuthorityDB’ cannot be restored to ‘C:\Database\SQLAuthorityDB.mdf’. Use WITH MOVE to identify a valid location for the file.
2018-03-17 02:01:48.70 spid69s Error: 3119, Severity: 16, State: 1.
2018-03-17 02:01:48.70 spid69s Problems were identified while planning for the RESTORE statement. Previous messages provide details.
2018-03-17 02:01:48.70 spid69s Error: 3013, Severity: 16, State: 1.
2018-03-17 02:01:48.70 spid69s RESTORE DATABASE is terminating abnormally.
2018-03-17 02:01:48.70 spid71s Automatic seeding of availability database ‘SQLAuthorityDB’ in availability group ‘AG’ failed with a transient error. The operation will be retried.

Now, this was interesting and clearly told us the problem. The automatic seeding was trying to restore the database and create its files in the C:\Database folder, and it was failing with the operating system error ‘5(Access is denied.)’. When I checked the other two databases which I was adding along with this one, I found that their files were in a different location and the permissions were fine there.

WORKAROUND/SOLUTION

There were two options to fix the issue so that automatic seeding could work:

  1. Provide permissions on the destination folder on the secondary replica to the service account. Once that’s done, we can restart the secondary so that seeding kicks in again. We can also remove this database from the availability group and add it again (see the sketch after this list).
  2. Move the files on the primary to a location where seeding is known to work.
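
A hedged sketch of the remove-and-re-add approach from option 1, run on the primary replica (the availability group name AG and the database name SQLAuthorityDB are taken from the messages above):

-- Remove the stuck database from the availability group, then add it back to retry seeding
ALTER AVAILABILITY GROUP [AG] REMOVE DATABASE [SQLAuthorityDB];
ALTER AVAILABILITY GROUP [AG] ADD DATABASE [SQLAuthorityDB];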

If we don’t want automatic seeding, we can also perform a manual backup and restore of this database and then add it to the availability group using the “Join Only” option in the wizard.

Have you come across any other seeding failure code? If yes, please share your experience with other readers via comments. If I can reproduce those codes, I will write a blog on them.

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – Upgrade Error – ALTER DATABASE Statement Not Allowed Within Multi-statement Transaction


Most of the time, applying a service pack to SQL Server is child’s play: double-click the exe, keep hitting Next, and finally press Update. But when it fails, you need an expert to fix the issue. Recently, one of my clients faced a problem while installing SQL Server 2014 SP2. The service pack installation failed at the final stage, and then the SQL Server services would not start. This issue has many variations where SQL Server services don’t start after patching, and a few of my earlier blogs cover similar cases. Let us learn about the error ALTER DATABASE Statement Not Allowed Within Multi-statement Transaction.

After looking at all the log files, we narrowed the issue down to the section of the SQL Server ERRORLOG shown below. Based on the errors I saw there, and on my previous experience with such errors, I realized that this is a script upgrade failure. Here are the steps which occur during a patching process:

  • Whenever any SQL Server patch is applied, the setup patches the binaries first.
  • During the restart of the instance, SQL Server startup goes through “script upgrade mode” during recovery.
  • Script upgrade mode is the phase where objects inside the databases are upgraded based on the patch just applied.
  • Depending on the features installed and the number of databases, this phase can take a varying amount of time.

2018-03-11 17:54:38.98 spid9s Upgrading subscription settings and system objects in database [TransRepl].
2018-03-11 17:54:39.12 spid9s Invalid object name ‘MSreplication_objects’.
2018-03-11 17:54:39.12 spid9s Error executing sp_vupgrade_replication.
2018-03-11 17:54:39.12 spid9s Saving upgrade script status to ‘SOFTWARE\Microsoft\MSSQLServer\Replication\Setup’.
2018-03-11 17:54:39.12 spid9s Saved upgrade script status successfully.
2018-03-11 17:54:39.12 spid9s Database ‘master’ is upgrading script ‘upgrade_ucp_cmdw_discovery.sql’ from level 201330692 to level 201331592.
2018-03-11 17:54:39.40 spid9s Database ‘master’ is upgrading script ‘msdb110_upgrade.sql’ from level 201330692 to level 201331592.
2018-03-11 17:54:39.40 spid9s ———————————-
2018-03-11 17:54:39.40 spid9s Starting execution of PRE_MSDB.SQL
2018-03-11 17:54:39.40 spid9s ———————————-
2018-03-11 17:54:39.64 spid9s Error: 15002, Severity: 16, State: 1.
2018-03-11 17:54:39.64 spid9s The procedure ‘sys.sp_dbcmptlevel’ cannot be executed within a transaction.
2018-03-11 17:54:39.64 spid9s —————————————–
2018-03-11 17:54:39.64 spid9s Starting execution of PRE_SQLAGENT100.SQL
2018-03-11 17:54:39.64 spid9s —————————————–
2018-03-11 17:54:39.65 spid9s Error: 226, Severity: 16, State: 6.
2018-03-11 17:54:39.65 spid9s ALTER DATABASE statement not allowed within multi-statement transaction.

From the above, we can see that we hit an error while executing the stored procedure sp_vupgrade_replication, and it also said that it was not able to find the object MSreplication_objects.

As per the Microsoft documentation, sp_vupgrade_replication is used for the following:

Activated by setup when upgrading a replication server. Upgrades schema and system data as needed to support replication at the current product level. Creates new replication system objects in system and user databases. This stored procedure is executed at the machine where the replication upgrade is to occur.

WORKAROUND/SOLUTION

As per my previous articles on the same topic, I knew that I needed to use trace flag 902 to bypass script upgrade mode and fix the real cause of the error. I started SQL Server using trace flag 902 as below:

NET START MSSQL$SQL2014 /T902

Refer: SQL SERVER – 2005 – Start Stop Restart SQL Server from Command Prompt

I was able to connect to SQL Server because, thanks to the trace flag, the problem script didn’t run.

Then I went straight to the database in question, expanded the System Tables section, and could see that the table MSreplication_objects was missing. Now, how do we get back this replication-related system table? To quickly confirm this, we checked the other databases which were part of replication and confirmed that this table should be present.

I then got the idea to script out this table from one of the other databases and create it here. I went ahead and did exactly that.

[Screenshot: patch-repl-01]

The extracted script looks like this:

SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[MSreplication_objects](
	[publisher] [sysname] NULL,
	[publisher_db] [sysname] NULL,
	[publication] [sysname] NULL,
	[object_name] [sysname] NOT NULL,
	[object_type] [char](2) NOT NULL,
	[article] [sysname] NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO

I was happy to see the script execute successfully. But to my surprise, the table got created as a user table instead of a system table.

[Screenshot: patch-repl-02]

Then, to promote this table to a system object, I used the undocumented stored procedure sp_MS_marksystemobject:

exec sys.sp_MS_marksystemobject  MSreplication_objects

After that, we can see that the table moved into the System Tables folder.

[Screenshot: patch-repl-03]

Once this was done, I stopped SQL Server and started it normally (without trace flag 902), and it started successfully. Have you encountered any other flavor of script upgrade mode failure?

Reference: Pinal Dave (https://blog.sqlauthority.com)



The Evolution of the DBA – Challenges, Changes and Upcoming Trends


What scares the DBA most, besides the text at midnight with the word “downtime”? Well, the answer is pretty simple: the changing work profile of the DBA. I am fortunate that during my job as a Data Consultant, I get to visit different organizations across the world and meet some of the most exceptional Data Professionals and Business Leaders. I get first-hand experience of the evolution of DBAs in a constantly changing and challenging world. It is essential for DBAs to keep up with current trends to stay relevant in their job. Let us see today a quick story of the evolution of the DBA!

[Image: evolutionofdba]

Change Is The Only Constant Thing In This World

Data is the most critical part of any application, and its changing nature always brings new challenges to DBAs. Over the last few years, however, DBAs have been very comfortable with the data. Their primary responsibilities include keeping the data safe, secure and available. As the data and workload grow, DBAs are often also responsible for the performance of the system, and once the system is robust, most DBAs have to troubleshoot unexpected issues and resolve them. It was a great life.

However, things have been changing over the last few years. New-age DBAs now have to do many more different tasks than before. Along with running the system at optimal settings, DBAs now also have to get their hands dirty with development, DevOps and NoSQL technology, as well as help organizations migrate to the cloud.

Let us look at three of the most significant shifts we observe in the DBA’s evolution journey.

Evolution 1: Adapting NoSQL Platforms

We all know NoSQL stands for Not Only SQL. Essentially, along with SQL, DBAs now also have to start understanding how non-relational database platforms work. NoSQL is not just a product; it is a philosophy that includes pretty much every product and platform related to data. When we talk about a relational database, we always think in terms of tables. However, NoSQL has gone beyond the tabular format, and you can store the data in pretty much any format you prefer.

The end goal of the NoSQL platform is to overcome the limitations that often surround a relational database. With the new age of the internet that began a few years ago, applications are now being built which can take advantage of the power of NoSQL. In large organizations, I often see that complex applications need a heterogeneous database environment. DBAs sometimes have to support these new platforms, or at least learn them well enough to make sure there is synergy between the relational database and the other platforms.

I personally see quite a few DBAs evolving to meet this change by adapting to NoSQL platforms.

Evolution 2: Move to Cloud Technology

The cloud was merely a buzzword a few years ago, and suddenly it has become a reality. Cloud technology was very simple when it all started; however, as its evolution continues, it has acquired many different layers. On a cloud platform, provisioning, data storage and security are very different from what DBAs have to do on-premises. Additionally, the fierce competition among cloud technology providers has made the lives of DBAs more complicated, as now they not only have to understand the current technology but also stay up to date with the future roadmaps.

Microsoft, AWS, EnterpriseDB, RackSpace, Google and many more cloud technology providers are out there, with offerings ranging from single-digit bills to multi-million-dollar contracts. DBAs are now forced to think like business analysts due to the introduction of pay-as-you-go billing rather than paying once up front. The hybrid and heterogeneous environment is going to be here for the next few years, and DBAs have evolved to accept it as part of their daily job.

DBAs have evolved from on-premises DBAs to hybrid DBAs, and I think there is no going back in this evolution.

Evolution 3: Modern Tools and Applications

When I started my career as a DBA, there were limited tools, and I used to spend quite a lot of time collecting various metadata-related information from a database. Once I gathered the data, the next time-consuming task was to analyze it all and surface meaningful, actionable information from it. Today, however, when I look around, I can see plenty of tools which can help me automate my tasks and save time.

Multiple vendors have come up with many different tools for automating routine and mechanical tasks. This has freed up some additional time for DBAs in their daily job. Many DBAs have taken advantage of this situation and started to learn new technologies or develop themselves in a new area. This has opened entirely new windows of opportunity for DBAs.

This particular evolution has been a blessing for the DBAs who were stuck in a daily routine where their scope for progression was minimal.

The Evolution of the DBA – Call to Action

The evolution of the DBA continues, bringing new opportunities every single day. Change is here to stay, and it is our responsibility to accommodate the latest upcoming trends. As evolving DBAs, we must adapt to and overcome the challenges of new technology.

To gain insight into the evolving challenges for DBAs, Quest commissioned Unisphere Research to survey DBAs and those responsible for the management of the corporate data management infrastructure. The results are in, and the thought-provoking findings are now available.

I strongly recommend you download the whitepaper DBAs Face New Challenges: Trend in Database Administration.

Reference: Pinal Dave (https://blog.sqlauthority.com)


SQL SERVER – FIX: Arithmetic Overflow Error in Object Execution Statistics Report in Management Studio


Many times there are unforeseen conditions that developers can’t predict while writing the code of a product. This is the reason products have bugs, and we all have a job to do. One of my clients was not able to run one of the built-in reports which come with SQL Server Management Studio. The report name was Object Execution Statistics. As per them, this report was running fine until last week and had suddenly stopped working. They wanted some ideas on this, even though it was not a show stopper for them. When they launched the report, they saw the error below:

Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.

If you have not seen it already, there is a series of blogs which I have written about SSMS reports. Here is the blog about the report I am talking about: SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

The Object Execution Statistics report is one of the database-level reports. It provides detailed historical execution data for all currently cached plans for objects within the database, aggregated over the time during which each plan has been in the cache.

Using the profiler, it was easy to find the query which runs in the background while launching the report. As expected, I could see the error by running the query directly in SSMS.

[Image: ssms-rep-err-01 – error shown when launching the report]

Before considering the query, let’s spend a few seconds understanding the error we are facing.

Arithmetic overflow error converting an expression to data type int.

Which means there is a value being passed to a column which is greater than the maximum value of the INT datatype (2,147,483,647).

Keep in mind that we will get the same error by running the same query from a Management Studio query window as well. The query is so big that I am not going to paste it here; we can copy it from profiler and paste it in SSMS. Now, we need to identify all the columns which have the datatype defined as INT. Below were the identified columns:

DECLARE @dbid INT;
DECLARE @sql_handle_convert_table TABLE (row_id INT IDENTITY
    , t_SPRank INT
    , t_execution_count INT
    , t_plan_generation_num INT
    -- ... (remaining columns of the report query omitted)

I changed the datatype for all of them to BIGINT and re-ran the query in the SSMS query window, and it executed successfully. Now, which is the column causing the issue?

DECLARE @dbid BIGINT;
DECLARE @sql_handle_convert_table TABLE (row_id BIGINT IDENTITY
    , t_SPRank BIGINT
    , t_execution_count BIGINT
    , t_plan_generation_num BIGINT
    -- ... (remaining columns of the report query omitted)

We further isolated the issue by changing the datatypes back to INT one at a time and found the column to be t_execution_count. This execution_count column comes from the DMV sys.dm_exec_query_stats. Then I ran the below query to check the top execution_count values from the DMV:

SELECT TOP 5 execution_count
FROM sys.dm_exec_query_stats
ORDER BY execution_count DESC

execution_count
—————————-
35622915365 << more than the max of an integer.
858124017
48468962
48468962
30130034

When we try to convert this number to INT, we get the same exact error that we get in the report.

SELECT CONVERT (INT, 35622915365)

Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.

This confirms that the execution_count column in the DMV sys.dm_exec_query_stats has a value which is higher than the maximum value of the INT datatype.

WORKAROUND/SOLUTION

This is a clear bug in the SSMS report. I checked the latest version today and it is not fixed. Till Microsoft fixes it, the only way to resolve this problem is to run DBCC FREEPROCCACHE for the plan handle of the query with the huge execution count. So, once we identified the plan handle, we ran:

DBCC FREEPROCCACHE (plan_handle)
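For reference, here is a minimal sketch of how one might locate that plan handle and clear just the one offending plan (this is my illustration, not the exact script we ran at the client):

-- Identify the plan whose execution_count has crossed the INT limit
SELECT TOP 1 plan_handle, execution_count
FROM sys.dm_exec_query_stats
ORDER BY execution_count DESC;

-- Clear only that plan from the cache
-- (substitute the 0x... plan_handle value returned by the query above):
-- DBCC FREEPROCCACHE (0x...);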

After that, the report executed fine without any issues.

Have you found any such issues with Standard Reports?

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Arithmetic Overflow Error in Object Execution Statistics Report in Management Studio

SQL SERVER – FIX: Error: 5511 – FILESTREAM’s file system log record under log folder is corrupted


Many times, people have asked me to share my experiences as a freelance consultant. This is one of them. Sometimes, I have been in a situation where I have to ask for a restart of SQL Server. The situation becomes stressful when the server is restarted and the production database fails to come online. In this blog, we would discuss error 5511 for FILESTREAM, which I encountered after restarting the SQL Server service.

We restarted the SQL Server service, and one of the databases failed to come online with error 5511. We even restarted the server, but it did not seem to help. Here is what we found in the SQL Server ERRORLOG.

The error was referring to a corrupt FILESTREAM file – ‘GUID.txt’ – under the log folder ‘\\?\J:\Blobstore_SQLAuthority\$FSLOG’.

I searched on the internet and found this article: https://blogs.msdn.microsoft.com/psssql/2011/06/23/how-it-works-filestream-rsfx-garbage-collection/

As per the article: In the file stream container directory tree is the $FSLOG directory with a set of files as well. The file names have embedded information and are considered part of the database log chain. Part of the file name is an LSN that matches the LSN file name of the actual file storage. The File stream garbage collector uses these files to help determine the cleanup requirements.

DANGER:  Deleting a file directly from the file stream container is considered database corruption and dbcc checkdb will report corruption errors.  These files are linked to the database in a transactional way and the deletion of a file is the same as a damaged page within the database.

WORKAROUND/SOLUTION

In our case, we had just restarted the SQL Server service and there was no hardware failure. So, I thought of taking a risk: we moved the corrupted file to a different folder and then tried to bring the database online. The database came online. The above blog says we should not directly delete files from the $FSLOG folder, but we took that risk (by moving the file out) and it worked. I did not know how this would affect the database itself and the application, so we ran CHECKDB, and it came out clean. My client also monitored the server for a couple of weeks and updated me that everything seems to work fine.
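For reference, the T-SQL side of what we did looked roughly like the sketch below (the database name here is hypothetical; the file itself was moved out of the $FSLOG folder in Windows Explorer):

-- Attempt to bring the database online after moving the corrupt $FSLOG file out
ALTER DATABASE [Blobstore_SQLAuthority] SET ONLINE;

-- Verify the database is consistent after the risky file move
DBCC CHECKDB ([Blobstore_SQLAuthority]) WITH NO_INFOMSGS, ALL_ERRORMSGS;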

Have you seen such issues?

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Error: 5511 – FILESTREAM’s file system log record under log folder is corrupted

SQL SERVER – Error: 1067 – Unable to Bring Analysis Service Online in Cluster


Even after working with many clients on various issues (Always On, deployment, performance tuning), I always get new challenges as part of my freelancing job. One thing I have learned is that one needs a systematic approach to diagnose an issue and, of course, passion to solve the problem. In this blog, we would learn how to fix error 1067, which comes while trying to bring Analysis Services online in a cluster.

Recently, one of my clients wanted me to fix an issue where they were not able to start SQL Server Analysis Services, which was installed in a cluster. The major challenge here was that all the data available in the logs was very generic, and what was surprising was that even the cluster logs had generic error messages. Refer below:

If you are new to clusters and want to know how to generate a cluster log, refer to my earlier blog.

ERR [RES] Generic Service : Service failed during initialization. Error: 1067.
ERR [RHS] Online for resource Analysis Services (SSAS) failed.
WARN [RCM] HandleMonitorReply: ONLINERESOURCE for ‘Analysis Services (SSAS)’, gen(29) result 5018/0.
ERR [RCM] rcm::RcmResource::HandleFailure: (Analysis Services (SSAS))
WARN [RCM] Not failing over group Data Warehousing Instances, failoverCount 4, failoverThresholdSetting 4294967295, computedFailoverThreshold 1, lastFailover

From the cluster log (ERR stands for error), we can see that we pretty much have a very generic error – “Error: 1067” – which means “The process terminated unexpectedly”.

[Image: ssas-clu-err-01 – generic error 1067 in the cluster log]

As I mentioned earlier, let’s start with the basic approach. We know this is a cluster and the SSAS instance is a clustered instance. There are two ways we can confirm that the service starts successfully:

  • Successfully starts as a clustered resource in the failover cluster manager
  • Successfully starts when we try to start it locally using Service Control Manager.

We know that we are failing with the first option. So, we went on to the second option.

  • We opened services.msc → right-clicked on the SSAS service → clicked Start. We got the same error 1067.
  • Used the NET START <servicename> method, and that too failed with the same error.
  • Then we tried to start the service directly using the SSAS executable (msmdsrv.exe).
  • We went to the properties of the SSAS service in Service Control Manager to get the path of the EXE and the startup parameters being used. We saw the below values:

“D:\Program Files\Microsoft SQL Server\MSAS11.SSAS\OLAP\bin\msmdsrv.exe” -s “F:\OLAP\Config”

SOLUTION/WORKAROUND

As soon as my client saw the above data, he screamed, “I think I know what the problem is!” So, I asked him what it was, and he replied – “We don’t have an F:\ drive in this cluster!!”

He then explained the cluster storage re-shuffling they had done recently, which may have introduced this issue. Now, how do we fix this?

  1. Go back and rename the cluster drive letters to the original ones [this was unlikely to happen]
  2. Find another way to edit those values – and we are talking about the registry here.

Every service installed in Windows gets registered under:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<ServiceName>\

So, in this case, we headed to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSAS$SSAS\ImagePath
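(The path above is the service key plus the ImagePath value name.) Incidentally, if the database engine is running on the same node, you can peek at such a value from T-SQL using the undocumented xp_regread procedure – shown purely for illustration; the actual edit was done in regedit:

-- Read the ImagePath value of the clustered SSAS service (undocumented procedure)
EXEC master.dbo.xp_regread
    @rootkey    = N'HKEY_LOCAL_MACHINE',
    @key        = N'SYSTEM\CurrentControlSet\Services\MSAS$SSAS',
    @value_name = N'ImagePath';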

Here we could see the same value that we saw in the service properties. We changed it to the below:

“D:\Program Files\Microsoft SQL Server\MSAS11.SSAS\OLAP\bin\msmdsrv.exe” -s “M:\OLAP\Config”

From then on, we could start the SSAS service locally from Service Control Manager, and we were also able to bring it online from Failover Cluster Manager.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Error: 1067 – Unable to Bring Analysis Service Online in Cluster

SQL SERVER – Be Careful with Logon Triggers – Don’t use Host_Name


Before you read further, let me warn you that logon triggers are potentially evil and might end up locking everyone out of the instance. Be careful! Don’t just copy-paste the code and create the trigger; you might end up with downtime while you fix the issue.
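If you do get locked out, the usual escape hatch is the Dedicated Admin Connection (DAC), since logon triggers do not fire for DAC sessions. A minimal sketch, assuming the trigger created below:

-- Connect as ADMIN:<servername> (the DAC), then drop the offending trigger
DROP TRIGGER RestrictAccessPerHostname ON ALL SERVER;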

Recently, one of my blog readers sent me an email asking for a quick suggestion. He informed me that he is using a LOGON trigger to control access to the server. Here is the code for the trigger.

CREATE TRIGGER RestrictAccessPerHostname
ON ALL SERVER
FOR LOGON
AS
BEGIN
    -- Block the login if the client-reported host name is not in the allowed list
    IF HOST_NAME() NOT IN ('HostName1','HostName2','HostName3','HostName4','HostName5')
    BEGIN
        RAISERROR('You cannot connect to SQL Server from this machine', 16, 1);
        ROLLBACK;
    END
END

This code checks the host_name of the incoming connection, and if it is not in the list, the login is not allowed.

He asked if this method is OK for restricting server access. By looking at the code and the logic, it seems perfectly reasonable: we are checking the host_name against a predefined list.

MY SUGGESTION

In my opinion, we shouldn’t use a logon trigger as a mechanism to stop logins to SQL Server when the filter can be bypassed. What if someone changes his/her laptop IP or name to match an allowed IP or name? You should ideally use a firewall for this purpose. In the remaining part of the blog, you can see how easy it is to spoof the host_name sent to SQL Server.

If I create the trigger and try to login from my laptop (which is not in the listed hostnames), I would get an error as below.

[Image: logon-trig-01 – login blocked by the trigger]

Now, here is the trick to bypass this specific trigger. In SSMS, we can add “Workstation ID = HostName1” under “Additional Connection Parameters”, as shown below.

[Image: logon-trig-02 – overriding Workstation ID in SSMS]

This would allow us to connect from any machine, even if its name is not HostName1 (or any other name in the trigger). This is not what you might have had in mind while designing the trigger.
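You can verify what SQL Server actually sees for the spoofed session with a simple query – the value is whatever the client chose to send in the connection string:

-- Returns the client-reported workstation name, e.g. 'HostName1' after the override above
SELECT HOST_NAME() AS reported_host_name;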

Are you using such triggers to restrict server access? What filter are you using? Can it be bypassed like I showed above? Please comment and let me know.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Be Careful with Logon Triggers – Don’t use Host_Name
