Channel: SQL Archives - SQL Authority with Pinal Dave

SQL SERVER – Installation Stuck at Offline Installation of Microsoft Machine Learning Server Components


I was trying to install SQL Server 2017 on my freshly formatted machine and was stuck at the screen below, which says – Offline Installation of Microsoft Machine Learning Server Components. In this blog, we will learn how to move forward when the installation is stuck at this screen.

[Image: setup-offline-01]

First, let's understand the cause of this screen, because when I was installing SQL Server with my client, I didn't see it at all.

It looks like when we select Machine Learning, the setup connects to the internet and downloads a few components. If it is not able to reach the internet, it shows the above screen.

I looked at Detail.txt and then RSetup.log, and found the following in RSetup.log.

2018-11-26T03:48:50 INFO Command invoked: C:\Program Files\Microsoft SQL Server\140\Setup Bootstrap\SQL2017\x64\RSetup.exe /checkurl /component SRS /version 9.2.0.24 /language 1033 /logfile C:\Program Files\Microsoft SQL Server\140\Setup Bootstrap\Log\20181125_193933\RSetup.log
2018-11-26T03:48:50 INFO RSetup.exe version: 9.2.0.44
2018-11-26T03:48:50 INFO Validating URL: https://go.microsoft.com/fwlink/?LinkId=851507&clcid=1033
2018-11-26T03:49:11 WARN Error making request: Unable to connect to the remote server
2018-11-26T03:49:11 WARN Error validating download
2018-11-26T03:49:11 INFO Exiting with code 851507

All it says is that setup is not able to reach the internet and download the packages. Which packages are needed depends on what features were selected during the feature selection screen. A picture is worth a thousand words, and the two images below explain what I mean.

[Image: setup-offline-02]

[Image: setup-offline-03]

WORKAROUND/SOLUTION

As explained in the message, we need to download ALL the files from ALL the links provided below and keep them in a folder. Here are the links for your ready reference.

Here are the URLs which come when “R” is selected.

Here are the URLs which would come when “Python” is selected.

Hope this helps someone and saves the time I spent finding the behavior and the solution.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Installation Stuck at Offline Installation of Microsoft Machine Learning Server Components


SQL SERVER – FIX: Msg 15281 – SQL Server Blocked Access to STATEMENT ‘OpenRowset/OpenDatasource’ of Component ‘Ad Hoc Distributed Queries’


I wrote a blog earlier about reading data from Excel via SQL Server. This was done using 'OpenRowset'. One of my blog readers sent me an email saying they were getting an error. In this blog, we will see how to fix Msg 15281 – SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 'Ad Hoc Distributed Queries'.

[Image: blockedaccess]

Here is the query which my blog reader was using.

SELECT * FROM OPENROWSET(
'Microsoft.ACE.OLEDB.12.0'
,'Excel 12.0;Database=\\FileServer\ExcelShare\HRMSDATA.xlsx;HDR=YES;IMEX=1'
,'SELECT * FROM [EMPMASTER$]')

The error message while running the query is as follows.

Msg 15281, Level 16, State 1, Line 1

SQL Server blocked access to STATEMENT ‘OpenRowset/OpenDatasource’ of component ‘Ad Hoc Distributed Queries’ because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of ‘Ad Hoc Distributed Queries’ by using sp_configure. For more information about enabling ‘Ad Hoc Distributed Queries’, search for ‘Ad Hoc Distributed Queries’ in SQL Server Books Online.

SOLUTION/WORKAROUND

Here is the T-SQL which can be used to enable the usage of open rowset.

EXEC sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
GO
EXEC sp_configure 'ad hoc distributed queries', 1
RECONFIGURE WITH OVERRIDE
GO

After this, he was able to pull data from the Excel sheet into SQL Server. Well, this was quite a simple solution for a complicated-looking problem. If you have alternative suggestions, please do share.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Msg 15281 – SQL Server Blocked Access to STATEMENT ‘OpenRowset/OpenDatasource’ of Component ‘Ad Hoc Distributed Queries’

SQL SERVER – xp_create_subdir() Returned Error 183, ‘Cannot Create a File When that File Already Exists


A new client, a new idea, and a new blog! That's how I have been writing a daily blog for many years now. In this blog, I share what I learned from a client who faced the error "xp_create_subdir() returned error 183, 'Cannot create a file when that file already exists'" while running a maintenance plan.

THE SITUATION

This was a newly deployed server where they were having a performance issue. They hired me for a Comprehensive Database Performance Health Check. Along with many other issues, I found that they didn't have regular backups of the system databases (master, model, and msdb). We discovered that even though they had a maintenance plan created, it was failing. I checked the maintenance plan logs and SQL Agent job logs and found that the maintenance plan was failing due to the errors below.

[Image: xp_subdir-01]

Here is the Error message from the highlighted section.

Executing the query “EXECUTE master.dbo.xp_create_subdir N’F:\Backup\ma…” failed with the following error: “xp_create_subdir() returned error 183, ‘Cannot create a file when that file already exists.'”. Possible failure reasons: Problems with the query, “ResultSet” property not set correctly, parameters not set correctly, or connection not established correctly.

Then I used the "View T-SQL" hyperlink to dig further and found that the plan creates a directory for each database and then takes the backup. Here is the command which was failing.

EXECUTE master.dbo.xp_create_subdir N'F:\Backup\master'

When I looked at the folder, I found that there were files with the same name as the database name.

[Image: xp_subdir-02]

and that's the cause of the error. You would get the same error if you tried to create a folder in Windows with the same name as an existing file.
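The same conflict is easy to reproduce outside SQL Server. Here is a small Python sketch (using a throwaway temporary directory, not the real backup share) showing that creating a directory fails when a file with the same name already exists; on Windows, the underlying error is in fact WinError 183, the same "Cannot create a file when that file already exists":

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Simulate a stray backup file named exactly like the database.
    stray_file = os.path.join(root, "master")
    open(stray_file, "w").close()

    try:
        # This is essentially what xp_create_subdir attempts to do.
        os.mkdir(stray_file)
    except FileExistsError as e:
        print("Cannot create folder:", e)
```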

[Image: xp_subdir-03]

WORKAROUND/SOLUTION

As we now understand, there is already a file with the same name as the folder SQL Server wants to create. In my client's situation, they had moved a database from the old production server to the new production server, so these were backups taken from the previous server. We renamed them with a meaningful name and a .bak extension, fired the maintenance plan again, and the backups were taken successfully.

[Image: xp_subdir-04]

In short, if you get this error, check whether you already have a file with the database's name in the folder. If yes, resolve the conflict: rename the file or move it somewhere else.
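If many databases are affected, the rename step itself can be scripted. A minimal Python sketch, where the backup root path, the database list, and the `_old.bak` naming convention are illustrative choices, not anything from the original maintenance plan:

```python
import os

def rename_conflicting_files(backup_root, db_names):
    """For each database name, if a plain FILE with that exact name exists
    in the backup root, rename it so xp_create_subdir can create the folder."""
    renamed = []
    for name in db_names:
        path = os.path.join(backup_root, name)
        if os.path.isfile(path):  # a file (not a folder) is the conflict
            os.rename(path, path + "_old.bak")
            renamed.append(name)
    return renamed

# Example (hypothetical path): rename_conflicting_files(r"F:\Backup", ["master", "model", "msdb"])
```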

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – xp_create_subdir() Returned Error 183, ‘Cannot Create a File When that File Already Exists

Detect and Diagnose SQL Server Performance Issues with Spotlight Cloud


Every user wants their applications to run faster. My primary responsibility is to help people with SQL Server performance tuning and optimization, and I believe there are three important challenges any DBA faces. Today we will discuss how to detect and diagnose SQL Server performance issues with Spotlight Cloud while looking at those three challenges: Workload Analysis, Smart Alarms, and Health Checks.

[Image: spotlight]

Workload Analysis

When DBAs and developers start building an application, it usually runs fast, as there is no real workload on it. Often, DBAs and developers do stress testing with third-party tools or by simulating the workload. Honestly, though, when an application goes live, the scenario is very different. It is nearly impossible to predict the usage pattern and workload distribution of any application once it starts to grow. This is one area where we often have to depend on a third-party tool to help us out. Spotlight Cloud provides a way to dissect your workload across a range of dimensions and from different perspectives, allowing DBAs to pinpoint what is having the greatest impact on specific aspects of their workload.

[Image: spotlight2]

Smart Alarms

The Internet is full of solutions to the various problems one can face in SQL Server. The real challenge for DBAs is finding the actual problem while they are struggling with application performance. Think of it this way: suddenly our application starts running slowly, and our customers start calling because they cannot use it. It is indeed embarrassing to have to wait for our customers to tell us when things go downhill. Additionally, finding the root cause of a performance problem gets even more difficult when customers are watching continuously. Spotlight Cloud enables problem resolution to start the second an issue is detected, by presenting data trends, contextual data, and advisories in the alarm itself. This allows DBAs to shortcut the initial triage and begin solving issues immediately, getting to the root cause faster.

[Image: spotlight3]

Health Checks

[Image: spotlight4]

One of the biggest challenges for every technical decision maker is the health check. SQL Server is very vast, and the versatility of its features makes the life of a DBA very complicated. We hear the words "health check" all the time, but to be very honest, there is no clear definition of a health check and very little guidance is available on the topic. In addition to SQL Server performance, DBAs have many other tasks to handle as well, so they often skip proactive SQL Server health checks, which eventually leads to poor application performance over time. Spotlight Cloud runs a range of proactive checks on databases to look for worrying trends or deviations from best practice. This feature can identify problems in the making and will suggest or take corrective action.

Besides the three key features listed above, Spotlight Cloud also offers other important functionality: the Spotlight Tuning Pack, a native mobile application, and data history. We will discuss those features in future blog posts.

Why Spotlight Cloud?

Let me ask you one question: what is the most important factor for your business, cost or performance? If your answer is both, you are at the right place, and you should consider Spotlight Cloud as an effective tool for detecting and diagnosing SQL Server performance issues. Being able to efficiently track how much compute and storage your cloud workloads consume allows your business to plan early and be proactive. It is critical for any business to efficiently analyze workloads and set up smart alarms for an optimal health check. Failing to proactively monitor your applications for potential issues can lead to substandard performance.

Spotlight Cloud is an efficient tool that can detect and diagnose SQL Server performance issues. Here is a good overview video of Spotlight Cloud. Because it is a SaaS application, it is really easy to set up and configure. There are no on-premises performance repositories to worry about, and it keeps your performance data for up to a year so you can monitor trends and patterns. Try out Spotlight Cloud and let me know what you think in the comments section.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on Detect and Diagnose SQL Server Performance Issues with Spotlight Cloud

SQL SERVER – Error: Parameter ‘ProbePort’ does not exist on the cluster object. Unable to set Probe Port for Azure Load Balancer


Azure is gaining popularity, and I am getting clients who want to create an Always On availability group as their high availability solution in Azure virtual machines. To keep myself up to date, I also try recreating customers' scenarios in my lab. In this blog, we will see how to fix the error Parameter 'ProbePort' does not exist on the cluster object, which appears while configuring the probe port.

[Image: alwaysonerror]

I was following Microsoft's article about configuring a load balancer for an Always On availability group in Azure. Everything was going flawlessly, without any error, until I ran the script below.

$ClusterNetworkName = "WinCluster"
$IPResourceName = "MyListenerIP"
$ListenerILBIP = "10.0.0.22"
[int]$ListenerProbePort = 59999
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

It failed with the long error below.

Set-ClusterParameter : Parameter ‘ProbePort’ does not exist on the cluster object ‘WinCluster’. If you are trying to update an existing parameter, please make sure the parameter name is specified correctly. You can check for the current parameters by passing the .NET object received from the appropriate Get-Cluster* cmdlet to “| Get-ClusterParameter”. If you are trying to update a common property on the cluster object, you should set the property directly on the .NET object received by the appropriate Get-Cluster* cmdlet. You can check for the current common properties by passing the .NET object received from the appropriate Get-Cluster* cmdlet to “| fl *”. If you are trying to create a new unknown parameter, please use -Create with this Set-ClusterParameter cmdlet.
At line:5 char:39
+ … ourceName | Set-ClusterParameter -Multiple @{“Address”=”$ILBIP”;”Prob …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Set-ClusterParameter], ClusterCmdletException
+ FullyQualifiedErrorId : InvalidOperation,Microsoft.FailoverClusters.PowerShell.SetClusterParameterCommand

The message says that ProbePort is not a valid parameter. I initially thought my PowerShell might be old and didn't understand the parameter, but that was not the cause.

WORKAROUND/SOLUTION

Actually, I was using the wrong parameter values, which caused the error. If you look closely at my command, I was using the Windows cluster name in the first parameter, "$ClusterNetworkName". This should instead be the cluster network name, such as "Cluster Network 1" (a value shown by Get-ClusterNetwork).

The second mistake was the "$IPResourceName" value. This should be the name of the IP address resource, not the value shown in the cluster manager UI. Right-click the IP resource, go to Properties, and pick the Name from there.

Once I fixed both parameters, I was able to run the script and configure the ILB correctly.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Error: Parameter ‘ProbePort’ does not exist on the cluster object. Unable to set Probe Port for Azure Load Balancer

SQL SERVER – Fix : Error Msg 1813, Level 16, State 2, Line 1 Could not open new database databasename. CREATE DATABASE is aborted


It is indeed a pleasure when we write a timeless article for SQL Server and DBAs still read it after many years. In 2007, I wrote an article about an error related to not being able to open a new database. It has been one of the top-viewed blog posts of all time. Recently, SQL Server expert Jeremy Roe sent me an updated version of the script, for the same error, on a recent version of SQL Server.

[Image: firelog]

Here is the error message:

Fix: Error Msg 1813, Level 16, State 2, Line 1
Could not open new database ‘yourdatabasename’. CREATE DATABASE is aborted.

Jeremy wrote this script for SQL Server 2016, but we believe it will work on any version from SQL Server 2012 onwards. Let us see his script for fixing the error about not being able to open a database.

Modified step tips:

  • Rename the old log file, if present, to something like blah_log_OLD, just to be safe
  • Rename the databases carefully so as not to mix them up or accidentally delete the wrong one
  • Once everything is confirmed working, delete the fake database files and the unused older log file if they are no longer needed

The SQL Server logs are corrupted, and they need to be rebuilt to make the database operational.
Follow all the steps in order. Replace yourdatabasename with the real name of your database.

1. Create a new database with the same name as the database you are trying to recover or restore (in our error message, it is yourdatabasename). Make sure the names of the MDF file (primary data file) and LDF file (log file) are the same as the previous database's data and log files.

2. Stop SQL Server. Move the original MDF file from the old server (or location) to the new server (or location), replacing the just-created MDF file. Delete the just-created LDF file on the new server.

3. Start SQL Server. The database will be marked as suspect, which is expected.

4. Make sure the system tables of the master database allow updates.

USE MASTER
GO
sp_CONFIGURE 'allow updates', 1
RECONFIGURE WITH OVERRIDE
GO

5. Change the database to emergency mode.

For SQL Server 2005-2008:

BEGIN TRAN
UPDATE sysdatabases
SET status = 32768
WHERE name = 'yourdatabasename'
COMMIT TRAN

For SQL Server 2012 and later, this is replaced by:

ALTER DATABASE yourdatabasename SET EMERGENCY;

6. Restart SQL Server. (This is a must; if it is not done, SQL Server will throw an error.)

7. Execute this command in a query window of Management Studio; it will create a new log file. Keep the name of this file the same as the LDF file just deleted from the new server.

For SQL Server 2005-2008:

DBCC TRACEON (3604)
DBCC REBUILD_LOG(yourdatabasename,'c:\yourdatabasename_log.ldf')
GO
DBCC TRACEOFF (3604)

For SQL Server 2012 and later:

ALTER DATABASE yourdatabasename 
REBUILD LOG ON (NAME=yourdatabasename_log, FILENAME='A:\sql_log\yourdatabasename_log.ldf')

DBCC REBUILD_LOG accepts two parameters: the first is the database name and the second is the physical path of the log file. Make sure the path is a physical path; if you put a logical file name, it will return an error.

8. Reset the database status using the following command.

sp_RESETSTATUS yourdatabasename

9. Turn off updates to the system tables of the master database by running the following script.

USE MASTER
GO
sp_CONFIGURE 'allow updates', 0
RECONFIGURE WITH OVERRIDE
GO

10. Take the database out of emergency mode and back to MULTI_USER.

ALTER DATABASE yourdatabasename SET MULTI_USER;
GO

Well, that’s it. Thanks, Jeremy for an awesome update to the original script.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Fix : Error Msg 1813, Level 16, State 2, Line 1 Could not open new database databasename. CREATE DATABASE is aborted

SQL SERVER – FIX: Msg 15190 – There are still remote logins or linked logins for the server


One of my most successful offerings has been the Comprehensive Database Performance Health Check. Sometimes during my assistance, random issues appear which I try to solve as well. In a recent engagement, one of the client's developers asked about an error he was receiving while running sp_dropserver. In this blog, we will see how to fix error 15190 – There are still remote logins or linked logins for the server.

Here is the query which we were using to drop the linked server.

EXEC sp_dropserver 'SQLAUTHORITY'

And here is the output

Msg 15190, Level 16, State 1, Procedure sp_dropserver, Line 56 [Batch Start Line 11] There are still remote logins or linked logins for the server ‘SQLAUTHORITY’.

In SSMS, we could see the following.

[Image: sp_drop-err-01]

SOLUTION/WORKAROUND

As the message says, there are logins associated with the linked server. If you want to know what they are, you can use

EXEC sp_helplinkedsrvlogin

to find them, and then drop them one by one using the system stored procedure below.

EXEC sp_droplinkedsrvlogin

In my case, I was not worried about keeping them, so I used the command below.

EXEC master.dbo.sp_dropserver @server=N'SQLAUTHORITY', @droplogins='droplogins'

After adding the additional parameter, I was able to drop the linked server. Well, sometimes there are simple solutions to complex problems. Let me know if you have ever faced such an error in your production environment.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Msg 15190 – There are still remote logins or linked logins for the server

SQL SERVER – Error 21028 : Replication Components are not Installed on This Server


One of my clients, for whom I had installed SQL Server and configured Always On, came back to me with an error message they were getting while trying to configure replication. In this blog, we will learn how to fix the error "Microsoft SQL Server Management Studio is unable to access replication components because replication is not installed on this instance of SQL Server".

Here is the exact error message which they received when they tried configuring the distributor.

Microsoft SQL Server Management Studio is unable to access replication components because replication is not installed on this instance of SQL Server. For information about installing replication, see the topic Installing Replication in SQL Server Books Online.

ADDITIONAL INFORMATION:
Replication components are not installed on this server. Run SQL Server Setup again and select the option to install replication. (Microsoft SQL Server, Error: 21028)

[Image: repl-missing-01]

WORKAROUND/SOLUTION

As the message says, we need to install the replication feature. Since the SQL Server product is already installed, we need to "add" the replication feature to the existing instance.

Here are the steps to add the feature.

  1. Launch SQL Server Installation Center and choose the option below.
    [Image: repl-missing-02]
  2. Continue through the wizard until the screen where you need to change the selection. Choose the second option and select the instance name.
    [Image: repl-missing-03]
  3. As mentioned in the error message, add Replication as shown below.
    [Image: repl-missing-04]
  4. And this is how it should end.
    [Image: repl-missing-05]

All is well that ends well.

Hope this helps a new DBA fix the error. Please comment and let me know if this was useful.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Error 21028 : Replication Components are not Installed on This Server


SQL SERVER – Information Message: Cannot shrink file in database to xxxx pages as it only contains yyyy pages.


I often get emails from my blog readers asking for various clarifications about my own blogs. Here is an interesting question from a reader about the message Cannot shrink file in database to xxxx pages as it only contains yyyy pages, which he encountered after following the blog below.

How to Shrink All the Log Files for SQL Server? – Interview Question of the Week #203

He modified the script and hardcoded the size in the shrink file command to 1 GB, and he was getting an informational message.

It is easy to reproduce the message.

USE [master]
GO
IF DB_ID('DB_Shrink_Test') IS NOT NULL
BEGIN
	ALTER DATABASE [DB_Shrink_Test] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
	DROP DATABASE [DB_Shrink_Test]
END
GO
CREATE DATABASE [DB_Shrink_Test]
GO
USE [master]
GO
ALTER DATABASE [DB_Shrink_Test] MODIFY FILE ( NAME = N'DB_Shrink_Test_log', SIZE = 2GB )
GO
-- shrinking first time to minimum possible size using my script (I have added my database name)
DECLARE @ScriptToExecute VARCHAR(MAX);
SET @ScriptToExecute = '';
SELECT
@ScriptToExecute = @ScriptToExecute +
'USE ['+ d.name +']; CHECKPOINT; DBCC SHRINKFILE ('+f.name+');'
FROM sys.master_files f
INNER JOIN sys.databases d ON d.database_id = f.database_id
WHERE f.type = 1 AND d.database_id > 4
AND d.name = 'DB_Shrink_Test'
--SELECT @ScriptToExecute ScriptToExecute
EXEC (@ScriptToExecute)
-- shrinking again with 1 GB size
USE [DB_Shrink_Test]
GO
DBCC SHRINKFILE(DB_Shrink_Test_log,1024)
GO
-- cleanup
/*
USE [master]
GO
ALTER DATABASE [DB_Shrink_Test] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
DROP DATABASE [DB_Shrink_Test]
GO
*/

Here is the output.

Cannot shrink file '2' in database 'DB_Shrink_Test' to 131072 pages as it only contains 1024 pages.

DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
---- ------ ----------- ----------- --------- --------------
8    2      1024        1024        1024      1024

(1 row affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

If you know the basics of SQL Server shrinking, the message should be easy to understand; but for those who are not well versed in SQL Server database architecture, it should be an interesting read.

WORKAROUND/SOLUTION

Read the message again. Let's convert the pages to sizes and read it once more. Each page is 8 KB in size.

Cannot shrink file '2' in database 'DB_Shrink_Test' to 1024 MB as it only contains 8 MB.

So, the message is essentially saying that you are trying to shrink a file to a size that is larger than its current size.

In my case, the database was initially created with an 8 MB log (inherited from the model database); that's why my script was able to shrink it to the minimum possible size.
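The conversion above is plain arithmetic: a SQL Server page is 8 KB, so multiplying a page count by 8 and dividing by 1024 gives megabytes. A quick Python sketch using the numbers from the message:

```python
PAGE_KB = 8  # every SQL Server data/log page is 8 KB

def pages_to_mb(pages):
    """Convert a page count (as reported by DBCC SHRINKFILE) to megabytes."""
    return pages * PAGE_KB / 1024

# The message asked to shrink to 131072 pages, but the file holds only 1024.
print(pages_to_mb(131072))  # the requested target: 1024.0 MB (1 GB)
print(pages_to_mb(1024))    # the file's actual size: 8.0 MB
```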

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Information Message: Cannot shrink file in database to xxxx pages as it only contains yyyy pages.

SQL SERVER – Cannot Shrink Log File Because Total Number of Logical Log Files Cannot be Fewer than 2


Since I wrote my article with a script for shrinking all the log files, I have received many emails from blog readers about various messages they see during a shrink, and I have been sharing them with other readers as well. In this blog, we will discuss the message Cannot shrink log file because total number of logical log files cannot be fewer than 2.

First, go through my earlier blog to understand the context.

How to Shrink All the Log Files for SQL Server? – Interview Question of the Week #203

Here is the exact message, as reported by the DBCC SHRINKFILE command.

Cannot shrink log file 2 (DB_Shrink_Test_log) because total number of logical log files cannot be fewer than 2.

DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
---- ------ ----------- ----------- --------- --------------
8    2      497         497         496       496

(1 row affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Note that this message appears when we provide a desired size for the file, as follows.

DBCC SHRINKFILE ('<LDF Logical Name>',<some size>);

Here is the complete script to reproduce the message.

USE [master]
GO
IF DB_ID('DB_Shrink_Test') IS NOT NULL
BEGIN
	ALTER DATABASE [DB_Shrink_Test] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
	DROP DATABASE [DB_Shrink_Test]
END
GO
CREATE DATABASE [DB_Shrink_Test]
GO
USE [DB_Shrink_Test]; 
CHECKPOINT; 
DBCC SHRINKFILE ('DB_Shrink_Test_log',1);

[Image: shrink2-01]

What is the meaning of the message? If we run DBCC LOGINFO, we can see the following output.

[Image: shrink2-02]

NOTE: I got the message in SQL Server 2017 but not in SQL 2008.

This means we already have only 2 VLFs (virtual log files) in the database, so SQL Server can't shrink the file any further, hence the message.

If you get this message and the LDF file is still huge (with only 2 VLFs), you need to find a way to get rid of those big VLFs. I think this can be done by growing the LDF and making sure the Status becomes zero for those VLFs. If you have faced such a situation, please share your solution via the comments to help others.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Cannot Shrink Log File Because Total Number of Logical Log Files Cannot be Fewer than 2

SQL SERVER – Cluster Install Failure – Code 0x84cf0003 – Updating Permission Setting for Folder Failed


[Image: SQL-Cluster]

There are various issues which I have seen during SQL Server installation, and most of the time they are intuitive. The error message is usually helpful and points in the right direction. In this blog, we will discuss the error Updating permission setting for folder failed:

Here is the exact error which we could see in setup logs under the BootStrap folder.

Updating permission setting for folder ‘C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA’ failed. The folder permission setting were supposed to be set to ‘D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266783-3050454056-335720097-2195381415)’.

Permission error occurs when you use a volume mount point in SQL Server Setup

My client was not installing it on the "root" of the mount point. The complete message from Detail.txt is shown below. (I have added line numbers and removed the date/time for better visibility.)

  1. SQLEngine: : Checking Engine checkpoint ‘SetSecurityDataDir’
  2. SQLEngine: –SqlEngineSetupPrivate: Setting Security Descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926) on Directory C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA
  3. Slp: Sco: Attempting to set security descriptor for directory C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA, security descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  4. Slp: Sco: Attempting to normalize security descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  5. Slp: Sco: Attempting to replace account with sid in security descriptor D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  6. Slp: ReplaceAccountWithSidInSddl — SDDL to be processed: D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  7. Slp: ReplaceAccountWithSidInSddl — SDDL to be returned: D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)
  8. Slp: Prompting user if they want to retry this action due to the following failure:
  9. Slp: The following is an exception stack listing the exceptions in outermost to innermost order
  10. Slp: Inner exceptions are being indented
  11. Slp:
  12. Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.SqlDirectoryException
  13. Slp: Message:
  14. Slp: Updating permission setting for folder ‘C:\ClusterStorage\FIN_Data\MSSQL\MSSQL13.MSSQLSERVER\MSSQL\DATA’ failed. The folder permission setting were supposed to be set to ‘D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-1715010018-2870266784-3050454057-335720098-2195381926)’.
  15. Slp: HResult : 0x84cf0003
  16. Slp: FacilityCode : 1231 (4cf)
  17. Slp: ErrorCode : 3 (0003)

WORKAROUND/SOLUTION

We checked and made sure that the service account had the below permissions in the security policy:

  • Act as Part of the Operating System
  • Bypass Traverse Checking
  • Lock Pages In Memory
  • Log on as a Batch Job
  • Log on as a Service
  • Replace a Process Level Token
  • Backup files and directories
  • Debug Programs
  • Manage auditing and security log

I gave all possible permissions to the various accounts on the folders, including “Full Control” to “Everyone”.

At last, we found that this was due to the “Audit Object Access” policy, which was enabled from the domain controller via GPO. Once we disabled it, the installation went fine.
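If you suspect the same cause, you can quickly check whether Object Access auditing is being enforced on the node from an elevated command prompt (a diagnostic sketch, not part of the original troubleshooting):

```
auditpol /get /category:"Object Access"
```

If the "File System" subcategory shows Success/Failure auditing and you did not configure it locally, it is likely coming from a GPO.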

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Cluster Install Failure – Code 0x84cf0003 – Updating Permission Setting for Folder Failed

SQL SERVER – Installation Failure – Specified Instance Via Transform is Already Installed. MSINEWINSTANCE Requires a New Instance that is not Installed


This is one of the situations where my client installed one clustered instance of SQL Server and the second one was failing. In this blog, I would share my findings about the error: Specified instance via transform is already installed. MSINEWINSTANCE requires a new instance that is not installed.

THE SITUATION

My client tried installing a SQL Server clustered instance with the name Contoso. It failed with the below error in the Summary.txt file.

Feature: Database Engine Services
Status: Failed: see logs for details
Reason for failure: An error occurred during the setup process of the feature.
Next Step: Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
Component name: SQL Server Database Engine Services Instance Features
Component error code: 0x86D80052
Error description: The common properties for resource ‘SQL IP Address 1 (CONTOSO)’ could not be saved. Error: There was a failure to call cluster code from a provider. Exception message: The cluster IP address is already in use.
MSI (s) (A0:54) [16:36:00:810]: Specified instance {863E9807-97F0-417A-9957-DE4372A13404} via transform :InstID02.mst;:InstName02.mst is already installed. MSINEWINSTANCE requires a new instance that is not installed.

It looks like the previous failed installation was not uninstalled cleanly. One of the StackExchange links mentioned: “This means in the windows installer registry there are remnants of the previous installation which has left orphaned entries there. You have to manually remove it or use some tool.”

THE SOLUTION

Now, we knew that we needed to clean up the SQL installation, but the problem was that none of the uninstall UI options were showing us the failed instance. I found a tool to look into the registry of the MSI database.

I also found an interesting WMIC command which helped me.

WMIC PRODUCT Where "Caption like '%SQL%'" GET Caption, IdentifyingNumber

Here is the output of the command from my lab server.

SQL SERVER - Installation Failure - Specified Instance Via Transform is Already Installed. MSINEWINSTANCE Requires a New Instance that is not Installed msinew-err-01
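On newer systems where WMIC is deprecated, an equivalent PowerShell query (a sketch; note that querying Win32_Product can be slow, since Windows Installer validates every installed package while enumerating) is:

```powershell
# List installed products with 'SQL' in the caption, along with their GUIDs
Get-CimInstance -ClassName Win32_Product -Filter "Caption LIKE '%SQL%'" |
    Select-Object Caption, IdentifyingNumber
```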

As per Microsoft’s blog, we did the cleanup by taking the list of all the GUIDs and running the uninstaller for each of them from the command prompt.

Msiexec /X {GUID}

Almost all components got removed, and a few failed. There were some stale entries under the below keys.

HKEY_CLASSES_ROOT\Installer\Products\
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products

We need to search for {GUID} and remove the hive which holds it.
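One caveat: the key names under Installer\Products are not the braced GUID itself but a “packed” form of it (the first three fields reversed as strings, the remaining eight bytes with their hex digits swapped pairwise), so a literal registry search for the braced GUID will not match the key names. Here is a small helper (my own sketch, not part of the original post) to compute the packed form:

```python
def pack_guid(guid: str) -> str:
    """Convert a Windows Installer product GUID to the 'packed' form used
    as a key name under Installer\\Products in the registry."""
    g = guid.strip("{}").replace("-", "")
    # First three fields (8 + 4 + 4 hex digits) are reversed as whole strings
    parts = [g[:8][::-1], g[8:12][::-1], g[12:16][::-1]]
    # Remaining 16 hex digits are swapped pairwise (i.e. per byte)
    rest = g[16:]
    parts += [rest[i + 1] + rest[i] for i in range(0, len(rest), 2)]
    return "".join(parts).upper()

# The GUID from the setup log above
print(pack_guid("{863E9807-97F0-417A-9957-DE4372A13404}"))
# → 7089E3680F79A7149975ED34271A4340
```

Search the registry for the packed value to find the product hive to remove.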

Always take a backup of the registry before making any changes, and don’t tell anyone how you solved the issue. Please comment and let me know if you found some other tricks.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Installation Failure – Specified Instance Via Transform is Already Installed. MSINEWINSTANCE Requires a New Instance that is not Installed

SQL SERVER – Which Settings Change in sp_configure Needs Restart to Take Effect?


Learning one new thing every day keeps me passionate about my job. Recently, I had a very interesting experience with one of my customers while working with them on Comprehensive Database Performance Health Check and found some non-default values in sp_configure. They asked me – which settings changed via sp_configure need a restart to take effect? Do we need to remember the values?

THE QUESTION

What is an easy way to find out which settings in sp_configure can be changed without recycling the SQL Server service?

THE ANSWER

Starting with SQL Server 2008, Microsoft introduced a catalog view which can be used to see the various server-wide configuration option values in the system.

sys.configurations (Transact-SQL)

It has a little more detailed output as compared to sp_configure. Here are the interesting columns:

  1. Is_dynamic: This column is used to know if the option is dynamic or not. If the value is 1 (one) then the parameter change takes effect when the RECONFIGURE statement is executed. If the value is 0 (zero) the value takes effect when the SQL Server service is restarted.
  2. Is_advanced: This column is used to know if the option is an advanced option or not. If the value for a parameter is 1, then it is an advanced option and is displayed or can be changed only when “show advanced options” is set to 1 through sp_configure.

Below is the query which gives the answer to our question!

-- configuration values which need a restart to take effect
SELECT name ,description
FROM sys.configurations
WHERE is_dynamic = 0
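For example (a sketch with illustrative values, not from the original post), a dynamic option such as “max server memory (MB)” takes effect as soon as RECONFIGURE runs, while a static option such as “fill factor (%)” keeps its old value_in_use until the service is restarted:

```sql
-- Dynamic option (is_dynamic = 1): value_in_use changes right after RECONFIGURE
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096; -- illustrative value
RECONFIGURE;

-- Static option (is_dynamic = 0): RECONFIGURE succeeds, but value_in_use
-- stays unchanged until the SQL Server service is restarted
EXEC sp_configure 'fill factor (%)', 90; -- illustrative value
RECONFIGURE;

-- Compare value (configured) with value_in_use (currently running)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('max server memory (MB)', 'fill factor (%)');
```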

SQL SERVER - Which Settings Change in sp_configure Needs Restart to Take Effect? sp-configue-restart-01

Here is the list as of today in SQL Server 2017 (build 14.0.3045)

  • user connections
  • locks
  • open objects
  • fill factor (%)
  • remote access
  • c2 audit mode
  • priority boost
  • set working set size
  • lightweight pooling
  • scan for startup procs
  • affinity I/O mask
  • affinity64 I/O mask
  • common criteria compliance enabled
  • automatic soft-NUMA disabled
  • external scripts enabled
  • hadoop connectivity
  • polybase network encryption

Please let me know if you know some other tricks. I would be more than happy to publish on my blog with due credit.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Which Settings Change in sp_configure Needs Restart to Take Effect?

SQL SERVER – FIX: Msg 41105: Failed to Create the Windows Server Failover Clustering (WSFC) Resource With Name and Type ‘SQL Server Availability Group’


Sometimes there are errors which give us the solution to the problem, and I love to discover other ways to fix them. In this blog, we would learn how to fix “Failed to create the Windows Server Failover Clustering (WSFC) resource with name and type ‘SQL Server Availability Group’”.

Here is the exact error which I received while creating an availability group:

Msg 41105, Level 16, State 0, Line 3
Failed to create the Windows Server Failover Clustering (WSFC) resource with name ‘SQLAUTHORITY_AG’ and type ‘SQL Server Availability Group’. The resource type is not registered in the WSFC cluster. The WSFC cluster may have been destroyed and created again. To register the resource type in the WSFC cluster, disable and then enable Always On in the SQL Server Configuration Manager.
Msg 41152, Level 16, State 2, Line 3
Failed to create availability group ‘SQLAUTHORITY_AG’. The operation encountered SQL Server error 41105 and has been rolled back. Check the SQL Server error log for more details. When the cause of the error has been resolved, retry CREATE AVAILABILITY GROUP command.

The T-SQL which I used was as follows.

USE [master]
GO
CREATE AVAILABILITY GROUP [SQLAUTHORITY_AG]
WITH (AUTOMATED_BACKUP_PREFERENCE = SECONDARY,
DB_FAILOVER = OFF,
DTC_SUPPORT = NONE,
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 0)
FOR DATABASE [SQLAUTHORITY_DB]
REPLICA ON N'NODE1' WITH (ENDPOINT_URL = N'TCP://NODE1.SQLAUTHORITY.COM:5022', FAILOVER_MODE = AUTOMATIC, AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, SESSION_TIMEOUT = 10, BACKUP_PRIORITY = 50, SEEDING_MODE = MANUAL, PRIMARY_ROLE(ALLOW_CONNECTIONS = ALL), SECONDARY_ROLE(ALLOW_CONNECTIONS = NO));
GO

Here is the screenshot of the error message.

SQL SERVER - FIX: Msg 41105: Failed to Create the Windows Server Failover Clustering (WSFC) Resource With Name and Type 'SQL Server Availability Group' ao-type-err-01

I checked the ERRORLOG and there were no messages. As the error message says, “The resource type is not registered in the WSFC cluster”, so I used PowerShell to list the resource types and found the below.

Get-ClusterResourceType | where name -like "SQL Server Availability Group"

The output showed no result, which means the error message is correct.

SQL SERVER - FIX: Msg 41105: Failed to Create the Windows Server Failover Clustering (WSFC) Resource With Name and Type 'SQL Server Availability Group' ao-type-err-02


WORKAROUND/SOLUTION

I was able to find two ways to fix the issue:

  1. Register the resource type manually using below PowerShell.
Add-ClusterResourceType -Name "SQL Server Availability Group" -DisplayName "SQL Server Availability Group" -Dll "C:\Windows\System32\hadrres.dll"

SQL SERVER - FIX: Msg 41105: Failed to Create the Windows Server Failover Clustering (WSFC) Resource With Name and Type 'SQL Server Availability Group' ao-type-err-03

  2. The better way: disable and then enable the Always On feature using SQL Server Configuration Manager, which is what the error message suggests as well.

Now, if we run the same command as earlier, we should see the output

SQL SERVER - FIX: Msg 41105: Failed to Create the Windows Server Failover Clustering (WSFC) Resource With Name and Type 'SQL Server Availability Group' ao-type-err-04

Have you encountered the same error? What was the cause of it?

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Msg 41105: Failed to Create the Windows Server Failover Clustering (WSFC) Resource With Name and Type ‘SQL Server Availability Group’

SQL SERVER – Unable to Set Cloud Witness. Error: The Client and Server Cannot Communicate, Because They do Not Possess a Common Algorithm


Recently, I was trying to simulate a client’s environment, for which I needed a Windows cluster with a cloud witness. It didn’t go well, and I encountered an error message. In this blog, I will share the solution to the error – “The client and server cannot communicate, because they do not possess a common algorithm” – which I received while adding the cloud witness. I was following the below documentation: Deploy a Cloud Witness for a Failover Cluster

Here is the screenshot of the error message.

SQL SERVER - Unable to Set Cloud Witness. Error: The Client and Server Cannot Communicate, Because They do Not Possess a Common Algorithm cloudwitness-tls-01

The text of the message is as follows.

An error was encountered while modifying the quorum settings.
Your cluster quorum settings have not been changed.
The client and server cannot communicate, because they do not possess a common algorithm.

Based on my earlier experience, this error can be seen when the client and server do not talk using the same version of the TLS protocol. Based on my search, I found that the communication between the storage account and the cluster nodes defaults to TLS 1.0. The error appeared because TLS 1.0 was disabled on the server. To overcome this, we can use the below PowerShell commands to force TLS 1.2 for the cloud quorum setup. (Note that the [Net.ServicePointManager]::SecurityProtocol assignment applies only to the current PowerShell session, so run both commands in the same session.)

SQL SERVER - Unable to Set Cloud Witness. Error: The Client and Server Cannot Communicate, Because They do Not Possess a Common Algorithm cloudwitness-tls-02

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Set-ClusterQuorum -Cluster ClusterName -CloudWitness -AccountName "NameOfStorageAccount" -AccessKey "AccessKeyForStorageAccount"

As you can see, the command ran successfully and the witness was set up correctly.

Have you encountered some other error while deploying cloud witness?

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Unable to Set Cloud Witness. Error: The Client and Server Cannot Communicate, Because They do Not Possess a Common Algorithm


SQL SERVER – FIX: Cannot Connect to SQL in Azure Virtual Machine From Laptop


To keep up with the market trend, I have already done hands-on work with SQL Server in an Azure Virtual Machine. In this blog, I would share my learning about a connectivity issue which I faced recently.

Here were the steps I followed to get my server up and running:

  1. Took an image from Azure gallery for Windows Server 2016 with no SQL installed on it.
  2. Downloaded SQL Server 2019 (CTP as of now) and installed it.
  3. Made sure I am able to connect to SQL Server locally and it was working great.

Now, I opened SSMS on my laptop and provided the public IP address of this Azure Virtual Machine (taken from the portal).

SQL SERVER - FIX: Cannot Connect to SQL in Azure Virtual Machine From Laptop azure-vm-conn-01

As soon as I hit connect, I was not able to connect and got the below error.

Cannot connect to XXX.YYY.219.81.
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)
The network path was not found

This error is explained in one of my most visited blogs:

SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: )

The only thing missing in that blog is the Azure part, because that is the only difference between SQL Server on-premises and SQL Server in an Azure VM.

When I checked some blogs, they referred to the “SQL Server Configuration” blade, shown below, which can be used to allow remote connections.

SQL SERVER - FIX: Cannot Connect to SQL in Azure Virtual Machine From Laptop azure-vm-conn-02

In my case, this was a Windows VM and SQL Server was installed manually, so I didn’t have that option.

WORKAROUND/SOLUTION

Based on my troubleshooting, I knew that port 2433 (I generally change the SQL Server port from the default 1433 to something else) was getting blocked somewhere.

Later, I learned that there is something called a “Network Security Group” (NSG) in Azure, which acts like a firewall to allow/deny connectivity to ports. Securing Azure Virtual Machines using Network Security Groups (NSGs)
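If you prefer scripting over the portal, the equivalent inbound NSG rule can be created with the Azure CLI. This is a sketch only: the resource group, NSG name, rule name and priority below are placeholders, while the port matches the non-default 2433 used here.

```shell
# Allow inbound TCP 2433 to the VM's NSG (placeholder names, pick a free priority)
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyVmNsg \
  --name allow-sql-2433 \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 2433
```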

Here is where I have added 2433 inbound.

SQL SERVER - FIX: Cannot Connect to SQL in Azure Virtual Machine From Laptop azure-vm-conn-03

.. and I was able to connect to SQL Server from my laptop now.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Cannot Connect to SQL in Azure Virtual Machine From Laptop

SQL SERVER – FIX: Msg 9514 – XML Data Type is Not Supported in Distributed Queries


Sometimes, I get quick questions via my blog comments, and I do spend time researching them to reply or write a blog. In this blog, we would learn how to fix: Msg 9514 – XML data type is not supported in distributed queries.

Here is the complete error message while running a query via a linked server.

Msg 9514, Level 16, State 1, Line 4
Xml data type is not supported in distributed queries. Remote object ‘LinkedServer.DB.dbo.Table’ has xml column(s).

The error message is self-explanatory. Here is the way to reproduce the error.

  1. Create a linked server (SQL2019 in my demo)
  2. Create a table in the database (XMLDB in my demo) using below script.
    CREATE TABLE [dbo].[EmpInfo](
    	[Id] [int] NULL,
    	[FName] [nchar](10) NULL,
    	[MoreInfo] [xml] NULL
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
  3. Insert some dummy data (not a mandatory step)
    INSERT INTO [dbo].[EmpInfo] ([Id] ,[FName] ,[MoreInfo])
         VALUES (1,'SQL','<LName>Server</LName>')
    GO
    
  4. Now, run a select statement from the source server to reproduce the error.
    SELECT [Id]
    ,[FName]
    ,[MoreInfo]
    FROM [SQL2019].[XMLDB].[dbo].[EmpInfo]
    GO
    

SQL SERVER - FIX: Msg 9514 - XML Data Type is Not Supported in Distributed Queries xml-err-01

SOLUTION/WORKAROUND

The error message makes it clear that we are selecting an XML column, and that is not allowed in a distributed query. So, we need to fool SQL Server into thinking that this is not an XML column.

Here is what worked for me:

SELECT op.[Id]
	,op.[FName]
	,CAST(op.[MoreInfo] AS XML) AS MoreInfo
FROM (
	SELECT *
	FROM OPENQUERY([SQL2019], 'SELECT [Id] ,[FName]
	,cast([MoreInfo] as varchar(max)) AS [MoreInfo] 
	FROM [XMLDB].[dbo].[EmpInfo]')
	) AS op

SQL SERVER - FIX: Msg 9514 - XML Data Type is Not Supported in Distributed Queries xml-err-02

You can simplify the query further, but it gives you an idea of the solution. As you can see above, we get back the data which we inserted.

Is there any other solution which you can think of?

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Msg 9514 – XML Data Type is Not Supported in Distributed Queries

SQL SERVER – FIX: Msg 8180 – Statement(s) Could not be Prepared. Deferred Prepare Could not be Completed


While running a linked server query, I encountered an error and learned something new. In this blog, we would learn how to fix the error – Msg 8180 – Statement(s) could not be prepared.

Here is the complete error message which I received.

OLE DB provider “SQLNCLI11” for linked server “SQL2019” returned message “Deferred prepare could not be completed.”.
Msg 8180, Level 16, State 1, Line 13
Statement(s) could not be prepared.
Msg 4104, Level 16, State 1, Line 13
The multi-part identifier “NAME.ID” could not be bound.

Reproducing the error is very easy. In my case, I created a linked server (called SQL2019) and ran the below query.

SELECT *
FROM OPENQUERY([SQL2019],'SELECT NAME . ID FROM SYS.DATABASES')

SQL SERVER - FIX: Msg 8180 - Statement(s) Could not be Prepared. Deferred Prepare Could not be Completed linked-stmt-prep-01

WORKAROUND/SOLUTION

When I captured a Profiler trace, I was able to understand its meaning. We can see the below in Profiler.

SQL SERVER - FIX: Msg 8180 - Statement(s) Could not be Prepared. Deferred Prepare Could not be Completed linked-stmt-prep-02

The statement which came to the linked server was

declare @p1 int
set @p1=0
exec sp_prepare @p1 output,NULL,N'SELECT NAME . ID FROM SYS.DATABASES',1
select @p1

and that failed as seen in profiler.

The message essentially means that the statement could not be compiled on the destination server. Based on my search on the internet, we should look for the real error at the end of the list of messages. In this situation, the error is:

Msg 4104, Level 16, State 1, Line 13
The multi-part identifier “NAME.ID” could not be bound.

Of course, I know the cause of the error. I put a dot instead of a comma to generate the error.
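For completeness, here is a corrected version (a sketch: the dot is replaced by a comma, and database_id is used since sys.databases has no column named ID — that name was only there to trigger the failure):

```sql
SELECT *
FROM OPENQUERY([SQL2019], 'SELECT name, database_id FROM sys.databases');
```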

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX: Msg 8180 – Statement(s) Could not be Prepared. Deferred Prepare Could not be Completed

Oracle to SQL Server Data Migration and Replication – Budget and ROI


My primary job is to help various organizations design scalable architecture which can provide robust performance all the time. I love my job very much when I have complete freedom to make the decisions to build elaborate infrastructure. However, I am not always allowed to do what I want to do, because every CEO and CTO is concerned about one thing – “Budget and ROI (Return on Investment).”

Oracle to SQL Server Data Migration and Replication - Budget and ROI oracletoss

Popular Belief for Oracle EE

Oracle Enterprise Edition (EE) can be extremely expensive and always takes a huge share of the IT budget of any organization. Additionally, many quite frequently believe that once you implement Oracle Enterprise Edition (EE), it is not possible to break free of the product. Often the feeling within the organization is that they must keep using the same product because they are locked in. This is really very advantageous to Oracle, as they would love their product to be used as much as possible. If users stay with their product for the longest time, they make more money in licenses and subscriptions.

ScaleOut from Oracle EE

If you read the heading, you will notice that instead of the word migration, I have used the word ScaleOut. The reason is that I am not suggesting an outright migration, as it may involve a lot of planning. However, sometimes it totally makes sense to offload the non-essential, as well as some of the essential, workloads to a totally different server.

There are always solutions where you can replicate or migrate your data to a more cost-effective platform like Oracle Standard Edition (SE), or even to SQL Server (which is a competing product of Oracle). However, migrating or replicating data to scale out is not easy, as it requires lots of planning and programming.

SharePlex to Replicate and Integrate Data

This is where SharePlex comes to the rescue: it can help us replicate and integrate data from one platform to another very easily. With SharePlex data replication, one can safely and affordably move data to another platform with zero downtime and zero data loss.

SharePlex helps to accomplish database operational goals without affecting the business, using a real-time copy of your production data. It also helps to break free from expensive databases which hold your data to ransom and make you pay for expensive licenses. Additionally, SharePlex helps to offload reporting to another server and implement load balancing to improve database performance, optimize your reporting environment and improve OLTP performance.

Call for Action

I believe that, if you use SharePlex, you can achieve the five 9’s of availability during database disaster recovery, migrations, patches and upgrades, eliminating stress and risk while gaining more time to focus on other priorities.

Download and try out SharePlex now.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on Oracle to SQL Server Data Migration and Replication – Budget and ROI

How to Prevent Common SQL Server Performance Problems Efficiently With Smart Capacity Planning


Have you ever imagined how often SQL Server performance issues are actually caused by poor capacity planning? Sometimes a SQL Server database has insufficient storage performance, an instance is running out of memory, or the server-level CPU is skyrocketing to an unhealthy level.

All these things are very common in environments where there has not been a proper method and software for planning and optimizing the SQL Server capacity.

Common SQL Server Performance Problems

How to Prevent Common SQL Server Performance Problems Efficiently With Smart Capacity Planning sql-governor-server-cpu-heatmaps-capacity

A very typical scenario is a situation where the storage device latency or throughput is too high compared to the volume of data that needs to be processed in the database. It’s very tricky to try to optimize such an environment, especially when the size of the database is large and there are many arbitrary TSQL queries ongoing, returning large datasets. All of these lead to heavy RAM usage and exhausted disk lookup and scan operations.

Another common scenario is a situation where the SQL Server instance page life expectancy and buffer cache hit ratio start to fall, causing significant query performance issues. There is just not enough RAM to handle all the generated workload.
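Both counters mentioned above can be watched from SQL Server itself; for example (a generic sketch, not tied to any particular capacity planning product):

```sql
-- Current page life expectancy and buffer cache hit ratio counters
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy',
                       'Buffer cache hit ratio',
                       'Buffer cache hit ratio base');
```

Note that “Buffer cache hit ratio” must be divided by its “base” counter to get a percentage.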

One of the worst scenarios is when the CPU usage hits the server limits and there is no way to scale up the existing server. Adding an extra socket to the physical hardware, or some vCPUs to the virtual machine, may be the only reasonable option when TSQL queries and other SQL Server services are already running in the most optimal way.

How Does Capacity Planning Come into Play?

How to prevent all this mess before it’s too late? With a proper capacity planning method and software, you can decide how well your server hosts, virtual machines, SQL Server instances, and databases will run.

The most important point of capacity planning is to secure the desired service levels without sacrificing the performance or availability of the SQL Server platform during its planned lifecycle. Another great advantage of well-executed capacity planning is the savings generated compared to guesswork.

Proper capacity planning software offers you the possibility to set up different baseline and service level criteria for each environment and individual server. You must be able to define which servers are for development, testing, acceptance, and production use; if you can’t, you will run into serious problems, because you cannot set different capacity needs for different kinds of server workloads. In particular:

  • You need to set the proper threshold levels for each server in terms of CPU and RAM usage, and you have to be able to understand the storage needs of new setups.
  • The software should consider individual trends for each performance counter, such as average and peak CPU, RAM usage, storage latencies, IOPS, MB/s and more.
  • You must be able to automate the process of setting up physical servers, virtual machines, Azure VMs and such for the source servers and instances, and if you want to consolidate your servers, you need intelligent automation for calculating the optimal arrangement of source instances into target servers.
  • You should also understand the performance differences between different hardware, VM and Azure setups.

How to Prevent Common SQL Server Performance Problems Efficiently With Smart Capacity Planning sql-governor-server-cpu-heatmaps

Call for Action

Did you know that there is such software available on the market? It’s called SQL Governor. With this software, you can automate tedious, cumbersome capacity planning work in the most optimal way. SQL Governor helps you to tackle all the various aspects listed above. In addition, it benefits your whole organization, as with SQL Governor you can end up saving up to 50% on the licensing, hardware and service costs of your new SQL Server platform. And what is even more important, you won’t end up in an out-of-capacity situation! You can read more about the software in one of my previous articles.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on How to Prevent Common SQL Server Performance Problems Efficiently With Smart Capacity Planning
