Channel: SQL Archives - SQL Authority with Pinal Dave

SQL SERVER – How to Downgrade SQL Server Edition?


One of the unique advantages of freelancing is learning more and more. I get a chance to hear about real-world problems, and it gives me immense pleasure to provide a feasible solution to them. In this blog post we will discuss how to downgrade a SQL Server edition.

SQL SERVER - How to Downgrade SQL Server Edition? downgradess-800x182

As the subject says, I was asked by a client to provide steps to change the edition of SQL Server. By mistake, their vendor had installed SQL Server Enterprise Edition using his own copy. During an internal audit, it was identified that they did not have an Enterprise license, so they had to change the edition to Standard.

Keep in mind, we are not talking about a version change (e.g. SQL Server 2016 to SQL Server 2014), we are talking about the edition (Standard, Developer, Enterprise). Changing the edition of SQL Server is possible, but the steps depend on the source and destination editions.

  1. Upgrade path supported by SQL Server: (Supported Version and Edition Upgrades)
  2. Upgrade path not supported by SQL Server (downgrade): whatever doesn’t fall in the upgrade matrix at the above link comes here.

 Here are the simple steps which I suggested to my client. (Enterprise to Standard)

  • Make sure you are not using any feature which is not available in the destination edition. This can be done by querying the dynamic management view sys.dm_db_persisted_sku_features in each database (see the sketch after this list).
  • Back up ALL the databases before the uninstall. Stop the SQL services, back up your database files, and store them in a safe place.
  • Note down the current configuration of the database server. Keep track of the installation folder and note down the exact install location. Also make a note of the output of SELECT @@VERSION and sp_configure.
  • Uninstall the SQL Server Enterprise Edition and install the Standard Edition.
  • Apply patches, if needed, to bring the instance back to the same @@VERSION as earlier.
  • Backup the databases of new instance. Stop the SQL service of the new instance and copy the database files for safety.
  • Restore the database backups or stop the SQL server and replace the files from the old instance to the new instance.
  • Start the SQL services to check if everything is fine.
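
Here is a minimal sketch of that feature check, run against every database with the undocumented (but widely used) sp_MSforeachdb helper; adjust it to your own looping mechanism if you prefer:

-- List Enterprise-only features in each database that would block a downgrade to Standard
EXEC sp_MSforeachdb N'
USE [?];
SELECT DB_NAME() AS database_name, feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;';

If this returns no rows for any database, no edition-specific feature is in use and the downgrade can proceed.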

Have you ever done a similar downgrade? Did you face any issue when downgrading SQL Server?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – How to Downgrade SQL Server Edition?


SQL SERVER – Case of Different Default Collation on Two Servers


Along with long-term performance tuning engagements, I also provide quick consults (On Demand) to assist with any short-term issue which might get fixed by talking to an expert. Some of the questions are very good and I get to learn something. This blog is the result of one such conversation, about the case of different default collations on two servers.

My client told me that the default collation chosen by the SQL Server 2012 installation was different on two different servers. I had always thought that the default would be the same for every setup, but I was not correct.

SQL SERVER – Default Collation of SQL Server 2008

I learned that during SQL Server installation, the default collation is determined by the Windows system locale. This means that if we install SQL Server on one server which has an English locale, and on another server which has a French locale, the SQL Server installation will choose two different default collations.

SQL SERVER - Case of Different Default Collation on Two Servers coll-01-800x376

On a UK machine, it would be Latin1_General_CI_AS

I asked them to check the Windows system locale on those two servers. They reported back that one is US English, and another is Singapore English. I asked them to change US English to Singapore English and restart the server. After that, the SQL installation selected the same collation on the 2 servers.

SOLUTION/CONCLUSION

There is no universal default collation. It would be dependent on the locale setting of the operating system.  Here is the documentation:

How to: Change Operating System Settings to Support Localized Versions

In short, if you want the same collation to be offered by SQL Server setup, make sure the locale is the same. We can always override it by choosing the value we want.
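
If you want to confirm what a given instance ended up with, a quick check using the documented SERVERPROPERTY function is:

-- Check the server-level default collation of the current instance
SELECT SERVERPROPERTY('Collation') AS ServerCollation;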

Have you ever faced such an issue?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Case of Different Default Collation on Two Servers

SQL SERVER – How to Join a Table Valued Function with a Database Table


Just the other day I received the following question and I found it very interesting. I decided to try it out on SQL Server 2016’s new WideWorldImporters database. The question was – “How to Join a Table Valued Function with a Database Table?” This is indeed very interesting, as this particular feature was introduced in SQL Server 2008, so what you will see in this blog post applies to every version of SQL Server since SQL Server 2008.

SQL SERVER - How to Join a Table Valued Function with a Database Table tablevaluedfunctions-800x409

In the database WideWorldImporters we have a table valued function – [Application].[DetermineCustomerAccess] – which accepts a city id as input and returns whether that particular city has access enabled or not. We will join this to the Sales.Customers table, which also has a city column. Now we want to know if the customer city has access enabled or not. To find out we have to join the table valued function with our customer table. Here is the syntax to join a table valued function to a table; we will use CROSS APPLY to connect the function and the table.

USE WideWorldImporters
GO
SELECT c.CustomerName, a.AccessResult
FROM Sales.Customers c
CROSS APPLY [Application].[DetermineCustomerAccess] (DeliveryCityID) a
GO
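
If you also want to keep customers for which the function returns no rows, OUTER APPLY does that; this is a small variation of the query above, under the same assumptions about the WideWorldImporters objects:

SELECT c.CustomerName, a.AccessResult
FROM Sales.Customers c
OUTER APPLY [Application].[DetermineCustomerAccess] (DeliveryCityID) a
GO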

Let me know if you have any question in the comment area.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – How to Join a Table Valued Function with a Database Table

PowerShell – Reading Tables Data Using Script


Earlier in this blog, I have written a number of posts using PowerShell. The more you play around with this scripting language, the more hidden gems come out. Personally, every day brings new learning while working with SQL Server and PowerShell. One of the posts that I would like to recollect is: PowerShell – Querying SQL Server From Command Line

In the post mentioned above, I showed one of the ways to query SQL Server from the command line. This blog post is along similar lines, but here I want to show a PowerShell cmdlet for reading table data from SQL Server.

Let me introduce you to Read-SqlTableData. When I saw this command, I went to my SQL Server box to check how it works. It was amazing to see the output in the PowerShell window. From SQL Server Management Studio, I went to a database, expanded Tables, right-clicked a table and selected “Start PowerShell”. In this example, I have taken the AdventureWorks dbo.DatabaseLog table. Once at the prompt, run the PowerShell cmdlet as shown below:

PowerShell - Reading Tables Data Using Script Read-SqlTableData-01

It is this simple from a query point of view: it displays the complete table data on the console. Since this is a console, we can always redirect the output to a .txt file if required.

One addendum to this is the ability to add a few interesting parameters to the same query.

Read-SQLTableData -top 2 -OutputAs DataTable -ColumnName Event,Schema,Object,PostTime

Here in the above query, I have gone ahead to take the top 2 rows and have explicitly called out the column names of interest. The typical output of this would look like:

PowerShell - Reading Tables Data Using Script Read-SqlTableData-02
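
For comparison, the second example is roughly equivalent to the following T-SQL (assuming the AdventureWorks dbo.DatabaseLog table used above):

-- Roughly equivalent T-SQL for the Read-SqlTableData example
SELECT TOP (2) [Event], [Schema], [Object], PostTime
FROM dbo.DatabaseLog;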

Interesting, isn’t it? I am sure once you start to play with PowerShell and the SQL commandlets, you will find interesting use cases of using the same. Feel free to let me know how you used this in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on PowerShell – Reading Tables Data Using Script

SQL SERVER – Patch Installation Error: The version of SQL Server instance does not match the version


Applying a patch is something which is part of everyone’s life in the software industry. One of my clients reported the below patch installation error.

The version of SQL Server instance <InstanceName> does not match the version expected by the SQL Server update. The installed SQL Server product version is 11.1.3000.0, and the expected SQL Server version is 11.3.6020.0″

They said that they were not able to move past the below screen.

SQL SERVER - Patch Installation Error: The version of SQL Server instance does not match the version version-error-01

Here is the text of the message which is shown at the bottom of the screen: There are no SQL Server instances or shared features that can be updated on this computer.

I consulted support articles and found that 11.1.3000 is Service Pack 1 and 11.3.6020 is Service Pack 3. When I asked about the patch, they told me it was CU2 for SP3.

I told them that to apply any CU of a service pack, we first need to apply the base service pack. Once they installed SP3, they could then install the CU.
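
A quick way to check which build and service-pack level an instance is currently running, before and after patching, is the documented SERVERPROPERTY function:

-- Current build, service-pack level and edition of the instance
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel, -- e.g. SP1, SP3
       SERVERPROPERTY('Edition') AS Edition;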

SOLUTION/WORKAROUND

There could be two reasons for such an error:

  1. The version of the CU does not match the installed base service pack, like the case above.
  2. The same error can also appear if the last patch was installed incorrectly. If that is the case, you can try to repair the instance and then retry the patch.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Patch Installation Error: The version of SQL Server instance does not match the version

SQL SERVER – Rule Windows Server 2003 FILESTREAM Hotfix Check failed on Windows 2012 R2 Cluster


For my upcoming training, I was trying to deploy SQL Server 2008 R2 Cluster in my client’s lab machines. I encountered a strange error and I was clueless. Let us learn about Rule Windows Server 2003 FILESTREAM Hotfix Check failed on Windows 2012 R2 Cluster.

SQL SERVER - Rule Windows Server 2003 FILESTREAM Hotfix Check failed on Windows 2012 R2 Cluster setup-wrong-error-800x595

I tried below:

  1. Ran cluster validation- it was all green
  2. I even downloaded the patch mentioned, but as expected, it was only for the 2003 OS.

I looked into the setup logs and found the below:

2013-02-04 10:51:05 Slp: Initializing rule : Windows Server 2003 FILESTREAM Hotfix Check
2016-09-06 10:51:05 SQLEngine: –FilestreamRequiredClusterPatchFacet: Engine_FilestreamRequiredHotfixesCheck: Version: 5.2.3790.4083
2016-09-06 10:51:05 SQLEngine: –FilestreamRequiredClusterPatchFacet: Engine_FilestreamRequiredHotfixesCheck: C:\Windows\system32\Drivers\Clusdisk.sys version : 6.2.9200.16384
2016-09-06 10:51:05 SQLEngine: –FilestreamRequiredClusterPatchFacet: Engine_FilestreamRequiredHotfixesCheck: C:\Windows\Cluster\Clusres.dll version : 6.2.9200.16384
2016-09-06 10:51:05 Slp: C:\Windows\system32\W03a2409.dll
2016-09-06 10:51:05 Slp: Rule initialization failed – hence the rule result is assigned as Failed
2016-09-06 10:51:05 Slp: Send result to channel : RulesEngineNotificationChannel

Later I came across Using SQL Server in Windows 8 and later versions of Windows operating system. It pointed me in the right direction: it mentioned that we should slipstream the media.

SOLUTION/WORKAROUND

I slipstreamed the SQL Server 2008 R2 media using the KB article How to update or slipstream an installation of SQL Server 2008. Once I had slipstreamed the media, I was able to install a two-node SQL Server failover cluster.

Have you seen any such incorrect messages with SQL Server?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Rule Windows Server 2003 FILESTREAM Hotfix Check failed on Windows 2012 R2 Cluster

SQL SERVER – Fix Error Msg 35336 Working with ColumnStore Indexes


Since my consulting started revolving around performance tuning, I have been able to see a number of customers who want to start using the new capabilities of SQL Server for their existing applications as they plan to upgrade their infrastructure. In one engagement with a bank that was upgrading from SQL Server 2012 to SQL Server 2016, they were interested in knowing how some of the new In-Memory capabilities, or even ColumnStore indexes, could be used. An interesting conversation started, which I am using as a blog here.

I asked them why they had not used a nonclustered ColumnStore index even in SQL Server 2012. They said they had evaluated it and saw that the table was rendered read-only once the index was created. That made complete sense, and as I planned their session on a SQL Server 2016 Developer Edition, I pointed out that nonclustered columnstore indexes are now updateable.

To start playing with the demo, I just went about creating a table and then a nonclustered ColumnStore index based on my memory of the syntax. It is customary for me to write code from memory in front of the customer. To my surprise, I got the following error:

Msg 35336, Level 15, State 1, Line 1
The statement failed because specifying a key list is missing when creating an index.
Create the index with specifying key list.

SQL SERVER - Fix Error Msg 35336 Working with ColumnStore Indexes NCCI-Error-35336-01-800x321

As you can see, my first reaction on seeing the “Red” error message was that I was doing something terribly wrong. It was a face-palm moment for me. But I took a deep breath and saw that the error message told me what I was missing. I changed the code to include the column list as shown below, and it just worked fine:

CREATE NONCLUSTERED COLUMNSTORE INDEX t_non_clust_colstor_cci
ON t_non_clust_colstor (acc_description, acc_type)
WITH (DATA_COMPRESSION = COLUMNSTORE);
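
For completeness, the statement that triggered the error looked roughly like this; the table definition is just an illustrative assumption, and the point is the missing column list:

-- Illustrative table; the column list is omitted on purpose, which raises Msg 35336
CREATE TABLE t_non_clust_colstor (acc_id INT, acc_description VARCHAR(100), acc_type VARCHAR(20));
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX t_non_clust_colstor_cci
ON t_non_clust_colstor;
GO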

I couldn’t get over why I had written the code that way, and on my way to the airport I kept trying to figure out the reason. Then it suddenly struck me: a key column list is not applicable to clustered ColumnStore indexes, and that was what I had in my mind. Now I felt relaxed as I boarded the flight.

Thanks to SQL Server for guiding me with a self-explanatory error message. I am sure you have seen a number of error messages in the past that made debugging easier. Do let me know if you ever had such moments and found yourself in a soup. Do share via comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Fix Error Msg 35336 Working with ColumnStore Indexes

SQL SERVER – Fix: The Cluster Resource Could not be Deleted Since it is a Core Resource


Working with SQL Server is fun. Since SQL Server clustering and AlwaysOn availability groups need Windows clustering, there are sometimes cluster issues which I have to deal with and fix. Let us learn about an error related to a core resource.

Recently, one of my clients made some changes to the cluster and wanted to move the file share witness to a new share. So they modified the file share witness, and after that they started seeing two file share resources among the cluster core resources. When they tried deleting the unused one, we got the below error.

If we press Ctrl+C on the message dialog, we can paste its text, which reads as below.

SQL SERVER - Fix: The Cluster Resource Could not be Deleted Since it is a Core Resource clus-core-01

[Window Title] Error
[Main Instruction] The operation has failed
[Content] The cluster resource could not be deleted since it is a core resource.
[^] Hide Details [OK] [Expanded Information] Error Code: 0x800713a2
The cluster resource could not be deleted since it is a core resource.

I was not able to reproduce the error in my lab. When I provided a new share name, the old one was removed automatically, but that was not the case when my client faced it.

SOLUTION

In this case the solution was not hard. We ran the “Quorum Configuration Wizard” and changed the quorum model from “Node and File Share Majority” to “Node Majority”. As soon as we did that, both file shares disappeared from Failover Cluster Manager. Later we changed the quorum model back to “Node and File Share Majority” and selected the share we needed for the witness.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Fix: The Cluster Resource Could not be Deleted Since it is a Core Resource


SQL SERVER – Unable to Start Services After Patching (sqlagent_msdb_upgrade.sql)


In the recent past, I have heard of this issue at least once or twice every month. Whenever I try to help such a client and think I know the issue, I learn something new. They said that they had applied a patch, and after that they were not able to access the SQL Server; it kept going offline. Here is a blog post where I discuss being unable to start services after patching (sqlagent_msdb_upgrade.sql).

I immediately told them that this would be an issue with upgrade script failure.

SQL SERVER - Unable to Start Services After Patching (sqlagent_msdb_upgrade.sql) sscm-stop-01-800x162

SQL SERVER – Script level upgrade for database ‘master’ failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’

They confirmed that the issue was with the upgrade script, but the error message was not the same. I asked them to send the ERRORLOG file to me, and I found the below error:

2016-08-16 20:41:57.95 spid9s Granting login access ‘pan\svc-sql-agt’ to msdb database…
2016-08-16 20:41:57.96 spid9s A problem was encountered granting access to MSDB database for login ‘(null)’. Make sure this login is provisioned with SQLServer and rerun sqlagent_msdb_upgrade.sql

I have never seen above error earlier, but I went and searched for “sqlagent_msdb_upgrade.sql” file and found it in INSTALL folder. Here are the lines causing error in that file.

--add job_owner to the SQLAgentUserRole msdb role in order to permit the job owner to handle his jobs
--has this login a user in msdb?
IF NOT EXISTS(SELECT * FROM sys.database_principals WHERE sid = @owner_sid)
BEGIN
PRINT ''
PRINT 'Granting login access''' + @owner_name + ''' to msdb database...'
BEGIN TRY
EXEC sp_grantdbaccess @loginame = @owner_name
END TRY
BEGIN CATCH
RAISERROR('A problem was encountered granting access to MSDB database for login ''%s''. Make sure this login is provisioned with SQLServer and rerun sqlagent_msdb_upgrade.sql ', 10, 127) WITH LOG
END CATCH
END

Solution/Workaround

I did some more troubleshooting and found that ‘pan\svc-sql-agt’ owned a schema, and hence we were not able to drop it.

The biggest challenge was that SQL Server was not starting, so I was not able to connect. Fortunately, there is trace flag 902 which can help start SQL Server by bypassing the upgrade script. The ERRORLOG tells you the cause, and the trace flag gives you a window to fix it. So, whenever you encounter any issue with an upgrade script and need to troubleshoot, use trace flag 902. Make sure to remove it afterwards and start SQL Server normally.

Have you ever used any such trace flags?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Unable to Start Services After Patching (sqlagent_msdb_upgrade.sql)

SQL SERVER – Error: Property BackupDirectory is Not Available for Settings


I break a lot of things on my SQL environment and I believe that is a very good way to learn things. Today, I wanted to perform a restore of a database using a backup which I took earlier. In this blog post, let us learn how I fixed Property BackupDirectory is Not Available for Settings.

SQL SERVER - Error: Property BackupDirectory is Not Available for Settings smo-01

As soon as I clicked database, I was welcomed with the below error message.

SQL SERVER - Error: Property BackupDirectory is Not Available for Settings smo-02

Here is the text of the message.

Event ID: 7011
Property BackupDirectory is not available for Settings ‘Microsoft.SqlServer.Management.Smo.Settings’. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)

I was not sure where the BackupDirectory property came from, so I captured a Profiler trace while clicking the restore option, and found the below when I searched for BackupDirectory:

declare @BackupDirectory nvarchar(512)
if 1=isnull(cast(SERVERPROPERTY('IsLocalDB') as bit), 0)
select @BackupDirectory=cast(SERVERPROPERTY('instancedefaultdatapath') as nvarchar(512))
else
exec master.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'BackupDirectory', @BackupDirectory OUTPUT

After debugging further, it was easy to find that it is looking for a registry key for my SQL instance, and then I recalled that I had been playing with a registry setting for one of my previous blog posts.

Solution

I found that below setting was empty.

SQL SERVER - Error: Property BackupDirectory is Not Available for Settings smo-03

As soon as I entered a value in the highlighted text box of “backup” and hit OK, I was able to get to the restore UI without any error.
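
If you want to verify the value from T-SQL instead of the registry editor, here is a sketch using the same xp_instance_regread call that SSMS issues; the instance-relative key path shown is the standard one, so adjust it if your setup differs:

-- Read the default backup directory the same way SSMS does
DECLARE @BackupDirectory NVARCHAR(512);
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    @BackupDirectory OUTPUT;
SELECT @BackupDirectory AS DefaultBackupDirectory;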

Have you ever used the default backup directory setting? Do you know what it is used for?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Error: Property BackupDirectory is Not Available for Settings

SQL SERVER – Performance Benefit of Using SPARSE Columns?


I have written a number of blogs around working with SPARSE columns in the past, and those posts serve as a primer on what SPARSE columns are.

As you work through these, there are a number of other considerations one needs to take into account. In a recent consulting assignment, I had the luxury of having a discussion with the developers around using SPARSE columns. It was a SaaS-based application and many of the columns in their table design would be NULL. Hence, once I saw the dataset, I talked about SPARSE columns.

Immediately, the developer and the DBA came back asking: is it always performant to use SPARSE columns in their design? There are edge cases where this would fail – we will reserve those for a future post. Here is a classic example to showcase the point.

To showcase this, let us try to create and prepare our environment:

CREATE DATABASE TestDB
GO
USE TestDB
GO
CREATE TABLE sparse_tbl1 (Col1 INT SPARSE, Col2 INT)
GO
INSERT INTO sparse_tbl1 VALUES (NULL, 7)
INSERT INTO sparse_tbl1 VALUES (3, NULL)

Now the specifics of data inserted and visualizing come here.

--run DBCC IND to find the page we stored this data on. Look for PageType=1
DBCC IND('TestDB', 'sparse_tbl1', 1)
GO
DBCC TRACEON (3604);
GO
--run DBCC PAGE
DBCC PAGE ('TestDB', 1, 312, 3)

SQL SERVER - Performance Benefit of Using SPARSE Columns? sparse-column-performance-01-800x275

I have taken the output and also shown below as text.

Slot 0 Offset 0x60 Length 11
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP Record Size = 11
………….. <>
Slot 1 Offset 0x6b Length 27
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 27

Let me walk you through the same. Let’s look at the DBCC PAGE output of the data inserted. In the first row we inserted a NULL in the sparse column and the value 7 in the non-sparse column. As a result, the row length is 11 bytes overall and only Column 2 has been stored.

Now examine the second row (slot 1). Here we inserted a non-null value in the sparse column and a NULL in the ordinary column. The row length is now 27 bytes. We see two columns, the first one being a sparse column. I have highlighted the actual data value in green. As you can observe, the row with a NULL in the sparse column is 16 bytes smaller than the row with a NULL in the non-sparse column, in this example using the INT data type.

In the above example we found that NULL values used in a sparse column lead to more efficient space usage. Let us examine the opposite scenario, i.e how much space is used up when non-null data is stored in a Sparse column – this will be discussed in a different blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Performance Benefit of Using SPARSE Columns?

SQL SERVER – Msg 3047, Level 16, State 1: The BackupDirectory Registry Key is Not Configured Correctly


While writing my earlier blog, I broke a few settings and started getting new errors. Earlier, when I used to run the below command, it worked fine. However, just a day later, it gave me an error related to the BackupDirectory registry key.

BACKUP DATABASE [SQLAuthority] TO  DISK = N'SQLAuthority.bak'

When I ran the command today, I faced below error.

SQL SERVER - Msg 3047, Level 16, State 1: The BackupDirectory Registry Key is Not Configured Correctly backup-directory-01

Msg 3047, Level 16, State 1, Line 1
The BackupDirectory registry key is not configured correctly. This key should specify the root path where disk backup files are stored when full path names are not provided. This path is also used to locate restart checkpoint files for RESTORE.
Msg 3038, Level 16, State 1, Line 1
The file name “SQLAuthority.bak” is invalid as a backup device name. Reissue the BACKUP statement with a valid file name.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

Here is the UI message for the same error:

The command would work if I provided the complete path of the backup. If you notice, the command which I normally run (to avoid more typing) uses just a file name, and the backup goes to the configured default backup directory. We are getting the error because I messed up that setting.

Here is an earlier blog which talks about this setting.

SQL SERVER – Error: Property BackupDirectory is Not Available for Settings

Solution/Workaround

Enter the correct value in the backup folder text box under the “Database Settings” tab of the “Properties” dialog of the SQL Server instance in Management Studio.

SQL SERVER - Msg 3047, Level 16, State 1: The BackupDirectory Registry Key is Not Configured Correctly backup-directory-02

Once I entered the path in the “Backup” textbox, the backup command worked as before.
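
If you prefer T-SQL over the UI, here is a sketch that writes the same registry value; the path is only an example, so substitute your own backup folder:

-- Set the default backup directory via the instance-aware registry procedure
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    REG_SZ,
    N'D:\SQLBackups';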

Have you ever shot yourself in the foot like me?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Msg 3047, Level 16, State 1: The BackupDirectory Registry Key is Not Configured Correctly

SQL SERVER – Error – Attempted to Perform an Unauthorized Operation


As you might know, I try hard to help the community via various channels, and comments on my blog are one of those channels. Here is a blog post which talks about an error about an unauthorized operation.

Recently, one of my community members shared a SQL Server 2016 setup log with me. He wanted to understand why he was seeing the below error:

Attempted to perform an unauthorized operation.

I asked for the complete log, and below is what we saw.

Error: Local Discovery

(01) 2016-07-30 04:23:21 Slp: Running Action: RunRemoteDiscoveryAction
(01) 2016-07-30 04:23:21 Slp: Running discovery on local machine
(01) 2016-07-30 04:23:23 Slp: Discovery on local machine is complete

Error: Node MyNode02

(01) 2016-07-30 04:23:23 Slp: Running discovery on remote machine: MYNODE02
(01) 2016-07-30 04:23:24 Slp: Discovery on MYNODE02 failed due to exception

Error: Node MyNode03

(01) 2016-07-30 04:23:24 Slp: Running discovery on remote machine: MYNODE03
(01) 2016-07-30 04:23:28 Slp: Discovery on MYNODE03 is complete

I have highlighted the relevant sections of the log. The first one is local discovery, which worked. The second one is for node MyNode02, which failed with an error, and the third, for remote node MyNode03, worked.

I asked him to check the below.

  1. Start > Run > RegEdit
  2. File > Connect Network Registry.

SQL SERVER - Error - Attempted to Perform an Unauthorized Operation reg-error-01

  3. Provide the remote machine name and connect.

He informed me that he was getting below error:

SQL SERVER - Error - Attempted to Perform an Unauthorized Operation reg-error-02

Solution: As explained in the error message, after starting “remote registry” service, the issue was resolved.

Have you ever seen some other setup error?

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Error – Attempted to Perform an Unauthorized Operation

SQL SERVER – Error: 26014, Severity: 16. Unable to Load User Specified Certificate


Recently I was consulting for a customer, and they had plans to change the certificate used by SQL Server. After making changes to the certificate, we found that we were not able to start the SQL Server service. Let us learn how to fix error 26014, about being unable to load a user-specified certificate.

SQL SERVER - Error: 26014, Severity: 16. Unable to Load User Specified Certificate cert-01

Below were the errors in the SQL Server ERRORLOG. SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location

2016-06-29 01:59:48.07 Server Error: 26014, Severity: 16, State: 1.
2016-06-29 01:59:48.07 Server Unable to load user-specified certificate [Cert Hash(sha1) “692169CAAE3FA02AB216876A6CC468B60BB4C153”]. The server will not accept a connection. You should verify that the certificate is correctly installed. See “Configuring Certificate for Use by SSL” in Books Online.
2016-06-29 01:59:48.07 Server Error: 17182, Severity: 16, State: 1.
2016-06-29 01:59:48.07 Server TDSSNIClient initialization failed with error 0x80092004, status code 0x80. Reason: Unable to initialize SSL support. Cannot find object or property.

I looked into the certificate and found that 692169CAAE3FA02AB216876A6CC468B60BB4C153 was a valid thumbprint in the certificate properties. We tried various options and searched the internet.

We also verified that the below registry key had the correct thumbprint value: HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\<instance>\MSSQLServer\SuperSocketNetLib

Solution / Workaround:

Finally, we changed the service account from NT SERVICE\MSSQLSERVER to LocalSystem and we were able to start SQL server service.

Let me know if you have faced this error on your production server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Error: 26014, Severity: 16. Unable to Load User Specified Certificate

Creating and Running an SQL Server Unit Test – Best Ways to Test SQL Queries


I hope it is no secret that testing of written code is as important as writing the code itself, and sometimes even more important. Writing unit tests for C#/Java/… code is the responsibility of each software developer.

However, there is not always an opportunity to use autotests. For example, until recently, there were no good computer-aided testing systems for SQL Server, and many people had to create self-made products that were expensive to support and develop. To tell the truth, I was doing that too.

In 2014, I first discovered tSQLt, which turned out to be a very nice free open-source framework for unit testing. In this post, I will try to show you how tSQLt can greatly simplify your life.

Oftentimes, I have to audit servers to identify non-optimal configuration settings.

Usually, it looks like this. I run a bunch of scripts on the server and manually analyze the results of the samples. Let’s try to automate this.

First, we need to download the latest version of tSQLt. Next, we should configure the instance of SQL Server to work with CLR and create a database, on which the framework will be installed:

EXEC sys.sp_configure 'clr enabled', 1
RECONFIGURE
GO
USE [master]
GO
IF DB_ID('tSQLt') IS NOT NULL BEGIN
    ALTER DATABASE tSQLt SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE tSQLt
END
GO
CREATE DATABASE tSQLt
GO
USE tSQLt
GO
ALTER DATABASE tSQLt SET TRUSTWORTHY ON
GO

Then, run the tSQLt script file tSQLt.class.sql from the archive. The script will create its own tSQLt schema, the CLR assembly and a number of scripted objects. Some procedures have the prefix Private_; these are intended for internal use by the framework.

Upon successful installation, we will get the following message in the Output:

+-----------------------------------------+
| Thank you for using tSQLt. |
| tSQLt Version: 1.0.5873.27393 |
+-----------------------------------------+

Now, let’s create a schema in which the autotests will be created:

USE tSQLt
GO
CREATE SCHEMA [Server] AUTHORIZATION dbo
GO
EXEC sys.sp_addextendedproperty @name = N'tSQLt.TestClass'
                              , @value = 1
                              , @level0type = N'SCHEMA'
                              , @level0name = N'Server'
GO

Please note that the extended property marks the schema as a tSQLt test class.

Let’s create a test in the Server schema; make sure the name of the test starts with the prefix “test”:

CREATE PROCEDURE [Server].[test MyFirstAutoTest]
AS BEGIN
    SET NOCOUNT ON;
    EXEC tSQLt.Fail 'TODO: Implement this test.'
END

Execute the created autotest. We can either execute:

EXEC tSQLt.RunAll

or explicitly specify the schema:

EXEC tSQLt.Run 'Server'

or a specific test:

EXEC tSQLt.Run 'Server.test MyFirstAutoTest'

If you need to run the last test executed, you can call Run without the parameters:

EXEC tSQLt.Run

After executing one of the commands above, we will get the following information:

[Server].[test MyFirstAutoTest] failed: (Failure) TODO: Implement this test.
+----------------------+
|Test Execution Summary|
+----------------------+
|No|Test Case Name |Dur(ms)|Result |
+--+-------------------------------+-------+-------+
|1 |[Server].[test MyFirstAutoTest]| 0|Failure|

Let’s create an autotest with a more useful content. For example, we can check which databases have never been backed up. Or go even further… and learn which databases in FULL RECOVERY require BACKUP LOG.

Why is it an essential problem that should be checked automatically?

I have mentioned more than once that you should always make backups. But BACKUP LOG is worth mentioning as a separate issue. After the first FULL backup of a database in FULL recovery, the log is no longer truncated automatically; as free space inside the log file runs out, the file will gradually grow until free space on the disk is exhausted or until we execute BACKUP LOG.
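
Here is a minimal example of such a log backup; the database name and path are only illustrative:

BACKUP LOG [WideWorldImporters] TO DISK = N'D:\Backup\WideWorldImporters.trn';

With that background, here is a test that checks the backup state of every database: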

USE [tSQLt]
GO
CREATE PROCEDURE [Server].[test CheckBackup]
AS BEGIN
    SET NOCOUNT ON;
    DECLARE @SQL NVARCHAR(MAX)
    SELECT @SQL = (
    SELECT '
' + QUOTENAME(d.name) + ': ' +
        CASE
            WHEN t.database_id IS NULL OR t.full_backup = 0
                THEN 'NO FULL BACKUP'
            WHEN d.recovery_model IN (0,1) AND log_backup = 0
                THEN 'NO LOG BACKUP'
            WHEN DATEDIFF(DAY, t.last_full_backup, GETDATE()) > 7
                THEN 'FULL BACKUP IS OUTDATED (LAST BACKUP: '
                    + CONVERT(NVARCHAR(MAX), t.last_full_backup, 120) + ')'
        END
    FROM sys.databases d
    LEFT JOIN (
        SELECT
              database_id = DB_ID(s.database_name)
            , last_full_backup = MAX(CASE WHEN s.[type] = 'D' THEN s.backup_finish_date END)
            , full_backup = COUNT(CASE WHEN s.[type] = 'D' THEN 1 END)
            , log_backup = COUNT(CASE WHEN s.[type] = 'L' THEN 1 END)
        FROM msdb.dbo.backupset s
        WHERE s.[type] IN ('D', 'L')
        GROUP BY s.database_name
    ) t ON t.database_id = d.database_id
    WHERE d.name NOT IN ('tempdb')
        AND (
                t.database_id IS NULL
            OR
                DATEDIFF(DAY, t.last_full_backup, GETDATE()) > 7
            OR
                t.full_backup = 0
            OR
                (d.recovery_model IN (0,1) AND log_backup = 0)
        )
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
    IF @SQL IS NOT NULL
        EXEC tSQLt.Fail @SQL
END

Let’s execute this:

EXEC tSQLt.Run 'Server.test CheckBackup'

[Server].[test CheckBackup] failed: (Failure)
[master]: NO FULL BACKUP
[model]: NO FULL BACKUP
[msdb]: NO FULL BACKUP
[WideWorldImporters]: FULL BACKUP IS OUTDATED (LAST BACKUP: 2016-06-01 14:48:29)
[tSQLt]: NO LOG BACKUP

+----------------------+
|Test Execution Summary|
+----------------------+
|No|Test Case Name |Dur(ms)|Result |
+--+---------------------------+-------+-------+
|1 |[Server].[test CheckBackup]| 27|Failure|

Let’s execute the required backup operations and re-run the test to check its correctness:

+----------------------+
|Test Execution Summary|
+----------------------+
|No|Test Case Name |Dur(ms)|Result |
+--+---------------------------+-------+-------+
|1 |[Server].[test CheckBackup]| 10|Success|

Let’s create a couple more useful tests – for example, one for keeping track of how many Auto Grow events occur on the server.

I hope it’s no secret that the execution of any transaction requires a certain amount of space on disk, in the data file or the log. In general, if there is not enough space, the file grows automatically. At this point, the file is blocked, and SQL Server waits for the disk subsystem to complete the allocation of space on the disk.

By default, SQL Server zero-initializes new space on the disk. This behavior can be disabled for data files through the use of Instant File Initialization, and the time for allocating disk space can be reduced. But initialization will still happen for log files, and this is definitely slow. Therefore, it is recommended to keep track of Auto Grow events on a regular basis:

USE [tSQLt]
GO
CREATE PROCEDURE [Server].[test CheckAutoGrow]
AS BEGIN
    SET NOCOUNT ON;
    DECLARE @SQL NVARCHAR(MAX)
    SELECT @SQL = (
    SELECT '
' + QUOTENAME(DB_NAME(DatabaseID)) + ': ' + CAST(COUNT(1) AS NVARCHAR(10))
            + ' events in ' + [FileName]
            + ' (total waits: ' + CAST(SUM(Duration) / 1000 AS NVARCHAR(10))
            + 'ms)'
    FROM sys.traces i
    CROSS APPLY sys.fn_trace_gettable([path], DEFAULT) t
    WHERE t.EventClass IN (92, 93)
        AND i.is_default = 1
    GROUP BY DatabaseID, [FileName]
    HAVING COUNT(1) > 2
        OR SUM(Duration) / 1000 > 300
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
    IF @SQL IS NOT NULL
        EXEC tSQLt.Fail @SQL
END

[Server].[test CheckAutoGrow] failed: (Failure)
[tempdb]: 67 events in templog (total waits: 2492ms)
[tempdb]: 112 events in tempdev (total waits: 8149ms)
[tSQLt]: 1 events in tSQLt_log (total waits: 346ms)
[tSQLt]: 3 events in tSQLt (total waits: 283ms)

+----------------------+
|Test Execution Summary|
+----------------------+
|No|Test Case Name |Dur(ms)|Result |
+--+-----------------------------+-------+-------+
|1 |[Server].[test CheckBackup] | 10|Success|
|2 |[Server].[test CheckAutoGrow]| 26|Failure|

Now let’s add one more test, which will check whether the Instant File Initialization I mentioned earlier is enabled on the server:

USE [tSQLt]
GO
CREATE PROCEDURE [Server].[test InstantFileInitializationEnabled]
AS BEGIN
    SET NOCOUNT ON;
    DECLARE @IsEnabled BIT
    IF EXISTS (
        SELECT 1
        FROM sys.configurations
        WHERE name = 'xp_cmdshell'
            AND value_in_use = 1
            AND IS_SRVROLEMEMBER('sysadmin') = 1
    ) BEGIN
        DECLARE @temp TABLE (Value VARCHAR(8000))
        INSERT INTO @temp
        EXEC sys.xp_cmdshell 'whoami /priv'
        SELECT @IsEnabled =
            CASE WHEN Value LIKE '%Enabled%' COLLATE SQL_Latin1_General_CP1_CI_AS
                THEN 1
                ELSE 0
            END
        FROM @temp
        WHERE Value LIKE '%SeManageVolumePrivilege%'
    END
    IF ISNULL(@IsEnabled, 0) = 0
        EXEC tSQLt.Fail 'Instant File Initialization NOT ENABLED'
END

Thus, we already have three autotests, which are stored in tSQLt. If we need to conduct an audit and verify the correctness of settings, all we have to do is to run autotests. And to check another server, we need to backup the tSQLt database and deploy it at a new location.

What else should be mentioned? Don’t forget that tSQLt wraps each test you run in a transaction. Therefore, if your stored procedure uses its own transactions, it should be handled cautiously. For example, the test of this procedure will fail:

CREATE PROC TEST
AS BEGIN
    BEGIN TRAN TR
    BEGIN TRY
        SELECT 1 / 0
        COMMIT TRAN TR
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRAN TR
    END CATCH
END

The procedure works without errors outside the test, though. The reason for the problem is that the ROLLBACK in the procedure rolls back not only its own transaction but also the tSQLt transaction, so the number of active transactions changes on return.
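
One common way to make such a procedure test-friendly (a sketch of my own, not part of the framework; adjust it to your error-handling conventions) is to roll back only to a savepoint when the procedure is invoked inside an outer transaction:

CREATE PROC TEST_SAFE
AS BEGIN
    -- Remember whether we are already inside a transaction (as we are under tSQLt)
    DECLARE @OuterTran INT = @@TRANCOUNT
    IF @OuterTran > 0
        SAVE TRANSACTION TR
    ELSE
        BEGIN TRAN TR
    BEGIN TRY
        SELECT 1 / 0
        IF @OuterTran = 0
            COMMIT TRAN TR
    END TRY
    BEGIN CATCH
        -- Rolls back to the savepoint inside an outer transaction,
        -- or rolls back the whole transaction if we started it ourselves
        IF XACT_STATE() <> -1
            ROLLBACK TRAN TR
    END CATCH
END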

And now a small dessert…

For those who like a GUI, green and red checkmarks in front of tests, etc., the Devart company has developed dbForge Unit Test for SQL Server – a powerful tSQLt-based plug-in for SSMS that allows you to automate all the work with unit tests.

Creating and Running an SQL Server Unit Test - Best Ways to Test SQL Queries unittest1

All that we have done using scripts, you can easily create and edit with the help of GUI:

Creating and Running an SQL Server Unit Test - Best Ways to Test SQL Queries unittest2

You can also run tests and analyze the results of execution:

Creating and Running an SQL Server Unit Test - Best Ways to Test SQL Queries unittest3

At this point, I finish telling you about the capabilities of unit testing in SQL Server. I hope this information will be helpful.

For those who are interested in trying out dbForge Unit Test for SQL Server – you can download it here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on Creating and Running an SQL Server Unit Test – Best Ways to Test SQL Queries


PowerShell Scripts – get-process with SQL Server process


Working with PowerShell scripts can be interesting. I have in the past shown a number of such scripts that we can use with SQL Server. In this blog, I was playing around to understand how the Get-Process cmdlet can be used with SQL Server. This exploration got me to write this rather simple yet useful blog that you might use in your environments.

I start off by looking for the SQL Server process information in the list of processes running on the system.

#list all running processes
Get-Process

#list only SQL Server processes
Get-Process sqlservr

As you can see the output would look something like this. Here we have the host’s process id too.

PowerShell Scripts - get-process with SQL Server process get-process-01

There is additional information we can get from the same, but we will reserve that for another day. What I was interested in was understanding which additional members and properties can be queried.

#Below command lists the properties that we can use with Get-Process.
Get-Process sqlservr | Get-Member -MemberType Properties

This gives us all the properties we can use with the Get-Process cmdlet. The output will look like below:

PowerShell Scripts - get-process with SQL Server process get-process-02

What I was experimenting with was whether there is a method to copy content from the output window. First I played around to understand how to copy from the command-line window – interesting, but nothing new. What I did find was a handy cmdlet that can copy the output to the clipboard.

#To save time copying the output, we can put the output in the
#clipboard so that we can paste it directly at our end.
Get-Process sqlservr | select ProcessName, id, StartTime,
FileVersion, Path, Description, product | clip

Now that we have it in our clipboard, we can paste the same into any of the application of choice.

I thought this was worth a share as I play more with SQL Server and PowerShell combinations. Though such learnings get me started with some simple scripts, I would love to learn from you to how you use PowerShell with SQL Server in your environments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on PowerShell Scripts – get-process with SQL Server process

PowerShell – Tip: How to Format PowerShell Script Output?


I have been writing on various ways of working with PowerShell and how to connect to SQL Server. Personally, when I see plain text on the command prompt it is quite a mess and very difficult to decipher. If you play around and look at various blog posts, they show some interesting ways to format PowerShell output.

In this blog post, let me take you through three of the most common ways in which people format output from a PowerShell script. This is by no means exhaustive, but it is a great start and you will surely start falling in love with this capability. Trust me on it.

Method 1: This is an age-old classic wherein we can format the output from a PowerShell script as a formatted table (ft for short). A typical command looks like:

#Demo Format-Table cmdlet. Alias ft
invoke-sqlcmd "Select * from sys.dm_exec_connections" -ServerInstance . | ft

PowerShell - Tip: How to Format PowerShell Script Output? format-powershell-01

As you can see above, the output of our SQL query is now formatted in a nice table structure and is properly delimited. I am generally a big fan of using this with my scripts as it is easily readable.

Method 2: This is yet another variation of the same output, but this time we can take the output and make it into a formatted list (fl). A typical usage of this would look like:

#Demo Format-List cmdlet. Alias fl
invoke-sqlcmd "Select * from sys.dm_exec_connections" -ServerInstance . | fl

There are people who like to see these as property sheets and I am not inclined to this output in general. But I am sure there will be use cases wherein it would make complete sense to have this output.

PowerShell - Tip: How to Format PowerShell Script Output? format-powershell-02

Method 3: This was a revelation for me. I was using the PowerShell ISE, and one of the output options is to use a grid view. This can be used like:

#Demo Out-GridView cmdlet
invoke-sqlcmd "Select * from sys.dm_exec_connections" -ServerInstance . | Out-GridView

PowerShell - Tip: How to Format PowerShell Script Output? format-powershell-03

As you can see, the output now is in the format of a window just like what I am used to with SQL Server Management Studio. This was an awesome capability I personally felt.

I am sure many of you are power users and might have used these in different ways. Please let me know which of the output you like the most and let me know if there are any other methods that I need to know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on PowerShell – Tip: How to Format PowerShell Script Output?

SQL SERVER – Displaying SQL Agent Jobs Running at a Specific Time


Recently I was troubleshooting something at a customer location that looked trivial. The customer approached me with a consulting requirement, saying their system was going unresponsive every day in the morning around a certain time. They were clueless as to what was happening and why this was the case almost every day of the week. I got curious to understand what was going wrong, and whether SQL Agent jobs were involved.

Some of these problems can take a really long time, and some of them can be as simple as you think. Here I was clueless as to what the problem was. When I got into an active discussion with the team, I sensed there was something they were not telling me. After some troubleshooting with the team, using tools like PerfMon and Profiler, I figured out there was a background process running at that time.

SQL SERVER - Displaying SQL Agent Jobs Running at a Specific Time jobsrunning-800x486

I asked the team if there were any Agent jobs that were running at that time. I could see they were clueless and were looking at each other. One developer put the ball in my court by asking if there was a method or script to help them find whether any jobs were running at a given time. I went to my script bank and found I already had one handy.

Listing SQL Agent Jobs Running at a Specific Time

SELECT * FROM
(
 SELECT JobName, RunStart, DATEADD(second, RunSeconds, RunStart) RunEnd, RunSeconds
 FROM
 (
  SELECT j.name AS 'JobName',
    msdb.dbo.agent_datetime(run_date, run_time) AS 'RunStart',
    ((jh.run_duration/1000000)*86400) 
    + (((jh.run_duration-((jh.run_duration/1000000)*1000000))/10000)*3600) 
    + (((jh.run_duration-((jh.run_duration/10000)*10000))/100)*60) 
    + (jh.run_duration-(jh.run_duration/100)*100) RunSeconds
  FROM msdb.dbo.sysjobs j 
  INNER JOIN msdb.dbo.sysjobhistory jh ON j.job_id = jh.job_id 
  WHERE jh.step_id=0 --The Summary Step
 ) AS H
) AS H2
WHERE '2016-05-19 10:16:10' BETWEEN RunStart AND RunEnd
ORDER BY JobName, RunEnd

I personally found this handy, and the problem was solved as soon as this query ran. They figured out it was a batch process that had recently been deployed: instead of scheduling it at 10 PM, the administrator had mistakenly scheduled it for 10 AM.

This revelation was an eye opener to what one needs to do while doing the configurations. I felt such a simple task can sometimes take ages to solve or a human error can bring the system down so easily. I think I learnt something new and felt this learning can be useful to you too. Do let me know if you find this script about SQL Agent Jobs useful or feel free to extend the same and share via comments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Displaying SQL Agent Jobs Running at a Specific Time

SQL SERVER – Fix: Error: Msg 1904 The statistics on table has 65 columns in the key list


SQL SERVER - Fix: Error: Msg 1904 The statistics on table has 65 columns in the key list statisticserror

With SQL Server 2016, I have come to know that some of the restrictions which were applicable earlier are no longer the limits to look for. In one such experiment I stumbled upon my earlier blog post: SQL SERVER – Fix: Error: Msg 1904, Level 16 The statistics on table has 33 column names in statistics key list. The maximum limit for index or statistics key column list is 32.

I thought having 32 columns was already far too many, but to my surprise, when I used the same script from that blog, I realized this error was not popping up.

I went into a mode of exploration to find when the error would pop-up. The script this time was:

DROP DATABASE IF EXISTS TestDB
GO
CREATE DATABASE TestDB
GO
USE TestDB
GO
CREATE TABLE Test1
(ID1 INT,  ID2 INT, ID3 INT, ID4 INT, ID5 INT, ID6 INT,
ID7 INT, ID8 INT, ID9 INT, ID10 INT, ID11 INT, ID12 INT,
ID13 INT, ID14 INT, ID15 INT, ID16 INT, ID17 INT, ID18 INT,
ID19 INT, ID20 INT, ID21 INT, ID22 INT, ID23 INT, ID24 INT,
ID25 INT, ID26 INT, ID27 INT, ID28 INT, ID29 INT, ID30 INT,
ID31 INT, ID32 INT, ID33 INT, ID34 INT, ID35 INT, ID36 INT,
ID37 INT, ID38 INT, ID39 INT, ID40 INT, ID41 INT, ID42 INT,
ID43 INT, ID44 INT, ID45 INT, ID46 INT, ID47 INT, ID48 INT,
ID49 INT, ID50 INT, ID51 INT, ID52 INT, ID53 INT, ID54 INT,
ID55 INT, ID56 INT, ID57 INT, ID58 INT, ID59 INT, ID60 INT,
ID61 INT, ID62 INT, ID63 INT, ID64 INT, ID65 INT)
GO

And for creating the statistics, I have used the below:

CREATE STATISTICS [Stats_Test1] ON [dbo].[Test1] (ID1,
ID2,ID3,ID4,ID5,ID6,ID7,ID8,ID9,ID10,ID11,ID12,ID13,ID14,ID15,
ID16,ID17,ID18,ID19,ID20,ID21,ID22,ID23,ID24,ID25,ID26,ID27,ID28,
ID29,ID30,ID31,ID32,ID33,ID34,ID35,ID36,ID37,ID38,ID39,ID40,ID41,
ID42,ID43,ID44,ID45,ID46,ID47,ID48,ID49,ID50,ID51,ID52,ID53,ID54,
ID55,ID56,ID57,ID58,ID59,ID60,ID61,ID62,ID63,ID64,ID65)
GO

Msg 1904, Level 16, State 2, Line 21
The statistics ‘Stats_Test1’ on table ‘dbo.Test1’ has 65 columns in the key list. The maximum limit for statistics key column list is 64.

As you can see, the maximum limit for the statistics key column list has now been doubled to 64, so the limit has changed with SQL Server 2016. You will hit this error if you try to create an index or statistics with more columns than the limit allows.

At this moment I would like to point you to the Maximum Capacity Specifications for SQL Server page on MSDN for reference, because this is the root page for all these limits that one needs to be aware of. Have you ever hit this limit in your environments or scripts? Do let me know via comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Fix: Error: Msg 1904 The statistics on table has 65 columns in the key list

SQL SERVER – FIX Error: Maintenance plan scheduled but job is not running as per schedule


SQL SERVER - FIX Error: Maintenance plan scheduled but job is not running as per schedule anothererror

This was one of the more interesting issues I have solved in many days. One of my clients contacted me and told me that they had scheduled a maintenance plan to take a t-log backup at 10 PM, but it was not running. When we looked into the job history, it showed no history at all. Problem statements like these are interesting because they look trivial and simple – yet they are convoluted and not straightforward to solve. Let us learn how to solve the error “Maintenance plan scheduled, but the job is not running as per schedule”.

I asked them to show me the problem so that I could watch it live and look at the various things happening. I asked them to share the LOG folder which contains the SQLAGENT.OUT file.

SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location

When I looked into the file, I was able to find interesting messages like below.

2016-06-29 20:00:00 – ! [298] SQLServer Error: 229, The EXECUTE permission was denied on the object ‘sp_sqlagent_log_jobhistory’, database ‘msdb’, schema ‘dbo’. [SQLSTATE 42000] (ConnExecuteCachableOp)
2016-06-29 20:10:36 – ! [298] SQLServer Error: 229, The SELECT permission was denied on the object ‘sysjobschedules’, database ‘msdb’, schema ‘dbo’. [SQLSTATE 42000] (SaveAllSchedules)
2016-06-29 20:10:36 – ! [298] SQLServer Error: 229, The UPDATE permission was denied on the object ‘sysjobschedules’, database ‘msdb’, schema ‘dbo’. [SQLSTATE 42000] (SaveAllSchedules)
2016-06-29 20:10:36 – ! [376] Unable to save 1 updated schedule(s) for job T-log Backup 10 PM.Subplan_1

So the above was the problem in the Agent log file at 8 PM. I asked them to open the maintenance plan and try to save the schedule again. As soon as we did that, no error was raised, but the job was not reflecting the schedule.

Fix/ Solution / Workaround:

I captured a Profiler trace while saving the maintenance plan and found that the permissions of the SQL Agent account were not sufficient, as the [298] permission errors above showed.

As per documentation: (Select an Account for the SQL Server Agent Service)

The account that the SQL Server Agent service runs as must be a member of the following SQL Server roles:

  • The account must be a member of the sysadmin fixed server role.
  • To use multiserver job processing, the account must be a member of the msdb database role TargetServersRole on the master server.
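
A quick way to check whether the Agent service account still has that membership is IS_SRVROLEMEMBER; the login name below is simply the one from this client’s log, so substitute your own:

-- Returns 1 if the login is a member of the sysadmin fixed server role
SELECT IS_SRVROLEMEMBER('sysadmin', 'pan\svc-sql-agt') AS IsSysAdmin;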

Later the client informed me that this all started happening when they followed an article on the internet to harden security.

Moral of the story: never blindly trust internet advice, as not everything there is true. Always look at the author and check his/her reliability.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – FIX Error: Maintenance plan scheduled but job is not running as per schedule
