
Posts Categorized / SQL Server Agent scheduled Jobs

  • Jan 31 / 2014
Database Mail, dbDigger, SQL Server Agent scheduled Jobs, SQL Server Error messages

dbmail error File attachment or query results size exceeds allowable value of 1000000 bytes

The other day I was working on a task meant to generate some data and send it as an email attachment to a specific email address. This process was to be configured as a scheduled job in SQL Server Agent. I completed the first part of the task, and the data was ready as a CSV file to be sent through email.

Problem Phase

In the second phase, while trying to send the file as an attachment, I got the following error message.

dbmail error File attachment or query results size exceeds allowable value of 1000000 bytes

The byte count shows that a cap of about 1 MB applied to any attachment file sent through Database Mail. In my case the attachment file was expected to be up to 25 MB in size.
So the questions are:

  • Can we change the default limit on attachment size in Database Mail, i.e. 1000000 bytes?
  • If yes, where can we change this parameter?

Solution

Yes, we can change the default limit on attachment size in Database Mail. Following are the steps to change this parameter:

  • Open SSMS and go to the Management folder
  • Right-click on Database Mail and select Configure Database Mail
  • Click Next and select the View or change system parameters radio button
  • Click Next and you will be presented with the parameters screen, where you can view and modify the parameters
  • Here you can change the value against the Maximum File Size (Bytes) parameter
  • Specify the required size in bytes and click Next to save the configuration

Now you can send attachments up to the specified maximum size without any error.
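
The same change can also be made through T-SQL. Below is a minimal sketch using the Database Mail configuration procedures in msdb; the 25 MB value (26214400 bytes) matches the requirement above.

    -- View the current Database Mail configuration values, including MaxFileSize
    EXEC msdb.dbo.sysmail_help_configure_sp;
    GO
    -- Raise the attachment size limit to 25 MB (26214400 bytes)
    EXEC msdb.dbo.sysmail_configure_sp
         @parameter_name  = 'MaxFileSize',
         @parameter_value = '26214400';
    GO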

What is the limit of the max value?

A valid question at this point is: what is the largest value we can enter as the maximum file size in bytes? The answer is 2147483646.
This parameter is of type int and accepts values under 2^31-1 (2,147,483,647) bytes, which means the parameter tops out at roughly 2 GB. If you enter a larger value, the following error message is generated:

Account Retry Attempts, Account Retry Delay, Maximum File Size, Database Mail Executable Minimum Lifetime, and Maximum Number of Mails must have valid values to continue.

Value was either too large or too small for an Int32. (mscorlib)

  • Dec 24 / 2013
DBCC Commands, dbDigger, Maintenance plans, Performance Tuning and Optimization, SQL Server Agent scheduled Jobs

Optimal run of DBCC CHECKDB on production servers

DBCC CHECKDB is a built-in SQL Server command to analyze and report on database integrity in various aspects. It checks the logical and physical integrity of all objects in the specified database by performing the following operations (a basic invocation is shown after the list):

  • Runs DBCC CHECKALLOC on the database
  • Runs DBCC CHECKTABLE on every table and view in the database
  • Runs DBCC CHECKCATALOG on the database
  • Validates the contents of every indexed view in the database
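
For reference, a basic invocation looks as follows; the database name is illustrative, and NO_INFOMSGS suppresses the informational messages.

    -- Full logical and physical integrity check of one database
    DBCC CHECKDB ('DBDigger') WITH NO_INFOMSGS;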

It is a useful command and should not be abandoned because of its resource-hungry execution. However, under a default deployment it may take significant time on production databases, and as the time window grows, so does the risk of performance degradation. You may have faced these issues several times on your production servers. The following shortcomings may be noticed in a default implementation:

  • Error or informational messages are not stored anywhere when the scheduled job runs
  • A check is required to exclude databases that are not ONLINE from the target databases list
  • A strategy is required to minimize the activity on the server

Solution

We may take a few steps to optimize the process and log its output. The following points are recommended; a T-SQL sketch follows the list.

  • Save a log file of the DBCC CHECKDB scheduled job output to disk. It may contain just the error messages, if any, or the informational messages as well.
  • Make sure to exclude the databases for which CHECKDB is not required. CHECKDB can only be performed on ONLINE databases, so include a check in the script that filters out all databases that are not ONLINE.
  • Use the PHYSICAL_ONLY option. It greatly reduces processing time by considering only the data as it is stored on the physical layer.
  • Check the logical structure on a freshly restored copy of the same databases on another server. Logical checks do not depend on the machine or disk, so this entirely removes the load of logical checks from the production server. It also validates the backups.
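
Here is a minimal sketch combining these points: it filters out databases that are not ONLINE and runs the reduced physical-only check. The exclusion list is illustrative; in a scheduled job, the output file is configured on the job step's Advanced page.

    -- Run a physical-only integrity check against every ONLINE database
    DECLARE @dbname SYSNAME;
    DECLARE db_cursor CURSOR FAST_FORWARD FOR
        SELECT name
        FROM sys.databases
        WHERE state_desc = 'ONLINE'
          AND name NOT IN ('tempdb');   -- add any databases to be skipped

    OPEN db_cursor;
    FETCH NEXT FROM db_cursor INTO @dbname;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- PHYSICAL_ONLY cuts run time; NO_INFOMSGS keeps the job log small
        DBCC CHECKDB (@dbname) WITH PHYSICAL_ONLY, NO_INFOMSGS;
        FETCH NEXT FROM db_cursor INTO @dbname;
    END
    CLOSE db_cursor;
    DEALLOCATE db_cursor;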

Page Verification Settings and DBCC CheckDB

It seems relevant here to discuss the relationship between page verification CHECKSUM and DBCC CheckDB. PAGE_VERIFY is a database-level setting related to data recovery and integrity. Its value may be NONE, TORN_PAGE_DETECTION, or CHECKSUM. For SQL Server 2005 and onwards, CHECKSUM is the default option for new databases. CHECKSUM is a more detailed and granular algorithm than TORN_PAGE_DETECTION and covers all aspects of it, though it may require more CPU resources. With CHECKSUM, SQL Server calculates a checksum over the contents of the whole page and stores the value in the page header whenever a page is written to disk.
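
The current setting of each database can be inspected, and changed, through T-SQL; the database name below is illustrative.

    -- Inspect the page verification setting of all databases
    SELECT name, page_verify_option_desc
    FROM sys.databases;
    GO
    -- Switch a database to CHECKSUM page verification
    ALTER DATABASE [DBDigger] SET PAGE_VERIFY CHECKSUM;
    GO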

A common misconception is that if we enable CHECKSUM page verification, we may skip the DBCC CheckDB command since CHECKSUM will be evaluating data integrity.
That is not true at all. Page verification CHECKSUM is not an alternative to DBCC CheckDB, though it can effectively extend its reach. Page verification CHECKSUM is a limited-scope, page-level verification, whereas DBCC CheckDB covers far more than that.
The page CHECKSUM is calculated and updated only when a data page is written back to disk. If a data page becomes corrupt after it has been saved to disk with its checksum calculated, we will not get a report of it until the page is read again. To verify pages against their stored checksums, we still have to run DBCC CheckDB.
The technical detail of the differences is beyond the scope of this post. For a better verification framework, we should use both features.

  • Dec 06 / 2013
Backup and Restore, dbDigger, SQL Server Agent scheduled Jobs, SQL Server Error messages, Xp_CmdShell

Executed as user: NT AUTHORITY\SYSTEM. xp_create_subdir() returned error 5, 'Access is denied.'

Recently I got a job failure email alert. This job backs up a few databases and, as part of the process, is required to create a directory for each database as well. When I explored the job history for the failure reason, the following message was found in the log:

Message
Executed as user: NT AUTHORITY\SYSTEM. xp_create_subdir() returned error 5, 'Access is denied.' [SQLSTATE 42000] (Error 22048). The step failed.

sa is the owner of this job, and there is no apparent role of NT AUTHORITY\SYSTEM as mentioned in the message. However, there is a by-design flow that involves NT AUTHORITY\SYSTEM.
The point to note is that the SQL Server service was configured to run under the local system account.
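
On SQL Server 2008 R2 SP1 and later, the service account can be confirmed from T-SQL as a quick check:

    -- Show the account each SQL Server service runs under
    SELECT servicename, service_account
    FROM sys.dm_server_services;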

Reason:

When xp_cmdshell is invoked by a user who is a member of the sysadmin fixed server role, xp_cmdshell executes under the security context in which the SQL Server service is running. In this case xp_cmdshell was being invoked by sa, i.e. a sysadmin, so it was executed in the context of the local system account. That local account, NT AUTHORITY\SYSTEM, has no access to create directories on the backup location.

Solution:

To solve the issue I changed the SQL Server service login to a valid domain account that has access to create folders and files on the network backup location.
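
After changing the service account, the directory creation can be verified directly; the path below is illustrative.

    -- Succeeds silently when access is available; raises error 5 otherwise
    EXEC master.dbo.xp_create_subdir N'\\BackupServer\SQLBackups\DBDigger';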

  • Aug 27 / 2013
dbDigger, PowerShell, SQL Server Agent scheduled Jobs, SQL Server Error messages

Error message while using PowerShell in scheduled job

I was required to call a PowerShell script from a SQL Server scheduled job. Following was the command to access and run the PS script saved on disk:
powershell -command "& 'D:\Pscripts\Transfer.ps1'"

This command syntax may be saved in a batch file or run directly from cmd. Surprisingly, however, I was getting the following error message in the job history. Notably, the job status was successful, yet the error messages were logged in the job history.

A job step received an error at line 1 in a PowerShell script. The corresponding line is 'powershell -command "&'%'"'. Correct the script and reschedule the job. The error information returned by PowerShell is:

Apparently it was pointing towards the syntax used in the command, but I did not find any issue with it. It worked fine outside the SQL Server scheduled job environment, and googling did not turn up any help either. I therefore analyzed the issue in terms of permissions, which we mostly face in the scheduled job environment.

So the first step was to create a proxy account. I will cover creating a proxy account for SQL Server Agent jobs in a separate post. After creating the proxy account I used it in the scheduled job and ran the job to test.
The idea worked and the job executed successfully.
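
As a minimal sketch until that post: a credential wraps a Windows account, a proxy wraps the credential, and the proxy is granted to the PowerShell subsystem. All names, the account, and the password here are illustrative.

    -- Credential mapping to a Windows account that has the required permissions
    CREATE CREDENTIAL PSCredential
        WITH IDENTITY = 'DOMAIN\PSJobAccount',
             SECRET = 'StrongPasswordHere';
    GO
    -- Proxy that SQL Server Agent job steps can run under
    EXEC msdb.dbo.sp_add_proxy
         @proxy_name = 'PSProxy',
         @credential_name = 'PSCredential';
    GO
    -- Allow the proxy to be used by PowerShell job steps (subsystem 12)
    EXEC msdb.dbo.sp_grant_proxy_to_subsystem
         @proxy_name = 'PSProxy',
         @subsystem_id = 12;
    GO

The proxy is then selected in the Run as list of the PowerShell job step.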
  • Dec 13 / 2012
dbDigger, Disaster Recovery, Maintenance plans, SQL Server Agent scheduled Jobs, SQL Server Error messages, Transaction log files

Log backups fail after switching back from the SIMPLE recovery model

Recently a scenario was shared with me where a maintenance plan failed to create log backups. The recovery model of the DB had been set to SIMPLE to prevent log file growth during some log-intensive bulk operations. After completing the operations, the recovery model was put back to FULL. Everything was OK up to this point, but the maintenance plan job later began to fail when creating log backups of that database, with the following error:

BACKUP LOG cannot be performed because there is no current database backup.

The reason for the error is that after changing the recovery model of a DB from SIMPLE to BULK_LOGGED or FULL, we have to create a full or differential backup before going for a log backup.
The solution to avoid this error is simple; we have to proceed in the following sequence:

  • Change the recovery model to SIMPLE
  • Complete the required operations
  • Change back to the FULL or BULK_LOGGED recovery model
  • Create a FULL or DIFFERENTIAL backup
  • Successfully create log backups

Here is a quick demo to simulate the above steps:

    USE [master]
    GO
    -- Set the DB to the SIMPLE recovery model
    ALTER DATABASE [DBDigger]
    SET RECOVERY SIMPLE
    GO
    -- Set the DB back to the FULL recovery model
    ALTER DATABASE [DBDigger]
    SET RECOVERY FULL
    GO
    -- Try to back up the log; it fails with the error above
    BACKUP LOG [DBDigger]
    TO DISK = 'C:\DBD-LogBackup.Log'
    GO
    -- Now create a full or differential backup of the database
    BACKUP DATABASE [DBDigger]
    TO DISK = 'C:\DBD-FullBackup.bak'
    GO
    -- Try to back up the log again; this time it succeeds
    BACKUP LOG [DBDigger]
    TO DISK = 'C:\DBD-LogBackup.Log'
    GO
    
  • Dec 03 / 2012
dbDigger, Monitoring and Analysis, SQL Server Agent scheduled Jobs, T-SQL Scripts

Get list of scheduled jobs and associated steps

I was required to generate a list of SQL Server scheduled jobs along with their associated steps. It would be compared to the same list from another server, where the jobs had just been transferred. The following code provides the job names along with their steps:

    USE msdb
    GO
    SELECT v.name AS JobName, t.step_name
    FROM dbo.sysjobs_view v
    INNER JOIN dbo.sysjobsteps t ON v.job_id = t.job_id
    ORDER BY v.name, t.step_name
    GO

We may add further properties of the job or step from SYSJOBS_VIEW and SYSJOBSTEPS respectively.
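
For example, to also pull the job's enabled flag and each step's subsystem and command text:

    USE msdb
    GO
    SELECT v.name AS JobName,
           v.enabled,
           t.step_id,
           t.step_name,
           t.subsystem,
           t.command
    FROM dbo.sysjobs_view v
    INNER JOIN dbo.sysjobsteps t ON v.job_id = t.job_id
    ORDER BY v.name, t.step_id
    GO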
