The Data Used (Percent) metric calculates the amount of space used in all data files as a percentage of the total allocated size of all data files.
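
For reference, a comparable figure can be computed for a single database with a query like the one below. This is only a minimal sketch using FILEPROPERTY and sys.database_files; it is not necessarily the exact calculation SQL Diagnostic Manager performs.

    -- Approximate Data Used (Percent) for the current database.
    -- size and the SpaceUsed property are reported in 8-KB pages.
    SELECT
        SUM(FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0        AS used_mb,
        SUM(size) * 8 / 1024.0                                    AS allocated_mb,
        SUM(FILEPROPERTY(name, 'SpaceUsed')) * 100.0 / SUM(size)  AS data_used_percent
    FROM sys.database_files
    WHERE type_desc = 'ROWS';   -- data files only; the transaction log is excluded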

In a typical medium-to-large-scale OLTP environment with a data file size of over 50 GB, this value increases slowly and roughly linearly over time as any available history tables are populated.

Reduce the percentage of data used

If your site experiences a sudden, sharp increase in this value of more than 10% in an hour, investigate the underlying cause, as these conditions often lead to poor performance. Possible causes include:

  • A large number of rows were inserted into a table. If so, make sure that the table was not fragmented by this operation; to verify, run DBCC SHOWCONTIG (see the fragmentation check after this list).
  • An index with very poor cardinality (uniqueness) was added to a large table. Indexes with poor cardinality often exceed the size of the underlying table itself and in almost all cases are never used by any SQL Server query (a query for spotting such indexes follows this list).
  • A large number of rows were incorrectly changed by inserting a new row and then deleting the old row. Although the net effect is essentially the same number of rows, the database may have grown significantly in size, requiring you to issue a DBCC SHRINKDATABASE. The table is also likely fragmented because the inserts were done before the deletes, so run DBCC SHOWCONTIG. Always delete the old rows from a table before inserting the new rows; otherwise the tables fragment more quickly, the database grows in size, and transaction performance degrades because of the large number of page splits the table must undergo (see the final example after this list).
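
To check fragmentation after a large insert (first bullet above), DBCC SHOWCONTIG can be run against the affected table. The table name dbo.OrderHistory below is only a placeholder; on SQL Server 2005 and later, sys.dm_db_index_physical_stats supersedes DBCC SHOWCONTIG and is shown as an alternative.

    -- Report fragmentation for one table and all of its indexes.
    DBCC SHOWCONTIG ('dbo.OrderHistory') WITH TABLERESULTS, ALL_INDEXES;

    -- Equivalent information on SQL Server 2005 and later:
    SELECT OBJECT_NAME(object_id) AS table_name,
           index_id,
           avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats
             (DB_ID(), OBJECT_ID('dbo.OrderHistory'), NULL, NULL, 'LIMITED');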
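
One way to spot an oversized, poorly selective index (second bullet above) is to compare per-index page counts on the table; a nonclustered index whose size rivals the clustered index or heap deserves a closer look. This sketch uses sys.dm_db_partition_stats with the same placeholder table name.

    -- Used pages per index on one table (index_id 0 = heap, 1 = clustered index).
    SELECT i.name                   AS index_name,
           i.index_id,
           SUM(ps.used_page_count)  AS used_pages
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id  = ps.index_id
    WHERE ps.object_id = OBJECT_ID('dbo.OrderHistory')   -- placeholder table
    GROUP BY i.name, i.index_id
    ORDER BY used_pages DESC;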
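
If the insert-then-delete pattern in the last bullet has already inflated the data files, the unused space can be reclaimed and the preferred order of operations looks like the sketch below. The database name YourDatabase, the table dbo.OrderHistory, and its columns are hypothetical placeholders.

    DECLARE @OrderID int = 42;   -- hypothetical key value

    BEGIN TRANSACTION;
        -- Delete the old row first ...
        DELETE FROM dbo.OrderHistory
        WHERE OrderID = @OrderID;

        -- ... then insert its replacement, limiting page splits and file growth.
        INSERT INTO dbo.OrderHistory (OrderID, Status)
        VALUES (@OrderID, 'Archived');
    COMMIT TRANSACTION;

    -- After the surplus rows are gone, reclaim the unused space.
    DBCC SHRINKDATABASE (YourDatabase);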
