
Implementing a Continuous Analysis Services Solution

  • May 13 / 2009
Analysis Services SSAS, dbDigger


If your availability plan requires that users be able to query the cubes and dimensions on a continuous, 24-hour-a-day basis, there are a number of challenges that you must overcome. These include the following:

  • The repository If you use multiple servers to ensure availability, you must keep the repository on each server synchronized with the Data folder, which you are also synchronizing. While the repository is not required for query processing, it is required whenever a structural change is made to the Analysis Services cubes and dimensions. If users are querying a copy of the Data folder on a secondary Analysis Services instance, you must make the change on the original Analysis Services instance and then update the secondary instance (using the file copy technique or msmdarch); a sketch of the file copy approach appears after this list.
  • Writeback If you enable your cubes or dimensions for writeback, they can only write back to a single location (such as a SQL Server table). This creates a single point of failure (and possibly a performance bottleneck).
  • Processing Dimension processing might force cubes offline when structural changes have been made to nonchanging dimensions.

To help you implement a continuous Analysis Services solution, Microsoft offers two technologies: Microsoft Cluster Services and Network Load Balancing.
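As a rough illustration of the file copy technique mentioned above, the sketch below mirrors the Data folder from a primary instance onto a secondary one. The folder paths and the service name are assumptions made for the example, and stopping the secondary service around the copy is simply one way to ensure no files are held open; in practice you would schedule this (or use msmdarch) during a low-activity window.

import shutil
import subprocess
from pathlib import Path

# Hypothetical paths; adjust to the Data folders of your primary and
# secondary Analysis Services instances.
PRIMARY_DATA = Path(r"\\OLAPPRIMARY\OLAPData\Data")
SECONDARY_DATA = Path(r"D:\Analysis Services\Data")
SECONDARY_SERVICE = "MSSQLServerOLAPService"  # assumed service name on the secondary

def sync_data_folder():
    """Mirror the primary Data folder (repository and data files) onto the secondary instance."""
    subprocess.run(["net", "stop", SECONDARY_SERVICE], check=True)
    try:
        if SECONDARY_DATA.exists():
            shutil.rmtree(SECONDARY_DATA)                # discard the stale copy
        shutil.copytree(PRIMARY_DATA, SECONDARY_DATA)    # copy the current files
    finally:
        subprocess.run(["net", "start", SECONDARY_SERVICE], check=True)

if __name__ == "__main__":
    sync_data_folder()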
  • Memory Consumption by Connections Analysis Services allocates approximately 32 kilobytes (KB) for each client connection. By default, up to 10 percent of memory on the Analysis server can be allocated to each agent cache. Because more than one of these caches can be allocated at the same time (to service multiple clients issuing remote queries), reduce this value when many remote queries are being evaluated so that memory is reserved for the query results cache. A quick estimate of the per-connection overhead appears after this list.
  • Disk The Data folder stores the data for all databases in an Analysis Services instance. The Temporary folder (or folders) stores the temporary files used during processing, if any. Disk space usage in the Data folder helps you determine when partitioning a cube will be useful, while growing disk space usage in the Temporary folder indicates that memory resources are in short supply during processing.
  • Data Folder Within the Data folder, Analysis Services creates a separate subfolder for each Analysis Services database. Within each database folder, Analysis Services creates a separate subfolder for each cube within that database. While there are a number of different files created in the subfolder for each cube, there are two file types that you should particularly monitor:
  • Partition files Each partition file has an extension of fact.data. When a partition file exceeds 5 GB or contains more than 20 million records, you should begin considering the benefits of dividing it into multiple partitions. These are general rules of thumb and may vary with circumstances, but smaller partitions can clearly be processed faster than larger ones. Also, with multiple partitions, you frequently do not have to process every partition in the cube in response to a data change. In addition, smaller partitions can be queried more quickly because, if the data slice is set properly, Analysis Services needs to scan less data to resolve many queries.
  • Aggregation files The files containing rigid aggregations have an extension of agg.rigid.data, and the files containing flexible aggregations have an extension of agg.flex.data. As these files grow larger, the time required to process aggregations becomes longer. If you monitor the size of these files over time, you can see trends as they develop; a sketch of such a check appears after this list.
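To put the per-connection figure above in perspective, a back-of-the-envelope calculation is usually enough; the connection count below is invented for illustration. Even a large connection count consumes relatively little memory this way, so it is the agent caches described above that deserve attention when many remote queries run concurrently.

KB_PER_CONNECTION = 32          # approximate allocation per client connection (figure quoted above)
concurrent_connections = 1500   # made-up example value

total_kb = KB_PER_CONNECTION * concurrent_connections
print(f"{concurrent_connections} connections use roughly {total_kb / 1024:.1f} MB")
# 1500 connections use roughly 46.9 MB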
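The partition and aggregation guidelines above lend themselves to a simple scheduled check. The sketch below walks each database subfolder of the Data folder, flags partition files past the 5 GB guideline, and prints aggregation file sizes so they can be logged and trended over time. The folder path is an assumption, and the file-name matching simply follows the extensions described above, which may differ between Analysis Services versions.

import time
from pathlib import Path

# Hypothetical Data folder location; adjust to match your installation.
DATA_FOLDER = Path(r"C:\Program Files\Microsoft Analysis Services\Data")
PARTITION_SIZE_LIMIT = 5 * 1024 ** 3    # the 5 GB rule of thumb discussed above

def report_data_folder():
    """Walk each database subfolder and report the files worth monitoring."""
    for db_folder in sorted(p for p in DATA_FOLDER.iterdir() if p.is_dir()):
        print(f"Database: {db_folder.name}")
        for f in sorted(db_folder.rglob("*")):
            if not f.is_file():
                continue
            size = f.stat().st_size
            name = f.name.lower()
            if name.endswith("fact.data") and size > PARTITION_SIZE_LIMIT:
                # Partition file beyond the 5 GB guideline: a candidate for splitting.
                print(f"  LARGE PARTITION {f.relative_to(DATA_FOLDER)}: {size / 1024 ** 3:.1f} GB")
            elif name.endswith(("agg.rigid.data", "agg.flex.data")):
                # Aggregation file: record the size so growth trends can be charted.
                print(f"  aggregation {f.relative_to(DATA_FOLDER)}: {size / 1024 ** 2:.1f} MB")

if __name__ == "__main__":
    print(time.strftime("%Y-%m-%d %H:%M"))   # timestamp each run for trend tracking
    report_data_folder()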