Geeks With Blogs
James Rogers BI from the trenches...

Many legacy Analysis Services (or OLAP Services) implementations use ProClarity as a querying tool and publishing framework.  ProClarity's server infrastructure lets crafty implementers push much of the customization for KPIs and additional measures into the "presentation" layer.  In many cases this is easier to manage and change than creating named sets and calculated members in the cubes, since that generally requires an SSAS administrator.

Microsoft's purchase of ProClarity and its inclusion in PerformancePoint lead to a quandary for many solution designers.  While ProClarity still works well as it was designed and is available as a report type in PerformancePoint, the ProClarity views have a very different look and feel than the native PerformancePoint report views.  Also, because those report views are essentially ProClarity Web Standard views, the drill-from-anywhere-to-anywhere capabilities are not available directly through the PerformancePoint interface.  Additionally, the library of calculated measures and sets created and published in ProClarity is not available to PerformancePoint and thus cannot be used in scorecards and report views.

In many cases, the optimum solution is to build these ProClarity sets and measures into their respective cubes as named sets and calculated members.  CAVEAT: this is not a silver-bullet solution.  Named sets and calculated measures carry a cost in terms of SSAS processing and resources, so this should not be done without ample testing prior to implementation in production environments.

Let's first talk about calculated measures.  Existing ProClarity solutions may have a laundry list of calculated measures in their libraries.  These often contain nested measures (i.e. a calculated measure that uses another calculated measure), and as the nesting grows they tend not to perform as well as desired.  First, identify which measures will get heavy use in other applications such as PerformancePoint (both scorecards and reports) and Excel.  Those with a wide range of applicability justify their creation and maintenance as calculations in their respective cubes.  The calculation (and set) definitions can be extracted from the ProClarity repository database for those of you who are familiar with it; this is not a "recommended" approach, in much the same way that using the "sys" tables in production code for SQL Server is discouraged.  As a rule, I like to take these calculations and run them through extensive performance analysis using Query Analyzer and Profiler.  For instance, let's convert a calculation that reflects sales growth over the prior week:

 

[Measures].[Sales] / ([Measures].[Sales], ParallelPeriod([Date].[Week].[Week], 1, [Date].[Week].CurrentMember))

 

The query would look something like:

 

With member [Measures].[Sales Growth] as '[Measures].[Sales] / ([Measures].[Sales], ParallelPeriod([Date].[Week].[Week], 1, [Date].[Week].CurrentMember))'

Select [Org].members on rows

,[Date].[Week].&[01/01/2007 – 01/07/2007] on columns

From sales

Where [Measures].[Sales Growth]

 

Make sure to test the query using a dimension that can take advantage of your cube partitioning strategy (in this case [Date]).  Test the query with the measure as a filter, on rows, and on columns.  Check Profiler to make sure that the query does not cause a scan of all processed partitions or excessive subcubing.  Once you have settled on a calculation formula that performs well in these cases, it is time to create the calculated member through Business Intelligence Development Studio.  When you create the calculated member in the cube, make sure you define the format and, especially, the non-empty behavior.  If the calculation involves division, I tend to use one of the measures from the denominator as the non-empty behavior measure.  I realize that this may not always be feasible if the resulting non-empty behavior is not correct, so do not do this thoughtlessly.
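
For reference, here is a minimal sketch of what the resulting calculated member might look like in the cube's MDX script once it is created in Business Intelligence Development Studio.  The CURRENTCUBE context, the format string, and the choice of [Measures].[Sales] as the non-empty behavior measure are assumptions based on the example above:

// Sales growth ratio versus the prior week, defined once in the cube
CREATE MEMBER CURRENTCUBE.[Measures].[Sales Growth]
 AS [Measures].[Sales] /
    ([Measures].[Sales],
     ParallelPeriod([Date].[Week].[Week], 1, [Date].[Week].CurrentMember)),
// Format and non-empty behavior defined up front, per the guidance above
FORMAT_STRING = "Percent",
NON_EMPTY_BEHAVIOR = { [Measures].[Sales] },
VISIBLE = 1;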

Next, let's talk about named sets.  Many times users and/or publishers have created dimensional sets in ProClarity to ease reporting and facilitate exception-based metrics (I will cover this in a later post).  Putting these into the cube involves much the same process as the calculated measures.  First I test the named set through Query Analyzer while watching with Profiler.  Let's consider a named set for a Sales Type dimension that identifies gift card sales:

 

{[Sales Type].[Sales Type].&[Gift Card], [Sales Type].[Sales Type].&[Vendor Card]}

 

Next we use this in the query:

 

With set [Gift Card] as '{[Sales Type].[Sales Type].&[Gift Card], [Sales Type].[Sales Type].&[Vendor Card]}'

member [Measures].[Sales Growth] as '[Measures].[Sales] / ([Measures].[Sales], ParallelPeriod([Date].[Week].[Week], 1, [Date].[Week].CurrentMember))'

Select [Org].members on rows

,[Date].[Week].&[01/01/2007 – 01/07/2007] on columns

From sales

Where ([Measures].[Sales Growth], [Gift Card])

 

Move the set around in the query to test it from all angles and try using it as part of an aggregate measure as well:

 

With set [Gift Card] as '{[Sales Type].[Sales Type].&[Gift Card], [Sales Type].[Sales Type].&[Vendor Card]}'

member [Measures].[Gift Card Sales] as 'Sum([Gift Card], [Measures].[Sales])'

Select [Org].members on rows

,[Date].[Week].&[01/01/2007 – 01/07/2007] on columns

From sales

Where ([Measures].[Gift Card Sales])

 

If the results are satisfactory, create the set in the cube through Business Intelligence Development Studio.  Keep in mind that sets are cached in memory for each connection, so extremely large sets, and too many of them, are generally not recommended.
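
As a point of reference, a minimal sketch of the corresponding named set definition as it might appear in the cube's MDX script (the set name and CURRENTCUBE context are assumptions carried over from the example above):

// Named set for gift card sales, matching the set tested in the queries above
CREATE SET CURRENTCUBE.[Gift Card] AS
    { [Sales Type].[Sales Type].&[Gift Card], [Sales Type].[Sales Type].&[Vendor Card] };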

I tend to define the named sets at the top of my calculations definition so they can be used in all the subsequent calculations, if necessary.  The sets and calculations created in Business Intelligence Development Studio are appended to the cube definition as an MDX script, so order of creation in the Studio is important: you must define parent sets and calculations before their dependent sets and calculations (see the sketch below).  Adding and modifying these generally does not require cube reprocessing, but I have seen it happen in rare instances.  Make sure you test the cube modifications first in a non-production environment to verify that they do not require reprocessing.  You wouldn't want to do something that forces reprocessing of the cube partitions, as the source data may have been purged and be unrecoverable (hopefully you have planned for this with a good archive process).

Since named sets are cached in memory and calculated members are generally resolved in memory, the combination of the two often results in better query performance.  Using named sets in calculated measures often requires aggregate functions (such as SUM), which have a higher CPU requirement, so consider the calculation MDX definition carefully to find the correct balance of query performance and scalability given your server resources.  One last advantage of calculated measures is that the MDX definition is static, so you lessen the likelihood of the same measure being calculated in multiple, different ways, which can end up not only with different results but with vastly different queries and performance.
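
To make the ordering point concrete, here is a minimal sketch of a calculated member that depends on the [Gift Card] set shown earlier; the names and format string are assumptions carried over from the examples above:

// The [Gift Card] set (created above in the script) must appear before this member
CREATE MEMBER CURRENTCUBE.[Measures].[Gift Card Sales]
 AS Sum([Gift Card], [Measures].[Sales]),
FORMAT_STRING = "Currency",
VISIBLE = 1;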

With these high-use sets and calculations defined in the cube(s) in an optimal way, they are now available to all applications that connect to the cube, such as PerformancePoint, Excel, and Reporting Services as well as ProClarity, and the resulting query performance should be more predictable than when they lived in the ProClarity repository and were assembled on the fly by ProClarity.

Posted on Wednesday, February 13, 2008 10:15 PM in PerformancePoint, Analysis Services


Comments on this post: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity

# re: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity
ProClarity documentation claims - and I have tested to confirm - that cube calculated measures cannot be published in ProClarity.
Left by bill s on Feb 15, 2009 8:32 AM

# re: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity
You are correct. I did not intend to imply that ProClarity actually built these calculated measures in the cube. Rather, I meant that the logic from the ProClarity calculation can be put into the cubes manually as calculated measures as an alternative to the ProClarity calculation. Thanks for the feedback. Glad someone is looking at this!
Left by James Rogers on Feb 15, 2009 10:47 AM

# re: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity
Darn!

I was hoping you would prove me wrong.

This limitation is a major irritant.

Left by bill s on Feb 15, 2009 1:04 PM

# re: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity
New info: If you publish a ProClarity book that uses calculated measures on the cube, they work up to a point.

o If you read the book with Web Professional, then the calculated measures are viewable.
o If you read the book with Web Standard, then the calculated measures disappear.

I'm still testing the effects of using calculated measures in KPIs. So far I have not been able to get KPIs to do anything sensible with ProClarity shared calculated measures.

Left by bill s on Feb 16, 2009 3:19 PM

# re: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity
I'm going to stop publishing comments - after this one.

Closer examination of ProClarity Professional and Web Professional shows that only the first can handle calculations on the cube.

I blame my earlier confusion on the fact that I had Pro installed on my desktop, and when using web access (and - I thought - Web Pro) sometimes got the wrong Professional started.
Left by bill s on Feb 19, 2009 10:37 AM

# re: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity
You have explained how to handle the efficient calculated measures by including them in the cube itself. It would be great if you could also explain how to handle the not-so-efficient calculated measures. Is there any way to create calculated measures in the presentation layer in PerformancePoint Services?

Thanks a lot.
Left by Mahesh Shrestha on Sep 01, 2010 11:23 AM

# re: Performance, Named Sets, Calculated Measures, PerformancePoint and ProClarity
There are two ways I know of to push calculated measures to the presentation layer in PerformancePoint. First, when defining a KPI, you can use Custom MDX to define it instead of an actual measure from the drop-down list. Additionally, reports can be defined with custom queries that are parameterized to allow filters to be passed in. However, those reports lose their dynamic drill-ability and you cannot pass those calculations into a report (using the custom properties of the KPI). Generally, for the best performance and reusability, I have found it is better to define the calculated measures in the cube rather than in the PerformancePoint application.
Left by James on Nov 02, 2010 7:00 AM
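
As an illustration of the Custom MDX approach mentioned in the comment above, here is a minimal sketch of the kind of MDX expression that could back a KPI value in the PerformancePoint KPI definition; the measure, dimension, and member names are assumptions carried over from the post's examples:

// Hypothetical KPI value: gift card sales versus the prior week (names assumed)
([Measures].[Sales], [Sales Type].[Sales Type].&[Gift Card])
/
([Measures].[Sales],
 [Sales Type].[Sales Type].&[Gift Card],
 ParallelPeriod([Date].[Week].[Week], 1, [Date].[Week].CurrentMember))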



Copyright © James Rogers | Powered by: GeeksWithBlogs.net