Dave Rodabaugh’s Analysis Services Interview Questions

Think you know your SSAS? Read this five-part series by Dave Rodabaugh to test your SSAS knowledge in an interview setting. Don’t know how I ever missed this one…

Dave Rodabaugh’s Five Part Series of Analysis Services Interview Questions
My Analysis Services Interview Questions
Part II of My Analysis Services Interview Questions: Cool Business Problems
Part IV of My Analysis Services Interview Questions: Technical Features
Part V of My Analysis Services Interview Questions: The Most Common MDX Functions

As Kenny Bania from Seinfeld says: "That’s gold, Jerry! Gold!"

Posted in Uncategorized | 1 Comment

Large SSAS Partition

 
Earlier this week I had to take ownership of a new SSAS system due to one of our offices shutting down. I’d heard of its existence but was never able to take a look at it until just a few days ago.
  
I had heard there were performance problems, and when I opened up the cube in BIDS, I came across this…

A single partition with a size of over 40GB. I initially thought it was due to too many aggregations (60% indicated); however, after a closer look, the entire 40GB lies in the leaf fact data. All the dimension attributes are set to a RelationshipType of Flexible, and since the dimensions are being processed by a ProcessUpdate without ProcessAffectedObjects set to true, the aggregations have never been rebuilt. My best guess is there are around 3-6 billion fact rows contained in the partition. Wow.

Personally, I always try to keep partition disk size under 1GB and row counts under 100 million. I know MS best practices say to keep it around 20 million rows, but after talking to others it seems you can go a fair amount over that. It all depends on how many dimensions and aggregations are associated with a row, of course…
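If you're curious where your own partitions stand, a quick AMO loop will dump the estimated size and leaf row count for every partition in a database. This is just a sketch; the server name ("localhost") and database name ("MyOlapDb") are placeholders.

Imports Microsoft.AnalysisServices

Public Sub ListPartitionSizes()

    Dim oServer As New Server()
    Dim oDatabase As Database
    Dim oCube As Cube
    Dim oMeasureGroup As MeasureGroup
    Dim oPartition As Partition

    oServer.Connect("Data Source=localhost")                 'assumed server name
    oDatabase = oServer.Databases.FindByName("MyOlapDb")     'assumed database name

    For Each oCube In oDatabase.Cubes
        For Each oMeasureGroup In oCube.MeasureGroups
            For Each oPartition In oMeasureGroup.Partitions
                'EstimatedSize is in bytes; EstimatedRows is the leaf-level row count
                Console.WriteLine("{0} / {1} / {2}: {3:N1} GB, {4:N0} rows", _
                    oCube.Name, oMeasureGroup.Name, oPartition.Name, _
                    oPartition.EstimatedSize / 1024 / 1024 / 1024, oPartition.EstimatedRows)
            Next oPartition
        Next oMeasureGroup
    Next oCube

    oServer.Disconnect()

End Sub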

What’s the largest partition size you guys have seen?

 

Posted in Uncategorized | Leave a comment

Two Facts In a Measure Group – One Correct, One Incorrect

It’s been a while since I’ve run into a "bang the head against the wall" problem, but today I ran into a doozy.
 
I had a measure group (no aggs, two summed measures) associated with three dimensions. Some modifications to the fact table would now allow this measure group to be associated with seven dimensions. Great, more detail! Since these dimensions were already in the database and in the cube, all I’d have to do was set up a few relationships through the dimension designer, reprocess, and BAM.
 
I do just that and validate the numbers against the fact table. Measure number one looked correct and returned a sum of 29,875.40, but measure two returned incorrect numbers: the fact table had 34,450.60 while the cube was returning 34,448.20, a difference of only 2.40. I process the indexes, ProcessUpdate the dimensions, reprocess the entire cube database, drop and recreate the partitions, and try a few other things. NOTHING. It didn’t make any sense why one number would match and the other wouldn’t; they were both coming from the same measure group and fact table.
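
(Side note: when I say "validate the numbers against the fact table," I mean something along the lines of the sketch below: sum the measure straight off the fact table and compare it to the same total pulled from the cube over MDX. The connection strings and the table, cube, and measure names here are all made up.)

Imports System.Data.SqlClient
Imports Microsoft.AnalysisServices.AdomdClient

Public Sub ValidateMeasureTotal()

    Dim SqlTotal As Decimal
    Dim CubeTotal As Decimal

    'Sum the measure straight from the fact table
    Using oSqlConn As New SqlConnection("Data Source=localhost;Initial Catalog=MyDW;Integrated Security=SSPI")
        oSqlConn.Open()
        Using oSqlCmd As New SqlCommand("SELECT SUM(SalesAmount) FROM dbo.FactSales", oSqlConn)
            SqlTotal = Convert.ToDecimal(oSqlCmd.ExecuteScalar())
        End Using
    End Using

    'Pull the same total from the cube via MDX
    Using oOlapConn As New AdomdConnection("Data Source=localhost;Catalog=MyOlapDb")
        oOlapConn.Open()
        Using oOlapCmd As New AdomdCommand("SELECT [Measures].[Sales Amount] ON COLUMNS FROM [Sales]", oOlapConn)
            CubeTotal = Convert.ToDecimal(oOlapCmd.ExecuteCellSet().Cells(0).Value)
        End Using
    End Using

    Console.WriteLine("Fact table: {0:N2}   Cube: {1:N2}   Difference: {2:N2}", _
        SqlTotal, CubeTotal, SqlTotal - CubeTotal)

End Sub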
 
One by one I removed the new dimensional relationships from the measure group. Removing the first dimension dropped the difference to 2.15; dimension number two, to 1.87; dimension number three, to 1.19; etc. There was no rhyme or reason to how the difference narrowed.
 
After four hours I open up the cube live through Visual Studio, and on the cube’s Measures tab I rearrange the order of the two measures, just dragging the bottom measure above the other. Reprocess, check the numbers, and… what do you know… the numbers finally match up like they should.
 
Totally frustrating.
Posted in Uncategorized | 3 Comments

70-448 Exam Results

In late July I took the 70-448 (SQL Server 2008, Business Intelligence Development and Maintenance) beta exam. It looks like the results are now being released, as I just received an email notifying me that I had passed. Woot woot!
 
My prep for the test was zilch; my real-world experience was enough for me to pass. Cool, eh? In all, it took me about an hour and a half to go through the test.
 
I was initially hesitant about taking the exam, but I’m certainly glad I did!
 

 
 
 
 
Posted in Uncategorized | 1 Comment

Tips: Processing Large Dimensions

With the good weather subsiding here in Seattle for a few days, I’m feeling motivated to post something today.
 
Recently there was a post on LinkedIn regarding large dimensional updates (millions of members) in SSAS. The release of Analysis Services 2005 greatly increased the ability to process large dimensions compared to AS 2000. Dimensional updates that weren’t possible in 2000 are now easily done in 2005.
 
I’ve dealt with somewhat large dimensions (five to twenty million members), so I thought I’d share some tips that have allowed me to perform daily updates in an acceptable amount of time.
 
When processing dimensions, it’s not the processing of the dimension members that takes a lot of time; rather, it’s the rebuilding of the aggregations when ProcessAffectedObjects is set to True.
 
One of our larger dimensions has ten million members, and when no data shifts, processing the dimension takes 15 minutes. If data does shift, the reprocessing takes three hours. We also have a dimension with only 30 members that takes 15 seconds to reprocess if there is no data movement, and two hours if there is. Data will move around if you have flexible attribute relationships.
 
Things to keep in mind:

1. Carefully examine all flexible relationships; don’t just set everything to flexible. This is set by the RelationshipType property on an attribute relationship.

2. Process all your dimensions in parallel so the aggs are rebuilt only once. If you process dimensions serially, this can happen: dimension A is processed and the aggs are rebuilt; dimension B is processed and data moves around; dimension C is processed and the aggs need to be rebuilt yet again. Always try to process dimensions in parallel. ALWAYS. (There’s a rough sketch of this after the list.)

3. Be creative with your partitioning and aggregation designs. Say you have five years of data and you create a partition for each year. Chances are the most recent year or two will be queried differently than the data from four or five years ago. The partitions for years one and two should use a different aggregation design than years four and five. Rebuilding aggregations can take a while (especially with larger dimensions), and the less you have to rebuild, the faster the dimension update will be.

For example: I have a cube (partitioned by day) with a rolling year of data, and the majority of queries run only against the last 90 days. Once a partition is more than ninety days old I assign it a different aggregation design and reprocess. The last 90 days use an aggregation design with eight aggs; days 90 and older use a design with only three aggs. (This is also sketched after the list.)

4. Get the fastest box you can. From what I’ve experienced, the real limiting factor in fast dimension updates isn’t memory, it’s CPU.

 
5. Only process dimensions that need to be processed. It’s easy to be lazy and process every dimension in the cube every single time, but if the data hasn’t changed, why update the dimension? I have a stored procedure that takes the last ETL load time for a table and compares it to the last process time for the corresponding SSAS dimension. If the ETL time is later than the cube process time, the SSAS dimension is processed. This greatly cuts down on processing time.
 
6. To implement most of these suggestions you’ll need to write AMO code. If you want to process your cubes in any sort of efficient way, the built-in SSAS tasks within SSIS just don’t cut it. You’ll need custom code; a couple of rough sketches follow.
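
To make items 2, 3, and 5 a bit more concrete, here are two rough AMO sketches (VB.NET, error handling omitted). These are illustrative only: the server and database names, the GetLastEtlLoadTime helper, the yyyyMMdd partition naming convention, and the "AggLight" aggregation design name are all assumptions, not code lifted from our environment.

Imports Microsoft.AnalysisServices

'Tips 2 and 5: ProcessUpdate only the dimensions whose source data has changed,
'and run them as one parallel batch so flexible aggregations are rebuilt once
Public Sub UpdateChangedDimensions()

    Dim oServer As New Server()
    Dim oDatabase As Database
    Dim oDimension As Dimension

    oServer.Connect("Data Source=localhost")               'assumed server name
    oDatabase = oServer.Databases.FindByName("MyOlapDb")   'assumed database name

    'Capture the process commands instead of executing them one at a time
    oServer.CaptureXml = True

    For Each oDimension In oDatabase.Dimensions
        'Skip dimensions whose source table hasn't been loaded since the last process
        If GetLastEtlLoadTime(oDimension.Name) > oDimension.LastProcessed Then
            oDimension.Process(ProcessType.ProcessUpdate)
        End If
    Next oDimension

    oServer.CaptureXml = False

    'Execute the captured batch: transactional, in parallel, and with
    'ProcessAffectedObjects so dependent aggregations get rebuilt
    oServer.ExecuteCaptureLog(True, True, True)

    oServer.Disconnect()

End Sub

'Hypothetical helper: look up the last ETL load time for the dimension's source
'table (from an ETL audit table, for example)
Private Function GetLastEtlLoadTime(ByVal DimensionName As String) As DateTime
    'Query your ETL metadata here
    Return Date.MinValue
End Function

The second sketch covers the aggregation design switch from tip 3: once a daily partition ages past ninety days, point it at a lighter design and rebuild its indexes.

'Tip 3: move aging daily partitions to a lighter aggregation design
Public Sub ReassignAggregationDesigns(ByVal oMeasureGroup As MeasureGroup)

    Dim oPartition As Partition
    Dim PartitionDate As DateTime

    For Each oPartition In oMeasureGroup.Partitions
        'Assumes partitions are named yyyyMMdd, one per day
        If DateTime.TryParseExact(oPartition.Name, "yyyyMMdd", Nothing, _
                System.Globalization.DateTimeStyles.None, PartitionDate) Then
            If PartitionDate < Date.Today.AddDays(-90) AndAlso _
                    oPartition.AggregationDesignID <> "AggLight" Then
                oPartition.AggregationDesignID = "AggLight"
                oPartition.Update()
                'Rebuild only this partition's indexes and aggregations
                oPartition.Process(ProcessType.ProcessIndexes)
            End If
        End If
    Next oPartition

End Sub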
Posted in Uncategorized | 1 Comment

No cost beta 70-452: Designing a Business Intelligence Infrastructure Using Microsoft SQL Server 2008

You are invited to take beta exam 71-452: Designing a Business Intelligence Infrastructure Using Microsoft SQL Server 2008. If you pass the beta exam, the exam credit will be added to your transcript and you will not need to take the exam in its released form. The results will not appear on your transcript until several weeks after the final form of the exam is released. The 71-xxx identifier is used for registering for beta versions of MCP exams; when the exam is released in its final form, the 70-xxx identifier is used for registration.

71-452: Designing a Business Intelligence Infrastructure Using Microsoft SQL Server 2008 counts as credit towards the following certification(s).

· Microsoft Certified IT Professional: Business Intelligence Developer 2008. In order to earn this certification you must also pass exam 70-448: TS: Microsoft SQL Server 2008, Business Intelligence Development and Maintenance.

Find exam preparation information: http://www.microsoft.com/learning/exams/70-452.mspx

Registration begins: August 8, 2008

Beta exam period runs: August 13, 2008 – September 10, 2008

Registration Information

Please use the following promotional code when registering for the exam: 3568C


You must register at least 24 hours prior to taking the exam.

To register in North America, please call:

· Prometric: (800) 755-EXAM (800-755-3926)

Outside the U.S./Canada, please contact:

· Prometric: http://www.register.prometric.com/ClientInformation.asp

More info here: http://blogs.msdn.com/gerryo/archive/2008/08/08/sql-server-2008-beta-exam-71-452-designing-a-business-intelligence-infrastructure-using-microsoft-sql-server-2008.aspx 

Posted in Uncategorized | 1 Comment

Take the beta 71-448 for free

 
I had the chance to take the beta 71-448 for free this morning. It took me about an hour and a half. Since the exam is still in beta, you won’t receive your score for at least a few weeks. Bummer.
 
Definitely worth a shot, especially if you’ve never had the opportunity to take an MS exam.
 
Only good until 7/31.
 
 
—————————————————————————–

71-448 – Promo code B6543

You are invited to take beta exam 71-448: TS: Microsoft SQL Server 2008, Business Intelligence Development and Maintenance. You were specifically chosen to participate in this beta because of your current Microsoft Certification status or previous participation with Microsoft Learning. If you pass the beta exam, the exam credit will be added to your transcript and you will not need to take the exam in its released form. The 71-xxx identifier is used for registering for beta versions of MCP exams; when the exam is released in its final form, the 70-xxx identifier is used for registration.
By participating in beta exams, you have the opportunity to provide the Microsoft Certification program with feedback about exam content, which is integral to development of exams in their released version. We depend on the contributions of experienced IT professionals and developers as we continually improve exam content and maintain the value of Microsoft certifications.

71-448: TS: Microsoft SQL Server 2008, Business Intelligence Development and Maintenance counts as credit towards the following certification(s).

· Microsoft Certified Technology Specialist: SQL Server 2008, Business Intelligence Development and Maintenance


 Availability

Registration begins: June 15, 2008

Beta exam period runs: June 16, 2008 – July 31, 2008

Receiving this invitation does not guarantee you a seat in the beta; we recommend that you register immediately. Beta exams have limited availability and are operated on a first-come, first-served basis. Once all beta slots are filled, no additional seats will be offered.

Testing is held at Prometric testing centers worldwide, although this exam may not be available in all countries (see Regional Restrictions).  All testing centers will have the capability to offer this exam in its live version.

Regional Restrictions: India, Pakistan, China


Registration Information

You must register at least 24 hours prior to taking the exam.
Please use the following promotional code when registering for the exam: 943F6
Receiving this invitation does not guarantee you a seat in the beta; we recommend that you register immediately.

To register in North America, please call:

Prometric: (800) 755-EXAM (800-755-3926)

Outside the U.S./Canada, please contact:

Prometric: http://www.register.prometric.com/ClientInformation.asp

—————————————————————————–

More info here: http://blogs.msdn.com/gerryo/

 

Posted in Uncategorized | Leave a comment

Microsoft BI Conference 2008 October 6-8 – Seattle, WA

Registration is now open for the 2008 BI Conference here in Seattle. I was able to attend last year and to say it was worth it would be an understatement. Definitely attend if the opportunity presents itself.
 
With SQL PASS also taking place in Seattle, it’s quite a busy few months in the SQL Server world here.
 
Register by August 8th and you’ll be able to get in at the $995 price; after the 8th it goes up to $1,295.
 
 
If anyone needs tips on where to stay in the Seattle area (I live a seven-minute walk from the convention center), let me know.
 
 
Posted in Uncategorized | Leave a comment

AMO – Delete All Partitions From a Database

From time to time I need to recreate the dev/beta environment, and I have to do that from production. I want an exact copy of production minus all the partitions that have been created, with the exception of the template partitions.

 

At first I would go in and change the XMLA script manually to remove those partitions. However, if there were a lot of measure groups (50-100), this could take upwards of an hour. Because it was such a pain, I probably didn’t keep dev and beta as up to date as I should have.

 

An Analysis Services stored procedure is perfect for handling this task. It works beautifully, and what used to take me an hour now takes 30 seconds.

 

I decided to use arrays within the procedure, even though you could write the proc without them.

 

 

 

*objDatabase is a database object I’ve set in another function

 

 

  Public Sub DeleteAllPartitions()

      Dim oCube As Cube
      Dim oMeasureGroup As MeasureGroup
      Dim oPartition As Partition
      Dim i As Integer
      Dim j As Integer
      Dim k As Integer
      Dim AryCubes As String()
      Dim AryCubesSize As Integer
      Dim AryMeasureGroups As String()
      Dim AryMeasureGroupsSize As Integer
      Dim AryPartitions As String()
      Dim AryPartitionsSize As Integer

      'Create the cube array (AryCubes)
      AryCubesSize = (objDatabase.Cubes.Count - 1)
      ReDim AryCubes(AryCubesSize)

      'Loop through each of the cubes in the database and throw the cube names in an array
      i = 0
      For Each oCube In objDatabase.Cubes
          AryCubes(i) = oCube.Name
          i = i + 1
      Next oCube

      'Loop through the cube array
      For i = 0 To AryCubesSize

          'Set the cube object to the current item in AryCubes
          oCube = objDatabase.Cubes.GetByName(AryCubes(i))

          'Create the measure group array (AryMeasureGroups)
          AryMeasureGroupsSize = (oCube.MeasureGroups.Count - 1)
          ReDim AryMeasureGroups(AryMeasureGroupsSize)

          'Set j back to zero for the current cube
          j = 0

          'Loop through each of the measure groups in the cube and throw the measure group names in an array
          For Each oMeasureGroup In oCube.MeasureGroups
              AryMeasureGroups(j) = oMeasureGroup.Name
              j = j + 1
          Next oMeasureGroup

          'Loop through the measure group array
          For j = 0 To AryMeasureGroupsSize

              'Set the measure group object to the current item in AryMeasureGroups
              oMeasureGroup = oCube.MeasureGroups.GetByName(AryMeasureGroups(j))

              'If the measure group is linked then don't delete partitions
              If oMeasureGroup.IsLinked = False Then

                  'Create the partition array (AryPartitions)
                  AryPartitionsSize = (oMeasureGroup.Partitions.Count - 1)
                  ReDim AryPartitions(AryPartitionsSize)

                  'Set k back to zero for the current measure group
                  k = 0

                  'Loop through each of the partitions in the measure group and throw the name in an array
                  For Each oPartition In oMeasureGroup.Partitions
                      AryPartitions(k) = oPartition.Name
                      k = k + 1
                  Next oPartition

                  'Loop through the partition array and drop the partition if it's not a template partition
                  '(this assumes the template partitions have "template" somewhere in their name)
                  For k = 0 To AryPartitionsSize
                      If Not LCase(AryPartitions(k)) Like "*template*" Then
                          oPartition = oMeasureGroup.Partitions.GetByName(AryPartitions(k))
                          oPartition.Drop()
                      End If
                  Next k

              End If

          Next j

      Next i

  End Sub

Posted in Uncategorized | 1 Comment

Star vs. Snowflake in OLAP Land

About six months ago I had a discussion with another guy about my preferred data warehouse schema: snowflake or star. Without hesitation I said snowflake. He looked at me with befuddlement and asked why. I told him that OLAP processes dimensions more efficiently against a snowflaked schema than against a star. We had nearly a twenty-minute discussion about exactly why Analysis Services likes snowflakes better than stars, but I failed to convince him. He firmly believed that the star schema was superior, and anything short of me taking his firstborn hostage wouldn’t change his belief. The star vs. snowflake question usually inspires that type of steadfastness.
 
To back up my belief I put together a test. I created a dimension with three levels, each level having two attributes outside of the "Advertiser-Ad Campaign-Banner Ad" hierarchy, for a total of nine attributes in the dimension.
 
1. Advertiser (15k rows)
  • PaymentType (2 rows)
  • Status (2 rows)
2. Sales Campaign (500k rows)
  • Category(16 rows)
  • TargetCountry (225 rows)
3. Banner Ad (12 million rows)
  • Size (8 rows)
  • AdType (4 rows)
An Advertiser has Sales Campaigns and a Sales Campaign has Banner Ads, with Banner Ad being the key attribute.
 
- In star schema land, this would all be put into a single table nine columns across.
- In the snowflake world, this ends up in nine different tables. There’d be an Advertiser table with three columns (Advertiser, PaymentType, Status), a PaymentType table with a single PaymentType column, a Status table with a single Status column, etc. (generically speaking).

The real dimension has more levels and more attributes, but I pared it down for simplicity. The test was run on a 2.8 GHz Intel Xeon with 4GB of RAM, with the SQL Server relational engine and Analysis Services on the same box. I created two different dimensions: one based off of a star schema and the other off of a snowflake schema (separate DSVs). Each dimension was processed nine times (three Process Full, six Process Update) and the times were averaged.
 
The star-based dimension averaged 8:35 per process vs. 6:42 for the snowflake-based dimension. Why the big difference?
 
Each attribute runs a SELECT DISTINCT against its source dimension table. Take the AdType attribute under the Banner Ad level: against the star schema, that SELECT DISTINCT executes against a table with 12 million rows (roughly SELECT DISTINCT AdType FROM BannerAd), while against the snowflake schema it executes against a table with only four rows (SELECT DISTINCT AdType FROM AdType).
 
For smaller dimensions this doesn’t matter much; however, if you have large dimensions and update quite frequently, as we do (hourly), a snowflake schema can make a world of difference.
 
Snowflakes are harder to read and tougher for the ETL guy to write; however, dimensions process much faster against them. Also, if you’re building a dimension using a wizard (shame on you!), the wizard can detect natural hierarchies from a snowflake, whereas it can’t from a star schema.
 
Of course, this test was done in an afternoon and not under the most scrutinizing conditions, so I’m curious what others have experienced or think about the subject. What are your thoughts?
 
 
  
OLAP: How to Index Star/Snowflake Schema Data:
http://support.microsoft.com/kb/199132
 
 
Posted in Uncategorized | 5 Comments