Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part IV: Tips 16-20
A note about the SSAS tips: Most tips are valid for both dimensional and tabular models. I try to note where they are not.
#16: Implement reporting dimensions in your SSAS solution
Reporting dimensions are constructs you use to make the data model more flexible for reporting purposes. They usually also simplify the management and implementation of common calculation scenarios. Here are two examples:
- A common request from users is the ability to choose which measure to display in an Excel report through a normal filter. This is not possible with regular measures and calculations. The solution is to create a measure dimension with one member per measure. Expose a single measure in your measure group (I frequently use “Value”) and assign the correct underlying measure to it in your MDX script / DAX calculation based on the member selected in the measure dimension. The most frequently used measure should be the default member of this dimension. By doing this you not only give the users what they want, you also simplify a lot of calculation logic, as the next example shows.
- Almost all data models require various date-related calculations such as year to date, same period last year, etc. It is not uncommon to have more than thirty such calculations. To manage this effectively, create a separate date calculation dimension with one member for each calculation and base your time calculations on what is selected in that dimension. If you implemented the construct in the previous example, this can be done generically for all measures in your measure dimension (see the sketch after this list). For tabular, implement the logic in DAX; for dimensional, use the time intelligence wizard to get you started.
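To make the construct concrete, here is a minimal, hypothetical T-SQL sketch of the two reporting dimension tables described above. Table, column and member names are illustrative only, and the actual calculation logic would still live in your MDX script or DAX measures.

```sql
-- Hypothetical measure dimension: one row per measure exposed through the
-- single generic "Value" measure. The model's MDX script / DAX logic inspects
-- the selected member to decide which underlying measure to return.
CREATE TABLE dbo.DimMeasure (
    MeasureKey  int          NOT NULL PRIMARY KEY,
    MeasureName nvarchar(50) NOT NULL,  -- e.g. 'Sales Amount', 'Order Count'
    IsDefault   bit          NOT NULL DEFAULT 0
);

-- Hypothetical date calculation dimension: one row per time calculation.
CREATE TABLE dbo.DimDateCalculation (
    DateCalculationKey int          NOT NULL PRIMARY KEY,
    CalculationName    nvarchar(50) NOT NULL
);

INSERT INTO dbo.DimDateCalculation (DateCalculationKey, CalculationName)
VALUES (1, N'Actual'),                 -- default member: the value as stored
       (2, N'Year to Date'),
       (3, N'Same Period Last Year'),
       (4, N'Year over Year %');
```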
#17: Consider creating separate ad-hoc and reporting cubes
Analysis Services data models can become very complex. Fifteen to twenty dimensions connected to five to ten fact tables is not uncommon. Additionally, various analysis and reporting constructs (such as a time calculation dimension) can make a model difficult for end users to understand. There are a couple of features that help reduce this complexity, such as perspectives, role-based security and default members (at least for dimensional), but often the complexity is so ingrained in the model that it is difficult to simplify just by hiding measures, attributes or dimensions from users. This is especially true if you use the reporting dimensions I talked about in tip #16. You also need to consider the performance aspect of exposing a large, complex model to end-user ad-hoc queries; this can go very wrong very quickly. So my advice is to consider creating a separate model for end users to query directly. This model may reduce complexity in a variety of ways:
- Coarser grain (e.g. monthly rather than daily numbers; see the sketch at the end of this tip).
- Less data (Ex: Only last two years, not since the beginning of time).
- Fewer dimensions and facts.
- Be targeted at a specific business process (use perspectives if this is the only thing you need).
- Simpler or omitted reporting dimensions.
Ideally your ad-hoc model should run on its own hardware. Obviously this will add both investment and operational costs to your project but will be well worth it when the alternative is an unresponsive model.
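As a hedged illustration of the first two bullets, this is what the ad-hoc model’s fact source could look like when reduced to a monthly grain and a two-year window. The view, table and column names are hypothetical, not from the original post.

```sql
-- Hypothetical monthly, two-year view feeding the separate ad-hoc model.
-- The detailed daily fact table stays behind the main reporting model.
CREATE VIEW dw.FactSalesMonthlyAdHoc
AS
SELECT
    DATEFROMPARTS(YEAR(s.OrderDate), MONTH(s.OrderDate), 1) AS OrderMonth,
    s.ProductKey,
    s.CustomerKey,
    SUM(s.SalesAmount)   AS SalesAmount,
    SUM(s.OrderQuantity) AS OrderQuantity
FROM dw.FactSales AS s
WHERE s.OrderDate >= DATEADD(YEAR, -2, CAST(GETDATE() AS date))
GROUP BY
    DATEFROMPARTS(YEAR(s.OrderDate), MONTH(s.OrderDate), 1),
    s.ProductKey,
    s.CustomerKey;
```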
#18: Learn .NET
A surprisingly high number of BI consultants I have met over the years do not know how to write code. I am not talking about HTML or SQL here but “real” code in a programming language. While we mostly use graphical interfaces when building BI solutions, the underlying logic is still based on programming principles, and if you don’t grasp these you will be far less productive with the graphical toolset. More importantly, .NET is widely used in Microsoft-based solutions as “glue” or to extend the functionality of the core products. This is especially true for SSIS projects, where you quite frequently have to implement logic in scripts written in C# or VB.NET, but it also applies to most components in the MS BI stack: they all have rich APIs that can be used to extend their functionality and integrate them into solutions.
#19: Design your solution to utilize Data Quality Services
I have yet to encounter an organization where data quality has not been an issue. Even if you have a single data source, you will probably run into data quality problems. Data quality is a complex subject: it is expensive to monitor and expensive to fix, so you might as well be proactive from the get-go. Data Quality Services is available in the Business Intelligence and Enterprise editions of SQL Server. It allows you to define rules for data quality and monitor your data for conformance to these rules. It even comes with SSIS components, so you can integrate it with your overall ETL process. Include it in the design stage of your ETL solution, because retrofitting it later will be quite costly, as it directly affects the data flow of your solution.
#20: Avoid SSAS unknown members
Aside from the slight overhead they cause during processing, unknown members are a sign that your underlying data has issues. Fix them there, in the source data and ETL, and not in the data model.
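As a hedged sketch (table and column names are hypothetical), a check like the following in the ETL finds the fact rows that would otherwise end up under the unknown member, so they can be fixed or routed for review before the cube is processed:

```sql
-- Fact rows referencing a product that does not exist in the dimension.
-- These are the rows that would land in the SSAS unknown member.
SELECT f.FactSalesID, f.ProductKey
FROM dw.FactSales AS f
LEFT JOIN dw.DimProduct AS d
    ON d.ProductKey = f.ProductKey
WHERE d.ProductKey IS NULL;
```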
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part III: Tips 11-15
#11: Manage your own surrogate keys.
In SQL Server it is common to use an INT or BIGINT column set as IDENTITY to create unique, synthetic keys. The number is a sequence, and a new value is generated when we execute an insert. There are some issues with this. Quite often we need this value in our Integration Services solution to do logging and efficient loads of the data warehouse (there will be a separate tip on this). This means that sometimes we need the value before an insert and sometimes after. You can obtain the last value generated by calling SCOPE_IDENTITY(), but this requires an extra trip to the server per row flowing through your pipeline, and obtaining the value before an insert happens is not possible in a safe way. A better option is to generate the keys yourself through a script component. Google for “ssis surrogate key” and you will find a lot of examples.
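The post’s recommendation is an SSIS script component; as a rough T-SQL illustration of the same idea of managing the key yourself rather than relying on IDENTITY (this set-based flavor is my own, not the script-component approach described above, and the table and column names are hypothetical), you can read the current maximum key once before the load and assign the new keys during the insert:

```sql
-- Hypothetical dimension load that manages its own surrogate key, so the key
-- value is known before the row is inserted rather than after.
DECLARE @NextKey int;

SELECT @NextKey = ISNULL(MAX(CustomerKey), 0)
FROM dw.DimCustomer;

INSERT INTO dw.DimCustomer (CustomerKey, CustomerBusinessKey, CustomerName)
SELECT
    @NextKey + ROW_NUMBER() OVER (ORDER BY s.CustomerBusinessKey),
    s.CustomerBusinessKey,
    s.CustomerName
FROM staging.Customer AS s;
```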
#12: Excel should be your default front-end tool.
I know this is a little bit controversial. Some say Excel lacks the power of a “real” BI tool. Others say it writes inefficient queries. But hear me out. Firstly, if you look at where Microsoft is making investments in the BI stack, Excel is right up there at the top. Contrast that with what they are doing with PerformancePoint and Reporting Services and it’s pretty clear that Excel is the most future-proof of the lot. Microsoft has added a lot of BI features over the last couple of releases and continues to expand Excel through new add-ins such as Data Explorer and GeoFlow. Additionally, the integration with SharePoint gets tighter and tighter. The Excel web client in SharePoint 2013 is pretty much on par with the fat Excel client when it comes to BI functionality, which means you can push the new features out to users who have not yet upgraded to the newer versions of Excel. When it comes to the efficiency with which Excel queries SSAS, a lot has improved, but being a general analysis tool it will never be able to optimize its queries the way you would if you wrote them specifically for a report.
Please note that I am saying “default”, not “best”. Of course there are better, purebred Business Intelligence front-ends out there; some of them even have superior integration with SSAS. But it’s hard to beat the cost-value ratio of Excel if you are already running a Microsoft shop. Add in the fact that many managers and knowledge workers already do a lot of their work in Excel and know the tool well, and the equation becomes even more attractive.
#13: Hug an infrastructure expert that knows BI workloads.
Like most IT solutions, Microsoft BI solutions are only as good as the hardware and server configurations they run on. Getting this right is very difficult and requires deep knowledge of operating systems, networks, physical hardware, security and the software that is going to run on these foundations. To make matters worse, BI solutions have workloads that often differ fundamentally from line-of-business applications in the way they access system resources and services. If you work with a person who knows both of these aspects, give him or her a hug every day, because they are a rare breed. Typically, BI consultants know a lot about the characteristics of BI workloads but nothing about how to configure hardware and software to support them. Infrastructure consultants, on the other hand, know a lot about hardware and software but nothing about the specific ways BI solutions use them. Here are three examples:
Integration Services is mainly memory constrained. It is very efficient at processing data as a stream as long as there is enough memory for it. The instant it runs out of memory and starts swapping to disk you will see a dramatic decrease in performance. So if you are doing heavy ETL, co-locating it with other memory-hungry services on the same infrastructure is probably a bad idea.
The second example is the way data is loaded and accessed in data warehouses. Unlike business systems, which often do random data access (“Open the customer card for Henry James”), data warehouse access is largely sequential: batches of transactions are loaded into the warehouse, and data is retrieved by reports and Analysis Services models in batches. This has a significant impact on how you should balance the hardware and configuration of your SQL Server database engine and differs fundamentally from how you handle workloads from business applications.
The last example may sound extreme but is something I have encountered multiple times. When businesses outsource their infrastructure to a third party, they give up some control and knowledge in exchange for the ability to “focus on their core business”. This is a good philosophy with real value. Unfortunately, if no one on the requesting side of the partnership knows what to ask for when ordering infrastructure for your BI project, what you get can be pretty far from what you need. Recently a client of mine made such a request for a SQL Server based data warehouse server. The hosting partner followed their SLA protocol and supplied a high-availability configuration with a mandatory full recovery model for all databases. You can imagine the exploding need for disk space for the transaction logs when loading batches of 20 million rows each night.
As these examples illustrate, a successful BI implementation needs people on the BI team with infrastructure competency who also understand how BI solutions differ from “traditional” business solutions and can apply the right infrastructure configurations.
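On the recovery model example, this is the kind of one-line configuration change the BI and infrastructure sides need to agree on. The database name is hypothetical, and whether the simple recovery model is acceptable depends on your availability and point-in-time restore requirements.

```sql
-- Batch-loaded data warehouses are often better served by the simple recovery
-- model, so nightly bulk loads do not bloat the transaction log.
ALTER DATABASE [DataWarehouse] SET RECOVERY SIMPLE;
```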
#14: Use Team Foundation Server for your BI projects too.
A couple of years ago putting Microsoft BI projects under source control was a painful experience where the benefits drowned in a myriad of technical issues. This has improved a lot. Most BI artifacts now integrate well with TFS and BI teams can greatly benefit from all the functionality provided by the product such as source control, issue tracking and reporting. Especially for larger projects with multiple developers working against the same solution TFS is the way to go in order to be able to work effectively in parallel. As an added benefit you will sleep better at night knowing that you can roll back that dodgy check-in you performed a couple of hours ago. With that said there are still issues with the TFS integration. SSAS data source views are a constant worry as are server and database roles. But all of this (including workarounds) is pretty well documented online.
#15: Enforce your attribute relationships.
This is mostly relevant for SSAS dimensional, but you should also keep it in mind when working with tabular. Attribute relationships define how the attributes of a dimension relate to (roll up into) each other. For example, products roll up into product subgroups, which in turn roll up into product groups. This is a consequence of the denormalization process many data warehouse models go through, where complex relationships are flattened out into wide dimension tables. These relationships should be defined in SSAS to boost general performance; the best-practice analyzer built into Data Tools makes sure you remember this with its blue squiggly lines. Usually it takes some trial and error before you get it right, but in the end you are able to process your dimension without those duplicate attribute key errors. If you still don’t know what I am talking about, look it up online.
So far so good. Problems start arising when these attribute relationships are not enforced in your data source, typically a data warehouse. Continuing with the example from earlier, over time you might get the same product subgroup referencing different product groups (“parents”). This is not allowed and will cause processing of the dimension to fail in SSAS (those pesky duplicate key errors). To handle this a bit more gracefully than simply leaving your cube(s) in an unprocessed state (with the angry phone calls this brings with it), you should enforce the relationship at the ETL level, in Integration Services. When loading a dimension, reject or handle cases where these relationships are violated and notify someone that this happened. The process should make sure the integrity of the model is maintained by assigning “violators” to a special member of the parent attribute that marks them as suspect. That way your cubes can still be processed while the data that needs attention is highlighted.
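A hedged sketch of the kind of ETL-level check described above, using hypothetical product tables: it finds subgroups that reference more than one parent group, which is exactly what makes dimension processing fail with duplicate attribute key errors.

```sql
-- Product subgroups rolling up to more than one product group violate the
-- SubGroup -> Group attribute relationship and will break dimension processing.
SELECT
    p.ProductSubGroup,
    COUNT(DISTINCT p.ProductGroup) AS ParentGroupCount
FROM dw.DimProduct AS p
GROUP BY p.ProductSubGroup
HAVING COUNT(DISTINCT p.ProductGroup) > 1;
```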
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part II: Tips 6-10
#6: Use a framework for your Integration Services solution(s) because data is evil
I know how it is. You may have started your ETL project using the SQL Server import/export wizard, or you may have done a point integration of a couple of tables through Data Tools. You might even have built an entire solution from the ground up and been pretty sure you thought of everything. You most likely have not. Data is a tricky thing; so tricky, in fact, that over the years I have built up an almost paranoid distrust of it. The only sure thing I can say is that it will change (both intentionally and unintentionally) over time, and your meticulously crafted solution will fail. The best-case scenario is that it simply stops working. The worst-case scenario is that the errors never cause a technical failure but quietly perform faulty insert, update and delete operations against your data warehouse for months, and this is not discovered until you have a very angry business manager on the line who has been doing erroneous reporting up the corporate chain the whole time. Unfortunately, this is also the most likely scenario. A good framework should have functionality for recording data lineage (what has changed) and the ability to gracefully handle technical errors. It won’t prevent these kinds of errors from happening, but it will help you recover from them a lot faster. For inspiration, read The Data Warehouse ETL Toolkit.
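As a minimal, hypothetical sketch of the lineage side of such a framework (names are illustrative; frameworks along the lines of The Data Warehouse ETL Toolkit go much further), each package run can log itself to an audit table and stamp the rows it writes with the audit key:

```sql
-- One row per package execution; fact and dimension rows carry the AuditKey,
-- so every row in the warehouse can be traced back to the load that wrote it.
CREATE TABLE etl.AuditLog (
    AuditKey     int IDENTITY(1,1) PRIMARY KEY,
    PackageName  nvarchar(200) NOT NULL,
    StartedAt    datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    FinishedAt   datetime2     NULL,
    RowsInserted int           NULL,
    RowsUpdated  int           NULL,
    RowsRejected int           NULL,
    Succeeded    bit           NULL
);
```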
#7: Use a framework for your Integration Services solution(s) to maintain control and boost productivity
Integration Services is a powerful ETL tool that can handle almost any data integration challenge you throw at it. To achieve this it has to be very flexible, and like many of Microsoft’s products it is very developer-oriented. The issue with this is that there are as many ways of solving a problem as there are Business Intelligence consultants on a project. By implementing an SSIS framework (and sticking with it!) you ensure that the solution handles similar problems in similar ways. So when the lead developer gets hit by that bus, you can put another consultant on the project who only needs to be trained on the framework to be productive. A framework will also boost productivity: the up-front effort of coding it, setting it up and getting your team to use it is dwarfed by the benefits of templates, code reuse and shared functionality. Again, read The Data Warehouse ETL Toolkit for inspiration.
#8: Test and retest your calculations.
Get into the habit of testing your MDX and DAX calculations as soon as possible, ideally as soon as you finish a calculation, scope statement, etc. Both MDX and DAX get complicated really fast, and unless you are a Chris Webb you will quickly lose track of dependencies and why numbers turn out as they do. Test your statements in isolation and the solution as a whole, and verify that everything works correctly. These things can also have a severe performance impact, so remember to clear the Analysis Services cache and do before-and-after testing (even if you have a cache warmer). Note that clearing the cache means different things for tabular and dimensional, so look up the correct approach for the model type you are using.
#9: Partition your data and align it from the ground up.
Note that you need the Enterprise edition of SQL Server for most of this. If you have large data sets, design your solution from the ground up to utilize partitioning. You will see dramatic performance benefits from aligning your partitions all the way from your SSIS process to your Analysis Services cubes / tabular models. Alignment means that if you partition your relational fact table by month and year, you should do the same for your Analysis Services measure group / tabular table. Your SSIS solution should also be partition-aware so it can maximize throughput by exploiting your partitioning scheme.
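A hedged T-SQL sketch of the relational side of monthly partitioning (boundary dates, names and columns are hypothetical); the SSAS measure group or tabular table partitions and the SSIS load logic would then follow the same monthly boundaries:

```sql
-- Monthly partition function and scheme for the relational fact table.
CREATE PARTITION FUNCTION pfMonthly (date)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2013-02-01', '2013-03-01');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- The fact table is created on the partition scheme, keyed by the date column.
CREATE TABLE dw.FactSales (
    OrderDate   date  NOT NULL,
    ProductKey  int   NOT NULL,
    CustomerKey int   NOT NULL,
    SalesAmount money NOT NULL
) ON psMonthly (OrderDate);
```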
#10: Avoid using the built-in Excel provider in Integration Services.
I feel a bit sorry for the Excel provider. It knows that people who see it will think, “Obviously I can integrate Excel data with my SSIS solution; it’s an MS product and MS knows that much of our data is in Excel.” The problem is that Excel files are inherently unstructured, so for all but the simplest workbooks the provider will struggle to figure out what data to read. Work around this by either exporting your Excel data to flat files or looking at third-party providers.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Hi Peter !
Let's discuss points 6 to 10 here:
#6: I totally agree with you: never assume that data will arrive in the format the ETL team expects. There is always a possibility of wrong data types, bad data, switched data; all kinds of data can appear as source data. As an ETL developer you need to put data validation checks in place for each and every case you have in mind, and you might still miss some. The good thing about MS SQL Server 2012 is that it now provides the TRY_CAST function, which can be used to avoid casting errors. A carefully designed framework that ETL developers only need to learn once is handy to have, so invest in building a framework that can be used across multiple ETL projects. I strongly agree with your point that data is evil; sometimes it is very hard to load even a single file containing all these kinds of bad-data validation errors.
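For readers who have not used it, a small illustration of TRY_CAST (available from SQL Server 2012), which returns NULL instead of raising an error when a conversion fails; the staging table and column names are hypothetical:

```sql
-- Rows whose Amount column cannot be converted to decimal are flagged
-- instead of aborting the whole load with a casting error.
SELECT
    s.SourceRowID,
    TRY_CAST(s.Amount AS decimal(18, 2)) AS AmountParsed,
    CASE WHEN TRY_CAST(s.Amount AS decimal(18, 2)) IS NULL
         THEN 1 ELSE 0 END AS HasBadAmount
FROM staging.SalesRaw AS s;
```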
#7: Definitely. By having a framework you save time by not writing the same piece of code again and again. While designing your ETL, also be careful about the data types you use. To some people there is only a slight difference between the float, decimal and numeric data types, but if you have been writing ETL solutions you know what kind of mess picking the wrong one can create; the same goes for the date and datetime data types.
#8: MDX calculations need to be tested again and again, which is called regression testing. In all the years I have been building end-to-end BI solutions involving complex ETL, it has been almost impossible for QA agents to identify problems in calculations. So when you assign someone the task of verifying an MDX calculation, or just verifying the BI dashboard output, make sure they have enough data analysis knowledge: they should be proficient enough to query the database, browse the cube and perform cross-verification of the data. As a BI consultant I invest a lot of time in training my QA agents to be able to perform this regression testing.
#9: Partitioning is always good practice when you are sure the data influx will run into billions of rows. But if you are designing a BI solution for an organization that will not have that much data under analysis, you may skip partitioning.
#10: Strongly recommended. The built-in Excel provider will drive you crazy very quickly with its data type sensing; you can try to rein it in by setting TypeGuessRows to 0, but there are many problems with the Excel provider because it always tries to sense the data type of each source column.
One thing I need to mention: a carefully designed ETL with a customized logging process can save you tons of time when analyzing the cause of a data failure. It is also good to have an ETL logging process that can be shared with your client.
Regards,
Hasham Niaz
Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part I: Tips 1-5
Having worked with Microsoft BI for more than a decade now, here are the top 30 things I wish I had known before starting development of a solution. These are not general BI project recommendations such as “listen to the business” or “build incrementally” but specific lessons I have learned (more often than not the hard way) designing and implementing Microsoft-based Business Intelligence solutions. So here are the first five:
#1: Have at least one SharePoint expert on the team.
The vast majority of front-end BI tools from Microsoft are integrated with SharePoint. In fact, some of them only exist in SharePoint (for instance PerformancePoint). This means that if you want to deliver Business Intelligence with a Microsoft solution, you will probably deliver a lot of it through SharePoint. And make no mistake: SharePoint is very complex. You have farms, site collections, lists, services, applications, security… the list goes on and on. To make matters worse you may have to integrate your solution with an already existing SharePoint portal. There is a reason there are professional SharePoint consultants around, so use them.
#2: Do not get too excited about Visio integration with Analysis Services.
Yes, you can query and visualize Analysis Services data in Visio. You may have seen the supply chain demo from Microsoft, which looks really flashy. You might think of a hundred cool visualizations you could do. Before you spend any time on this or start designing your solution around it, try out the feature. While it’s a great feature, it requires a lot of work to implement (at least for anything more than trivial). Also, it (currently) only supports some quite specific reporting scenarios (think decomposition trees).
#3: Carefully consider when to use Reporting Services.
Reporting Services is a great report authoring environment. It allows you to design and publish pixel-perfect reports with lots of interactivity. It also provides valuable services such as caching, subscriptions and alerts. This comes at a cost, though: the effort needed to create SSRS reports is quite high and requires a specialized skill set. This is no end-user tool. There are also issues with certain data providers (especially Analysis Services). But if you need any combination of multiple report formats, high scalability (caching, scale-out), subscriptions or alerts, you should seriously consider Reporting Services.
#4: Use Nvarchar / unicode strings throughout the solution.
Unless you live in the US (and are pretty damn sure you will never have “international data”), use Unicode. Granted, varchars are more efficient, but you do not want to deal with collations and code pages. Ever. Remember that this is not only an issue for the database engine but also for other services such as Integration Services.
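A small, hypothetical illustration of the point; note the nvarchar columns and the N prefix on Unicode literals, which matter as soon as the data stops being plain ASCII:

```sql
-- Unicode throughout: nvarchar columns and N'' literals survive any language.
CREATE TABLE dw.DimSupplier (
    SupplierKey  int           NOT NULL PRIMARY KEY,
    SupplierName nvarchar(100) NOT NULL,
    City         nvarchar(100) NULL
);

INSERT INTO dw.DimSupplier (SupplierKey, SupplierName, City)
VALUES (1, N'Müller GmbH', N'Zürich'),
       (2, N'北京供应商', N'北京');
```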
#5: Check if it exists on codeplex.
Do not build anything before you have checked CodePlex. Chances are someone has already built the same thing, or something similar that can be tweaked. If, like me, you are skeptical of including “foreign” code in your solution, use the CodePlex code as a cheat sheet and build your own based on it. There is a lot of stuff there, including SSAS stored procedures, SSIS components and frameworks, and much more.
Disclosure: The company I work for is a Microsoft Partner
[Syndicated from www.peterkollerbi.wordpress.com]
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Hi Peter !
Let’s talk about the difficulties you have faced during your BI career;
#1: I partially agree with you. Having a dedicated SharePoint resource is handy because you will probably run into performance or security issues somewhere along the project, but my preference is to have a single SharePoint resource shared between multiple BI projects. In my experience it is not that hard to configure PerformancePoint Services, Power View and Reporting Services on SharePoint. With some help, a BI consultant can do this on his own, and a BI consultant should take ownership of the project and try to resolve issues himself. This gives them more of a learning curve and hands-on experience with the other front-end tools. You can't always rely on someone else to fix the issues for you.
#2: I have yet to see any BI consulting firm deliver solutions through the Microsoft Visio integration with SSAS. All I can say is that Microsoft has invested in a lot of tools to see which one becomes a real contender to replace the rest of the BI stack, or which gets the most popular response from the market. It is more of a market strategy to see which product or tool gets the most traction.
#3: SSRS has been the greatest thing Microsoft has delivered for reporting, apart from PPS lately. I still feel there are a lot of areas where SSRS needs to improve; for example, it doesn't have alerts, and it is very restricted when it comes to dynamic dashboards or interactive reporting. If you have seen PPS, you know that as a BI consultant I want to show my client how interactive my BI solution is. Still, there are areas you mentioned, like subscriptions and caching, where SSRS is great. Additionally, SSRS is designed with the assumption that developers will be using it to build reports. For end users, Microsoft Excel is the best they can have: they can slice and dice, and with Power Pivot included there is a lot an end user can do with an SSAS cube.
#4: nvarchar vs. varchar will always be a debate between developers; it's more of a choice thing. But if you are developing a BI solution that is going to be used across multiple regions, consider using nvarchar, keeping in mind the extra storage overhead you will be paying for as a developer.
#5: CodePlex is a great community, but most clients want things to be customized and to be their own proprietary work. That is what we are paid for as BI consultants: to provide a solution that fulfills the organization's needs, and you will agree that every management team has different needs. Still, it is a good idea to look at CodePlex and peer sites for reference.
When choosing between tools, remember that no single tool can meet all of your customer's requirements, so keep in mind that you might end up using a tool you rejected in your initial analysis. Believe me, remembering this will save you real embarrassment with clients: it looks bad to announce that you won't be using a tool and then come back later and say that a particular report is now being delivered with the very tool you discarded in your earlier review.
So my point is that one needs to be flexible, adaptive and responsive to be a successful BI consultant.
Regards,
Hasham Niaz
Head of Data Analytics with 51-200 employees
Why would you choose Microsoft as your BI platform?
This morning I was on the train going to a briefing session and I was compelled to look again at the Gartner Magic Quadrant paper on BI – in the same way as mid-exam you might go back and look at the question to make sure you are answering it. Here are the things I pulled out for my slides. You might find them useful.
I see Gartner as the arbiters of good-taste in matters informatics. They explain the market and solutions, they rate vendors and they offer thought-provoking insight to people making technology choices – whether you are buying or making. I love ‘em. I’m making no apologies for my promotion of Microsoft. I believe it to be the most complete in terms of the company’s vision, the easiest to execute and I buy into the visionaries in Redmond and beyond (especially Cambridge in the UK) as Microsoft tries to lead the market. I bet my house on this a few years ago and I still live there. Phew.
Thinking about what BI is: it’s really about giving people the right tools for their job so they can work effectively and collaboratively in managing the flow of information across an integrated infrastructure (so the flow doesn’t break) and an integrated data architecture (so that when you blend the liquids flowing through the pipes they taste nice), without IT being constantly in their homes / offices / cars / clients’ houses. It’s about delivering information to the people who need it to make good business and clinical decisions, in the right way at the right time. It’s about being able to find information, and about information finding me; I want to hear the erudite information shouting loudest at me amid the tumult of data chatter. It’s about the information being structured so that I can plug tools into it, build predictive models, run SPC and do all the other things I want to do to improve the safety, quality and cost-effectiveness of my services.
The Microsoft stack does this for me – see previous posts. This is recognised. Gartner points out that the Microsoft solution set is wide in scope – there is something in the toolset for everyone, however the set is integrated and so it works. See my article on why you wouldn’t buy reporting solutions for example – in and of themselves they don’t solve your problems.
Clearly the Microsoft BI stack is designed with Gartner’s feedback in mind, he said smilingly, as we can directly map what they have done to the above description of good BI.
Microsoft BI is recognised as being wide in scope and deep in functionality, so it ticks all of the above boxes, and the UI has something in it for everyone in terms of the combined tools’ ability to give access to data. Some might say they have too many tools (see previous post), but the partner ecosystem of people like us at Ascribe should be able to line up features and functions to roles, so that shouldn’t be a concern. The ecosystem is actually another reason why people buy Microsoft: as the technology giant creates a giant platform, niche (and even scale) vendors build targeted solutions on top of it, which is why it’s as good for banks as it is for hospitals. Giants feed themselves on R&D, and Redmond runs the biggest R&D budget in the world, which means the platform that Ascribe works upon is always the best. The scale makes it cheap, particularly if you invest in Microsoft across your enterprise and then sweat BI out of the asset at marginal cost. You can also use a range of resources to help, whether it’s software vendors with Microsoft-powered software, consultancies who configure BI solutions, contractors or your own staff. Finally there is the architecture. The software is designed to align with industry-standard methodologies such as Agile, so you can build solutions quickly, and Kimball, so you can have a concrete data management strategy but a rubber implementation plan. Thanks Simon M for the concrete and rubber….
The other big play is cloud – I’ll post on that later. All in all then it’s easy to see why I bought into the platform, as the foundation to my business. It should have clear benefits for you too.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Great point, Ali. That is another advantage of the Microsoft BI product over its competitors.
Head of Data Analytics with 51-200 employees
Does Microsoft Have Too Many BI Products?
I am quite excited about the launch of SQL Server 2012 and in particular Power View, or Crescent as some of you may know it. I am pleased that Microsoft is sharpening its in-memory BI story and has a drag-and-drop user interface that can compete with the likes of QlikView et al. Blimey, this has started off like a techy post; I didn’t mean it to. I’ll write more about our use of Power View on a really interesting project next time. Let me get to the point.
Microsoft now has Excel, ProClarity, PerformancePoint, PowerView, PowerPivot, Reporting Services, Visio and BingMaps interfacing with its dimensional model (Analysis Services) and now its BISM (BI Semantic Model) which seems to have replaced the Report Model. I am confused and so are my customers. This is also an issue that Gartner picked up on when they did the last magic quadrant review. In fact I remember being at a presentation on SQL 2012 (Denali as was) last year and a poor guy from Microsoft was mullered by the audience of technical guys who berated him for the lack of coherence in Microsoft’s BI message.
I wasn’t that worried actually because, as a partner, it’s my job to take the platform Microsoft gives me and manipulate it to meet my customers’ needs and vice versa – in fact, probably more vice versa.
In my mind I have this sorted out. This is what I do.
Firstly, I talk about the health and social care BI portal as a gateway to all the knowledge assets the organisation holds, and my customers shout out things like EDRM / collaboration / search / BI / unstructured content / nice-looking website. We don’t really talk SharePoint. I don’t talk about the different platforms and their naming conventions. For example, trying to explain the evolution of PerformancePoint only distracts from the need it serves: to let people who live in a one-to-five mouse-click world go from a macro to a micro view of organisational performance using a scorecard / dashboard. I think about public health maps, organisational strategy maps and caseload reports (Reporting Services) in the same way: how many clicks does it take to get the information I need, and how can I, as an end user, be best connected with my data?
I would then think about Excel meeting the needs of analysts by providing direct access to data and I would tell the story of in-memory BI using PowerPivot.
Then I have to think about PowerView. That’s okay – in my first sentence I articulated the value to people who sit between Excel Pivot-table Gods and people who consume data via dashboards. So individually I can map each sort of user profile to a solution and to an underlying Microsoft technology. The problem comes when you step back and think about this strategically. I don’t mean as a programme of work because things like the UI are very similar and so the training overhead isn’t a problem. I think more about the coherence and I go back to that very hot room and the hot talk that made my mate at Microsoft sweat.
I don’t think that has been figured out. Maybe in the next iteration of SharePoint all the BI will be brought together and made into a seamless application so the alignment of function to “user need” doesn’t jar but emphasises the richness of the platform. Let’s see. Microsoft friends if you are reading, what do you think?
For now, I’ll keep on telling my tale, looking into the eyes of each of the different users I pitch to, pointing out which application is exactly for them and emphasising how we, at Ascribe, understand that this can appear confusing but actually isn’t. So does it matter that, when we step back, it looks a little messy, when we are actually meeting the needs of our people? I don’t think it does, yet, but I think it will as the BI becomes more embedded.
Because that is the point of BI – to a large extent. You want people to come together to look at information and make sense of it and use it – we may be victims of our own success if we solve the “one version of the truth” issue (so they are all looking at the same data) but we create confusion through the range of tools we offer.
This one will run and run.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Thank you for the great information you have shared. However, I have a simple question. If Microsoft indeed has several BI products, does that give them any competitive advantage over their competitors? And does that make their products any better in terms of functionality?
BI Expert with 51-200 employees
We’ve Got The Power: “Power BI”, New Microsoft BI Suite Announced
Power BI: a new suite of Business Intelligence tools
Over the past few months, teams at Microsoft have made several new Business Intelligence tools available for preview; some only privately and some to the public. The entire suite will soon be available for either public preview or release under the new name “Power BI”. All of the components of Power BI are listed below, but the big news is a new hosted offering called “Power BI for Office 365” together with “Power BI Sites”. The announcement was made at the Worldwide Partner Conference this week. Users can sign up to be notified when the new offerings reach general availability, apparently in the very near future. I’ve had an opportunity to work with early, pre-released versions, and it has been interesting to see the gaps being filled a little at a time. On the heels of the new suite, some of the names of existing products are also being changed. It’s hard to have a conversation about the collection of Microsoft’s “Power”/“Pivot”/“Point”-named tools and not get tongue-twisted, but these changes bring more consistency.
Bottom line: this is good news and a promising step forward – especially for smaller businesses. Larger, enterprise customers should know that this move is consistent with Microsoft’s “cloud first” philosophy and these capabilities are being introduced through Office365/Azure platform with required connectivity. Read the commentary on community leaders’ sites below. I have no doubt that there will be a lot of discussion on this in the weeks to come with more announcements from Microsoft in the near future.
Power BI for Office 365 and Power BI Sites
When Power View was released with SQL Server 2012 Enterprise and Business Intelligence Editions, it was available only when integrated with SharePoint 2010 Enterprise Edition. This is a good solution for enterprise customers, but it was complex and expensive for some to get started with. Power View was also offered only as a Silverlight application that wouldn’t work on many mobile devices and web browsers. For this reason, Power View has really been viewed as a “Microsoft only” tool, and only for big companies with deep pockets and very capable IT support groups. Even the new Power View add-in for Excel 2013 ProPlus Edition requires Silverlight, which is not a show-stopper for most folks but a hindrance for multi-platform and tablet users. This all changes with the new offering, as the Power View visualization tool in the hosted product comes in three new flavors: a native Windows 8 app (runs on desktop, Surface RT & Pro), native iOS (targeting the iPad) and HTML5 (works on practically any newer device). This means that when you open a Power View report on your Surface or iPad, it can run as an installed app with all the cool pinch-zoom and gestures you’ve come to expect on a tablet device. For now, this is good news only for the cloud user, as no on-premises option is currently available. An interesting new addition will be the introduction of a semantic translation engine for natural language queries, initially for English.
Power Query
Formerly known as “Data Explorer”, this add-in for Excel 2013 allows you to discover and integrate data into Excel. Think of it as intelligent, personal ETL with specialized tools to pivot, transform and cleanse data obtained from web-based HTML tables and data feeds.
Power Map
This Excel 2013 ProPlus add-in, which was previously known as “GeoFlow”, uses advanced 3-D imaging to plot data points on a global rendering of Bing Maps. Each data point can be visualized as a column, stacked column or heat map point positioned using latitude & longitude, named map location or address just like you would in a Bing Maps search. You can plot literally thousands of points and then tour the map with the keyboard, mouse or touch gestures to zoom and navigate the globe. A tour can be created, recorded and then played back. Aside from the immediate cool factor of this imagery, this tool has many practical applications.
Power Pivot
The big reveal is that “PowerPivot” shall now be known as “Power Pivot”; note the added space, which makes the name consistent with the other applications. We all know and love this tool, an add-in for Excel 2010 and Excel 2013 ProPlus (two different versions with some different features) that allows large volumes of related, multi-table data to be imported into an in-memory semantic model with sophisticated calculations. On a well-equipped computer, this means that a model can contain tens of millions of rows that get neatly compressed into memory and can be scanned, queried and aggregated very quickly. Power Pivot models (stored as an Excel .xlsx file) can be uploaded to SharePoint, where they become a server-managed resource. A Power Pivot model can also be promoted to a server-hosted SSAS Tabular model, where the data is not only managed and queried on an enterprise server but also takes on many of the features and capabilities of a classic SSAS multidimensional database. Whether a Power Pivot model is published to a SharePoint library or promoted to a full-fledged SSAS Tabular model, the data can be queried by any client tool as if it were an Analysis Services cube.
Power View
For now, Power View in Excel 2013 ProPlus and Power View in SharePoint 2010 Enterprise and SharePoint 2013 Enterprise remain the same: the Silverlight-based drag-and-drop visual analytic tool. With the addition of SQL Server 2012 CU4, Power View in SharePoint can be used with SharePoint-published Power Pivot models, SSAS Tabular models and SSAS Multidimensional “cube” models. There has been no news yet about a non-Silverlight replacement for the on-premises version of Power View. The Microsoft teams and leadership have heard the requests and feedback, loud and clear, from the community, and we can only guess that more is in the works, but I make no forecast or assumptions about the eventual availability of an on-premises offering similar to Power BI for Office 365.
Additional thoughts and information from the community can be found at:
Chris Webb: Some Thoughts About Power BI
Andrew Brust: Microsoft Announces Power BI for Office 365
SQL Server Blog: Introducing Power BI for Office 365
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Entrepreneurs who run small businesses have another reason to smile, or to keep smiling. However, doesn't it seem like other clients will be locked out from using this great product in the making? My reasoning is that Power BI is designed for compatibility with the Azure and Office 365 platforms, and there are many businesses across the globe that use platforms other than these two. Does that mean they will be locked out due to compatibility issues? If so, the platforms Power BI supports will limit its use to some extent, making this a con.
BI Expert with 51-200 employees
Taking the Tabular Journey
A Getting-Started and Survival Guide for planning, designing and building Tabular Semantic Models with Microsoft SQL Server 2012 Analysis Services.
by Paul Turley
This post will be unique in that it will be a living document that will be updated and expanded over time. I will also post-as-I-go on the site about other things but this particular post will live for a while. I have a lot of good intentions – I know that about myself and I also know that the best way to get something done is to get it started – especially if I’m too busy with work and projects. If it’s important, the “completing” part can happen later. In the case of this post, I’ll take care of building it as I go, topic by topic. Heck, maybe it will never be “finished” but then are we ever really done with IT business solutions? I have been intending to get started on this topic for quite some time but in my very busy project schedule lately, didn’t have a concise message for a post – but I do have a lot to say about creating and using tabular models.
I’ve added some place-holder topic headers for some things that are on my mind. This list is inspired by a lot of the questions my consulting customers, students, IT staff members and business users have asked me on a regular basis. This will motivate me to come back and finish them and for you to come back and read them. I hope that you will post comments about your burning questions, issues and ideas for related topics to cover in this living post about tabular model design practices and recommendations.
Why Tabular?
SQL Server Analysis Services is a solid and mature platform that now serves as the foundation for two different implementations. Multidimensional models are especially suited for large volumes of dimensionally-structured data that have additive measure values that sum-up along related dimensional attributes & hierarchies.
By design, tabular architecture is more flexible than multidimensional in a number of scenarios. Tabular works well with dimensional data structures, but it also works well in cases where the structure of the data doesn’t resemble a traditional star or snowflake of fact and dimension tables. When I started using PowerPivot and tabular SSAS projects, I insisted on transforming data into star schemas like I had always done before building a cube. In many cases I still do, because it’s easier to design a predictable model that performs well and is easy for users to navigate. A dimensional model has order and discipline; however, the data is not always shaped this way, and it can take a lot of effort to force it into that structure.
Tabular is fast not only for additive, hierarchically structured data; in many cases it also works well with normalized and flattened data, as long as all the data fits into memory and the model is designed to support simple relationships and calculations that take advantage of the function engine and the VertiPaq compression and query engine. It’s actually pretty easy to make tabular do silly, inefficient things, but it’s also not very hard to make it work really well.
James Serra has done a nice job of summarizing the differences between the two choices and highlighting the strengths and comparative weaknesses of each in his April 4 blog post titled SQL Server 2012: Multidimensional vs Tabular. James points out that tabular models can be faster and easier to design and deploy, and that they consistently perform well without a lot of extra attention to tuning and optimization. Honestly, there isn’t that much to maintain, and a lot of the tricks we use to make cubes perform better (like measure group partitioning, aggregation design, strategic aggregation storage, usage-based optimization, proactive caching and cache-warming queries) are simply unnecessary; most of these options don’t really exist in the tabular world. We do have partitions in tabular models, but they’re really just for ease of design.
What About Multidimensional – Will Tabular Replace It?
The fact is that multidimensional databases (which most casual SSAS users refer to as “cubes”) will be supported for years to come. The base architecture for SSAS OLAP/UDM/Multidimensional is about 13 years old: Microsoft originally acquired a product code base from Panorama and then went on to enhance and eventually rewrite the engine over the years as it matured. In the view of many industry professionals, this is still the more complete and feature-rich product.
Both multidimensional and tabular have strengths and weaknesses today, and one is not clearly superior to the other. In many cases tabular performs better and its models are simpler to design and use, but the platform lacks equivalent commands and advanced capabilities. In the near future the tabular product may inherit all of the features of its predecessor and the choice may become clearer; or perhaps a hybrid product will emerge.
Isn’t a Tabular Model Just Another Name for a Cube?
No. …um, yes. …well, sort of. Here’s the thing: “cube” has become a de facto term used by many to describe the general concept of a semantic model. Technically, the term “cube” defines a multidimensional structure that stores data in hierarchies of multi-level attributes, with pre-calculated aggregate measure values at the intersection points between all those dimensions and at strategic points between many of the level members in between. It’s a cool concept and an even cooler technology, but most people who aren’t close to this product don’t understand all that. Users just know that it works somehow, but they’re often confused by some of the fine points, like the difference between hierarchies and levels (one has an All member and one doesn’t, but they both have all the other members). It makes sense when you understand the architecture, but it’s just weird behavior for those who don’t.
Since the tabular semantic model is actually Analysis Services with a single definition of object metadata, certain client tools will continue to treat the model as a cube, even though it technically isn’t one. A tabular Analysis Services database contains some tables that serve the same purpose as measure groups in multidimensional semantic models. The rest of the tables are exposed as dimensions, in the same way that cube dimensions exist in multidimensional. If a table in a tabular model includes both measures and attribute fields, certain client tools like Excel will show it twice in the model: once as a measure group table and once as a dimension table.
(more to come)
Preparing Data for a Tabular Model
Data Modeling 101 for Tabular Models
Are There Rules for Tabular Model Design?
Tabular Model Design Checklist
What’s the Difference Between Calculated Columns & Measures?
What are the Naming Conventions for Tabular Model Objects?
What’s the Difference Between PowerPivot and Tabular Models?
How to Promote a Business-created PowerPivot Model to an IT-managed SSAS Tabular Model
Getting Started with DAX Calculations
DAX: Essential Concepts
DAX: Some of the Most Useful Functions
DAX: Some of the Most Interesting Functions
Using DAX to Solve real-World Business Scenarios
Do I Write MDX or DAX Queries to Report on Tabular Data?
Can I Use Reporting Services with Tabular & PowerPivot Models?
Do We Need to Have SharePoint to Use Tabular Models?
What Do You Teach Non-technical Business Users About PowerPivot and Tabular Models?
What’s the Best IT Tool for Reporting on Tabular Models?
What’s the Best Business User Tool for Browsing & Analyzing Business Data with Tabular Models?
Survival Tips for Using the Tabular Model Design Environment
How Do You Design a Tabular Model for a Large Volume of Data?
How Do You Secure a Tabular Model?
How to Deploy and Manage a Tabular Model SSAS Database
Tabular Model Common Errors and Remedies
Tabular Model, Workspace and Database Recovery Techniques
Scripting Tabular Model Measures
Simplifying and Automating Tabular Model Design Tasks
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Hi Peter !
Nice article. Now let's discuss points 11 to 15 in detail:
#11: I only partially agree with you on this, because I don't see the need to create a separate surrogate key in SSIS. My preference is to use the keys from the production tables; personally I use the change table method to perform incremental loads. If a separate key is required in your data warehouse model, you can create it by combining values read from the source table, or by loading a value into an SSIS variable and then assigning it to your table.
#12: I prefer to use Excel as a tool for quick data verification or number reconciliation by connecting to my cube. I know Microsoft has been investing a lot in Excel through Power Pivot and so on. But what about the future of "Power BI", which we hear is a new tool with the capability to become the number one BI tool for reporting? Personally, I don't think Excel can be used as an enterprise reporting tool.
#13: A rare thing to have. Another point to add: it is really hard to find a BI consultant with experience not only in cube optimization but also in report and database optimization. If you have one of these, I call them a real asset, because they will help you not only with OLAP but also with OLTP, with your SSIS and with your reporting. I strongly suggest including at least one of these people on a BI project; it will actually save you time and money.
#14: I have been using TFS to keep my SSRS reports under source control, and it has been nice; it doesn't act up badly. But I do have reservations about keeping my SSIS packages in TFS, because it has happened to me multiple times that they got corrupted somehow. Luckily I do not rely only on TFS, so I still had the source elsewhere. Always have a backup strategy for how you will recover if your source control fails, and be prepared, because it can happen at any time.
#15: It is always good to define attribute relationships, and whenever possible define hierarchies as well. Remember that once you define a hierarchy, hide the underlying attribute so that it is not duplicated in the reporting tool; if you are using PerformancePoint, for example, end users might otherwise see the same attribute both inside the hierarchy and directly on the dimension. So set the attribute's visibility to hidden.
Designing a BI solution is an interesting job; with each development you will learn new things. Always plan your development and choose the right tools for your final solution, and if you are unsure about something, discuss it with other consultants to pick the right product for your solution.
Regards,
Hasham Niaz