Microsoft SSRS's biggest advantage is its ability to integrate with other third-party components. You can embed it in a SharePoint web part or access it from a web browser; depending on your existing scenario, you don't need to change much to integrate SSRS with your existing application. Microsoft SSAS offers a low-cost BI solution compared with IBM Cognos: licensing for SSAS is core-based, or you can opt for a server + CAL license, which is very inexpensive in comparison with IBM Cognos. The Microsoft SSRS & SSAS suite is uncompromising when it comes to performance: it delivers very high performance and can connect to many different data sources. It also takes very little time to build an SSRS & SSAS solution.
Developer at a tech consulting company with 51-200 employees
It offers a high-performance, low-cost BI solution with the ability to integrate with other third-party tools.
What is most valuable?
The Microsoft SSRS & SSAS suite is very easy to configure and maintain. You don't need a license to try out SSRS; you can simply download and use Microsoft SQL Server Express (with Advanced Services). With SSRS you can create dashboards, drill-through reports, charts, graphs, maps, and dynamic grouping.
What needs improvement?
Microsoft SSRS provides no mechanism to prevent poorly performing reports from running in a production environment. If a user creates a report that puts a heavy load on server resources, it should not be allowed to run on the production server during normal working hours. The workaround is to use the subscription option to deliver reports to a particular directory, or to email them, at a scheduled time (preferably outside working hours). Microsoft SSRS also offers no interoperability with Android or iPad at this time (keep this in mind if you are planning to develop reports for those environments).
What other advice do I have?
It offers a very vibrant MSDN community along with Microsoft support staff for assistance.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
BI Junior consultant at a tech services company with 501-1,000 employees
Excel in BI mode provides an excellent approach to the BI world for end users
Valuable Features:
Power BI (except Power Query)
Improvements to My Organization:
Reduced time to develop and deploy projects.
Room for Improvement:
Power Query (it takes too long), and some things that exist in SSAS multidimensional cubes cannot be done the same way in tabular models.
Use of Solution:
2 years
Deployment Issues:
Yes, with the DAX language.
Stability Issues:
No
Scalability Issues:
No
Disclosure: My company has a business relationship with this vendor other than being a customer: we are a partner.
Information System Manager with 501-1,000 employees
We evaluated both SSRS and Business Objects. We chose SSRS and currently use Cognos as well.
I've been using Microsoft SSRS for two years. For me, the most valuable feature is the integration with our ERP. SSRS has room for improvement, including drill-down and graphs. Although I am happy with the product, I have experienced performance degradation at times.
We're currently running both SSRS and Cognos. We ended up choosing SSRS after evaluating it alongside Business Objects. Once we chose SSRS, we implemented it through the vendor and found their level of expertise to be high, which made the process very easy. I would say that our ROI is 200%. I would recommend SSRS to others.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
BI Expert at a tech company with 10,001+ employees
Comparing BO Webi vs SSRS
Web Intelligence (Webi) from BusinessObjects and SSRS from Microsoft are always in competition with each other. Both are good products catering to similar business needs in different styles. I have been using Webi and SSRS on my projects, and before starting a new one I always ask myself which to use. Here is a comparison that may help others choose between the two.
- Formatting - SSRS gives you good flexibility in formatting your reports, but Webi does it better. In Webi you can align components and charts with a better look and feel than in SSRS. Formatting is also easier in Webi because it is entirely GUI-based, while SSRS uses a Visual Studio-like environment, which adds some complexity. Webi 5, SSRS 4.
- Rendering - SSRS rendering is really weak when it comes to PDF: it doesn't provide bookmarks for report tabs, and the formatting quality of Webi-generated PDFs is better. Here SSRS loses and Webi wins. Webi 5, SSRS 3.
- Ease of use - Again, Webi has the advantage. SSRS is too complex for a beginner: the tablix concept, conditional formatting, and so on are all expression-based, which means a lot of code. Webi 4, SSRS 3.
- Flexibility - Here SSRS is the champ. Everything can be modified to your requirements: you can use custom functions, custom code, colors, borders, and text, and any property can be made dynamic using expressions. In Webi only a limited set of things can be modified. Webi 2, SSRS 5.
- Error reporting - When a scheduled Webi report fails, Webi gives you specific error details, while in SSRS the error details are mostly generic. Webi 5, SSRS 3.
- Managing history instances - SSRS has tried to do a lot, but less than Webi; Webi is better here. Webi 5, SSRS 4.
- Error-proneness - While working with Webi, there have been many instances where you just can't explain the erroneous behavior of Webi reports: sometimes report tabs disappear, sometimes your formatting gets corrupted, and above all Java creates a lot of problems. SSRS is largely free of such errors. SSRS 5, Webi 2.
Final total scores: Webi 28, SSRS 27.
Overall, if you are looking for better formatting and an easy-to-use GUI, Webi is for you. But if you want better flexibility and less rework, SSRS can be the choice.
Please Note: these are my views based on my experience. Some people may have different opinions.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part VI: Tips 26-30
This is the last part in my series of things I wish I had known about before starting a Microsoft BI project. I'll be taking my summer vacation now, so the blog will be quiet for the next month. After the break I will revise a couple of the tips based on feedback, so stay tuned.
#26: Decide how to source your data in Analysis Services and stick with it.
Ideally you will source your data from a correctly modeled star schema. Even then you may need to massage the source data before feeding it into SSAS. There are two ways of accomplishing this: through views in the database, or through data source views (dimensional) or queries (tabular). Unless you are unable to create views in your database (because it runs on a production system, etc.), I would strongly suggest using them. This gives you a clean separation of logic and a layer of abstraction between the SSAS solution and the data source, which means that clients connecting to the data warehouse directly will see the same data model as the SSAS solution. Migrating between different front-ends (such as dimensional and tabular) also becomes much simpler. In my solutions I never connect to tables directly; I always bind to views for everything and never implement any logic in the DSV or via queries.
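As a minimal sketch of that pattern (all schema, table, and column names here are hypothetical), the SSAS solution would bind only to a view like the one below, and any light massaging of the source data lives in the view rather than in the DSV:

```sql
-- Hypothetical abstraction view: SSAS binds to dw.vwFactSales, never to the table.
-- Clients querying the warehouse directly see exactly the same shape and logic.
CREATE VIEW dw.vwFactSales
AS
SELECT
    f.DateKey,
    f.CustomerKey,
    f.ProductKey,
    f.SalesAmount,
    ISNULL(f.DiscountAmount, 0)                 AS DiscountAmount,  -- light cleanup lives here
    f.SalesAmount - ISNULL(f.DiscountAmount, 0) AS NetAmount
FROM dbo.FactSales AS f;
```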
#27: Have some way of defining “current” time periods in your SSAS solution
Most SSAS solutions have a time dimension with dates, months, years, etc. In many ways it's the most important dimension in your solution, as it will be included in most reports and analyses and forms the basis for a lot of calculations (see previous tips). Having a notion of the current period in your time dimension will greatly improve the usability of your solution: reports are automatically populated with the latest data without any user interaction. It can also simplify ad-hoc analysis by setting the default members to the most current date / month / year, so that when users do not put these on one of the axes, the query defaults to the most recent time period. There are a number of ways of implementing this, including calculated members and named sets (for dimensional) and calculations for tabular, and the internet is abundant with sample solutions. Some of them are fully automated (using VBA time functions) and some require someone to manually set the current period. I prefer the latter if possible, to avoid reports showing incorrect data if something went wrong in the ETL.
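One simple way to support the "manually set" approach is to keep current-period flags on the date dimension and have the ETL (or an administrator) update them only after a load has been verified. This is a sketch only; the table, column, and variable names are hypothetical, and the model's named sets or default members would then be driven by these flags:

```sql
-- Hypothetical current-period flags on the date dimension, set explicitly after a
-- verified load so reports never default to a period whose data is incomplete.
DECLARE @verifiedLoadDate date = '2024-06-30';  -- set manually or by the ETL process

UPDATE dw.DimDate
SET IsCurrentDay   = CASE WHEN [Date]  = @verifiedLoadDate THEN 1 ELSE 0 END,
    IsCurrentMonth = CASE WHEN [Year]  = YEAR(@verifiedLoadDate)
                           AND [Month] = MONTH(@verifiedLoadDate) THEN 1 ELSE 0 END,
    IsCurrentYear  = CASE WHEN [Year]  = YEAR(@verifiedLoadDate) THEN 1 ELSE 0 END;
```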
#28: Create a testable solution
This is a really big topic, so I will emphasize what I have found most important. A BI solution has a lot of moving parts: your various source systems, your ETL pipeline, logic in the database, logic in your SSAS solution, and finally logic in your reporting solution. Errors happen in all of these layers, but your Integration Services solution is probably the most vulnerable part. Not only do technical errors occur; far more costly are logic errors where your numbers don't match what is expected. Luckily there is a lot you can do to help identify when these errors occur. As mentioned in tips #6 and #7, you should use a framework. You should also design your solution to be unit testable. This boils down to creating lots of small packages that can be run in isolation rather than large, complex ones. Most importantly, you should create validation queries that compare the data you load in your ETL with the data in the source systems. How these queries are crafted varies from system to system, but a good starting point is comparisons of row counts, sums of measures (facts), and numbers of unique values. The way I do it is to create the test before building anything: if I am to load customers that have changed since X, I first create the test query for the source system (row counts, distinct values, etc.), then the query for the data warehouse together with a comparison query, and only then do I start building the actual integration. Ideally you will package this into an SSIS solution that logs the results into a table. This way you can utilize your validation logic both while developing the solution and once it is deployed. If you are running SQL Server 2012 you might want to look into the data tap feature of SSIS, which lets you inspect data flowing through your pipeline from the outside.
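A minimal sketch of such a validation check (every object and column name here is hypothetical): compare a row count, a measure sum, and a distinct-key count between source and warehouse for one load window, and log the outcome so the same logic can run during development and after deployment:

```sql
-- Hypothetical source-vs-warehouse validation for one load window, logged to a table.
DECLARE @from date = '2024-06-01', @to date = '2024-07-01';

INSERT INTO audit.LoadValidation (RunTime, CheckName, SourceValue, TargetValue, Passed)
SELECT SYSDATETIME(), v.CheckName, v.SourceValue, v.TargetValue,
       CASE WHEN v.SourceValue = v.TargetValue THEN 1 ELSE 0 END
FROM (
    SELECT 'RowCount' AS CheckName,
           (SELECT COUNT(*) FROM src.SalesOrderLine WHERE OrderDate >= @from AND OrderDate < @to) AS SourceValue,
           (SELECT COUNT(*) FROM dw.FactSales       WHERE OrderDate >= @from AND OrderDate < @to) AS TargetValue
    UNION ALL
    SELECT 'SalesAmountSum',
           (SELECT SUM(LineAmount)  FROM src.SalesOrderLine WHERE OrderDate >= @from AND OrderDate < @to),
           (SELECT SUM(SalesAmount) FROM dw.FactSales       WHERE OrderDate >= @from AND OrderDate < @to)
    UNION ALL
    SELECT 'DistinctCustomers',
           (SELECT COUNT(DISTINCT CustomerId)  FROM src.SalesOrderLine WHERE OrderDate >= @from AND OrderDate < @to),
           (SELECT COUNT(DISTINCT CustomerKey) FROM dw.FactSales       WHERE OrderDate >= @from AND OrderDate < @to)
) AS v;
```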
#29: Avoid the source if you are scaling for a large number of users
Building a BI solution to scale is another very large topic. If you have lots of data, you need to scale your ETL, database, and SSAS subsystems. But if you have lots of users (thousands), your bottleneck will probably be SSAS: concurrently handling tens to hundreds of queries with acceptable performance is just not feasible. The most effective thing is to avoid hitting the source as much as possible. I usually take a two-pronged approach. First, I implement as much as possible as standard ("canned") reports that can be cached. Reporting Services really shines in these scenarios; it allows for flexible caching schemes that in most circumstances eliminate all trips to the data source. This usually covers around 70-80% of requirements. Second, I deploy an ad-hoc cube specifically designed and tuned for exploratory reporting and analysis, which I talked about in tip #17. In addition you need to consider your underlying infrastructure. Both SSRS and SSAS can be scaled up and out. For really large systems you will need to do both, even with the best of caching schemes.
#30: Stick with your naming standards
There are a lot of objects that need to be named in a solution, from the more technical objects such as database tables and SSIS packages to objects exposed to users such as SSAS dimensions and measures. The most important thing with naming conventions is not what they are, but that they are implemented. As I talked about in tip #24, changing a name can have far-reaching consequences. This is not just a matter of things breaking if you change them; consider also all of the support functionality in the platform, such as logging, that utilizes object names. Having meaningful, consistent names will make it a heck of a lot easier to get value out of this. So at the start of the project I would advise having a "naming meeting" where you agree on how you will name your objects. Should dimension tables be prefixed with Dim or Dim_? Should dimension names be plural (Customers) or singular (Customer)? And so on.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Thanks, Peter, for the great range of tips for using the Microsoft BI toolset. They are indeed a must-read for all developers and novice users of this great platform for businesses.
Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part V: Tips 21-25
I might just get all 30 done before summer vacation

#21: Avoid using discretization buckets for your dimension attributes
Discretization buckets let you group numerical attributes into ranges. Say you have a customer dimension that includes the customer's age; you can use this feature to group customers into age clusters such as 0-5, 6-10, and so on. While you can tweak how the algorithm creates groups and even provide naming templates for them, you still have relatively limited control. Worst-case scenario: a grouping that is referenced in a report is removed or changed by the algorithm. A better way of grouping these attributes is to do it yourself, either in the data source view or in a view in the database (there will be a separate tip on this). This way you have complete control over the distribution of values into groups and the naming of the groups.
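As a small sketch of doing the grouping yourself in a view (hypothetical schema and column names), the bucket boundaries and names are then entirely under your control and cannot be changed by an algorithm:

```sql
-- Hypothetical hand-rolled age bands in a view, replacing discretization buckets.
CREATE VIEW dw.vwDimCustomer
AS
SELECT
    c.CustomerKey,
    c.CustomerName,
    c.Age,
    CASE
        WHEN c.Age BETWEEN 0  AND 5  THEN '0-5'
        WHEN c.Age BETWEEN 6  AND 10 THEN '6-10'
        WHEN c.Age BETWEEN 11 AND 17 THEN '11-17'
        WHEN c.Age >= 18             THEN '18+'
        ELSE 'Unknown'
    END AS AgeGroup
FROM dw.DimCustomer AS c;
```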
#22: Do not build a SSAS solution directly on top of your source system
SSAS has a couple of features that enable it to source data directly from the normalized data model typically found in business applications such as ERP systems. For instance, you can "fake" a star schema through queries in the data source view. You can also utilize proactive caching to eliminate any ETL to populate your cube with data. This all sounds very tempting, but unfortunately I have never seen it work in reality. Unless you are working with a very small source system with impeccable data quality and few simultaneous users, you should avoid the temptation for all the usual reasons: proactive caching will stress your source system, data quality will most likely be an issue, integrating new data sources will be nearly impossible, etc. There is a reason BI projects spend 70-80% of their time modelling and integrating data.
#23: Deploy SSAS cubes with the deployment tool
If you are working with multiple environments (dev/test/prod), do not use the deployment functionality of Visual Studio to deploy to another environment; this will overwrite partitions and roles that may differ between the environments. Use the Deployment Wizard instead.
#24: Remember that your SSAS cubes are a single point of failure
Keep in mind that most client tools do not cope well with changes to SSAS data models. Any renames or removals you do in the model will most likely cause clients that reference those entities to fail. Make sure you test all your reports against the changed model before deploying it to production. Also, if you allow ad-hoc access to your SSAS solution be aware that users may have created reports that you do not know about. Query logging may help you a little here (it gives you an indication of which attribute hierarchies are in use). The best way to avoid all of this is to thoughtfully design your cube and the naming of your SSAS objects so that there is no need to change or remove anything in the first place.
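If you do allow ad-hoc access, the query log can at least hint at what would break. Assuming query logging has been enabled on the server, the log goes to a relational table (OlapQueryLog by default; verify the table and column names on your own instance), and a sketch like this shows which objects and users are actually hitting the model:

```sql
-- Hypothetical usage check against the SSAS query log (default table OlapQueryLog,
-- populated only when query logging is enabled in the server properties).
SELECT MSOLAP_ObjectPath,
       MSOLAP_User,
       COUNT(*)       AS QueryCount,
       MAX(StartTime) AS LastQueried
FROM dbo.OlapQueryLog
GROUP BY MSOLAP_ObjectPath, MSOLAP_User
ORDER BY QueryCount DESC;
```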
#25: Avoid “real time”
“Real time” means different things to different people. Some interpret it as “simultaneous to an event occurring”, while others have more leeway and various levels of tolerance for delays. I prefer the term “latency”: how old can the data in the BI solution get before it needs to be refreshed? The lowest latency I have ever implemented is two hours. That is hours, not minutes. I know this does not sound very impressive, but that is honestly the best I have been able to do at a reasonable cost. When doing “real time” you need to consider a lot of factors: partitioning, changes to dimensions, ROLAP vs. MOLAP / DirectQuery vs. xVelocity, source system access, how to administer it, etc. These things add up quickly to a point where the value simply does not justify the cost.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part IV: Tips 16-20
A note about the SSAS tips: Most tips are valid for both dimensional and tabular models. I try to note where they are not.
#16: Implement reporting dimensions in your SSAS solution
Reporting dimensions are constructs you use to make the data model more flexible for reporting purposes. They usually also simplify the management and implementation of common calculation scenarios. Here are two examples:
- A common request from users is the ability to select which measure to display for a given report in Excel through a normal filter. This is not possible with normal measures / calculations. The solution is to create a measure dimension with one member for each measure. Expose a single measure in your measure group (I frequently use “Value”) and assign the correct measure to it in your MDX script / DAX calculation based on the member selected in the measure dimension. The most frequently used measure should be the default member for this dimension. By doing this you not only give the users what they want, you also simplify a lot of calculation logic, as in the next example.
- Almost all data models require various date-related calculations such as year to date, same period last year, etc. It is not uncommon to have more than thirty such calculations. To manage this effectively, create a separate date calculation dimension with one member for each calculation, and do your time-based calculations based on what is selected in that dimension. If you implemented the construct in the previous example, this can be done generically for all measures in your measure dimension. Here is an example of how to do it in tabular; for dimensional, use the time intelligence wizard to get you started. (A sketch of the warehouse side of both constructs follows this list.)
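On the warehouse side, both reporting dimensions are just small tables, sketched below with hypothetical names. The member-selection logic itself lives in the MDX script or DAX calculations and is not shown here:

```sql
-- Hypothetical reporting dimensions: one member per measure / per date calculation.
-- The MDX script or DAX measures read the selected member and return the right value.
CREATE TABLE dw.DimMeasure (
    MeasureKey  int          NOT NULL PRIMARY KEY,
    MeasureName nvarchar(50) NOT NULL
);
INSERT INTO dw.DimMeasure (MeasureKey, MeasureName)
VALUES (1, N'Sales Amount'), (2, N'Order Quantity'), (3, N'Gross Margin');

CREATE TABLE dw.DimDateCalculation (
    DateCalculationKey  int          NOT NULL PRIMARY KEY,
    DateCalculationName nvarchar(50) NOT NULL
);
INSERT INTO dw.DimDateCalculation (DateCalculationKey, DateCalculationName)
VALUES (1, N'Actual'), (2, N'Year to Date'), (3, N'Same Period Last Year');
```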
#17: Consider creating separate ad-hoc and reporting cubes
Analysis Services data models can become very complex. Fifteen to twenty dimensions connected to five to ten fact tables is not uncommon. Additionally, various analysis and reporting constructs (such as a time calculation dimension) can make a model difficult for end users to understand. There are a couple of features that help reduce this complexity, such as perspectives, role security, and default members (at least for dimensional), but often the complexity is so ingrained in the model that it is difficult to simplify just by hiding measures / attributes / dimensions from users. This is especially true if you use the reporting dimensions I talked about in tip #16. You also need to consider the performance aspect of exposing a large, complex model to end-user ad-hoc queries; this can very quickly go very wrong. So my advice is to consider creating a separate model for end users to query directly. This model may reduce complexity in a variety of ways:
- Coarser grain (Ex: Monthly numbers not daily).
- Less data (Ex: Only last two years, not since the beginning of time).
- Fewer dimensions and facts.
- Be targeted at a specific business process (use perspectives if this is the only thing you need).
- Simpler or omitted reporting dimensions.
Ideally your ad-hoc model should run on its own hardware. Obviously this will add both investment and operational costs to your project but will be well worth it when the alternative is an unresponsive model.
#18: Learn .NET
A surprisingly high number of BI consultants I have met over the years do not know how to write code. I am not talking about HTML or SQL here, but "real" code in a programming language. While we mostly use graphical interfaces when we build BI solutions, the underlying logic is still based on programming principles. If you don't get these, you will be far less productive with the graphical toolset. More importantly, .NET is widely used in Microsoft-based solutions as "glue" or to extend the functionality of the core products. This is especially true for SSIS projects, where you quite frequently have to implement logic in scripts written in C# or VB.NET, but it also applies to most components in the MS BI stack. They all have rich APIs that can be used to extend their functionality and integrate them into solutions.
#19: Design your solution to utilize Data Quality Services
I have yet to encounter an organization where data quality has not been an issue. Even if you have a single data source, you will probably run into problems with data quality. Data quality is a complex subject: it's expensive to monitor and expensive to fix, so you might as well be proactive from the get-go. Data Quality Services is available in the BI and Enterprise editions of SQL Server. It allows you to define rules for data quality and monitor your data for conformance to these rules. It even comes with SSIS components, so you can integrate it with your overall ETL process. You should include this in the design stage of your ETL solution, because implementing it in hindsight will be quite costly, as it directly affects the data flow of your solution.
#20: Avoid SSAS unknown members
Aside from the slight overhead they cause during processing, having unknown members means that your underlying data has issues. Fix those issues in the data warehouse and ETL, not in the SSAS data model.
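A quick way to find the rows that would otherwise end up under the unknown member (a sketch with hypothetical names): look for fact keys that have no matching dimension row:

```sql
-- Hypothetical orphan check: fact rows whose ProductKey has no matching dimension
-- member are the ones SSAS would map to the unknown member during processing.
SELECT f.ProductKey, COUNT(*) AS OrphanRows
FROM dw.FactSales AS f
LEFT JOIN dw.DimProduct AS p ON p.ProductKey = f.ProductKey
WHERE p.ProductKey IS NULL
GROUP BY f.ProductKey;
```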
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Consultant at a tech consulting company with 501-1,000 employees
My 30 tips for building a Microsoft BI solution, Part III: Tips 11-15
#11: Manage your own surrogate keys.
In SQL Server it is common to use an INT or BIGINT set as IDENTITY to create unique, synthetic keys. The number is a sequence and a new value is generated when we execute an insert. There are some issues with this. Quite often we need this value in our Integration Services solution to do logging and efficient loads of the data warehouse (there will be a separate tip on this). This means that sometimes we need the value before an insert and sometimes after. You can obtain the last value generated by issuing a SCOPE_IDENTITY command but this will require an extra trip to the server per row flowing through your pipeline. Obtaining the value before an insert happens is not possible in a safe way. A better option is to generate the keys yourself through a script component. Google for “ssis surrogate key” and you will find a lot of examples.
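The tip describes doing this with an SSIS script component; as a rough T-SQL sketch of the same idea (all names hypothetical), you can also read the current maximum once and assign the new keys yourself with ROW_NUMBER, so the values are known to the load before the insert happens:

```sql
-- Hypothetical T-SQL variant of managing your own surrogate keys: one read of the
-- current maximum, then keys are assigned by the load itself rather than by IDENTITY.
DECLARE @maxKey int = (SELECT ISNULL(MAX(CustomerKey), 0) FROM dw.DimCustomer);

INSERT INTO dw.DimCustomer (CustomerKey, CustomerBusinessKey, CustomerName)
SELECT @maxKey + ROW_NUMBER() OVER (ORDER BY s.CustomerId),
       s.CustomerId,
       s.CustomerName
FROM staging.Customer AS s;
```

An SSIS script component achieves the same thing inside the pipeline by seeding a counter variable from that single MAX query and incrementing it per row.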
#12: Excel should be your default front-end tool.
I know this is a little bit controversial. Some say Excel lacks the power of a "real" BI tool. Others say it writes inefficient queries. But hear me out. Firstly, if you look at where Microsoft is making investments in the BI stack, Excel is right up there at the top. Contrast that with what they are doing with PerformancePoint and Reporting Services and it's pretty clear that Excel is the most future-proof of the lot. Microsoft has added a lot of BI features over the last couple of releases and continues to expand Excel through new add-ins such as Data Explorer and GeoFlow. Additionally, the integration with SharePoint gets tighter and tighter. The Excel web client of SharePoint 2013 is pretty much on par with the fat Excel client when it comes to BI functionality, which means you can push the new features out to users who have not yet upgraded to newer versions of Excel. When it comes to the efficiency with which Excel queries SSAS, a lot has improved, but being a general analysis tool it will never be able to optimize its queries the way you would if you wrote them specifically for a report.
Please note that I am saying "default", not "best". Of course there are better, purebred Business Intelligence front-ends out there; some of them even have superior integration with SSAS. But it's hard to beat the cost-value ratio of Excel if you are already running a Microsoft shop. If you add in the fact that many managers and knowledge workers already do a lot of work in Excel and know the tool well, the equation becomes even more attractive.
#13: Hug an infrastructure expert that knows BI workloads.
Like most IT solutions, Microsoft BI solutions are only as good as the hardware and server configurations they run on. Getting this right is very difficult and requires deep knowledge of operating systems, networks, physical hardware, security, and the software that is going to run on these foundations. To make matters worse, BI solutions have workloads that often differ fundamentally from line-of-business applications in the way they access system resources and services. If you work with a person who knows both of these aspects, you should give him or her a hug every day, because they are a rare breed. Typically, BI consultants know a lot about the characteristics of BI workloads but nothing about how to configure hardware and software to support them. Infrastructure consultants, on the other hand, know a lot about hardware and software but nothing about the specific ways BI solutions access them. Here are three examples. First, Integration Services is mainly memory constrained. It is very efficient at processing data as a stream as long as there is enough memory; the instant it runs out of memory and starts swapping to disk, you will see a dramatic decrease in performance. So if you are doing heavy ETL, co-locating it with other memory-hungry services on the same infrastructure is probably a bad idea. The second example is the way data is loaded and accessed in data warehouses. Unlike business systems, which often do random data access ("Open the customer card for Henry James"), data warehouses are sequential: batches of transactions are loaded into the warehouse, and data is retrieved by reports / Analysis Services models in batches. This has a significant impact on how you should balance the hardware and configuration of your SQL Server database engine and differs fundamentally from how you handle workloads from business applications. The last example may sound extreme but is something I have encountered multiple times. When businesses outsource their infrastructure to a third party, they give up some control and knowledge in exchange for the ability to "focus on their core business". This is a good philosophy with real value. Unfortunately, if no one on the requesting side of this partnership knows what to ask for when ordering infrastructure for your BI project, what you get can be pretty far off from what you need. Recently a client of mine made such a request for a SQL Server based data warehouse server. The hosting partner followed their SLA protocol and supplied a high-availability configuration with a mandatory full recovery model for all databases. You can imagine the exploding need for disk space for the transaction logs when loading batches of 20 million rows each night. As these examples illustrate, it is critical for a successful BI implementation to have people with infrastructure competency on your BI team who also understand how BI solutions differ from "traditional" business solutions and can apply the right infrastructure configurations.
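On that last example, the change itself is small once someone knows to ask for it. As a sketch (the database name is hypothetical, and whether this is acceptable depends entirely on your backup and availability requirements), a batch-loaded data warehouse is often a candidate for the simple recovery model so nightly loads do not inflate the transaction log:

```sql
-- Hypothetical: switch a batch-loaded warehouse to the simple recovery model.
-- Check your backup / SLA requirements before doing this on a real system.
ALTER DATABASE MyDataWarehouse SET RECOVERY SIMPLE;
```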
#14: Use Team Foundation Server for your BI projects too.
A couple of years ago putting Microsoft BI projects under source control was a painful experience where the benefits drowned in a myriad of technical issues. This has improved a lot. Most BI artifacts now integrate well with TFS and BI teams can greatly benefit from all the functionality provided by the product such as source control, issue tracking and reporting. Especially for larger projects with multiple developers working against the same solution TFS is the way to go in order to be able to work effectively in parallel. As an added benefit you will sleep better at night knowing that you can roll back that dodgy check-in you performed a couple of hours ago. With that said there are still issues with the TFS integration. SSAS data source views are a constant worry as are server and database roles. But all of this (including workarounds) is pretty well documented online.
#15: Enforce your attribute relationships.
This is mostly related to SSAS dimensional, but you should also keep it in mind when working with tabular. Attribute relationships define how the attributes of a dimension relate to (roll up into) each other. For example, products roll up into product subgroups, which in turn roll up into product groups. This is a consequence of the denormalization process many data warehouse models go through, where complex relationships are flattened into wide dimension tables. These relationships should be defined in SSAS to boost general performance. The best-practice analyzer built into Data Tools makes sure you remember this with its blue squiggly lines. Usually it takes some trial and error before you get it right, but in the end you are able to process your dimension without those duplicate attribute key errors. If you still don't know what I am talking about, look it up online, such as here. So far so good. Problems start arising when these attribute relationships are not enforced in your data source, typically a data warehouse. Continuing with the earlier example, over time you might get the same product subgroup referencing different product groups ("parents"). This is not allowed and will cause processing of the dimension to fail in SSAS (those pesky duplicate key errors). To handle this a bit more gracefully than simply leaving your cube(s) in an unprocessed state (with the angry phone calls this brings with it), you should enforce the relationship at the ETL level, in Integration Services. When loading a dimension you should reject / handle cases where these relationships are violated and notify someone that this happened. The process should make sure the integrity of the model is maintained by assigning "violators" to a special member of the parent attribute that marks them as "suspect". That way your cubes can still be processed while highlighting data that needs attention.
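A sketch of the detection side of that ETL check (hypothetical staging names): during the dimension load, find subgroups that reference more than one parent group, so those rows can be routed to a "suspect" parent member and someone can be notified instead of the dimension processing failing:

```sql
-- Hypothetical attribute-relationship check: a product subgroup referencing more
-- than one product group would cause duplicate-key errors when SSAS processes
-- the dimension, so these rows get quarantined / re-parented during the load.
SELECT s.ProductSubgroup,
       COUNT(DISTINCT s.ProductGroup) AS ParentCount
FROM staging.Product AS s
GROUP BY s.ProductSubgroup
HAVING COUNT(DISTINCT s.ProductGroup) > 1;
```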
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Hi Peter!
Nice article. Let's discuss points 11 to 15 in detail:
#11: I partially agree with you on this, because I don't see the need to create a separate surrogate key for SSIS. My preference is to use the keys from the production tables; personally, I use the change-table method to perform incremental loads. If a separate key is required in your data warehouse model, you can create it from a combination of columns, by reading the value from the source table, or by loading a value into an SSIS variable and then assigning it to your table.
#12: I prefer to use Excel as a tool for quick data verification or number reconciliation by connecting to my cube. I know Microsoft has been investing a lot in Excel through Power Pivot and so on, but what about the future of "Power BI", the new tool we hear will have the capabilities to become the number-one BI reporting tool? Personally, I don't think Excel can be used as an enterprise reporting tool.
#13: A rare thing to have. Another point: it is really hard to find a BI consultant with experience not only in cube optimization but also in report and database optimization. If you have one of these, I call them a "real asset", because they will help you not only in OLAP but also in OLTP, in your SSIS, and in your reporting. I suggest including at least one of these people in a BI project; it will save you time and money.
#14: I have been using TFS to keep my SSRS reports under source control, and it has been nice; it doesn't act up badly. But I do have reservations about keeping my SSIS packages in TFS, because it has happened to me multiple times that they got corrupted somehow; luckily I do not rely only on TFS, so I still had the source. Always have a backup strategy for recovering if your source control fails; be prepared, because it can happen at any time.
#15: It is always good to define hierarchies and attribute relationships; whenever possible, define hierarchies. Remember that once you define a hierarchy, hide the underlying attribute so it is not duplicated in the reporting tool. For example, in PerformancePoint end users might otherwise see the same attribute both inside the hierarchy and in the dimension, so set the attribute's visibility to hidden.
Designing a BI solution is an interesting job; with each development you will learn new things. Always plan your development and choose the right tools for your final solution; if you are unsure about something, discuss it with other consultants to pick the right product.
Regards,
Hasham Niaz

I found the Microsoft BI stack to be far better, and it's "free" since it comes along with SQL Server. Because it is free, people usually do not care much about its advantages but are louder about its disadvantages. SSRS is one of my favorite tools; it provides flexibility, good web support, and strong AD integration for security. First, SSRS should not be running in the production server instance, as that will kill your performance. Second, make sure it runs in a scale-out manner (more than one server) based on total load. Third, change the memory settings in your SSRS configuration so that it can utilize more memory on the server than the default; this cuts down rendering time. Fourth, never run big reports on the production server; find an alternative solution, and if that's not possible, use Resource Governor so that your production database does not take the performance hit. Check that query response times and actual query plans for long-running reports match the DBA's performance baseline for that day.
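On the Resource Governor suggestion, here is a minimal sketch (pool, group, and login names are hypothetical, and Resource Governor requires an edition that supports it) that caps what the reporting login can consume on the production instance:

```sql
-- Hypothetical Resource Governor setup: cap CPU and memory for the SSRS login so
-- heavy reports cannot starve the production workload.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25);

CREATE WORKLOAD GROUP ReportingGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 25)
    USING ReportingPool;
GO

-- Classifier function (created in master): route sessions from the hypothetical
-- SSRS service login into the reporting workload group.
CREATE FUNCTION dbo.fnReportClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE WHEN SUSER_SNAME() = N'DOMAIN\ssrs_service'
                 THEN N'ReportingGroup' ELSE N'default' END);
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnReportClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```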