Consultant at a computer software company with 51-200 employees
Understanding the Basics of Windows Azure Service Bus
As we become more distributed in our everyday lives, we must change our approach to and view of how we build software. Distributed environments call for distributed software solutions. According to Wikipedia, a distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The most important part of a distributed system is the ability to pass a unified set of messages. Windows Azure Service Bus allows developers to take advantage of a highly responsive and scalable message communication infrastructure through its Relayed Messaging and Brokered Messaging solutions.
-- Relay Messaging --
Relay Messaging provides the most basic messaging requirements for a distributed software solution. This includes the following:
- Traditional one-way Messaging
- Request/Response Messaging
- Peer to Peer Messaging
- Event Distribution Messaging
These capabilities allow developers to easily expose a secured service that resides on a private network to external clients, without making changes to the firewall or corporate network infrastructure.
Relay Messaging does not come without limitations. Its greatest disadvantage is that it requires both the Producer (sender) and the Consumer (receiver) to be online at the same time. If the receiver is down and unable to respond to a message, the sender receives an exception and the message cannot be processed. Because of its remoting nature, relay messaging not only creates a dependency on the receiver; it also makes every response subject to network latency. Relay Messaging is therefore not suitable for occasionally connected clients.
-- Brokered Messaging --
Unlike Relay Messaging, Brokered Messaging allows asynchronous, decoupled communication between the Producer and Consumer. The main components of the brokered messaging infrastructure that allow for asynchronous messaging are Queues, Topics, and Subscriptions.
Queues
Service Bus queues provide standard FIFO (First In, First Out) delivery. Queues bring a durable and scalable messaging solution, creating a system that is resilient to failures. When messages are added to a queue, each one remains there until a single agent has processed it. Queues also allow overloaded Consumers to be scaled out and to continue processing at their own pace.
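The queue behavior described above can be sketched in a few lines of Python. This is an in-process stand-in for a Service Bus queue, not the actual SDK: it shows FIFO delivery, each message going to exactly one agent, and scale-out by adding competing consumers.

```python
import queue
import threading

# In-process stand-in for a Service Bus queue: messages persist until
# exactly one consumer removes and processes each of them, in FIFO order.
q = queue.Queue()
for i in range(10):
    q.put(f"order-{i}")

processed = []
lock = threading.Lock()

def consumer():
    while True:
        try:
            # Each message is handed to exactly one competing consumer.
            msg = q.get(timeout=0.1)
        except queue.Empty:
            return  # queue drained, this worker exits
        with lock:
            processed.append(msg)

# Scaling out an overloaded Consumer means adding more workers that
# drain the same queue, each at its own pace.
workers = [threading.Thread(target=consumer) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(processed))  # -> 10: every message handled exactly once
```

The key property is that adding workers never duplicates work: the queue, not the sender, decides which consumer gets each message.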
Topics and Subscriptions
In contrast to queues, Topics and Subscriptions permit one-to-many communication which enables support for the publish/subscribe pattern. This mechanism of messaging also allows Consumers to choose to receive discrete messages that they are interested in.
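The one-to-many, filtered delivery of Topics and Subscriptions can be illustrated with a minimal publish/subscribe sketch. The class and method names below are illustrative only, not the Service Bus API: a topic fans each message out to every subscription, and each subscription keeps only the discrete messages matching its own filter.

```python
# Minimal pub/sub sketch: one topic, many subscriptions, each with a filter.
class Topic:
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name, predicate):
        # Each subscription has its own filter and its own inbox.
        self.subscriptions[name] = (predicate, [])

    def publish(self, message):
        # One publish fans out to every subscription whose filter matches.
        for predicate, inbox in self.subscriptions.values():
            if predicate(message):
                inbox.append(message)

    def receive(self, name):
        return self.subscriptions[name][1]

topic = Topic()
topic.subscribe("all-events", lambda m: True)
topic.subscribe("high-priority", lambda m: m["priority"] == "high")

topic.publish({"body": "disk full", "priority": "high"})
topic.publish({"body": "heartbeat", "priority": "low"})

print(len(topic.receive("all-events")))     # -> 2: sees everything
print(len(topic.receive("high-priority")))  # -> 1: filtered view
```

Contrast this with the queue above: a queue gives each message to one consumer, while a topic gives a copy of each matching message to every interested subscription.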
-- Common Use Cases --
When should you consider using Windows Azure Service Bus? What problems could Service Bus solve? There are countless scenarios where you may find benefit in your application having the ability to communicate with other applications or processes. A few examples include an inventory transfer system or a factory monitoring system.
Inventory Transfer
In an effort to offer exceptional customer service, most retailers allow customers to have merchandise transferred to a store that is more conveniently located for them. The store that has the merchandise must therefore communicate the details of the transaction (logistical, customer, and inventory information) to the store receiving the product. To solve this problem using Windows Azure Service Bus, the retailer would set up a relay messaging service so that every retail location can receive a message describing the inventory transfer transaction. When the receiving store gets this notification, it uses the information to track the item and update its inventory.
Factory Monitoring
Windows Azure Service Bus could also be used to enable factory monitoring. Typically, machines within a factory are constantly monitored to ensure system health and safety. Accurate monitoring of these systems is a cost saver in the manufacturing industry because it allows factory workers to take a more proactive response to potential problems. By taking advantage of Brokered Messaging, factory robots and machines can broadcast various KPI (Key Performance Indicator) data to the server, allowing subscribed agents such as monitoring software to respond to the broadcast messages.
-- Summary --
In summary, Windows Azure Service Bus offers a highly responsive and scalable solution for distributed systems. For basic request/response or one-way messaging such as transferring inventory within a group of retail stores, Relay Messaging will meet most system requirements. If your requirements call for a more flexible system that will support asynchrony and multiple message consumers, it is better to take advantage of the Queues and Topics that are made available in Brokered Messaging.
Disclosure: The company I work for is a Microsoft Partner - http://magenic.com/AboutMagenic
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Head of IT with 51-200 employees
(Some) Best Practices for Building Windows Azure Cloud Applications
In this blog post, I will talk about some of the best practices for building cloud applications. I started working on this as a presentation for a conference; however, that didn’t work out, hence this blog post. Please note that these are some of the best practices I think one can follow while building cloud applications running in Windows Azure; there are many more out there. This blog post focuses on building stateless PaaS Cloud Services (you know, that Web/Worker role thingie :) utilizing Windows Azure Storage (Blobs/Queues/Tables) and Windows Azure SQL Database (SQL Azure).
So let’s start!
Things To Consider
Before jumping into building cloud applications, there’re certain things one must take into consideration:
- Cloud infrastructure is shared.
- Cloud infrastructure is built on commodity hardware to achieve best bang-for-buck and it is generally assumed that eventually it will fail.
- A typical cloud application consists of many sub-systems, where:
- Each sub-system is a shared system on its own e.g. Windows Azure Storage.
- Each sub-system has its limits and thresholds.
- Sometimes individual nodes fail in a datacenter and, though very rarely, an entire datacenter fails.
- You don’t get physical access to the datacenter.
- Understanding latency is very important.
With these things in mind, let’s talk about some of the best practices.
Best Practices – Protection Against Hardware Issues
These are some of the best practices to protect your application against hardware issues:
- Deploy multiple instances of your application.
- Scale out instead of scaling up; in other words, favor horizontal scaling over vertical scaling. It is generally recommended to go with a larger number of smaller sized Virtual Machines (VMs) instead of a few larger ones, unless you have a specific need for larger sized VMs.
- Don’t rely on VM’s local storage as it is transient and not fail-safe. Use persistent storage like Windows Azure Blob Storage instead.
- Build decoupled applications to safeguard your application against hardware failures.
Best Practices – Cloud Services Development
Now let’s talk about some of the best practices for building cloud services:
- It is important to understand what web role and worker role are and what benefit they offer. Choose wisely to distribute functionality between a web role and worker role.
- Decouple your application logic between web role and worker role.
- Build stateless applications. For state management, it is recommended that you make use of distributed cache.
- Identify static assets in your application (e.g. images, CSS, and JavaScript files) and use blob storage for that instead of including them with your application package file.
- Make proper use of service configuration / app.config / web.config files. While you can dynamically change the values in a service configuration file without redeploying, the same is not true with app.config or web.config file.
- To achieve best value for money, ensure that your application is making proper use of all VM instances in which it is deployed.
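The stateless-application point above can be made concrete with a tiny sketch. A plain Python class stands in for a distributed cache here; it is not a real cache client. Because session state lives outside the role instances, the load balancer can route any request to any instance.

```python
# Stateless web tier sketch: no per-instance session dictionary.
# The Cache class below stands in for a distributed cache cluster.
class Cache:
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def put(self, key, value):
        self.store[key] = value

shared_cache = Cache()

def handle_request(instance_id, session_id):
    # Every instance consults the shared cache instead of local memory,
    # so it does not matter which instance the load balancer picks.
    count = (shared_cache.get(session_id) or 0) + 1
    shared_cache.put(session_id, count)
    return f"instance {instance_id} served visit {count}"

# Two different role instances serve the same session correctly.
print(handle_request("web-role-0", "sess-abc"))  # -> instance web-role-0 served visit 1
print(handle_request("web-role-1", "sess-abc"))  # -> instance web-role-1 served visit 2
```

If the counter lived in a local variable on web-role-0, the second request would have started over at 1; externalizing state is what makes the instances interchangeable.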
Best Practices – Windows Azure Storage/SQL Database
Now let’s talk about some of the best practices for using Windows Azure Storage (Blobs, Tables and Queues) and SQL Database.
Some General Recommendations
Here’re some recommendations I could think of:
- Blob/Table/SQL Database – Understand what each can do for you. For example, one might be tempted to save images in a SQL database, whereas blob storage is the ideal place for them. Likewise, one could consider Table storage over SQL database if transaction/relational features are not required.
- It is important to understand that these are shared resources with limits and thresholds which are not in your control i.e. you don’t get to set these limits and thresholds.
- It is important to understand the scalability targets of each storage component and design your application to stay within those targets.
- Be prepared that you’ll encounter “transient errors” and have your application handle (and recover from) these transient errors.
- It is recommended that your application uses retry logic to recover from these transient errors.
- You can use TOPAZ or Storage Client Library’s built-in retry mechanism to handle transient errors. If you don’t know, TOPAZ is Microsoft’s Transient Fault Handling Application Block which is part of Enterprise Library 5.0 for Windows Azure. You can read more about TOPAZ here: http://entlib.codeplex.com/wikipage?title=EntLib5Azure.
- For best performance, co-locate your application and storage. With storage accounts, the cloud service should be in the same affinity group, while with Windows Azure SQL Database (WASD), the cloud service should be in the same datacenter.
- From disaster recovery point of view, please enable geo-replication on your storage accounts.
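The retry recommendation above is worth seeing in code. This is a hand-rolled sketch of the same exponential-backoff idea that TOPAZ and the Storage Client Library's retry policies implement for you; the `TransientError` class and the simulated failure counts are made up for illustration.

```python
import time

class TransientError(Exception):
    """Stand-in for a throttling/timeout error from a shared cloud service."""
    pass

def with_retries(operation, max_attempts=4, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the error to the caller
            # Exponential backoff: wait 10ms, 20ms, 40ms, ... before retrying.
            time.sleep(base_delay * (2 ** (attempt - 1)))

# Simulate a storage call that is throttled twice before succeeding.
calls = {"n": 0}
def flaky_storage_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("server busy")
    return "ok"

print(with_retries(flaky_storage_call))  # -> ok (after two transient failures)
```

The backoff matters as much as the retry itself: retrying a throttled shared resource immediately tends to prolong the throttling, while spacing attempts out gives the service room to recover.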
Best Practices – Windows Azure SQL Database (WASD)
Here’re some recommendations I could think of as far as working with WASD:
- It is important to understand (as mentioned above, and it will be mentioned many more times in this post :)) that it’s a shared resource, so expect your requests to get throttled or timed out.
- It is important to understand that WASD != On Premise SQL Server. You may have to make some changes in your data access layer.
- It is important to understand that you don’t get access to data/log files. You will have to rely on alternate mechanisms like “Copy Database” or “BACPAC” functionality for backup purposes.
- Prepare your application to handle transient errors with WASD. Use TOPAZ for implementing retry logic in your application.
- Co-locate your application and SQL Database in same data center for best performance.
Best Practices – Windows Azure Storage (Blobs, Tables & Queues)
Here’re some recommendations I could think of as far as working with Windows Azure Storage:
- (Again :)) It is important to understand that it’s a shared resource. So expect your requests to get throttled or timed out.
- Understand the scalability targets of Storage components and design your applications accordingly.
- Prepare your application to handle transient errors from Windows Azure Storage. Use TOPAZ or the Storage Client library’s Retry Policies for implementing retry logic in your application.
- Co-locate your application and storage account in same affinity group (best option) or same data center (next best option) for best performance.
- Table Storage does not support relationships so you may need to de-normalize the data.
- Table Storage does not support secondary indexes so pay special attention to querying data as it may result in full table scan. Always ensure that you’re using PartitionKey or PartitionKey/RowKey in your query for best performance.
- Table Storage has limited transaction support. For full transaction support, consider using Windows Azure SQL Database.
- With Table Storage, pay very special attention to “PartitionKey” as this is how data in a table is organized and managed.
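The PartitionKey advice above can be sketched by modeling a table as data bucketed by partition. The function names here are illustrative, not the Table Storage API: a query that names both keys is a cheap point lookup, while a query on any other property must visit every partition.

```python
from collections import defaultdict

# Model a table as PartitionKey -> {RowKey: entity}.
table = defaultdict(dict)

def insert(pk, rk, entity):
    table[pk][rk] = entity

def point_query(pk, rk):
    # Both keys known: a single-bucket, single-row lookup.
    return table[pk].get(rk)

def partition_query(pk, predicate):
    # PartitionKey known: only one partition is searched.
    return [e for e in table[pk].values() if predicate(e)]

def full_scan(predicate):
    # No PartitionKey in the query: every partition must be visited.
    return [e for part in table.values()
              for e in part.values() if predicate(e)]

insert("store-01", "sku-123", {"name": "widget", "qty": 5})
insert("store-02", "sku-456", {"name": "gadget", "qty": 0})

print(point_query("store-01", "sku-123")["name"])  # -> widget
print(len(full_scan(lambda e: e["qty"] == 0)))     # -> 1
```

In the real service the cost gap is far larger than in this sketch, because partitions are spread across storage nodes, so a full scan crosses server boundaries as well.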
Best Practices – Managing Latency
Here’re some recommendations I could think of as far as managing latency is concerned:
- Co-locate your application and data stores. For best performance, co-locate your cloud services and storage accounts in the same affinity group and co-locate your cloud services and SQL database in the same data center.
- Make appropriate use of Windows Azure CDN.
- Load balance your application using Windows Azure Traffic Manager when deploying a single application in different data centers.
Some Recommended Reading
Though you’ll find a lot of material online, a few books/blogs/sites I can recommend are:
Cloud Architecture Patterns – Bill Wilder: http://shop.oreilly.com/product/0636920023777.do
CALM (Cloud ALM) – Simon Munro: https://github.com/projectcalm/Azure-EN
Windows Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/
Patterns & Practices Windows Azure Guidance: http://wag.codeplex.com/
Summary
What I presented above are only a few of the best practices one could follow while building cloud services. I kept this blog post rather short on purpose; in fact, one could write a blog post for each item. I hope you’ve found this information useful. I’m pretty sure there are more best practices out there, so please do share them in the comments. If I have made any mistakes in this post, please let me know and I will fix them ASAP. If you have any questions, feel free to ask them in the comments.
http://gauravmantri.com/2013/01/11/some-best-practices-for-building-windows-azure-cloud-applications/
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Architect at a tech vendor with 10,001+ employees
Building Private Clouds with Windows Azure Pack (WAP)
The model of elastic self-service deployment of VMs and applications that comes with the Azure public cloud is changing the way IT departments allocate servers. Rather than tying servers to a specific application, IT departments now look to provide a pool of shared and dynamically self-allocated resources. There are compelling needs to run on premise a private version of the Azure Cloud that provides many of the multi-tenant services and benefits of the public Cloud, and many hosting partners want to offer these Azure Cloud OS services to their customers. MS wants to give a consistent platform across hosting providers, private DCs, and Azure Cloud. The newly released Windows Azure Pack (WAP) decouples a few Azure OS features and a modified portal with a common code base and brings them into the private Cloud. It allows enterprises to assume the role of service providers, and it removes limitations so that service providers can try to garner enterprise workloads.
Using WAP, your IT department can install these new features. (This was previously Windows Azure Services for Windows Server, released at the start of 2013 with System Center.) The Azure Pack is built on top of Windows Server 2012 and System Center 2012 R2 with Service Provider Foundation. An IT department that builds on Windows Server 2012 and System Center can move to WAP anytime. One of WAP’s goals is to drive a consistent IT ops and developer experience. These technologies will evolve over time: some features will be released first in WAP and rolled into Azure Cloud, and vice versa. WAP comes at no cost for datacenters running System Center and Windows Server 2012.
Here are the services/workloads in the first release of Windows Azure Pack.
1. Web sites
• IIS currently is a server-centric platform but needs to evolve to be Cloud-first. The IIS team rebuilt a new hosting PaaS with load balancing and on-demand scaling, optimized for devops. High density supports thousands of users at a lower cost than traditional IIS, with new capabilities. This is a good motivation to move to the on-premises Cloud instead of running original IIS.
• Multi-machine PaaS container with data and app tier and Load balancing. The platform can talk to many source code providers. As an IT Ops person you just deploy the Web PaaS and don’t have to mess with configuration issues.
2. Service Bus
• The Service Bus has been available on premises for a while but with restrictions. It now uses the same messaging architecture as the Azure Cloud Service Bus, with no limitations.
• Reliable messaging to build a cloud app that scales and communicates with other apps or across other boundaries. Messaging provides a way to pass and receive messages across platforms.
• Supports publish and subscribe messaging patterns across a variety of access points on multiple platforms using standard protocols.
3. Virtual Machines (IaaS)
• Allows you to provision and manage VMs as a consumer and define your networking. Gallery of apps and fully self-service experience for provisioning VMs.
• Consistent Azure VM API on premise and in Cloud so you can access VMs the same way regardless of where DC is that you are using.
• Adds a new Azure feature called Virtual Machine Roles (similar to AMIs in AWS, which are Amazon EC2 virtual machine templates). A VM Role provides a way to scale VMs elastically and to define metadata for its container and parameters. VM Roles are templates the IT department can define, make available for self-provisioning, and scale. Templates can be versioned and take initial container information such as instance count, VM size, and hard disk, along with admin credentials, OS version, and the IP address type and allocation method. You can specify app-specific settings as well.
• Virtual Networks allow you to define the networking for your VMs. Site-to-site connectivity allows customers to connect their Cloud networks to their private networks. This is good for hosters as well as the enterprise.
4. Service Management Portal and API
• Federates identities with Active Directory and is standards-based.
• Takes the same portal as in Azure, decouples it, and runs it in the on-premises DC, where it talks to the consistent Service Management API.
Service Consumers
Service consumers are those who consume apps (developers) and infrastructure (IT Ops) from Service Providers. They need self-service administration and want to acquire capacity on demand within limits defined by the IT department or hosting provider (with an internal approval process to increase beyond those limits). They need predictable costs and the ability to get up and running quickly.
IT departments are now moving internally to a charge-back model (internal dollars vs. credit card) where IT Ops charges back to different departments, almost like internal hosters. Today, some internal IT requests lead internal folks to go out of band to get their job done via external hosting providers, or to acquire hardware/software without IT approval. WAP helps with simple and quick self-provisioning, so there is no longer a need to acquire hosting hardware outside the IT budget.
Additional Consumer Services
• Integration with AD for the enterprise. ADFS and co-admins are critical for the enterprise (not for service providers).
• Integration with SQL Server and MySQL. Support for SQL Server AlwaysOn makes databases highly available across a cluster.
• Co-Admins in WAP allow you to associate an IT group with a co-admin account. This does not exist in Azure Cloud yet.
• Console Connect – Today Remote desktop in Azure Cloud IaaS will only work on a public network (RDP for Windows VM or SSH for Linux). If you can’t get to it publicly you can’t remote into VM. Now, with WAP, you have a new feature called “Console Connect” through a secure channel that allows you to connect into a machine that is not running on a public network but in an enterprise on premise network.
Service Providers
Service Providers want to provide the most service at the lowest cost to service consumers. Providers want to use hardware efficiently by automating everything. They may also want to differentiate on SLAs and profiles for different environments, offering different SLAs per workload, something not present in the public cloud.
As the enterprise looks to move from capital to operational expenditures, service providers see a window of opportunity to acquire enterprise business in the leased model of a private Cloud. WAP allows service providers to easily shift their offerings in this direction to attract this business from the enterprise.
Provider Portal
WAP supplies a Provider Portal for the cloud services that service providers can offer their tenants (enterprises or hosters). Providers can offer different SLAs to customers through the portal and tailor how those services are offered. The Provider Portal runs inside the enterprise firewall and manages a different set of objects than the normal portal. You can manage a high-level PaaS Web hosting container that hosts multiple Web sites, and you can connect to VM clouds and Service Bus deployments along with their health. There is an automation tab that integrates with runbooks in System Center; you can edit runbook jobs, schedule them, and tie them to events coming from System Center.
Additional Provider Services
In the Provider Portal there is a Plans service that allows providers to decide what types of plans a customer can access. Providers pick the services to make available and then define a set of constraints and quotas for each subscription. Providers can also pick the VM templates and Gallery items available. Plans map capabilities to backend infrastructure.
• Public plan allows subscribers to try out a plan
• Private plan allows you to manually permit a subscription.
Additionally, in the Provider Portal there is a User Accounts service allowing providers to manage users, add co-admins, or suspend/delete a subscription.
For additional information on the Windows Azure Pack go to http://www.microsoft.com/en-us/server-cloud/windows-azure-pack.aspx.
Disclosure: The company I work for is a Microsoft Partner - http://www.aditi.com/about-us/alliance/
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Operations Expert at a tech services company with 5,001-10,000 employees
Bring Windows Azure to your datacenter
How about having the Windows Azure experience locally on your datacentre?
Microsoft is now enabling Hosting Service Providers to use Windows Server 2012 and System Center 2012 to deliver the same great experiences already found in Windows Azure.
The first two of these finished services are high density website hosting and virtual machine provisioning and management. Hosting Service Providers enable these modules through the new Service Management API and optional portal.
Create high scale WebSites – Out of the box automation lowers customer onboarding costs while metering and throttling of resources can help tailor customer offerings. Supports many frameworks including ASP.NET, Classic ASP, PHP and Node.js with full Git integration for Source Code Control. Download and install the Web Sites service on machines dedicated for the Web Sites roles.
Create Virtual Machines – Leverage the power of System Center and Windows Server to easily create an Infrastructure as a Service solution for customers to provision and manage VMs. Download the System Center 2012 SP1 and install and configure SPF per the deployment guide.
Administer WebSites – Administer Web Sites and Virtual Machine services on Windows Server while also offering customers the same Windows 8-style self-service user experience found on Windows Azure to provision and manage their Web Sites and Virtual Machines. Download the Service Management Portal and Service Management API Express bits to install the Admin and Tenant portals and the Service Management API on one machine. Alternatively, download the WebPI, click on the Products tab, and select Windows Azure to deploy the portals and the Service Management API on separate machines.
More Info: http://www.microsoft.com/hosting/en/us/services.aspx
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
CTO at a tech vendor with 10,001+ employees
Early Thoughts on the Windows Azure Announcements
Today’s release marks a significant milestone for Windows Azure. To date, Windows Azure has been a platform that allows developers to build and run applications across Microsoft’s global datacenters – the key emphasis has been on “applications”. Windows Azure has not been a platform for providing the underlying infrastructure for running your own virtual machine – this has been a key pain point for many customers looking to move to the cloud that Microsoft has heard loud and clear. Today’s announcement makes it clear that Windows Azure is more than just a Platform-as-a-Service provider.
In my opinion, there are three significant components of today’s announcements worth delving into deeper:
- New Infrastructure-as-a-Service (IaaS) capabilities.
- Free (or low-cost) hosting with Windows Azure Websites.
- Enhanced cloud networking capabilities that support VPN connections between an on-premises corporate network and Windows Azure.
Until now, Microsoft has never competed directly with Amazon EC2 with respect to IaaS, nor with cloud platforms like Heroku. The new IaaS and Websites capabilities, combined with the ability to extend on-premises networks to the cloud, provide a number of ways for Windows Azure to distinguish itself from other platforms and, in my opinion, will drive many new enterprises and a large number of developers to adopt Windows Azure.
Infrastructure-as-a-Service
Windows Azure has long had the concept of a “Virtual Machine role” but the fundamental problem has been the inability to persist changes made to the virtual machine image provided by the customer (i.e. the guest VM) during reboots or recycling. Supporting VM persistence in Windows Azure means that the guest VM will not lose these updates. This unlocks many workloads that previously did not work in Windows Azure – certainly products like SharePoint and SQL Server but also custom line-of business applications that previously were difficult to move to Windows Azure.
In addition to VM persistence, Windows Azure will also give customers the ability to run Linux VMs. There’s been a lot of interest and speculation regarding Microsoft’s strategy moving forward with Linux and open source. I think Microsoft recognizes that their customers run more than just Windows in their enterprise, and this is an opportunity for Windows Azure to run as many workloads as possible. We’ve seen this shift in Microsoft in a number of different ways – support for Node.js and Java in Windows and Windows Azure, the creation of a new interoperability subsidiary, and many more. The cloud provides a way to make it easier to connect all of these different platforms and technologies, and my take is that Microsoft is trying to make Windows Azure the best and simplest place to run your applications regardless of the platform or technology.
Windows Azure Websites
It’s exciting to see Microsoft continue to evolve its strategy with Windows Azure to make it increasingly accessible to the breadth of developers out there.
Windows Azure Websites is a hosting platform for web applications. It provides a number of different deployment and runtime options beyond the existing Web Role, including:
- Target both Microsoft and non-Microsoft technologies already running in the environment, including SQL Azure, MySQL, PHP, Node.js, and (of course) .NET.
- Deploy via Git, Web Deploy, FTP, or TFS.
- Run in a high-density / multitenant VM for little-to-no cost or choose a dedicated deployment path.
In addition to providing simpler and more consistent ways to deploy applications across different hosting platforms (e.g. Windows Azure, Windows Server, and hosting providers), Windows Azure Websites provides a way for Microsoft to bring thousands—perhaps even hundreds of thousands—of new developers to the platform with the offer of little-to-no cost hosting.
Cloud Networking
Windows Azure Virtual Networks allows a company to connect its cloud applications and solutions to its local network. This occurs at the networking layer through standard VPN devices. Coupled with IaaS support, this provides a ton of flexibility with respect to the kinds of workloads a customer can move to Windows Azure. Don’t want to move your sensitive SQL Server database? You don’t need to. Set up a VPN to your applications in Windows Azure and let them communicate directly back to the applications that live on-premises.
There’s certainly a lot more to talk about – new services, the portal, the SDK, tools, and so much more! These thoughts are pretty early; in fact, I’m writing this before today’s MEET Windows Azure event.
Disclosure: The company I work for is partners with several vendors
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
I totally agree with your review. My opinion is that Windows Azure is only a part of the future. The future is the concept of pushing all applications into the cloud and utilizing world wide hosting providers. The upfront costs of pushing products out the door is heavily reduced this way.
CEO at a computer software company with 51-200 employees
Top Reasons Developers Should Use Windows Azure Mobile Services
With the recent release of Social Cloud, I asked the RedBit team what the top reasons are for using Windows Azure Mobile Services, and here is what we came up with.
Easy Third Party Authentication
Using the Identity feature of Azure Mobile Services allows developers to quickly implement OAuth-based authentication without having to worry about a lot of the plumbing code that is required when writing everything from scratch.
You can easily incorporate authentication with
- Microsoft Account
As a developer, all you have to do is:
- Specify the keys in the portal
- Use the mobile SDK for iOS, Android, Windows 8, or Windows Phone in your application
- Authenticate via the SDK by calling MobileServiceClient.LoginAsync()
Here is what it would look like from the dashboard to set up the keys
To learn more about this feature see Get Started With Authentication with Mobile Services
Data Storage
Most mobile apps written today need some form of data storage and usually the process is
- Figure out where to host it
- Figure out what type of database to use
- Write some REST APIs to access the data
- Make sure the APIs are secured
Using the data feature of Azure Mobile Services developers can quickly create data tables, secure the data tables for read/write operations and also write custom scripts to run when an insert, update, delete or read operation is performed on the data.
From the client side, using the SDK, you call the MobileServiceClient.GetTable<>() method and the data is retrieved. If the data is secured via the portal settings, you will need to log in using the client SDK before attempting to retrieve it.
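The interplay between login and secured tables can be sketched conceptually. This is a Python illustration of the flow only, with all names made up; the real client is the MobileServiceClient with LoginAsync() and GetTable<>(). A table marked as requiring authentication rejects reads until the client has logged in.

```python
# Conceptual sketch of the Mobile Services auth + data flow.
class MobileClientSketch:
    def __init__(self):
        self.user = None
        # One table, configured in the "portal" to require authentication.
        self.tables = {"todo": {"permission": "authenticated",
                                "rows": [{"id": 1, "text": "ship app"}]}}

    def login(self, provider, token):
        # Stands in for LoginAsync(): the service would validate the OAuth
        # token with the provider and hand back a user identity.
        self.user = {"provider": provider, "token": token}

    def get_table(self, name):
        table = self.tables[name]
        if table["permission"] == "authenticated" and self.user is None:
            raise PermissionError("login required before reading this table")
        return table["rows"]

client = MobileClientSketch()
try:
    client.get_table("todo")          # not logged in yet: rejected
except PermissionError:
    denied = True

client.login("microsoftaccount", "fake-oauth-token")
rows = client.get_table("todo")       # now succeeds

print(denied, len(rows))  # -> True 1
```

The point is the ordering: the table permission is enforced server-side, so the client must complete the login step before any read or write against a secured table will succeed.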
For more information see Get Started with Data in Mobile Services.
Client Libraries
Azure Mobile Services comes with client libraries for the main mobile platforms available in the market today, which are:
- iOS
- Android
- Windows Phone 8
- Windows 8 (C# & JavaScript)
- Xamarin for iOS & Android
Leveraging these libraries and Azure Mobile Services on the back end, developers can focus on writing their app and not all the extra plumbing required for things such as authentication.
Custom APIs
The API feature is relatively new (as of June 24, 2013) to Azure Mobile Services, but it allows developers to quickly build APIs into their systems to be accessed by various client applications. You can quickly build out the APIs required by your app and just as quickly secure them, ensuring that only authenticated users have access. Definitely something to use more often in the future!
Push Notifications
I’m a big fan of push notifications for mobile apps because they allow you to stay connected and engaged with your users. They’re also a great way to entice users to open your apps, which is especially useful if you are monetizing your apps with in-app advertising.
Using Azure Mobile Services, developers can quickly get push notifications up and running on the various platforms such as iOS, Android, Windows Phone 8, and Windows 8, and it’s as easy as setting a few keys in your Azure Mobile Services Dashboard.
Definitely something every developer should look at to keep their users engaged with their app.
For more information on how to get this running, see Get Started with Push Notifications in Mobile Services.
Overall, I think Windows Azure Mobile Services really helps accelerate the development cycle and get your product to market faster. It allows you to focus on building out your product and not have to worry about server infrastructure or the plumbing code required for things like authentication. When you need to scale, it’s just a few clicks and you are ready to handle the extra load from your users.
So those are our top reasons for using Windows Azure Mobile Services. If you have used it, what are your top reasons? Ping me or the RedBit team on Twitter or leave a comment here.
https://www.redbitdev.com/top-reasons-developers-should-use-windows-azure-mobile-services/
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Consultant at a tech services company with 51-200 employees
5 Great Features in Windows Azure Backup
Windows Azure Backup was recently released as a preview feature in Windows Azure’s already comprehensive suite. This is a great feature, even in its current preview state. However, due to its recent release, it can be difficult to find helpful descriptions of features already available. In this blog, we will highlight five great features of the Windows Azure Backup Preview, with a short description of each one.
1. Scheduled Backup
In the world of backup and recovery, scheduled backups are not necessarily a new feature. New or old, it is definitely a convenient and essential backup feature, especially for those who value data integrity.
Windows Azure does a great job of providing users with a simple interface that can be configured in the Windows Azure Backup Agent snap-in for Windows Server 2012. The snap-in (see Figure 1) allows users to easily customize a specific backup schedule, frequency, and granularity for that specific server. After it is configured, the specified data is locally compressed, encrypted, and sent to Azure for storage.
Figure 1: Windows Azure Scheduling Backup Wizard
2. Granular Recovery
While scheduled and convenient backups are helpful, they don’t mean anything unless an equally granular and robust recovery solution is in place. Windows Azure Backup has provided users with just that. The Windows Azure Backup Storage can be recovered from your local server or from the Azure Management Console. The user can decide what folder or file they would like to recover and specify exactly which backup version they wish to recover.
In the event of a server failure, it is even possible to stand up a new instance of that server and recover your data from Windows Azure Recovery Services. Whether you’re missing a file, folder, or entire server, Windows Azure Recovery Services can meet all your needs.
3. Compressed and Encrypted Traffic
If you are already familiar with Windows Azure, you are likely aware that any traffic sent to the Azure Cloud is compressed and encrypted on your local machine before being uploaded. This is not any different for Windows Azure Recovery Services. The reason it has been included as a great feature relates to how it affects your monthly bill. Windows Azure will only bill on your compressed backups. Since Azure charges by GB/month, this can save you a significant amount of money.
4. Competitive Pricing
Windows Azure Backup is a Cloud service; therefore, it is a monthly subscription service just like the rest of Azure’s features. It is priced by the average GB/month. For example, if your compressed storage is 20 GB for the first half of the month and 40 GB for the second half of the month, you will pay the average, or 30 GB. The current price is $0.50 per GB/month, but while the service is in preview, Microsoft is offering a 50% discount across the board. This proves to be an attractive and cost-effective option for customers who are considering moving their backup solution to the Cloud. Considering the initial capital saved by choosing a Cloud backup service, this can be extremely beneficial in many scenarios.
Figure 2: Windows Azure Backup Pricing
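The average-GB/month pricing above is easy to check with a quick calculation. This sketch uses the $0.50/GB rate and 50% preview discount quoted in the text; the helper function name is just for illustration.

```python
# Back-of-the-envelope check of the billing example above.
PRICE_PER_GB_MONTH = 0.50   # quoted rate
PREVIEW_DISCOUNT = 0.50     # 50% off while the service is in preview


def monthly_backup_cost(daily_compressed_gb):
    """Average the compressed storage over the month, then apply the rate."""
    average_gb = sum(daily_compressed_gb) / len(daily_compressed_gb)
    return average_gb * PRICE_PER_GB_MONTH * (1 - PREVIEW_DISCOUNT)


# 20 GB for the first half of the month, 40 GB for the second half:
usage = [20] * 15 + [40] * 15
print(monthly_backup_cost(usage))  # 7.5 (billed on the 30 GB average)
```

Because billing is on the compressed size, the effective rate per GB of source data is lower still, which is the point made in feature 3 above.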
5. Backups Live in Cloud
In my opinion, the final and most important feature is a simple one. All the backed-up content lives in Microsoft’s Cloud. That means two things: (1) no paying for on-premises storage solutions and (2) no data loss during power outages or server failures. It is a simple feature, but if you consider the implications of losing backed-up data, it’s a very important one.
In conclusion, Windows Azure Backup is a highly convenient and robust solution for anyone considering Cloud backup and recovery. The five features described above are only a few of the features included in this fine-tuned machine called Windows Azure Backup.
If you are interested in learning more about Windows Azure Backup, please feel free to contact Credera or visit our blog.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Owner with 51-200 employees
Active Directory in Azure – Step by Step
Ever since Windows Azure Infrastructure Services were announced in preview, I keep hearing the question: "How do you run Active Directory in an Azure VM and then join other computers to it?" This article assumes that you already know how to install and configure the Active Directory Domain Services role, promote a server to Domain Controller, join computers to a domain, create and manage Azure Virtual Networks, and create and manage Azure Virtual Machines and add them to a Virtual Network.
Disclaimer: Use this solution at your own risk. What I describe here is purely my practical observation and is based on repeatable reproduction. Things might change in the future.
The foundation pillar for my setup is the following (totally mine!) statement: the first Virtual Machine you create in an empty Virtual Network in Windows Azure will get the 4th IP address in the sub-net range. That means that if your sub-net address space is 192.168.0.0/28, the very first VM to boot into that network will get IP address 192.168.0.4. That VM will always get this IP address across intentional reboots, accidental restarts, system healing (hardware failure and VM re-instantiation), etc., as long as there is no other VM booting while the first one is down.
First, let's create the virtual network. Given the knowledge from my foundation pillar, I will create a virtual network with two separate address spaces! One address space will be 192.168.0.0/29; this will be the address space for my Active Directory and Domain Controller. The second will be 172.16.0.0/22; here I will add my client machines.
Next is one of the most important parts: assigning a DNS server for my Virtual Network. I will set the IP address of my DNS server to 192.168.0.4! This is because I know (assume) the following:
- The very first machine in a sub-network will always get the 4th IP address from the allocated pool;
- I will place only my AD/DC/DNS server in my AD Designated network;
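Which address "the 4th IP from the allocated pool" actually is can be checked with the Python standard library. This is just a sanity check of the addressing math for the planned 192.168.0.0/29 AD address space, not anything Azure-specific.

```python
# Enumerate the /29 AD address space and pick out the 4th address,
# which is the one the first VM booted into the sub-net will receive
# (Azure holds back the network address and the first three IPs).
import ipaddress

subnet = ipaddress.ip_network("192.168.0.0/29")
first_vm_ip = list(subnet)[4]  # index 0 is the network address 192.168.0.0
print(first_vm_ip)             # 192.168.0.4
```

This confirms that pointing the Virtual Network's DNS setting at 192.168.0.4 will line up with the AD/DC VM, provided it is the first (and only) VM in that sub-net.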
Now divide the network into address spaces as described and define the subnets. I use the following network configuration, which you can import directly (however, please note that you must have already created the Affinity Group referred to in the network configuration! Otherwise network creation will fail):
<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <Dns>
      <DnsServers>
        <DnsServer name="NS" IPAddress="192.168.0.4" />
      </DnsServers>
    </Dns>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="My-AD-VNet" AffinityGroup="[Use Existing Affinity Group Name]">
        <AddressSpace>
          <AddressPrefix>192.168.0.0/29</AddressPrefix>
          <AddressPrefix>172.16.0.0/22</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="ADDC">
            <AddressPrefix>192.168.0.0/29</AddressPrefix>
          </Subnet>
          <Subnet name="Clients">
            <AddressPrefix>172.16.0.0/22</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>
Now create a new VM from the gallery, picking your favorite OS image. Assign it to the ADDC sub-net. Wait for it to be provisioned. RDP into it. Add the Active Directory Domain Services server role. Configure AD. Add the DNS server role (this will be required by the AD role). Ignore the warning that a DNS server requires a fixed IP address. Do not change the network card settings! Configure everything and restart when asked. Promote the computer to Domain Controller. Voilà! Now I have a fully operational AD DS + DC.
Let's add some clients to it. Create a new VM from the gallery. When prompted, add it to the Clients sub-net. When everything is ready and provisioned, log in to the VM over RDP. Change the system settings – Join a domain. Enter your configured domain name. Enter a domain administrator account when prompted. Restart when prompted. Voilà! Now my new VM is joined to my domain.
Why does it work? Because I have:
- Defined DNS address for my Virtual Network to have IP Address of 192.168.0.4
- Created dedicated Address Space for my AD/DC which is 192.168.0.0/29
- Placed my AD/DC designated VM in its dedicated address space
- Created dedicated Address Space for client VMs, which does not overlap with AD/DC designated Address Space
- I put client VMs only in designated Address Space (sub-net) and never put them in the sub-net of AD/DC
Of course, you will get the same result with a single Address Space and two sub-nets, as long as you are careful how you configure the DNS for the Virtual Network and which sub-net you put your AD and client VMs in.
This scenario has been validated, replayed, and reproduced tens of times, and is being used in production environments in Windows Azure. However – use it at your own risk.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.

Hi Travis Brown,
Can you please tell me the maximum size of a message that can be added to the queue for transmission?