NGINX for Service Fabric
AspNetCore and Service Fabric, both relatively new in the market, have become fairly stable in the last couple of months. Usually, when you create a Service Fabric application with a stateless service built on AspNetCore, the service uses the HttpSys server (formerly WebListener). This is because the HttpSys driver on Windows is production-grade and can be exposed directly to the Internet; in fact, IIS (commonly used to host AspNetCore apps and also used by Azure Web Apps) is built on top of the HttpSys driver. This approach works fine as long as you don't need to share a port across multiple hostnames. There are at least two other approaches to this, which I will discuss later in this post.
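For context, a stateless AspNetCore service on Service Fabric typically wires up HttpSys along these lines. This is an illustrative sketch rather than code from my solution, and it assumes the Microsoft.ServiceFabric.AspNetCore.HttpSys package plus an endpoint named ServiceEndpoint in the ServiceManifest:

```csharp
using System.Collections.Generic;
using System.Fabric;
using Microsoft.AspNetCore.Hosting;
using Microsoft.ServiceFabric.Services.Communication.AspNetCore;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

// Hypothetical stateless service that exposes an AspNetCore app through HttpSys.
internal sealed class WebApiService : StatelessService
{
    public WebApiService(StatelessServiceContext context) : base(context) { }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        return new[]
        {
            new ServiceInstanceListener(serviceContext =>
                new HttpSysCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
                    new WebHostBuilder()
                        .UseHttpSys()             // HttpSys (formerly WebListener) instead of Kestrel
                        .UseStartup<Startup>()    // the usual AspNetCore Startup class
                        .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                        .UseUrls(url)
                        .Build()))
        };
    }
}
```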
Service Fabric is a powerful platform. Besides giving you the raw power of the virtual machine, you get fine-grained control over scaling. Also worth mentioning is the Actor programming model, which is useful in certain scenarios.
In this particular scenario, I needed to host only 4 AspNetCore applications in a 5-node cluster. Since these applications are self-hosted, each needs to listen for traffic on its own port. They are multi-tenant applications where tenants are resolved via the hostname, i.e. from a base domain example.com you'd have tenants such as:
- tenant1.example.com
- tenant2.example.com
The same would be done for the other three domains: example.info, example.org and example.net.
1. Azure App Service
First, I must say this can be handled very efficiently with Azure App Service, but while the app was running I kept hitting limits that forced me to move to Service Fabric; scaling up and out on App Service did not solve the issue. Further, one of the services needed an on-premises connection through an old Cisco VPN device (not in my control) which only works with a policy-based VPN Gateway on Azure, while Azure App Service only supports route-based VPN Gateways. No need to go any further.
2. HttpSys-based server only
This approach requires setting up the services on different ports so that each domain can use its own certificate. The resulting URLs would look like:
- https://tenant1.example.com/api/v1/data
- https://tenant2.example.org:3400/api/v1/data
- https://tenant3.example.info:6293/api/v1/data
- https://tenant4.example.net:7437/api/v1/data
With the exception of the first one, the others look very ugly (I know what you are thinking … but it's a matter of perspective).
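For illustration, this option means each service's ServiceManifest declares its own HTTPS endpoint on a dedicated port, roughly like this (the endpoint name and port are placeholders):

```xml
<!-- Per-service ServiceManifest fragment: every application gets its own input port -->
<Resources>
  <Endpoints>
    <Endpoint Name="ServiceEndpoint" Protocol="https" Type="Input" Port="3400" />
  </Endpoints>
</Resources>
```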
The other problem with this approach is certificate handling. Service Fabric requires that you supply the certificate thumbprint in the endpoint binding policy of the application manifest, and I don't like having to bind the HTTPS certificate to my code. The best I could do was make the thumbprint a parameter that I can override. However, there is no getting around the requirement to deploy the certificate to the nodes beforehand using Azure Key Vault. In addition, it can be a real song and dance when your certificate process requires changing certificates every so often, such as when Let's Encrypt's 90-day certificates start supporting wildcard certificates in January 2018.
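Roughly, the relevant ApplicationManifest pieces look like the sketch below, with the thumbprint pulled out as an overridable parameter (the names are placeholders, not my actual manifest):

```xml
<!-- ApplicationManifest fragment: the certificate is referenced via a parameterized thumbprint -->
<Parameters>
  <Parameter Name="SslCertThumbprint" DefaultValue="" />
</Parameters>
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="Tenant1ApiPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <EndpointBindingPolicy EndpointRef="ServiceEndpoint" CertificateRef="SslCert" />
  </Policies>
</ServiceManifestImport>
<Certificates>
  <EndpointCertificate Name="SslCert" X509FindValue="[SslCertThumbprint]" />
</Certificates>
```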
3. Azure Application Gateway
Here, the service does all the heavy lifting, including load balancing, SSL offloading/termination and URL rewriting. There are discussions of the same here, here, here and many more you can find on Google.
I thought this was the best option I had until I realized there is no support for wildcard domains in the HTTP listener. There is a UserVoice request for it here.
In addition, besides paying for the instance(s), one still has to pay for the data transfer incurred, as indicated in the pricing footnotes. Why do you do this, Microsoft?
4. Azure API Management
This would work well in this scenario, and the official documentation does a good job of explaining the details. However, my biggest issue here was the pricing, which starts at $49.
5. Nginx
Honestly, I was really trying to avoid having to write my own solution, but I was tired of searching. Then I came across Haishi's blog post, which covers this very well. You basically only need to install the NuGet package in your Service Fabric application project and the post-install scripts will set it up as a guest executable (a rough sketch of what that amounts to follows the list of URLs below). In the post, he also explains how to set it up as a service from scratch. The latter approach is what fits my scenario, because adding the same package to multiple applications running in Service Fabric would be rather untidy; it seemed best to have Nginx run as the only service in an application of its own. Configuring Nginx is rather painless, and I got it running in less than an hour. In the end, my URLs looked better:
- https://tenant1.example.com/api/v1/data
- https://tenant2.example.org/api/v1/data
- https://tenant3.example.info/api/v1/data
- https://tenant4.example.net/api/v1/data
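For reference, the guest-executable route that the NuGet package automates essentially boils down to a ServiceManifest code package pointing at nginx.exe, along the lines of this rough sketch (names and versions are placeholders):

```xml
<!-- Guest-executable sketch: Service Fabric simply launches nginx.exe from the code package -->
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>nginx.exe</Program>
      <WorkingFolder>CodePackage</WorkingFolder>
    </ExeHost>
  </EntryPoint>
</CodePackage>
```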
I borrowed a lot from Haishi's blog post, so I'd encourage you to read it first if you haven't already done so. I added this section just to explain why I needed to change the implementation. The folder structure remained largely the same.
Service Code
I only added a few elements to the original implementation.
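To give an idea of the shape of the wrapper (an illustrative sketch, not my exact code), it is a stateless service that starts nginx.exe from its code package and stops it when Service Fabric cancels the service:

```csharp
using System;
using System.Diagnostics;
using System.Fabric;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

// Hypothetical wrapper: launches nginx.exe shipped in the code package and
// shuts it down when the service is stopped or moved to another node.
internal sealed class NginxWrapperService : StatelessService
{
    public NginxWrapperService(StatelessServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // nginx.exe is assumed to sit inside the service's code package.
        string codePath = Context.CodePackageActivationContext.GetCodePackageObject("Code").Path;

        Process nginx = Process.Start(new ProcessStartInfo
        {
            FileName = Path.Combine(codePath, "nginx.exe"),
            WorkingDirectory = codePath,
            UseShellExecute = false
        });

        try
        {
            // Keep the service alive until Service Fabric asks it to stop.
            await Task.Delay(Timeout.Infinite, cancellationToken);
        }
        catch (OperationCanceledException)
        {
            // Stop nginx when the service is being shut down.
            if (nginx != null && !nginx.HasExited)
            {
                nginx.Kill();
            }
            throw;
        }
    }
}
```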
Usage
I did not want to repeat the same process of setting up Nginx on another cluster, so I prepared a package to allow reuse. The only things that change are the upstream ports (a.k.a. backend ports), the certificates and the hostnames; therefore, the only file I change is the nginx.conf file.
You can download the application package here.
Extract the file and use the three simple scripts (Deploy, Undeploy and Upgrade) to deploy the pre-compiled package to your cluster. You must first call Connect-ServiceFabricCluster if you are not already connected to the cluster.
To deploy the application:
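Something along these lines should do; the endpoint, thumbprints and script name below are placeholders rather than values from the package:

```powershell
# Connect to the cluster first (skip if you already have an open connection)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westeurope.cloudapp.azure.com:19000" `
                             -X509Credential `
                             -FindType FindByThumbprint -FindValue "<client-cert-thumbprint>" `
                             -ServerCertThumbprint "<cluster-cert-thumbprint>" `
                             -StoreLocation CurrentUser -StoreName My

# Then run the bundled deployment script from the extracted folder
.\Deploy.ps1
```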
Before deployment, you should edit the configuration file named nginx.conf in the Config folder of the service package named NginxWrapperPkg. Further details on configuration can be found here.
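As a rough illustration of the kind of nginx.conf I mean (hostnames, ports and certificate paths are placeholders), each domain gets a server block that terminates SSL and proxies to the corresponding backend port:

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    # One upstream per self-hosted AspNetCore application (its own backend port)
    upstream example_com_backend {
        server localhost:3400;
    }

    server {
        listen       443 ssl;
        server_name  *.example.com;   # tenants are resolved from the hostname

        ssl_certificate      certs/example.com.crt;
        ssl_certificate_key  certs/example.com.key;

        location / {
            proxy_pass         http://example_com_backend;
            proxy_set_header   Host $host;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto $scheme;
        }
    }

    # Repeat a similar upstream/server pair for *.example.org, *.example.info and *.example.net
}
```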
Disclaimer
- The code/package provided is to be used at your own risk; there are no guarantees whatsoever.
- A managed service is preferable to this approach because of patching, etc.
- The Nginx configuration provided has not been hardened; that is a task the user should undertake at their own pace and time.