From Code to Installers: How I Built azddns with .NET, AOT, and Real-World Packaging

Dynamic DNS (DDNS) is a method of automatically updating DNS records (like the A/AAAA records for a domain) when an IP address changes. In plain terms, if your server’s IP is not static (common on home networks or cloud setups), DDNS ensures your chosen hostname always maps to the current IP. Traditionally, people use services like No-IP or DynDNS with clients such as ddclient. As the ddclient docs explain, it’s commonly used to keep a domain name pointing at a service on a network with a regularly changing IP.

Why did I create azddns? Well, I’m an Azure user who wanted an easy, cross-platform way to update Azure DNS records dynamically, without relying on third-party DNS providers. Existing tools like ddclient weren’t tailored for Azure DNS zones (support is limited or requires custom scripting), and using the full Azure CLI in a cron job felt heavy. I wanted a simple CLI that could run anywhere, from a Raspberry Pi at home to a Kubernetes cluster, and update an Azure DNS A record when the public IP changes. I wanted it to be a single, self-contained binary (no installed runtimes or interpreters needed), much like Go-based utilities. Being a C# guy, this made .NET’s Native AOT the obvious choice. It compiles a .NET app directly to a standalone native executable with fast startup and low memory usage. Modern .NET (since .NET 7/8) supports ahead-of-time compilation, meaning our tool can run on machines without the .NET runtime installed, a big win for portability. It got better with .NET 9 (the latest as of writing) and will improve further in later releases. See docs.

Building a .NET Native AOT CLI for Dynamic DNS

Building the core app in .NET was straightforward. azddns is essentially a console app that determines the machine’s current public IP and updates a specific DNS A/AAAA record in an Azure DNS zone to that IP (creating the record if needed). Azure provides a robust SDK for DNS management, so I used the Azure SDK (Azure.ResourceManager.Dns source code, NuGet) to interact with DNS zones and record sets. Authentication is handled via DefaultAzureCredential (so it can use environment variables, managed identities, credentials from the Azure CLI, Visual Studio, etc.).

In pseudocode, the logic is roughly:

string ip = GetPublicIPv4();  // e.g. call an external service or DNS resolver
var credential = new DefaultAzureCredential();
var client = new ArmClient(credential);
 
// Get the DNS zone resource (GetDefaultSubscriptionAsync returns the resource directly)
SubscriptionResource subscription = await client.GetDefaultSubscriptionAsync();
ResourceGroupResource rg = await subscription.GetResourceGroupAsync("<resourceGroupName>");
DnsZoneResource dnsZone = await rg.GetDnsZoneAsync("<zoneName>");
 
// Upsert the A record via the zone's A-record collection
var record = new DnsARecordData { TtlInSeconds = 300 };
record.DnsARecords.Add(new DnsARecordInfo { IPv4Address = IPAddress.Parse(ip) });
await dnsZone.GetDnsARecords().CreateOrUpdateAsync(WaitUntil.Completed, "<recordName>", record);

The above is illustrative. The actual code handles config input, IPv6, error cases, etc.

Enabling Native AOT in .NET was the interesting part. By adding <PublishAot>true</PublishAot> to the project file and publishing for a specific runtime, the .NET build generates a single native executable per target OS/architecture. I set up the project to target multiple runtime identifiers (win-x86, win-x64, win-arm64, linux-x64, linux-arm64, osx-x64, osx-arm64) and used the .NET CLI to publish each. The result: seven binaries (one per platform/arch) that bundle the app and the minimal required .NET runtime components, compiled ahead of time. No JIT, no DLLs next to the executable; just one file per platform. The docs show 32-bit Linux support for AOT, but I could not get it to work and it seems to be broken as per this issue, which means no running on a Pi Zero for now.
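For reference, the publish setup boils down to a few project-file properties. A sketch of the relevant csproj fragment (the property names are standard .NET/MSBuild settings; the exact values here are illustrative):

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net9.0</TargetFramework>
  <!-- Compile to a self-contained native executable at publish time -->
  <PublishAot>true</PublishAot>
  <!-- One binary per RID, each produced via: dotnet publish -c Release -r [rid] -->
  <RuntimeIdentifiers>win-x86;win-x64;win-arm64;linux-x64;linux-arm64;osx-x64;osx-arm64</RuntimeIdentifiers>
</PropertyGroup>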

Then came my next challenge: the Azure SDK for .NET does not fully support Native AOT. I discovered this when I tried to read an existing DNS record and the SDK hit a wall, throwing an exception about “reflection-based serialization” being disabled. I raised an issue, hoping future SDK versions will use AOT-friendly serialization. Essentially, the Azure DNS SDK was calling JsonSerializer.Deserialize for a type (SystemData) without AOT-available metadata, causing a runtime error in a trimmed/AOT app. The error message clearly indicated the issue: "Reflection-based serialization has been disabled for this application. Either use the source generator APIs or explicitly configure the JsonSerializerOptions.TypeInfoResolver property.".

In other words, the JSON serialization in the SDK relied on reflection, which Native AOT disallows by default. I worked around this by setting the JsonSerializerIsReflectionEnabledByDefault flag to true via a property in the project file. More info in docs.
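Concretely, the workaround is a single property in the project file (this is the documented System.Text.Json feature switch; enabling it trades some binary size for compatibility):

<PropertyGroup>
  <!-- Re-enable reflection-based System.Text.Json serialization under AOT/trimming -->
  <JsonSerializerIsReflectionEnabledByDefault>true</JsonSerializerIsReflectionEnabledByDefault>
</PropertyGroup>

The cleaner long-term fix is source-generated serialization inside the SDK itself, which is what the issue above asks for.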

The rest of the application code worked great under AOT. The final native binaries each came in under 10 MB, which is acceptable given they bundle the Azure SDK and authentication logic. Performance-wise, azddns starts up in less than a blink and consumes minimal memory, confirming that the Native AOT approach in .NET can yield Go-like usability for CLI tools.

Packaging azddns for Cross-Platform Use

Having a set of binaries is nice, but I wanted installation to be as frictionless as possible for users on each platform. This meant distributing azddns in various formats:

  • Plain archives (zip/tar.gz)

    The simplest option. I zip the Windows binaries and tarball the Linux/macOS binaries. Users can download an archive from the GitHub Releases page, extract it, and run azddns. This is straightforward, but not the slickest experience (no auto-updates, manual PATH management, etc.). It’s mainly provided as a fallback or for use in Docker images and such.

  • Homebrew (macOS/Linux)

    Mac users love Homebrew, so I created a Homebrew formula for this. Rather than have Homebrew build from source, the formula fetches our pre-built tar.gz for macOS/Linux. (Homebrew supports distributing binary "bottles".) The formula then installs the azddns binary to the user’s PATH. Under the hood, I maintain a tap repository with the azddns formula, pointing to each new version’s tarball and SHA256 checksum. Homebrew even handles whether you’re on Apple Silicon or Intel, pulling the correct binary for arm64 or x64.

  • Scoop (Windows)

    In the Windows world, Scoop is a popular package manager for developers, the closest thing to the Homebrew that Windows will likely never otherwise have. I added the tool to a Scoop bucket by writing a simple JSON manifest. The manifest lists the download URL of the zip, the binary name, version, and hash. Windows users can do scoop install azddns (after adding the bucket), and Scoop will place azddns.exe in their PATH. This beats manually clicking around to download a zip. I chose Scoop over MSI installers or Chocolatey for now, because it’s lightweight and easily supports installing a single exe. (PowerShell's Invoke-WebRequest and Expand-Archive can achieve the same in a pinch, but Scoop automates the process and updates.)
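    To give an idea, a minimal Scoop manifest is a small JSON file in the bucket repo. A sketch of what azddns’s could look like (the URLs, hashes, and repo path are placeholders, not the real values):

    {
      "version": "0.1.0",
      "description": "Dynamic DNS updater for Azure DNS",
      "homepage": "https://github.com/<owner>/azddns",
      "architecture": {
        "64bit": {
          "url": "https://github.com/<owner>/azddns/releases/download/0.1.0/azddns-win-x64.zip",
          "hash": "<sha256>"
        },
        "arm64": {
          "url": "https://github.com/<owner>/azddns/releases/download/0.1.0/azddns-win-arm64.zip",
          "hash": "<sha256>"
        }
      },
      "bin": "azddns.exe"
    }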

  • Native Linux packages

    I particularly wanted to offer this as a .deb (for Debian/Ubuntu), an .rpm (for RHEL/Fedora/CentOS), and even an Alpine .apk package. This allows installation via the system package managers (apt, dnf/yum, apk) and integration with system services if needed. Rather than hand-crafting Debian control files or RPM spec files, I used an awesome tool called FPM (Effing Package Management). FPM can take a directory of files and churn out a deb or rpm (and many other formats) in one command. See docs. In the build workflow, after publishing the Linux binary, the workflow scripts create a folder (let’s call it a .pkgroot directory) that mimics the filesystem layout of the install. For example, into .pkgroot/usr/bin/ I place the azddns binary. Then an FPM command is run for each package format:

    # DEB package for Ubuntu/Debian (amd64 example)
    fpm -s dir -t deb -n azddns -v 0.1.0 -C .pkgroot .
     
    # RPM package for RHEL/Fedora
    fpm -s dir -t rpm -n azddns -v 0.1.0 -C .pkgroot .
     
    # Alpine APK package (musl-based, needs special dependency)
    fpm -s dir -t apk -n azddns -v 0.1.0 --provides azddns --depends gcompat -C .pkgroot .

    These are illustrative. In reality (see the source code on GitHub) I also specify the architecture (--architecture) and iteration/release numbers.

    FPM fills in package metadata like maintainer, description, license, etc. Notably, for Alpine’s .apk I added a dependency on gcompat. Why? Our Linux binary is built against glibc, but Alpine Linux uses musl libc by default. gcompat is a compatibility library that provides glibc APIs on Alpine. See docs. By marking it as a dependency, when a user installs azddns-*.apk, it will also pull in gcompat to ensure the binary can run. I do the same in the Docker image. An alternative would have been compiling a separate binary targeting musl, but using gcompat was simpler for now.
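    For the Docker image, the gcompat trick looks roughly like this Alpine-based Dockerfile sketch (the base tag, paths, and the extra ca-certificates package are my illustrative choices, not necessarily what the repo uses):

    FROM alpine:3.20
    # gcompat provides the glibc ABI the AOT-compiled binary was linked against
    RUN apk add --no-cache gcompat ca-certificates
    COPY azddns /usr/local/bin/azddns
    ENTRYPOINT ["/usr/local/bin/azddns"]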

Automating Builds and Releases with GitHub Actions

Manually building all these binaries and packaging them in multiple formats would be tedious and error-prone. I decided from the start to automate everything with GitHub Actions. Looking at the workflow file is best because it is self-explanatory. However, a few things are worth mentioning.

Matrix build for all OS/arch combinations

I set up a build matrix in the workflow covering Windows, Linux, and macOS, each targeting x86, x64, and ARM64 where applicable. This means the workflow spins up jobs on Windows (x64, ARM64), Ubuntu (x64, ARM64), and macOS (ARM64 Apple Silicon). The macos-* images all run on ARM64 (Apple Silicon) starting with macos-14, but luckily they support building for both x64 and ARM64. On Windows, either machine can also build for all three architectures (x86, x64, ARM64). For Ubuntu/Linux, however, cross-compiling requires more hacking. I tried what the official docs offered on building an ARM64 binary on an x64 host, but it kept failing until I set IlcPath. While that worked, I eventually removed it.

Still, testing under a QEMU emulator wasn't easy for me; thankfully, GitHub recently made a public preview of ARM64 runners available for public projects. Read the changelogs for Linux (ubuntu-24.04-arm) and Windows (windows-11-arm). As of this writing, the Windows ones are barely a month old, so I must have been lucky.
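The matrix itself is only a few lines of workflow YAML. A simplified sketch (the runner labels are the GitHub-hosted images mentioned above; the rid values feed dotnet publish; the actual workflow in the repo has more steps):

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        include:
          - { os: windows-latest,   rid: win-x64 }
          - { os: windows-11-arm,   rid: win-arm64 }
          - { os: ubuntu-latest,    rid: linux-x64 }
          - { os: ubuntu-24.04-arm, rid: linux-arm64 }
          - { os: macos-latest,     rid: osx-arm64 }
          - { os: macos-latest,     rid: osx-x64 }
    steps:
      - run: dotnet publish -c Release -r ${{ matrix.rid }}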

Provenance attestation

As a cherry on top, the workflow generates a signed provenance attestation for the release artifacts and docker images. This is part of embracing supply-chain security best practices. GitHub’s actions/attest-build-provenance action makes it super easy to produce an in-toto SLSA attestation. This is essentially a signed JSON document that describes how and where the binaries were built. The attestation is stored alongside the artifacts (and in an immutable public log via Sigstore). In practical terms, this means anyone downloading azddns can verify that the binary was indeed built from this source repo in GitHub Actions, and not tampered with. A provenance attestation provides details about the build process and is cryptographically signed, giving consumers confidence in the origin of the software. While not all users will care to verify this, it’s an investment in future-proofing the security of the distribution (and it was interesting to set up!).
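In the workflow, the attestation step is essentially a single action invocation. A sketch (the version pin and subject paths are illustrative; the permissions block shows the grants the action requires):

permissions:
  id-token: write
  attestations: write
  contents: read

steps:
  - name: Attest build provenance
    uses: actions/attest-build-provenance@v2
    with:
      subject-path: dist/*

Consumers can then verify a downloaded artifact against the repository with the GitHub CLI's gh attestation verify command.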

More on provenance attestation in the GitHub docs, Docker docs, and this blog by Andrew Lock.

Lessons Learned and Next Steps

Building azddns was a great exercise in using modern .NET for a real-world utility and making it easy to consume on any system. Some key takeaways and lessons:

  1. .NET Native AOT is ready for prime time

    It enabled a truly self-contained CLI tool with excellent performance. I did run into an edge case with reflection (the Azure SDK issue) which taught me to be mindful of libraries that might not be fully AOT-compatible. In the future, using IL trimming and source-generated serialization (or waiting for SDK fixes) will be important as .NET AOT adoption grows.

  2. Cross-platform distribution requires stepping out of the comfort zone of any single ecosystem

    I had to learn about Homebrew formulae, Scoop buckets, Debian package structure, RPM spec conventions, etc. Tools like FPM saved a ton of time by abstracting away the differences. Still, testing each package in its target OS was crucial (for example, discovering the need for gcompat on Alpine).

  3. Automation is your friend

    Setting up the GitHub Actions workflow with a build matrix was initially a bit complex, but it ensures that with every git tag I can produce all artifacts consistently, across more machines than I could possibly have at one time. It also opens the door to adding automated tests (for example, I could have a job spin up an Azure DNS zone in a test subscription and run azddns against it to verify end-to-end functionality on each build). Additionally, the CI gave me a path to implement security measures like provenance attestation with minimal overhead once configured.

  4. Distribution and updates

    While one can manually download a binary, having the tool in package managers makes it much more likely to be used regularly. Users can update via their familiar tools (brew upgrade, scoop update, apt upgrade if I end up providing a repo, etc.). Speaking of which, a TODO is to host the Linux packages in an apt/yum/apk repository with proper GPG signing. Right now, I provide the .deb/.rpm/.apk files via GitHub Releases, but the user has to install them manually (or via a one-liner). Setting up a repository (perhaps using a service) would allow apt-get/apk/dnf/yum to consume the tool with package verification. This would round out the professional delivery of the tool.

  5. Modern .NET for CLIs

    Perhaps the biggest outcome is showcasing that C#/.NET can be used to build great cross-platform CLI tools. It’s not just the domain of Go or Rust. I was able to leverage Azure SDKs and familiar libraries, then deliver the tool in a form that feels native to each OS. For example, on a Linux server, an admin can apt install azddns and manage it via systemd, just as they might a tool written in C. On a Kubernetes cluster (which often runs Alpine-based images), one could run azddns in a sidecar or Deployment to update DNS (though Kubernetes also has the ExternalDNS project for dynamic DNS). The point is, the language/runtime is no longer a barrier to distribution.

In conclusion, azddns was a fun project, from writing the code to packaging it for every major platform. With .NET and Native AOT, I could have my cake (.NET’s developer productivity and Azure integration) and eat it too (Go-like deployment simplicity). If you’re interested, check out the azddns repository on GitHub for the source code and more details, and feel free to try it out for your own Azure DNS dynamic DNS needs.

Happy DNS updating!