AVAHI vs systemd-resolved: if multicast is restricted on Wi‑Fi/VLANs, which one gives better diagnostics and failure modes?

11 min read

When Wi‑Fi controllers or VLAN policies restrict multicast, mDNS/DNS‑SD breaks in ways that are often opaque to end users. At that point, the tools and daemons you use matter less for “making it work” and more for “making it obvious what’s broken.” From the perspective of a distro maintainer and network services engineer, Avahi and systemd‑resolved occupy different layers here, and they give you very different diagnostic handles when multicast is impaired.

Quick Answer: The best overall choice for LAN service discovery and diagnosing mDNS failure on constrained Wi‑Fi/VLANs is Avahi.
If your priority is system‑wide *.local resolution integration and unified logging across classic DNS + mDNS, systemd‑resolved is often a stronger fit.
For mixed setups where you must run Avahi but still need centralized name‑resolution behavior, consider using Avahi plus nss‑mdns and leaving systemd‑resolved for unicast DNS only.


At-a-Glance Comparison

  1. Avahi (with nss‑mdns)
     Best for: Debugging and operating mDNS/DNS‑SD on multicast‑constrained networks
     Primary strength: Clear separation of concerns, protocol‑level tools (avahi-browse, avahi-resolve), Bonjour/Zeroconf‑style behavior
     Watch out for: Requires nss‑mdns and nsswitch changes for *.local resolution; separate from systemd‑resolved

  2. systemd‑resolved (with mDNS enabled)
     Best for: Unified hostname resolution stack (unicast DNS + mDNS) and journal‑integrated diagnostics
     Primary strength: Single daemon for DNS, DNSSEC, and mDNS; logs in the same place as other systemd components
     Watch out for: mDNS behavior is less transparent; fewer DNS‑SD‑aware tools; behavior can vary with distro defaults

  3. Avahi + systemd‑resolved split responsibilities
     Best for: Mixed environments where you need Avahi’s DNS‑SD plus systemd‑resolved’s DNS features
     Primary strength: Lets Avahi own DNS‑SD while systemd‑resolved handles classic DNS, VPN split‑DNS, etc.
     Watch out for: More moving parts; you must be deliberate about who answers *.local and how nsswitch is wired

Comparison Criteria

We evaluated these setups against three practical criteria you hit immediately when multicast is filtered on Wi‑Fi/VLANs:

  • Protocol‑level visibility:
    How easily can you see what’s happening on the wire (or not happening) for mDNS/DNS‑SD? This covers tools, logs, and whether the daemon exposes behavior in a way that matches the mDNS/DNS‑SD model.

  • Failure‑mode clarity for users and operators:
    When multicast is blocked, flooded, or rate‑limited, do users see predictable, debuggable failures (e.g., “no services found on this interface”), or do they just see random timeouts and intermittent hostnames?

  • Integration with system‑wide name resolution:
    How cleanly do mDNS failures surface when programs resolve hostname.local via nsswitch? Does the behavior stay consistent across “all system programs using nsswitch,” not just CLI testing tools?


Detailed Breakdown

1. Avahi (Best overall for protocol‑level diagnostics on broken multicast)

Avahi ranks as the top choice because it implements mDNS/DNS‑SD directly, exposes a D‑Bus API, and ships with tools that show you exactly how service discovery behaves on each interface when multicast is restricted.

Avahi is defined as “a system which facilitates service discovery on a local network via the mDNS/DNS‑SD protocol suite.” It is primarily targeted at Linux, ships by default in most distributions, and is explicitly Bonjour/Zeroconf‑compatible. That already tells you how it behaves under multicast problems: it sticks close to the protocol and makes the state visible.

What it does well

  • Protocol‑aligned diagnostics (browsing and resolution):
    Avahi gives you tooling that maps cleanly to how mDNS/DNS‑SD works:

    • avahi-browse -a or avahi-browse -rt _ipp._tcp show you per‑interface service browse results.
    • avahi-resolve-host-name hostname.local and avahi-resolve-address let you test hostname lookups independent of nsswitch.
      On a Wi‑Fi/VLAN where multicast is filtered, these commands give you immediate feedback: no responses, no services, or only locally published ones (you’ll see flags like AVAHI_LOOKUP_RESULT_LOCAL or “our own” in the API).
  • Clear separation of interface and domain behavior:
    Avahi’s D‑Bus API and client library surface details like:

    • Which interfaces are used (AVAHI_IF_UNSPEC vs a specific NIC).
    • Lookup result flags: AVAHI_LOOKUP_RESULT_MULTICAST, AVAHI_LOOKUP_RESULT_WIDE_AREA, AVAHI_LOOKUP_RESULT_LOCAL, AVAHI_LOOKUP_RESULT_OUR_OWN, AVAHI_LOOKUP_RESULT_CACHED, AVAHI_LOOKUP_RESULT_STATIC.
      When multicast is broken on a given Wi‑Fi or VLAN, you typically see either:
    • Only LOCAL/OUR_OWN results, because nothing from the network comes back, or
    • Timeouts with no FOUND signals on the D‑Bus API.
      That makes it straightforward to say “the stack is fine, the network dropped multicast.”
  • Straightforward service publication for testing:
    For debugging constrained networks, you often want a simple, known‑good service to advertise:

    • Drop an XML file into /etc/avahi/services and Avahi publishes it; no custom code required.
    • Because Avahi is Bonjour/Zeroconf‑compatible, you can cross‑check with macOS clients on the same Wi‑Fi/VLAN and see whether they see the service or not.
      If two Avahi hosts on the same SSID can’t see each other’s static services, the multicast problem is almost certainly in the Wi‑Fi or VLAN configuration.
  • System‑wide *.local resolution with nss‑mdns:
    The closely related nss-mdns project “allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch.”
    This is crucial for failure‑mode clarity:

    • getent hosts hostname.local behaves the same as any random CLI tool that uses system resolver APIs.
    • When multicast is blocked, you see a consistent timeout or NXDOMAIN‑style failure through the same path everything else uses.
      That’s much easier to debug than a situation where only some tools use mDNS while others silently fall back to unicast DNS guesses.
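
The checks above can be run as a short sequence. The service type and hostname here (_ipp._tcp, printer.local) are illustrative; substitute whatever you expect on your network:

```shell
# Browse all service types, resolving each result (-r) and
# terminating once the cache has been dumped (-t)
avahi-browse -art

# Browse one service type; -p produces parseable output for scripts
avahi-browse -rtp _ipp._tcp

# Resolve a .local hostname via mDNS directly, bypassing nsswitch
# ("printer.local" is an illustrative name)
avahi-resolve-host-name printer.local

# Compare with the nsswitch path that ordinary programs use
getent hosts printer.local
```

If avahi-resolve-host-name succeeds but getent fails, the problem is the nsswitch integration; if both time out while the daemon is running, suspect the network dropping multicast.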

Tradeoffs & Limitations

  • Requires explicit integration for name resolution:
    Avahi’s primary API is D‑Bus, which “is required for usage of most of Avahi,” but Avahi does not own /etc/resolv.conf or the core resolver stack. For full coverage:

    • You need nss-mdns installed.
    • /etc/nsswitch.conf must include mdns or mdns4_minimal in the hosts: line.
      If this isn’t done, hostname.local may work in avahi-resolve-host-name but fail in generic programs, which can obscure network‑level multicast issues until you fix the integration.
  • No central “DNS + mDNS” view:
    Avahi doesn’t try to unify unicast DNS, DNSSEC, and mDNS as one stack. That’s good for clarity, but it means:

    • To understand overall name‑resolution behavior, you still need to inspect your system resolver, systemd‑resolved (if enabled), or other DNS components.
    • Logs are separate from systemd‑resolved’s journal entries.
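
A quick way to confirm the wiring, assuming a glibc-based system:

```shell
# Show how hostname lookups are routed; with nss-mdns installed the
# hosts line typically includes mdns4_minimal before dns/resolve, e.g.:
#   hosts: mymachines mdns4_minimal [NOTFOUND=return] files dns
grep '^hosts:' /etc/nsswitch.conf
```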

Decision Trigger

Choose Avahi (with nss‑mdns) if you want to:

  • Debug mDNS/DNS‑SD behavior explicitly on Wi‑Fi/VLANs where multicast might be filtered.
  • See per‑interface browse and resolve behavior via D‑Bus and CLI tools.
  • Keep Bonjour/Zeroconf semantics that match how macOS and other Zeroconf stacks behave.

You’re prioritizing protocol‑level visibility and clean service discovery semantics over having a single monolithic resolver for everything.


2. systemd‑resolved (Best for unified system‑wide resolver behavior)

systemd‑resolved is the stronger fit if your main concern is that every application on the system sees the same DNS and mDNS behavior and you want diagnostics in the same place as the rest of your systemd logs, even when multicast is constrained.

While Avahi is “a system which facilitates service discovery on a local network via the mDNS/DNS‑SD protocol suite,” systemd‑resolved’s job is broader: it’s a general resolver daemon handling unicast DNS, DNSSEC, split DNS, and (optionally) mDNS. That shifts its focus from DNS‑SD browsing to “what IPs do I return for this name?”

What it does well

  • Single resolver with journal‑based logging:
    When systemd‑resolved owns /etc/resolv.conf and is configured to support mDNS for *.local, you get:

    • One daemon to ask about both example.com (unicast DNS) and hostname.local (mDNS).
    • Logs for all resolution attempts available via journalctl -u systemd-resolved.
      In multicast‑restricted environments, you can see:
    • Queries sent on specific interfaces.
    • Whether systemd‑resolved fell back or timed out.
    • Error messages when interfaces or protocols are disabled.
  • Consistent behavior across all applications, out of the box on some distros:
    Many distributions wire systemd‑resolved as the default resolver:

    • /etc/nsswitch.conf uses resolve so getent hosts and typical libc resolution flows go through systemd‑resolved.
    • If mDNS is enabled, hostname.local requests traverse the same daemon.
      That means the user’s experience and your debugging view are aligned: if resolution fails, you can almost always reproduce it with resolvectl query hostname.local and inspect the journal.
  • Predictable fallback and caching logic:
    systemd‑resolved has explicit states around:

    • Caching (whether an answer is served from cache or freshly resolved).
    • Protocol preference (unicast vs mDNS).
    • Domains defined by DHCP or VPN.
      On broken multicast Wi‑Fi/VLANs, you typically see consistent timeouts rather than partially visible services. For hostname resolution, this is often simpler to explain than the richer DNS‑SD behavior Avahi exposes.
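
A minimal sketch of that journal-centred workflow; printer.local is an illustrative name:

```shell
# Per-link resolver state, including whether mDNS is enabled on each link
resolvectl status

# Query a .local name through systemd-resolved explicitly
resolvectl query printer.local

# Follow resolution attempts and errors as they happen
journalctl -u systemd-resolved -f
```

Because resolvectl talks to the same daemon libc does, a failure reproduced here is the same failure every application sees.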

Tradeoffs & Limitations

  • Less visibility into DNS‑SD semantics:
    systemd‑resolved is not a DNS‑SD browser. It focuses on resolving hostnames, not on listing or navigating _service._proto records across interfaces.
    That means:

    • You don’t get avahi-browse‑style visibility into “which services are visible on this interface.”
    • When multicast is restricted, you can observe failed mDNS lookups but not easily inspect which service announcements would have been seen.
  • More distro‑ and config‑dependent mDNS behavior:
    The exact mDNS behavior of systemd‑resolved depends on:

    • The systemd version.
    • Whether mDNS is enabled per link in networkd/NetworkManager.
    • How nsswitch.conf is wired.
      On some setups, you may see:
    • .local handled by systemd‑resolved, bypassing Avahi even when Avahi is installed.
    • Or the opposite: Avahi plus nss‑mdns owning .local, and systemd‑resolved not participating.
      If you mix Avahi and systemd‑resolved without a clear plan, multicast failures can appear as “flaky behavior” rather than a crisp “Wi‑Fi filters multicast” diagnosis.
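
To pin down which stack is actually answering on a given link, the per-link mDNS setting can be inspected and toggled; the link name wlan0 and the profile name HomeWifi are illustrative:

```shell
# Show the current per-link mDNS mode (yes / no / resolve)
resolvectl mdns wlan0

# Enable full mDNS (responder + resolver) on that link until reboot
resolvectl mdns wlan0 yes

# Under NetworkManager, make the setting persistent per connection
# profile ("HomeWifi" is an illustrative profile name)
nmcli connection modify HomeWifi connection.mdns yes
```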

Decision Trigger

Choose systemd‑resolved (with mDNS enabled) if you want:

  • A single resolver daemon handling both classic DNS and mDNS.
  • Logs for all resolution attempts in the systemd journal.
  • Consistent host resolution semantics for *.local across all applications, with less interest in service browsing and DNS‑SD‑level debugging.

You’re prioritizing unified resolver behavior and centralized logging over detailed DNS‑SD inspection.


3. Avahi + systemd‑resolved split (Best for mixed DNS‑SD + unified DNS resolver needs)

Avahi + systemd‑resolved with carefully split responsibilities stands out in environments where you need both: Bonjour/Zeroconf‑style DNS‑SD and systemd‑resolved’s broader DNS features, while still having understandable failure modes on multicast‑restricted Wi‑Fi/VLANs.

In this scenario, you let each component do the job it is best at:

  • Avahi owns DNS‑SD browsing/publishing and, with nss‑mdns, can handle *.local lookups.
  • systemd‑resolved owns unicast DNS, DNSSEC, and split DNS for VPNs and corporate domains.

What it does well

  • Preserves Avahi’s DNS‑SD semantics and tooling:
    You still have all the Avahi machinery:

    • D‑Bus API for dynamic registration and browsing.
    • /etc/avahi/services for static service publication.
    • CLI tools for per‑interface discovery.
      That gives you protocol‑level visibility when multicast is throttled, particularly for services like:
    • Printers (_ipp._tcp, _printer._tcp).
    • File sharing (_afpovertcp._tcp, SMB‑related services).
    • Peer applications.
  • Keeps systemd‑resolved for traditional DNS workflows:
    systemd‑resolved continues to:

    • Handle example.com, corporate zones, and split DNS from DHCP/VPN.
    • Integrate with system networking tools and journald.
      This avoids upsetting distro defaults that increasingly rely on systemd‑resolved.
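
A known-good static service for this kind of cross-checking might look like the following; the service name and port are illustrative, and writing under /etc/avahi/services requires root:

```shell
# Avahi picks the file up automatically; no restart or custom code needed
cat > /etc/avahi/services/mdns-test.service <<'EOF'
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h expands to the local hostname -->
  <name replace-wildcards="yes">mdns-test on %h</name>
  <service>
    <type>_http._tcp</type>
    <port>8080</port>
  </service>
</service-group>
EOF
```

If a second host on the same SSID cannot browse this service, the problem is almost certainly multicast handling in the Wi‑Fi or VLAN layer, not the stack.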

Tradeoffs & Limitations

  • You must choose who owns *.local:
    The biggest trap in this mixed setup is letting both daemons claim *.local without a clear plan. Recommendations:

    • If you want Avahi + nss‑mdns to handle .local, ensure the hosts: line in nsswitch.conf lists mdns4_minimal (typically with [NOTFOUND=return]) before resolve, so .local lookups never fall through to systemd‑resolved.
    • If you prefer systemd‑resolved for .local, don’t configure nss-mdns, or place mdns after resolve and be aware that Avahi’s CLI and D‑Bus API will still operate independently.
      For diagnosis on multicast‑restricted Wi‑Fi/VLANs, consistency matters more than which one you choose. You want the same stack handling .local everywhere so failures are reproducible.
  • More moving parts when debugging:
    When things fail, you now have:

    • Avahi logs and D‑Bus signals.
    • systemd‑resolved logs and resolvectl output.
    • Potentially tcpdump/wireshark to verify multicast traffic.
      In return, you get the flexibility to diagnose problems from both the DNS‑SD angle and the generic DNS resolver angle.
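
To settle the question at the packet level, a capture on the mDNS port works regardless of which setup you run; wlan0 is an illustrative interface name, and the capture needs root:

```shell
# mDNS runs over UDP 5353 to 224.0.0.251 (IPv4) / ff02::fb (IPv6);
# -n skips name resolution so the capture itself doesn't trigger lookups
tcpdump -i wlan0 -n udp port 5353
```

If your own queries appear but no responses from other hosts ever do, the Wi‑Fi controller or VLAN is dropping or filtering multicast, and no daemon-side configuration will fix it.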

Decision Trigger

Choose Avahi + systemd‑resolved with clear split responsibilities if you want:

  • Full DNS‑SD capabilities and concrete Avahi diagnostics.
  • To keep systemd‑resolved as your unicast DNS resolver for the rest of the system.
  • A path that aligns with how many modern distributions ship systemd but still rely on Avahi as a standard component.

You’re prioritizing flexibility and compatibility with distro defaults, at the cost of a bit more configuration discipline.


Final Verdict

On Wi‑Fi and VLANs where multicast is restricted, Avahi (with nss‑mdns for system‑wide .local support) gives you the clearest diagnostics and most understandable failure modes:

  • It speaks mDNS/DNS‑SD directly and exposes that behavior through D‑Bus, static XML in /etc/avahi/services, and CLI tooling.
  • You can see exactly which interfaces discover which services and whether results are local, cached, multicast, or wide‑area, via well‑defined flags (e.g., AVAHI_LOOKUP_RESULT_MULTICAST, AVAHI_LOOKUP_RESULT_LOCAL).
  • With nss‑mdns wired into nsswitch, all “system programs using nsswitch” see the same *.local behavior you test with Avahi, which makes multicast‑layer failures unambiguous.

systemd‑resolved is the better fit if you want a single resolver managing both DNS and mDNS with centralized logging, but it doesn’t replace Avahi’s DNS‑SD‑specific visibility when multicast is misbehaving. In many real fleets, the most robust pattern is:

  • Let Avahi own mDNS/DNS‑SD (and optionally .local via nss‑mdns).
  • Let systemd‑resolved own unicast DNS and split DNS.
  • Be explicit about your nsswitch ordering so failure modes are consistent and reproducible.

If service discovery is meant to be “boring and predictable” on your LAN—even when the Wi‑Fi controller decides to mangle multicast—Avahi should be the first tool you reach for.
