When configuring a home network with locally-hosted services, it’s common to use a reverse proxy to provide easy-to-remember names for all your local sites. But can that reverse proxy be abused to give external attackers access to your internal sites?


A typical home network setup with locally-hosted services (e.g. AdGuard or Home Assistant) might contain a reverse proxy that handles custom addresses, so that memorable internal names (like http://router.lan) and a public name (for e.g. Plex) each route to the right service.

The configuration might look something like this (using Caddy syntax since it’s very readable; the IPs and the public domain are illustrative):

# Internal-only sites/convenient redirects
http://router.lan {
    reverse_proxy 192.168.1.1:80   # router’s web interface (illustrative IP)
}
http://server.lan {
    reverse_proxy localhost:7800   # server admin interface
}

# Public site
http://plex.example.com {
    reverse_proxy localhost:32400   # plex
}
More detailed explanation

Somewhat out of scope for this post, but some additional background on how these local services are normally configured might be useful here.

  • A local server (e.g. a raspberry pi or unraid machine) is given a static IP on the LAN and hosts a number of services, e.g.:
    • some admin interface, often a webserver allowing for remote control, listening on a nonstandard port (above, 7800).
    • a plex server, which binds to port 32400.
    • the reverse proxy itself (in the above example, a caddy server). This service binds to ports 80 and 443 on the server: when hit with a request for e.g. http://router.lan, it will forward the traffic to the configured internal host (in this case, the router’s web interface).
  • DNS servers are configured such that:
    • external (public) DNS servers route the public domain to the router’s public IP address. This allows external clients to look up and hit the local router’s external interface, so it can proxy to the internal service (see the router section below).
    • internal DNS servers (often just the DNS settings on the router) resolve *.lan to the server running our services. This allows internal clients using the “internal” addresses like http://router.lan to reach the server.
  • The router is configured to:
    • route all http/https traffic hitting the external interface on ports 80/443 to the server running the reverse proxy. (This is “port-forwarding” in IPv4 parlance.)
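The internal-DNS piece above can be as small as one line. A sketch assuming dnsmasq (the resolver many routers ship) and an illustrative server IP:

```
# dnsmasq.conf — assumption: dnsmasq is the internal DNS server
# Resolve every *.lan name to the server running the reverse proxy.
address=/lan/192.168.1.10   # illustrative server IP
```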

Let’s trace a request to see why all the above steps are necessary. An internal client requests http://router.lan:

  1. the client’s DNS resolver contacts the router to resolve router.lan.
  2. the router returns the server’s internal IP.
  3. the client opens an http connection to that IP, requesting the domain router.lan.
  4. the reverse proxy webserver is listening on that IP’s port 80, so it looks up router.lan in its configuration, finds the configured upstream, and proxies the client’s request to that address (the router’s web interface).
  5. the client sees the router’s web interface.
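The core of step 4 is a pure name lookup: the proxy never checks where the connection came from, only what name it asked for. A minimal sketch in Python (the routing table is illustrative, mirroring the example config, and is not how Caddy is actually implemented):

```python
# Sketch of the reverse proxy's routing decision in step 4: the upstream is
# chosen purely from the Host header -- nothing about the client is checked.
ROUTES = {
    "router.lan": "192.168.1.1:80",   # router's web interface
    "server.lan": "localhost:7800",   # server admin interface
}

def pick_upstream(host_header):
    """Return the upstream a request should be proxied to, or None."""
    return ROUTES.get(host_header.strip().lower())

print(pick_upstream("router.lan"))   # -> 192.168.1.1:80
```

Note there is no branch anywhere that asks “is this client internal?” — that absence is the whole story of the vulnerability below.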

A request from an external client is very similar, just with a slightly different route into the network. For a request to the public (Plex) address:

  1. the client’s DNS resolver contacts a public DNS server to resolve the public domain.
  2. the public DNS returns our router’s public IP.
  3. the client sends an http request to the router’s public IP, requesting that domain.
  4. the router port-forwards the request to our server, and we’re in a very similar place to the end of step 3 in the internal example above.
  5. as above starting from step 4.

This works great! Internal clients can access internal sites like http://router.lan, and external clients can access our “deemed public” sites like the Plex address. Internal clients can even use the external address and the site still resolves correctly.

And external clients can’t access our internal sites, since for anything outside our network the *.lan domains won’t resolve to our server. Right? Wrong.

The vulnerability

In a nutshell, the vulnerability is that external clients can force an arbitrary domain name to resolve to a specific IP on their own machine. The most common method for doing this is adding an entry to /etc/hosts, like:

<victim's router's public IP> router.lan

An easier-to-test method is to provide a DNS resolution override to curl, e.g.:

$ curl --resolve router.lan:80:<victim's router's public IP> router.lan

This forces the request to hit your router with an internal domain name, which will proxy the attacker through to your internal sites.
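On the wire, the /etc/hosts and --resolve tricks produce the same thing: a TCP connection to the victim’s public IP carrying an internal name in the Host header. A sketch of that request (the IP is from the documentation range, purely illustrative):

```python
# Build the raw HTTP/1.1 request that `curl --resolve` would send.
# The TCP connection targets the victim's *public* IP, but the Host
# header names an *internal* site -- and the Host header is all the
# reverse proxy looks at when choosing an upstream.
VICTIM_PUBLIC_IP = "203.0.113.7"   # illustrative (documentation range)
INTERNAL_NAME = "router.lan"

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {INTERNAL_NAME}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

# Sending it would be one call (not executed here):
# import socket
# socket.create_connection((VICTIM_PUBLIC_IP, 80)).sendall(request)
```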

This attack vector relies on the attacker knowing both your network’s public IP (not hard to obtain if you’re hosting other public sites) and the internal domain names you use. The internal domain names are easy to dictionary-attack, since the primary purpose is to be memorable for humans.
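To see how small that dictionary really is, crossing a handful of common service names with common internal suffixes already covers many home setups. A sketch (both word lists are my own guesses, not taken from any real tool):

```python
# Generate candidate internal hostnames for a dictionary attack.
# Both lists are illustrative guesses at common naming habits.
SERVICES = ["router", "server", "nas", "plex", "pihole", "home"]
SUFFIXES = ["lan", "local", "home", "internal"]

candidates = [f"{svc}.{sfx}" for svc in SERVICES for sfx in SUFFIXES]
print(len(candidates))   # 6 * 4 = 24 names to try
```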


The fix

Most reverse proxies provide a way of allowlisting only certain IP ranges. For the Caddy example above, we can tweak our internal sites to use a remote_ip matcher (a sketch; exact syntax depends on your Caddy version):

http://server.lan {
    # Only allow internal clients.
    @external not remote_ip private_ranges
    abort @external
    reverse_proxy localhost:7800
}

A more bulletproof (paranoid?) solution is to configure an authentication backend that sits between your reverse proxy and any sites being proxied to, such as Authelia. This is what I ended up switching to after realising this vulnerability on my own setup.

With this, before the reverse proxy proxies any requests, it’ll verify the client is authenticated+authorised (and redirect them to the authentication portal if not). The only way to bypass this is to hit the raw IP directly rather than using an address (which should only be possible for devices already on the local network).
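With Caddy this typically means putting a forward_auth block in front of each protected site. A rough sketch only — the Authelia hostname, port, and verification endpoint here are assumptions; consult Authelia’s integration docs for your versions:

```
http://server.lan {
    # Ask Authelia to vet every request before it is proxied.
    forward_auth localhost:9091 {
        uri /api/verify?rd=https://auth.example.com/
    }
    reverse_proxy localhost:7800
}
```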

Although it sounds unwieldy, this “full” solution is very low-friction. Devices only need to log in once and are then authenticated for a long duration (30 days by default, I think). For clients on the local network you can require only a password to log in, making the monthly refresh quick and easy. For external clients you can additionally require 2FA, which is suitable for the occasional times one needs to log in from a remote device for the first time.