Huntarr: The Security Nightmare You Need to Know About

Read Time: 6 min.

You know that feeling when you finally get your self‑hosted setup looking slick, everything automated, dashboards everywhere… and then you realize the shiny new thing you installed is basically a smash‑and‑grab window into your entire stack? That’s Huntarr in a nutshell. Someone tried to build a “smart” meta‑layer on top of the *arr ecosystem… and vibe‑coded their way into turning a solid family of tools into a security circus sideshow. If you care even a little about not handing strangers the keys to your servers, this is worth your next few minutes.

What Huntarr Tried To Be (And How It Face‑Planted)

Huntarr positioned itself as the brains on top of Sonarr, Radarr, Prowlarr, and friends. One app to rule your requests, automation, and media management, all that cozy "smart home media but cooler" energy.

The problem: instead of building on the *arr ecosystem carefully, it basically tore off all the locks and taped your API keys to the front door.

The Fun One: Your Entire Config Dumped, No Login Needed

There’s an endpoint in Huntarr:

POST /api/settings/general

No login. No API token. No session. Nothing. Just walk right in.

You send it a harmless‑looking JSON like:

curl -X POST http://your-huntarr:9705/api/settings/general -H "Content-Type: application/json" -d '{"proxy_enabled": true}'

And Huntarr lovingly responds with:

  • API keys for Sonarr, Radarr, Prowlarr, Lidarr, Readarr, Whisparr, etc.
  • URLs and configs for each connected app
  • Basically the full settings set for the entire stack

Anyone who can hit that URL — LAN or internet — gets admin‑level access to your media ecosystem in one request. Doesn’t matter how carefully you locked down Sonarr/Radarr themselves; Huntarr just bypassed all of that.

And remember: a lot of people do expose this stuff to the internet, even though they shouldn’t. Huntarr actively leaned into that by including request/portal features meant for outside access.

How Bad Did It Actually Get?

This wasn’t one dumb route someone forgot to protect. This was systemic “I don’t know how auth works” level bad. A few greatest hits:

1. Unauthenticated 2FA Enrollment

  • POST /api/user/2fa/setup
  • No session? No problem.
  • It just hands you the TOTP secret and QR code for the owner account.

Then you call /api/user/2fa/verify, and boom: you’ve just enrolled your own authenticator on the main account. No password, no prior login, full takeover.
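To see why handing out that secret unauthenticated equals full takeover: TOTP (RFC 6238) is deliberately simple, and anyone holding the shared secret can mint valid login codes forever. A minimal pure-stdlib sketch (the leaked secret value below is hypothetical, not from Huntarr):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, step=30):
    """Standard RFC 6238 TOTP: all you need is the shared secret."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical leaked secret -- the kind of value encoded in an enrollment QR code.
leaked_secret = "JBSWY3DPEHPK3PXP"
print(totp(leaked_secret))  # a currently valid 6-digit login code
```

That's the whole "second factor" once the secret leaks. There is no server-side magic to fall back on, which is why the enrollment endpoint must be the most locked-down thing in the app, not the least.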

2. Resetting Setup And Replacing The Owner

  • POST /api/setup/clear
  • Again, no auth check.
  • It returns “Setup progress cleared.” Then you just walk through the setup flow and create a brand‑new “owner” account like you just installed the app for the first time.

That’s not a bug. That’s not an oversight. That’s “I didn’t think at all about what this endpoint means in a real environment.”

3. Recovery Keys Without Being Logged In

  • POST /auth/recovery-key/generate
  • With {"setup_mode": true}
  • Hits business logic with no authentication and returns logic‑level responses instead of 401/403.

Combining that “setup_mode” bypass with the other issues, you’ve basically got multiple ways to walk right into the owner account without a password.
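For contrast, here's the pattern Huntarr kept getting wrong, sketched framework-free (all names here are hypothetical, not Huntarr's actual code): authentication is checked before any business logic runs, so no request-body flag like "setup_mode" can ever reach the interesting code path.

```python
from functools import wraps

SESSIONS = {"valid-session-token"}  # hypothetical session store

def require_auth(handler):
    """Reject the request with 401 before any business logic runs."""
    @wraps(handler)
    def wrapper(request):
        if request.get("session") not in SESSIONS:
            return {"status": 401, "body": "Unauthorized"}
        return handler(request)
    return wrapper

@require_auth
def generate_recovery_key(request):
    # Business logic only ever runs for authenticated callers --
    # no "setup_mode" flag in the body can get past the decorator.
    return {"status": 200, "body": "new-recovery-key"}

print(generate_recovery_key({"session": None, "setup_mode": True}))
# -> {'status': 401, 'body': 'Unauthorized'}
```

The point isn't the decorator itself; it's that auth lives in one enforced layer instead of being re-remembered (or forgotten) per endpoint.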

4. Classic ZIP Slip + Path Traversal, Running As Root

  • zipfile.extractall() used on user‑uploaded ZIP files with no path sanitization.
  • The container runs as root. Of course it does.
  • Backup restore/delete endpoints feed user‑controlled path pieces straight into the filesystem and shutil.rmtree().

So, not only can someone steal your keys, they might also be able to write or delete arbitrary files if they can upload or manipulate backups.
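The fix for ZIP slip has been boilerplate for years: resolve every member's destination and refuse anything that escapes the target directory. A minimal sketch of the guard (paths and names here are illustrative):

```python
import io, os, zipfile

def safe_extract(zip_bytes, dest):
    """Extract a ZIP only if every member stays inside dest."""
    dest = os.path.realpath(dest)
    extracted = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for member in zf.infolist():
            target = os.path.realpath(os.path.join(dest, member.filename))
            # ZIP-slip guard: "../../etc/cron.d/evil" resolves outside dest.
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"blocked path traversal: {member.filename!r}")
            extracted.append(member.filename)
        zf.extractall(dest)
    return extracted

# Build a malicious archive in memory to show the guard firing.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("../evil.txt", "pwned")
try:
    safe_extract(buf.getvalue(), "/tmp/restore")
except ValueError as e:
    print(e)  # blocked path traversal: '../evil.txt'
```

And even with the guard, the container still shouldn't run as root: defense in depth means a traversal bug that slips through can only trash the app's own files, not the host.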

This Isn’t Just Bugs – It’s Vibe Coding With Security Buzzwords

What really pushes this from “whoops” to “yikes” is the way the project was run.

The “Cybersecurity Professional” Angle

The maintainer said things like:

  • They “work in cybersecurity.”
  • They have “cybersecurity steering documents” for hardening.
  • They’ve spent “120+ hours in the last 4 weeks” on this with those documents.

If any of that were translating into actual competence, you wouldn’t be:

  • Whitelisting /api/settings/general out of auth entirely.
  • Returning 2FA/TOTP secrets to unauthenticated users.
  • Using extractall() straight from user upload while running as root.

This is intro‑level "security 101" stuff. It's the kind of thing basic tools like bandit scream about immediately; those tools were run during the review, and, surprise, they lit up.

Commit History: Chaos As A Development Style

You can see the vibe coding in the Git history:

  • Tons of commits named “update”, “Patch”, “Bug Patch”, “change”.
  • Huge diffs, minutes apart, no PRs, no reviews.
  • New features slammed in rapidly with no sign anyone stopped to ask “what does this do to security?”

Good open‑source projects are a little boring: they move slower, have PRs, discussions, people nitpicking little things. That boring process is why they’re not setting your stack on fire.

Handling Criticism: Ban, Delete, Vanish

When people raised these issues:

  • Posts on r/huntarr got removed.
  • The security reviewer was banned from the subreddit.
  • Threads where the maintainer made big claims about security work were deleted.
  • Eventually, r/huntarr went private and the repo got deleted or made private.

That’s not how you respond to critical vulnerabilities if you’re serious. That’s how you respond when the brand matters more than the users.

This Also Screws Over The *arr Reputation

And here’s the collateral damage: Huntarr makes the entire *arr ecosystem look bad, and the ecosystem doesn’t deserve that.

Sonarr, Radarr, etc.:

  • Have been around for years.
  • Have a ton of real‑world use.
  • Are fairly conservative with changes and security.

They’re not perfect, but they’re solid for what they are: LAN‑only admin‑style tools that assume you’re not just yeeting them raw onto the open internet.

Then along comes Huntarr, plugging right into those apps, bypassing their auth, and puking out their keys to anyone who asks. From the outside, it just looks like “all this *arr stuff is insecure.” That’s unfair, but that’s how reputation works — weakest link sets the tone.

And yes, before anyone says it: you should not be exposing any *arr server directly to the internet anyway. They’re meant for trusted networks or at least proper reverse‑proxy + auth + VPN layers. But Huntarr took that already‑shaky pattern and cranked the risk to 11.

So What Should People Actually Do?

If You Ran Huntarr At Any Point

Treat it like this:

  • Stop using it. Kill the container, remove it from your reverse proxy, drop the DNS record.
  • Rotate all your keys:
      • Sonarr, Radarr, Prowlarr, Lidarr, Readarr, Whisparr, etc.
      • Plex token if it was connected.
      • Anything else you wired into Huntarr.
  • If it was ever reachable from the internet, assume those keys are burned.

A firewall alone doesn’t fix the core reality: the app itself was fundamentally insecure. If something on your LAN got compromised, or someone else had access, those endpoints were gift‑wrapped.

Going Forward With Self‑Hosted Stuff

A few simple sanity checks before you trust any new “super‑dashboard”:

  • Does it have:
      • A clear auth/security model?
      • An actual PR/review process on GitHub?
      • Tests or CI that run at least basic checks?
  • What do the issues look like?
      • Any security conversations?
      • Any history of how they handled reports?

And for remote access:

  • Don’t expose these admin tools raw to the internet.
  • Use:
      • A VPN (WireGuard, Tailscale, etc.), or
      • A reverse proxy + strong auth (Authelia, Authentik, an OAuth2 proxy, Cloudflare Access), maybe plus IP restrictions.
  • Treat every wrapper/orchestrator as more dangerous, not less, because it usually holds the keys to everything else.
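The reverse-proxy version of that advice can be sketched in a few lines of nginx. This is illustrative only: hostnames, ports, IP ranges, and file paths below are placeholders, not a drop-in config.

```nginx
server {
    listen 443 ssl;
    server_name media.example.com;

    # Only trusted ranges get in at all.
    allow 192.168.1.0/24;   # LAN
    allow 100.64.0.0/10;    # Tailscale CGNAT range
    deny  all;

    # Strong auth in front of the app, not behind it.
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        proxy_pass http://127.0.0.1:8989;  # the app stays bound to localhost/LAN
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The key idea: the app itself never listens on a public interface, and the proxy enforces both network-level and credential-level checks before a single request reaches it.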

Let’s Close This Out With The Actual Takeaway

If I have to boil this down:

  • Huntarr was vibe‑coded security theater sitting on top of actually decent tools.
  • It didn’t just have a couple bugs; it was architecturally unsafe — unauthenticated config dumps, account takeovers, file‑system shenanigans.
  • The response from the maintainer wasn’t “own it, fix it, document it”; it was bans, deletions, and vanishing.

What can we take from this? Self‑hosting is powerful and fun, but the second you start bolting on random “all‑in‑one” tools that touch everything, you’re not just adding convenience — you’re betting your entire stack on whether that one dev knows what they’re doing. In this case, they didn’t.

I’ll end with saying: if you care about your setup, kill Huntarr if you’re still running it, rotate your keys, and be way more skeptical the next time some shiny meta‑tool promises to “simplify everything.” And if you’ve got thoughts, experiences, or you’ve seen other projects pulling this kind of stunt, throw them in the comments, share this around, and maybe we can collectively shame a few people out of vibe‑coding security‑critical software.
