Over the last few weeks, my development environment has been a bit of a rollercoaster—in a good way. In my previous blog, I talked about experimenting with GitHub Actions runners. That journey has evolved from running random Ubuntu runners → running runners inside VMs → running them in Kubernetes → and now testing out GitHub’s Actions Runner Controller (ARC) to dynamically request runner capacity on demand.
Honestly, it’s been a blast. But along the way, I hit some real-world challenges that mirror the ones I see in the enterprise security space.
Centralized Security Scanning + Hitting Rate Limits
In my home lab, I run what I call Centralized GitHub Actions:
- pull security scanning Docker images
- run the scans
- send results to DefectDojo
- repeat this across multiple repos
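Concretely, each run looks roughly like the sketch below. This is not my exact workflow: the scanner (Trivy here), the secret names, and the engagement ID are all placeholders, and the upload uses DefectDojo’s standard `/api/v2/import-scan/` endpoint.

```yaml
# Shared workflow each repo calls -- scanner, secrets, and IDs are placeholders.
name: centralized-security-scan
on:
  workflow_call: {}   # each repo's thin wrapper workflow calls this one
jobs:
  scan:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Pull and run the scanner
        run: |
          docker pull aquasec/trivy:latest
          docker run --rm -v "$PWD:/src" aquasec/trivy:latest \
            fs --format json -o /src/results.json /src
      - name: Ship results to DefectDojo
        env:
          DD_URL: ${{ secrets.DEFECTDOJO_URL }}
          DD_TOKEN: ${{ secrets.DEFECTDOJO_TOKEN }}
        run: |
          curl -sf -X POST "$DD_URL/api/v2/import-scan/" \
            -H "Authorization: Token $DD_TOKEN" \
            -F "scan_type=Trivy Scan" \
            -F "engagement=42" \
            -F "file=@results.json"
```

Multiply that `docker pull` by every scanner and every repo, and the rate-limit math below starts to hurt fast.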
Since I love experimenting with open-source security tooling, I try to run just about every scanner I can get my hands on.
And then I hit the wall.
Docker pull limits. Pip download limits. Everything.
Docker Hub’s limits are designed for normal users. I am not a normal user when it comes to automated pulls:
- Unauthenticated Docker Hub pulls: roughly 100 per six-hour window, per IP
- Authenticated (free account): roughly 200 per six-hour window
- GitHub-hosted runners get special higher limits (but only when the request originates from GitHub’s infrastructure)
- Self-hosted, on-prem runners get none of those benefits
So I burned through my pull quota constantly.
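You can actually watch the quota burn down: Docker Hub reports it in `ratelimit-limit` and `ratelimit-remaining` response headers on a HEAD request against the special `ratelimitpreview/test` manifest, in the form `<count>;w=<window-seconds>`. A tiny parser for that header format, as a sketch:

```python
def parse_ratelimit(header: str) -> tuple[int, int]:
    """Parse a Docker Hub rate-limit header value like '100;w=21600'
    into (pull_count, window_seconds)."""
    count_part, _, window_part = header.partition(";")
    # the window part looks like 'w=21600'; older responses may omit it
    window = int(window_part.split("=", 1)[1]) if window_part else 0
    return int(count_part), window
```

Dividing `w=21600` by 3600 is what confirms the six-hour window in the numbers above.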
Exploring Solutions to Rate Limits
In enterprise environments, people usually take one of three approaches:
1. Use AWS ECR or another paid cloud registry
You pay for it, but you get predictable throughput and higher pull limits.
2. Use a vendor-managed registry appliance
Good for enterprise scale. Not worth it for a home lab.
3. Build a Docker Proxy Cache (the path I picked)
A Docker proxy cache isn’t a full registry—it’s more like a caching reverse proxy:
- First pull: fetch from Docker Hub and store locally
- Subsequent pulls: instant, local, no external rate limit hit
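That “caching reverse proxy” behavior is exactly what Docker’s reference registry image (`registry:2`) does in pull-through mode. A minimal config sketch, with the storage path and port as examples; filling in Hub credentials under `proxy` makes cache misses count against an authenticated quota instead of the anonymous one:

```yaml
# config.yml for registry:2 running as a Docker Hub pull-through cache
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
proxy:
  remoteurl: https://registry-1.docker.io
  # username: <hub-user>   # optional: count misses against your account
  # password: <hub-token>
http:
  addr: :5000
```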
I deployed mine per Kubernetes cluster at first, got it mostly working, and then moved toward centralizing it so all runners call the same cache endpoint.
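Centralizing it is then mostly a client-side change: each runner’s Docker daemon gets a mirror entry pointing at the shared cache. The hostname here is a stand-in for my lab DNS:

```json
{
  "registry-mirrors": ["http://registry-cache.lab.internal:5000"]
}
```

One caveat worth knowing: `registry-mirrors` only applies to Docker Hub images; pulls from other registries still go direct.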
Then pip started complaining about SSL certificates…
…so, I fixed that too.
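For the curious, the fix boils down to a pip config that trusts the lab’s CA bundle (and, optionally, points pip at a locally proxied index). The paths and URL below are placeholders for my setup:

```ini
# /etc/pip.conf (or ~/.config/pip/pip.conf)
[global]
cert = /usr/local/share/ca-certificates/lab-ca.crt
# index-url = https://pypi-cache.lab.internal/simple   # if you also proxy PyPI
```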
Now it mostly works—but I still want to revisit the idea of a true self-hosted registry with caching capabilities. Not sure if it’s worth the complexity yet.
Enterprise Problems Show Up in Home Labs
One eye-opening lesson:
The exact same problems I see in enterprise CI/CD show up in my personal environment.
Why?
Because rate limits, scanner behavior, and Docker pull patterns don’t care whether you’re a Fortune 500 or a guy in his garage. The constraints are identical.
Rethinking How We Run Security Scans
Running every scanner on every pull request sounded awesome… until I tried it.
Security scans are computationally expensive. Some scanners take minutes; some take forever. Running everything on every PR is:
- wasteful
- slow
- not developer-friendly
- and honestly not necessary
A more practical pattern is emerging:
1. Base Scans → Run on a Cron Schedule
This keeps a high-level view of system health.
2. PR Scans → Run selectively
Only run the scanners that add value during development.
3. Adaptive Scans → Run based on the diff
Imagine:
- If secrets are detected → run secret scanners
- If Dockerfile changes → run container hardening checks
- If a lot of files changed → bump up scan intensity
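A first pass at that routing logic can be a plain file-pattern map. This is a sketch only; the scanner names and the diff-size threshold are invented for illustration:

```python
from pathlib import PurePosixPath

def select_scanners(changed_files: list[str]) -> set[str]:
    """Pick which scanners to run for a PR based on what the diff touches."""
    scanners = {"secret-scan"}  # cheap enough to always run
    for path in changed_files:
        name = PurePosixPath(path).name
        if name == "Dockerfile" or name.endswith(".dockerfile"):
            scanners.add("container-hardening")
        if name.endswith((".py", ".js", ".go")):
            scanners.add("sast-light")
    if len(changed_files) > 50:  # big diff: bump up scan intensity
        scanners.add("sast-full")
    return scanners
```

The developer toggle then becomes a one-line override: a PR label that forces the full set regardless of what this function returns.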
This “context-aware scanning” is something I want to explore. Developers could even toggle this:
- “Always run full security scans for my PRs.”
- “Run lightweight scans unless something looks suspicious.”
That flexibility is powerful.
Running Scans Outside CI Jobs Using Webhooks
One of the coolest things I’ve rediscovered:
GitHub Webhooks let you run security scans outside the CI job entirely.
That means:
- CI stays fast
- scanners can run asynchronously
- failures don’t block merges
- logs stay out of the GitHub Actions UI clutter
When I was first setting up ARC, I noticed that every DefectDojo upload job appeared in the GitHub Actions queue—even though they didn’t belong there.
This made it obvious:
CI jobs should not handle everything.
Sometimes you want:
- CI job finishes →
- a webhook triggers →
- async scanners run somewhere else →
- results go to DefectDojo →
- developers stay unblocked
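The receiving end of that chain is an ordinary webhook handler, and the one non-negotiable part is verifying GitHub’s `X-Hub-Signature-256` header (an HMAC-SHA256 of the raw request body) before queueing any scan. A minimal check, with the queueing itself left out:

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes,
                            signature_header: str) -> bool:
    """Validate GitHub's X-Hub-Signature-256 header ('sha256=<hexdigest>')."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during the comparison
    return hmac.compare_digest(expected, signature_header)
```

Reject anything that fails this check; everything that passes goes onto the async scan queue instead of into a CI job.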
This is something I want to build into my workflow logic.
To-Do List From This Work
A few tasks emerged from this whole process:
- Improve centralized Docker caching
- Explore hybrid scanning (cron + PR-based + adaptive)
- Build logic to run certain scans outside the CI job
- Add a PR flag to allow developers to request extra scans
- Clean up DefectDojo upload jobs to avoid cluttering the CI timeline