Mastering subdomain enumeration is an essential skill for anyone aiming to fully understand and leverage web architecture.
Why?
I think a recurring theme you will find in here is subdomain enumeration. You might say that I am fixated on it at points (hopefully that will change in the future; I still have the mentality of a newbie in the grand scheme of things), but I do believe that in order to grasp the entirety of what a web application does, and to dive deep into it, you first need a MAP.
"Maps" are the first things people use in any type of projects to better plan their work and drive success in any field or domain. More important in Bug Bounty since the purpose is to gain access to any sensitive data we need to be able to find any entry front or back door to the targets we are trying to get juicy bugs on.
A case study I have been exploring recently is a one-liner that I want to deconstruct here, both to better understand a suite of tools by ProjectDiscovery and to use them for deep scanning of potential targets.
This command chain automates a comprehensive reconnaissance workflow using ProjectDiscovery tools to discover attack surfaces on redacted.com. Here's the breakdown:
1. Subdomain Discovery
subfinder -d redacted.com -all: Enumerates subdomains using all available passive sources.
anew subs.txt: Appends new findings to subs.txt.
2. DNS Validation
shuffledns: Resolves subdomains using trusted DNS resolvers (resolvers.txt).
dnsx: Filters out unresolved subdomains, leaving verified targets.
3. Port Scanning
naabu -nmap: Scans open ports and runs nmap scripts for service detection.
-rate 5000: High-speed scanning (5k packets/sec).
4. HTTP Probe
httpx: Identifies live web services on open ports.
5. Web Crawling
katana -kf all -jc: Crawls all URLs, including JavaScript-rendered content.
6. Vulnerability Scanning
nuclei: Runs templated checks against the live URLs to flag vulnerabilities.
Throughout the chain, anew ensures incremental updates without duplicates.
The workflow combines passive enumeration (subfinder), active validation (shuffledns, dnsx), and vulnerability scanning (nuclei), optimized for bug bounty/recon pipelines with speed controls (-rate) and severity filters.
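Put together, the stages above reconstruct the full chain. It is shown here as separate commands for readability; the original one-liner would simply join them (for example with &&), since each stage reads the file the previous one wrote:

```
subfinder -d redacted.com -all | anew subs.txt
shuffledns -d redacted.com -r resolvers.txt -w n0kovo_subdomains_huge.txt | anew subs.txt
dnsx -l subs.txt -r resolvers.txt | anew resolved.txt
naabu -l resolved.txt -nmap -rate 5000 | anew ports.txt
httpx -l ports.txt | anew alive.txt
katana -list alive.txt -kf all -jc | anew urls.txt
nuclei -l urls.txt -es info,unknown -ept ssl -ss template-spray | anew nuclei.txt
```

Note how the intermediate files (subs.txt, resolved.txt, ports.txt, alive.txt, urls.txt) are what make the chain restartable: you can rerun any single stage without redoing the ones before it.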
A more in-depth explanation
Let's break down this command chain into bite-sized pieces. Imagine you're exploring a giant mansion (your target domain): we'll start by finding all the hidden doors (subdomains), check which ones are unlocked (active services), and finally search for treasure chests (vulnerabilities).
Phase 1: Subdomain Discovery (Mapping the Mansion)
Tool: subfinder -d redacted.com -all | anew subs.txt
This is your digital metal detector. Subfinder queries 50+ public sources like Shodan and VirusTotal to find every possible subdomain (*.redacted.com). The -all flag ensures no stone is left unturned.
Pro Tip: anew acts like your assistant, neatly organizing new findings into subs.txt without duplicates.
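To see what anew is doing under the hood, here is a rough stand-in built from standard POSIX tools (illustrative only, use the real anew in practice; the function name append_new and the demo file subs_demo.txt are made up for this sketch):

```shell
# Append stdin lines to a file only if they are not already present,
# echoing just the new ones -- roughly what anew does.
append_new() {
  out="$1"
  touch "$out"
  while IFS= read -r line; do
    if ! grep -Fxq -- "$line" "$out"; then
      printf '%s\n' "$line" >> "$out"   # store the new finding
      printf '%s\n' "$line"             # anew also echoes new lines downstream
    fi
  done
}

# First run seeds the file; second run only lets the new entry through.
printf 'a.example.com\nb.example.com\n' | append_new subs_demo.txt
printf 'b.example.com\nc.example.com\n' | append_new subs_demo.txt
```

The second invocation prints only c.example.com, because a duplicate (b.example.com) is silently dropped while the file keeps growing with unique entries.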
Phase 2: DNS Validation (Testing the Door Locks)
Tool: shuffledns -d redacted.com -r resolvers.txt -w n0kovo_subdomains_huge.txt | anew subs.txt
Not all subdomains are active. ShuffleDNS uses a massive wordlist (n0kovo_subdomains_huge.txt) to brute-force common names (admin.redacted.com, test.redacted.com) while verifying results through reliable DNS servers (resolvers.txt).
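The brute-force half of this is conceptually simple: expand every wordlist entry into a candidate hostname, then try to resolve each one. A tiny sketch of the expansion step (the three-word demo list and the *_demo.txt filenames are made up; the real run uses n0kovo_subdomains_huge.txt):

```shell
# Expand a wordlist into candidate hostnames, shuffledns-style.
printf 'admin\ntest\nstaging\n' > words_demo.txt
while IFS= read -r w; do
  printf '%s.redacted.com\n' "$w"
done < words_demo.txt > candidates_demo.txt
cat candidates_demo.txt
```

shuffledns then resolves each candidate through the servers in resolvers.txt, shuffling queries across them to avoid rate limits and to weed out wildcard DNS answers.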
Phase 3: Filtering Ghost Subdomains
Tool: dnsx -l subs.txt -r resolvers.txt | anew resolved.txt
This separates real subdomains from false positives. DNSx acts like a bouncer, only letting in subdomains that properly resolve to IP addresses. Your clean list goes to resolved.txt.
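As a hedged stand-in for that bouncer role, you can reproduce the resolve-or-reject check with getent from the standard C library tools (the hostnames and *_demo.txt files are illustrative; no-such-host.invalid can never resolve, since .invalid is reserved by RFC 2606):

```shell
# Keep only hostnames that actually resolve to an IP address.
printf 'localhost\nno-such-host.invalid\n' > candidates2_demo.txt
while IFS= read -r host; do
  if getent hosts "$host" > /dev/null 2>&1; then
    printf '%s\n' "$host"    # resolvable: let it through
  fi                         # unresolvable: silently dropped
done < candidates2_demo.txt > resolved_demo.txt
```

dnsx does the same filtering, but asynchronously, against your chosen resolvers, and with extras like wildcard detection and record-type queries.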
Phase 4: Port Scanning (Checking Windows & Backdoors)
Tool: naabu -l resolved.txt -nmap -rate 5000 | anew ports.txt
Naabu knocks on every digital door (port) at lightning speed (5,000 packets/sec). When it finds an open port (like 80 for HTTP or 443 for HTTPS), it calls Nmap to identify services: think of it as checking whether a door leads to a kitchen or a vault.
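The knock itself is just a TCP connect attempt. Here is a toy probe in plain bash using its /dev/tcp feature (a conceptual stand-in, not a scanner; the is_open name and the host/port used are illustrative, and naabu's SYN scanning is far faster and stealthier):

```shell
# Return success if a TCP connection to host:port can be opened within 2s.
is_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if is_open 127.0.0.1 443; then
  echo "443/tcp open"
else
  echo "443/tcp closed"
fi
```

Multiply this by thousands of hosts and 65,535 ports each and you see why naabu's -rate control and raw-packet scanning matter.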
Phase 5: Web Service Discovery
Tool: httpx -l ports.txt | anew alive.txt
Not every open port has a website. HTTPx is your virtual flashlight, shining light on ports running actual web services. It checks for HTTP/HTTPS responses and saves live URLs to alive.txt.
Phase 6: Web Crawling (Exploring the Rooms)
Tool: katana -list alive.txt -kf all -jc | anew urls.txt
Katana becomes your robotic explorer, crawling every link and parsing JavaScript for endpoints (-jc). It maps out hidden paths like /login.php?redirect=admin and saves them to urls.txt. The -kf all flag also pulls paths from known files such as robots.txt and sitemap.xml, so no trail is missed.
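To get an intuition for what crawling extracts, here is a crude link grab with grep over a made-up HTML snippet (katana does far more, including rendering and JavaScript parsing; page_demo.html and urls_demo.txt exist only for this sketch):

```shell
# A fake page with one link and one script reference.
cat > page_demo.html <<'EOF'
<a href="/login.php?redirect=admin">Login</a>
<script src="/static/app.js"></script>
EOF

# Pull out href/src targets -- the raw material a crawler feeds back into its queue.
grep -oE '(href|src)="[^"]+"' page_demo.html | cut -d'"' -f2 > urls_demo.txt
cat urls_demo.txt
```

A real crawler repeats this recursively on every discovered URL, which is how a single seed host fans out into the large urls.txt that nuclei consumes next.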
Phase 7: Vulnerability Hunting (The Treasure Hunt)
Tool: nuclei -l urls.txt -es info,unknown -ept ssl -ss template-spray | anew nuclei.txt
Nuclei checks for vulnerabilities using thousands of predefined templates. We're excluding low-value info and unknown-severity findings (-es info,unknown) and SSL-protocol templates (-ept ssl) to focus on more impactful issues. The -ss template-spray scan strategy runs each template across all targets before moving on to the next, which works well over large target lists.
Why This Workflow Rocks for Beginners
Automates the boring stuff: focus on analyzing results instead of manual tasks
Progressively builds data: each tool feeds into the next like a production line
Easy to customize: swap wordlists or add new tools as you learn
Next Steps: Use the urls.txt output to manually test interesting endpoints. Check for:
Forgotten debug parameters (?debug=true)
Unprotected API routes (/api/v1/users)
Weird error messages hinting at SQLi/XSS
Remember: recon is 80% of the battle. With this pipeline, you're already ahead of 90% of new hunters!
Further reading: Subfinder deep dive, ShuffleDNS mechanics, workflow design principles, Katana crawling capabilities.