Passive Sources


What is passive subdomain enumeration?

Passive subdomain enumeration is a technique of querying passive DNS datasets provided by services like SecurityTrails, Censys, Shodan, BinaryEdge, VirusTotal, Whoisxmlapi, etc. to obtain the subdomains of a particular target. Here we don't send any active probes to the target; instead, we passively scrape information that is already available on the internet.

There are in total around 90 passive DNS sources/services that provide such datasets to query. It's difficult to query these third-party services manually, so various tools have been developed to automate the process.
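To get a feel for what these tools automate, here is a minimal sketch of querying a single passive source by hand, using AlienVault OTX's passive DNS endpoint (one of the sources configured later on this page). The exact response format and rate limits are assumptions, and jq is used purely for illustration:

# Query one passive DNS source manually and keep only the unique hostnames
curl -s "https://otx.alienvault.com/api/v1/indicators/domain/example.com/passive_dns" | jq -r '.passive_dns[].hostname' | sort -u

The tools below repeat this kind of query across dozens of sources and merge the results for you.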

It's highly recommended to read the earlier sections first before proceeding further.

In this section, we will cover the following categories of passive sources:

  1. Passive DNS enumeration tools

  2. Internet Archive

  3. Github Scraping

  4. GitLab Scraping

A) Passive DNS gathering tools

1) Amass

Author: OWASP (mainly caffix)

Amass is a Swiss army knife for subdomain enumeration and one of the best performers at passive enumeration. It queries the largest number of third-party services, which results in more subdomains for a particular target.

  • Language: Go

  • Total Passive Sources: 82

Installation:

Since Amass is written in Go, you need your Go environment properly set up.

go install -v github.com/owasp-amass/amass/v3/...@master

Setting up Amass config file:

  • To make it possible for Amass to query the passive DNS datasets, it is necessary to set up the API keys of those services in the Amass configuration file.

  • By default, the amass config file is located at $HOME/.config/amass/config.ini

  • Now let's set up our API keys in the config.ini file. To learn how to create an API key for each service, refer to that service's documentation.

  • Open the config file in a text editor, then uncomment the required lines and add your API keys.

# https://otx.alienvault.com (Free)
[data_sources.AlienVault]
[data_sources.AlienVault.Credentials]
apikey = dca0d4d692a6fd757107333d43d5f284f9a38f245d267b1cd72b4c5c6d5c31


# How to add 2 API keys for a single service
# https://app.binaryedge.com (Free)
[data_sources.BinaryEdge]
ttl = 10080
[data_sources.BinaryEdge.account1]
apikey = d749e0d3-ff9e-gcd0-a913-b5e62f6f216a
[data_sources.BinaryEdge.account2]
apikey = afdb97ff-t65e-r47f-bba7-c51dc5d83347

Tip: After configuring your config file, you can verify whether the API keys have been set up correctly by running: amass enum -list -config config.ini

Running Amass:

  • After setting up the API keys, we are good to run amass.

amass enum -passive -d example.com -config config.ini -o output.txt

Flags:

  • enum - Perform DNS enumeration

  • passive - passively collect information through the data sources mentioned in the config file.

  • config - Specify the location of your config file (default: $HOME/.config/amass/config.ini )

  • o - Output filename

2) Subfinder

Author: projectdiscovery

Subfinder is yet another great tool that one should have in their pipeline. It queries some unique sources that Amass doesn't. The tool is developed by the well-known ProjectDiscovery team, whose tools are used by nearly every bug bounty hunter.

  • Language: Go

  • Total Passive Sources: 38

Installation:

go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest

Setting up Subfinder configuration file:

  • Subfinder's default config file location is $HOME/.config/subfinder/provider-config.yaml

  • If the configuration file was not populated by default after your first installation, run subfinder once more to get it generated.

  • The Subfinder config file follows YAML (YAML Ain't Markup Language) syntax, so be careful not to break the syntax. It's best to use a text editor with syntax highlighting.

Example config file:

  • Some passive sources like Censys and PassiveTotal use 2 keys in combination to authenticate a user. For such services, both values need to be specified with a colon (:) between them. (Check how the Censys values are specified as APP-ID:Secret in the example below.)

  • Subfinder automatically detects its config file only if it is at the default location.

securitytrails: []
censys:
  - ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
shodan:
  - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
  - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
  - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
passivetotal:
  - sample-email@user.com:password123

Tip: To view the sources that require API keys, run the subfinder -ls command.

Running Subfinder:

subfinder -d example.com -all -config config.yaml -o output.txt

Flags:

  • d - Specify our target domain

  • all - Use all passive sources (slow enumeration but more results)

  • config - Config file location

3) Assetfinder

Author: tomnomnom

I don't know why I included this tool; maybe just because it's built by the legend tomnomnom 😂. It doesn't give any unique subdomains compared to other tools, but it's extremely fast.

  • Language: Go

  • Total passive sources: 9

Installation:

go install github.com/tomnomnom/assetfinder@latest

Running:

assetfinder --subs-only example.com > output.txt

4) Findomain

Author: Edu4rdSHL

Findomain is one of the standard subdomain finder tools in the industry and another extremely fast enumeration tool. It also has a paid version that offers more features, like subdomain monitoring, resolution, and lower resource consumption.

  • Language: Rust

  • Total Passive sources: 21

Installation:

Download the binary for your architecture from the Findomain releases page, for example:

wget -N -c https://github.com/Findomain/Findomain/releases/download/9.0.0/findomain-linux.zip
unzip findomain-linux.zip
mv findomain /usr/local/bin/findomain
chmod 755 /usr/local/bin/findomain

Configuration:

  • You need to define your API keys in your .bashrc or .zshrc.

  • Findomain will pick them up automatically.

export findomain_virustotal_token="API_KEY"
export findomain_fb_token="API_KEY"
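After adding these export lines, reload your shell configuration (or open a new terminal) so Findomain can actually see the variables. A minimal sketch, assuming you use zsh:

# reload the shell config so the exported findomain_* variables take effect
source ~/.zshrc   # or: source ~/.bashrc
env | grep findomain_   # confirm the keys are exported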

Running Findomain:

findomain -t example.com -u output.txt

Flags:

  • t - Target domain

  • u - Output file

B) Internet Archives

Internet archives deploy their own web crawlers and indexing systems that crawl websites across the internet, so they hold historical data about sites that once existed. This makes them a useful source for grabbing subdomains of a target that existed at some point; later we can perform permutations on them (more on this later) to find more valid subdomains.

When queried, internet archives give back URLs. Since we are only concerned with subdomains, we need to process those URLs and keep only the unique FQDNs. For this we use a tool called unfurl, which extracts the domain name from a list of URLs.
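As a rough illustration of what the tools below do, here is a minimal sketch that queries the Wayback Machine CDX API directly and pipes the returned URLs through unfurl; the parameters shown are standard CDX options, but treat the exact query as an assumption:

# pull archived URLs for *.example.com and reduce them to unique subdomains
curl -s "http://web.archive.org/cdx/search/cdx?url=*.example.com/*&fl=original&collapse=urlkey" | unfurl -u domains | sort -u -o wayback_subs.txt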

5) Gau

Author: lc

Gau works by querying the internet archive services listed below and grabbing all the URLs their internet-wide crawlers have ever crawled. Through this process we get tons of URLs belonging to our target that once existed; after collecting them, we extract only the domain/subdomain part.

  • Language: Go

  • Sources: web.archive.org, index.commoncrawl.org, otx.alienvault.com, urlscan.io

Installation:

go install github.com/lc/gau/v2/cmd/gau@latest

Running gau:

gau --threads 5 --subs example.com |  unfurl -u domains | sort -u -o output_unfurl.txt

Flags:

  • threads - How many workers to spawn

  • subs - Include subdomains of the target domain

6) Waybackurls

Author: tomnomnom

Waybackurls works similarly to Gau, but I have found that it returns some unique data that Gau misses. Hence, we include waybackurls in our arsenal as well.

  • Language: Go

  • Sources: web.archive.org, index.commoncrawl.org, www.virustotal.com

Installation:

go install github.com/tomnomnom/waybackurls@latest

Running Waybackurls:

waybackurls example.com |  unfurl -u domains | sort -u -o output.txt
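Since Gau and Waybackurls overlap but each finds some URLs the other misses, it can help to run both and merge the deduplicated results. A minimal sketch (the output filename is just an example):

# run both archive tools and keep only the unique subdomains they return
gau --threads 5 --subs example.com | unfurl -u domains > archive_subs.txt
waybackurls example.com | unfurl -u domains >> archive_subs.txt
sort -u archive_subs.txt -o archive_subs.txt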

C) GitHub Scraping

7) Github-subdomains

Author: gwen001

  • Language: Go

Organizations sometimes host their source code on GitHub, and employees working at these organizations sometimes leak source code there. Additionally, I have come across instances where security researchers host their reconnaissance data in public repositories. The tool github-subdomains can help you extract these exposed/leaked subdomains of your target from GitHub.

Installation:

go install github.com/gwen001/github-subdomains@latest

Configuring github-subdomains:

  • For github-subdomains to scrape subdomains from GitHub, you need to supply a list of GitHub personal access tokens (you can generate these from your GitHub account's developer settings).

  • These access tokens are used by the tool to perform searches and find subdomains on your behalf.

  • I recommend creating at least 10 tokens from 3 different accounts (30 in total) to avoid rate limiting.

  • Specify 1 token per line, as shown in the example below.
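A hypothetical tokens.txt would look like this (the values are placeholders, not real tokens; don't add comments or blank lines to the real file):

ghp_AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ghp_BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
ghp_CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC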

Running github-subdomains:

github-subdomains -d example.com -t tokens.txt -o output.txt

Flags:

  • d - target

  • t - file containing tokens

  • o - output file

D) Rapid7 Project Sonar dataset (deprecated)

Project Sonar is a security research project by Rapid7 that conducts internet-wide scans, and Rapid7 has generously made this data freely available to the public. It contains 8 different datasets with a total size of over 66.6 TB, updated on a regular basis.

This internet-wide DNS dataset could be an excellent resource for grabbing subdomains, right? But querying such large datasets on your own can take a significant amount of time. That's where Crobat comes to the rescue.

8) Crobat

Author: Cgboal

Cgboal has done an excellent job of parsing and indexing the whole Rapid7 Sonar dataset into MongoDB and creating an API to query this database. This Crobat API is freely available at https://sonar.omnisint.io/. Moreover, he developed a command-line tool that uses this API and returns results at blazing speed.

  • Language: Go

Installation:

go get github.com/cgboal/sonarsearch/cmd/crobat

Running:

crobat -s example.com > output.txt

Flags:

  • s - Target Name
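Once all the passive tools have finished, it usually pays to merge their results into a single deduplicated list for the later resolution and probing steps. A minimal sketch, assuming each tool wrote to its own output*.txt file as in the examples above:

# merge every passive result file into one sorted, deduplicated list
cat output*.txt | sort -u > all_passive_subs.txt
wc -l all_passive_subs.txt   # quick count of unique subdomains collected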


That's it !!! Done with passive things

Liked my work? Don't hesitate to buy me a coffee XDD

https://www.buymeacoffee.com/siddheshparab