Passive Sources
Passive subdomain enumeration is a technique of querying passive DNS datasets provided by various third-party services to obtain the subdomains of a particular target. Here we don't send any active probes to the target; instead, we passively scrape information that is already available on the internet.
There are quite a number of third-party services that provide such datasets to query. It's difficult to query all of these services manually, so various tools have been developed to automate the process.
It's highly recommended to read the linked section first before proceeding further.
Passive DNS enumeration tools
Internet Archive
GitHub Scraping
GitLab Scraping
Language: Go
Total Passive Sources: 82
Installation:
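A common way to install it, assuming a working Go toolchain with $GOPATH/bin on your PATH (the module path below matches Amass v3; check the project README for your version):

```bash
# Install OWASP Amass v3 from source via Go modules
go install -v github.com/owasp-amass/amass/v3/...@master
```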
Setting up Amass config file:
To make it possible for Amass to query these passive DNS datasets, we need to set up the API keys of those services in the Amass configuration file.
By default, the Amass config file is located at $HOME/.config/amass/config.ini
Now let's set up our API keys in the config.ini config file.
Open the config file in a text editor, then uncomment the required lines and add your API keys.
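For illustration, an enabled data-source entry looks roughly like this; the section names follow the template config shipped with Amass v3, and the key is a placeholder:

```bash
# Print a sample of an uncommented data source section from config.ini
cat << 'EOF'
[data_sources.SecurityTrails]
[data_sources.SecurityTrails.Credentials]
apikey = YOUR_SECURITYTRAILS_API_KEY
EOF
```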
After setting up the API keys, we are good to run Amass.
Flags:-
enum - Perform DNS enumeration
passive - passively collect information through the data sources mentioned in the config file.
config - Specify the location of your config file (default: $HOME/.config/amass/config.ini)
o - Output filename
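Putting these flags together, a typical passive run looks like this (example.com is a placeholder target):

```bash
# Passive-only enumeration using the data sources configured in config.ini
amass enum -passive -d example.com -config $HOME/.config/amass/config.ini -o amass_passive.txt
```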
Language: Go
Total Passive Sources: 38
Installation:
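One common way to install it, assuming a working Go toolchain (this is the module path ProjectDiscovery documents for Subfinder v2):

```bash
# Install subfinder v2 via Go modules
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
```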
Setting up Subfinder configuration file:
Subfinder's default config file location is at $HOME/.config/subfinder/provider-config.yaml
After your first installation, if you don't find the configuration file populated by default, run the subfinder command once more in order to get it generated.
The Subfinder config file follows YAML (YAML Ain't Markup Language) syntax, so you need to be careful not to break the syntax. It's best to use a text editor with YAML syntax highlighting set up.
Example config file:-
Some passive sources, like Censys and PassiveTotal, use 2 keys in combination to authenticate a user. For such services, both values need to be mentioned with a colon (:) between them. (Check how the "Censys" source values are given as APP-ID:Secret in the example below.)
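A minimal sketch of what that part of provider-config.yaml might look like; the keys are placeholders and the source names follow Subfinder's own template config:

```bash
# Print a sample provider-config.yaml fragment; note the APP-ID:SECRET pair for Censys
cat << 'EOF'
censys:
  - APP-ID:SECRET
securitytrails:
  - YOUR_SECURITYTRAILS_API_KEY
virustotal:
  - YOUR_VIRUSTOTAL_API_KEY
EOF
```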
Subfinder automatically detects its config file only if it is at the default location.
Flags:-
d - Specify our target domain
all - Use all passive sources (slow enumeration but more results)
config - Config file location
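A typical run that queries every configured source might look like this (example.com is a placeholder):

```bash
# Use all passive sources (slower, but more results) and save the output
subfinder -d example.com -all -o subfinder.txt
```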
Language: Go
Total passive sources: 9
Running:
Language: Rust
Total Passive sources: 21
Installation:-
Configuration:-
You need to define the API keys in your .bashrc or .zshrc.
Findomain will pick them up automatically.
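For example (a sketch; the exact variable names come from Findomain's documentation and may change between versions, and the values are placeholders):

```bash
# Add to ~/.bashrc or ~/.zshrc so Findomain can read the keys on every run
export findomain_virustotal_token="YOUR_VIRUSTOTAL_API_KEY"
export findomain_securitytrails_token="YOUR_SECURITYTRAILS_API_KEY"
export findomain_fb_token="YOUR_FACEBOOK_APP_TOKEN"
```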
Flags:-
t - Target domain
u - Output file
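A basic run (example.com is a placeholder target):

```bash
# Enumerate subdomains of example.com and write the unique results to a file
findomain -t example.com -u findomain.txt
```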
Internet archives deploy their own web crawlers and indexing systems that crawl websites across the internet, so they hold historical data on websites that once existed. Hence, internet archives can be a useful source for grabbing subdomains of a particular target that once existed, on which we can later perform permutations (more on this later) to get more valid subdomains.
When queried, internet archives give back URLs. Since we are only concerned with the subdomains, we need to process those URLs to keep only the unique FQDNs.
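A rough sketch of that post-processing step, assuming the archived URLs have already been collected into a file (for example by the tools below):

```bash
# urls.txt holds archived URLs; strip the scheme, then the path/port/query,
# leaving only unique hostnames (FQDNs)
sed -E 's#^[a-zA-Z]+://##; s#[/:?].*$##' urls.txt | sort -u > archive_subdomains.txt
```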
Language: Go
Sources:
Flags:
threads - How many workers to spawn
subs - Include subdomains of the target domain
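Assuming the tool described here is gau (its description appears further below), a typical invocation might look like this (example.com is a placeholder):

```bash
# Fetch archived URLs for example.com and its subdomains using 10 workers
gau --threads 10 --subs example.com > gau_urls.txt
```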
Language: Go
Sources:
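Assuming this is waybackurls (described further below), it reads domains from stdin, so a run can be as simple as:

```bash
# Print archived URLs for example.com from the Wayback Machine
echo "example.com" | waybackurls > wayback_urls.txt
```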
Language: Go
Organizations sometimes host their source code on GitHub, and employees working at these organizations sometimes leak source code on GitHub. Additionally, I have come across instances where security researchers host their reconnaissance data in public repositories. The tool github-subdomains can help you extract these exposed/leaked subdomains of your target from GitHub.
Installation:
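One way to install it, assuming the widely used Go implementation by gwen001 (the module path below is that repository and is an assumption on my part):

```bash
# Install github-subdomains via Go modules (assumes gwen001's implementation)
go install github.com/gwen001/github-subdomains@latest
```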
For github-subdomains to scrape domains from GitHub, you need to specify a list of GitHub access tokens.
These access tokens are used by the tool to perform searches and find subdomains on your behalf.
I recommend creating at least 10 tokens from each of 3 different accounts (30 in total) to avoid rate limiting.
Specify 1 token per line.
Running github-subdomains:
Flags:
d - target
t - file containing tokens
o - output file
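Putting the flags together (example.com and the file names are placeholders):

```bash
# Search GitHub for subdomains of example.com using the tokens listed in tokens.txt
github-subdomains -d example.com -t tokens.txt -o github_subdomains.txt
```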
This internet-wide DNS dataset could be an excellent resource for us to grab our subdomains, right? But querying such large datasets can take significant time. That's where Crobat comes to the rescue.
Language: Go
Flags:
s - Target Name
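A basic query (example.com is a placeholder):

```bash
# Query the Crobat API (indexed Rapid7 Sonar data) for subdomains of example.com
crobat -s example.com > crobat_subdomains.txt
```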
Author:
Amass is a Swiss army knife for subdomain enumeration and one of the best performers at passive enumeration. It queries the largest number of third-party services, which results in more subdomains for a particular target. Its documentation lists the passive services that Amass queries.
Since Amass is written in Go, you need your Go environment properly set up (see the linked guide to set up a Go environment).
Refer to my Amass config file for reference.
To learn how to create the API keys, check out the linked guide.
Refer to the linked example (this is exactly how your Amass config file should look).
Tip: After configuring your config file, you can verify whether the API keys have been set up correctly with this command: amass enum -list -config config.ini
Author:
Subfinder is yet another great tool that one should have in their pipeline. There are some unique sources that Subfinder queries that Amass doesn't. The tool has been developed by the famous ProjectDiscovery team, whose tools are used by almost every bug bounty hunter.
Refer to my Subfinder config file for reference.
Tip: To view the sources that require API keys, run the subfinder -ls command.
Author:
I don't know why I included this tool; maybe just because it's built by a legend. It doesn't give any unique subdomains compared to the other tools, but it's extremely fast.
Author:
Findomain is one of the standard subdomain finder tools in the industry and another extremely fast enumeration tool. It also has a paid version that offers many more features, such as subdomain monitoring, resolution, and lower resource consumption.
Depending on your architecture, download the appropriate binary from the releases page.
For this, we use a tool that helps extract the domain name from a list of URLs.
Author:
Gau works by querying all of the above 4 internet archive services and grabbing all the URLs that their internet-wide crawlers have ever crawled. Through this process we get tons of URLs belonging to our target that once existed. After collecting the URLs, we extract only the domain/subdomain part from them.
Author:
Waybackurls works similarly to Gau, but I have found that it returns some unique data that Gau couldn't find. Hence, we need to include waybackurls in our arsenal.
Author:
Configuring github-subdomains:
The linked article explains how you can generate your GitHub access tokens.
Project Sonar is a security research project by Rapid7 that conducts internet-wide scans. Rapid7 has been generous enough to make this data freely available to the public. Project Sonar contains datasets with a total size of over 66.6 TB, which are updated on a regular basis. You can read about how to parse these datasets on your own using the linked guide.
Author:
He has done excellent work parsing and indexing the whole Rapid7 Sonar dataset into MongoDB and creating an API to query this database. This Crobat API is freely available. Moreover, he developed a command-line tool that uses this API and returns results at blazing speed.