Scripting My Custom Automation Tool for Web Application Hacking & Reconnaissance

Subh Dhungana
6 min read · Nov 8, 2022


I built an automation script for my web application hacking and reconnaissance tasks, and in this post I'm sharing it. I've found the tool extremely useful and time-saving. Below I walk through the chunks of code I wrote to create it, explaining each part as I go.

#!/bin/bash
# Target domain is taken as the first argument; all results go into <domain>_recon/
DOMAIN=$1
DIRECTORY=${DOMAIN}_recon
mkdir -p "$DIRECTORY"

The chunk above sets up two variables, DOMAIN and DIRECTORY. DOMAIN takes the target domain from the user as the first command-line argument, and DIRECTORY is the folder (named <domain>_recon) that will hold all the recon output. Running this part produces a single new folder.
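As a small optional hardening step (not part of the original script), the script can bail out early when no domain is supplied. This is only a sketch, assuming the script is saved as recon.sh:

# Hypothetical guard, shown for illustration only
if [ -z "$1" ]; then
    echo "Usage: ./recon.sh example.com"
    exit 1
fi

With that in place, running ./recon.sh example.com creates example.com_recon/ and the rest of the script fills it with results.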

#assetfinder
echo "Now, Assetfinder"
assetfinder --subs-only "$DOMAIN" | tee "$DIRECTORY"/ast.txt
echo " "
#subfinder
echo "Now, Subfinder"
subfinder -d "$DOMAIN" | tee "$DIRECTORY"/subf.txt
echo " "
#amass
echo "Now, Amass"
amass enum --passive -d "$DOMAIN" | tee "$DIRECTORY"/amass.txt

The first step is subdomain enumeration. I use three tools for it:

i) assetfinder

ii) subfinder

iii) amass

There's nothing unusual in this chunk. The echo commands simply print progress lines, and each tool (assetfinder, subfinder, amass) is run according to its documented usage on GitHub.
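If the tools aren't installed yet, they can typically be fetched with Go. The module paths below reflect the install instructions around the time of writing and may differ for newer versions, so treat this as a sketch and check each project's README:

# Typical Go-based installation (paths may vary by version)
go install github.com/tomnomnom/assetfinder@latest
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
# amass path below is the v3-era instruction; see the amass README for the current one
go install -v github.com/OWASP/Amass/v3/...@master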

These commands store their results inside the recon directory, in three text files:

i) ast.txt

ii) subf.txt

iii) amass.txt.

#arranging files
echo " Arranging Subdomains into one"
cat $DIRECTORY/ast.txt $DIRECTORY/subf.txt $DIRECTORY/amass.txt | sort -u | tee $DIRECTORY/subdomains.txt
rm -rf $DIRECTORY/ast.txt $DIRECTORY/subf.txt $DIRECTORY/amass.txt
echo "Done "
echo " "

At this point we have three intermediate files:

i) ast.txt

ii) subf.txt

iii) amass.txt

The code above combines those three results into a single file, subdomains.txt, and then deletes the three intermediate files (ast.txt, subf.txt, amass.txt).

We're then left with one result, subdomains.txt, which holds the combined subdomain enumeration output of all three tools.
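A quick sanity check I sometimes run at this point (not part of the script itself) is simply counting how many unique subdomains were collected:

# Count the deduplicated subdomains gathered so far
wc -l "$DIRECTORY"/subdomains.txt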

#httpx and httprobe
cat $DIRECTORY/subdomains.txt | httpx | tee $DIRECTORY/livesubdomains.txt
echo "httpx done"
echo "Now, filtering live subdomains and scanning open ports like 81, 8080, 8000, 8443"
cat $DIRECTORY/subdomains.txt | httprobe -p http:81 -p http:8000 -p http:8080 -p https:8443 -c 50 | tee $DIRECTORY/httprobeOpenPorts.txt
echo "httprobe done"
cat $DIRECTORY/livesubdomains.txt | httpx -title -status-code -fr -o $DIRECTORY/httpxSubdomains.txt

Now I have the subdomains.txt file. I need to filter out the dead subdomains and keep only the live ones, so the code above feeds every entry of subdomains.txt into httpx and writes the hosts that respond to livesubdomains.txt. That file now holds only live subdomains.

The next line probes alternative web ports with httprobe (again, echo just prints progress lines). Hosts that answer on ports 81, 8000, 8080, or 8443 are written to httprobeOpenPorts.txt.

Finally, another httpx pass records each live subdomain's HTTP status code and page title, following redirects, and stores that overview in httpxSubdomains.txt (a small follow-up filter on this file is sketched after the list below).

So these commands produce three results:

i) livesubdomains.txt (only live subdomains will be stored in this file)

ii) httprobeOpenPorts.txt (subdomain endpoints with an open alternative port will be stored in this file)

iii) httpxSubdomains.txt (live subdomains with their title description will be stored in this file)
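As an example of how I use httpxSubdomains.txt afterwards, it can be grepped for specific status codes. This is only a sketch and assumes httpx's default "URL [status] [title]" line format, which can vary between versions:

# Pull out only the hosts that returned 200 OK
grep "\[200\]" "$DIRECTORY"/httpxSubdomains.txt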

#aquatone
echo " Now, screenshot using aquatone"
echo " "
cd $DIRECTORY
mkdir aquatoneDatas
cd aquatoneDatas
cat ../livesubdomains.txt | aquatone
echo " "
echo " aquatone done"
cd ../../

All the live subdomains are now stored in livesubdomains.txt. The chunk above feeds that file into aquatone, which takes a screenshot of every live subdomain and writes its output into the aquatoneDatas folder.
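Aquatone also generates an HTML report alongside the screenshots, so after the run it can be opened straight from the output folder. A sketch, assuming a desktop environment with xdg-open available:

# Open aquatone's HTML report in the default browser
xdg-open "$DIRECTORY"/aquatoneDatas/aquatone_report.html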

#waybackurls
echo " Now Waybackurls"
cat $DIRECTORY/livesubdomains.txt | waybackurls | tee $DIRECTORY/wayback.txt
echo "waybackurls done "
echo " "
#gau
echo "Now Gau"
cat $DIRECTORY/livesubdomains.txt | gau | tee $DIRECTORY/gau.txt
echo " "
echo "Gau done"
#arranging content discovery files
echo "Now, arranging content discovery files"
cat $DIRECTORY/wayback.txt $DIRECTORY/gau.txt | sort -u | tee $DIRECTORY/urls.txt
rm -rf $DIRECTORY/wayback.txt $DIRECTORY/gau.txt
cat $DIRECTORY/urls.txt | uro | httpx -mc 200 | tee $DIRECTORY/live_urls.txt
cat $DIRECTORY/live_urls.txt | grep "\.php" | cut -f1 -d"?" | sed 's:/*$::' | sort -u | tee $DIRECTORY/php_endpoints_urls.txt

A crucial step in web application penetration testing is collecting all of the URLs and hidden endpoints behind the live subdomains stored in livesubdomains.txt. The code above collects them into five files (the first two are intermediate and are removed after merging; a small extra filter for parameterized URLs is sketched after this list):

i) wayback.txt [ Hidden endpoints and urls will be collected in this text file ]

ii) gau.txt [ Hidden endpoints and urls will be collected in this text file ]

iii) urls.txt [ This text file will contain the combined, deduplicated endpoints from the two files above ]

iv) live_urls.txt [ This text file will have filtered live urls and endpoints from above file urls.txt. It’ll only contain live endpoints and urls]

v) php_endpoints_urls.txt [ This will contain only URLs pointing at .php endpoints ]
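One extra filter I often apply on top of live_urls.txt (not part of the script above) is pulling out just the URLs that carry query parameters, since those are the first candidates for injection testing. A minimal sketch, with param_urls.txt as a hypothetical output name:

# Keep only URLs that have a query string (i.e. contain "="), deduplicated
grep "=" "$DIRECTORY"/live_urls.txt | sort -u | tee "$DIRECTORY"/param_urls.txt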

#Gather jsfilesurls
cat $DIRECTORY/live_urls.txt | grep "\.js$" | sort -u | tee $DIRECTORY/Jsurlsfiles1.txt
echo " "
cat $DIRECTORY/live_urls.txt | subjs | sort -u | tee $DIRECTORY/Jsurlsfiles2.txt
cat $DIRECTORY/Jsurlsfiles1.txt $DIRECTORY/Jsurlsfiles2.txt | sort -u | tee $DIRECTORY/js_urls_files.txt
rm -rf $DIRECTORY/Jsurlsfiles1.txt $DIRECTORY/Jsurlsfiles2.txt
echo "js files scan completed"
#linkfinder
echo "Now linkfinder"
echo " "
cat $DIRECTORY/js_urls_files.txt | while read url; do python3 /root/Desktop/automation_embedded_tools/LinkFinder.py -d -i "$url" -o cli; done | tee $DIRECTORY/js_endpoints.txt
echo " "
echo "linkfinder completed"

By now live_urls.txt contains all the live URLs and endpoints, so the next task is collecting JavaScript file URLs from it. A grep for .js produces Jsurlsfiles1.txt, the subjs tool produces Jsurlsfiles2.txt, and the two are merged and deduplicated into js_urls_files.txt (the intermediate files are then removed).

LinkFinder then walks through every JavaScript URL in js_urls_files.txt and writes the hidden links and endpoints it discovers inside those files to js_endpoints.txt.
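Once js_endpoints.txt exists, a quick keyword grep can surface the most interesting lines (API keys, tokens, internal paths). This is just an illustrative filter, not part of the original script:

# Highlight strings that hint at credentials or internal APIs
grep -iE "api[_-]?key|token|secret|admin|internal" "$DIRECTORY"/js_endpoints.txt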

# gf pattern filter
cat $DIRECTORY/live_urls.txt | gf xss | tee $DIRECTORY/gfxss.txt
cat $DIRECTORY/live_urls.txt | gf ssrf | tee $DIRECTORY/gfssrf.txt
cat $DIRECTORY/live_urls.txt | gf upload-fields | tee $DIRECTORY/gfupload.txt
cat $DIRECTORY/live_urls.txt | gf sqli | tee $DIRECTORY/gfsqli.txt
cat $DIRECTORY/live_urls.txt | gf redirect | tee $DIRECTORY/gfredirect.txt
cat $DIRECTORY/live_urls.txt | gf rce | tee $DIRECTORY/gfrce.txt
cat $DIRECTORY/live_urls.txt | gf idor | tee $DIRECTORY/gfidor.txt
cat $DIRECTORY/live_urls.txt | gf lfi | tee $DIRECTORY/gflfi.txt
echo " "
mkdir $DIRECTORY/gfTool
mv $DIRECTORY/gfxss.txt $DIRECTORY/gfssrf.txt $DIRECTORY/gfupload.txt $DIRECTORY/gfsqli.txt $DIRECTORY/gfredirect.txt $DIRECTORY/gfrce.txt $DIRECTORY/gfidor.txt $DIRECTORY/gflfi.txt $DIRECTORY/gfTool/

The gf pattern tool filters live_urls.txt for parameters commonly associated with specific vulnerability classes. These are candidate endpoints to test, not confirmed vulnerabilities. The results are stored in the files below and then moved into the gfTool folder (a quick follow-up example is sketched after the list):

i) gfxss.txt [ potential XSS endpoints ]

ii) gfssrf.txt [ potential SSRF endpoints ]

iii) gfupload.txt [ endpoints with upload fields ]

iv) gfsqli.txt [ potential SQLi endpoints ]

v) gfredirect.txt [ potential open redirect endpoints ]

vi) gfrce.txt [ potential RCE endpoints ]

vii) gfidor.txt [ potential IDOR endpoints ]

viii) gflfi.txt [ potential LFI endpoints ]
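As an example of what can be done with these lists afterwards, the XSS candidates can be piped straight into a scanner such as dalfox in pipe mode. This assumes dalfox is installed and is not part of the script above:

# Feed gf's XSS candidates into dalfox in pipe mode
cat "$DIRECTORY"/gfTool/gfxss.txt | dalfox pipe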

#dirsearch
echo "Now dirsearch"
python3 /root/Desktop/automation_embedded_tools/dirsearch/dirsearch.py -u www.$DOMAIN -o $DIRECTORY/dirsearchResult.txt
# nuclei basic use
echo "Now using nuclei tool"
cat $DIRECTORY/livesubdomains.txt | nuclei -c 100 -silent -t /root/Desktop/automation_embedded_tools/nuclei-templates/ | tee $DIRECTORY/Nucleiresults.txt

Next, I wanted to discover additional hidden web directories on the main target domain, so the script runs dirsearch against www.$DOMAIN and stores the result in dirsearchResult.txt.

I've also added nuclei to the script. It takes livesubdomains.txt, runs the template set against every live host, and writes its findings to Nucleiresults.txt, listing the matched template, severity, and affected endpoint for each potential issue.
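When the target list is large, it can help to restrict nuclei to higher-impact templates first; nuclei's -severity flag supports this. A sketch only, since flag behaviour can vary slightly between versions:

# Run only high and critical severity templates against the live hosts
cat "$DIRECTORY"/livesubdomains.txt | nuclei -severity high,critical -silent | tee "$DIRECTORY"/Nuclei_high_critical.txt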

Thank You.

Shubham Dhungana.

Cyber Security Researcher, Pentester & Bug Bounty Hunter
