Waybackurls and Discovering Parameters
Archived web pages refer to copies or snapshots of websites captured at specific points in time. These snapshots are created and stored by web archiving services, such as the Wayback Machine by the Internet Archive. Archiving web pages allows for the preservation of web content, enabling users to access and reference past versions of websites.
Waybackurls is a command-line tool that extracts URLs from the Wayback Machine’s archived web pages. It allows you to retrieve historical snapshots of websites and gather valuable information for bug bounty recon and other security testing purposes. Here’s how you can use Waybackurls:
You can use the following command to save the output to a file called urls.txt:
waybackurls example.com > urls.txt
Now you can use grep to find sensitive endpoints and information disclosed via the archives. You can search for keywords like admin, user, email, token, keys, etc.
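As a quick sketch (the URL list below is fabricated for illustration, with example.com standing in for a real target), a keyword pass over the archive output might look like this:

```shell
# Hypothetical sample of waybackurls output, for illustration only
printf '%s\n' \
  'https://example.com/admin/login' \
  'https://example.com/assets/style.css' \
  'https://example.com/api/v1?token=abc123' > urls.txt

# Case-insensitive search for sensitive keywords in the archived URLs
grep -iE 'admin|user|email|token|key' urls.txt
```

Swap in whatever keywords fit your target; the admin and token lines above are the kind of hits worth investigating.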
Finding parameters from Waybackurls
By discovering and analyzing parameters, you can assess how they are handled, validated, or sanitized by the application. This can lead to the identification of vulnerabilities such as SQL injection, cross-site scripting (XSS), command injection, path traversal, and more.
You can use the following regex to grep out parameter names and save them as a wordlist, removing duplicates along the way:
grep -oP '(?<=\?|&)\w+(?==|&)' urls.txt | sort -u
As you can see, this builds a wordlist of parameter names from a list of URLs.
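To illustrate the full pipeline on fabricated data (the URLs below are made up for the example; note that `-P` requires GNU grep's PCRE support):

```shell
# Hypothetical archived URLs with query strings, for illustration only
printf '%s\n' \
  'https://example.com/search?q=shoes&page=2' \
  'https://example.com/item?id=5&q=bag' \
  'https://example.com/item?id=7' > urls.txt

# Extract unique parameter names into a wordlist:
# (?<=\?|&)  the name must follow a "?" or "&"
# \w+        the parameter name itself
# (?==|&)    and must be followed by "=" or "&"
grep -oP '(?<=\?|&)\w+(?==|&)' urls.txt | sort -u > params.txt
cat params.txt
# id
# page
# q
```

The resulting params.txt can then be used as a parameter wordlist in a fuzzer of your choice when probing how the application handles each parameter.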
Using this technique, I have earned a lot of bounties for bugs like information disclosure and broken access control. Give it a try.
Also, please suggest what I should write about in upcoming blogs.
You can now buy me a coffee: https://www.buymeacoffee.com/r0074g3n7
In my previous blogs I have covered the following:
Bug Bounty Recon (Part-2)
Previous Part: https://aswinthambipanik07.medium.com/bug-bounty-recon-part-1-dad7f86d1b0f