🪢Web Fuzzer & Scanner

The built-in web fuzzers are powerful tools for web fuzzing, enumeration, and brute-forcing. They can also act as an alternative to many of the CLI-based fuzzers we use, like ffuf, dirbuster, gobuster, and wfuzz, among others.

Burp's web fuzzer is called Burp Intruder.

First, we capture our request, then right-click on it and select Send to Intruder, or use the shortcut [CTRL+I] to send it to Intruder.

The second tab, 'Positions', is where we place the payload position pointer, i.e., the point where words from our wordlist will be inserted and iterated over.

To check whether a web directory exists, we fuzz the path in 'GET /DIRECTORY/', such that existing directories return 200 OK, while non-existent ones return 404 NOT FOUND.
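
As a rough illustration of what Intruder does here, a minimal Python sketch of the same directory-fuzzing logic could look like this (the target URL is a placeholder, and the wordlist path is just an example):

```python
import requests

# Hypothetical target and wordlist path - adjust to your own setup
TARGET = "http://SERVER_IP:PORT"
WORDLIST = "/opt/useful/SecLists/Discovery/Web-Content/common.txt"

with open(WORDLIST) as f:
    words = [line.strip() for line in f if line.strip()]

for word in words:
    # The same request Intruder would send for each payload: GET /DIRECTORY/
    resp = requests.get(f"{TARGET}/{word}/")
    if resp.status_code == 200:  # existing directories return 200 OK
        print(f"[+] Found: /{word}/")
    # anything else (e.g., 404 NOT FOUND) is ignored
```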

We start by selecting the word we want to fuzz and then clicking the "Add" button.

The final thing to select in the Positions tab is the Attack Type. The attack type defines how many payload pointers are used and determines which payload is assigned to which position. For simplicity, we'll stick to the first type, Sniper, but there is also Cluster Bomb, which is very useful when fuzzing multiple positions at once.
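
To make the difference concrete, here is a rough sketch (not Burp's actual code) of how Sniper and Cluster Bomb iterate over payload positions, using made-up payload sets:

```python
from itertools import product

# Illustrative payload sets for two positions, e.g., a username and a password field
usernames = ["admin", "guest"]
passwords = ["123456", "password"]

# Sniper: a single payload set, fuzzing one position at a time
# while the other positions keep their original value
for u in usernames:
    print(f"Sniper request       -> username={u}, password=<original value>")

# Cluster Bomb: a payload set per position, trying every combination across all positions
for u, p in product(usernames, passwords):
    print(f"Cluster Bomb request -> username={u}, password={p}")
```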

On the third tab, 'Payloads', we get to choose and customize our payloads/wordlists.

We can choose from several payload types, such as:

  • Simple List: The basic and most fundamental type. We provide a wordlist, and Intruder iterates over each line in it.

  • Runtime file: Similar to Simple List, but loads line-by-line as the scan runs to avoid excessive memory usage by Burp.

  • Character Substitution: Lets us specify a list of characters and their replacements, and Burp Intruder tries all potential permutations.
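
As a rough sketch of the Character Substitution idea (the substitution map and base word below are made up for illustration), generating every combination of replacements could look like this:

```python
from itertools import product

# Hypothetical substitution rules: each character may stay as-is or be replaced
subs = {"a": "@", "o": "0", "s": "$"}
word = "password"

# For every character, allow either the original or its substitute (if one exists)
choices = [(c, subs[c]) if c in subs else (c,) for c in word]

# Print all permutations of substitutions, e.g., password, p@ssword, pa$$word, ...
for combo in product(*choices):
    print("".join(combo))
```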

Let's say we added our payload position and selected the Simple List payload type. We can now load a wordlist in the Payload Options, whether we create one, download one, or use an existing one such as /opt/useful/SecLists/Discovery/Web-Content/common.txt

There are plenty of other options as well, such as Grep - Match to flag responses containing specific expressions, or filtering the results by status code (e.g., 200).

And when everything is set up, we can launch the attack and watch the results for payloads that get a hit:

Burp Scanner

An essential feature of web proxy tools is their web scanners. Burp Suite comes with Burp Scanner, a powerful scanner for various types of web vulnerabilities, which uses a Crawler to build the website structure and a Scanner for passive and active scanning.

To start a scan, we can right-click on a request once we locate it in the history, and then select Scan to configure the scan before we run it, or select Passive/Active Scan to quickly start a scan with the default configurations.

If we go to (Target > Site map), it shows a listing of all directories and files Burp has detected in the various requests that went through its proxy.

We saw that we can add items to the scope, but we can also remove them from the scope.

Crawler

Once we have our scope ready, we can go to the Dashboard tab and click on New Scan to configure our scan, which would be automatically populated with our in-scope items:

A Web Crawler navigates a website by following any links found in its pages, submitting any forms, and examining any requests it makes, in order to build a comprehensive map of the website.
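
The core idea can be sketched in a few lines of Python, assuming requests is available and using a naive regex for links (Burp's crawler handles forms, JavaScript, and sessions far more thoroughly):

```python
import re
from urllib.parse import urljoin

import requests

def crawl(start_url, max_pages=50):
    """Naive breadth-first crawler: follow href links that stay under the start URL."""
    seen, queue, site_map = set(), [start_url], []
    while queue and len(site_map) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue
        site_map.append(url)
        # Extract links with a simple regex (a real crawler parses HTML and JS properly)
        for href in re.findall(r'href=["\'](.*?)["\']', resp.text):
            link = urljoin(url, href)
            if link.startswith(start_url) and link not in seen:
                queue.append(link)
    return site_map

# Example usage with a placeholder URL:
# print("\n".join(crawl("http://SERVER_IP:PORT/")))
```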

We can choose to only Crawl, but we can also choose Crawl and Audit.

Passive Scanner

We can scan in two ways: a Passive Vulnerability Scan and an Active Vulnerability Scan.

Unlike an Active Scan, a Passive Scan does not send any new requests but analyzes the source of pages already visited in the target/scope and then tries to identify potential vulnerabilities.
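
As a loose illustration of the idea (these are not Burp's actual checks), a passive check only inspects responses we already have, for example flagging HTML comments or missing security headers:

```python
import re

def passive_checks(response_headers, response_body):
    """Flag potential issues without sending any new requests."""
    issues = []
    # Developer comments sometimes leak paths, credentials, or application logic
    for comment in re.findall(r"<!--(.*?)-->", response_body, re.S):
        issues.append(f"HTML comment found: {comment.strip()[:60]}")
    # Missing security headers are a classic passive finding
    for header in ("Content-Security-Policy", "X-Frame-Options"):
        if header not in response_headers:
            issues.append(f"Missing security header: {header}")
    return issues

# Example with made-up data already captured by the proxy
headers = {"Server": "Apache"}
body = "<html><!-- TODO: remove test creds admin:admin --><h1>Hello</h1></html>"
print("\n".join(passive_checks(headers, body)))
```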

To run one, we right-click on the target and select Do passive scan or Passively scan this target. The Passive Scan will start running, and its task can be seen in the Dashboard tab as well. Once the scan finishes, we can click on View Details to review the identified vulnerabilities and then select the Issue activity tab:

Active Scanner

This is the most powerful scanner in Burp. An Active Scan goes through the following steps:

  1. It starts by running a Crawl and a web fuzzer (like dirbuster/ffuf) to identify all possible pages

  2. It runs a Passive Scan on all identified pages

  3. It checks each of the identified vulnerabilities from the Passive Scan and sends requests to verify them

  4. It performs a JavaScript analysis to identify further potential vulnerabilities

  5. It fuzzes various identified insertion points and parameters to look for common web vulnerabilities like XSS, Command Injection, and SQL Injection (a rough sketch of this step follows below)
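
As a rough sketch of that last step (the endpoint, parameter name, payloads, and detection logic below are all simplified assumptions, not Burp's implementation):

```python
import requests

TARGET = "http://SERVER_IP:PORT/search"   # hypothetical endpoint
PARAM = "q"                               # hypothetical insertion point

# A few canary payloads for common vulnerability classes
payloads = {
    "XSS": "<script>alert(1)</script>",
    "SQL Injection": "' OR '1'='1",
    "Command Injection": "; id",
}

for vuln, payload in payloads.items():
    resp = requests.get(TARGET, params={PARAM: payload})
    # Very naive check: a reflected payload or an error message warrants manual verification
    if payload in resp.text or "error" in resp.text.lower():
        print(f"[!] Possible {vuln} via parameter '{PARAM}' -> verify manually")
```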

Once we run the scan, we can see this dashboard:

Once the scan is done, we can look at the Issue activity pane in the Dashboard tab to view and filter all of the issues identified so far.

And for more details:
