  • Force extensions:

Passing php and html as extensions with -f/--force-extensions flag will generate the following dictionary:

admin
admin.php
admin.html
admin/
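
For reference, a command along these lines would enable force extensions (the target URL and extensions here are only placeholders):

python3 dirsearch.py -u https://target -e php,html -f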

  • Overwrite extensions:

Wordlist:

login.html

Passing jsp and jspa as extensions with the -O/--overwrite-extensions flag will generate the following dictionary:

login.jsp
login.jspa
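
A matching invocation might look like this (the target URL is again a placeholder):

python3 dirsearch.py -u https://target -e jsp,jspa -O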

Options

Usage: dirsearch.py [-u|--url] target [-e|--extensions] extensions [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit

  Mandatory:
    -u URL, --url=URL   Target URL(s), can use multiple flags
    -l PATH, --url-file=PATH
                        URL list file
    --stdin             Read URL(s) from STDIN
    --cidr=CIDR         Target CIDR
    --raw=PATH          Load raw HTTP request from file (use '--scheme' flag
                        to set the scheme)
    -s SESSION_FILE, --session=SESSION_FILE
                        Session file
    --config=PATH       Path to configuration file (Default:
                        'DIRSEARCH_CONFIG' environment variable, otherwise
                        'config.ini')

  Dictionary Settings:
    -w WORDLISTS, --wordlists=WORDLISTS
                        Customize wordlists (separated by commas)
    -e EXTENSIONS, --extensions=EXTENSIONS
                        Extension list separated by commas (e.g. php,asp)
    -f, --force-extensions
                        Add extensions to the end of every wordlist entry. By
                        default dirsearch only replaces the %EXT% keyword with
                        extensions
    -O, --overwrite-extensions
                        Overwrite other extensions in the wordlist with your
                        extensions (selected via `-e`)
    --exclude-extensions=EXTENSIONS
                        Exclude extension list separated by commas (e.g.
                        asp,jsp)
    --remove-extensions
                        Remove extensions in all paths (e.g. admin.php ->
                        admin)
    --prefixes=PREFIXES
                        Add custom prefixes to all wordlist entries (separated
                        by commas)
    --suffixes=SUFFIXES
                        Add custom suffixes to all wordlist entries, ignore
                        directories (separated by commas)
    -U, --uppercase     Uppercase wordlist
    -L, --lowercase     Lowercase wordlist
    -C, --capital       Capital wordlist

  General Settings:
    -t THREADS, --threads=THREADS
                        Number of threads
    -r, --recursive     Brute-force recursively
    --deep-recursive    Perform recursive scan on every directory depth (e.g.
                        api/users -> api/)
    --force-recursive   Do recursive brute-force for every found path, not
                        only directories
    -R DEPTH, --max-recursion-depth=DEPTH
                        Maximum recursion depth
    --recursion-status=CODES
                        Valid status codes to perform recursive scan, support
                        ranges (separated by commas)
    --subdirs=SUBDIRS   Scan sub-directories of the given URL[s] (separated by
                        commas)
    --exclude-subdirs=SUBDIRS
                        Exclude the following subdirectories during recursive
                        scan (separated by commas)
    -i CODES, --include-status=CODES
                        Include status codes, separated by commas, support
                        ranges (e.g. 200,300-399)
    -x CODES, --exclude-status=CODES
                        Exclude status codes, separated by commas, support
                        ranges (e.g. 301,500-599)
    --exclude-sizes=SIZES
                        Exclude responses by sizes, separated by commas (e.g.
                        0B,4KB)
    --exclude-text=TEXTS
                        Exclude responses by text, can use multiple flags
    --exclude-regex=REGEX
                        Exclude responses by regular expression
    --exclude-redirect=STRING
                        Exclude responses if this regex (or text) matches
                        redirect URL (e.g. '/index.html')
    --exclude-response=PATH
                        Exclude responses similar to response of this page,
                        path as input (e.g. 404.html)
    --skip-on-status=CODES
                        Skip target whenever hit one of these status codes,
                        separated by commas, support ranges
    --min-response-size=LENGTH
                        Minimum response length
    --max-response-size=LENGTH
                        Maximum response length
    --max-time=SECONDS  Maximum runtime for the scan
    --exit-on-error     Exit whenever an error occurs

  Request Settings:
    -m METHOD, --http-method=METHOD
                        HTTP method (default: GET)
    -d DATA, --data=DATA
                        HTTP request data
    --data-file=PATH    File contains HTTP request data
    -H HEADERS, --header=HEADERS
                        HTTP request header, can use multiple flags
    --header-file=PATH  File contains HTTP request headers
    -F, --follow-redirects
                        Follow HTTP redirects
    --random-agent      Choose a random User-Agent for each request
    --auth=CREDENTIAL   Authentication credential (e.g. user:password or
                        bearer token)
    --auth-type=TYPE    Authentication type (basic, digest, bearer, ntlm, jwt,
                        oauth2)
    --cert-file=PATH    File contains client-side certificate
    --key-file=PATH     File contains client-side certificate private key
                        (unencrypted)
    --user-agent=USER_AGENT
    --cookie=COOKIE

  Connection Settings:
    --timeout=TIMEOUT   Connection timeout
    --delay=DELAY       Delay between requests
    --proxy=PROXY       Proxy URL (HTTP/SOCKS), can use multiple flags
    --proxy-file=PATH   File contains proxy servers
    --proxy-auth=CREDENTIAL
                        Proxy authentication credential
    --replay-proxy=PROXY
                        Proxy to replay with found paths
    --tor               Use Tor network as proxy
    --scheme=SCHEME     Scheme for raw request or if there is no scheme in the
                        URL (Default: auto-detect)
    --max-rate=RATE     Max requests per second
    --retries=RETRIES   Number of retries for failed requests
    --ip=IP             Server IP address

  Advanced Settings:
    --crawl             Crawl for new paths in responses

  View Settings:
    --full-url          Full URLs in the output (enabled automatically in
                        quiet mode)
    --redirects-history
                        Show redirects history
    --no-color          No colored output
    -q, --quiet-mode    Quiet mode

  Output Settings:
    -o PATH, --output=PATH
                        Output file
    --format=FORMAT     Report format (Available: simple, plain, json, xml,
                        md, csv, html, sqlite)
    --log=PATH          Log file

Configuration

By default, config.ini inside your dirsearch directory is used as the configuration file, but you can select another file via the --config flag or the DIRSEARCH_CONFIG environment variable.
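
For example, either of the following would point dirsearch at a custom configuration file (the path is only a placeholder):

python3 dirsearch.py -u https://target --config /path/to/custom.ini

DIRSEARCH_CONFIG=/path/to/custom.ini python3 dirsearch.py -u https://target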

# If you want to edit dirsearch default configurations, you can
# edit values in this file. Everything after `#` is a comment
# and won't be applied

[general]
threads = 25
recursive = False
deep-recursive = False
force-recursive = False
recursion-status = 200-399,401,403
max-recursion-depth = 0
exclude-subdirs = %%ff/,.;/,..;/,;/,./,../,%%2e/,%%2e%%2e/
random-user-agents = False
max-time = 0
exit-on-error = False
# subdirs = /,api/
# include-status = 200-299,401
# exclude-status = 400,500-999
# exclude-sizes = 0b,123gb
# exclude-text = "Not found"
# exclude-regex = "^403$"
# exclude-redirect = "*/error.html"
# exclude-response = 404.html
# skip-on-status = 429,999

[dictionary]
default-extensions = php,aspx,jsp,html,js
force-extensions = False
overwrite-extensions = False
lowercase = False
uppercase = False
capitalization = False
# exclude-extensions = old,log
# prefixes = .,admin
# suffixes = ~,.bak
# wordlists = /path/to/wordlist1.txt,/path/to/wordlist2.txt

[request]
http-method = get
follow-redirects = False
# headers-file = /path/to/headers.txt
# user-agent = MyUserAgent
# cookie = SESSIONID=123

[connection]
timeout = 7.5
delay = 0
max-rate = 0
max-retries = 1
## By disabling `scheme` variable, dirsearch will automatically identify the URI scheme
# scheme = http
# proxy = localhost:8080
# proxy-file = /path/to/proxies.txt
# replay-proxy = localhost:8000

[advanced]
crawl = False

[view]
full-url = False
quiet-mode = False
color = True
show-redirects-history = False

[output]
## Support: plain, simple, json, xml, md, csv, html, sqlite
report-format = plain
autosave-report = True
autosave-report-folder = reports/
# log-file = /path/to/dirsearch.log
# log-file-size = 50000000

How to use


Here are some examples of how to use dirsearch with the most common arguments. If you need the full list, just use the -h argument.

Simple usage

python3 dirsearch.py -u https://target

python3 dirsearch.py -e php,html,js -u https://target

python3 dirsearch.py -e php,html,js -u https://target -w /path/to/wordlist
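
The other input sources listed under the Mandatory options work the same way; for example, assuming a targets.txt file with one URL per line:

python3 dirsearch.py -l targets.txt -e php,html,js

cat targets.txt | python3 dirsearch.py --stdin -e php,html,js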


Pausing progress

dirsearch allows you to pause the scanning progress with CTRL+C, from here, you can save the progress (and continue later), skip the current target, or skip the current sub-directory.



Recursion

  • Recursive brute-force means continuing to brute-force inside the directories that have been found. For example, if dirsearch finds admin/, it will then brute-force admin/* (where * is the part being brute-forced). To enable this feature, use the -r (or --recursive) flag

python3 dirsearch.py -e php,html,js -u https://target -r

  • You can set the max recursion depth with --max-recursion-depth, and status codes to recurse with --recursion-status

python3 dirsearch.py -e php,html,js -u https://target -r --max-recursion-depth 3 --recursion-status 200-399

  • There are 2 more options: --force-recursive and --deep-recursive

    • Force recursive: Brute-force recursively all found paths, not just paths ending with /
    • Deep recursive: Recursively brute-force every depth of a path (e.g. a/b/c => add a/, a/b/); see the combined example after this list
  • If there are sub-directories that you do not want to brute-force recursively, use --exclude-subdirs

python3 dirsearch.py -e php,html,js -u https://target -r --exclude-subdirs image/,media/,css/
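
As a sketch, --force-recursive and --deep-recursive can also be combined with the other recursion flags; the target, extensions, and depth below are only placeholders:

python3 dirsearch.py -e php,html,js -u https://target -r --force-recursive --deep-recursive -R 3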


Threads

The thread number (-t | --threads) sets how many brute-force requests run concurrently, so the higher the thread count, the faster dirsearch runs. By default, the number of threads is 25, but you can increase it if you want to speed up the scan.

That said, the speed still depends a lot on the response time of the server. As a warning, keep the thread count reasonable, because setting it too high can cause a DoS (Denial of Service) against the target.

python3 dirsearch.py -e php,html,js -u https://target -t 30
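
One way to keep the load on the server manageable while still raising the thread count is to cap the request rate at the same time; the numbers here are only illustrative:

python3 dirsearch.py -e php,html,js -u https://target -t 50 --max-rate 50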


Prefixes / Suffixes

  • --prefixes: Add custom prefixes to all entries

python3 dirsearch.py -e php -u https://target --prefixes .,admin,_

Wordlist:

tools

Generated with prefixes:

tools
.tools
admintools
_tools

  • --suffixes: Add custom suffixes to all entries

python3 dirsearch.py -e php -u https://target --suffixes ~

Wordlist:

index.php
internal

Generated with suffixes:

index.php
internal
index.php~
internal~
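
Both options can be used in a single run; the values below simply mirror the commented defaults in config.ini:

python3 dirsearch.py -e php -u https://target --prefixes .,admin --suffixes ~,.bak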


Blacklist

Inside the db/ folder, there are several "blacklist files". Paths in those files will be filtered from the scan results if they return the status code mentioned in the filename.

Example: If you add admin.php into db/403_blacklist.txt, then whenever a scan finds admin.php returning a 403 status, it will be filtered out of the results.
                        bearer token)
    --auth-type=TYPE    Authentication type (basic, digest, bearer, ntlm, jwt,
                        oauth2)
    --cert-file=PATH    File contains client-side certificate
    --key-file=PATH     File contains client-side certificate private key
                        (unencrypted)
    --user-agent=USER_AGENT
    --cookie=COOKIE

  Connection Settings:
    --timeout=TIMEOUT   Connection timeout
    --delay=DELAY       Delay between requests
    --proxy=PROXY       Proxy URL (HTTP/SOCKS), can use multiple flags
    --proxy-file=PATH   File contains proxy servers
    --proxy-auth=CREDENTIAL
                        Proxy authentication credential
    --replay-proxy=PROXY
                        Proxy to replay with found paths
    --tor               Use Tor network as proxy
    --scheme=SCHEME     Scheme for raw request or if there is no scheme in the
                        URL (Default: auto-detect)
    --max-rate=RATE     Max requests per second
    --retries=RETRIES   Number of retries for failed requests
    --ip=IP             Server IP address

  Advanced Settings:
    --crawl             Crawl for new paths in responses

  View Settings:
    --full-url          Full URLs in the output (enabled automatically in
                        quiet mode)
    --redirects-history
                        Show redirects history
    --no-color          No colored output
    -q, --quiet-mode    Quiet mode

  Output Settings:
    -o PATH, --output=PATH
                        Output file
    --format=FORMAT     Report format (Available: simple, plain, json, xml,
                        md, csv, html, sqlite)
    --log=PATH          Log file
7, whenever you do a scan that
Usage: dirsearch.py [-u|--url] target [-e|--extensions] extensions [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit

  Mandatory:
    -u URL, --url=URL   Target URL(s), can use multiple flags
    -l PATH, --url-file=PATH
                        URL list file
    --stdin             Read URL(s) from STDIN
    --cidr=CIDR         Target CIDR
    --raw=PATH          Load raw HTTP request from file (use '--scheme' flag
                        to set the scheme)
    -s SESSION_FILE, --session=SESSION_FILE
                        Session file
    --config=PATH       Path to configuration file (Default:
                        'DIRSEARCH_CONFIG' environment variable, otherwise
                        'config.ini')

  Dictionary Settings:
    -w WORDLISTS, --wordlists=WORDLISTS
                        Customize wordlists (separated by commas)
    -e EXTENSIONS, --extensions=EXTENSIONS
                        Extension list separated by commas (e.g. php,asp)
    -f, --force-extensions
                        Add extensions to the end of every wordlist entry. By
                        default dirsearch only replaces the %EXT% keyword with
                        extensions
    -O, --overwrite-extensions
                        Overwrite other extensions in the wordlist with your
                        extensions (selected via `-e`)
    --exclude-extensions=EXTENSIONS
                        Exclude extension list separated by commas (e.g.
                        asp,jsp)
    --remove-extensions
                        Remove extensions in all paths (e.g. admin.php ->
                        admin)
    --prefixes=PREFIXES
                        Add custom prefixes to all wordlist entries (separated
                        by commas)
    --suffixes=SUFFIXES
                        Add custom suffixes to all wordlist entries, ignore
                        directories (separated by commas)
    -U, --uppercase     Uppercase wordlist
    -L, --lowercase     Lowercase wordlist
    -C, --capital       Capital wordlist

  General Settings:
    -t THREADS, --threads=THREADS
                        Number of threads
    -r, --recursive     Brute-force recursively
    --deep-recursive    Perform recursive scan on every directory depth (e.g.
                        api/users -> api/)
    --force-recursive   Do recursive brute-force for every found path, not
                        only directories
    -R DEPTH, --max-recursion-depth=DEPTH
                        Maximum recursion depth
    --recursion-status=CODES
                        Valid status codes to perform recursive scan, support
                        ranges (separated by commas)
    --subdirs=SUBDIRS   Scan sub-directories of the given URL[s] (separated by
                        commas)
    --exclude-subdirs=SUBDIRS
                        Exclude the following subdirectories during recursive
                        scan (separated by commas)
    -i CODES, --include-status=CODES
                        Include status codes, separated by commas, support
                        ranges (e.g. 200,300-399)
    -x CODES, --exclude-status=CODES
                        Exclude status codes, separated by commas, support
                        ranges (e.g. 301,500-599)
    --exclude-sizes=SIZES
                        Exclude responses by sizes, separated by commas (e.g.
                        0B,4KB)
    --exclude-text=TEXTS
                        Exclude responses by text, can use multiple flags
    --exclude-regex=REGEX
                        Exclude responses by regular expression
    --exclude-redirect=STRING
                        Exclude responses if this regex (or text) matches
                        redirect URL (e.g. '/index.html')
    --exclude-response=PATH
                        Exclude responses similar to response of this page,
                        path as input (e.g. 404.html)
    --skip-on-status=CODES
                        Skip target whenever hit one of these status codes,
                        separated by commas, support ranges
    --min-response-size=LENGTH
                        Minimum response length
    --max-response-size=LENGTH
                        Maximum response length
    --max-time=SECONDS  Maximum runtime for the scan
    --exit-on-error     Exit whenever an error occurs

  Request Settings:
    -m METHOD, --http-method=METHOD
                        HTTP method (default: GET)
    -d DATA, --data=DATA
                        HTTP request data
    --data-file=PATH    File contains HTTP request data
    -H HEADERS, --header=HEADERS
                        HTTP request header, can use multiple flags
    --header-file=PATH  File contains HTTP request headers
    -F, --follow-redirects
                        Follow HTTP redirects
    --random-agent      Choose a random User-Agent for each request
    --auth=CREDENTIAL   Authentication credential (e.g. user:password or
                        bearer token)
    --auth-type=TYPE    Authentication type (basic, digest, bearer, ntlm, jwt,
                        oauth2)
    --cert-file=PATH    File contains client-side certificate
    --key-file=PATH     File contains client-side certificate private key
                        (unencrypted)
    --user-agent=USER_AGENT
    --cookie=COOKIE

  Connection Settings:
    --timeout=TIMEOUT   Connection timeout
    --delay=DELAY       Delay between requests
    --proxy=PROXY       Proxy URL (HTTP/SOCKS), can use multiple flags
    --proxy-file=PATH   File contains proxy servers
    --proxy-auth=CREDENTIAL
                        Proxy authentication credential
    --replay-proxy=PROXY
                        Proxy to replay with found paths
    --tor               Use Tor network as proxy
    --scheme=SCHEME     Scheme for raw request or if there is no scheme in the
                        URL (Default: auto-detect)
    --max-rate=RATE     Max requests per second
    --retries=RETRIES   Number of retries for failed requests
    --ip=IP             Server IP address

  Advanced Settings:
    --crawl             Crawl for new paths in responses

  View Settings:
    --full-url          Full URLs in the output (enabled automatically in
                        quiet mode)
    --redirects-history
                        Show redirects history
    --no-color          No colored output
    -q, --quiet-mode    Quiet mode

  Output Settings:
    -o PATH, --output=PATH
                        Output file
    --format=FORMAT     Report format (Available: simple, plain, json, xml,
                        md, csv, html, sqlite)
    --log=PATH          Log file
6 returns 403, it will be filtered from the result.


Filters

Use -i | --include-status and -x | --exclude-status to select which response status codes are allowed and which are not

For more advanced filtering, use --exclude-sizes, --exclude-text, --exclude-regex, --exclude-redirect and --exclude-response
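
For example (the target URL and the filter values below are illustrative):

python3 dirsearch.py -u https://target -i 200,204,301,403 -x 500-599
python3 dirsearch.py -u https://target --exclude-sizes 0B,4KB
python3 dirsearch.py -u https://target --exclude-text "Not found"
python3 dirsearch.py -u https://target --exclude-regex "^403$"
python3 dirsearch.py -u https://target --exclude-redirect '/index.html'
python3 dirsearch.py -u https://target --exclude-response 404.html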



Raw request

dirsearch allows you to import a raw HTTP request from a file. The content should look something like this:
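
(The host, path and headers below are illustrative.)

GET /admin HTTP/1.1
Host: target.example.com
User-Agent: Mozilla/5.0
Accept: */*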


Since there is no way for dirsearch to know the URI scheme of a raw request, you need to set it using the --scheme flag (for normal URLs, dirsearch auto-detects the scheme by default).


Wordlist formats

Supported wordlist formats: uppercase, lowercase, capitalization

Lowercase:

admin

Uppercase:

ADMIN

Capital:

Admin
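
These correspond to the -L/--lowercase, -U/--uppercase and -C/--capital flags, e.g. (the target URL is illustrative):

python3 dirsearch.py -u https://target -e php -U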


Exclude extensions

Using -X | --exclude-extensions with an extension list removes all paths in the wordlist that contain the given extensions.

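For instance, excluding the php extension (the target URL is illustrative):

python3 dirsearch.py -u https://target --exclude-extensions php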

Wordlist:

admin
admin.php
admin.html
admin/

After:

admin
admin.html
admin/


Scan sub-directories

  • From a URL, you can scan a list of sub-directories with --subdirs (see the example below).

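For example (the sub-directory names and target URL are illustrative):

python3 dirsearch.py -u https://target --subdirs admin/,api/,backup/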


Proxies

dirsearch supports SOCKS and HTTP proxies, with two options: a single proxy server or a list of proxy servers.

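For example (the proxy addresses and file name are illustrative):

python3 dirsearch.py -u https://target --proxy 127.0.0.1:8080
python3 dirsearch.py -u https://target --proxy socks5://10.10.0.1:1080
python3 dirsearch.py -u https://target --proxy-file proxies.txt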


Reports

Supported report formats: simple, plain, json, xml, md, csv, html, sqlite

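For example (the output file names are illustrative):

python3 dirsearch.py -u https://target -o report.txt --format plain
python3 dirsearch.py -u https://target -o report.json --format json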


More example commands

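A few illustrative combinations (targets, wordlists and values are placeholders):

python3 dirsearch.py -u https://target -e php,html,js -r -R 2
python3 dirsearch.py -l urls.txt -e php -t 50 --random-agent
python3 dirsearch.py --cidr 192.168.1.0/24 -e php --timeout 3 --retries 1 -q
cat urls.txt | python3 dirsearch.py --stdin -e php --crawl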

There is more to discover, so try it out yourself!

Support Docker

Install Docker (Linux)

To install Docker:

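One common way is Docker's convenience script (assumes internet access and that you trust the script; your distribution's package manager works too):

curl -fsSL https://get.docker.com | bash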

To use Docker, you need superuser privileges.

Build the dirsearch image

To create the image:

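Assuming you run this from the root of the cloned dirsearch repository:

docker build -t "dirsearch:v0.4.3" .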

dirsearch is the name of the image and v0.4.3 is the version

Using dirsearch

To run it:

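For example (the target URL and extensions are illustrative):

docker run -it --rm "dirsearch:v0.4.3" -u https://target -e php,html,js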

References

  • Comprehensive Guide on Dirsearch by Shubham Sharma
  • Comprehensive Guide on Dirsearch Part 2 by Shubham Sharma
  • How to Find Hidden Web Directories with Dirsearch by GeeksforGeeks
  • GUÍA COMPLETA SOBRE EL USO DE DIRSEARCH by ESGEEKS
  • How to use Dirsearch to detect web directories by EHacking
  • dirsearch how to by VK9 Security
  • Find Hidden Web Directories with Dirsearch by Wonder How To
  • Brute force directories and files in webservers using dirsearch by Raj Upadhyay
  • Live Bug Bounty Recon Session on Yahoo (Amass, crts.sh, dirsearch) w/ @TheDawgyg by Nahamsec
  • Dirsearch to find Hidden Web Directories by Irfan Shakeel
  • Getting access to 25000 employees details by Sahil Ahamad
  • Best Tools For Directory Bruteforcing by Shubham Goyal
  • Discover hidden files & directories on a webserver - dirsearch full tutorial by CYBER BYTES

Tips

  • The server has a request limit? That's bad, but feel free to bypass it by randomizing the proxy with `--proxy-file`
  • Want to find config files or backups? Try `--suffixes ~` and `--prefixes .`
  • Want to find only folders/directories? Why not combine `--remove-extensions` and `--suffixes /`!
  • The mix of `--cidr`, `-F`, `-q` and a low `--timeout` will reduce most of the noise and false negatives when brute-forcing a CIDR
  • Scanning a list of URLs, but don't want to see a 429 flood? `--skip-on-status 429` will help you skip a target whenever it returns 429
  • The server contains large files that slow down the scan? You might want to use the HEAD HTTP method instead of GET (e.g. `-m HEAD`)
  • Brute-forcing a CIDR is slow? You probably forgot to reduce the request timeout and retries. Suggested: `--timeout 3 --retries 1`
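
Putting a few of these tips together (the CIDR range, timeout and status values are illustrative):

python3 dirsearch.py --cidr 192.168.0.0/24 -F -q --timeout 3 --retries 1 --skip-on-status 429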

Contribution

We have received a lot of help from people around the world to improve this tool. Thanks so much to everyone who has helped us so far! See CONTRIBUTORS.md to find out who they are.