ReconRover (WIP)
Reconnaissance plays a crucial role in understanding the security posture of a target system. In this blog post, we will walk through a script that uses the Nmap and Requests libraries to perform basic reconnaissance against both internal and web targets: port scanning, banner retrieval, HTTP header analysis, and a handful of vulnerability checks. Let's dive into the details.

Step 1: Performing a Port Scan
The script uses the Nmap library to scan the specified target IP address for open ports, then connects to each open port and retrieves its service banner. This reveals which services, and often which versions, are running on the target system.
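A minimal sketch of this step, assuming the python-nmap package as the Nmap binding and the standard socket module for banner grabbing; the function name and target address are illustrative, not taken from ReconRover:

```python
import socket
import nmap  # python-nmap; also requires the nmap binary on the PATH

def scan_and_grab_banners(target: str, ports: str = "1-1024") -> dict:
    """Scan target with Nmap, then try to read a banner from each open TCP port."""
    scanner = nmap.PortScanner()
    scanner.scan(target, ports)
    banners = {}
    if target not in scanner.all_hosts():
        return banners  # host down or fully filtered
    for port, info in scanner[target].get("tcp", {}).items():
        if info["state"] != "open":
            continue
        try:
            # Many services (SSH, FTP, SMTP) volunteer a banner on connect.
            with socket.create_connection((target, port), timeout=3) as sock:
                banners[port] = sock.recv(1024).decode(errors="replace").strip()
        except OSError:
            banners[port] = ""  # connected, but no banner was sent in time
    return banners

if __name__ == "__main__":
    for port, banner in scan_and_grab_banners("192.168.1.10").items():
        print(f"{port}/tcp open  {banner or '(no banner)'}")
```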

Step 2: Retrieving Files from Web Targets
For web targets, the script uses the Requests library to send an HTTP GET request to the target's root URL and parses the response for file and directory links. This step helps map the structure and content of the web application.
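The post does not show the script's parsing logic, so here is a hedged stand-in that pulls href targets out of the root page with a simple regular expression:

```python
import re
import requests

def list_files(base_url: str) -> list:
    """Fetch the root page and return any href targets found in the HTML."""
    response = requests.get(base_url, timeout=5)
    response.raise_for_status()
    # Collect href="..." values; a fuller crawler would resolve relative URLs
    # and filter out navigation links that are not files.
    return re.findall(r'href=["\']([^"\']+)["\']', response.text)

if __name__ == "__main__":
    for path in list_files("http://example.com/"):
        print(path)
```

This works best against directory-listing pages, where every link corresponds to a file.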

Step 3: Analyzing HTTP Headers
The script sends another HTTP GET request to the target's root URL, this time to inspect the response headers. It checks for a set of expected security headers and flags any that are missing or misconfigured. It also performs further header checks, such as testing for clickjacking protection via the X-Frame-Options header. The findings are saved in a report file named "headers_report.txt".
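A sketch of the header analysis, assuming a fixed list of expected security headers (the post does not enumerate the ones ReconRover checks); only the report file name is taken from the post:

```python
import requests

# Commonly expected security headers; the script's actual list may differ.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",        # clickjacking protection
    "X-Content-Type-Options",
]

def check_headers(base_url: str, report_path: str = "headers_report.txt") -> None:
    """Flag expected security headers that the target does not send."""
    headers = requests.get(base_url, timeout=5).headers
    with open(report_path, "a") as report:
        for name in EXPECTED_HEADERS:
            if name in headers:
                report.write(f"[+] {name}: {headers[name]}\n")
            else:
                report.write(f"[-] Missing header: {name}\n")
```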

Step 4: Verbose Server Headers and Redirections
The script checks for verbose server headers, such as a Server header that discloses the software name and version, which can give an attacker a head start on fingerprinting. It saves the findings in the "headers_report.txt" report file. It also checks for insecure redirections and records any discovered issues in the same report.
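The exact heuristics are not published, so the sketch below uses two reasonable stand-ins: a Server header counts as verbose if it contains a version number, and a redirect counts as insecure if any hop in the chain points to plain HTTP:

```python
import requests

def check_server_and_redirects(base_url: str,
                               report_path: str = "headers_report.txt") -> None:
    response = requests.get(base_url, timeout=5, allow_redirects=True)
    with open(report_path, "a") as report:
        # A Server header like "Apache/2.4.49" leaks fingerprinting data.
        server = response.headers.get("Server", "")
        if any(ch.isdigit() for ch in server):
            report.write(f"[-] Verbose Server header: {server}\n")
        # response.history holds each intermediate redirect response.
        for hop in response.history:
            location = hop.headers.get("Location", "")
            if location.startswith("http://"):
                report.write(f"[-] Insecure redirect to {location}\n")
```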

Step 5: Cross-Domain Scripts and “robots.txt” Analysis
The script scans the HTML response for cross-domain script tags and prints any scripts loaded from external domains. This helps identify the security risks that come with including script code from domains outside your control. Furthermore, the script checks for the presence of a "robots.txt" file and saves its contents locally as "robots.txt". This file contains directives for search engine crawlers and often reveals paths the site owner would prefer to keep unindexed.
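A hedged sketch of both checks; the script-tag regex is a stand-in for whatever HTML parsing ReconRover actually does:

```python
import re
from urllib.parse import urlparse
import requests

def check_scripts_and_robots(base_url: str) -> None:
    host = urlparse(base_url).netloc
    html = requests.get(base_url, timeout=5).text
    # Report <script src="..."> tags that load code from another domain.
    for src in re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html):
        src_host = urlparse(src).netloc
        if src_host and src_host != host:
            print(f"[!] External script: {src}")
    # Save robots.txt locally if the server exposes one.
    robots = requests.get(base_url.rstrip("/") + "/robots.txt", timeout=5)
    if robots.status_code == 200:
        with open("robots.txt", "w") as fh:
            fh.write(robots.text)
```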

Step 6: Checking TRACE Method and Saving Findings
The script checks whether the TRACE HTTP method is enabled on the target. A server that answers TRACE requests can be abused in Cross-Site Tracing (XST) attacks. The findings are saved in the "headers_report.txt" report file.
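Requests has no dedicated TRACE helper, but the generic requests.request() call accepts arbitrary HTTP verbs, so the probe can be as simple as this sketch (the 200-means-enabled heuristic is an assumption):

```python
import requests

def check_trace(base_url: str, report_path: str = "headers_report.txt") -> None:
    """Record whether the target answers TRACE requests."""
    response = requests.request("TRACE", base_url, timeout=5)
    with open(report_path, "a") as report:
        if response.status_code == 200:
            report.write("[-] TRACE method enabled (possible XST)\n")
        else:
            report.write(f"[+] TRACE disabled (HTTP {response.status_code})\n")
```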

Step 7: Menu-Driven Interface
The script wraps everything in a menu-driven interface where the user can choose between an internal scan (the internal_recon function), a web scan (the web_recon function), or exiting the program. This makes it easy to navigate and run the desired reconnaissance tasks.
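The menu loop might look like the sketch below; internal_recon and web_recon are stubbed so the example runs on its own, standing in for the functions named above:

```python
def internal_recon(target: str) -> None:
    """Placeholder for the internal scan described in Step 1."""
    print(f"[i] Would run port scan and banner grab against {target}")

def web_recon(url: str) -> None:
    """Placeholder for the web checks described in Steps 2-6."""
    print(f"[i] Would run web recon against {url}")

def main() -> None:
    while True:
        print("\n1) Internal scan\n2) Web scan\n3) Exit")
        choice = input("Select an option: ").strip()
        if choice == "1":
            internal_recon(input("Target IP: ").strip())
        elif choice == "2":
            web_recon(input("Target URL: ").strip())
        elif choice == "3":
            break
        else:
            print("Invalid choice, try again.")

if __name__ == "__main__":
    main()
```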

Conclusion:
The script presented in this blog post shows how the Nmap and Requests libraries can be combined to perform basic reconnaissance against both internal and web targets. By pairing port scanning and banner retrieval with HTTP header analysis and lightweight vulnerability checks, it surfaces potential security risks early and gives organizations a concrete starting point for hardening their overall security posture.