Uniscan is a vulnerability scanner for web applications, written in Perl for Linux environments. The current version is 6.2. Its features are listed below.
- Identification of system pages through a Web Crawler.
- Use of threads in the crawler.
- Control of the maximum number of requests made by the crawler.
- Control of variation of system pages identified by the Web Crawler.
- Control of file extensions that are ignored.
- Tests of pages found via the GET method.
- Tests of forms found via the POST method.
- Support for SSL requests (HTTPS).
- Proxy support.
- Generate site list using Google.
- Generate site list using Bing.
- Plug-in support for Crawler.
- Plug-in support for dynamic tests.
- Plug-in support for static tests.
- Plug-in support for stress tests.
- Multi-language support.
- Web client.
- GUI client written in Perl using Tk.
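For reference, a typical scan that enables the directory, file, robots.txt/sitemap.xml, dynamic and static checks looks like the command below. The target URL is a placeholder, and the flag meanings are given as commonly documented for Uniscan 6.x; confirm them with ./uniscan.pl -h before relying on them.

    perl ./uniscan.pl -u http://www.example.com/ -qweds

Here -q enables directory checks, -w file checks, -e the robots.txt and sitemap.xml check, -d dynamic tests and -s static tests.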
Directory checking: Directory checking helps discover hidden directories, that is, directories with no links pointing to them. The contents of directories the crawler cannot find would otherwise go untested; this check feeds those paths to the Uniscan crawler and prevents that problem.
File checking: File checking follows the same principle as directory checking, applied to individual files rather than directories.
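The idea behind both checks can be illustrated with a short Perl sketch. This is not Uniscan's actual code; the target URL and wordlist are made up for illustration. Each candidate path from a wordlist is requested, and anything the server does not reject outright is handed back to the crawler.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    # Hypothetical target and wordlist, purely for illustration.
    my $target = 'http://www.example.com';
    my @words  = qw(admin backup old test config.php);
    my $ua     = LWP::UserAgent->new(timeout => 10);

    for my $w (@words) {
        my $res = $ua->get("$target/$w");
        # Anything the server does not answer with 404 may be a hidden
        # directory or file, so it is fed back into the crawl queue.
        print "feed to crawler: $target/$w (HTTP " . $res->code . ")\n"
            if $res->code != 404;
    }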
Checking the robots.txt file: Checking the robots.txt file serves to feed the crawler with directories and files it might not find on its own. This check ignores the Allow and Disallow semantics of the file; any directory or file listed there is added to the crawler.
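A minimal Perl sketch of that idea (again, not Uniscan's actual code; the target URL is a placeholder): every path that appears after an Allow or Disallow directive is collected and queued for the crawler.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $target = 'http://www.example.com';   # hypothetical target
    my $ua     = LWP::UserAgent->new(timeout => 10);
    my $res    = $ua->get("$target/robots.txt");

    if ($res->is_success) {
        for my $line (split /\n/, $res->decoded_content) {
            # Allow and Disallow entries are treated the same way:
            # both reveal paths worth crawling.
            print "queue for crawler: $target$1\n"
                if $line =~ /^\s*(?:Allow|Disallow):\s*(\S+)/i;
        }
    }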
Checking the sitemap.xml file: Checking the sitemap.xml file follows the same principle as the robots.txt check.
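The same idea applied to sitemap.xml, sketched in Perl (not Uniscan's code; a real implementation would likely use a proper XML parser rather than a regex): every <loc> URL is queued for the crawler.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $target = 'http://www.example.com';   # hypothetical target
    my $ua     = LWP::UserAgent->new(timeout => 10);
    my $res    = $ua->get("$target/sitemap.xml");

    if ($res->is_success) {
        # Pull every <loc> entry out of the sitemap and hand it to the crawler.
        my @urls = $res->decoded_content =~ m{<loc>\s*(.*?)\s*</loc>}gis;
        print "queue for crawler: $_\n" for @urls;
    }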
Crawler: The crawler's job is to navigate the target site, searching for pages that will later be tested by the test engines and collecting sensitive information along the way.
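Conceptually, that loop can be sketched in Perl as follows. This is a simplified illustration, not Uniscan's implementation: the start URL is a placeholder, and there is no thread pool, request limit or extension filter here. It fetches a page, extracts its links, and keeps visiting unseen URLs on the same host, recording each page so it can later be handed to the test engines.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::LinkExtor;
    use URI;

    my $start = URI->new('http://www.example.com/');   # hypothetical start page
    my $ua    = LWP::UserAgent->new(timeout => 10);
    my %seen;
    my @queue = ($start->as_string);

    while (my $url = shift @queue) {
        next if $seen{$url}++;
        my $res = $ua->get($url);
        next unless $res->is_success && $res->content_type eq 'text/html';
        print "page to test: $url\n";

        # HTML::LinkExtor resolves every extracted link against $url for us.
        my $extractor = HTML::LinkExtor->new(undef, $url);
        $extractor->parse($res->decoded_content);
        for my $link ($extractor->links) {
            my ($tag, %attrs) = @$link;
            for my $abs (values %attrs) {
                # Only follow http(s) links that stay on the target host.
                next unless $abs->scheme && $abs->scheme =~ /^https?$/;
                push @queue, $abs->as_string if $abs->host eq $start->host;
            }
        }
    }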