"One machine can do the work of fifty ordinary men. No machine can do the work of an extraordinary man." - Elbert Hubbard
Back when I was just starting to learn to break web applications, I exploited absolutely everything with a text editor and a browser. Certain vulnerabilities, like cross-site scripting, were easy to exploit with just a browser. Others, which required fiddling with low-level things like my User-Agent header, were impractical. (You can change your UA in Firefox without any plugins, but it's a pain, and you can't do it selectively for particular domains, so you might end up attacking someone unintentionally.)
Some vulnerabilities – like blind SQL injection flaws – would take so long to exploit manually that it just wasn't worth it. However, I despised the idea of letting an automated program do the work for me, so I stuck to tools which required me to know what I was doing. (And admittedly, I was worried that I would become adept at using tools I didn't understand thoroughly.)
After a while, I added an intercepting HTTP proxy to my arsenal of tools. It wasn't automated (so I wasn't breaking my rule) and it opened up many more possibilities. The vulnerabilities I couldn't previously exploit were now (mostly) exploitable. This intercepting proxy, Burp Proxy (now Burp Suite), was so useful and expanded my ability to pop boxen to such a great degree that it started to be the tool I went to first when attempting to exploit any flaw that I found. I still consider Burp Suite to be my favorite tool for web app testing.
Eventually, however, this needed to change.
The Work of Fifty Ordinary Men
The first exception I made to my "no automated tools" rule was for John the Ripper. Upon finding password hashes in a database I was accessing via SQL injection, I had no choice but to use a hash cracker. There was absolutely no point in attempting to crack these hashes manually, so I bent my rule and started using hash crackers.
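There's no trick to what a cracker does, which is exactly why doing it by hand is pointless: it's just guess, hash, and compare at machine speed. A minimal dictionary-attack sketch (unsalted MD5 and a tiny wordlist are illustrative assumptions here; John the Ripper handles many hash formats and far cleverer candidate generation):

```python
import hashlib

def crack_md5(target_hash, wordlist):
    """Hash each candidate password and return the one that matches."""
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # password wasn't in the wordlist

# Example: recover a hash pulled out of a compromised database.
stolen = hashlib.md5(b"letmein").hexdigest()
print(crack_md5(stolen, ["123456", "password", "letmein"]))
```

A real cracker runs this loop millions of times per second with mangling rules applied to each candidate, which is why there's no point competing with it manually.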
After that, I learned about blind SQL injection, and upon understanding it, knew that the techniques for exploiting this were not something you'd want to do by hand unless you like writing and submitting SQL injection strings for hours and hours. With some hesitation, I added sqlmap to my toolbox.
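To see why this is so tedious by hand, consider boolean-based blind SQLi: each injected query answers one yes/no question about the data, so recovering even a single character takes several round trips. A sketch of the extraction logic (the `oracle` callable is a stand-in for the HTTP request a real attack, or sqlmap, would send and interpret):

```python
def extract_char(oracle, position):
    """Binary-search one character's code point via a boolean oracle.

    oracle(pos, value) answers the injected question
    "is ASCII(SUBSTRING(secret, pos, 1)) > value?" -- in a real attack,
    each call is one HTTP request whose true/false answer is inferred
    from the page content or the response time.
    """
    low, high = 0, 127
    while low < high:
        mid = (low + high) // 2
        if oracle(position, mid):   # char code is greater than mid?
            low = mid + 1
        else:
            high = mid
    return chr(low)

def extract_string(oracle, length):
    """Recover `length` characters, one binary search at a time."""
    return "".join(extract_char(oracle, i) for i in range(1, length + 1))

# Simulated target: pretend this value lives in the database.
secret = "s3cret"
fake_oracle = lambda pos, value: ord(secret[pos - 1]) > value
print(extract_string(fake_oracle, len(secret)))
```

At roughly seven requests per character, dumping even a short table by hand means thousands of carefully constructed submissions, which is exactly the drudgery sqlmap automates.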
The next exception I made was for Nikto. (I've already written about Nikto and how CORE IMPACT Pro comes with a module which allows you to run the Nikto tests against your targets!) I used it primarily to find back-end web pages and directories which the admin assumed would never be found, an assumption that falls apart when you name your directories things like "admin" and "backup." Nikto ended up being the final nail in the coffin for my rule.
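At its core, this kind of discovery is just requesting paths from a wordlist and noting which ones don't come back as 404. A minimal sketch (the wordlist and the injectable `fetch` callable are illustrative assumptions; real scanners like Nikto ship thousands of checks plus smarter handling of servers that lie about 404s):

```python
COMMON_DIRS = ["admin", "backup", "old", "test"]  # tiny illustrative wordlist

def probe(paths, fetch):
    """Return the candidate paths that appear to exist on the target.

    fetch(path) should return the HTTP status code for a request to that
    path; anything other than 404 (including 403 Forbidden) suggests the
    resource is really there.
    """
    return [path for path in paths if fetch(path) != 404]
```

In practice, `fetch` would wrap something like `urllib.request.urlopen`; taking it as a parameter keeps the scanning logic separate from the transport.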
Through this gradual weakening and eventual dissolution of my silly personal standard, I came to the conclusion that automation is not inherently a bad thing. After all, if you're using a computer at all you're automating tasks all the time. If you think about it, using a browser automates the tasks of building a network connection, transferring data, and then parsing that data and displaying content based on that data.
Given that, it only makes sense to take automation a step further. Moreover, there are certain things which you can't (or really shouldn't) do manually. This is mostly due to time limitations.
If you already understand how something works and how you can automate it, you should. If you can free up more of your time you can test more thoroughly, or maybe just go have a sandwich while your tests run. Overall, more time is a good thing.
The Work of an Extraordinary Man
There are certainly advantages to testing manually:
First of all, it forces you to know what you're doing. If you don't, you'll learn quickly, for the same reason that being stranded in a country where you don't speak the language is considered by many to be the fastest way to pick one up.
Secondly, some things can never be automated. Computers are not very good at recognizing subjective concepts, such as how an application is supposed to work, or ideas like malice and sensitivity. For instance, how many of you reading this have had your copy of Netcat deleted by your anti-virus system while you were trying to use it?
Long story short, an automated scanner might pick up that something like credit card numbers or SSNs is exposed on a site, but if the sensitive information in the database doesn't match a computer-identifiable pattern like that, an automated system will never, ever flag it as an issue.
Third, if something goes wrong with your automation, you'll likely need to start everything over again. More importantly, the damage caused by an error in your automation might not be something you're willing to accept.
The Balance, and Achieving It with IMPACT Pro
Here's another quote for you:
"Eat nails; Die a winner!" - Advice Dog
Maybe eating nails will make you tough, but it'll also make you dead. Don't make your job unnecessarily hard, unless you're trying to learn something. You don't grow if you're comfortable. But don't automate everything, either! It's important to maintain a balance.
Since I work for Core, I'd like to take some time to mention how to maintain this sort of balance with some of the features in IMPACT Pro (otherwise they make me go back in the box, I don't like the box, it's dark and scary in there).
Let's say that you're looking for SQL injection flaws in a target web site. You could use the SQL injection analyzer available within IMPACT to fuzz out flaws in the pages for you and automatically exploit them, but let's say you'd like to look for SQLi flaws manually. Not a problem. Once you've found one, you can tell IMPACT where it is with the "Setup SQL Agent Manually" module. Just tell IMPACT where the page is, what the parameters are and what values should be given, then specify the vulnerable parameter, and if you need to, the needed encoding and backend database (or just let IMPACT figure it out for you).
Once you've fed IMPACT that information, IMPACT will happily jump through the hoops of SQLi exploitation (error-based blind SQL injection is also implemented) and confirm that we can gain control of the database. Once it does, it abstracts the details of exploitation from you and allows you to take some nifty post-exploitation actions. For instance, if the database user specified in the web application is an admin or has rights to run processes (I love you xp_cmdshell <333 XOXO) you can use the database to Trojan the host and drop an OS Agent on it, further expanding your capabilities (and allowing you to use the database server to tunnel into the internal network and start wreaking havoc!).
If not, you can still open an SQL shell and run any command you please, just as if you were logged into the database as the given user. Additionally, in case you don't know SQL or don't feel like writing out all the SQL queries, you can use the modules we've created to do some common post-exploitation tasks. For instance, there's a module called "Check for Sensitive Information" which will pull out the contents of whatever database you choose and check them to see if they look like credit card or social security numbers. Additionally, if you'd like to pull out the authentication information from the database you can do that with the "Get Database Logins" module.
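Checks like this can lean on the fact that real credit card numbers carry a Luhn checksum, so candidate digit strings can be validated rather than just pattern-matched. A sketch of the standard Luhn validation (this is the well-known general technique, an assumption about how such a check might work, not a description of the IMPACT module's internals):

```python
def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right (subtracting
    9 when the result exceeds 9) and check the total is divisible by 10."""
    if not digits.isdigit() or len(digits) < 13:
        return False  # too short to be a card number, or not all digits
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True  (classic Visa test number)
print(luhn_valid("4111111111111112"))  # False (checksum broken)
```

Filtering candidates through the checksum cuts way down on false positives compared to matching any sixteen-digit string.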
So, info-warriors, keep on fighting the good fight, whether or not you're automating that fight. ;)
-Daniel Crowley, Technical Specialist, Core Security Technologies