Suppose that you're a hacker facing a partially known network and want to make the best use of your exploits. Choosing the best sequence of exploits that will give you access to your objective depends:
(i) on the information gathered about the network (by executing network discovery, port scanning and OS detection modules),
(ii) on the information that you have about your own exploits!
What are the requirements of your exploits? What is their probability of success when their requirements are met? And when their requirements are not met? Should you spend more time making an exploit more reliable, or writing an exploit for a new vulnerability?
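One simple way to make these questions concrete is to attach, to each exploit, its requirements and two success probabilities (requirements met vs. not met). The sketch below is purely illustrative; the class, field names, and the fact strings are assumptions, not an actual tool's API:

```python
from dataclasses import dataclass

@dataclass
class Exploit:
    """Hypothetical model of an exploit's behavior (all names illustrative)."""
    name: str
    requirements: set    # e.g. {"os:windows_xp", "port:445/open"}
    p_success_met: float    # probability of success when requirements hold
    p_success_unmet: float  # residual probability otherwise

    def success_probability(self, host_facts: set) -> float:
        # Requirements are met when every one of them holds on the target host.
        met = self.requirements <= host_facts
        return self.p_success_met if met else self.p_success_unmet

smb = Exploit("smb_overflow", {"os:windows_xp", "port:445/open"}, 0.85, 0.05)
host_facts = {"os:windows_xp", "port:445/open", "port:80/open"}
print(smb.success_probability(host_facts))  # 0.85
```

Under this model, "making an exploit more reliable" means raising `p_success_met`, while "writing a new exploit" adds another entry to the catalog; both can then be compared by their effect on the attack plan.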
Giving a precise answer to those questions requires a significant testing (and engineering) effort: namely, testing each exploit in a wide variety of configurations -- different operating systems and running applications -- and, for each configuration, recording relevant metrics on the exploit's behavior:
- Did it result in installing an agent (taking control of the target machine)?
- How long did it take?
- How much network traffic was generated?
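The metrics above can be aggregated per (exploit, configuration) pair into the estimates a planner needs. A minimal sketch, with made-up record fields and log data:

```python
import statistics

# Hypothetical log of exploit test runs: one record per run.
runs = [
    {"exploit": "smb_overflow", "config": "winxp_sp2", "agent": True,  "secs": 4.2, "kb_sent": 120},
    {"exploit": "smb_overflow", "config": "winxp_sp2", "agent": True,  "secs": 3.8, "kb_sent": 115},
    {"exploit": "smb_overflow", "config": "winxp_sp3", "agent": False, "secs": 9.0, "kb_sent": 130},
]

def summarize(runs, exploit, config):
    """Aggregate recorded metrics for one exploit on one configuration."""
    sample = [r for r in runs if r["exploit"] == exploit and r["config"] == config]
    return {
        "p_success": sum(r["agent"] for r in sample) / len(sample),  # agent installed?
        "mean_secs": statistics.mean(r["secs"] for r in sample),     # how long?
        "mean_kb": statistics.mean(r["kb_sent"] for r in sample),    # traffic generated?
    }

print(summarize(runs, "smb_overflow", "winxp_sp2"))
# {'p_success': 1.0, 'mean_secs': 4.0, 'mean_kb': 117.5}
```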
Now suppose that you want to build an "automated hacker", i.e. to model the attacking process and use planning techniques to automatically generate courses of action. In the second part of the talk, we will show how this problem can be tackled; in particular, we will present how the information concerning the exploits' behavior can be used (in practice) as input for an automated pen testing engine.
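As a toy illustration of how behavior data feeds a planner, consider one target and a set of candidate exploits, each with an estimated success probability and expected running time. In the simple sequential model where you try one exploit and move to the next only on failure, sorting by the ratio p/t in decreasing order minimizes the expected time until the first success (an exchange-argument result); the exploit names and numbers below are invented:

```python
def order_exploits(candidates):
    """Order (name, p_success, expected_secs) candidates against one target
    to minimize expected time to first success in a sequential-trial model."""
    return sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)

candidates = [
    ("ssh_bruteforce", 0.30, 60.0),  # slow, unreliable
    ("smb_overflow",   0.85, 4.0),   # fast, reliable
    ("http_rce",       0.50, 10.0),
]
plan = order_exploits(candidates)
print([name for name, _, _ in plan])
# ['smb_overflow', 'http_rce', 'ssh_bruteforce']
```

A full pen-testing engine must of course also handle multi-host attack paths and partial knowledge of the network, but the same estimated metrics remain the planner's basic input.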