Poll

On Windows 7 (or Vista) I use

  unlimited administrator's account (57.86%)

  limited administrator's account (16.49%)

  common user's account (12.65%)

  nothing (I do not use Win 7/Vista) (13.92%)

Proactive Security Challenge

Frequently asked questions





Proactive Security Challenge has been replaced with Proactive Security Challenge 64!



Contents:

  Testing guidelines
  Product requirements
  Testing request
  Security products versus Proactive Security Challenge, instead of malware
  Termination tests' methodology
  Administrator's or limited account

Testing guidelines

Question: What is the exact procedure to reproduce your results of Proactive Security Challenge?

Answer: We start with a clean machine with Microsoft Windows XP SP3 installed. Microsoft Internet Explorer 8 is used as the default browser. Every Proactive Security Challenge test is performed by at least two independent testers, who must agree on the results unanimously.

To reproduce the results, follow these steps:

  1. If you are using the SSTS Configurator tool, use it to create a snapshot of the system prior to the tested product's installation. Make sure Internet Explorer and Task Manager are running while the Configurator tool runs.
  2. Install the tested product. Install all its components.
  3. Reboot the system.
  4. Update the tested product if possible.
  5. Start Internet Explorer and surf the web, use Windows Explorer to visit an Internet page, use ping.exe to ping an Internet address, schedule a task using at.exe, run a Visual Basic script, run Task Manager and terminate some processes with it, and reboot the machine. Also run these programs using cmd.exe, especially Internet Explorer and Windows Explorer, which are to be executed with parameters. If the tested product asks about any action, choose Allow and, if possible, remember the decision.
  6. Configure the tested product to its highest usable security settings as defined in Methodology and rules.
  7. Make sure the tested product runs in an interactive mode or is set to a similar policy, in which all undecided actions of applications cause the product to ask user about the decision.
  8. If you are using the SSTS Configurator tool, use it to create the configuration file now. Make sure Internet Explorer and Task Manager are running while the Configurator tool runs. Do not forget to configure the LAN interface address in the generated ssts.conf configuration file. Modify the generated configuration file so that the lists of files and registry entries contain only those objects that were created by the tested product or its installer.
  9. Enable password protection for the product's termination if available.
  10. Reboot the machine and repeat step 4, then perform Windows Update. Make sure that every common action is allowed without any questions from the tested product. If it still asks, change its settings to satisfy this condition and repeat this step.
  11. If there are no more alerts, the machine is ready for testing. Run Internet Explorer and Task Manager and keep them running, especially during the testing.
  12. If you are not using the SSTS Configurator tool, manually make lists of all processes, services, device drivers, files and registry entries that were not present in the system prior to the product's installation and update ssts.conf accordingly. Do not forget to configure the LAN interface address too. Make sure Internet Explorer and Task Manager are running while you are making the lists.
  13. Save the product's configuration and rules database and possibly the system state.
  14. Copy the files needed by a test to the testing machine. If there are problems with the product's anti-virus engine, create an exception for the copied files. If that is not possible, disable the anti-virus engine.
  15. Run a test. If the product passed the test, repeat it; if the product passes for the second time, mark the test as passed and continue with the next test. If the product failed the test, mark the test as failed, restore the saved configuration and rules database, remove all changes the test made, reboot the system and continue with the next test. If the test caused a BSOD or damaged the system so that it is no longer bootable, proceed as if the test had failed.
  16. If the technique of a termination test was successful but the test did not report a failure because no processes were terminated, repeat the test several times and try to use the product as much as possible. This includes attempts to perform a product update, file system scanning if available, various actions with the product's logs and reports, etc. Do not blindly trust the result reported by the test; if you are not sure about the result, follow the scoring definition for the given test. The My leaks page or a network sniffer may be handy too.
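Steps 1 and 12 above amount to diffing two system snapshots: everything present after the installation but not before belongs to the tested product. A minimal sketch of that diff in Python (the snapshot layout and the sample object names are illustrative, not the actual SSTS Configurator format):

```python
def snapshot_diff(before, after):
    """Return, per object category, the entries that appeared only
    after the 'before' snapshot was taken (e.g. by the installer)."""
    return {
        category: sorted(set(after.get(category, [])) - set(before.get(category, [])))
        for category in ("processes", "services", "drivers", "files", "registry")
    }

# Illustrative snapshots taken before and after installing a product.
before = {
    "processes": ["explorer.exe", "iexplore.exe", "taskmgr.exe"],
    "files": [r"C:\Windows\system32\ntdll.dll"],
}
after = {
    "processes": ["explorer.exe", "iexplore.exe", "taskmgr.exe", "fwservice.exe"],
    "files": [r"C:\Windows\system32\ntdll.dll", r"C:\Program Files\FW\fw.sys"],
}

new_objects = snapshot_diff(before, after)
print(new_objects["processes"])  # ['fwservice.exe']
```

The resulting per-category lists are exactly what step 12 asks you to place into ssts.conf.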

Some tests require a special approach. For the Driver Verifier test, the procedure is as follows:

  1. Enable all options in Driver Verifier and select all drivers of the tested product – i.e. all drivers that were not installed in the system prior to the tested product's installation.
  2. Reboot the machine.
  3. Wait at least five minutes before logging in.
  4. Start ping.exe with the -t argument to repeatedly ping an arbitrary Internet server.
  5. Run several instances of the default browser and some other applications in order to consume all available memory and thus force the system to swap. Use up to 150% of available memory.
  6. Try to surf the web.
  7. Close the opened applications as fast as possible in order to free the used memory.
  8. Perform several operations with the product:
    • Try to update the product.
    • Try to run an unknown application that attempts to execute a controlled action and create an allow rule for it.
    • Try to run an unknown application that attempts to execute a controlled action and create a deny rule for it.
    • Change a couple of settings in the graphic interface of the tested product. Try to choose settings that may affect the behavior of the product's drivers.
    • Try to surf the web.
    • Shutdown the machine.

If everything goes OK and the machine survives until shutdown, the product is given a 100% score. Otherwise, see the scoring definition of this test.
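Step 5 above calls for memory pressure of up to 150% of available memory to force swapping. A small helper sketching that arithmetic (the 1.5 factor comes from the step above; the chunk-based allocation plan is an illustrative assumption, not part of the official procedure):

```python
def plan_memory_pressure(available_bytes, chunk_bytes, factor=1.5):
    """Return how many fixed-size allocations are needed to consume
    roughly `factor` times the currently available memory."""
    if chunk_bytes <= 0:
        raise ValueError("chunk_bytes must be positive")
    target = int(available_bytes * factor)
    # Round up so the target is reached or slightly exceeded.
    return -(-target // chunk_bytes)

# Example: 2 GiB available, 64 MiB chunks -> 48 chunks cover 3 GiB.
chunks = plan_memory_pressure(2 * 1024**3, 64 * 1024**2)
print(chunks)  # 48
```

In practice the testers achieve this pressure by opening several browser instances and other applications rather than by allocating raw memory.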

For BSODhook and ShadowHook tests, the procedure is as follows:

  1. Find SSDT and GDI hooks.
  2. Pick a function and run the test.
  3. If the tested product asks about any action, choose to allow it and create a rule if possible, except for special actions such as shutting down the system, which would end the test. In such special cases you may block the action; if possible, choose to block it only once.
  4. After each function, check whether all system functions are working properly – try to run a new console application and a new graphical application, and try to access the Internet.
  5. If everything is working properly, continue with the next function. Otherwise, test the function one more time; if the same problems occur, the product did not pass the test for this function.

This approach is the same for most products. Occasionally, however, we encounter products that require an individual approach. If a product is suspected of being designed to protect only against a specific implementation of a test, or of interfering with the testing suite in another way, we implement a modification of the test in order to bypass such protection. Other special cases are usually mentioned in the Further notes section of the PDF report.

Note that the above information may be updated, improved or changed when the testing suite, the testing process or the project rules are updated.

Product requirements

Question: What kind of products are suitable for Proactive Security Challenge testing and which are not?

Answer: We often receive requests to test security products that are not suitable for Proactive Security Challenge. It is important to understand what kind of products we test. The primary requirements are that the product implements an application-based security model and behavior blocking. This means that it allows its users to control selected actions of applications. Among its behavior blocking capabilities, the product must be able to control applications' network access. We also require the product's project to be alive; we are not interested in dead projects without a future, although exceptions may occur. Finally, we require the tested version of the product to be stable, publicly available in English, and to run on a Windows OS that is currently supported by the challenge. Most of the products called an Internet security suite, a personal firewall, a HIPS or a behavior blocker meet all these criteria and hence are suitable for Proactive Security Challenge testing.

On the other hand, there are many products that are not suitable for our project. Security products that are built to protect only a single process are not suitable – for example, various Internet browser security add-ons, sandboxes or virtualization tools. Behavior blockers that focus on a single type of malware are not suitable either – e.g. anti-keyloggers or malware removers. All pattern-based systems that are not based on application behavior are not suitable – this includes all anti-virus and anti-malware solutions that are not delivered with an application-based security module.

Security software that is NOT suitable for Proactive Security Challenge testing only because it is not publicly available or stable can be tested in a private (commercial) Proactive Security Challenge, but without the chance to publish its results. Any other security software that is NOT suitable can be tested on a commercial basis outside Proactive Security Challenge.
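The requirements above can be summarised as a simple eligibility check. The field names below are illustrative, not any real product database schema:

```python
def suitable_for_challenge(product):
    """Apply the Proactive Security Challenge suitability criteria
    described above to an illustrative product record (a dict)."""
    return (
        product.get("application_based_model", False)
        and product.get("behavior_blocking", False)
        and product.get("controls_network_access", False)
        and product.get("project_alive", False)
        and product.get("stable", False)
        and product.get("english_version", False)
        and product.get("runs_on_supported_windows", False)
    )

firewall = {
    "application_based_model": True,
    "behavior_blocking": True,
    "controls_network_access": True,
    "project_alive": True,
    "stable": True,
    "english_version": True,
    "runs_on_supported_windows": True,
}
# A pattern-based scanner without an application-based security model.
antivirus_only = dict(firewall, application_based_model=False)

print(suitable_for_challenge(firewall))        # True
print(suitable_for_challenge(antivirus_only))  # False
```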

Testing request

Question: How can I request a testing of my favorite product?

Answer: Simply visit our contact us form and send us the names of the products you would like us to test. Your votes for these products will be added to our database. When we receive a significant number of votes for a single product, we will include it in our tests.

Security products versus Proactive Security Challenge, instead of malware

Question: How do you avoid the danger that security product vendors will start to focus on fighting Proactive Security Challenge instead of malware? The reason could be the immediate positive business impact of successfully passed tests.

Answer: We have faced this problem since the era of leak-testing. Some vendors really do fight the tests rather than the attacking techniques they implement. Some vendors optimize against the given set of tests rather than solving the causes.

If we suspect that a tested product attacks some test directly, we use internally modified versions of the tests to prove it. If we can prove such behavior, we mention it in the report and the product fails the test.

Another situation is when vendors blindly add functionality to their software to pass some technique. In such a case, their users might be confused by absurd, false, misleading or otherwise bad alerts, popups and questions. Such a product might get through our tests, but it would be unusable for most users. We hope that vendors will not do this, for their own good.

To prevent this unwanted behavior, we are going to add new tests to the system and test selected products against the new tests without prior notice to their vendors. For this purpose we will preferentially select the products of those vendors that concentrate on fighting the tests instead of the real security of their products. This approach should give us more accurate results with respect to real security.

Finally, we have also set fixed rules about the frequency of testing, which should help as well. However, our original rules about paid retesting allowed vendors to make quick silent fixes and order retesting with the sole intention of replacing the old results with new, better ones. This is why we have added new rules that limit paid retesting too.

Termination tests' methodology

Question: The methodology for termination tests seems to indicate that termination of any of the security product's processes results in a failure in the test. I disagree with that methodology as the main features of the product may be unaffected by the termination (e.g. if the process that was terminated was only the tray icon) or the product may have some kind of "fail-safe" (e.g. blocking all connections if the processes are not running). I think a test (e.g. "leaktest.exe") should be run after a termination to see if the protection is still working or not. If the product stopped the test after the termination it should receive a partial score (e.g. 50% of the normal score for the termination test).

Answer: The idea behind our scoring system is the simplicity of the tests. We cannot really say how the termination of one component affects the whole protection system unless we analyse the system deeply, and we do not do that in Proactive Security Challenge. Imagine a product whose GUI component communicates with the user, and which blocks all connections to the Internet if this component is terminated. You say that running "leaktest.exe" to verify the protection will tell us whether the protection is weakened.

In the classic model of a driver, a service and a GUI component, there are communication channels opened between these components. These channels may be implemented so that only one connection is allowed, to prevent malicious software from connecting to a channel and sending requests over it. If the GUI component is terminated, it may become possible to connect to these channels and attack the service or driver component through them. The verification you suggest does not reveal this case, and there are many other situations that would have to be verified before we could say that the protection was not weakened.
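The single-connection channel described above can be illustrated with a local socket: the listener serves one client at a time, so a second (malicious) client only gets through once the first (the GUI) is gone. All names are illustrative, and real products use named pipes or similar IPC rather than TCP:

```python
import socket
import threading

def service(server_sock, handled):
    """A 'service' component that serves one channel client at a time."""
    for _ in range(2):
        conn, _addr = server_sock.accept()
        peer = conn.recv(1024).decode()
        handled.append(peer)  # whoever holds the channel can send requests
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

handled = []
worker = threading.Thread(target=service, args=(server, handled))
worker.start()

# The legitimate GUI holds the channel...
gui = socket.create_connection(("127.0.0.1", port))
gui.sendall(b"gui")
gui.close()  # ...until it is terminated.

# Now an attacker can take the freed channel slot.
attacker = socket.create_connection(("127.0.0.1", port))
attacker.sendall(b"attacker")
attacker.close()

worker.join()
server.close()
print(handled)  # ['gui', 'attacker']
```

This is why a single "leaktest.exe" run after termination cannot prove the protection is intact: the weakened state only shows when someone actually abuses the freed channel.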

Termination of any of the product's components is a security issue. In our scoring system it is penalized, and we are not aware of any easy modification that would make the system more accurate or fairer.

Administrator's or limited account

Question: I'm just curious if these tests are carried out under an administrative or limited account?

Answer: According to our poll, more than 80% of people use a fully privileged account. This is why we perform our tests under an administrator's account – to be as close to the real-world scenario as possible.