In this two-part blog, we will be discussing multi-step forms. In part 1, we will see how multi-step forms affect scoping a test; while in part two we will go through techniques involved in testing multi-step forms. Before we delve into the details, let’s cover some basics.
What Is a Form?
A form is used on webpages to let users submit data to the application and interact with it. The reasons for forms are manifold and their use is essential to an interactive internet: be it ordering a pizza online, writing a forum post or just checking the weather for a certain zip code, each requires user input that is usually collected through forms. Below is an example of a simple contact form that lets the user submit a contact request. All the necessary data is collected in a form on page 1 (S1) and is then sent, processed and output in a single request (S2).
Figure 1: A single page contact form with 13 input fields
What is a multi-step form?
A multi-step form is spread across several pages and requires the user to follow a specific route comprised of several steps/forms (e.g. by clicking the next button several times) before the actual processing of the data happens. This can often be seen on insurance or banking sites when applying for an insurance policy or a loan, on e-commerce/bidding websites, in quiz applications, etc. These forms usually require a lot of information (personal data, financial data, property details …). Below is an example of a simple multi-step form where input data is collected on page 1 (S1) and page 2 (S2) and sent to the server respectively. Upon requesting page 3, the server processes the data that has previously been collected and outputs the result. This decoupling of input and output makes testing, and especially input ‘fuzzing’ (we’ll learn about that in a few seconds), a tad more complex.
Figure 2: A multi-step form with two input pages (8 + 5 = 13 input fields) and one result page
Part 1 - Scoping pitfalls
When scoping (i.e. estimating the necessary time for a test) applications, multi-step forms can significantly increase the effort required for the overall project. But why is that?
The reason for this lies in the nested nature of these forms and the many possible combinations of input and output they open up. But first things first:
What is “fuzzing”? (a.k.a. “Throwing stuff at a wall and seeing what sticks”)
An integral part of every penetration test (“pentest”) is input and output validation. This is usually done both manually and automatically and ensures that potentially malicious data is escaped properly, before being processed/stored/displayed. Typical examples for this type of vulnerability include Cross-Site Scripting (XSS), SQL Injections, local and remote file inclusion, and remote code execution.
During an average test, a single input field is tested with around 1,000 different payload strings. Each submitted payload has to be checked to see if it deviates from the normal behaviour of the application, e.g. is the input ‘reflected’ (in layman’s terms: shown on the page somewhere) on another page? If so, is it safely escaped or does it break out of the page syntax and enable the execution of script code? Another metric could be the time needed for the response to return, to see if the payload might have triggered something in the background, even if the payload string is not reflected on the page.
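The two checks described above, reflection and response timing, can be sketched as a small classifier. This is an illustrative sketch, not a real tool: the payload strings and the five-second threshold are arbitrary assumptions, and the function only inspects a response body and elapsed time that a fuzzer would supply.

```python
def classify_response(payload, body, elapsed, slow_threshold=5.0):
    """Classify one fuzzing response.

    payload  -- the string that was submitted in the input field
    body     -- the HTML body of the response page
    elapsed  -- seconds the request took to return

    Returns a list of flags: 'reflected' if the payload appears
    verbatim in the page (potential XSS if unescaped), and
    'time-anomaly' if the response was suspiciously slow (the
    payload may have triggered something in the background).
    """
    flags = []
    if payload in body:
        flags.append("reflected")
    if elapsed > slow_threshold:
        flags.append("time-anomaly")
    return flags


# Example: a payload echoed back unescaped is flagged as reflected.
print(classify_response("<script>alert(1)</script>",
                        "<p>Thanks, <script>alert(1)</script>!</p>",
                        0.2))  # ['reflected']
```

In a real test the fuzzer would call this once per payload per field, so the number of calls is exactly the request count used in the scoping calculations below.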
Naturally these checks become more complicated if the input and output processing is spread across several pages.
Example Scoping Calculation (simplified)
Let’s assume we are scoping the effort required to test the contact form shown above (fig. 1). It is a single form with 13 input fields that are entered on the first page and are immediately processed when the ‘Send message!’ button is clicked. For the sake of simplicity, let’s assume there is no session to keep track of, or other complex back-end functions. This equates to a rather easy calculation of 13*1000 = 13,000 requests.
With a multi-step form this becomes slightly more difficult, because:
- The input can be reflected at various places during the process;
- A user could move back and forth between the pages (a page might restore previously set values); and,
- The server needs to keep track of a session state (i.e. a cookie).
Therefore, instead of having just one workflow there are several different ‘runs’ (example, fig. 2). A common mistake that can be seen in the wild is when input validation tests carried out automatically using a tool (e.g. Burp Suite) are performed on individual steps only, without going all the way from the beginning to the end of a multi-step form. This is usually a limitation of the testing approach, rather than the tool itself. With such an approach, one only sees a small piece of the picture and might miss critical information.
Naïve approach
Consider testing the multi-step process described in fig. 2. This form is divided into three pages: ‘Personal Details’, ‘Payment Details’ and ‘Summary’. A naïve tester assessing individual steps only will, in a first step, test the 8 input fields with 1000 different payloads per input field and will always get the ‘Payment Details’ page as the response. This page, however, does not process or reflect any input from the form on page 1; it only contains another form with five input fields. This naïve approach will therefore not identify an XSS vulnerability that might be present, for example in the address field.
In a second step, the tester might want to test the input fields of the second page. However, the tester could face two show-stoppers here. Firstly, the server could invalidate the session if it’s keeping track of the previous entries. This commonly happens when one tries to access multi-step forms out of the intended order, e.g. directly accessing page 2, without going through page 1 first.
Secondly, the forms on page 1 and page 2 might be processed differently. In our example, many input fields of page 1 are reflected on the summary page, whereas input fields of page 2 are heavily masked.
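The first show-stopper, session invalidation on out-of-order access, can be illustrated with a toy model of the server-side state. The class, step names and invalidation policy below are all hypothetical; real applications differ, but the effect on a step-by-step fuzzer is the same.

```python
class MultiStepSession:
    """Toy server-side session for a multi-step form.

    The server only accepts a step if the previous step was
    completed in the same session; anything out of order
    invalidates the session (a common, though not universal,
    implementation choice).
    """

    def __init__(self, steps=("personal", "payment", "summary")):
        self.steps = steps
        self.completed = 0  # number of steps finished so far

    def submit(self, step):
        expected = (self.steps[self.completed]
                    if self.completed < len(self.steps) else None)
        if step != expected:
            self.completed = 0          # session invalidated
            return "invalid-order"
        self.completed += 1
        return "ok"


# A fuzzer that jumps straight to the payment page gets rejected:
s = MultiStepSession()
print(s.submit("payment"))   # invalid-order
# Walking the intended route works:
print(s.submit("personal"))  # ok
print(s.submit("payment"))   # ok
```

This is why each fuzzed payload may cost a full walk through the form, a fact the request counts below take into account.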
The naïve approach, where page 1 and page 2 are tested individually, would roughly take 8*1000 (page 1) + 5*1000 (page 2) = 13,000 requests.
Thorough approach
We’ve seen a naïve approach, but how would a thorough, or ‘proper’ approach work?
One would always start from the beginning of the multi-step form, ‘go to’ the place where the fuzzing is supposed to happen and then ‘walk’ through to the end of the multi-step form. It is important to stress that the end is not necessarily the last page of the workflow, it is rather the step where one wants to check for a potential reflection.
Where could this be? It is not only the obvious summary page that displays the information entered on page 1 and page 2. It can also be that the server tries to do us a favour and automatically fills in previously entered information when going back and forth between the pages. So let’s look at the fuzzing workflows for our example from fig. 2, an easy multi-step form with 8 input fields on the first page and another 5 input fields on the second. At the end of the multi-step form is a single summary page:
1) Go to page 1; perform input validation on the fields of page 1. Proceed to page 2. Proceed to the end.
2) Go to page 1; perform input validation on the fields of page 1. Proceed to page 2. Proceed to the end. From here, go back to page 1.
3) Go to page 1; perform input validation on the fields of page 1. Proceed to page 2. Proceed to the end. From here, go back to page 2.
4) Go to page 1; proceed to page 2. Perform input validation on the fields of page 2. Proceed to the end.
5) Go to page 1; proceed to page 2. Perform input validation on the fields of page 2. Proceed to the end. From here, go back to page 1.
6) Go to page 1; proceed to page 2. Perform input validation on the fields of page 2. Proceed to the end. From here, go back to page 2.
7) Go to page 1; perform input validation on the fields of page 1. Proceed to page 2. From here, go back to page 1.
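One way to arrive at the total below is to count one request per page transition in each walk (submit page 1, submit page 2, view the end, plus one optional ‘back’ navigation). Under that assumption, a short sketch reproduces the figure:

```python
PAYLOADS_PER_FIELD = 1000

# (fields fuzzed in this run, requests per payload for the walk)
runs = [
    (8, 3),  # 1) fuzz page 1, walk through to the end
    (8, 4),  # 2) ... then go back to page 1
    (8, 4),  # 3) ... then go back to page 2
    (5, 3),  # 4) fuzz page 2, walk through to the end
    (5, 4),  # 5) ... then go back to page 1
    (5, 4),  # 6) ... then go back to page 2
    (8, 3),  # 7) fuzz page 1, go to page 2, go back to page 1
]

total = sum(fields * PAYLOADS_PER_FIELD * reqs for fields, reqs in runs)
naive = 13 * PAYLOADS_PER_FIELD

print(total)                    # 167000
print(round(total / naive, 1))  # 12.8
```

The exact per-run request counts depend on how the application handles navigation, so treat these as an illustration of the bookkeeping rather than a universal formula.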
As we can see, properly fuzzing all the logical flows through the multi-step form accumulates quite a few requests. In our example, it takes 167,000 requests, almost 13 times the 13,000 requests of the naïve approach that can be utilised for a single-step contact form. More importantly, this figure grows rapidly with an increasing number of steps.
Conclusions
So, will a multi-step form always increase the effort required? Not necessarily. It largely depends on the individual process, on how and when data is processed, and on which logical flows are permitted. A good understanding of the possible and permitted workflows on both sides is essential to accurate scoping.
For a tester it is important to know about these flows and how to properly utilise their tools to align to these workflows. But that is a tale for another blog post …
Contact and Follow-Up
Michael works in our Assurance team, from our Essen office. See the contact page for ways to get in touch.
Article Link: http://contextis.com/resources/blog/testing-multi-step-forms/