Wednesday 8 December 2010

Generating random strings with VBScript

How do you generate random strings of various lengths for use as a password database in bulk user creation, or for any other purpose? It is a relatively simple task with some VBScript skills. Of course it won't be truly random, as VBScript can only generate pseudo-random numbers. That means that if someone knows the random seed and the entropy used to generate the passwords, it will be possible to reproduce the same set of passwords.

The script which I'm presenting here generates a random number and then uses it as an index into the entropy array (i.e. the array of all characters which can be used in the password). The number of characters in the password is also random, between the PASS_LEN_MIN and PASS_LEN_MAX boundaries (inclusive). To make all passwords the same length, use the same number for PASS_LEN_MIN and PASS_LEN_MAX.
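The core of that approach can be sketched in a few lines of VBScript (a minimal sketch; the entropy string and constant values below are illustrative, not the utility's actual settings):

' Sketch of the generation loop; constants and entropy are example values
Const PASS_LEN_MIN = 8
Const PASS_LEN_MAX = 12
Dim sEntropy, nLen, i, sPassword
sEntropy = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
Randomize
' Pick a random length between the two boundaries (inclusive)
nLen = Int((PASS_LEN_MAX - PASS_LEN_MIN + 1) * Rnd) + PASS_LEN_MIN
sPassword = ""
For i = 1 To nLen
    ' Rnd returns a value in [0, 1); scale it to a valid 1-based index
    sPassword = sPassword & Mid(sEntropy, Int(Len(sEntropy) * Rnd) + 1, 1)
Next
WScript.Echo sPassword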

If security is the main concern then this utility probably shouldn't be used. Having said that, here are some tips on how to improve password quality:
  • expand entropy – add special characters
  • mix entropy – the entropy character order can be altered, and the alteration removed after the passwords are generated, to make the process even less predictable
  • use a different seed – in order to make the random number sequence less predictable, a user-specified seed can be used. This can be done by passing a parameter to the Randomize function:
    Randomize 767554354

    Note: Provided that the script is unchanged, using the same seed will always generate the same result file. By default no seed value is passed to the Randomize function, which then seeds from the current system time.
  • random pick – generate more passwords than required and pick a random subset

By default the script will generate 100 passwords and save them to the PasswordDictionary.txt file in the script's working folder. Running the script from the Desktop will therefore create a password file called PasswordDictionary.txt on the Desktop.

The Random String Generator utility can be downloaded from here.

Saturday 16 October 2010

LoadRunner Clean-up utility

LoadRunner Clean-up utility is a simple VBScript tool which removes all unnecessary files from a LoadRunner script folder to reduce its size. The utility is pre-configured to remove all standard files which are generated by LR during script or scenario execution.

All default settings can be modified by editing the first few lines of the script file. The following is the first section of the script, which defines which files and folders should be removed. Please note that, to simplify the listing and make it easier to read, I've removed the full list of files and folders for each of the four filters.

To see the full list, and an explanation of the VUGen files which can safely be removed from a script folder, please read this post.

Dim aFolderFilter: aFolderFilter = Array(".DS_Store", "result1")
Dim aFileNameFilter: aFileNameFilter = Array("pre_cci.c", "output.txt")
Dim aFileExtentionFilter: aFileExtentionFilter = Array("idx", "bak", "ci")
Dim aFileFilterRegEx: aFileFilterRegEx = Array("combined_*.c")

Dim aIgnoreFolders: aIgnoreFolders = Array(".Trash")

'Leave blank to use script location as a start
Dim sStartFolder : sStartFolder = ""

Const CLEANUP_SCRIPT_NAME = "CleanUp.vbs"
'------------------------------------------------------------------------
' You shouldn't need to change anything below this line
'------------------------------------------------------------------------

Filters
The first four lines in the script define the four filters which are used to recognise files and folders to remove.
  • aFolderFilter – list of folder names which should be removed
  • aFileNameFilter – exact names of files to be removed
  • aFileExtentionFilter – exact extensions of files to be removed
  • aFileFilterRegEx – file name patterns to be removed, where an asterisk (*) represents any sequence of characters
To add a new entry, simply insert an additional parameter into the Array function call. In the following example a new folder name, data, is added to the folder filter:

Dim aFolderFilter: aFolderFilter = Array(".DS_Store", "result1", "data")


Start folder
The utility scans all files and subfolders of the folder where it is located and removes all files matching the filter definitions (see the description of the Trash folder below). It scans the folder structure recursively, descending into all subfolders and applying the same filters in each one.

This behaviour can be overridden by setting a start location in the script configuration section. If the sStartFolder value is set to an empty string (represented by two double quotes), the default behaviour is used and the script starts processing from the folder where it is located.

Using the following line instead of the default one will start processing from the "C:\LoadRunnerScripts" folder, regardless of where the script file is located:

Dim sStartFolder : sStartFolder = "C:\LoadRunnerScripts"

The clean-up script always processes folders before attempting to remove any files. This ensures that no individual files are removed by the file-matching filters when the whole containing folder is due to be removed anyway.


Cascade
The script always attempts to locate a custom CleanUp script in each subfolder it is about to process. If it finds a CleanUp.vbs file inside any subfolder, it delegates responsibility for processing that subfolder to the custom script and moves on to the next folder.
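A minimal sketch of that check might look like the following (oFSO and ProcessFolder are illustrative names, not the utility's actual internals):

' Sketch of the cascade check; CLEANUP_SCRIPT_NAME as defined above
Dim oFSO: Set oFSO = CreateObject("Scripting.FileSystemObject")
Sub ProcessFolder(sFolder)
    If oFSO.FileExists(oFSO.BuildPath(sFolder, CLEANUP_SCRIPT_NAME)) Then
        ' Delegate: a custom clean-up script owns this subtree
        CreateObject("WScript.Shell").Run "wscript.exe """ & _
            oFSO.BuildPath(sFolder, CLEANUP_SCRIPT_NAME) & """", 1, True
        Exit Sub
    End If
    ' ...otherwise apply the filters here and recurse into subfolders
End Sub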

The name of the CleanUp script can be changed by altering the value of the constant CLEANUP_SCRIPT_NAME.


Trash folder
The clean-up utility does not remove any file or folder from the machine; it only moves it to the Trash folder located in the same folder as the script. If the script is copied to and run from "Desktop\LR_Repository", the Trash folder will be created within the "Desktop\LR_Repository" folder.

The utility recreates the folder structure of the original files and folders within the Trash folder. If the file to be removed is "Desktop\LR_Repository\Scripts\HR\HolidayApproval\output.txt", it will be moved to "Desktop\LR_Repository\.Trash\Scripts\HR\HolidayApproval\output.txt".

The same rule applies to folders.
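A minimal sketch of that path rewriting, assuming the oFSO and sStartFolder names used earlier (TrashPath itself is a hypothetical helper):

' Sketch: rebuild a file's original path under the .Trash folder
' sFile is the full path of the file to be moved
Function TrashPath(sFile)
    ' Strip the start folder prefix plus the "\" separator, then re-root
    TrashPath = oFSO.BuildPath(oFSO.BuildPath(sStartFolder, ".Trash"), _
        Mid(sFile, Len(sStartFolder) + 2))
End Function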

Sunday 10 October 2010

Put your LoadRunner scripts on diet!

There are multiple ways of making an LR script folder a bit smaller, which can be beneficial if the script needs to be archived and stored for an extended period of time. Just to name a few methods:

  • Saving and opening LoadRunner scripts directly in ZIP archives
  • Saving LoadRunner scripts in compressed Windows folders
  • Removing unnecessary files
Maintaining a big script repository and taking regular backups of the entire repository can be a very disk-space-hungry exercise, particularly if all runtime data files and execution results are unnecessarily backed up as well.

As an example, a simple few-step script could be about 10MB after recording, and the folder could grow to 13MB after script execution, which is roughly a 30% increase. This growth is caused by run-time data files, logs, VUGen execution result folders and backup files which are not required for the script to work properly.

I want to make clear what I mean by VUGen execution result folders: I don't mean the results folder which is created during LR scenario execution. VUGen results are stored within the script folder, are created when the script is run on its own from within VUGen, and are called result1, result2, result3, etc.
The simplest way to save some space quickly (which I've been using for the past few years) is to remove the VUGen execution result folders, which will usually release a few hundred MB of disk space without removing any vital part of the script.

Additional space can be released by removing a variety of other files, which I will now explain.

Folders:
  • data – a useful folder containing HTML files, pictures and other items captured by LoadRunner during recording. I would recommend keeping this folder within the script folder, but if disk space is crucial and script size needs to be reduced for archiving, it can be deleted. It is not required for successful execution of the test script.
  • result1 – the default directory for VUGen execution results. Note that the tester can specify a different results folder manually. LR will create a folder with the next number if it cannot obtain full rights to the result1 folder.
  • .DS_Store – this item is not created by LR or even the Windows operating system, but by Mac OS. It can appear if someone accesses the script folder from a Mac, which might be the case if the scripts are stored in a shared location such as a corporate shared drive.

File extensions:
  • idx – binary index files used to store parameter values
  • bak – backup files for script, output and other types of VUGen files
  • ci – the final Compiled Image of the LR script

Full file names:
  • pre_cci.c – concatenation of all C code and the lrun.h header file
  • output.txt – the same text which is displayed in the LR GUI Replay Log
  • options.txt – options passed to the LR compiler pre-processor
  • logfile.txt – additional log file
  • logfile.log – additional log file
  • mdrv_cmd.txt – parameters passed to the mdrv.exe utility, which compiles and runs the LR script
Other files with random filename parts (the asterisk represents the random part of the file name):
  • combined_*.c – concatenated/combined list of header files used by the LR script
  • mdrv*.log – log file for the mdrv.exe process
All of these files can be removed manually with the help of the Windows built-in search utility. To simplify that task I've built a small but mighty VBScript tool which automates it.
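For a quick manual check before deleting anything, a dry-run sketch like the one below will list the matching files first (the folder path and pattern are examples):

' Dry-run sketch: list files matching mdrv*.log in one folder
Dim oFSO, oFile, oRE
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oRE = New RegExp
oRE.Pattern = "^mdrv.*\.log$"
oRE.IgnoreCase = True
For Each oFile In oFSO.GetFolder("C:\LoadRunnerScripts\MyScript").Files
    If oRE.Test(oFile.Name) Then WScript.Echo oFile.Path
Next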

To learn more about the clean-up tool for LR script folders please read on here. You will find a link to the utility and a brief explanation of how to configure it to your liking.

If you want to understand how the LoadRunner compiler works "under the hood", have a look at the LoadTester website.

Friday 8 October 2010

On-line tools useful in scripting

JSON 2 HTML
With the arrival of Web 2.0 technologies such as AJAX and its JSON-based variation (AJAJ), performance testers have to deal with a new kind of request and response.
Although the JSON structure is more lightweight, JSON code is harder to read than XML.
Code prettifiers can make the structure of a JSON object a bit easier to understand, but it is still somewhat difficult to read in plain-text form.

While looking for a solution I came across this fantastic JSON to HTML converter tool, which is available online at: http://json.bloople.net/


Posix timestamp
Occasionally a web application will use timestamps in web requests or responses in the POSIX/Unix time format. It is a 10- or 13-digit number which represents the number of seconds (for a 10-digit number) or milliseconds (for a 13-digit number) since 1 January 1970. For example, the 10-digit timestamp 1000000000 corresponds to 9 September 2001 01:46:40 UTC.
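A quick way to sanity-check such a value without leaving the desktop is a two-line VBScript sketch; it simply adds seconds to the epoch date, with no time-zone adjustment applied:

' Convert a 10-digit POSIX timestamp to a readable date (epoch treated as UTC)
Dim nPosix: nPosix = 1000000000
WScript.Echo DateAdd("s", nPosix, #1/1/1970#)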

Since converting POSIX time in your head is probably not an option for most of us, here is a link to an online tool which can be of use to everyone. It allows for converting a POSIX timestamp to a human-readable version and vice versa: http://www.epochconverter.com/


XPath test bed
While working with XML documents it is sometimes useful to refer to individual nodes by their location and attributes. Having to re-run the whole test each time we get an XPath query wrong can be very frustrating. This online tool allows for uploading an XML document and running XPath queries against it: http://www.whitebeam.org/library/guide/TechNotes/xpathtestbed.rhtm


URL encode and decode
Any string which is passed to the server in the query string has to be URL encoded before it can be sent.
In most cases it is just a single value or a few words, which can easily be read with some basic understanding of the URL encoding scheme.
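For a quick offline check, VBScript's built-in Escape function gives a rough approximation of URL encoding (note it differs from strict form encoding for a few characters, e.g. it leaves + and / unencoded):

' Rough URL-encoding check using VBScript's Escape function
WScript.Echo Escape("Hello World & more")  ' prints Hello%20World%20%26%20more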

For those more complex cases where a longer value is URL encoded, the following online tool can be used. It can encode and decode any string: http://www.albionresearch.com/misc/urlencode.php


Syntax highlighter
This last utility is probably less useful than the others, but it can nevertheless be handy at times. As the name suggests, a syntax highlighter applies style colours to many types of code, making it easier to read: http://tohtml.com/jScript/

Tuesday 28 September 2010

Random data generation tool

In performance testing it is often required to have a sufficient amount of volume data (i.e. data present in the database before commencing the performance test) and parameter data (i.e. data used for parametrisation) for a realistic load test.
Random data generator

Although random names, street names and email addresses can be generated from random characters, it is sometimes required to use realistic-looking test data. This can be the case if the system under test validates data against some kind of pattern in order to filter out potential attacks.

Random data generator is a simple Java-based tool which can generate random data based on a user-specified input file. Generated data is saved in CSV format, which can then be opened in Excel for easy viewing.

The tool comes with a sample database of base data (in CSV format) which should be sufficient to generate good-quality random test data for most projects. The bundled data source can be further expanded, or even replaced by your own data, making the output more relevant for your specific use.

The following list summarises the types of test data the tool can generate:
  • Name – from data source
  • Surname – from data source
  • Company name – from data source
  • Date – between two boundary dates
  • Flat number – random number
  • House number – random number
  • House name – not implemented
  • Street name – from data source
  • Town name – from data source
  • Email address – combination of name, surname and company name
  • Phone number – random number with predefined prefix
  • Generic random number – random number between two boundary values

Since the tool is Java-based, it can be run on any operating system which supports Java.

The tool can be downloaded from here.

Wednesday 22 September 2010

Basic Unix/Linux commands

In any type of testing (be it functional or non-functional) some basic Linux/Unix skills can be very beneficial, if the test environment runs on that operating system. Testers can connect to the server and perform basic tasks on their own, relieving the development team of some duties, such as:
  • restarting the application server and/or the Operating System
  • accessing server logs to verify that the scripts are not causing server-side exceptions which are not displayed in the browser
  • parsing server logs to calculate application load profile (i.e. number of concurrent users, pauses between requests, etc.)
The most common way to connect to a Unix system is to use an SSH client such as Putty. This utility connects to the remote Unix server and provides full command-line access to it, provided that the user has sufficient access rights.
Open Putty session


It is considered bad practice to share the Administrator's (or, as it is called in a Unix environment, root's) password with everyone, and thus you should always use a username with limited privileges for day-to-day tasks.

The basic commands anyone connecting to a Unix server should know can be categorised into a few groups.
  • Directory navigation and discovery
    • cd - changes the current folder to another one (e.g. “cd /etc/init.d”)
    • ls - lists the content of the current folder, often used with the -lah switch for additional info about file size, last modification date and permissions (e.g. “ls -lah”)
    • pwd - displays the current folder (e.g. "pwd")
    • find - searches for files in a specified folder (e.g. “find /var/log -name "*.log"”)
    • du - checks file or folder size (e.g. “du -hs /var/log”)
  • Text file processing
    • vi - a very popular text editor on Unix operating systems; it might be a bit difficult for an inexperienced user. Easier alternatives are “nano” and “pico”
    • cat - prints file content to the console (e.g. “cat /var/log/messages”)
    • less - text viewer which allows backward navigation (e.g. “less /etc/passwd”); to quit press q
    • echo - prints the text passed to the command to the console or sends it to a file (e.g. “echo "Hello World" > /tmp/filename.txt”). Can also be used to clear the content of a file (e.g. “echo -n > /tmp/zimbra.log”)
    • grep - filters a file or the result of another command and prints lines containing the search term (e.g. “cat /tmp/server.log |grep error” or “ps aux |grep apache”)
    • sed - command line version of the search and replace utility (e.g. “cat file.txt| sed -e "s/SearchFor/ReplaceWith/g" > ResultFile.txt”)
    • awk - pattern scanning and text processing language. The following example parses the /etc/passwd file using a colon as the field delimiter (-F:) and prints the content of the first column (the username) for each record (e.g. “cat /etc/passwd| awk -F: '{ print $1 }'”)
  • Operating System - Process control
    • ps - lists all processes running in the system (e.g. “ps aux” on Linux and “ps -ef” on Unix)
    • kill - forces a process to quit (e.g. “kill -9 ProcessID”); the process id can be found by executing the ps command
    • killall - terminates all processes with the given process name (e.g. “killall zimbra”)
    • free - displays the amount of free and used memory in the system (e.g. “free”)
    • top - lists all processes sorted by CPU usage (e.g. “top”); to quit press q
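Putting a few of these together, here is a sketch of a simple load-profile query which counts requests per minute in a web server access log (the log path and its timestamp layout are assumptions):

# Count requests per minute in an Apache-style access log (path is an example)
cat /var/log/httpd/access_log | awk '{ print $4 }' | cut -c 2-18 | sort | uniq -c | less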




If the only reason for accessing the Unix server is to read/modify files or transfer them between the server and the local workstation, it might be better to use a tool with a graphical user interface (which Windows users will be more used to), such as WinScp.


WinScp uses the SSH protocol (the same protocol Putty uses) to connect to the Unix server and transfer files over a secure connection to and from the local host.



The tool provides different connection options (e.g. SCP, FTP). FTP will only work if an FTP server is running on the remote machine, whereas SCP will always work, as it uses the SSH connection for file transfers. Note that only SCP sends all data over the encrypted SSH channel; plain FTP transfers are not encrypted.



Both tools are free to use, and their small size (a few MB) makes them ideal to run straight from a memory stick.

Thursday 12 August 2010

Replacing dynamic parameters

The last piece of the correlation puzzle is to replace the dynamic parameters hard-coded in the script with the extracted values. Although this could be done directly in the WebTest editor, it is more time consuming and error prone. Since VSTS WebTests are plain XML files, it is simpler to open them as such in an XML editor and replace hard-coded values with parameters using search and replace.

To open a WebTest as an XML document, right-click on the WebTest in the VSTS Solution Explorer and select the "Open With…" context menu option. From the Open With selection window select "XML (Text) Editor" and click OK.

A VSTS parameter tag contains two parameter value attributes: one with the actual value used during test execution and one with the recorded value.
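For instance, a form post parameter in the .webtest XML looks broadly like the following (a sketch; the exact attribute set varies and the name and values here are illustrative):

<FormPostParameter Name="_activityId_" Value="536877651" RecordedValue="536877651" />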

When a form post parameter is bound to an extracted parameter or a data source value in the WebTest designer, only the Value attribute is altered and the original field value is preserved in the RecordedValue attribute. This allows VSTS to restore the original value of the field upon unbinding.

WebTests recorded in Fiddler always have their RecordedValue attribute blank.

With the correct search and replace rule we can kill two birds with one stone: update the RecordedValue attributes while replacing the hard-coded values.

In this example we could search for:
Value="536877651" RecordedValue=""
and replace with:
Value="{{activityId1}}" RecordedValue="536877651"
Since the dynamic value can also appear in other contexts (e.g. as part of a longer expression), it may still be necessary to perform another search and replace on the value alone.

The last type of request which needs its hard-coded dynamic parameters replaced with extracted values is the request containing a String Body.

Request with String Body
Each request containing a String Body needs to be examined individually and any hard-coded values need to be replaced.

The reason why this cannot be done from within the XML, as was the case with Query String and Form Post Data parameters, is that the String Body is stored in an encoded form rather than as plain text.

Wednesday 28 July 2010

Identifying and locating dynamic values

Identify dynamic parameters
Although it is certainly possible to identify some of the dynamic parameters just by looking at a single recording session, it is far better to record the same script twice and then check which values stay the same and which are dynamic and need to be correlated.

Dynamic parameters can appear as part of the Query String, the Form Post data (i.e. a web form) or the Post Body string (e.g. XML, JSON, etc.).

It is useful to create a list of all dynamic parameters with their respective values.


Locate the response in which the server sends the required dynamic value to the browser
Once the dynamic parameters have been identified, the Fiddler search function can be used to locate the first occurrence of the required value in a server response.

The following search criteria will look through server responses only for the number “536877651” and mark matching sessions in yellow:

Fiddler find window
Once the request is found, the Fiddler Inspectors tab can be used to examine the request parameters and response data.

The most convenient way to work with the server response is to open the raw data file by clicking the “View in Notepad” button under the Raw sub-tab in the response pane.

Once the value is located in the text editor, some XML/HTML skills are required to identify the context in which the value is used. The value might be a form field, an HTML tag attribute, an HTML tag value/inner text, or something more exotic such as a JSON-serialised object.



For example, consider an HTML fragment containing a hidden form field such as this (the fragment shown here is a representative reconstruction):

      <input type="hidden" name="_activityId_" value="536877651" />

Although it would be possible to extract the value of the hidden field _activityId_ using either a text extraction rule or a hidden field extraction rule, it is always best to use the more specific rule, which in this case is the hidden field rule. The benefit of the hidden field rule over the extract text rule is that the former will work even if an additional attribute appears between the name and value attributes.

If the value cannot be located in any server response before the request in which it needs to be sent, then it is either generated by script or entered by the user.

Data entered by application users is called parameter data, and it is usually parameterised using the VSTS Data Source feature.

If the value is generated by script, then some investigation is usually required to discover what the dynamic value means (i.e. a unique/random value, a date/time, etc.). Once the logic behind the script-generated value is known, the tester can create a custom request plugin which will generate the value at run time.

For the full guide to developing VSTS WebTests please refer to Using Fiddler with VSTS.

Monday 26 July 2010

Parametrising server address

The final touch to the script is to parametrise the server address so that one script can be run against different environments. This can be very handy if the scripts need to be recorded before the application is deployed to the production (or copy-of-production) environment where it is to be performance tested.

Depending on the environment configuration, the difference between the URLs could be in:

  • web server hostname/IP address
    Environment1: http://development
    Environment2: http://staging
  • port number
    Environment1: http://application:8080
    Environment2: http://application:7800
  • context path
    Environment1: http://application/inc10
    Environment2: http://application/inc11
  • or any combination of the above

The first scenario (a different hostname or IP address) is by far the most common. It is also very easy to configure in a WebTest using the "Parameterize Web Servers" button in the WebTest toolbar. The same approach will also work in the second case, where the application server hostname and/or port differs between the environments.

WebTest context parameter
As a result, the web server URL in the whole script is replaced with a parameter, and a new Context Parameter is automatically added to the WebTest.


It is best to keep the parameter name generic (e.g. DevWebServer is a bad name, since it won't be valid once the URL is replaced with one pointing to the Staging environment).

If the difference between the environments is reflected in a different context path (e.g. http://application/ContextPath), server address parametrisation cannot be done as simply as above.
Since I always tend to avoid doing things manually (which can be time consuming and error prone), I had to find another way. The easiest way I found was to edit the WebTest as an XML document and run search and replace on the application address.

The parameter which will replace the server address is a context parameter containing the application URL, and it needs to be created manually. This can be done by right-clicking on the WebTest parent node and selecting "Add Context Parameter" from the context menu.
If the name of the context parameter which contains the server address is ApplicationServer, then the server address should be replaced with {{ApplicationServer}} and the exact text being replaced should be set as the value of this context parameter.
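As an illustration (the URL and page name are examples only), a request attribute such as:

Url="http://development/inc10/login.aspx"

would after the replacement become:

Url="{{ApplicationServer}}/login.aspx"

with the ApplicationServer context parameter value set to http://development/inc10.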

This approach will work with any combination of the three URL parts (address, port and context path).

As a final note please remember that web address parametrisation should only be used if both environments have exactly the same version of the application deployed.

Thursday 22 July 2010

Troubleshooting Fiddler

Error while exporting a WebTest from Fiddler: "One or more plugins returned an error, but the remaining plugins."


This is a known issue which can be fixed by following the instructions from here.



Fiddler recording is missing requests
This can be due to too many simultaneous downloads, as I experienced while recording scripts for an AJAX application. It can be resolved by increasing the number of concurrent connections in Internet Explorer, following this guide.

Fiddler is not recording requests
The Fiddler web browser plug-in will usually re-configure the web browser and set itself up as the local proxy. If for any reason this automatic set-up fails, it might be necessary to manually configure the web browser (or any other application) to use Fiddler as a proxy server. By default Fiddler listens on localhost, port 8888.
The same approach can be used to configure any HTTP-based application to use Fiddler as a proxy and thus allow it to record a web script.

HTTPS traffic stays encrypted (only "CONNECT" sessions are shown)
This could be because the HTTPS decryption setting is switched off in Tools > Fiddler Options > HTTPS.
Once it is enabled, it is a good idea to turn on the "Hide HTTPS connects" rule from the Rules main menu.

Sunday 18 July 2010

Correlating dynamic parameters

Once the script has been recorded using Fiddler, the tester can start correlating web requests. The aim of this process is to ensure that the script handles the user session and other dynamic parameters in the same way a web browser would.

In addition to the session ID, a web application can use a wide range of other dynamic parameters generated on the fly. For example, in an e-commerce application each customer might be assigned a unique shopping cart ID which points to a unique list of items in the shopping basket.

Since the shopping cart ID is generated only after the customer adds the first item to the basket, it cannot be known prior to execution time. The WebTest script has to capture the shopping cart ID generated by the server and use it while adding products to the shopping basket.

If script correlation were skipped, the performance test could still add items to the shopping cart created by the web application during script recording. Depending on the tested application this can result in:
  • no obvious functional error, but the single recorded shopping cart ID is shared between all simulated users, skewing the test results
  • an HTTP error page with an HTTP error status code (e.g. 500)
  • a custom error page which might appear as a pass (based on the HTTP response status code) but is in fact a page containing an error message

A performance tester needs to pay close attention to all dynamic parameters and sometimes discuss their purpose/importance with developers to ensure that a realistic load is generated. It is probably a good strategy to assume that all requests have failed (even if they passed according to the HTTP status code) and verify each of them manually.

In general, script correlation can be broken down into a four-stage process: identifying the dynamic parameters, locating the server responses which carry them, extracting the values, and replacing the hard-coded occurrences with the extracted parameters.
The real benefit of using Fiddler to record a VSTS WebTest becomes apparent during the first two stages of the script correlation process.

When business processes are recorded in VSTS, only the web requests are captured by the recording engine. Using VSTS alone, the tester would have to execute the script multiple times, each time getting slightly further in the correlation process.
If the application data is single-use only, correlation can become even more tedious and time consuming.

On the other hand, a Fiddler recording contains both the request and the response for the whole recorded business process. Once a dynamic parameter has been identified in a request, it can be searched for within all server responses using Fiddler and located within seconds.

For the full guide to developing VSTS WebTests please refer to Using Fiddler with VSTS.