Open System Testing Architecture

Appendix: HTTP Test Executer Initialization File


The Initialization file TestExecuter_web.ini is copied to the OpenSTA Engines directory when OpenSTA is installed. This file contains parameters that can be modified to customize the operation of the HTTP Test Executer.

If the TestExecuter_web.ini file is not found, the HTTP Test Executer uses the default parameter values.

This file has four sections: FILES, SOCKET, TEST and THREAD POOL. The parameters that may appear in each section are listed below.

FILES

This section contains parameters related to the HTTP Test Executer Trace file, Trace.txt. This file is located in the OpenSTA Engines directory.

Parameters:

TraceLevel:

Filters what information is output to the trace file. Range: 0-1000.
If this parameter is set to zero, or is not specified, the trace level is set to the value specified in the Trace Settings dialog within Commander. However, if the trace level specified here is higher than that specified in Commander, the higher trace level is used.
This allows the trace level for the HTTP Task Group Executer on each Host to be set independently.

Current supported values:

0 = Errors only (Default value)
10 = Low level tracing
20 = Medium level tracing
50 = Detailed tracing
1000 = Full trace (This value can produce a large Trace file)
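
The precedence rule described above can be summarised as follows: a value of zero (or no value) defers to the Commander setting, otherwise the higher of the two settings wins. A minimal sketch of that rule in Python (the function and variable names are illustrative, not part of OpenSTA):

def effective_trace_level(ini_level, commander_level):
    # 0 or absent in TestExecuter_web.ini means use the Commander setting;
    # otherwise the higher of the two trace levels is used
    if not ini_level:
        return commander_level
    return max(ini_level, commander_level)

print(effective_trace_level(0, 20))    # 20 - Commander setting applies
print(effective_trace_level(50, 20))   # 50 - the higher value wins
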
SOCKET

This section contains parameters related to socket I/O.

Parameters:

MaxSocketDataBuffersCount:

The number of memory buffers reserved to store received data. Each buffer is the size of the operating system's memory page (4Kb on x86). Too high a value for this parameter will cause an unnecessarily large amount of memory to be reserved; this is not necessarily a problem, since the memory is not committed until it is actually required. Too low a value will cause a Test to fail because there are not enough buffers.
Default: 64000.

SocketDataBuffersGrowingCount:

The number of buffers allocated to store received data when more buffers are required. Each buffer is the size of the operating system's memory page (4Kb on x86). The buffers are allocated from the reserved pool, whose size is specified by the MaxSocketDataBuffersCount parameter.
Default: 2000.
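
As a minimal sketch of the reserve-then-commit behaviour described above (illustrative only; the names are not OpenSTA internals), buffers are committed in chunks of SocketDataBuffersGrowingCount, and the committed total is assumed here never to exceed the reserved pool of MaxSocketDataBuffersCount:

def grow_buffer_allocation(allocated, max_reserved, grow_by):
    # commit another chunk of buffers from the reserved pool,
    # capped at the size of the pool itself
    return min(allocated + grow_by, max_reserved)

print(grow_buffer_allocation(62500, 64000, 2000))   # 64000 - capped at the reserved pool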

MaxSSLConcurrentReq:

The maximum number of SSL buffers estimated to be in use at the same time.
This should be set to: No. of Virtual Users * No. of sockets (1 to 4) per Virtual User.
Default: 8000.
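
For example, applying the sizing rule above to an illustrative Test (a sketch; the figures are not derived from any particular workload):

virtual_users = 2000       # illustrative number of Virtual Users
sockets_per_user = 4       # each Virtual User uses 1 to 4 sockets
max_ssl_concurrent_req = virtual_users * sockets_per_user
print(max_ssl_concurrent_req)   # 8000 - which happens to match the default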

SSLGrowingBuffCount:

The number of SSL buffers that will be allocated when more buffers are required.
Default: 1000.

TCP_KeepAlives:

Enable or disable TCP Keepalives. If this parameter is set to a value of 1, TCP Keepalives are enabled for all TCP connections established by the HTTP Task Group Executer. This causes the Executer to emit a TCP Keepalive every second once a TCP connection has been inactive for a period of time. On Windows 2000, this period is specified by the KeepAliveInterval parameter. On Windows NT, it is fixed at 2 hours. If an error is detected by a TCP Keepalive, an error message is logged to the Audit Log and Error Log and the associated virtual user is aborted.
TCP Keepalives can be used to prevent virtual users 'hanging' when no response is received for TCP requests issued on their behalf, e.g. because of the failure of a TCP connection. There is a slight performance cost in using this feature, so for greatest efficiency it should be disabled if it is not required.
If this parameter is not specified, or is set to a value of 0, TCP Keepalives are disabled and virtual users will wait indefinitely for TCP requests to complete.
Default: 0

KeepAliveInterval:

When TCP_KeepAlives is set to a value of 1 and the Executer Host is running Windows 2000, this parameter specifies the time period in milliseconds after which the HTTP Task Group Executer will emit TCP Keepalives for an inactive TCP connection.
Default: 30000
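
A minimal sketch, using Python's configparser, of enabling TCP Keepalives with the default 30-second interval in TestExecuter_web.ini (an illustration only; editing the file by hand achieves the same result):

import configparser

ini_path = 'TestExecuter_web.ini'      # assumed to be the copy in the OpenSTA Engines directory

config = configparser.ConfigParser()
config.optionxform = str               # preserve the mixed-case option names used by the file
config.read(ini_path)

if not config.has_section('SOCKET'):
    config.add_section('SOCKET')
config['SOCKET']['TCP_KeepAlives'] = '1'          # enable TCP Keepalives
config['SOCKET']['KeepAliveInterval'] = '30000'   # 30 seconds of inactivity (Windows 2000 only)

with open(ini_path, 'w') as ini_file:
    config.write(ini_file, space_around_delimiters=False)
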
TEST

This section contains Test related parameters.

Parameters:

BrowserParallelism:

Maximum number of requests that the browser normally manages at the same time.
According to RFC 2616, this should be 2 for HTTP 1.1, although in practice it can frequently be as high as 4. The Scripts generated by the Script Modeler can be used to determine the value of this parameter for your browser(s).
Default: 4.

InitialVirtualUserCount:

The number of Virtual User Control Blocks pre-allocated at the start of a Test. Pre-allocating Control Blocks avoids the overhead of allocating them during the Test. The optimum value for this parameter is the total number of Virtual Users that are to run during the Test: that way, no Control Blocks need to be allocated during the Test-run and, if at some point all Virtual Users are executing simultaneously, all the Control Blocks will be in use.
Default: 1000.

VirtualUserGrowBy:

The number of Virtual User Control Blocks allocated when more Virtual Users are required during a Test-run.
Default: 20.

THREAD POOL

This section contains parameters controlling the behavior of the thread pool.

Parameters:

ThreadPoolConcurrentThreads:

The number of concurrent threads. A value of zero indicates one thread per CPU.
Recommended range: 0 - (4 * number of CPUs).
Default: 0 (1 thread per CPU).

ThreadPoolSize:

The number of threads available in the thread pool. A value of zero creates a thread pool size of 25 * ThreadPoolConcurrentThreads.
Recommended range: 0 - 100.
Default: 0 (25 * ThreadPoolConcurrentThreads).
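
For example, the default values on a dual-CPU Host resolve as follows (a sketch of the two rules above, assuming the zero value of ThreadPoolConcurrentThreads is resolved to the per-CPU figure before the pool size is derived):

cpus = 2                               # illustrative Host with two CPUs

thread_pool_concurrent_threads = 0     # 0 means one thread per CPU
thread_pool_size = 0                   # 0 means 25 * ThreadPoolConcurrentThreads

concurrent_threads = thread_pool_concurrent_threads or cpus    # 2
pool_size = thread_pool_size or 25 * concurrent_threads        # 50

print(concurrent_threads, pool_size)   # 2 50
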
Setting the MaxSocketDataBuffersCount Parameter

This parameter should ideally be set to the maximum number of buffers required at any one time. This means that no superfluous space is reserved and all reserved space is used.

One way of calculating this value is to estimate the maximum number of buffers required for a socket on a thread and then to perform the following calculation:

No. of Sockets per VU * No. of VUs * Max. no. of buffers required per socket.

This allocates enough buffers for each Virtual User to process the largest item concurrently. This may not be realistic, for example, if the largest item is very large compared to others and is not processed very often.

Another way of calculating the value is to determine a more realistic value for the number of buffers required by an individual user across all sockets and then to perform the following calculation:

No. of VUs * No. of buffers required per VU (across all sockets).

The received data buffer size is equal to the size of the system's memory page (4Kb on x86).

How the above may be used in practice is probably best illustrated by an example. Consider a very simple HTTP Test specifying 10 virtual users, each issuing no more than 2 requests in parallel: one for a 2Kb HTML page and one for the 23Kb GIF image it contains.

The first formula above would result in a value of 120 for MaxSocketDataBuffersCount, i.e.:

2 * 10 * 6 (No. of Sockets per VU * No. of VUs * Max. no. of buffers required per socket)

Why 6? Because 6 buffers (of 4Kb each) are required to receive 23Kb (the size of the largest item). However, in this example there are only two items to be processed, so if one socket is processing the GIF image (23Kb) then the other socket must be processing the HTML page (2Kb). Therefore, the second formula above would be more appropriate and would result in a value of 70 for MaxSocketDataBuffersCount, i.e.:

10 * 7 (No. of VUs * No. of buffers required per VU (across all sockets)).

Why 7? Because 7 buffers (of 4Kb each) are required to receive 25Kb (23Kb + 2Kb, the maximum combined size of the items to be processed concurrently by a thread).

Although the example is very simple, it does illustrate how the two formulae can be applied in practice.
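
The arithmetic of the example can be reproduced with a short calculation. The sketch below is illustrative only; it simply restates the two formulae, rounding each size up to whole 4Kb buffers:

import math

BUFFER_KB = 4                      # received data buffer size (4Kb memory page on x86)

def buffers_for(size_kb):
    # number of 4Kb buffers needed to receive an item of the given size
    return math.ceil(size_kb / BUFFER_KB)

virtual_users = 10
sockets_per_user = 2
items_kb = [2, 23]                 # the 2Kb HTML page and the 23Kb GIF image

# Formula 1: size every socket for the largest item
formula1 = sockets_per_user * virtual_users * buffers_for(max(items_kb))
print(formula1)                    # 2 * 10 * 6 = 120

# Formula 2: size each Virtual User for what it actually receives concurrently
formula2 = virtual_users * buffers_for(sum(items_kb))
print(formula2)                    # 10 * 7 = 70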

Below is a sample INI file:

[FILES]
TraceLevel=500

[SOCKET]
MaxSocketDataBuffersCount=64000
SocketDataBuffersGrowingCount=2000
MaxSSLConcurrentReq=8000
SSLGrowingBuffCount=1000

[TEST]
BrowserParallelism=4
InitialVirtualUserCount=1000
VirtualUserGrowBy=20

[THREAD POOL]
ThreadPoolSize=0
ThreadPoolConcurrentThreads=0

See also:

Test Executers

