RestReplay Documentation

RestReplay Quick Reference


1. Introduction

1.a. Features

1.b Overview

1.c Concepts

1.c.1 Expressions

1.c.2 Values in Contexts

1.c.3 Common Useful Methods in Contexts

1.c.4 Vars

1.c.5 Headers

1.c.6 Expected Codes

1.c.7 Mutators

1.c.8 Looping

1.c.9 AutoDelete

1.d Runtime

1.e Tools

1.e.1 EvalReport

1.e.2 ResourceManager Summary

1.e.3 onSummary event

2. Reference Documentation

2.a. Installation

2.b. Configuration

2.b.1 General Rules

2.c Running

2.c.1 Command-line parameters

2.c.2 XML configuration parameters in master file

2.c.3 XML configuration parameters in control file

2.c.4 Cleanup

3. RestReplay Examples

3.a.1 Using DELETE for lists of resources.

3.a.2 Using XPath to dig elements out of XML or JSON.

3.a.3 Using XPath part 2

3.a.4 Jexl Script

3.a.x More Examples

1. RestReplay Introduction

RestReplay is a utility to send REST requests to the services layer (including JSON, XML, and multipart XML requests), read responses, and compare
the resulting payloads with templates. 

At the end of a run, RestReplay produces a report, such as this self-test sample report, which shows you a summary and detail of all the calls to your services layer.  The reports have both a high-level view for administration and monitoring, and a low-level view useful for debugging issues or communicating results to developers or QA.

RestReplay was designed to fit into Continuous Integration tools such as Maven and Jenkins.  You can run RestReplay continuously, for service layer monitoring.  Developers can run RestReplay during a developer build, and Build Administrators can run RestReplay on commit, tag, or push events, to ensure a build produces a services layer that survives integration testing.

RestReplay is designed for simple, file-based, DRY configuration that is approachable for beginning programmers. (It also has APIs for advanced users who wish to drive the tests from Java, or to extend test types by writing plugins.)  Most users, however, should look through our self-test source code for full examples of running tests, chaining tests, looping tests, and validating result IDs.  RestReplay is also designed to be configured many ways, for many kinds of projects.  A good, safe bet is to follow the style in our self-test.  See: self-test sample report and self-test source code.

All configuration parameters are covered below, in the order of the configuration XML files, after the discussion of the command line.  You can use the handy links in the Quick Reference cards on this page to jump to the different elements to read syntax and examples.

RestReplay lets you automate entire workflows over your REST layer.  You can automate tests to log in, request resources and APIs, extract data from the response, such as resource IDs, and use these IDs in subsequent tests.  For example, each of these would be a test within one testGroup called OrderFlow:
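    login          (POST credentials, and export an auth token)
    createOrder    (POST a new order, and export ${ORDER_ID})
    getOrder       (GET the order, using ${ORDER_ID})
    deleteOrder    (DELETE the order, using ${ORDER_ID})

(These test names are illustrative, not taken from the self-test.)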

RestReplay executes the tests in the order you specify, and lets you pass variables in and out of each test.  Each test can also have a small bit of code to count JSON arrays, dig in with JsonPath or XPath, etc.  These are documented below.

Most things are done for you by the framework: you just specify the URL and the type of request, create files for any POST or upload content, and hit Go!  RestReplay runs your request, and reports on the result.  Many times, success in testing and monitoring is just knowing that a particular sequence of HTTP transactions succeeded.  For this kind of testing, there is no programming.  You merely replace values with variables that you define, such as ${ORDER_ID}, as you wire the tests together.

Using the examples in our self-test, the documentation below, the Jexl Documentation, and our Javadoc, you can also program RestReplay to do more complicated workflows.

1.a. Features

1.b Overview

RestReplay is run from the command line (or ant, maven, or Jenkins).  It reads your configuration files and, along with startup parameters, determines which testGroups and which tests to run.  Along the way, it initializes and evaluates any variables you have set up, such as hosts, ports, resource IDs, logins, and so on.

It runs the tests in the order you define, synchronously.  As each test is run, it exports variables which become available to subsequent tests.  Also after each test, the response is evaluated for schema correctness, and any Validator you have defined can run, using Jexl or Javascript code. Read more about the runtime workflow at 1.d Runtime.

So you can think of RestReplay as following stories, or use cases, or workflows, exercising your services layer.

After the tests have run, RestReplay compiles a directory of stand-alone reports that drill down from your high level view (the Master) to the detail view (the individual testGroup/test reports).  Tweaking the RunOptions gives you control over the output on the command line, and also in the HTML report.  These reports do not depend on anything else, and can be archived, since they show stats, results, documentation of your APIs, and contain all the input and output payloads.  See: self-test sample report

After the tests have run, you can check the code coverage of your services layer with external tools such as EclEmma / JaCoCo, and also use other queries to check the state of your datastores. 

RestReplay tests your services, but it also drives them exactly as a user or bot would, so think of the next steps in your integration testing.

1.c Concepts

See the Reference Documentation for syntax, structure, discussion, and examples of code.  This section covers concepts that are common to many of the configurable and scriptable elements documented in the Reference Documentation.

1.c.1 Expressions

Expressions are defined using Jexl syntax.

Expressions are always evaluated in a context. (Actually, a series of nested contexts, similar to Javascript scoping.  RestReplay automatically provides these contexts, for each testGroup, each test, and each evaluation.)  Some values and tools will always be available in the context, such as "kit", "tools", and "this".  For running tests, all previously run tests are available, by ID, in the context.  So any value sent to or received from a service, either from URLs, Headers, or payloads, can be retrieved using an expression.  You can place values into the context with the var element.

An expression is evaluated just before it is used.  (Subsequent evaluations use the latest values available from referenced data sources.) An expression in a test will not be evaluated until that test is run.  Expressions stored elsewhere and referenced (such as Master vars, and headers) are also evaluated just as the test is run.   Exported vars are evaluated after the test is run, and retain their values for the rest of the run through the testGroup.

Expressions are strings.  They are surrounded by bash-like braces: ${ORDER_ID}  looks up the value of the expression ORDER_ID.  If ORDER_ID is defined in the "context" then its value is returned.  Say ORDER_ID is set to a string: "abc123" , then the expression's value is like so:

expression     value
${ORDER_ID}    abc123

Now we could use this value in an expression to get a resource by URL:
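<uri>/orders/${ORDER_ID}</uri>

(A hypothetical uri element; uri is documented under /restReplay/testGroup/test/uri.)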

and RestReplay will hit the service:
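GET http://yourhost:port/orders/abc123

(yourhost:port is a placeholder; the host and port come from the protoHostPort element.)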

You can also have more complicated expressions, like:
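${pitoken.got("//data")}
${size(ordersListTest.ORDER_IDS)}

(Both forms appear in examples later in this document: the first digs a value out of a previous test's response; the second counts a list exported by a previous test.)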

These expressions were used in vars, which are explained under 1.c.4 Vars.

See also: 3.a.4 Jexl Script for an example of using a more complicated Jexl expression.

1.c.2 Values in Contexts

In Jexl Expressions

serviceResult org.dynamide.restreplay.ServiceResult - also stored as "this". Contains all vars and results, as well as IDs, timing, statuses, etc.
this org.dynamide.restreplay.ServiceResult - also stored as "serviceResult".
serviceResultsMap Map<String, org.dynamide.restreplay.ServiceResult> - a map of all previously run tests in this same testGroup, kept in insertion order of the tests run. Key is the ID of the test element. The special values this and result are also available in the serviceResultsMap, and point to the current serviceResult.
tools org.dynamide.util.Tools
kit org.dynamide.restreplay.Kit
this.mutator.getIndex() - zero-based index of which mutation loop this is.
__FILE__ String name of the validator file. (Only available in validators.)
this.LoopIndex If this test is run as a loop, then you can get the zero-based loop index from this.LoopIndex.

Javascript contexts get the same objects.  Additionally, the following methods are useful in javascript.  Note that "this" in Javascript is NOT the serviceResult.

kit.newStringArray(int)
// r is a javascript object parsed from a JSON response (see serviceResult.result below):
var stringArray = kit.newStringArray(r.count);
for (var i=0; i<r.count; i++){
    stringArray[i] = r.orders[i].order_id;
}
serviceResult.result The body of the response. If it is JSON, you can turn it into a javascript object with:
var r = JSON.parse(serviceResult.result);


serviceResult.mutator.getIndex() zero-based index of which mutation loop this is.
serviceResult.mutator.getMutationId() Unique string name of this mutation, formatted by the mutator.




In Loops

You may specify that a test should loop. You supply either the loop="" attribute, or the <loop> element, with an expression that returns a number, an array, a Collection, or a Map (see Loop examples). The loop value is a Jexl expression, or an integer. For example:

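loop="3"                                               (an integer)
loop="${['a','b','c']}"                                (a Jexl array)
loop="${ordersListTest.ORDER_IDS}"                     (a Collection exported by a previous test)
loop="${[{'a':'a1','b':'b1'},{'a':'a2','b':'b2'}]}"    (an array of maps)

(These loop values are hypothetical sketches; the expressions inside ${...} are Jexl, and the Loops testGroup in the self-test has working examples.)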

The last example is an array of maps. Each of the maps becomes loop.value. So you could access the value for key "a" like this: ${loop.value.a}

RestReplay loops the test over the range you provide, and for each iteration, makes the following values available in contexts for evaluation of filenames, vars, headers, and validators. Javascript validators also get these values via the global namespace. In both javascript and jexl, the names of the variables are shown in the table.

loop A loop object that has index, key, and value properties. You may specify an array of String, an array of Object, a Collection, or a Map. (see Loop examples) If the loop expression evaluates to one of these, the variables below apply.
loop.index The loop index number of this iteration, zero-based. So if you specify loop="3", the test will be run three times, with the values of loop.index being: 0, 1, and 2.
loop.key If the loop expression returns a map like {"a":"a value","b":"b value"}, then the test will be run twice. On the first iteration, loop.index=0, loop.key="a", and loop.value="a value". Then, on the second iteration, loop.index=1, loop.key will be "b", and loop.value will be "b value".
loop.value If the loop object is an array, loop.value will be the array element at index loop.index. For a map, the value is the value for the current key, available in loop.key. RestReplay evaluates these for you:
For maps:
   loop.value == loop[loop.key]
For arrays:
   loop.value == loop[loop.index]
                    
loop.object Set if the loop expression returns an object. Will probably be a Map, Array, or Collection.
See the many test cases in the self-test that use looping and vars; in particular, see the testGroup Loops.

1.c.3 Common Useful Methods in Contexts

Note that the variable named "this" is equivalent to serviceResult in the following examples, e.g. these two statements are identical:
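this.addExport("ORDER_ID", orderID);
serviceResult.addExport("ORDER_ID", orderID);

(The key and value here are illustrative.)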

this.addExport(key,value) Creates a var as an export, available to subsequent tests.
Exports added with this.addExport() get tracked automatically if called from a Validator, and the list of exports added per validator is shown in the detail report as an Alert from the validator. The values of these exports can then be seen in the variables row (exports are a different background color--see Legend at bottom of page).
See the discussion under 3.a.1.DELETE for an example of using addExport().
this.testID
this.testGroupID
this.testIDLabel
this.idFromMutator
this.fullURL
this.deleteURL
this.mutator.getIndex() zero-based index of which mutation loop this is.
this.mutator.getMutationId() Unique string name of this mutation, formatted by the mutator.
this.mutationID Same as this.mutator.getMutationId()
kit.dates.*

1.c.4 Vars

The vars/var tag appears in several places in the configuration file trees.
They use the same rules discussed under /restReplay/testGroup/vars/var.

Vars in different XML config elements have different scoping rules.

Here are the places vars can exist:

    /restReplay/testGroup/vars/var
    /restReplay/testGroup/test/vars/var
    /restReplay/testGroup/test/exports/vars/var
    /restReplayMaster/vars/var
    /restReplayMaster/env/vars/var
    /restReplayMaster/run/vars/var

See also: 3.a.4 Jexl Script for an example of using a more complicated Jexl expression as a var.

The variables available inside expressions in these scopes are listed above, under 1.c.1 Expressions and 1.c.2 Values in Contexts.

1.c.5 Headers

See: /restReplay/testGroup/headers/header

1.c.6 Expected Codes

See /restReplay/testGroup/test/expected/code

1.c.7 Mutators

See /restReplay/testGroup/test/mutator

1.c.8 Looping

You can loop tests two ways with RestReplay.  First, you can use <test ID="myLoopyTest" loop="2">...</test> to loop a test.  Of course, the value provided to the loop parameter can be a Jexl Expression, so you can run the loop a variable number of times:

<test ID="deleteOrders" loop="${size(ordersListTest.ORDER_IDS)}">

See a full example in 3.a.1 DELETE

You can also loop using a VarMutator.

1.c.9 AutoDelete

AutoDelete is turned on with global parameters and testGroup parameters.

RestReplay keeps track of the Location header sent by services. This allows you to later delete that resource.  Some services do not follow this pattern of returning a Location header after creating a resource with a POST, so RestReplay lets you define an expression for the DeleteURL. 

If AutoDelete is on for your testGroup, and either Location or DeleteURL is set, then at the end of the testGroup RestReplay will call DELETE on the Location or DeleteURL for every test that used POST.

1.d Runtime

At runtime, RestReplay consults parameters passed in on the command line, or from maven surefire, or from exec:java, which uses parameters found in the pom.xml project file under the configuration for the exec-maven-plugin plugin.

RestReplay then determines which master file to use. This file points to testGroups in control files that contain tests.

The default behavior of RestReplay is to look for a directory, called the RestReplay directory. This is defined using the -testdir parameter.

When running inside maven surefire, this is pre-defined to be a resource directory in the source code module. Maven defines a variable called basedir, which points at the project directory for the current pom.xml. In the pom.xml file, you can point maven exec:java at the testdir (called "tests" in this example) like so:

<argument>-testdir</argument> <argument>${basedir}/tests</argument>
See the full example of how to run maven's exec:java with command line parameters here: https://github.com/dynamide/RestReplay/blob/master/pom.xml

Within the RestReplay directory, there are various master files, some control files, and directories for each service. You can select a particular master with the command line parameter -master.

Most services should define their control file within their directory, e.g.

restreplay/master.xml                  //the master file used by the nightly build
restreplay/objectexit/                 //the directory for a service called, say, objectexit,
                                       //  with all its control files, and request/response templates
restreplay/objectexit/object-exit.xml  //the control file for this service, objectexit
restreplay/objectexit/res/             //a sub-directory to contain response templates

For development, you may define your own master. See "2.c. Running"
For integration testing, use the predefined master file called master.xml.

There is a test that checks the RestReplay installation itself.  It lives in
     _self_test/master-self-test.xml

Within a control file, RestReplay finds testGroup elements as specified on the command line or in the master file, and then runs these testGroup elements.

For each testGroup, a namespace is set up that will contain the results of all tests run in that testGroup, and any variables defined as the test runs. This namespace contains variables that dereference to java objects, so Strings, Date objects, etc. may be put into the namespace. For every test, RestReplay creates a named object of type org.dynamide.restreplay.ServiceResult, so that after that test has run, you may pull fields out of the ServiceResult object, such as responseCode, fullURL, location, and calculated fields such as from the Location: header.

You may also use methods on the ServiceResult object that give you XPath access to any field in any part either sent to or received from the server in that test.

ServiceResult methods and fields:

public class ServiceResult ::
    //dig into the schema part from the expanded request, identified by partName,
    //  and get the text value of the node identified by path
    public String sent(String partName, String path);  //path is either XPath or JsonPath.
    //dig into the schema part returned by the server, identified by partName,
    //  and get the text value of the node identified by path
    public String got(String partName, String path);   //path is either XPath or JsonPath.
    public String location  //the Location: header value of the resource, which is a full URL.
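For example, a var can dig a value out of a previous test's response using got() (the XPath here is hypothetical; the same pattern appears in the headers example under 2.c.3):

<var ID="TOKEN">${pitoken.got("//data")}</var>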

For each "test" node in a testGroup, RestReplay sets up a request, using the URI (which may contain expressions), the http method, and any parts specified for a POST or a PUT. The parts are read from text files in a directory relative to the RestReplay base directory, and may contain/define variables which get passed in. The value of these variables is evaluated in the control file just prior to the test node being run. So previous tests may be referenced. The parts are expanded for any expressions, and then put together in the appropriate http sequence and submitted to the server. The response from the server is then made available to the test to check using the "response" node. Note that only JSON and XML responses are reported, not HTML. The "response" node allows you to specify a template file to compare, and any  variables declared will be evaluated just prior to comparing the response from the server to the response template file.

After the testGroup has run, RestReplay will perform cleanup. You may control cleanup via the "autoDeletePOSTS" attribute of "testGroup".

You may also programmatically call RestReplay, in which case you can programmatically control cleanup of individual tests. See "2.c.4 Cleanup"

Finally, results are aggregated and spit out to the command line console, and/or to reports. You can customize how verbose these results are using the "dump" node in the master file.  Regardless of dump settings, all results go to the stand-alone html reports.

If all testGroups and all tests succeed, then RestReplay tells maven surefire that the suite succeeded. Otherwise, maven surefire is told that the suite failed. In this case, you may examine the console, or look at the maven surefire report, which will contain a tabular summary of all testGroups and tests. You may have to click "show all results" in the maven surefire report.

RestReplay also creates a stand-alone report. Look at the command line output where RestReplay will report the location of this HTML report, e.g. something of this form:

Master Report Index: ./tests/reports/index.local.las-master.xml.html

This reports directory starts with an index file named after any master file you have run, which links to individual control file results.

1.e Tools

RestReplay reports include sections that help developers debug their test cases. On the Master page, you can optionally turn on the ResourceManager Summary. You can also optionally provide an onSummary event, which can be used to spit out all the APIs called, for example. On the detail pages, the EvalReport is always included.

1.e.1 EvalReport

The EvalReport shows all evaluations of variables whilst running tests. The output is organized in the order that tests are run. To see detail all the way into expressions containing expressions, click the link at the top of the EvalReport called "Show EvalReport Detail".

For each TestGroup.Test, there is a link to the test output, the vars and references are shown, then all evaluations are shown with expansions/substitutions. Each expansion shows its context, then its raw value, then its expanded value. Vars are any legal variable names that are accessed, whilst references are expressions that are evaluated that may point to imports or other tests. (See the Legend at the bottom of the report page for more info on how the EvalReport is presented.)

By tracing these vars and references, you can refactor your tests so that each Test or TestGroup imports and exports variables, rather than expressions that create messy dependencies. (A clean dependency is when testA exports a variable, and testB imports that variable but testB does not have a complicated expression that digs info out of testA.)

1.e.2 ResourceManager Summary

The ResourceManager Summary shows every resource that is pulled from the file system or the class path (which can reference resources stored in the RestReplay jar file). Relative resource names are reported, plus attributes related to how that resource was found.

Cached resources are highlighted with green font.
Missing resources are highlighted with red font. This is the best way to find broken links to resources.

1.e.3 onSummary event

There is an example of how to write this script in the source distribution, under RestReplay/src/main/resources/restreplay/_self_test/s/master-self-test-onSummary.js. By using this onSummary script, you can spit out all the APIs called during a run.
Documentation is here: /restReplayMaster/event

2. Reference Documentation

2.a. Installation

RestReplay is downloaded as a jar, so you simply put this jar on your classpath and run it, with some options. Alternatively, you can integrate RestReplay into your Maven build. To run it on Jenkins, just make RestReplay.jar available on the Jenkins job's classpath, and use the command line parameters.

To see the structure of the testdir that will hold your tests, un-jar the RestReplay.jar file and look for the subdirectory src/main/resources/restreplay/. This is our -testdir. In it we have defined a project called _self_test. The _self_test project has working, precise tests that cover most of our functionality.

To write tests, you'll want to look at a body of tests, such as our _self_test, which has great examples and uses the organizational features of RestReplay. You should also have a -testdir, with some organization that stores tests in groups that make sense for your project. Feel free to mimic the structure of _self_test.

But know that RestReplay can run this _self_test from the classpath, that is, from the copy inside the jar file. So if you don't obscure the name _self_test, you can always run our test suite on startup, which simply tests that everything works, from RestReplay, over http, to a local, temporary test service it spins up. You can copy and/or override the _self_test to do your bidding if you need a custom setup. This works because tests are found on your -testdir path first, and if not found there, are looked up from the jar file.

In this same way, the reports use our CSS file, which you can override once you see the path from the jar file. So in our jar file is the resource src/main/resources/restreplay/_includes/reports-include.css, which you can override with a copy in your -testdir directory structure like so, where TEST_DIR is the directory you have told RestReplay is the root of all your tests:
TEST_DIR/_includes/reports-include.css

2.b. Configuration

2.b.1 General Rules

General Variable Syntax: starts with letter or underscore (_), contains letters, numbers, underscore. No other special characters, especially not the dollar-sign ($).

For an ID attribute to be valid, please use General Variable Syntax.

Element names should also follow the General Variable Syntax.

General Filename syntax: please stick to General Variable Syntax plus hyphen (-) and period (.).

Filenames may contain the delimiter / to denote directories. These will be transformed into platform-specific directory separators.
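For example, ORDER_ID, _tmp1, and x2 follow General Variable Syntax, while 2ndTry and COST$ do not; objectexit/object-exit.xml follows General Filename syntax. (These names are illustrative.)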

2.c Running

You can run RestReplay on the command line, or from maven surefire.

To run from the command line using a jar distribution: this example runs the environment local, using the directory ./tests for control files, and puts reports in ./tests/reports. From that directory, it finds the master control file, master.xml.

java -jar lib/RestReplay-1.0.4-standalone.jar \
-testdir ./tests \
-master master.xml \
-env local

This would run RestReplay, telling it where to find tests, and which master to run.  It also passes in an environment ID, which selects per-environment settings from the master file's <env> section.


To run using the maven surefire test, which will find _self_test/master-self-test.xml on the path or in the Jar file:

mvn -o install

Or, to override the port that the self-test will open, use:

mvn -o install -Dport=29003

To run using the maven surefire test pointed at a master, e.g. master.xml in ./tests/, simply:

mvn -o install \
-Dtestdir=./tests \
-Dmaster=master.xml

or define your own master file, which you create:

vi ./my-master-file.xml
mvn test -Dmaster=my-master-file.xml \
-DforkMode=never \
-Dtest=myTestID

2.c.1 Command-line parameters


Option Example command-line parameter Notes
testdir -testdir ./tests The path to all the tests.
reports -reports /mnt/reports Specify output reports dir, default is "reports" relative to testdir
testGroup -testGroup main Run one testGroup when running with -control.
test -test pitoken The ID from a test element when running one control file.
env -env local One env ID defined in your master file.
autoDeletePOSTS -autoDeletePOSTS true Override the setting from the command line; this will NOT override settings in control files.
dumpResults -dumpResults true If you are not running a master, you can control dumps here.
control -control my-control.xml Run one control file, optionally without a master. Requires at least testGroup, and also accepts test. If you run with a master, then masterVars and master headers are pulled in first (since we are specifying the control file, no /restReplayMaster/run element or /restReplayMaster/run/vars children are consulted). Otherwise, all vars and headers must run stand-alone within the control file.
master -master my-master.xml Run one master file.  If this is set and the option control is also used, then testGroup and test are used; otherwise testGroup and test are ignored.
selftest -selftest Run the self-test. If the default port conflicts on your system, override it with -port. The self-test spins up a basic http server, and points the self-test at http://localhost:port where port is defined by the -port parameter, or uses its default. At the end of the test, the server is stopped.
port -port 8081 Allows you to override the default port that the self-test opens. Default is 28080

When running in Maven, you must pass these command line parameters via the Java -D mechanism, using an = sign to set the value, e.g.

mvn exec:java -Dtestdir=./tests

Here are some more examples of running on the command line.

In this run, we are hitting the control file pi/pitoken.xml with no master file.  So we specify testGroup and test within that control file.

java -jar lib/RestReplay-1.0.4-standalone.jar \
-testdir ./tests \
-control pi/pitoken.xml \
-testGroup main \
-test pitoken


In this run, we are running with a -master and a -control, AND a -testGroup.  This runs only that testGroup, but picks up vars and headers from the master. (Since no /restReplayMaster/run element is used, there are no vars from /restReplayMaster/run/vars.)

java -jar lib/RestReplay-1.0.4-standalone.jar \
-testdir ./tests \
-master my-master.xml \
-control myservice/myservice.xml \
-testGroup deleteAllWidgets


In this run, we are sending reports to a different directory (./myreports) than the default (tests/reports).

java -jar lib/RestReplay-1.0.4-standalone.jar \
-testdir ./tests \
-master master.xml \
-reports ./myreports

2.c.2 XML configuration parameters in master file

Click on an element to jump to documentation for that element and its attributes.

Master control file
  <restReplayMaster>
      <dump/>
      <auths>
          <auth>
      </auths>
      <runOptions>
          <connectionTimeout>
          <socketTimeout>
          <errorsBecomeEmptyStrings>
          <acceptAlertLevel>
          <skipMutators>
          <skipMutatorsOnFailure>
          <dumpMasterSummary>
          <dumpRunOptions>
          <dumpResourceManagerSummary>
          <reportResourceManagerSummary>
          <reportResponseRaw>
          <reportPayloadsAsXML>
          <failTestOnErrors>
          <failTestOnWarnings>
          <outputServiceResultDB>
      </runOptions>
      <vars>
          <var />
      </vars>
      <envs>
          <vars>
            <var />
          </vars>
      </envs>
      <event />
      <run>
          <vars>
              <var />
          </vars>
      </run>
  </restReplayMaster>
/restReplayMaster
child elements dump
auths
runOptions
vars
envs
run
/restReplayMaster/dump
@dumpServiceResult auto - like detailed, but always shows payloads if there is an error
full - like detailed, but defaults to showing payloads
detailed - enough information to debug
minimal - a one-line display. Note that only JSON and XML response payloads are reported, not HTML.
@payloads dumps payloads after summary result.
Matrix:

                payloads="true"       payloads="false"
auto            detail + payloads     detail + payloads if error
full            detail + payloads     detail
detailed        detail                detail
minimal         minimal               minimal
Example:
<dump dumpServiceResult="detailed" payloads="false" />
/restReplayMaster/auths

A list of users and their related Base-64 encoded username:password strings. You can generate these strings in a number of ways:
online: 
    http://www.motobit.com/util/base64-decoder-encoder.asp.
                  
    https://www.base64encode.org/


javascript in browser: 
    function utf8_to_b64(str) {
        return window.btoa(unescape(encodeURIComponent(str)));
    }
    utf8_to_b64("myusername:mypassword");
   
dynamide:   
    (new com.dynamide.util.Base64Encoder("myusername:mypassword")).processString();
@default the user ID to use if none is specified in the test
/restReplayMaster/auths/auth
@ID a user ID
element text Base-64 encoded username:password to be used with "Basic Authentication" the same way a browser would. See: /restReplayMaster/auths
Example:

<auths default="admin@example.org">
    <auth ID="admin@example.org">dXNlcjFAbXVzZXVtMS5vcmc6dXNlcjFAbXVzZXVtMS5vcmc=</auth>
    <auth ID="bigbird2010">YmlnYmlyZDIwMTA6YmlnYmlyZDIwMTA=</auth>
    <auth ID="elmo2010">ZWxtbzIwMTA6ZWxtbzIwMTA=</auth>
</auths>
/restReplayMaster/runOptions
connectionTimeout int: 30000 Timeout for connecting to remote servers, in milliseconds.
socketTimeout int: 30000 Timeout for data transfer, in milliseconds.
errorsBecomeEmptyStrings boolean: true|false Set to true if you want undefined vars and errors in var syntax to become empty strings, otherwise they are left intact in the template.
acceptAlertLevel Alert.LEVEL: Alert.LEVEL.OK A String without quotes, one of: OK, WARN, ERROR.  (As defined in Alert.LEVEL enum.)
[Not implemented. When implemented, will abort the testGroup or Master run if Warnings or Errors occur in individual tests.]
skipMutators boolean: true|false All mutators in all tests will be skipped.  Only the test will be run, and not any variations generated by the <mutator> set within the test.
skipMutatorsOnFailure boolean: true|false Like skipMutators, but mutations are skipped only for a test whose parent test fails; the parent test itself is still run.
dumpMasterSummary boolean: true|false
dumpRunOptions boolean: true|false
dumpResourceManagerSummary boolean: true|false The ResourceManager finds files in the classpath and the filesystem.  The HTML report for the Master contains a listing of all resources searched for.  This setting allows you to dump that list to the console, if true.
reportResourceManagerSummary boolean: true|false Same as dumpResourceManagerSummary, but output goes to html report.
reportResponseRaw boolean: true|false In html report, include RESPONSE (raw). The response is normally fed through a JSON pretty print. If that causes problems, turn reportResponseRaw on. It will not wrap in the browser, so the output doesn't look great, but it shows exactly the whitespace and all other characters that the server actually sent. Note that only JSON and XML responses are reported, not HTML.
reportPayloadsAsXML boolean: true|false This controls whether a JSON to XML transformation is applied to payloads, for information purposes only. The output is only sent to the html report, and is not used in any verifications.
failTestOnErrors boolean: true|false Simply overrides display of SUCCESS to be FAILURE for a test if there are any errors.  Errors come from things like malformed expressions, files not found, socket timeouts on URLs, etc.  Does not abort testGroup or Master runs.
failTestOnWarnings boolean: true|false  Simply overrides display of SUCCESS to be FAILURE for a test if there are any warnings.  Warnings come from things like variables that are not defined but are used in expressions.  Does not abort testGroup or Master runs.
outputServiceResultDB

boolean: true|false If true, RestReplay writes out a flat-file database of all ServiceResult objects, serialized to JSON, in directories named after their control files, testGroups, and directories in the test tree. The root of the database is ${testdir}/db/

There is an experimental viewer for these in our source, in /src/main/resources/restreplay/_includes/service-result-handlebars-template.html, which ends up in the RestReplay jar file as _includes/service-result-handlebars-template.html.

condensedHeaders Comma separated list of String. Include any headers for which you want values condensed into comma-separated header values when a duplicate header specification is seen in a test. This is standard for the header ACCEPT, so that if you specify ACCEPT twice, say once in the testGroup, and once in a test, then both values get sent with something like this: Accept: application/json, text/json. You can override which headers RestReplay will do this for. By default, RestReplay comes with [ACCEPT, CONTENT-TYPE, COOKIE], which you can add to with one:
<condensedHeaders>X-FOOBAR</condensedHeaders>
or override with a blank list with
<condensedHeaders>NONE</condensedHeaders>
or blank out then add a custom set:
<condensedHeaders>NONE,COOKIE,X-FOOBAR</condensedHeaders>
Example:
<runOptions>
    <connectionTimeout>3000</connectionTimeout>
    <socketTimeout>3000</socketTimeout>
    <errorsBecomeEmptyStrings>true</errorsBecomeEmptyStrings>
    <acceptAlertLevel>OK</acceptAlertLevel>
    <failTestOnWarnings>true</failTestOnWarnings>
    <failTestOnErrors>true</failTestOnErrors>
    <dumpResourceManagerSummary>false</dumpResourceManagerSummary>
    <skipMutators>false</skipMutators>
    <outputServiceResultDB>false</outputServiceResultDB>
    <condensedHeaders>X-FOO, X-BAR</condensedHeaders>
</runOptions>
Discussion:
{rest-replay-base-directory}/runOptions.xml is the best place to do site-wide settings for RestReplay.
It contains a <runOptions> root element.

All of these options can be controlled in a master file, too.
In this case, your master file contains <runOptions> under the root element <restReplayMaster>

When overriding RunOptions in a master file, consider these two development scenarios:

1) You are developing locally, setting per-test options while you run tests, and you may or may not check in your changes. In this case, change any settings you like as close to the testCase as you can in the chain:

    runOptions > master > testGroup > testCase

But feel free to change runOptions.xml and your master file, whichever is easiest. Before committing your code, validate that you no longer need those options, or consider coordinating the change with the team, since these settings affect many tests.

2) You are committing this master to source control, so that it gets picked up by automated builds. In this case, you should have a reason, coordinated with your team, for overriding the default file (runOptions.xml) or the master for each project.

Options "connectionTimeout" and "socketTimeout": the most reasonable override in a master file's RunOptions is the timeouts, since these may vary by service or host. So one clean organization of tests would be by services, or by known clusters of hosts with particular timeouts. You could have one control file, say "master-edge-services.xml", with one testGroup, "timeCriticalServices", and in that testGroup put all the services from, say, a time-critical cluster. In that control file, set the RunOption timeouts to be low. In the Master, set the timeouts to be more "normal". All the other tests will run with "normal" timeouts, and tests in master-edge-services.xml in testGroup "timeCriticalServices" will run with low timeouts.

"errorsBecomeEmptyStrings" would be set locally if you were debugging.

"dumpResourceManagerSummary" is probably best left to the central location in the file (runOptions.xml), since it controls output to the console.

"skipMutators" will skip processing for Mutators. Mutators take one test and replay it with modified payloads.

The hierarchy is:

    RunOptions : class defaults
    runOptions.xml : default for all tests
    my-master.xml : overrides properties for this master

You can have many masters and run them all from the command line or test suite by using the startup command-line parameter:

    -master relative-path-to-master-file

or for maven, surefire, etc:

    -Dmaster=relative-path-to-master-file

(see docs/RestReplay.html for examples).

/restReplayMaster/vars/var

See also 1.c.4 Vars
@ID The name of the variable. var elements declared here are available to all tests. These are useful for defining host endpoints, or other values that will apply across all tests under a master.
Master Example (with vars, uri, and protoHostPort):
<restReplayMaster>
    <protoHostPort>${SNOOPSERVER}</protoHostPort>
    <vars>
        <var ID="PISERVER">http://piserver.example.com</var>
        <var ID="LASSERVER">http://localhost:8080</var>
        <var ID="SNOOPSERVER">http://localhost:28082</var>
    </vars>
    ...
</restReplayMaster>

Then, in a test file, you can use full URLs which reference these servers defined in the master control file:

<restReplay>
    <protoHostPort>${LASSERVER}</protoHostPort>
    <testGroup ID="mainTestGroup" autoDeletePOSTS="false">
        <test ID="pitoken">
            <method>POST</method>
            <uri>${PISERVER}/v1/piapi-int/tokens</uri>
            <filename>pitoken.json</filename>
        </test>
        <test ID="createAssignment">
            <method>POST</method>
            <uri>${SNOOPSERVER}/las-api/api/grid-apex/assignments</uri>
            <filename>create-assignment.json</filename>
            ...
        </test>
        <test ID="getAssignments">
            <method>GET</method>
            <uri>/las-api/api/grid-apex/assignments</uri>
            <filename>create-assignment.json</filename>
            ...
        </test>
    </testGroup>
</restReplay>

"uri" values that have a full URL will go to that host. If they don't have http://host:port, then they will use the "protoHostPort" element from the test file, then from the master file. In this case:

    createAssignment will go against SNOOPSERVER: http://localhost:28082
    getAssignments will go against LASSERVER: http://localhost:8080
    pitoken will go against PISERVER: http://piserver.example.com
But if protoHostPort were not defined in this test file, then the protoHostPort will be pulled in from the master file.

<restReplayMaster>
    <protoHostPort>${SNOOPSERVER}</protoHostPort>
    <vars>
        <var ID="PISERVER">http://piserver.example.com</var>
        <var ID="LASSERVER">http://localhost:8080</var>
        <var ID="SNOOPSERVER">http://localhost:28082</var>
    </vars>
    ...
</restReplayMaster>

<restReplay>
    <testGroup ID="secondTestGroup" autoDeletePOSTS="false">
        <test ID="getAssignments">
            <method>GET</method>
            <uri>/las-api/api/grid-apex/assignments</uri>
            <filename>create-assignment.json</filename>
            ...
        </test>
    </testGroup>
</restReplay>

In this case, getAssignments will use the default from protoHostPort in the master file: secondTestGroup.getAssignments will go against SNOOPSERVER: http://localhost:28082
/restReplayMaster/envs
  Contains blocks of vars/var elements which define variables available to all tests run by this master.
The calling program can supply the environment ID in the command-line parameter -env.

If calling restReplayInstance.readOptionsFromMasterConfigFile() programmatically, call restReplayInstance.setEnvID() first.
child elements vars
vars/var
@ID The environment ID, which is available for selection from the command line parameter -env, or the call RestReplay.setEnvID()
@default If true, this is the env which will be used if the user supplies none on the command line or programmatically.
Example:
<restReplayMaster>
    <protoHostPort>${BESERVER}</protoHostPort>
    <envs>
        <env ID="local" default="true">
            <vars>
                <var ID="LISERVER">http://li-api.example.com</var>
                <var ID="BESERVER">http://localhost:8080</var>
                <var ID="SELFTEST_SERVER">http://localhost:18080</var>
            </vars>
        </env>
        <env ID="dev">
            <vars>
                <var ID="LISERVER">http://li-api.example.com</var>
                <var ID="BESERVER">http://be.example.com</var>
            </vars>
        </env>
        <env ID="qa">
            <vars>
                <var ID="LISERVER">http://li-api.example.com</var>
                <var ID="BESERVER">http://be.example.com</var>
            </vars>
        </env>
    </envs>
    ...
/restReplayMaster/envs/vars
child elements var
/restReplayMaster/envs/vars/var

A single variable, which will be available to all tests called by this master.
Behaves exactly like /restReplayMaster/vars/var  except that the vars come from the environment specified.
See examples at /restReplayMaster/envs
See also 1.c.4 Vars
/restReplayMaster/event

Names an event handler which gets fired at some phase. Below are the named events, specified by the attribute ID. NOTE: You only get these events if you run a Master. Running a test or testGroup without a Master will skip these events and other Master control file features.
Currently supported event IDs:
onBeginMaster
onSummary
onEndMaster
ID="onSummary" Names an event handler which gets fired when the report is in the summary phase. You can put any javascript you want in here, and have access to the Master, and the List of Lists of ServiceResults for the whole run. The code can be inline, or in a relative filename, just like a validator. If you return a value from the event handler, or if you put an expression on the last line of the handler, that (html) result ends up at the end of the report on the Master Report Index page. In this way, you can extend the report with any html you want, pulling from the ServiceResult values that represent each test. If you need to create your own file output, you can call master.getTestDir() and any of the File handling functions in FileTools, such as: kit.getFileTools().saveFile(...) See the javadoc for FileTools

Variables in the context of the event handler:
master : pointer to the Master running this test.
kit : org.dynamide.restreplay.Kit object
tools : org.dynamide.util.Tools object
serviceResultsListList : a List<List<ServiceResult>>

The outer list of serviceResultsListList is a list of all testGroups, the inner list is a list of all the tests within that testGroup. The outer list doesn't have a property for the testGroupID, but each ServiceResult does have a testGroupID property.
ID="onBeginMaster" Fires after the Master has been configured, but just before it runs any tests.
ID="onEndMaster" Fires after the report has been written. The event handler is an appropriate place for programmatic cleanup of resources not cleaned up by autoDeletePOSTS.
Example:
(inline)
<event ID="onSummary" lang="javascript"><![CDATA[ var outstring = "<h4>APIs called</h4>"; var arr = []; for (var i=0; i<serviceResultsListList.size(); i++) { var serviceResultsList = serviceResultsListList.get(i); for (var j=0; j<serviceResultsList.size(); j++) { var serviceResult = serviceResultsList.get(j); arr.push(serviceResult.method+" "+serviceResult.fullURL); } } outstring += "<p>list: <br />"+arr.join("<br />")+"</p>"; outstring; ]]></event>
Example:
(filename)
Reference a script file in the master file master-self-test.xml :
<event ID="onSummary" lang="javascript" filename="_self_test/s/master-self-test-onSummary.js"/>
Then put the javascript in that file _self_test/s/master-self-test-onSummary.js :
var outstring = "<h4>APIs called</h4>";
var arr = [];
for (var i=0; i<serviceResultsListList.size(); i++) {
    var serviceResultsList = serviceResultsListList.get(i);
    for (var j=0; j<serviceResultsList.size(); j++) {
        var serviceResult = serviceResultsList.get(j);
        arr.push(serviceResult.method+" "+serviceResult.fullURL);
    }
}
outstring += "<p>list: <br />"+arr.join("<br />")+"</p>";
outstring;
The name of the master file and the name of the script can be any valid filenames.
/restReplayMaster/run
@controlFile name of control file, relative to RestReplay directory
@testGroup ID of a single testGroup within the control file to run. To run multiple testGroups, use multiple /restReplayMaster/run elements.
child elements vars
Example:
<run controlFile="objectexit/object-exit.xml" testGroup="CRUDL" />
Example:
<restReplayMaster>
    ....
    <run controlFile="objectexit/object-exit.xml" testGroup="CRUDL" />
    <run controlFile="objectexit/object-exit.xml" testGroup="CRUDL" />
Example: In this example, mastervars are declared right in the <run> element, and will affect only this run. These are read in after mastervars declared in /restReplayMaster/envs/env/vars and /restReplayMaster/vars
<restReplayMaster> .... <run controlFile="_self_test/self-test.xml" testGroup="Mastervars"> <vars> <var ID="MASTERVAR1">mastervar_value_1</var> <var ID="MASTERVAR2">mastervar_value_2</var> </vars> </run>

2.c.3 XML configuration parameters in control file

Click on an element to jump to documentation for that element and its attributes.

Test control file
  <restReplay>
      <auths>
          <auth></auth>
      </auths>
      <testGroup>
          <comment />
          <imports />
          <protoHostPort />
          <vars />
          <headers />
          <test>
              <comment />
              <method />
              <uri />
              <filename />
              <expected>
                  <code />
              </expected>
              <mutator>
                  <expected>
                      <code />
                  </expected>
              </mutator>
              <vars>
                  <var />
              </vars>
              <exports>
                  <vars>
                      <var />
                  </vars>
              </exports>
              <headers>
                  <header />
              </headers>
              <parts>
                  <part>
                      <label />
                      <filename />
                      <var />
                  </part>
              </parts>
              <response>
                  <expected>
                      <dom />
                      <code />
                  </expected>
                  <validator/>
                  <parts>
                      <part>
                          <label />
                          <filename />
                          <var />
                      </part>
                  </parts>
              </response>
          </test>
      </testGroup>
  </restReplay>
/restReplay
restReplay Root element of a RestReplay control file.
child nodes protoHostPort
auths
testGroup
/restReplay/auths
auths This element adds to any auths placed in the master file. Any ID defined in the control file overrides an auth of the same ID defined in the master file.
See: /restReplayMaster/auths
child elements auth
@default the user ID to use if none is specified in the test
/restReplay/auths/auth
@ID a user ID
element text Base-64 encoded username:password
See: /restReplayMaster/auths
Example:

<auths default="admin@example.org">
    <auth ID="admin@example.org">YWRtaW5AY29sbGVjdGlvbnNwYWNlLm9yZzpBZG1pbmlzdHJhdG9y</auth>
</auths>
/restReplay/testGroup
testGroup Runs a group of tests
child elements comment
headers
vars
test
@ID A unique, valid ID for this testGroup.
The master file uses this ID in its /restReplayMaster/run element to specify this testGroup to run.
@autoDeletePOSTS "true" - resource will be autoDeleted.
"false" - resources will not be autoDeleted
Example:

<testGroup ID="CRUDL" autoDeletePOSTS="true">
    <test ID="foo">....</test>
</testGroup>
/restReplay/testGroup/comment
comment

Any text you want to include in the report to describe this testGroup. Appears just under the testGroup banner in the master report index and in the detail pages. You may include any HTML markup.

/restReplay/testGroup/imports
imports

Import the vars from another test, which must have been run prior to this test (in the same RestReplay run). If the import is not available, warnings will appear and halt the testGroup.

Example:
<testGroup ID="ImportsExample"> <imports> <import ID="createOrder" control="orders/orders.xml" testGroup="main" test="createOrder" /> <import ID="createUser" control="orders/orders.xml" testGroup="main" test="createUser" /> </imports> <vars> <var ID="ORDER_ID">${createOrder.ORDER_ID}</var> </vars> ...
/restReplay/testGroup/vars/var
var A variable that will be expanded just before the test is run. Available for use in the uri element.
The body of the element is a Jexl2 Expression, discussed in this document under 1.c.1 Expressions and 1.c.2 Values in Contexts.

See also: 1.c.4 Vars

For more programmatic horsepower (such as javascript/rhino, and XPath expressions) see the discussion on response validators:
     /restReplay/testGroup/test/response/validator.
@ID The ID attribute of var makes that variable available in the test's namespace under that ID.  So requests can use ${COURSE_ID} to get the value 540f4515e4b004e7d8f22fc8 because COURSE_ID was the ID attribute value, in this next example. In other words, the value of the ID becomes a variable in the target language. 
Example:
<restReplay>
    <protoHostPort>...</protoHostPort>
    <testGroup ID="mainTestGroup">
        <vars>
            <var ID="COURSE_ID">540f4515e4b004e7d8f22fc8</var>
            <var ID="TEMPLATE_ID">87fff5f0-652d-11e4-8229-179090669834</var>
        </vars>
        ...
        <test ID="getAssignments">
            <method>GET</method>
            <uri>/las-api/api/grid-apex/courses/${COURSE_ID}/assignments</uri>
/restReplay/testGroup/headers/header
header A header that will be expanded just before the test is run. If you need to force a particular test to skip including all the headers set by the testGroup, set inheritHeaders="false" on that <test> node.
Example:
<restReplay>
    <testGroup ID="mainTestGroup">
        <headers>
            <header name="X-ApiKey">19d26d452432968c9b636c9daf0c6d9b</header>
            <header name="x-authorization">${pitoken.got("//data")}</header>
        </headers>
        <vars>
            <var ID="COURSE_ID">540f4515e4b004e7d8f22fc8</var>
            <var ID="TEMPLATE_ID">87fff5f0-652d-11e4-8229-179090669834</var>
        </vars>
        <test ID="pitoken" inheritHeaders="false">  <!-- Will NOT inherit headers -->
            <method>POST</method>
            <uri>${PISERVER}/v1/piapi-int/tokens</uri>
            <filename>pitoken.json</filename>
        </test>
        <test ID="deleteTemplate">  <!-- Will inherit headers: X-ApiKey and x-authorization -->
            <method>DELETE</method>
            <uri>/las-api/api/grid-apex/courses/${COURSE_ID}/templates/${TEMPLATE_ID}</uri>
        </test>
    </testGroup>
</restReplay>
Example:
<restReplay>
    <testGroup ID="mainTestGroup">
        <headers>
            <header name="X-ApiKey">19d26d452432968c9b636c9daf0c6d9b</header>
            <header name="x-authorization">${pitoken.got("//data")}</header>
        </headers>
        <vars>
            <var ID="COURSE_ID">540f4515e4b004e7d8f22fc8</var>
            <var ID="TEMPLATE_ID">87fff5f0-652d-11e4-8229-179090669834</var>
        </vars>
        <test ID="pitoken" inheritHeaders="false">
            <method>POST</method>
            <uri>${PISERVER}/v1/piapi-int/tokens</uri>
            <filename>pitoken.json</filename>
            <!-- Will NOT inherit headers -->
            <!-- And can still define its own headers -->
            <headers>
                <header name="X-SomeParameter">19d26d452432968c9b636c9daf0c6d9b</header>
            </headers>
        </test>
    </testGroup>
</restReplay>
/restReplay/testGroup/test
test A test exists in a testGroup, and defines one request/response/validation set.
@ID This ID must be unique within this file. Will become a variable name in the expression context bound to the ServiceResult object for this request/response.
@auth the ID of the /restReplay/auths/auth or /restReplayMaster/auths/auth element to use for this request.
IMPORTANT: These are sticky - they stick around until reset by the next @auth specified in a test, in exec order of the tests in this file.
@inheritHeaders boolean: true|false Default=true. If false, headers set in the master or testGroup will not be used by this test: only headers set within this test are sent to the server.
@loop OPTIONAL. integer. Default=1. Sets the number of loops to run. Repeats the service call loop times. Within validators, scripts, and expressions, the value of the current loop index (zero-based) is this.LoopIndex
Use 0 (zero) to exclude this test. The test will not run or show up in reports. (NOTE: Normally, you'll want to exclude tests by putting them in another testGroup.)

Example: (loop 3 times, from loopyTest_0 to loopyTest_2 )
<test ID="loopyTest" loop="3">...
child elements comment
method
uri
expected
part
parts
Example
<test ID="oePersonauthority"> <method>POST</method> <uri>/cspace-services/personauthorities/</uri> <part> <label>personauthorities_common</label> <filename>objectexit/oePersonauthority.xml</filename> </part> </test>
Example This example shows switching auths between tests, and how the auths are sticky -- they stick around until another is set, or reset with auth="".

"permBigbird" and "accountBigbird" tests are run as user admin@example.org.

"dimension1" and "dimension1" tests are run as user bigbird2010.

This would test the flow of allowing an admin account to change a regular user's permissions and accounts, then testing that that regular user can access other services.
<test ID="permBigbird" auth="admin@example.org"> <method>POST</method> <uri>/cspace-services/authorization/permissions</uri> <filename>security/1-bigbird-permission.xml</filename> </test>

<test ID="accountBigbird"> <method>POST</method> <uri>/cspace-services/accounts</uri> <filename>security/5-account-bigbird.xml</filename> </test>

<test ID="dimension1" auth="bigbird2010"> <method>POST</method> <uri>/cspace-services/dimensions/</uri> <part> <label>dimensions_common</label> <filename>dimension/1.xml</filename> </part> </test>
<test ID="dimension2"> <method>PUT</method> <uri>/cspace-services/dimensions/${dimension1.CSID}</uri> <part> <label>dimensions_common</label> <filename>dimension/2-put.xml</filename> </part> </test>
/restReplay/testGroup/test/comment
comment

Any text you want to include in the report for this test, on the same line as the testID. If the text is long, comment appears with a "more..." link. You may include any HTML markup.

/restReplay/testGroup/test/vars/var
var The same as /restReplay/testGroup/vars/var, but limited to one test.  See also 1.c.4 Vars
/restReplay/testGroup/test/expected/code
code See also: response/expected/code

<expected> contains zero to many <code range="" /> elements. 

The child or text of this element should be empty when this is a child of test, and not a child of mutator. When the tag is found under <mutator><expected><code>, additional rules apply; please see that section.

The <code> element defines acceptable HTTP response codes for groups of mutations.  If you don't specify test/expected/code, then the default is as though you specified:
<expected><code range="2x"/></expected>

<test ID="specificCodes"> <method>GET</method>
 <uri>/tagonomy?mock=true</uri> <expected>
<code range="200" /> <code range="202,204-206" /> </expected>
</test>
These range attributes work like printer page ranges.  A range attribute is a string containing a comma-separated list of ranges, where a range is a low and a high, inclusive.  A single number without a hyphen is a range whose high and low are the same.  A list may also be a single number or a series wildcard, in which case there are no commas.

Basic syntax examples for the range attribute:
200            The range will be from 200 to 200, excluding all other 2x values such as 201.
2x             The range will be from 200-299.
200-209        from-to, inclusive.  Internally these map to ranges (org.dynamide.restreplay.Range).
200,201,204    comma-separated list: response codes 200, 201, and 204 are acceptable, but 202 and 203 are not, nor any other codes.  Call these lists.
2x,301-302     commas and ranges (pairs separated by hyphens) may be combined.
x, xx, X, XX   are all wildcards for the HTTP response code groups, so 2X and 2xx both expand to 200-299. range="200" only succeeds if the server sends exactly 200, but range="2x" succeeds if the server sends a 200, a 201, a 202, and so on.

Here are other range examples:
<code range="100" /> <code range="1xx" /> <code range="2xx" /> <code range="2x" /> <code range="2XX" /> <code range="2X" /> <code range="3xx" /> <code range="2xx" /> <code range="4xx" /> <code range="5xx" /> <code range="100,200,300" /> <code range="100-400" /> <code range="202,204-206" /> <code range="100-201,300-399,404-499" />
For convenience, you may place the expected/code node either at the top level, or in response/expected/code. Here are two examples showing the difference:
<test ID="testWithTopLevelExpectedCode"> <comment>In this form, the top-level expected cannot include a dom node. dom node is read from response/expected/dom </comment> <method>GET</method> <uri>/orders</uri> <expected> <code range="200,404"/> </expected> <response> <filename>orders/r/Order1.json</filename> <expected> <dom> <REMOVED range="0" /> <DIFFERENT range="0" /> </dom> </expected> </response> </test> <test ID="testWithResponseLevelExpectedCode"> <comment>This version is nested, and includes both code and dom nodes. code is only read from response/expected/code if expected/code is missing. </comment> <method>GET</method> <uri>/orders</uri> <response> <filename>orders/r/Order1.json</filename> <expected> <dom> <REMOVED range="0" /> <DIFFERENT range="0" /> </dom> <code range="200,404"/> </expected> </response> </test>
/restReplay/testGroup/test/headers/header
header The same as restReplay/testGroup/headers/header, but limited to one test.
/restReplay/testGroup/test/exports
exports After the test is run, exports a variable which can be referenced by another test, and which can use values from the response. You could instead define the value in another, dependent test, using syntax like ${firstTest.someExpression}, but using exports puts the definition closer to the test that produces it, and so keeps the dependent test cleaner.
child elements vars
vars/var - The same as restReplay/testGroup/vars/var
/restReplay/testGroup/test/exports/vars/var
var Evaluated after the test is run; otherwise similar to restReplay/testGroup/vars/var (and limited to one test). See also 1.c.4 Vars
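For example (the test and variable names are hypothetical, and the single-argument form of got() shown here is an assumption):

<test ID="createOrder">
    <method>POST</method>
    <uri>/api/orders</uri>
    <filename>orders/order1.json</filename>
    <exports>
        <vars>
            <var ID="ORDER_ID">${createOrder.got("//order_id")}</var>
        </vars>
    </exports>
</test>

A later test can then simply reference ${createOrder.ORDER_ID}.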
/restReplay/testGroup/test/method
method GET
POST
PUT
DELETE
/restReplay/testGroup/test/uri
uri The portion of the URL after protocol, host, and port.
Examples:
If the URL is http://localhost:8180/cspace-services/personauthorities/
then the uri is /cspace-services/personauthorities/
The uri contains any required queryString parameters:
<uri>/cspace-services/objectexit/?sortBy=&pgNum=2&pgSz=1</uri>

The uri element may contain expression variables:

<uri>/cspace-services/personauthorities/${oePersonauthority.CSID}</uri>

This example uses the CSID returned in the Location header the server sent back when executing <test ID="oePersonauthority">.

/restReplay/testGroup/test/filename

filename If the request is not multipart, you can simply place the <filename> element under the <test> element.  If the request is mime-multipart, this element goes in a <part> element.  Like all filenames in RestReplay, the value is relative to the -testdir directory.
Example: <filename>_self_test/content-mutator-test.json</filename>
/restReplay/testGroup/test/mutator
@skipParent If true, then the mutator will skip running the test before it runs the mutations. The parent test will show up in the report with a response code of 0 and the label skipped=true, and will have no headers or responses. If false or absent, the mutator will run the test (as the parent), then run the mutations (as children). In the report, you can recognize mutations because they are indented, and have the label type: which shows the mutator type.
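For example (a sketch using the ExcludeFields mutator documented below):

<mutator type="ExcludeFields" skipParent="true">
    ...
</mutator>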
Discussion The <mutator> element loads a Mutator, defined by the type attribute.  The currently defined mutators are "ExcludeFields" and "VarMutator".

Mutators look at the JSON for a POST or PUT, and mutate the payload somehow. They make the mutation ID and index available.
The mutator then executes a request to the server with each mutation and reports results underneath the test, one for each variant.

The <code> element defines acceptable HTTP response codes for groups of mutations.  Each mutation is named using a strategy chosen by the mutator implementation, and the name may reflect the index, or a key field.   In the example below, mutation numOfDrafts expects the server to succeed with a code in the range 200-299.  The mutations book_id and course_id are expected to be rejected by the server with response codes in the 400-499 and 500-599 ranges.  The default (*) is to expect the server to respond to all other named mutations with code 200, as in the statement <code range="200">*</code>

See also
<test><expected><code> documentation for how code ranges work outside of mutators.
See also <test><response><expected level=""> documentation for how the expected tag implements tree matching.
See also: Looping tests with test@loop

ExcludeFields - Discussion This mutator (type="ExcludeFields") looks at the JSON for a POST or PUT, and one-by-one removes the top-level fields.  It then executes a request to the server with each such mutation and reports results underneath the test, one for each variant.

The <code> element defines acceptable HTTP response codes for groups of mutations.  Since this mutator looks at top-level fields, each mutation is named after the missing field. Note that when expected/code nodes are used under mutator elements, each code must contain mutation IDs, or the wildcard (*).  If you omit the IDs and the wildcard, that code range will default as though you had used *, and will conflict with any other wildcard code ranges.

ExcludeFields Example:
<test ID="dynamideUseMutator"> <method>POST</method> <uri>/tagonomy?mock=true</uri> <filename>_self_test/content-mutator-test.json</filename> <mutator type="ExcludeFields"> <expected> <code range="200">*</code> <code range="200-299">numOfDrafts</code> <code range="400-499,500-599">book_id, course_id</code> </expected> </mutator> </test>
Example showing the default wildcard. Mutations other than book_id and course_id are expected to get a 200 response code:
<test ID="dynamideUseMutator"> <method>POST</method> <uri>/tagonomy?mock=true</uri> <filename>_self_test/content-mutator-test.json</filename> <mutator type="ExcludeFields"> <expected> <code range="200" /> <code range="400-499,500-599">book_id, course_id</code> </expected> </mutator> </test>
This example will cause an error, because you have specified the default twice (once with * and once with an empty text node):
<test ID="dynamideUseMutator"> <method>POST</method> <uri>/tagonomy?mock=true</uri> <filename>_self_test/content-mutator-test.json</filename> <mutator type="ExcludeFields"> <expected> <code range="200">*</code> <code range="202"></code> <!-- error: this will be a duplicate for * --> <code range="202" /> <!-- same error with stand-alone tag --> <code range="400-499,500-599">book_id, course_id</code> </expected> </mutator> </test>

VarMutator Example:

VarMutator re-applies the service call, once for each of these vars blocks, which set vars before the call, in the file-order of the vars groups. IDs are informational strings--set to any alphanumeric string. Here is an example from the RestReplay self-test.

Note that inside the test, you can use this.mutator.getMutationID() which will return the ID attribute for this var, and this.mutator.getIndex() to get the loop index.

<test ID="selftestUseVarMutator"> <method>PUT</method> <uri>/tagonomy?mock=true&mutation=${this.mutation}&forceCode=300</uri> <filename>_self_test/var-mutator-test.json</filename> <vars> <var ID="DAYS">${24*60*60*1000}</var> <var ID="BASE_DATE">${tools.now()}</var> <var ID="DUE_DATE_MILLIS">${BASE_DATE +0 * DAYS}</var> <var ID="FOOBAR">${this.testID}</var> <var ID="DueDate2">Due ${BASE_DATE} -basedate</var> <var ID="DueDate">Due ${kit.dates.getMonthName(BASE_DATE)} ${kit.dates.getDayOfMonth(BASE_DATE)}</var> </vars> <mutator type="VarMutator"> <expected> <code range="4xx">*</code> </expected> <vars ID="0"> <var idbase="DUE_DATE_MILLIS">${BASE_DATE +1 * DAYS}</var> <var idbase="FOOBAR">${this.testID} set in 0.</var> </vars> <vars ID="1"> <var idbase="DUE_DATE_MILLIS">${BASE_DATE +2 * DAYS}</var> <var idbase="FOOBAR">${this.testID + ' mutator index: ' + this.mutator.getIndex() }</var> </vars> <vars ID="2"> <var idbase="DUE_DATE_MILLIS">${BASE_DATE +3 * DAYS}</var> <var idbase="FOOBAR">${this.testID + ' mutator index: ' + this.mutator.getIndex() }</var> </vars> <vars ID="3"> <var idbase="FOOBAR">${this.testID + ' mutator index: ' + this.mutator.getIndex() }</var> </vars> </mutator> </test>

/restReplay/testGroup/test/parts
parts Optional container for "part" elements. Required if there is more than one part element; otherwise it may be omitted.
/restReplay/testGroup/test/part
/restReplay/testGroup/test/parts/part
part A schema part sent to the server in the request, identified by a schema name in the "label" child element.
If there is just one part, the parts container element may be omitted.
Models a request schema part sent either in a multipart mime message,
or in a CSpace 1.4 style application/xml message delimited by xml elements describing the schema part.
/restReplay/testGroup/test/parts/part/label
label A schema name for this part, such as "objectexit_common", or "collectionspace_core".
/restReplay/testGroup/test/parts/part/filename
filename The filename of the payload, relative to the RestReplay directory (not relative to the control file).
Example: <filename>objectexit/object-exit.xml</filename>
/restReplay/testGroup/test/parts/part/var
var Declares a variable to be made available to the expansion of the part template.
See also /restReplay/testGroup/vars/var for a discussion of the expression language used inside a var.
@ID the name of the variable that can be dereferenced in an expression, e.g. <var ID="depositor">...</var>
Example: the var may be dereferenced in an expression in the request payload, like this:
<depositor>${depositor}</depositor>

The var itself may include any expression syntax in its element text, so that vars can be set based on any previous tests or Java objects in the expression namespace.
e.g.

<var ID="depositor">${oePersonGET.got("persons_common","//refName")}</var>

In context, this looks like:
<test ID="oeObject"> <method>POST</method> <uri>/cspace-services/objectexit/</uri> <part> <label>objectexit_common</label> <filename>objectexit/oeObject.xml</filename> <var ID="depositor">${oePersonGET.got("persons_common","//refName")}</var> <var ID="currentOwner">${oePersonGET.got("persons_common","//refName")}</var> <var ID="exitNumber">oeObject-exitnum-123</var> </part> </test>

In this example, the ServiceResult object from the previous test identified by <test ID="oePersonGET"> exposes a method called got(String partLabel, String elementXPath), which can extract the node text for an element in a schema part returned by a response.

/restReplay/testGroup/test/response
response Container for evaluating the response from the server.
child elements expected
parts
part
validator
/restReplay/testGroup/test/response/validator
validator Contains a filename reference to a validator script. Like all filenames in the control file it is relative to the tests directory.
The script validates the response from the server. Available to the validator are the request, response, and everything in the ServiceResult class, and the map of all previously run tests in this test group (in the file order of the control file).
variables in context The standard Values in Contexts apply, depending on whether you are using Javascript or Jexl.  Additionally, the value __FILE__ is the full path to the validator.
Exports are tracked

Exports added with this.addExport() get tracked automatically, and the list of exports added per validator is shown in the detail report as an Alert from the validator. The values of these exports can then be seen in the variables row (exports are a different background color--see Legend at bottom of page).

@lang Supported values are javascript and jexl. Default is jexl. This is the language used for the validator. Jexl scripts must be surrounded by ${}, and you may have multiple blocks like that, wrapped in text, e.g. Foo${result.testID} Bar Baz ${result.gotExpectedResult()}.
JavaScript blocks should have a return statement as the last line, or an expression as the last line, which will be the result.
example
 (control file)
<test ID="testA"> <uri> ... <response> <validator lang="jexl">assignments/res/testA.validator.jexl</validator> </response> </test> <test ID="testB"> <uri> ... <response> <filename>assignments/res/testB.json</filename> <validator lang="javascript">assignments/res/testB.validator.js</validator> </response> </test>
Note: in this example, the <filename> element points to a file containing the expected response. RestReplay automatically compares the response to this expected response file for content, based on the element /restReplay/testGroup/test/response/expected. The validator is then run as a separate validation step.
example
 (script file)
 @lang="jexl"
assignments/res/testA.validator.jexl
${ var foo = 1; return result.result; }
example
 (script file)
 @lang="javascript"
assignments/res/testB.validator.js
var label = serviceResult.testIDLabel;
var result = serviceResult.result;
stdout.println("LOG from javascript: " + label);
stdout.println("LOG from javascript: " + result);
"OK from JS";
/restReplay/testGroup/test/response/expected
@dom Single element for evaluating the response from the server. Specify the type of comparison used for validation with this attribute, which can be "ZERO", "TEXT", "ADDOK", "TREE", "TREE_TEXT", or "STRICT".
(For all levels, whitespace allowed by XML is ignored, and whitespace at either end of a child node's text is also ignored;
i.e. a child's text node is trimmed prior to comparison.) Note that only JSON and XML response payloads are reported, not HTML.
  • ZERO: does no checking. If the server sends a valid response, the response succeeds.
  • ADDOK: checks for TEXT differences. Nodes added by the server are allowed. Nodes expected by the test but omitted by the server are in error.
  • TEXT: makes sure nodes from the server match nodes from the file named in the "filename" child element of "response".
    Allows the server to add nodes, e.g. CSID, inAuthority, and other calculated fields may be skipped.
  • TREE: makes sure no nodes/children are added or removed.
  • TREE_TEXT: makes sure nodes match, and that no nodes/children are added or removed.
  • STRICT: makes sure all nodes match exactly. This is the strictest level of match.
Example:
... <response> <expected dom="TEXT" /> </response>
dom child element which contains specific criteria for passing a dom validation.
Example:
... <expected>
        <dom>
            <MATCHED range="3" />
            <DIFFERENT range="0" />
        </dom>
    </expected>
code child element which behaves exactly like test/expected/code, and which is read only if that element is not present. For example:
Example:
<test ID="testWithResponseLevelExpectedCode"> <comment>This version is nested, and includes both code and dom nodes. code is only read from response/expected/code if expected/code is missing. </comment> <method>GET</method> <uri>/orders</uri> <response> <filename>orders/r/Order1.json</filename> <expected> <dom> <REMOVED range="0" /> <DIFFERENT range="0" /> </dom> <code range="200,404"/> </expected> </response> </test>
/restReplay/testGroup/test/response/parts
parts Optional container for part elements. Required if there is more than one part element.
/restReplay/testGroup/test/response/parts/part
part A template containing the schema part expected from the server, as opposed to the actual part sent by the server.
/restReplay/testGroup/test/response/parts/part/label
label The name of the schema, such as "objectexit_common", or "collectionspace_core".
/restReplay/testGroup/test/response/parts/part/filename
filename The filename of the template for comparing to what the server sent.
/restReplay/testGroup/test/response/parts/part/var
var A variable declaration to be passed into the template.
@ID ID defines the variable name that will appear in the expression context, and the text of the var node will be evaluated just prior to comparing the template to the response from the server.
Example:

The file named in /restReplay/testGroup/test/response/part/filename is a template, which may contain variables.
Those variables should be defined in var child elements at this level. This keeps complicated variables out of the template file and in the control file. Templates should just expose simple variables, such as ${depositor}, and not complicated expressions that dig into the objects in the namespace; e.g. this would be considered bad form if found inside a template file:

<!-- DON'T DO THIS IN TEMPLATE FILE --> <shortIdentifier>${test2.sent("pa_com","//sid")}</shortIdentifier>

It would be better to define that variable in the control file, and then pass that single variable into the template.
e.g. put this in the template file "objectexit/res/test2GET.res.xml":

<shortIdentifier>${sid}</shortIdentifier>

then instantiate that variable in the control file:

<response> <expected level="TEXT" /> <part> <label>t2_com</label> <filename>objectexit/res/test2GET.res.xml</filename> <var ID="sid">${test2.sent("pa_com","//id")}</var> </part> </response>

2.c.4 Cleanup

After the testGroup has run, RestReplay will perform cleanup. You may control cleanup via the "autoDeletePOSTS" attribute of "testGroup". If true, then any POST that returned a "Location:" header will result in RestReplay sending a DELETE to that Location: URL. If false, RestReplay will perform no automated cleanup for that testGroup. If you called RestReplay programmatically, then you may initiate cleanup manually by calling one of these methods of org.dynamide.restreplay.RestReplay:

public List<ServiceResult> autoDelete(String logName);

or

public static List<ServiceResult> autoDelete(Map<String, ServiceResult> serviceResultsMap, String logName);
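In a control file, cleanup is controlled per testGroup; a minimal sketch (the group ID is hypothetical):

<testGroup ID="orderCleanup" autoDeletePOSTS="true">
    <!-- Every POST in this group that returns a Location: header
         gets a DELETE to that URL after the group runs. -->
    ...
</testGroup>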

3. RestReplay Examples

3.a.1 Using DELETE for lists of resources.

Say we had a service listening on /api/orders that would return a list of orders we had created. It might return JSON like this:
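Something like this, say (the field names are assumptions; the walkthrough below relies only on a count and an orders array whose entries carry an order_id):

{
  "count": 2,
  "orders": [
    { "order_id": "abc123" },
    { "order_id": "abc456" }
  ]
}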

Let's say previous tests have populated some Orders, and now we want to clean them all up. We could use autoDeletePOSTS or set up Delete URLs, but here is how to set up the REST workflow manually. We could set up two tests in one testGroup in RestReplay: one to call the service to get this list, and a second to delete all the resources found on the list.

        
<test ID="getOrdersAfterCreate">
    <method>GET</method>
    <uri>/api/orders</uri>
    <response>
        <validator lang="javascript">orders/res/getOrdersAfterCreate.validator.js</validator>
    </response>
</test>

<test ID="deleteOrders" loop="${size(getOrdersAfterCreate.ORDER_IDS)}">
    <method>DELETE</method>
    <uri>/api/orders/${getOrdersAfterCreate.ORDER_IDS[this.LoopIndex]}</uri>
</test>

The first test runs the validator file orders/res/getOrdersAfterCreate.validator.js. Here is what is in that file:
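A validator along these lines does the job (a sketch: the JSON.parse step and the this.addExport() signature are assumptions; see the validator reference above):

// orders/res/getOrdersAfterCreate.validator.js (sketch)
var r = JSON.parse(serviceResult.result);        // the JSON body from GET /api/orders
var theCount = r.count;                          // or r.orders.length if count is absent
var ordersArray = kit.newStringArray(theCount);  // a Java String[] -- see below
for (var i = 0; i < theCount; i++) {
    ordersArray[i] = r.orders[i].order_id;
}
this.addExport("ORDER_IDS", ordersArray);        // becomes getOrdersAfterCreate.ORDER_IDS
"OK";                                            // last expression is the validator result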

Now let's walk through what we have done.

In this example, we first run a test called getOrdersAfterCreate, which gets the list of orders, after some other tests have created Order resources on the server.

After the response comes back, RestReplay runs any validator found, in this case orders/res/getOrdersAfterCreate.validator.js, shown above. This validator looks into the JSON that came back, and digs out the count and the orders' order_id values. If the count is not sent by your server, you can use Javascript to find the length of the r.orders array. The validator builds up a Java array of String, then passes it back to the test as an export.

Note that to create Java arrays of String, or String[], you must call either the long version:

var ordersArray = java.lang.reflect.Array.newInstance(java.lang.String, r.count);
or the short version:
var ordersArray = kit.newStringArray(theCount);

Then we run the delete test, which looks at the array of order IDs in the export getOrdersAfterCreate.ORDER_IDS. This works because getOrdersAfterCreate is the name of the test, and each test stores its exported vars for use by subsequent tests.

Note that size() is a function provided by Jexl.  It looks at the length of arrays, and the size of collections.  Here we use it to tell RestReplay how many times to loop, so that we perform one DELETE request for each ID in the list.

Also, this.LoopIndex is a loop counter provided by RestReplay when looping.

Specifically, if getOrdersAfterCreate.ORDER_IDS is an array of String like ["abc123","abc456"],
then in memory it looks like this:
0 abc123
1 abc456

Then size(getOrdersAfterCreate.ORDER_IDS) is 2, and loop="${size(getOrdersAfterCreate.ORDER_IDS)}" turns into loop="2", so RestReplay will run deleteOrders twice, so that the following two URIs get hit:

DELETE /api/orders/abc123
DELETE /api/orders/abc456
because the first time through the loop,
/api/orders/${getOrdersAfterCreate.ORDER_IDS[this.LoopIndex]}

becomes
/api/orders/${getOrdersAfterCreate.ORDER_IDS[0]}

becomes
/api/orders/abc123


Then the second time through,
/api/orders/${getOrdersAfterCreate.ORDER_IDS[this.LoopIndex]}

becomes
/api/orders/${getOrdersAfterCreate.ORDER_IDS[1]}

becomes
/api/orders/abc456

3.a.2 Using XPath to dig elements out of XML or JSON.

The serviceResult.got() method is used to get data nodes from tree-structured data received from the server. The key passed to got() is an XPath that will work on any XML payload. JSON payloads are mapped to XML so that XPath works on them too, and this works well. You can also use a validator, which will run Javascript and give you JSON dotted syntax.

In this case, the XPath is //data which means "find the value of the node called data, anywhere in the tree".

Note: you can use either XPath or JsonPath.

First, let's say we have a token service, that when called, returns some JSON like so, and all we need to grab from the token is the value of the data element from that JSON:
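For instance (the payload shape is an assumption; all we rely on is a data element somewhere in the tree):

{
  "token": {
    "data": "id=ffffffff540e662de4b01ef1c4c50faf;token-expires=112567"
  }
}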

Now we call this service like so:
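A sketch of the calling test (the URI is hypothetical; the test ID selftestToken matches the follow-on example in 3.a.3):

<test ID="selftestToken">
    <method>GET</method>
    <uri>/token?mock=true</uri>
</test>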

Subsequent tests can now make the token available to their own templates like so:
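For example, by declaring a var on the part (the label and filename are hypothetical; got() is shown in its single-XPath form):

<part>
    <label>tagonomy_common</label>
    <filename>tagonomy/useToken.xml</filename>
    <var ID="token">${selftestToken.got("//data")}</var>
</part>

The template itself then just dereferences ${token}.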

or can use the token in the URI like so:
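For example, again with the single-XPath form of got():

<uri>/tagonomy?mock=true&token=${selftestToken.got("//data")}</uri>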

in which case, the URI would expand to: /tagonomy?mock=true&token=id=ffffffff540e662de4b01ef1c4c50faf;token-expires=112567

3.a.3 Using XPath part 2

Next we'll show another way to export that same value, and some cleaner ways to use it. These are all equivalent to the code above in 3.a.2, Using XPath.

Note: you can use either XPath or JsonPath.

First, the selftestToken test could be modified to export a var:
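A sketch (the URI is hypothetical, and referencing the test's own results by ID inside exports is an assumption):

<test ID="selftestToken">
    <method>GET</method>
    <uri>/token?mock=true</uri>
    <exports>
        <vars>
            <var ID="TOKEN">${selftestToken.got("//data")}</var>
        </vars>
    </exports>
</test>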

After which you can use the token in the URI like so:
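<uri>/tagonomy?mock=true&token=${selftestToken.TOKEN}</uri>

which expands to the same URI as the got() version in 3.a.2.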

or you can use the token in headers like so:
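(The header element syntax sketched here is an assumption; see the headers/header reference above.)

<headers>
    <header name="x-token">${selftestToken.TOKEN}</header>
</headers>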

3.a.4 Jexl Script

Expressions in Jexl can span multiple lines, and have return statements, so they can be used as scripts. In this example, a complicated expression is made manageable by breaking it into lines and steps. But it is still one "expression".

This example shows that in a var, you can use Jexl2 Expressions.
Note that kit is predefined to be valid inside a Jexl evaluator.
Note the use of CDATA to protect the Jexl from XML rules.  If you don't use the characters < and &, you can skip the CDATA.
The CDATA begins with <![CDATA[ and ends with ]]> 
The Jexl begins with ${ and ends with }
<var ID="MyGregorianDate"><![CDATA[
${ if (kit != null) { var ts = kit.gregorian.timestampUTC(); kit.out.println(''); kit.out.println(' ==> Running test script, in selftest.xml:selftestGroup:selftestScriptTest at '+ts); kit.out.println(''); return ''+ts; } return "kit not defined in selftestScriptTest"; }
]]></var>

3.a.x More Examples

The best source for examples is the set that runs every time we build or install RestReplay: our _self-test. This ships with every RestReplay.jar, and can be unpacked from there, or can be seen on our _self-test GitHub page.

You can see the reports produced by our _self-test here: Sample Reports.

Especially, you can see all our tests here:
self-test.xml
master-self-test.xml
You can run these tests with a command line like this, which will dump reports into a directory called ./reports under the current directory. Use the latest version; for this example we use version 1.0.4.

java -jar ./RestReplay-1.0.4-standalone.jar -selftest -testdir ./

You can unpack the examples with this command:

jar xvf ./RestReplay-1.0.4-standalone.jar

Once unpacked, you'll see our _self-test in ./restreplay/_self_test/, and you'll notice that ./restreplay/ is our effective -testdir.


RestReplay and dynamide.org are licensed under Apache 2.0. See our LICENSE.